Christoph Merdes visits Carnegie Mellon University

Christoph, a PhD student at the Munich Center for Mathematical Philosophy, visited Kevin Zollman, Associate Professor & Director of Graduate Studies at Carnegie Mellon University in Pittsburgh, in September 2017.

Image caption: Carnegie Mellon University, Pittsburgh, Pennsylvania

I am currently in the final stages of my PhD at the Munich Center for Mathematical Philosophy. My PhD project revolves around various problems concerning collective rationality, both on the conceptual level and in the realm of concrete phenomena.
On the more concrete side, agent-based modeling and computer simulation provide a methodology for analyzing a wide variety of behavioral assumptions underlying potentially collectively rational belief structures and behavioral patterns, far beyond the narrow limits of neoclassical rational choice. Visiting Kevin Zollman was therefore a natural choice to further my projects, given his eminent standing in the small but growing community of simulating philosophers. I profited greatly from numerous discussions with him and with his PhD students working on related topics.

In particular, my understanding of how to model boundedly rational behavior in strategy learning was deepened significantly, a topic of great importance since I am currently working on models of social belief revision under various reward schemes by cognitively limited agents. The motivations and strategic behavior of agents in opinion dynamics models have only recently risen to the attention of researchers, promising significant results for areas from legal epistemology to philosophy of science, and from political philosophy to the analysis of social norms and conventions.

As a side note, I seized the opportunity to debate the potential paternalistic consequences of rationality failure, whether individual or collective, with the resident ethicist Alex John London, which left me with significant food for thought on the wider implications of any conclusions drawn from models of collective (ir)rationality.

I greatly enjoyed the stay, and even made significant progress on my thesis while doing so.


Michal Sikorski visits the Munich Center for Mathematical Philosophy


I was able to visit the Munich Center for Mathematical Philosophy (MCMP) at the Ludwig-Maximilians-Universität München. During my visit I benefited greatly from the supervision of Karolina Krzyżanowska and from the advice and knowledge of many other members of the MCMP. I also presented my recent article, written in collaboration with Noah van Dongen and Jan Sprenger, at the conference ‘Causation, Explanation, Conditionals’.

Tom Sterkenburg visits Carnegie Mellon University

Tom Sterkenburg, PhD student at the Centrum Wiskunde & Informatica (CWI, the Dutch national research institute for mathematics and computer science) and the Department of Theoretical Philosophy of the University of Groningen, visited the Center for Formal Epistemology at Carnegie Mellon University in Pittsburgh, from February till May, 2017.

Image caption: Carnegie Mellon University, Pittsburgh, Pennsylvania

What can we learn from data? Are we justified at all in drawing conclusions that go beyond the data themselves—and if so, what conclusions? These are fundamental epistemological questions, and it is part of the scientific approach to epistemology to investigate them with the help of the most sophisticated tools we have for learning from data: those from statistics and machine learning.

In my PhD project, I looked into a particular proposal, dating back to the 1960s, of an abstract method that is universal in a precise sense: it is able to eventually learn everything there is to learn from a stream of data. While highly idealised, it provides a focal point for the above questions of what is (im)possible in principle. In my thesis I cast this proposal as a particular offspring of Carnap’s program of inductive logic, which set out to define an objective quantitative measure of the extent to which a given body of data confirms a given hypothesis. Interestingly, this proposal ostensibly avoids an influential mathematical argument due to Putnam that was to demonstrate the sheer impossibility of what Carnap hoped to achieve. It also served as an important precursor to the information-theoretic tradition in machine learning, and has a direct line to modern work in online learning, which I analyse next in my thesis.

My research thus firmly lies where epistemology meets statistics and machine learning. There are few places in the world that bring together such a strong group of people working at precisely this intersection as the Center for Formal Epistemology at Carnegie Mellon, so I was very happy with the opportunity to spend some time there.

My three months in Pittsburgh were an intense period of being immersed in campus life, trying to strike a balance between making progress on writing my thesis (which I hoped to submit not long after my visit) and enjoying the events and interactions at the center. (I think I managed quite well.) One highlight was sitting in on Teddy Seidenfeld’s course on Savage’s Foundations of Statistics; another was the discussions with Konstantin Genin and Kevin Kelly on their latest results in formal learning theory (particularly those related to the principle of preferring simplicity, or Occam’s razor), a subject that is in important respects closely related to the framework I studied. As a bonus there was the proximity of the University of Pittsburgh Center for Philosophy of Science, where I attended a number of workshops and met up with John Norton to talk about universal methods and measures of confirmation.

All in all, I had a very instructive and productive time at Carnegie Mellon. I would like to thank the Leverhulme Trust and the University of Groningen for the financial support, and Kevin Zollman, Jan-Willem Romeijn, Richard Pettigrew, and Laura Lanceley as the people who facilitated it. (I’m glad to report that my thesis is completed now, too.)

Networking, Bayesian Style

Roland Poellinger, Postdoctoral Researcher at the Munich Center for Mathematical Philosophy (MCMP), visited Professor Jan-Willem Romeijn, Head of the Department of Theoretical Philosophy at the University of Groningen, The Netherlands, from April to May 2017.

Image caption: Similarity structures everywhere

When I was making plans for an intense research visit abroad in spring 2017, there could not have been a better place for me than Groningen: what unites philosophers in Munich and Groningen is a strong focus on formal methods and Bayesian reasoning. In much of my work, Bayesian networks are the means of choice. Using such nets, I have looked at causal decision theory and paradoxes in causal reasoning, and I have developed ideas for integrating causal and non-causal knowledge in an extension of the Bayes net framework. More recently I have become highly interested in using Bayes nets to reconstruct analogical arguments in science and to make explicit how analogies can help confirm scientific hypotheses.

I suspect my interest in paradox also fueled my interest in analogy: in a recent book, Paul Bartha states that “[f]or Bayesians, it may seem quite clear that an analogical argument cannot provide confirmation” (Bartha, 2010). Well, that did not seem so clear to me. Bartha argues that any analogical argument E expressing the analogy relation between source and target domain is, if it is meant to support hypothesis H, necessarily already contained in one’s background knowledge K (as old evidence), such that Pr(H | E & K) = Pr(H | K). The equality sign shouts at the Bayesian: no confirmation here! I do agree with the argument. But shouldn’t the powerful Bayesian framework be able to capture scientific strategies based on analogical reasoning (see also Beebe & Poellinger, forthcoming)? Many, many analogical arguments have proven fruitful for discovery and hypothesis testing in many, many research contexts: physics, econometrics, medicine, and so on. And in pharmacology, the current focus of my work, analogical arguments surface in deep and difficult questions about extrapolation.
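Bartha's old-evidence point can be made explicit with a short derivation (a standard probability identity, stated in the notation above):

```latex
% If the analogical argument E is already part of the background knowledge K
% (old evidence), then Pr(E | K) = 1, and the ratio definition of conditional
% probability yields:
\Pr(H \mid E \wedge K)
  = \frac{\Pr(H \wedge E \mid K)}{\Pr(E \mid K)}
  = \frac{\Pr(H \mid K)}{1}
  = \Pr(H \mid K)
```

The middle step uses Pr(H ∧ E | K) = Pr(H | K), which holds because Pr(¬E | K) = 0 once E is old evidence.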
In his famous and influential paper “The Environment and Disease: Association or Causation?” (1965), Sir Austin Bradford Hill lists analogy as one of his guidelines for an informed assessment of potential causes in epidemiology:

In some circumstances it would be fair to judge by analogy. With the effects of thalidomide and rubella before us we would surely be ready to accept slighter but similar evidence with another drug or another viral disease in pregnancy.

A recent paper by Landes, Osimani, and Poellinger (2017) explores the possibility of amalgamating all available, potentially heterogeneous evidence in a Bayesian reconstruction of scientific inference for the integrated probabilistic assessment of a drug’s causal side-effects: in this framework, a scientific hypothesis (i.e., a causal claim) is supported by some evidential report if this evidence is deemed relevant to the hypothesis – most importantly, if study and target can be called analogous.

With many ideas (and many questions) about how to relate formal explications of similarity, analogy, extrapolation, and confirmation, I was more than happy to visit Prof. Jan-Willem Romeijn and his group at the Department of Theoretical Philosophy at the University of Groningen in April and May 2017. Not only did my project benefit greatly from Jan-Willem Romeijn’s expertise in Bayesian reasoning and statistical methodology, I was also invited to present my project and speak about analogical inference patterns (see Poellinger, forthcoming) at the workshop on “Causality in Psychological Modeling” (15 May 2017). This event was co-organized by Jan-Willem Romeijn and Markus Eronen (Groningen/Leuven) and featured Laura Bringmann (UG), Denny Borsboom (UvA), and Naftali Weinberger (Tilburg). Highly interesting discussions at the overlap of Bayesian reasoning, causal modeling, statistical methodology, and psychometrics are to be continued – which I am very much looking forward to.

I am thankful to Richard Pettigrew, the Leverhulme Trust, and the ERC research project “Philosophy of Pharmacology” (grant 639276; principal investigator: Barbara Osimani) for making this exchange happen, and for sparking many ideas I brought back home.


Bartha, P. F. A. (2010): By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments, Oxford University Press.

Beebe, C. & Poellinger, R. (201X): Confirmation from Analog Models. (submitted)

Hill, A. B. (1965): The Environment and Disease: Association or Causation? Proceedings of the Royal Society of Medicine, 58(5), 295–300.

Landes, J., Osimani, B. & Poellinger, R. (2017): Epistemology of Causal Inference in Pharmacology. Towards a Framework for the Assessment of Harms. European Journal for Philosophy of Science.

Poellinger, R. (201X): Analogy-Based Inference Patterns in Pharmacological Research. In: La Caze, A. & Osimani, B. (eds.): Uncertainty in Pharmacology: Epistemology, Methods, and Decisions. Boston Studies in Philosophy of Science. Springer (forthcoming).


Jürgen Landes visits George Masterton

Jürgen Landes, Postdoctoral Researcher at the Munich Center for Mathematical Philosophy, visited George Masterton, Postdoctoral Fellow in the Department of Philosophy at Lund University, from May to July 2017.


One key challenge for medical inference is the enormous sums at stake and the vested interests they create. The reliability of every item of available evidence therefore needs to be carefully assessed. Intuitively, the greater the variety of sources and kinds of evidence pointing in the same direction, the higher the assessed reliability, and thus the more strongly the evidence confirms a hypothesis, ceteris paribus. This intuitive thought is known as the Variety of Evidence Thesis.
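Schematically, and in my own rough paraphrase rather than the exact formalisation of the literature cited below, the thesis asserts an inequality between posteriors:

```latex
% Variety of Evidence Thesis (schematic paraphrase): for a hypothesis H and two
% otherwise comparable bodies of confirming evidence, E_var drawn from more
% varied sources and instruments, and E_rep from less varied ones (e.g. mere
% replications of one study), ceteris paribus:
\Pr(H \mid E_{\mathrm{var}}) \;>\; \Pr(H \mid E_{\mathrm{rep}})
```

The surprising failure cases reported in the literature are precisely situations in which this inequality reverses.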

Previous work on this thesis (Bovens and Hartmann [2003], Claveau [2013]) reports surprising failures in which a less diverse body of evidence for a hypothesis confirms the hypothesis more strongly than a more diverse body of evidence. Landes and Osimani [2017] have developed a framework for the formal evaluation of the Variety of Evidence Thesis under more realistic modelling assumptions regarding the reliability of scientific instruments, assumptions which are closer to statistical practice. The upshot is that the Variety of Evidence Thesis fails mostly for borderline or counter-intuitive cases and holds otherwise. When reliability is modelled exogenously, however, the Variety of Evidence Thesis does hold, and it can be given a formal Bayesian justification, see Landes [2017].

These papers are embedded in an overarching framework aiming to provide a formal analysis of statistical inference, see Landes et al. [2017]. The aim here is twofold: on a more theoretical level, formal epistemology should provide a sort of lingua franca, where different conceptualisations of error and different statistical techniques can be discussed and investigated; on a more practical level, we aim to use formal epistemology as a framework for the incorporation of various evidential dimensions in the overall assessment of investigated hypotheses.

Within this context I visited George Masterton (based at Lund University), an expert in judgement aggregation and the philosophy of science. He is involved in the application of Laputa, a research tool and sandbox environment for simulating the attainment of knowledge in social networks, see Masterton and Olsson [2013] and Masterton [2014]. Having already cooperated successfully (Landes and Masterton [2017]), I visited George Masterton in Lund to apply Laputa to the problem of modelling the reliability of evidence in medical inference, and to study how different concepts of reliability influence rational (Bayesian) beliefs in a social world. Work on a jointly authored manuscript has begun, and Barbara Osimani has kindly agreed to participate in the joint effort. We are looking forward to a joyful and labor-intensive period of reading, discussion, and writing. We would all like to thank Richard Pettigrew, the Leverhulme Trust, and the European Research Council for making this cooperation a reality.

Luc Bovens and Stephan Hartmann. Bayesian Epistemology. Oxford University Press, 2003.

Francois Claveau. The Independence Condition in the Variety-of-Evidence Thesis. Philosophy of Science, 80(1):94–118, 2013.

Jürgen Landes. Variety of Evidence. Erkenntnis, 2017. 41 pages, submitted.

Jürgen Landes and George Masterton. Invariant Equivocation. Erkenntnis, 82:141–167, 2017.

Jürgen Landes and Barbara Osimani. Exact replication or varied evidence? The Variety of Evidence Thesis and its methodological implications in medical research. British Journal for the Philosophy of Science, 2017. 59 pages, submitted.

Jürgen Landes, Barbara Osimani, and Roland Poellinger. Epistemology of Causal Inference in Pharmacology. European Journal for Philosophy of Science, 2017.

George Masterton. Topological variability of collectives and its import for social epistemology. Synthese, 191(11):2433–2443, 2014.

George Masterton and Erik J. Olsson. Argumentation and belief updating in social networks: a Bayesian approach. In E Ferme, D. Gabbay, and G. Simari, editors, Trends in belief revision and argumentation dynamics. College Publications, 2013.