Bayesian Models in Philosophy of Science
Confirmation and Evidence · Remote Presentation
12 Nov 2021, 02:00 PM - 04:00 PM (America/New_York)

Bayesian models are at the core of various successful research programs in philosophy of science and formal epistemology: they are used for reconstructing and better understanding scientific reasoning, but also for developing foundational theories of belief and decision-making under uncertainty. This symposium investigates how foundational and model-based conceptions of Bayesianism relate to each other, and how they might cross-fertilize each other. More specifically, we address this general question from three perspectives: theories of valid inductive inference, understanding Bayesian models as scientific models, and comparing Bayesian models to competitors. At the end, we combine these perspectives to explain the popularity of Bayesianism in various fields of philosophy and science.

PSA 2020/2021 · office@philsci.org

Modeling Limitations versus the Limitations of a Model
Symposium Paper Abstracts · Scientific Models / Modeling · 02:00 PM - 02:30 PM (America/New_York)
A great deal of work has been put into showing what Bayesianism can't do. If we think of Bayesianism as a framework for constructing models (of evidential confirmation, of norms for belief, of causal systems, etc.), how should we understand this work?
Here it's important to distinguish counterexamples from cases outside the framework's domain of applicability. A counterexample occurs when a phenomenon lies within the domain to which the framework is meant to be applied, a model is constructed of that phenomenon using the framework, and the model makes incorrect predictions about the phenomenon. For instance, if Bayesian models predict that once an agent learns a piece of evidence, that evidence may no longer confirm hypotheses for that agent, and yet old evidence can in fact confirm, then this provides a counterexample to the Bayesian approach. This is very different from a case to which the modeling framework was never intended to apply. For instance, if an agent's attitude towards a particular proposition is too loosely constrained to be appropriately captured by a real number, then this is simply a case outside the intended domain of a traditional Bayesian framework.
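To fix ideas, here is the standard formal rendering of that old-evidence worry (the notation is mine, not the abstract's): on the usual definition, $E$ confirms $H$ just in case $P(H \mid E) > P(H)$. But once the agent has learned $E$, conditionalization sets $P(E) = 1$, and then
$$P(H \mid E) = \frac{P(H \wedge E)}{P(E)} = P(H \wedge E) = P(H),$$
so evidence already known can never confirm, which clashes with textbook cases such as the known perihelion motion of Mercury confirming general relativity.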
What should we do when we encounter such a case? Hopefully we can build a new framework that strictly extends the domain of the old. Efforts in this direction that draw on Bayesian inspiration include frameworks involving sets of probabilities, comparative confidence frameworks, and (more distantly) Dempster-Shafer models. Such frameworks can be used to model, for instance, agents who make confirmational or causal judgments among sets of propositions without being committed to precise credal judgments among those propositions.
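As a minimal sketch of how such frameworks loosen the traditional one (my illustration, not the abstract's): in a sets-of-probabilities model, the agent's state is a set $\mathcal{P}$ of probability functions rather than a single $P$, summarized by lower and upper probabilities
$$\underline{P}(A) = \inf_{P \in \mathcal{P}} P(A), \qquad \overline{P}(A) = \sup_{P \in \mathcal{P}} P(A).$$
An agent who judges $A$ more likely than $B$ without committing to precise numbers can then be modeled by requiring $P(A) > P(B)$ for every $P \in \mathcal{P}$, while leaving the individual values unconstrained.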
I will examine some of these Bayes-adjacent frameworks from a modeling point of view, asking what their relative advantages and disadvantages are when compared to more traditional Bayesian models. While each approach will be shown to have its limitations, some provide more comprehensive approaches to modeling limited agents than others.

Presenters
Michael Titelbaum
University of Wisconsin-Madison
Bayesian Philosophy of Science as Scientific Philosophy
Symposium Paper Abstracts · Probability and Statistics · 02:30 PM - 03:00 PM (America/New_York)
Formal probabilistic theories were once considered a good explication of scientific reasoning (Carnap 1950/62), but they suffered a setback following the naturalistic turn in the 1960s and 1970s. Yet they have been resurgent over the last decades: there is an ever-increasing number of papers making use of methods and models from Bayesian inference, machine learning, evolutionary game theory and other formal theories. These techniques are used for tackling and solving central problems in philosophy of science. Does this mean that, almost a hundred years after Carnap's and Reichenbach's programmatic writings on "scientific philosophy", our current philosophy of science qualifies as such? In answering this question, this talk combines historical and systematic perspectives, and makes a case for Bayesian philosophy of science as a new and timely type of scientific philosophy.

First, we discuss various ways of making sense of the label "scientific philosophy", each implying a different view of the role of philosophy within the scientific enterprise. Second, we connect these views to positions historically held by proponents of the Vienna Circle and their intellectual surroundings (Popper, Reichenbach). Third, we argue that current philosophy of science should adopt a clever mix of them: (a) to consider philosophical research a proper part of the scientific enterprise rather than "applied logic", interpretation of scientific findings, or necessary prolegomena to proper science; (b) to reject a purely "mathematical philosophy" in favor of a mix of formal, conceptual and empirical methods; (c) to adopt the method of explication as central to classical problems in philosophy of science, such as giving a good theory of explanation, confirmation, or causation. (The explicative method has obvious limits, of course, for instance for research at the science-policy interface.)
We then illustrate how complying with these three requirements allows Bayesian models to achieve substantial progress on philosophical problems. Specifically, it will turn out that Bayesian models can and should be understood in close analogy to scientific models and theories: they guide our reasoning and further theorizing, they have obvious limits (e.g., representation of ignorance or suspension of judgment), they provide predictions about empirical phenomena (e.g., how people judge statistical evidence or causal relationships), and so on. They are also judged according to classical criteria such as simplicity, explanatory power, and consistency with other relevant models and theories. 
After this case study on Bayesian philosophy of science, we conclude that good scientific philosophy uses-by and large-the same methods as science, and that there is a continuum between the goals and methods of proper science and those of scientific philosophy.
Presenters
Jan Sprenger
University of Turin
Stephan Hartmann
LMU Munich
Bayes, Here, There, but Not Everywhere
Symposium Paper Abstracts · Confirmation and Evidence · 03:00 PM - 03:30 PM (America/New_York)
Bayesian analysis, when it works, works very well. It provides a level of precision unmatched by competing approaches. Compare its precise judgments of numerical probability with the struggle to decide which of many competing explanations is the best, and whether the best is good enough. It provides a way of consistently combining many competing items of evidence, no matter how many there are; the requisite bookkeeping comes naturally from the need to specify many conditional probabilities. Finally, when it works, it answers specific questions with precise statements of probability, and it answers the most general questions in philosophy of science by proofs of theorems within the probability calculus. What once seemed like intractable problems in philosophy of science are reduced to algebraic exercises.
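That bookkeeping is worth displaying (a standard derivation, supplied here for concreteness rather than taken from the abstract): for evidence items $E_1, \ldots, E_n$ that are independent conditional on $H$ and on $\neg H$, Bayes' theorem in odds form combines them by simple multiplication,
$$\frac{P(H \mid E_1, \ldots, E_n)}{P(\neg H \mid E_1, \ldots, E_n)} = \frac{P(H)}{P(\neg H)} \prod_{i=1}^{n} \frac{P(E_i \mid H)}{P(E_i \mid \neg H)},$$
so any number of items of evidence enter consistently through their likelihood ratios.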
Hence it is tempting to imagine that all problems in philosophy of science can be embraced and resolved by probabilistic analysis. The muddiness of vague or ambiguous problems would be replaced by the clarity of simple mathematics. However, it is a temptation that must be resisted. For some problems in philosophy of science simply are muddy and ambiguous. To impose the precision of probabilistic analysis onto them is to mislead ourselves with a spurious precision.
I will argue that there is a limit to the reach of Bayesian analysis. If it is to provide added benefit, it must enable us to infer more than can be deduced from the facts known to us. It follows that there will be environments inhospitable to the analysis. Whether a given application lies in such an environment is a contingent matter that must be decided on a case-by-case basis. Outside those domains hospitable to probabilistic analysis, we must seek other ways of proceeding, once again selected for their fit to the domain at hand. It follows that there is no universal logic of inductive inference, probabilistic or otherwise, but many logics, each suited to a different domain.
This multiplicity may not be apparent, since Bayesian analysis has been supported for nearly a century by purported proofs that we must distribute our credences probabilistically. All these proofs turn out to be circular. They are deductive arguments, which means that their premises must be logically at least as strong as their conclusions. The assumption of probabilities is built into the premises, but in a disguised form that is easy to overlook if one is predisposed to the conclusion. If one is not so predisposed, the premises prove at least as problematic as the conclusion that probabilities are universally applicable. Efforts to repair the proofs compound the problem by seeking to derive the premises from still further premises in which the applicability of probabilities must again be assumed. These efforts trigger a regress that cannot end well.
Presenters
John D. Norton
HPS, University of Pittsburgh
Commentary
Symposium Paper Abstracts · Probability and Statistics · 03:30 PM - 04:00 PM (America/New_York)
A number of authors have used probability to offer quantitative measures corresponding to concepts of interest to philosophers. For example, in their 2019 book Bayesian Philosophy of Science (OUP), Sprenger and Hartmann offer quantitative measures of degree of confirmation, degree of corroboration, causal strength, and explanatory power. They offer these as Carnapian explications: precise definitions designed to replace informal concepts in contexts requiring rigor and precision. 
I wish to raise two types of questions about these quantitative measures. The first concerns their pragmatic value. For a fully Bayesian agent, whose credences and utilities are transparent, these measures may be of little practical use. Suppose that an agent begins with fully specified prior probabilities for H, E and their Boolean combinations. Upon learning E, she will update her degrees of belief to obtain a posterior probability for H. If she must take some action with respect to H (say, wager on its truth), this posterior probability, together with her utilities, provides her with all she needs. Learning that E confirms H to degree c(H, E) will not provide any further actionable information. However, for a less-than-ideal agent, such information might be valuable. Consider an agent who strives to be Bayesian, but does not have conscious access to numerically precise degrees of belief. Suppose she must decide whether to wager on H, and has the option to pay a premium to learn whether E is true. Would knowing the degree of confirmation c(H, E) help such an agent? Or suppose that the agent has money riding on whether H occurs, and may pay a premium to perform action C. Would learning the causal strength of C for H help her?
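The point about the ideal agent can be put in standard expected-utility terms (a gloss, not Hitchcock's own formalism): after updating, she should wager on $H$ exactly when
$$P(H \mid E)\,U(\text{win}) + \bigl(1 - P(H \mid E)\bigr)\,U(\text{lose}) > U(\text{decline}),$$
and every term here is already fixed by her posterior and her utilities. A confirmation score $c(H, E)$, which compares posterior to prior, contributes nothing further to this calculation; the open question is what it contributes for agents who lack transparent access to $P(H \mid E)$.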
A second question concerns the ways in which these measures interact. For example, the World Health Organization recently classified processed meat as a Group 1 carcinogen, along with tobacco and asbestos. Group 1 carcinogens are so classified according to the strength of the evidence for a causal relationship. This caused considerable confusion, however, since processed meat has a much smaller impact on one's chances of developing cancer than tobacco and asbestos do. In the language of Sprenger and Hartmann, the evidence provided a high degree of confirmation for the hypothesis that processed meat causes cancer, but the causal strength of processed meat for cancer is low. How would this differ from a situation in which we had weak evidence for a strong causal relationship? Could the same evidence give rise to either situation? Do the two situations provide equally strong reasons to avoid red meat and processed meat? Can one combine degree of confirmation and causal strength into a single measure that can effectively guide decision-making? One context in which these two components seem to come apart is the law, which specifies standards of evidence (beyond a reasonable doubt, more probable than not) but says little about causal strength.
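Illustrative numbers (invented for exposition, not drawn from the WHO report) make the contrast vivid. Strong evidence can attach to a weak cause: $P(\text{processed meat causes cancer} \mid E) \approx 0.99$, while a difference-making measure of causal strength such as $P(\text{cancer} \mid C) - P(\text{cancer} \mid \neg C)$ comes to only about $0.01$. Conversely, weak evidence can attach to a strong putative cause: $P(\text{hypothesis} \mid E) \approx 0.6$ with a causal strength of $0.3$. The decision-relevant upshot of the two situations is plainly not the same, which is what makes a single combined measure hard to define.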
Presenters
Christopher Hitchcock
California Institute of Technology