Formal Representations of Ignorance
A workshop sponsored by the University of Pittsburgh Center for Philosophy of Science
March 17–18, 2017
Center for Philosophy of Science
817 Cathedral of Learning
University of Pittsburgh
This workshop will bring together several philosophers working directly on formal representations of ignorance. While the notion of ignorance is a familiar one, philosophers have struggled to model the epistemic state in a formal logic of belief. A well-known attempt equates ignorance with indifference in degree of belief represented by additive probability. This strategy amounts to modeling an individual’s epistemic state with an uninformative prior probability distribution such that all events in a partition of the relevant sample space are assigned equal probability. However, the attempt has proven futile for a number of reasons. In recent years, John Norton has proposed plausible criteria that a formal representation of ignorance ought to satisfy and has argued that the probability calculus cannot meet them, dealing a serious blow to probabilists. The challenges laid out by Norton have led some to seek amendments to classical probability, or alternative models, that overcome them. The workshop will focus largely on the problems with formal representations of ignorance and consider ways in which they may be resolved or exacerbated.
Invited speakers

Peter Brössel (Ruhr University)

Jennifer Carr (UCSD)

Ben Eva (MCMP, LMU)

Susanna Rinard (Harvard)

Miriam Schoenfield (UT Austin)

Teddy Seidenfeld (CMU)
Organizers and speakers

Yann Benétreau-Dupin (Pitt Center)

Lee Elkin (MCMP, LMU)

John D. Norton (Pitt HPS)
List of talks
Yann Benétreau-Dupin, Ignorance, Indifference, & Imprecision
I will discuss John Norton’s criteria for representing ignorance and indifference (which he put forth over a series of papers). I will argue that the Bayesian framework of imprecise probabilities can accommodate these criteria to a large extent.
Slides
Paper: “The Bayesian Who Knew Too Much”

Ben Eva, Qualitative Principles of Indifference
Following Norton (2007), we consider the possibility of generalizing the principle of indifference (PI) to a non-probabilistic setting in a way that avoids the well-known paradoxes that plague it in its standard formulation. We show that allowing for the existence of non-linearly ordered degrees of confirmation/belief enables us to obtain multiple equally plausible non-probabilistic generalizations of PI. What’s more, one’s choice of generalization has significant implications concerning the ways in which one can update on evidence to move from an initial non-probabilistic state of indifference to a more informative and fully probabilistic epistemic state. In particular, while Norton’s original generalization of PI is fundamentally incompatible with any kind of Bayesian updating, the new generalization presented in the talk is amenable to Bayesian update rules.
Slides

Lee Elkin, Complete Ignorance Represented by Lower Probability
Slides

Susanna Rinard, Ignorance is not Maximally Imprecise Probability
I will argue that ignorance is not best represented by maximally imprecise credence models (e.g. a credence that spans the entire [0, 1] interval). I will also sketch some ideas about how one might revive the Principle of Indifference in a way that avoids the problems of classic formulations.
Handout
Paper: “Against Radical Imprecision”

John D. Norton, The Invariances of Ignorance
The transformations that leave a state unchanged are its invariances. We can identify the epistemic state of ignorance uniquely from them, and it turns out to be non-probabilistic. This method of invariances is a mainstay of modern physics but tends to be used only by objective Bayesians, and then in a limited way. I will illustrate its use in the case of the infinite lottery, where it also gives non-standard results.
Slides

Miriam Schoenfield, Beliefs Formed Arbitrarily
Teddy Seidenfeld, Measure and Category: when large also is small
In this presentation I review some old and some new results about the conflicts between measure-theoretic and topological senses of being a “negligible” (or “small”) set. These results help to explain why familiar probability strong laws cannot be reconciled with a topological perspective where P-null sets (where the strong laws fail) also are meager sets.
Slides
Paper (unpublished manuscript): “Standards for Modest Bayesian Credences”

Jennifer Carr, The Inevitability of Groundless Beliefs
Peter Brössel, Bayesian Strategies to Avoid the Pitfalls of Ignorance