In subjective probability theory complete ignorance of the ideal doxastic agent with respect to a particular proposition A is modeled by the agent’s having a subjective probability of .5 for A as well as for its complement W∖A. More generally, an agent with subjective probability Pr is said to be ignorant with respect to the partition {A_1, …, A_n} if and only if Pr(A_i) = 1/n for each i. The principle that requires an ideal doxastic agent to equally distribute her subjective probabilities in this fashion whenever, roughly, the agent lacks evidence of the relevant sort is known as the principle of indifference. (Leitgeb & Pettigrew 2010b also present a condition that allows them to give a gradational accuracy argument for the principle of indifference.) It leads to contradictory results if the partition in question is not held fixed (see, for instance, the discussion of Bertrand’s paradox in Kneale 1949). A more cautious version of this principle, which is also applicable if the partition contains countably infinitely many elements, is the principle of maximum entropy. It requires the agent to adopt, as her degree of belief function over (the σ-field generated by) the countable partition {A_i}, one of those probability measures Pr that maximize the quantity −∑_i Pr(A_i) log Pr(A_i). The latter quantity is known as the entropy of Pr with respect to the partition {A_i}. See Paris (1994).
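To make the quantity being maximized concrete, here is a minimal sketch in Python (an illustration, not part of the text; the three-cell partition and the candidate assignments are invented). It computes the entropy of a uniform assignment and of two non-uniform ones over the same finite partition; absent any further evidential constraints, the uniform assignment comes out highest, and so it is the one the principle of maximum entropy recommends.

import math

def entropy(pr):
    # Entropy of a probability assignment over a finite partition:
    # -sum_i Pr(A_i) * log Pr(A_i), with the convention 0 * log 0 = 0.
    return -sum(p * math.log(p) for p in pr if p > 0)

# Candidate degree-of-belief assignments over a three-cell partition {A_1, A_2, A_3}.
candidates = {
    "uniform":     [1/3, 1/3, 1/3],
    "skewed":      [1/2, 1/4, 1/4],
    "opinionated": [9/10, 1/20, 1/20],
}

for name, pr in candidates.items():
    print(f"{name:12} entropy = {entropy(pr):.4f}")

# Approximate output:
#   uniform      entropy = 1.0986
#   skewed       entropy = 1.0397
#   opinionated  entropy = 0.3944

(Entropy is computed here with the natural logarithm; the ranking of the assignments does not depend on the choice of base.)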
Suppose Sophia has hardly any enological knowledge. Her subjective probability for the proposition that a Schilcher, an Austrian wine specialty, is a white wine might reasonably be .5, as might be her subjective probability that a Schilcher is a red wine. Contrast this with the following case. Sophia knows for sure that a particular coin is fair. That is, Sophia knows for sure that the objective chance of the coin landing heads as well as its objective chance of landing tails each equal .5. Her subjective probability for the proposition that the coin will land heads on the next toss might reasonably be .5. Although Sophia’s subjective probabilities are alike in these two scenarios, there is an important epistemological difference. In the first case a subjective probability of .5 represents complete ignorance. In the second case it represents substantial knowledge about the objective chances. (The principle that, roughly, one’s prior subjective probabilities conditional on the objective chances should equal the objective chances is called the principal principle by Lewis 1980. For a recent discussion see Briggs 2009b.)

Examples like these suggest that subjective probability theory does not provide an adequate normative account of doxastic states, because it does not allow one to distinguish between ignorance and knowledge about chances. Interval-valued probabilities (Kyburg & Teng 2001, Levi 1980, van Fraassen 1990, Walley 1991) can be seen as a reply to this objection without giving up the probabilistic framework. If the ideal doxastic agent is certain of the objective chances, she continues to assign sharp probabilities as usual. However, if the agent is ignorant with respect to a proposition A, she will not assign it a subjective probability of .5 (or any other sharp value, for that matter). Rather, she will assign A an entire interval [a, b] ⊆ [0, 1] such that she considers any number in [a, b] to be a legitimate subjective probability for A. The size b − a of the interval [a, b] reflects her ignorance with respect to A, that is, with respect to the partition {A, W∖A}. (As suggested by the last remark, if [a, b] is the interval-probability for A, then [1 − b, 1 − a] is the interval-probability for W∖A.) If Sophia were the enological ignoramus that we have previously imagined her to be, she would assign the interval [0, 1] to the proposition that a Schilcher is a white wine. If she is certain that the coin she is about to toss has an objective chance of .5 of landing heads, and she subscribes to the principal principle, [.5, .5] will be the interval she assigns to the proposition that the coin will land heads on the next toss.

Interval-valued probabilities are represented as convex sets of probability measures (a set of probability measures is convex just in case the mixture x Pr_1(⋅) + (1 − x) Pr_2(⋅) of any two probability measures Pr_1, Pr_2 in the set is also in the set, where x is a real number from the unit interval [0, 1]). Updating a set of probability measures is done by updating the individual probability measures in the set. Weatherson (2007) further generalizes this model by allowing evidence to delete some probability measures from the original set. The idea is that one may learn not only that various facts obtain (in which case one conditionalizes the various probability measures on the evidence received), but also that various evidential or inferential relationships hold, which are represented by the conditional probabilities of the hypotheses conditional on the data. Just as factual evidence is used to delete worlds, “inferential” evidence is used to delete probability measures. Among other things, this allows Weatherson (2007) to deal with one form of the so-called
problem of old evidence (Glymour 1980) that is related to the problem of logical omniscience (Garber 1983, Jeffrey 1983b, Niiniluoto 1983).
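How interval-valued probabilities behave under updating can likewise be sketched briefly. In the following Python illustration (not part of the text; the three worlds, the evidence, and the particular measures are invented, and a small finite list of measures stands in for a full convex set, which would also contain all their mixtures), a set of measures is updated by conditionalizing each of its members on the evidence, and the lower and upper probabilities of a proposition are read off before and after.

def prob(pr, A):
    # Probability of proposition A (a set of worlds) under the measure pr.
    return sum(p for w, p in pr.items() if w in A)

def conditionalize(pr, E):
    # Update the measure pr on evidence E by conditionalization (assumes pr(E) > 0).
    p_e = prob(pr, E)
    return {w: (p / p_e if w in E else 0.0) for w, p in pr.items()}

def interval(measures, A):
    # Lower and upper probability of A across a set of measures.
    values = [prob(pr, A) for pr in measures]
    return min(values), max(values)

# Worlds: the wine is white ("w"), red ("r"), or rosé ("o").
A = {"w"}        # the proposition that the wine is a white wine
E = {"w", "r"}   # evidence: the wine is not a rosé

# A few measures sampled from a set expressing ignorance with respect to A.
credal_set = [
    {"w": 0.1, "r": 0.6, "o": 0.3},
    {"w": 0.5, "r": 0.3, "o": 0.2},
    {"w": 0.8, "r": 0.1, "o": 0.1},
]

lo, hi = interval(credal_set, A)
print(f"prior interval for A:     [{lo:.2f}, {hi:.2f}]")

# Updating the set means updating each measure in it.
updated = [conditionalize(pr, E) for pr in credal_set]
lo, hi = interval(updated, A)
print(f"posterior interval for A: [{lo:.2f}, {hi:.2f}]")

# Approximate output:
#   prior interval for A:     [0.10, 0.80]
#   posterior interval for A: [0.14, 0.89]

Weatherson’s (2007) generalization would, in addition, allow some measures to be removed from the list altogether when “inferential” evidence rules them out.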