  Philosophical questions for IP

Posted 17/03/2016 by free men
This section collects some problems for imprecise probabilities (IP) that have been noted in the literature.

3.1 Dilation

Consider two logically unrelated propositions H and X. Now consider the four “state descriptions” of this simple model as set out in Figure 1. So a = H∩X, d = ¬H∩¬X, and so on. Now define Y = a∪d. Alternatively, consider three propositions related in the following way: Y is defined as “H if and only if X”.
Figure 1: A diagram of the relationships, after Seidenfeld (1994); Y is the shaded area
Further imagine that p(H∣X) = p(H) = 1/2. No other relationships between the propositions hold except those required by logic and probability theory. It is straightforward to verify that the above constraints require that p(Y) = 1/2. The probability of X, however, is unconstrained.
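To make the verification explicit, write x for the unconstrained value p(X). Since p(H∣X) = p(H) = 1/2 makes H and X independent, and Y = (H∩X) ∪ (¬H∩¬X):

$$p(Y) = p(H \cap X) + p(\neg H \cap \neg X) = \tfrac{1}{2}x + \tfrac{1}{2}(1 - x) = \tfrac{1}{2}.$$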
Let’s imagine you were given the above information and took your representor to be the full set of probability functions that satisfy these constraints. Roger White suggested an intuitive gloss on how you might receive information about propositions so related and so constrained (White 2010). White’s puzzle goes like this. I have a proposition X, about which you know nothing at all. I have written whichever is true out of X and ¬X on the Heads side of a fair coin. I have painted over the coin so you can’t see which side is heads. I then flip the coin and it lands with the “X” side uppermost. H is the proposition that the coin lands heads up. Y is the proposition that the coin lands with the “X” side up.
Imagine if you had a precise prior that made you certain of X (this is compatible with the above constraints since X was unconstrained). Seeing X land uppermost now should be evidence that the coin has landed heads. The game set-up makes it such that these apparently irrelevant instances of evidence can carry information. Likewise, being very confident of X makes Y very good evidence for H. If instead you were sure X was false, Y would be solid gold evidence of H’s falsity. So it seems that p(H∣Y) is proportional to prior belief in X (indeed, this can be proven rather easily). Given the way the events are related, observing whether X or ¬X landed uppermost is a noisy channel to learn about whether or not H landed uppermost.
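Indeed the proof is short. With x = p(X) as before, H∩Y = H∩X, so p(H∩Y) = p(H∣X)p(X) = x/2, while p(Y) = 1/2. Hence

$$p(H \mid Y) = \frac{p(H \cap Y)}{p(Y)} = \frac{x/2}{1/2} = x = p(X),$$

so each precise prior in X pins down the posterior in H exactly.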
So let’s go back to the original imprecise case and consider what it means to have an imprecise belief in X. Among other things, it means considering it possible that X is very likely. It is consistent with your belief state that X is such that, if you knew which proposition X was, you would consider it very likely. In this case, Y would be good evidence for H. Note that in this case learning that the coin landed ¬X uppermost (call this Y′) would be just as good evidence against H. Likewise, X might be a proposition that you would have very low credence in, and thus Y would be evidence against H.
Since you are in a state of ignorance with respect to X, your representor contains probabilities that take Y to be good evidence for H and probabilities that take Y to be good evidence for ¬H. So, despite the fact that P(H) = {1/2}, we have P(H∣Y) = [0,1]. This phenomenon, posteriors being wider than their priors, is known as dilation. The phenomenon has been thoroughly investigated in the mathematical literature (Walley 1991; Seidenfeld and Wasserman 1993; Herron, Seidenfeld, and Wasserman 1994; Pedersen and Wheeler 2014). Levi and Seidenfeld reported an example of dilation to Good following Good (1967), and Good mentioned this correspondence in his follow-up paper (Good 1974). Recent interest in dilation in the philosophical community has been generated by White’s paper (White 2010).
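The arithmetic of dilation is easy to check numerically. The following is a minimal sketch in Python, not anything from the literature: the representor is modelled as a finite family of committee members, one for each value x = p(X) on a grid, and conditioning each member on Y recovers the dilated interval. The function names and the grid are illustrative choices.

```python
import numpy as np

def committee_member(x):
    """Joint distribution over the four state descriptions
    a = H&X, b = H&~X, c = ~H&X, d = ~H&~X, fixed by the
    constraints p(H) = 1/2, p(H|X) = p(H), and p(X) = x."""
    return {"a": 0.5 * x, "b": 0.5 * (1 - x),
            "c": 0.5 * x, "d": 0.5 * (1 - x)}

def p_H_given_Y(p):
    """Condition on Y = a ∪ d and return p(H | Y) = p(a) / p(Y)."""
    return p["a"] / (p["a"] + p["d"])

# One committee member per value of the unconstrained p(X).
members = [committee_member(x) for x in np.linspace(0.0, 1.0, 101)]

priors = [m["a"] + m["b"] for m in members]       # each p(H) is 1/2
posteriors = [p_H_given_Y(m) for m in members]    # p(H | Y) = p(X)

print(f"P(H)     = [{min(priors):.2f}, {max(priors):.2f}]")          # [0.50, 0.50]
print(f"P(H | Y) = [{min(posteriors):.2f}, {max(posteriors):.2f}]")  # [0.00, 1.00]
```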
White considers dilation to be a problem since learning Y doesn’t seem to be relevant to H. That is, since you are ignorant about X, learning whether or not the coin landed X up doesn’t seem to tell you anything about whether the coin landed heads up. It seems strange to argue that your belief in H should dilate from 1/2 to [0,1] upon learning Y. It feels as if this should just be irrelevant to H. However, Y is only really irrelevant to H when p(X) = 1/2. Any other precise belief you might have in X is such that Y now affects your posterior belief in H. Figure 2 shows the situation for one particular belief about how likely X is, that is, for one particular p∈P. The horizontal line can shift up or down, depending on what the committee member we focus on believes about X. p(H∣Y) is a half only if the prior in X is also a half. The imprecise probabilist, however, takes into account all the ways Y might affect belief in H.
Figure 2: A member of the credal committee (after Joyce (2011))

Philosophical questions for IP :: Comments

Re: Philosophical questions for IP, posted Thu 17 Mar 2016, 1:25 pm by free men
Consider a group of agents who each had precise credences in the above coin case but differed in their priors on X. They would all start out with a prior of a half in H. After learning Y, these agents would differ in their posterior opinions about H, based on their differing dispositions to update. The group belief would dilate. However, no agent in the group has acted in any way unreasonably. If we take Levi’s suggestion that individuals can be conflicted just as groups can, then it seems that individual agents can have their beliefs dilate just as groups can.
There are two apparent problems with dilation: first, the belief-moving effect of apparently irrelevant evidence; and second, the fact that learning some evidence can cause your belief-intervals to widen. The above comments speak to the first of these; Pedersen and Wheeler (2014) are also focused on mitigating this worry. We turn now to the second worry.
Even if we accept dilation as a fact of life for the imprecise probabilist, it is still weird. Even if all of the above argument is accepted, it still seems strange to say that your belief in H is dilated whatever you learn. That is, whether you learn Y or Y′, your posterior belief in H looks the same: [0,1]. Or perhaps what it shows to be weird is that your initial credence was precise.
Beyond this seeming strangeness, White suggests a specific way in which being subject to dilation is an indicator of a defective epistemology: he argues that dilation examples show that imprecise probabilities violate the Reflection Principle (van Fraassen 1984). The argument goes as follows:
Quote:
given that you know now that whether you learn Y or you learn Y′ your credence in H will be [0,1] (and you will certainly learn one or the other), your current credence in H should also be [0,1].
The general idea is that you should set your credences to what you expect your credences to be in the future. More specifically, your credence in X should be the expectation of your future possible credences in X, taken over the things you might learn. Given that, for everything you might learn in this example, your credence in H would be the same, you should have that as your prior credence too. Your prior should be such that P(H) = [0,1]. So having a precise prior credence in H to start with is irrational. That is how the argument against dilation from reflection goes. Your prior P is not fully precise, though. Consider P(H∩Y): since p(H∩Y) = p(H∩X) = p(X)/2, the prior belief in the conjunction is imprecise. So the alleged problem with dilation and reflection is not as simple as “your precise belief becomes imprecise”. The problem is “your precise belief in H becomes imprecise”; or rather, your precise belief in H as represented by P(H) becomes imprecise.
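Note that, for each committee member taken on its own, reflection in this expectation form is automatic in the coin case: with x = p(X), we have p(H∣Y) = x, p(H∣Y′) = 1 − x and p(Y) = p(Y′) = 1/2, so

$$p(H \mid Y)\,p(Y) + p(H \mid Y')\,p(Y') = \tfrac{1}{2}x + \tfrac{1}{2}(1 - x) = \tfrac{1}{2} = p(H).$$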
The issue with reflection is more basic: what exactly does reflection require of imprecise probabilists in this case? Now, it is obviously the case that each credal committee member’s prior credence is its expectation over the possible future evidence (this is a theorem of probability theory). But somehow, it is felt, the credal state as a whole isn’t sensitive to reflection in the way the principle requires. Each p∈P satisfies the principle, but the awkward symmetries of the problem conspire to make P as a whole violate it. This looks to be the case if we focus on P(H) as an adequate representation of that part of the belief state. But as noted earlier, this is not an adequate way of understanding the credal state. Note that while learning Y and learning Y′ both prompt revision to a state where the posterior belief in H is represented as an interval by [0,1], the credal states as sets of probabilities are not the same. Call the state after learning Y, P′, and the state after learning Y′, P′′. So P′ = {p(⋅∣Y) : p∈P} and P′′ = {p(⋅∣Y′) : p∈P}. While it is true that P′(H) = P′′(H), P′ ≠ P′′ as sets of probabilities, since if p∈P′ then p(Y) = 1, whereas if p∈P′′ then p(Y) = 0. So one lesson we should learn from dilation is that imprecise belief is represented by sets of functions rather than by a set-valued function (see also Joyce 2011; Topey 2012; Bradley and Steele 2014b).
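This point, that P′ and P′′ assign the same interval to H yet differ as sets, can also be checked numerically. The sketch below reuses the illustrative committee_member construction from the earlier dilation snippet; again, none of the names come from the literature.

```python
import numpy as np

def committee_member(x):
    """Four-cell joint over a = H&X, b = H&~X, c = ~H&X, d = ~H&~X."""
    return {"a": 0.5 * x, "b": 0.5 * (1 - x),
            "c": 0.5 * x, "d": 0.5 * (1 - x)}

def condition(p, event):
    """Conditionalise the four-cell distribution p on a set of cells."""
    total = sum(p[cell] for cell in event)
    return {cell: (p[cell] / total if cell in event else 0.0) for cell in p}

def interval(ps, event):
    """Lower and upper probability of an event across a set of functions."""
    vals = [round(sum(p[cell] for cell in event), 10) for p in ps]
    return min(vals), max(vals)

members = [committee_member(x) for x in np.linspace(0.0, 1.0, 101)]
Y, not_Y, H = {"a", "d"}, {"b", "c"}, {"a", "b"}

P1 = [condition(m, Y) for m in members]      # P': the state after learning Y
P2 = [condition(m, not_Y) for m in members]  # P'': the state after learning Y'

print(interval(P1, H), interval(P2, H))  # same interval for H: (0.0, 1.0) twice
print(interval(P1, Y), interval(P2, Y))  # (1.0, 1.0) vs (0.0, 0.0): P' ≠ P''
```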
It seems that examples of dilation undermine the earlier claim that imprecise probabilities allow you to represent the difference between the weight and the balance of evidence (see section 2.3): learning Y appears to give rise to a belief which one would consider as representing less evidence, since it is more spread out. This is so because the prior credence in the dilation case is precise not through weight of evidence but through the symmetry discussed earlier. We cannot take the narrowness of the interval [P̲(X), P̄(X)] (from the lower to the upper probability of X) as a characterisation of weight of evidence, since the interval can be narrow for reasons other than that lots of evidence has been accumulated. So my earlier remarks on weight and balance should not be read as the claim that imprecise probabilities can always represent the weight/balance distinction. What is true is that there are cases where imprecise probabilities can represent the distinction in a way that impacts on decision making. This issue is far from settled and more work needs to be done on this topic.
Re: Philosophical questions for IP, posted Thu 17 Mar 2016, 1:25 pm by free men

3.2 Belief inertia

Imagine there are two live hypotheses, H1 and H2. You have no idea how likely they are, but they are mutually exclusive and exhaustive. Then you acquire some evidence E. Some simple probability theory shows that for every p∈P we have the following relationship (abbreviating p(E∣Hi) as pi, for i = 1, 2):
$$p(H_1 \mid E) = \frac{p(E \mid H_1)\,p(H_1)}{p_1\,p(H_1) + p_2\,p(H_2)} = \frac{p_1\,p(H_1)}{p_2 + (p_1 - p_2)\,p(H_1)}$$

If your prior in H1 is vacuous, that is, if P(H1) = [0,1], then the above equation shows that your posterior is vacuous as well. That is, if p(H1) = 0 then p(H1∣E) = 0, and likewise if p(H1) = 1 then p(H1∣E) = 1; and since the right-hand side of the above equation is a continuous function of p(H1), for every r∈[0,1] there is some p(H1) such that p(H1∣E) = r. So P(H1∣E) = [0,1].
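As a numerical illustration, here is a short sketch with arbitrarily chosen likelihoods (p1 = 0.8 and p2 = 0.3; nothing hangs on the particular values): pushing the whole vacuous prior interval through Bayes’ theorem returns the whole unit interval again.

```python
import numpy as np

def posterior_h1(prior, p1, p2):
    """Bayes' theorem for two exclusive, exhaustive hypotheses:
    p(H1|E) = p1*p(H1) / (p1*p(H1) + p2*(1 - p(H1)))."""
    return p1 * prior / (p1 * prior + p2 * (1 - prior))

p1, p2 = 0.8, 0.3                        # assumed likelihoods p(E|Hi)
priors = np.linspace(0.0, 1.0, 101)      # the vacuous prior P(H1) = [0, 1]
posteriors = posterior_h1(priors, p1, p2)

print(f"P(H1 | E) = [{posteriors.min():.2f}, {posteriors.max():.2f}]")  # [0.00, 1.00]
```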
It seems like the imprecise probabilist cannot learn from vacuous priors. This problem of belief inertia goes back at least as far as Walley. He says that vacuous posterior probabilities are just a consequence of adopting a vacuous prior:
Quote:
The vacuous previsions really are rather trivial models. That seems appropriate for models of “complete ignorance” which is a rather trivial state of uncertainty. On the other hand, one cannot expect such models to be very useful in practical problems, notwithstanding their theoretical importance. If the vacuous previsions are used to model prior beliefs about a statistical parameter for instance, they give rise to vacuous posterior previsions… However, prior previsions that are close to vacuous and make nearly minimal claims about prior beliefs can lead to reasonable posterior previsions. (Walley 1991: 93)
Recently, Joyce (2011) and Rinard (2013) have both discussed this problem. Rinard’s solution is to argue that this shows that the vacuous prior is never a legitimate state of belief; or rather, that we only ever need to model your beliefs using non-vacuous priors, even if these are incomplete descriptions of your belief state. This is similar to Walley’s “non-exhaustive” representation of belief. An alternative solution to this problem (inspired by Wilson 2001 and Cattaneo 2008, 2014) would modify the update rule in such a way that those extreme priors that give extremely small likelihoods to the evidence are excised from the representor. More work would need to be done to make this precise and to show exactly how the response would go.
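For concreteness, here is one loose sketch of how such a modified rule might go, inspired by (but not a reconstruction of) the likelihood-based proposals just cited: members of the representor whose likelihood for the observed evidence falls below some fraction α of the best likelihood in the set are discarded before conditioning. The threshold α and everything else here is an illustrative assumption.

```python
import numpy as np

def alpha_cut_update(priors, p1, p2, alpha=0.15):
    """Discard members whose likelihood for E is below alpha times the
    maximum likelihood over the representor, then conditionalise the rest."""
    likelihoods = p1 * priors + p2 * (1 - priors)   # p(E) for each member
    kept = priors[likelihoods >= alpha * likelihoods.max()]
    return p1 * kept / (p1 * kept + p2 * (1 - kept))

priors = np.linspace(0.0, 1.0, 101)          # vacuous prior P(H1) = [0, 1]
post = alpha_cut_update(priors, p1=0.8, p2=0.05)
# The posterior is now a proper sub-interval of [0, 1]: some learning occurs.
print(f"[{post.min():.2f}, {post.max():.2f}]")
```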
 
