Regress

Posted by free men on 17/03/2016
Imprecise probability is a theory born of our limitations as reasoning agents, and of limitations in our evidence base. If only we had better evidence, a single probability function would do; but since our evidence is weak, we must use a set. In a way, the same is true of precise probabilism. If only we knew the truth, we could represent belief with a truth-valuation function, or simply a set of fully believed sentences. But since there are truths we don’t know, we must use probabilities to represent our intermediate degrees of confidence. And indeed, the same problem arises for the imprecise probabilist. Is it reasonable to assume that we know which set of probabilities best represents the evidence? Perhaps we should have a set of sets of probabilities… Similar problems arise for theories of vagueness (Sorensen 2012). We objected to precise values for degrees of belief, so why be content with set-valued beliefs that have precise boundaries? This is the problem of “higher-order vagueness” recast as a problem for imprecise probabilism. Why are sets of probabilities the right level at which to stop the regress? Why not sets of sets? Why not second-order probabilities? Why not single probability functions? Williamson (2014) makes this point, and argues that a single precise probability is the correct level at which to get off the “uncertainty escalator”. Williamson advocates the betting interpretation of belief, and his argument here presupposes that interpretation. But the point is still worth addressing: for a given interpretation of what belief is, what sort of representation of uncertainty is appropriate? For the functionalist interpretation suggested above, this is something of a pragmatic choice: the further we allow the regress to continue, the harder these belief-representing objects become to work with. So let’s not go further than we need.
We have seen arguments above that IP does have some advantage over precise probabilism: in its capacity to represent suspension of judgement (illustrated in the sketch below), in the distinction between the weight and the balance of evidence, and so on. So we must go at least this far up the uncertainty escalator. But for the sake of practicality we need not go any further, even though there are hierarchical Bayes models that would give us a well-defined theory of higher-order models of belief. This is, ultimately, a pragmatic argument. Actual human belief states are probably immensely complicated neurological patterns, with all the attendant complexity, interactivity, reflexivity and vagueness. We are modelling belief, so the task is to choose a model at the right level of complexity. If you are working out the trajectory of a cannonball on Earth, you can safely ignore the gravitational influence of the moon. Likewise, there will be contexts where simple models of belief are appropriate: perhaps your belief state is just a set of sentences of a language, or perhaps a single probability function. If, however, you are modelling the tides, then the gravitational influence of the moon needs to be included: the model must be more complex. In the same way, an adequate model of belief under severe uncertainty may need to move beyond the single-probability paradigm. But the pragmatic argument says that we should move only as far as we need to: while you need to model the moon to get the tides right, you can get away without having Venus in your model. This relates to the contextual nature of the appropriateness of models of belief mentioned earlier. If one were attempting to provide a complete formal characterisation of the ontology of belief, these regress worries would be significantly harder to avoid.
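As a minimal illustration of the first of these advantages, the following Python sketch (with an invented credal set, using the standard lower/upper envelope definitions) shows how a credal set can express suspended judgement in a way no single probability function can.

```python
# Minimal sketch: suspension of judgement via a credal set.
# The probability functions are invented for illustration; each dict
# is one member p of the credal set, giving p(X) and p(not-X).

credal_set = [
    {"X": 0.0, "not-X": 1.0},
    {"X": 0.4, "not-X": 0.6},
    {"X": 1.0, "not-X": 0.0},
]

# Standard envelope definitions: the lower/upper probability of an
# event is the min/max over the members of the credal set.
lower_X = min(p["X"] for p in credal_set)
upper_X = max(p["X"] for p in credal_set)

print(f"lower P(X) = {lower_X}, upper P(X) = {upper_X}")  # 0.0 and 1.0

# The interval [0, 1] models a fully suspended judgement about X.
# A precise probabilist forced to pick p(X) = 0.5 would instead be
# committing to the definite view that X and not-X are equally likely.
```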
Let’s imagine that we had a second-order probability \(\mu\) defined over the set of (first-order) probabilities \(P\). We could then reduce uncertainty to a single function by \(p^*(X) = \sum_{p \in P} \mu(p)\, p(X)\) (assuming \(P\) is finite; in the interests of keeping things simple I discuss only this case). Now if \(p^*(X)\) is what is used in decision making, then there is no real sense in which we have a genuine IP model: it cannot rationalise the Ellsberg choices, nor can it give rise to incomparability. If there is some alternative use that \(\mu\) is put to, a use that allows incomparability and that rationalises the Ellsberg choices, then it might be a genuine rival to credal sets, but it represents just as much of a departure from the orthodox theory as IP does.
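To see concretely why this reduction collapses the model, here is a minimal Python sketch (with an invented credal set and invented second-order weights \(\mu\)) computing \(p^*\): the result is a single precise probability function, so an agent who decides using \(p^*\) alone is indistinguishable from a precise probabilist.

```python
# Minimal sketch of the second-order reduction p*(X) = sum mu(p) p(X).
# Both the credal set and the weights mu are invented for illustration.

credal_set = [
    {"X": 0.2, "not-X": 0.8},
    {"X": 0.5, "not-X": 0.5},
    {"X": 0.8, "not-X": 0.2},
]
mu = [0.25, 0.50, 0.25]  # second-order probability over the credal set

def reduce_to_first_order(credal_set, mu):
    """Collapse (credal_set, mu) into the single function p*."""
    outcomes = credal_set[0].keys()
    return {o: sum(w * p[o] for w, p in zip(mu, credal_set))
            for o in outcomes}

p_star = reduce_to_first_order(credal_set, mu)
print(p_star)  # {'X': 0.5, 'not-X': 0.5} -- one precise probability

# Whatever imprecision the credal set encoded is now gone: an agent
# who decides using p_star alone behaves exactly like a precise
# probabilist, so this model cannot rationalise Ellsberg-style
# ambiguity aversion or generate incomparability.
```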
Gärdenfors and Sahlin’s Unreliable Probabilities model enriches a basic IP approach with a “reliability index” (see the historical appendix). Lyon (forthcoming) enriches the standard IP picture in a different way: he adds a privileged “best guess” probability. This modification allows for better aggregation of elicited IP estimates. How best to interpret such a model is still an open question. Other enriched IP models are no doubt available.