First published Mon Aug 14, 2006; substantive revision Tue Mar 24, 2015

Contemporary analytic philosophers of mind generally use the term “belief” to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true. To believe something, in this sense, needn't involve actively reflecting on it: Of the vast number of things ordinary adults believe, only a few can be at the fore of the mind at any single time. Nor does the term “belief”, in standard philosophical usage, imply any uncertainty or any extended reflection about the matter in question (as it sometimes does in ordinary English usage). Many of the things we believe, in the relevant sense, are quite mundane: that we have heads, that it's the 21st century, that a coffee mug is on the desk. Forming beliefs is thus one of the most basic and important features of the mind, and the concept of belief plays a crucial role in both philosophy of mind and epistemology. The “mind-body problem”, for example, so central to philosophy of mind, is in part the question of whether and how a purely physical organism can have beliefs. Much of epistemology revolves around questions about when and how our beliefs are justified or qualify as knowledge.
Most contemporary philosophers characterize belief as a “propositional attitude”. Propositions are generally taken to be whatever it is that sentences express (see the entry on propositions). For example, if two sentences mean the same thing (e.g., “snow is white” in English, “Schnee ist weiss” in German), they express the same proposition, and if two sentences differ in meaning, they express different propositions. (Here we are setting aside some complications that might arise in connection with indexicals; see the entry on indexicals.) A propositional attitude, then, is the mental state of having some attitude, stance, take, or opinion about a proposition or about the potential state of affairs in which that proposition is true—a mental state of the sort canonically expressible in the form “S A that P”, where S picks out the individual possessing the mental state, A picks out the attitude, and P is a sentence expressing a proposition. For example: Ahmed [the subject] hopes [the attitude] that Alpha Centauri hosts intelligent life [the proposition], or Yifeng [the subject] doubts [the attitude] that New York City will exist in four hundred years. What one person doubts or hopes, another might fear, or believe, or desire, or intend—different attitudes, all toward the same proposition. Contemporary discussions of belief are often embedded in more general discussions of the propositional attitudes; and treatments of the propositional attitudes often take belief as the first and foremost example.

2. Types, Degrees, and Relatives of Belief
2.1 Occurrent Versus Dispositional Belief
2.2 Varieties of Implicit Belief
2.3 De Re Versus De Dicto Belief Attributions
2.4 Degree of Belief
2.5 Belief and Acceptance
2.6 Belief and Knowledge
2.7 Belief and Delusion
3. The Content of Beliefs
3.1 Fine- or Coarse-Grained?
3.2 Atomism Versus Holism
3.3 Internalism and Externalism
3.4 Frege's Puzzle
4. Can There Be Belief Without Language?
Bibliography
Academic Tools
Other Internet Resources
Related Entries
1. What Is It to Believe?
1.1 Representationalism
It is common to think of believing as involving entities—beliefs—that are in some sense contained in the mind. When someone learns a particular fact, for example, when Kai reads that astronomers no longer classify Pluto as a planet, he acquires a new belief (in this case, the belief that astronomers no longer classify Pluto as a planet). The fact in question—or more accurately, a representation, symbol, or marker of that fact—may be stored in memory and accessed or recalled when necessary. In one way of speaking, the belief just is the fact or proposition represented, or the particular stored token of that fact or proposition; in another way of speaking, the more standard in philosophical discussion, the belief is the state of having such a fact or representation stored. (Despite the ease with which we slide between these different ways of speaking, they are importantly distinct: Contrast the state of having hot water in one's water heater—the state of being “hot-water ready”, say—with the stuff actually contained in the heater, that particular mass of water, or water in general.)
It is also common to suppose that beliefs play a causal role in the production of behavior. Continuing the example, we might imagine that after learning about the demotion of Pluto, Kai naturally turns his attention elsewhere, not consciously considering the matter for several days, until when reading an old science textbook he encounters the sentence “our solar system contains nine planets”. Involuntarily, his new knowledge about Pluto is called up from memory. He finds himself doubting the truth of the textbook's claim, and he says, “actually, astronomers no longer accept that”. It seems plausible to say that Kai's belief about Pluto, or his possession of that belief, caused, or figured in a causal explanation of, his utterance.
Various elements of this intuitive characterization of belief have been challenged by philosophers, but it is probably fair to say that the majority of contemporary philosophers of mind accept the bulk of this picture, which embodies the core ideas of the representational approach to belief, according to which central cases of belief involve someone's having in her head or mind a representation with the same propositional content as the belief. (But see §2.2, below, for some caveats, and see the entry on mental representation.) As discussed below, representationalists may diverge in their accounts of the nature of representation, and they need not agree about what further conditions, besides possessing such a representation, are necessary if a being is to qualify as having a belief. Among the more prominent advocates of a representational approach to belief are Fodor (1975, 1981, 1987, 1990), Millikan (1984, 1993), Dretske (1988), Cummins (1996), and Burge (2010).
One strand of representationalism, endorsed by Fodor, takes mental representations to be sentences in an internal language of thought. To get a sense of what this view amounts to, it is helpful to start with an analogy. Computers are sometimes characterized as operating by manipulating sentences in “machine language” in accordance with certain rules. Consider a simplified description of what happens as one enters numbers into a spreadsheet. Inputs from the keyboard cause the computer, depending on the programs it is running and its internal state, to instantiate or “token” a sentence (in machine language) with the content (translated into English) of, for example, “numerical value 4 in cell A1”. In accordance with certain rules, the machine then displays the shape “4” in a certain location on the monitor, and perhaps, if it is implementing the rule “the values of column B are to be twice the values of column A”, it tokens the sentence “numerical value 8 in cell B1” and displays the shape “8” in another location on the monitor. If we someday construct a robot whose behavior resembles that of a human being, we might imagine it to operate broadly along the lines described above—that is, by manipulating machine-language sentences in accordance with rules, in connection with various potential inputs and outputs. Such a robot might somewhere store the machine-language sentence whose English translation is “the chemical formula for water is H₂O”. We might suppose this robot is able to act as does a human who possesses this belief because it is disposed to access this sentence appropriately on relevant occasions: When asked “of what chemical elements is water compounded?”, the robot accesses the water sentence and manipulates it and other relevant sentences in such a way that it produces a human-like response.
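The spreadsheet analogy can be made concrete with a small sketch. This is our own illustrative toy, not any real spreadsheet's implementation: each “tokened sentence” is modeled as a cell–value entry, and the column-B rule from the example is applied whenever a column-A value is entered.

```python
# Toy model of the "machine language" picture: inputs cause the system to
# token sentences ("numerical value V in cell C"), and rules manipulate them.
# The class and method names here are illustrative inventions.

class Spreadsheet:
    def __init__(self):
        # Each tokened sentence is stored as a cell -> value entry.
        self.tokens = {}

    def enter(self, cell, value):
        # Keyboard input causes the machine to token "numerical value V in cell C".
        self.tokens[cell] = value
        # Rule from the example: the values of column B are to be twice
        # the values of column A.
        if cell.startswith("A"):
            row = cell[1:]
            self.tokens["B" + row] = 2 * value

    def display(self, cell):
        # What would be shown at that location on the monitor.
        return self.tokens.get(cell)

sheet = Spreadsheet()
sheet.enter("A1", 4)
print(sheet.display("A1"))  # 4
print(sheet.display("B1"))  # 8
```

The point of the sketch is only that rule-governed manipulation of stored sentence-like tokens can mediate between inputs and outputs, which is the role the language of thought hypothesis assigns to mental sentences.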
According to the language of thought hypothesis (see the entry on the language of thought hypothesis), our cognition proceeds rather like such a robot's. The formulae we manipulate are not in “machine language”, of course, but rather in a species-wide “language of thought”. A sentence in the language of thought with some particular propositional content P is a “representation” of P. On this view, a subject believes that P just in case she has a representation of P that plays the right kind of role—a “belief-like” role—in her cognition. That is, the representation must not merely be instantiated somewhere in the mind or brain, but it must be deployed, or apt to be deployed, in ways we regard as characteristic of belief. For example, it must be apt to be called up for use in theoretical inferences to which it is relevant. It must be ready for appropriate deployment in deliberation about means to desired ends. It is sometimes said, in such a case, that the subject has the proposition P, or a representation of that proposition, tokened in her “belief box” (though of course it is not assumed that there is any literal box-like structure in the head).
Dretske's view centers on the idea of representational systems as systems with the function of tracking features of the world (for a similar view, see Millikan 1984, 1993). Organisms, especially mobile ones, generally need to keep track of features of their environment to be evolutionarily successful. Consequently, they generally possess internal systems whose function it is to covary in certain ways with the environment. For example, certain marine bacteria contain internal magnets that align with the Earth's magnetic field. In the northern hemisphere, these bacteria, guided by the magnets, propel themselves toward magnetic north. Since in the northern hemisphere magnetic north tends downward, they are thus carried toward deeper water and sediment, and away from toxic, oxygen-rich surface water. We might thus say that the magnetic system of these bacteria is a representational system that functions to indicate the direction of benign or oxygen-poor environments. In general, on Dretske's view, an organism can be said to represent P just in case that organism contains a subsystem whose function it is to enter state A only if P holds, and that subsystem is in state A.
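Dretske's condition has a simple functional shape, which we can sketch in code. This is our own minimal rendering, not Dretske's formalism: a subsystem carries a condition it has the function of tracking, and the organism represents that condition just in case such a subsystem is currently in its indicator state.

```python
# Minimal sketch (our own construction) of a tracking representational
# system in Dretske's sense: a subsystem whose function is to enter a
# designated indicator state A only if its target condition P holds.

from dataclasses import dataclass

@dataclass
class Subsystem:
    content: str              # the condition P the subsystem is supposed to track
    in_state_a: bool = False  # whether it is currently in indicator state A

    def update(self, condition_holds: bool):
        # In a well-functioning detector, state A covaries with P.
        self.in_state_a = condition_holds

def represents(organism, proposition):
    """An organism represents P just in case some subsystem whose function
    is to track P is currently in its indicator state A."""
    return any(s.content == proposition and s.in_state_a for s in organism)

# The bacterium's magnetic system, tracking the direction of magnetic north.
magnet = Subsystem(content="magnetic north is this way")
bacterium = [magnet]

magnet.update(condition_holds=True)
print(represents(bacterium, "magnetic north is this way"))  # True
```

Note that the sketch leaves out what makes tracking a subsystem's *function* (for Dretske, roughly, its evolutionary or learning history), which is the philosophically load-bearing part of the account.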
To have beliefs, Dretske suggests, is to have an integrated manifold of such representational systems, acquired in part through associative learning, poised to guide behavior. Lacking such an integrated manifold, and lacking associative learning, magnetosome bacteria cannot, on Dretske's view, rightly be regarded as literally possessing full-fledged beliefs. But exactly how rich an organism's representational structure must be for it to have beliefs, and in what ways, is a question Dretske does not address, regarding it as a terminological boundary dispute rather than a matter of deep ontological significance. (For more on belief in non-human animals, see §4 below.)
Recent representational approaches sometimes especially emphasize the normative dimension of belief. That is, they emphasize the idea that it is central to a mental state's being a belief, as opposed to some other mental state (e.g., a supposition, an imagining, a desire), that it is necessarily defective in a certain way if it is false. Shah and Velleman (2005) argue that conceiving of an attitude as a belief that P entails conceiving of it as governed by a norm of truth, that is, as an attitude that is correct if and only if P is true. Similarly, Burge (2010) argues that the “primary constitutive function” of believing is the production of veridical propositional representations. (See also the literature on “direction of fit”: Anscombe 1957/1963; Searle 1983; and on norms of belief generally: Chan, ed., 2013.)
1.1.1 Representational Structure
If one accepts a representational view of belief, it's plausible to suppose that the relevant representations are structured in some way—that the belief that P & Q, for instance, shares something structurally in common with the belief that P. To say this is not merely to say that the belief that P & Q has the following property: It cannot be true unless the belief that P is true. Consider the following possible development of Dretske's representational approach: An organism has developed a system that functions to detect whether P is or is not the case. It's supposed to enter state alpha when P is true; its being in alpha has the function of indicating P. Also, the organism has developed a separate system for detecting whether P & Q is the case. It's supposed to enter state beta when P & Q is true; its being in beta has the function of indicating P & Q. But alpha and beta have nothing important in common other than what, in the outside world, they are supposed to represent; they have no structural similarity; one is not compounded in part from the other. Conceivably, all our beliefs could be set up in this way, having as little in common as alpha and beta—one internally unstructured representational state after another. To say that mental representations are structured is in part to deny that our minds work like that.
Among the reasons to suppose that our representations are structured, Fodor argues, are the productivity and systematicity of thought (Fodor 1987; Fodor and Pylyshyn 1988; Aizawa 2003). Thought and belief are “productive” in the sense that we can potentially think or believe an indefinitely large number of things: that elephants despise bowling, that 245 + 382 = 627, that river bottoms are usually not composed of blue beads. If representations are unstructured, each of these different potential beliefs must, once believed, be an entirely new state, not constructed from representational elements previously available. Similarly, thought and belief are “systematic” in the sense that an organism who thinks or believes that Mengzi repudiated Gaozi will normally also have the capacity (if not necessarily the inclination) to think or believe that Gaozi repudiated Mengzi; an organism who thinks or believes that dogs are insipid and cats are resplendent will normally also have the capacity to think or believe that dogs are resplendent and cats are insipid. If representations are structured, if they have elements that can be shuffled and recombined, the productivity and systematicity of thought and belief seem naturally to follow. Conversely, someone who holds that representations are unstructured has, at least, some explaining to do to account for these features of thought. (So also, apparently, does someone who denies that belief is underwritten or implemented by a representational system of any sort.)
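The systematicity point can be illustrated with a small sketch (our own construction, not Fodor's). If a thought is built from recombinable constituents, such as a subject, a relation, and an object, then any system able to token “Mengzi repudiated Gaozi” thereby has the materials to token “Gaozi repudiated Mengzi”; the variants fall out of recombination for free.

```python
# Toy illustration of systematicity via structured representations:
# a "thought" is a (subject, relation, object) triple, and the same
# constituents recombine to yield the systematic variants.

from itertools import permutations

subjects = ["Mengzi", "Gaozi"]
relation = "repudiated"

# Every reordering of the constituents is a thinkable thought.
thoughts = [(a, relation, b) for a, b in permutations(subjects, 2)]

for s, r, o in thoughts:
    print(f"{s} {r} {o}")
# Mengzi repudiated Gaozi
# Gaozi repudiated Mengzi
```

An unstructured scheme, by contrast, would have to posit a wholly separate atomic state for each of these, leaving their co-availability unexplained.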
Supposing representations are structured, then, what kind of structure do they have? Fodor notes that productivity and systematicity are features not just of thought but also of language, and concludes that representational structure must be linguistic. He endorses the idea of an innate, species-wide language of thought (as discussed briefly in §1.1 above); others tie the structure more closely to the thinker's own natural (learned) language (Harman 1973; Field 1978; Carruthers 1996). However, still others assert that the representational structure underwriting belief isn't language-like at all.