 Consciousness, life, and death

Posted by free men on 12/03/2016

The so-called “hard problem” of consciousness (the problem of subjective experience, or “what it is like” to have a particular experience; see, e.g., Chalmers 1995) has yielded little to the probings of neuroscience, and it is not clear whether it ever will. However, in the last decade impressive advances have been made in other realms of consciousness research. Most impressive have been the improvements in detecting altered levels of consciousness with brain imaging.

Diagnosing behaviorally unresponsive patients has long been a problem for neurology, although as long as 20 years ago neurologists had recognized systematic differences between, and in the prognoses for, persistent vegetative state (PVS), minimally conscious state (MCS), and locked-in syndrome, a condition in which the patient has normal levels of awareness but cannot move. Functional brain imaging has fundamentally changed the problems faced by those caring for these patients. Owen and colleagues have shown that it is possible to identify some patients mischaracterized as being in PVS by demonstrating that they are able to understand commands and follow directions (Owen et al. 2006). In these studies, both normal subjects and brain-injured patients were instructed to visualize performing two different activities while in the fMRI scanner. In normal subjects the two tasks activated different parts of cortex. Owen showed that one patient diagnosed as in PVS showed this normal pattern, unlike other PVS patients, who showed no differential activation when given these instructions. These data suggest that some patients diagnosed as in PVS can in fact process and understand the instructions, and that they have the capacity for sustained attention and voluntary mental action. These results were later replicated in other such patients. In a later study the same group used these imagination techniques to elicit answers to yes/no questions from some patients with severe brain injury (Monti et al. 2010). More recent work aims to adapt these methods for EEG, a cheaper and more portable neurotechnology (Cruse et al. 2011). Neuroimaging thus provides new tools for evaluating and diagnosing patients with disorders of consciousness.
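To make the logic of these studies concrete, the sketch below simulates the imagery-based yes/no protocol in miniature: two imagery tasks produce different activity in two regions of interest, a classifier is trained on cued calibration trials, and a new trial is decoded as an answer. All data, the region labels, and the simulate_trial helper are invented for illustration; the actual studies involve full fMRI or EEG analysis pipelines rather than this toy two-feature setup.

# A minimal, hypothetical sketch of the decoding logic behind
# imagery-based "yes/no" communication (in the spirit of Owen et al.
# 2006 and Monti et al. 2010). Everything here is simulated.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate_trial(task):
    """Return mean activity in two hypothetical ROIs for one trial.

    task = "motor"   -> imagined tennis playing (stronger ROI 1)
    task = "spatial" -> imagined navigation (stronger ROI 2)
    """
    if task == "motor":
        return rng.normal([1.0, 0.2], 0.3)
    return rng.normal([0.2, 1.0], 0.3)

# Calibration phase: the participant is cued which task to imagine.
tasks = ["motor", "spatial"] * 20
X = np.array([simulate_trial(t) for t in tasks])
y = np.array([0 if t == "motor" else 1 for t in tasks])

clf = SVC(kernel="linear").fit(X, y)

# Communication phase: "imagine tennis for YES, navigation for NO".
answer_trial = simulate_trial("motor")   # participant intends "yes"
decoded = clf.predict(answer_trial.reshape(1, -1))[0]
print("Decoded answer:", "YES" if decoded == 0 else "NO")

The communicative work is done by the agreed mapping from imagery task to answer; the decoding itself is an ordinary two-class classification problem.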
These studies have the potential to revolutionize the way in which patients with altered states of consciousness are diagnosed and cared for, may have bearing on when life support is terminated, and raise the possibility of allowing patients some control over questions regarding their care and end-of-life decisions. This last possibility, while in some ways alleviating some worries about how to treat severely brain-damaged individuals, raises other thorny ethical problems. One of the most pressing is how to deal with questions of competence and informed consent: these are people with severe brain damage, and even when they do appear capable on occasion of understanding and answering questions, there is still uncertainty about whether their abilities are stable, how sophisticated they are, and whether they can competently make decisions about such weighty issues (Clausen 2008; Sinnott-Armstrong 2016). Nonetheless, these methods open up new possibilities for diagnosis and treatment and for restoring a measure of autonomy and self-determination to people with severe brain damage.

5. Neuroscience and society

5.1 Neuroscience and social justice

Neuroethics must also be attentive to issues of social justice. Beyond issues that affect individuals, such as autonomy, consent, and self-determination, discussed above, are ethical issues that affect the shape of society. In this regard the concerns are not substantially different from those of traditional bioethics. As neuroscience promises to offer treatments and enhancements, it must attend to issues of distributive justice and play a role in ensuring that the fruits of neuroscientific research do not go only to those who enjoy the best our society has to offer. Moreover, a growing understanding that poverty and socioeconomic status more generally have long-lasting cognitive effects raises moral questions about social policy, the structure of our society, and the growing gap between rich and poor (Farah 2007; Noble and Farah 2013). It seems that the social and neuroscientific realities may reveal the American Dream to be largely hollow, and these findings may undercut some popular political ideologies. Justice may demand more involvement of neuroethicists in policy decisions (Giordano, Kulkarni, and Farwell 2014; Shook, Galvagni, and Giordano 2014).
Ethical issues also arise from neuroscientific research on nonhuman animals. Like traditional bioethics, neuroethics must address questions about the ethical use of animals for experimental purposes in neuroscience. In addition, however, it ought to consider questions regarding the use of animals as model systems for understanding the human brain and human cognition. Animal studies have given us the bulk of our understanding of neural physiology and anatomy, and have provided significant insight into the function of conserved biological capacities. However, the further we push into unknown territory concerning higher cognitive functions, the more we will have to attend to the specifics of the similarities and differences between humans and other species, and evaluating a model system may involve considerable philosophical work (Shanks, Greek, and Greek 2009; Shelley 2010; Nestler and Hyman 2010). In some cases, the dissimilarities may not warrant animal experiments.
Finally, neuroethics stretches seamlessly into the law (see, e.g., Vincent 2013; Morse and Roskies 2013). Neuroethical issues arise in criminal law, in particular with respect to criminal responsibility. For example, the recognition that a large percentage of prison inmates have some history of head trauma or other abnormality raises the question of where to draw the line between the bad and the mad (Center for Disease Control 2007; Maibom 2008). Neuroethics also has bearing on issues of addiction: some have characterized addiction as a brain disease or species of dysfunction, and question whether it is appropriate to hold addicts responsible for their behavior (Hyman 2007; Carter, Hall, and Illes 2011). Research has demonstrated that human brains are not fully developed until the mid-twenties, and that the areas last to develop are prefrontal regions involved in executive control and inhibition. In light of this, many have argued that juveniles should not be held fully responsible for criminal behavior. Indeed, a recent Supreme Court ruling (Roper v. Simmons, 2005) barred the death penalty for juvenile murderers; although an amicus brief in favor of the ruling referenced brain immaturity, the opinion itself does not rely upon it. A later case, Miller v. Alabama (2012), ruled mandatory life-without-parole sentences for juveniles unconstitutional and mentions neuroscience and social science in a footnote. Other areas of law, such as tort law, employment law, and health care law, also overlap with neuroethical concerns and may well be influenced by neuroscientific discoveries (Clausen and Levy 2015; Freeman 2011; Jones, Schall, and Shen 2014).

5.2 Public perception of neuroscience

The advances of neuroscience have become a common topic in the popular media, with colorful brain images becoming a pervasive illustrative trope in news stories about neuroscience. While no one doubts that popularizing neuroscience is a positive good, neuroethicists have been legitimately worried about the possibilities for misinformation. These include worries about “the seductive allure” of neuroscience, and of misleading and oversimplified media coverage of complex scientific questions.

5.2.1 The seductive allure

There is a documented tendency for laypeople to think that information that makes reference to the brain, or to neuroscience or neurology, is more privileged, more objective, or more trustworthy than information that makes reference to the mind or psychology. For example, Weisberg and colleagues report that subjects with little or no neuroscience training rated bad explanations as better when they made reference to the brain or incorporated neuroscientific terminology (Weisberg et al. 2008). This “seductive allure of neuroscience” is akin to an unwarranted epistemic deference to authority. The differential appraisal extends into real-world settings, with testimony from a neuroscientist or neurologist judged to be more credible than that of a psychologist. The tendency is to view neuroscience as a hard science, in contrast to “soft” methods of inquiry that focus on function or behavior. In the case of neuroimaging methods, this betrays a deep misunderstanding of the genesis and significance of the neuroscientific information. What people fail to realize is that neuroimaging information is classified and interpreted by its ties to function, so (barring unusual circumstances) it cannot be more reliable or “harder” than the psychology it relies upon.
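A worked sketch of the kind of within-subject comparison behind this finding may help: the same bad explanations are rated with and without added neuroscience language, and the two sets of ratings are compared. The numbers below are simulated for illustration and are not data from Weisberg et al. (2008); the subject count, rating scale, and effect size are assumptions.

# A hypothetical sketch of a "seductive allure"-style comparison:
# ratings of identical bad explanations, with and without added
# neuroscience jargon, compared within subjects. Simulated data only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 40

# Simulated ratings on a -3..+3 scale for bad explanations.
plain = rng.normal(-0.5, 1.0, n_subjects)               # no neuroscience language
with_neuro = plain + rng.normal(0.8, 0.7, n_subjects)   # same items plus jargon "bump"

t, p = stats.ttest_rel(with_neuro, plain)
print(f"mean difference = {np.mean(with_neuro - plain):.2f}, t = {t:.2f}, p = {p:.4f}")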
Brain images in particular have prompted worries that the colorful images of brains with “hotspots” that accompany media coverage could themselves be misleading. If people intuitively appreciate brain images as if they were akin to a photograph of the brain in action, this could mislead them into thinking of these images as objective representations of reality, prompting them to overlook the many inferential steps and nondemonstrative decisions that underlie creation of the image they see (Roskies 2007). The worry is that the powerful pull of the brain image will lend a study more epistemic weight than is justified and discourage people from asking the many complicated questions that one must ask in order to understand what the image signifies and what can be inferred from the data. Further work, however, has suggested that once one takes into account the privilege accorded to neuroscience over psychology, the images themselves do not further mislead (Schweitzer et al. 2011).
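The following sketch illustrates why such an image is not a photograph: each colored voxel is the output of a statistical test, and how much of the brain “lights up” depends on an adjustable threshold. All numbers, voxel counts, and thresholds here are invented; the point is only that the picture is a product of analysis choices, not of a camera.

# A minimal sketch of an fMRI "hotspot" image as a thresholded
# statistical map: simulate task vs. rest data, run a voxelwise test,
# and show how the apparent "activation" depends on the threshold.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_voxels, n_task, n_rest = 1000, 20, 20

# Simulated signal: most voxels show no effect; a small cluster does.
task = rng.normal(0.0, 1.0, (n_task, n_voxels))
rest = rng.normal(0.0, 1.0, (n_rest, n_voxels))
task[:, :30] += 1.0   # a hypothetical "active" region

t_map, p_map = stats.ttest_ind(task, rest, axis=0)

# The "image" people see is just this map after a threshold choice.
for p_threshold in (0.05, 0.001):
    hotspots = np.sum(p_map < p_threshold)
    print(f"threshold p < {p_threshold}: {hotspots} 'active' voxels shown")

Changing the threshold (or the correction for multiple comparisons, the smoothing, the contrast, and so on) changes the picture, which is exactly the kind of nondemonstrative decision the prose above describes.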

5.2.2 Media hype

In this era of indubitably exciting progress in brain research, there is a “brain-mania” that is partially warranted but holds its own dangers. The culture of science is such that it is not uncommon for scientists to describe their work in the most dramatic terms possible in order to secure funding and/or fame. Although the hyperbole can be discounted by knowledgeable readers, those less sophisticated about the science may take it at face value. Studies have shown that the media are rarely critical of the scientific findings they report and tend not to present alternative interpretations (Racine et al. 2006). The result is that the popular media sometimes convey wildly inaccurate pictures of legitimate scientific discoveries, which can fuel both overly optimistic enthusiasm and fear (Racine et al. 2010). One of the clear pragmatic goals of neuroethics, whether it concerns basic research or clinical treatments, is to exhort and educate scientists and the media to better convey both the promise and the complexities of scientific research. It is the job of both these groups to teach people enough about science in general, and brain science in particular, that they see it as worthy of respect, and also of the same critical assessment to which scientists themselves subject their own work.
It is admittedly difficult to accurately translate complicated scientific findings for the lay public, but it is essential. Overstatement of the significance of results can instill unwarranted hope in some cases, fear in others, and jadedness and suspicion going forward. None of these are healthy for the future status and funding of the basic sciences, and providing fodder for scientific naysayers has policy implications that go far beyond the reach of neuroscience.

5.3 Practical neuroethics

Medical practice and neuroscientific research raise a number of neuroethical issues, many of which are common to bioethics. For example, issues of consent, of incidental findings, of competence, and of privacy of information arise here. In addition, practicing neurologists, psychologists and psychiatrists may routinely encounter certain brain diseases or psychological dysfunctions that raise neuroethical issues that they must address in their practices (Farah 2005). Because of the overlap with traditional bioethics, these issues will not be discussed further here (articles on many of these topics can be found elsewhere in this encyclopedia). For a more detailed discussion of these more applied neuroethics issues approached from a pragmatic point of view, see, for example, Racine (2010).

6. The neuroscience of ethics

Neuroscience, or more broadly the cognitive and neural sciences, has made significant inroads into understanding the neural basis of ethical thought and social behavior. In the last decades, these fields have begun to flesh out the neural machinery underlying human capacities for moral judgment, altruistic action, and the moral emotions. The field of social neuroscience, nonexistent two decades ago, is thriving, and our understanding of the circuitry, the neurochemistry, and the modulatory influences underlying some of our most complex and nuanced interpersonal behaviors is growing rapidly. Neuroethics recognizes that a heightened understanding of the biological bases of social and moral behavior can itself affect how we conceptualize ourselves as social and moral agents, and it foresees the importance of the interplay between our scientific conception of ourselves and our ethical views and theories. This interplay and its effects provide reason to view the neuroscience of ethics (or, more broadly, of sociality) as part of the domain of neuroethics.
Perhaps the most well-known and controversial example of such an interplay marks the beginning of this kind of exploration. In 2001, Joshua Greene scanned people while they made a series of moral and nonmoral decisions about different scenarios, including dilemmas modeled on the philosophical “Trolley Problem” (Thomson 1985). The trolley problem is an example of a moral dilemma: in one scenario, a trolley is careening down a track, headed for five people. If it hits them they will all be killed. You, an onlooker, could throw a switch to divert the trolley from the main track onto a side track, where there is only one person, who will be killed if you throw the switch. Should you do nothing and let the five be killed, or switch the trolley to the side track and kill the one to save the five? In a supposedly parallel scenario, the “footbridge” case, the trolley is headed for the five, but you are on a footbridge above the track with a heavy man, heavy enough to stop the trolley. Should you push the man off the footbridge into the path of the trolley, saving the five at the expense of the one? The puzzle of the Trolley Problem is why we seem to have different intuitions in these cases, since both involve saving five at the expense of one. When Greene scanned subjects faced with a series of such scenarios, he found systematic differences in the engagement of brain regions associated with emotional processing in “personal” (e.g., pushing) as opposed to “impersonal” (e.g., flipping a switch) moral dilemmas. He hypothesized that emotional interference was behind the differential reaction times in judgments of permissibility in the footbridge case. In later work, Greene proposed a dual-process model of moral judgment, in which relatively automatic emotion-based reactions and high-level cognitive control jointly determine responses to moral dilemmas, and he related his findings to philosophical moral theories (Greene et al. 2004, 2008). Most controversially, he suggested that there are reasons to be suspicious of our deontological judgments and interpreted his work as lending credence to utilitarian theories (Greene 2013). Greene’s work is thus a clear example of how neuroscience might affect our ethical theorizing. Claims regarding the import of neuroscience studies for philosophical questions have sparked heated debate in philosophy and beyond, prompting critiques and replies from scholars both within and outside the discipline (see, e.g., Berker 2009; Kahane et al. 2011; Christensen et al. 2014). One effect of these exchanges has been to highlight a problematic tendency for scientists and some philosophers to think they can draw normative conclusions from purely descriptive data; another has been to illuminate the ways in which descriptive data can itself masquerade as normative.
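Purely as an illustration of what “jointly determine” means in a dual-process account, the toy calculation below pits an automatic, emotion-like aversion (stronger when the harm is up close and personal) against a controlled cost-benefit term (lives saved minus lives lost). The function, weights, and numbers are invented for illustration and are not Greene’s actual model.

# A toy, invented rendering of the dual-process idea: an automatic
# aversive signal competes with a controlled outcome-based signal.
# Positive scores favor intervening; negative scores favor refraining.

def toy_dual_process(lives_saved, lives_lost, personal_force, control_weight=0.5):
    """Return a toy 'permissibility' score for intervening."""
    utilitarian = lives_saved - lives_lost                # controlled, outcome-based term
    emotional_aversion = 5.0 if personal_force else 0.5   # automatic, emotion-like term
    return control_weight * utilitarian - (1 - control_weight) * emotional_aversion

# Switch case (impersonal) vs. footbridge case (personal):
print("switch    :", toy_dual_process(5, 1, personal_force=False))   # positive: intervene
print("footbridge:", toy_dual_process(5, 1, personal_force=True))    # negative: refrain

With these invented numbers the same 5-versus-1 trade-off yields opposite verdicts in the two cases, which is the pattern of intuitions the dual-process model is meant to explain.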
Greene’s early studies demonstrated that neuroscience can be used in the service of examining extremely high-level behaviors and capacities, and have served as an inspiration for numerous other experiments investigating the neural basis of social and moral behavior and competences. Neuroscience has already turned its attention to phenomena such as altruism, empathy, well-being, and theory of mind, as well as to disorders such as autism and psychopathy (Sinnott-Armstrong 2007; Churchland 2012; Decety and Wheatley 2015; Zak 2013). The relevant works range from imaging studies using a variety of imaging techniques, to manipulation of hormones and neurochemicals, to purely behavioral studies. In addition, interest in moral and social neuroscience has collided synergistically with the growth of neuroeconomics, which has flourished in large part independently (Glimcher and Fehr 2013). A recent bibliography has collected almost 400 references to works in the neuroscience of ethics since 2002 (Darragh, Buniak, and Giordano 2015). We can safely assume that many more advances will be made in the years to come, and that neuroethicists will be called upon to advance, evaluate, expound upon, or deflate claims for the purported ethical implications of our new knowledge.