Arguments against enhancement


Opposition to enhancement can take the form of moral condemnation and/or legal prohibition or restriction, or other regulation. It is possible for these forms of opposition to come apart—for instance, to condemn cognitive enhancement on moral grounds, while permitting it legally (for example, for some of the reasons mentioned above). Below I discuss varieties of moral arguments offered against enhancement.
Harms: The simplest and most powerful argument against enhancement is the claim that brain interventions carry with them the risk of harm, risks that make the use of these interventions unacceptable. The low bar for acceptable risk is an effect of the context of enhancement: risks deemed reasonable to incur when treating a deficiency or disease with the potential benefit of restoring normal function may be deemed unreasonable when the payoff is simply augmenting performance above a normal baseline. Some suggest that no risk is justified for enhancement purposes (Heinz et al. 2012; Kass 2003a; Sandel 2004). In evaluating the strength of a harm-based argument against enhancement, several points should be considered: 1) What are the actual and potential harms and benefits (medical and social) of a given enhancement? 2) Who should make the judgments about appropriate tradeoffs? Individuals may judge differently at what point the risk/benefit threshold occurs, and their judgments may depend upon the precise natures of the risks and benefits. The distinction between moral condemnation and legal prohibition is relevant here as well, since legal strictures presuppose an answer to this latter question. Notice, too, the harm argument is toothless against enhancements that don’t pose any risks.
Unnaturalness: A number of thinkers argue, in one form or another, that use of drugs or technologies to enhance our capacities is unnatural, and the implication is that unnatural implies immoral (Kass 2003b; Maslen, Faulmüller, and Savulescu 2014a; DeGrazia 2005). Of course, to be a good argument, more reason has to be given both for why it is unnatural (see an argument for naturalness, above), and for why naturalness and morality align. Some arguments suggest that manipulating our cognitive machinery amounts to tinkering with “God-given” capacities, and usurping the role of God as creator can be easily understood as transgressive in a religious-moral framework. Despite the appeal of this framework to religious conservatives, a neuroethicist may want to offer a more ecumenical or naturalistic argument to support the link between unnatural and immoral, and will have to counter the claim, above, that it is natural for humans to enhance themselves.
Diminishing human agency: Another argument suggests that the effect of enhancement will be to diminish human agency by undermining the need for real effort and allowing for success via morally meaningless shortcuts. Human life will lose the value achieved through the process of striving for a goal and will be belittled as a result (see, e.g., Schermer 2008; Kass 2003b). Although this is a promising form of argument, more needs to be done to undergird the claim that effort is intrinsically valuable. After all, few think that we ought to abandon transportation by car for horses, walking, or bicycling because these require more effort and thus have more moral value.
The hubris objection: This interesting argument holds that the type of attitude that seems to underlie pursuit of such interventions is morally defective in some way, or is indicative of a morally defective character trait. So, for example, Michael Sandel suggests that the attitude underlying the attempt to enhance ourselves is a “Promethean” attitude of mastery that overlooks or underappreciates the “giftedness of human life”. It is the expression and indulgence of a problematic attitude of dominion toward life to which Sandel primarily objects:
The moral problem with enhancement lies less in the perfection it seeks than in the human disposition it expresses and promotes. (Sandel 2002)
Others have pushed back against this tack, arguing that the hubris objection against enhancement fundamentally misunderstands the concepts it relies upon (Kahane 2011).
Equality and Distributive Justice: One question that routinely arises with new technological advances is “who gets to benefit from them?” As with other technologies, neuroenhancements are not free. However, worries about access are compounded in the case of neuroenhancements (as they may also be with other learning technologies). As enhancements increase capacities of those who use them, they are likely to further widen the already unconscionable gap between the haves and have-nots: we can foresee that those already well-off enough to afford enhancements will use them to increase their competitive advantage against others, leaving further behind those who cannot afford them (Farah 2007; Greely et al. 2008; Academy of Medical Sciences 2012). One can imagine policy solutions to this, of course, such as having enhancements covered by health insurance, having the state distribute them to those who cannot afford them, etc. However, widespread availability of neuroenhancements will inevitably raise questions about coercion.
Coercion: The prospect of coercion is raised in several ways. Obviously, if the state decides to mandate an enhancement, treating its beneficial effects as a public health issue, this is effectively coercion. We see this currently in the backlash against vaccinations: they are mandated with the aim of promoting public health, but in some minds the mandate raises concerns about individual liberty. I would submit that the vaccination case demonstrates that at least on some occasions coercion is justified. The pertinent question is whether coercion could be justifiable for enhancement rather than for harm prevention. Although some coercive ideas, such as the suggestion that we put Prozac or other enhancers in the water supply, are unlikely to be taken seriously as policy recommendations (however, see Appel 2010), less blatant forms of coercion are more realistic threats. For example, if people immersed in tomorrow’s competitive environment are in the company of others who are reaping the benefits from cognitive enhancement, they may feel compelled to make use of the same techniques just to remain competitive, even though they would rather not use enhancements. The danger is that respecting the autonomy of some may put pressure on the autonomy of others (Tannenbaum 2014; Maslen, Faulmüller, and Savulescu 2014a; Farah 2007).
There is unlikely to be any categorical resolution of the ethics of enhancement debate. The details of a technology will be relevant to determining whether a technology ought to be made available for enhancement purposes: we ought to treat a highly enhancing technology that causes no harm differently from one that provides some benefit at noticeable cost. Moreover, the magnitude of some of the equality-related issues will depend upon empirical facts about the technologies. Are neurotechnologies equally effective for everyone? For example, there is evidence that some known enhancers such as the psychostimulants are more effective for those with deficiencies than for the unimpaired: studies suggest the beneficial effects of these drugs are proportional to the degree to which a capacity is impaired (Husain & Mehta 2011). Other reports claim that normal subjects’ capacities are not actually enhanced by these drugs, and some aspects of functioning may actually be impaired (Mattay, et al. 2000; Ilieva et al. 2013). If this is a widespread pattern, it may alleviate some worries about distributive justice and contributions to social and economic stratification, since people with a deficit will benefit proportionately more than those using the drug for enhancement purposes. (However, biology is rarely that equitable, and it would be surprising if this leveling pattern turned out to be the norm). Since the technologies that could provide enhancements are extremely diverse, ranging from drugs to implants to genetic manipulations, assessment of the risks and benefits and the way in which these technologies bear upon our conception of humanity will have to be empirically grounded.

3. Cognitive liberty

Freedom is a cornerstone value in liberal democracies and one of the most cherished kinds of freedom is freedom of thought. The main elements of freedom of thought, or “cognitive liberty” as it is sometimes called (Sententia 2013), include privacy and autonomy. Both of these can be challenged by the new developments in neuroscience. The value of, potential threat to, and ways to protect these aspects of freedom are a concern for neuroethics.

3.1 Privacy

As the framers of the U.S. Constitution were well aware, freedom is intimately linked with privacy: even being monitored is considered potentially “chilling” to the kinds of freedoms our society aims to protect. One type of freedom that has been championed in American jurisprudence is “the right to be let alone” (Warren and Brandeis 1890), to be free from government or other intrusion in our private lives.
In the past, mental privacy could be taken for granted: the first person accessibility of the contents of consciousness ensured that the contents of one’s mind remained hidden to the outside world, until and unless they were voluntarily disclosed. Instead, the battles for freedom of thought were waged at the borders where thought meets the outside world—in expression—and were won with the First Amendment’s protections for those freedoms. Over the last half century, technological advances have eroded or impinged upon many traditional realms of worldly privacy. Most of the avenues for expression can be (and increasingly are) monitored by third parties. It is tempting to think that the inner sanctum of the mind remains the last bastion of real privacy.
This may still be largely true, but the privacy of the mind can no longer be taken for granted. Our neuroscientific achievements have already made significant headway in allowing others to discern some aspects of our mental content through neurotechnologies. Noninvasive methods of brain imaging have revolutionized the study of human cognition and have dramatically altered the kinds of knowledge we can acquire about people and their minds. The threat to mental privacy is not as simple as the naive claim that neuroimaging can read our thoughts, nor are the capabilities of imaging so innocuous and blunt that we needn’t worry about that possibility. A focus of neuroethics is to determine the real nature of the threat to mental privacy and to evaluate its ethical implications, many of which are relevant to legal, medical, and other social issues. Doing so effectively will require both a solid understanding of the neuroscientific technologies and the neural bases of thought, and a sensitivity to the ethical problems raised by our growing knowledge and ever-more-powerful neurotechnologies. These dual necessities illustrate why neuroethicists must be trained both in neuroscience and in ethics. In what follows, I briefly discuss the most relevant neurotechnology and its limitations and then canvass a few ways in which privacy may be infringed by it.

3.1.1 An illustration: potential threats to privacy with functional MRI

One of the most prominent neurotechnologies poised to pose a threat to privacy is Magnetic Resonance Imaging, or MRI. MRI can provide both structural and functional information about a person’s brain with minimal risk and inconvenience. In general, MRI is a tool that allows researchers noninvasively to examine or monitor brain structure and activity and to correlate that structure or function with behavior. Structural or anatomical MRI provides high-resolution structural images of the brain. While structural imaging in the biosciences is not new, MRI provides much higher resolution and better ability to differentiate tissues than prior techniques such as X-rays or CT scans.
However, it is not structural but functional MRI (fMRI) that has revolutionized the study of human cognition. fMRI provides information about correlates of neuronal activity, from which neural activity can be inferred. Recent advances in analysis methods for neuroimaging data such as multi-voxel pattern analysis now allow relatively fine-grained “decoding” of brain activity (Haynes and Rees 2005; Norman et al. 2006). Decoding involves using a machine-learning algorithm to compare an observed pattern of brain activation with a database of brain activity patterns. The database is composed of experimentally established correlations between brain activity patterns and a functional variable of interest, such as a task, behavior, or mental content. The closest match allows one to (defeasibly) attribute the associated functional variable to the person being scanned. The kind of information provided by functional imaging promises to provide important evidence useful for three goals: Decoding mental content, diagnosis of mental dysfunction, and prediction of behavior/character/dysfunction. Neuroethical questions arise in all these areas.
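To make the decoding step concrete, here is a minimal sketch in Python using scikit-learn on synthetic data; the labels, signal strength, and classifier are illustrative assumptions standing in for real fMRI recordings and a published pipeline. A classifier is trained on pattern/label pairs (the “database” of established correlations), and its cross-validated accuracy measures how reliably content can be attributed to new patterns.

    # Toy MVPA-style decoding on synthetic "voxel patterns" (all values invented).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 500

    # Hypothetical content labels: 0 = subject viewed a face, 1 = a house.
    labels = rng.integers(0, 2, size=n_trials)

    # Simulated patterns: Gaussian noise plus a weak label-dependent signal.
    signature = rng.normal(0.0, 0.5, size=n_voxels)
    patterns = rng.normal(0.0, 1.0, size=(n_trials, n_voxels)) + np.outer(labels, signature)

    # Cross-validated decoding accuracy: how well patterns predict content.
    scores = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

Real MVPA pipelines add preprocessing, voxel selection, and within-subject cross-validation, but the closest-match logic described above corresponds to the classifier’s decision rule.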
Before discussing these issues, it is important to remember that neuroimaging is a technology that is subject to a number of significant limitations, and these technical issues limit how precise the inferences can be. For example:

  • The correlations between the fMRI signal and neural activity are rough: the signal is delayed in time relative to the neuronal activity and spatially smeared, limiting the spatial and temporal precision of the information that can be inferred (see the toy simulation after this list).

  • A number of dynamic factors relate the fMRI signal to activity, and the precise underlying model is not yet well-understood.

  • The signal-to-noise ratio is relatively low, necessitating averaging across trials and often across people.

  • Individual brains differ both in structure and in function. This variability makes it difficult to determine when differences are clinically or scientifically relevant, and it leads to noisy data. Because of natural individual variability in structure and function and because of brain plasticity (especially during development), even large differences in structure or deviations from the norm may not be indicative of any functional deficiency. Cognitive strategies can also affect variability in the data. These sources of variability can complicate the analysis of data and provide even more leeway for differences to exist without dysfunction.

  • Activity in a brain area does not imply that the region is necessary for performance of the task.

  • fMRI is so sensitive to motion that it is virtually impossible to get usable data from a noncompliant subject. This makes the prospect of reading content from an unwilling mind remote.


For more information about limitations and capabilities of fMRI see (Jones et al. 2009; Morse and Roskies 2013).
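The first limitation listed above can be illustrated with a toy simulation: to a first approximation, the measured BOLD signal is the neural event train convolved with a slow hemodynamic response function (HRF), so brief events a couple of seconds apart are delayed and smeared into a largely merged response. The gamma-shaped HRF below is a common textbook stand-in, not a calibrated model.

    # Toy illustration of hemodynamic delay and temporal smearing.
    import numpy as np

    dt = 0.1                      # simulation resolution, in seconds
    t = np.arange(0, 30, dt)

    # Simple gamma-shaped stand-in for the canonical HRF (peaks near 5 s).
    hrf = t ** 8.6 * np.exp(-t / 0.547)
    hrf /= hrf.max()

    # Two brief neural events, two seconds apart.
    events = np.zeros_like(t)
    events[int(5 / dt)] = 1.0
    events[int(7 / dt)] = 1.0

    bold = np.convolve(events, hrf)[: len(t)]
    print("neural events at t = 5.0 s and t = 7.0 s")
    print(f"BOLD maximum at t = {t[np.argmax(bold)]:.1f} s (delayed, merged)")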
Without appreciating these technical issues and the resulting limits on what can legitimately be inferred from fMRI, one is likely to overestimate or mischaracterize the potential threat that it poses. In fact, much of the fear of mindreading expressed in non-scientific publications stems from a lack of understanding of the science. For example, there is no scientific basis for the worry that imaging would enable the reading of mental content without our knowing it. NIRS (near-infrared spectroscopy) is the only imaging method that could theoretically be used at a distance and thus employed without the subject’s knowledge, but the kind of information it provides is very crude and unsuitable for decoding mental content. Thus, fears that the government is able to remotely or covertly monitor the thoughts of citizens are unfounded.

3.1.2 Decoding of mental content

Noninvasive ways of inferring neural activity have led many to worry that mindreading is possible, not just in theory, but even now. Coupled with decoding techniques, fMRI can be used, for example, to reconstruct a visual stimulus from activity of the visual cortex while a subject is looking at a scene or to determine whether a subject is looking at a familiar face or hearing a particular sound. If mental content supervenes on the physical structure and function of our brains, as most philosophers and neuroscientists think it does, then in principle it should be possible to read minds by reading brains. Because of the potential to identify mental content, decoding raises issues about mental privacy.
Despite the remarkable advances in brain imaging technology, however, when it comes to mental content, our current abilities to “mind-read” are relatively limited (Roskies 2015a). Although some aspects of content can be decoded from neural data, these tend to be quite general and nonpropositional in character. The ability to infer semantic meaning from ideation or visual stimulation tends to work best when the realm of possible contents is quite constrained. Our current abilities allow us to infer some semantic atoms, such as representations denoting one of a prespecified set of concrete objects, but not unconstrained content or entire propositions. Of course, future advances might make worries about mindreading more pressing. For example, if we develop means for understanding how simple mental representations combine to yield complex combinations, and can decode meaning from these complexes, we may one day be able to decode propositional thought.
Still, some worries are warranted. Even if neuroimaging is not at the stage where mindreading is possible, it can nonetheless threaten aspects of privacy in ways that should give us pause. Even now, neuroimaging provides some insights into attributes of people that they may not want known or disclosed. In some cases, subjects may not even know that these attributes are being probed, thinking they are being scanned for other purposes. A willing subject may not want certain things to be monitored. In what follows, I consider a few of these more realistic worries.
Implicit bias: Although explicitly acknowledged racial biases are declining, this may be due to a reporting bias attributable to the increased negative social valuation of racial prejudice. Much contemporary research now focuses on examining implicit racial biases, which are automatic or unconscious reflections of racial bias. With fMRI and EEG, it is possible to interrogate implicit biases, sometimes without the subject’s awareness that that is what is being measured (Chekroud et al. 2014; Richeson et al. 2003; Luo et al. 2006). (These can also be measured behaviorally, through tests like the IAT (Implicit Association Test), so the worry is not solely a neuroimaging worry.) While there is disagreement about how best to interpret implicit bias results (e.g., as a measure of perceived threat, as in-group/out-group distinctions, etc.) and what relevance they have for behavior, the possibility that implicit biases can be measured, either covertly or overtly, raises scientific and ethical questions (see implicit.harvard.edu). When ought this information to be collected? What procedures must be followed for subjects legitimately to consent to implicit measures? What significance should be attributed to evidence of biases? What kind of responsibility should be attributed to people who hold them? What predictive power might they hold? Should they be used for practical purposes? One can imagine obvious but controversial potential uses for implicit bias measures in legal situations, in employment contexts, in education, and in policing, all areas in which concerns of social justice are significant. See also the entry on implicit bias.
Lie detection: Several neurotechnologies are being used to detect deception or neural correlates of lying or concealing information in experimental situations. For example, both fMRI measures looking for neural correlates of deception and EEG analysis techniques relying on the P300 signal in versions of the GKT (or Guilty Knowledge Test) have been used in the laboratory to detect deception with varying levels of success. These methods are subject to a variety of criticisms (Farah et al. 2014; National Research Council 2003). For example, almost all experimental studies fail to study real lying or deception, but instead investigate some version of instructed misdirection. The context, tasks, and motivations differ greatly between actual instances of lying and these experimental analogs, calling into question whether these laboratory tests are relevant in real-world situations. Few studies address the tests’ efficacy in the face of countermeasures. Moreover, accuracy, though significantly higher than chance, is far from perfect, and because of the inability to determine base rates of lying, error rates cannot be effectively assessed. Thus, we cannot establish their reliability for real-world uses (Roskies 2015a). Despite these limitations, several companies have marketed neurotechnologies for this purpose (see, e.g., No Lie MRI, Brainwave Science, Cephos—though Cephos no longer markets neuroimaging techniques for lie detection).
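The statistical logic behind a P300-based GKT can be sketched as follows, with invented amplitudes in place of real EEG epochs (published protocols typically use within-subject bootstrapping rather than a simple t-test): a subject who recognizes the crime-relevant “probe” item tends to show a larger P300 to it than to equally plausible irrelevant items.

    # Toy GKT inference from simulated P300 peak amplitudes (microvolts).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    probe = rng.normal(loc=8.0, scale=3.0, size=30)       # crime-relevant item
    irrelevant = rng.normal(loc=5.0, scale=3.0, size=30)  # matched neutral items

    # One-sided test: is the probe response reliably larger than the irrelevants?
    result = stats.ttest_ind(probe, irrelevant, alternative="greater")
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
    print("concealed knowledge inferred" if result.pvalue < 0.05 else "no inference")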
Character traits: Neurotechnologies have shown some promise in identifying or predicting aspects of personality or character. In an interesting study aimed at determining how well neuroimaging could detect lies, Greene and colleagues gave subjects in the fMRI scanner a prediction task in a game of chance that they could easily cheat on. Using statistical analysis, the researchers could identify a group of subjects who clearly cheated and others who did not (Greene and Paxton 2009). Although they could not determine with neuroimaging on which trials subjects cheated, there were overall differences in brain activation patterns between cheaters and those who played fair and were at chance in their predictions. Moreover, Greene and colleagues repeated this study at several months’ remove and found that the character trait of honesty or dishonesty was stable over time: cheaters the first time were likely to cheat again (indeed, they cheated even more the second time), and honest players remained honest the second time around. Also interesting was the fact that the brain patterns suggested that cheaters had to activate their executive control systems more than noncheaters, not only when they cheated but also when deciding not to cheat. While the differential activations cannot be linked specifically to the propensity to cheat rather than to the act of cheating, the work suggests that these task-related activation patterns may reflect correlates of trustworthiness.
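The statistical identification of “clear cheaters” can be illustrated with a simple binomial argument; the design is simplified here, and the trial counts and thresholds are invented. On opportunity-to-cheat trials, subjects self-reported whether their private predictions of a chance outcome were correct, so honest accuracy should hover near chance, and a self-reported accuracy far above 50% is binomially implausible without cheating.

    # Toy version of identifying improbably "lucky" subjects.
    from scipy.stats import binomtest

    n_trials = 100  # hypothetical number of self-reported prediction trials

    for n_correct in (54, 68, 85):
        # Probability of reporting at least this many wins by honest guessing.
        p = binomtest(n_correct, n_trials, p=0.5, alternative="greater").pvalue
        verdict = "implausibly lucky" if p < 0.001 else "consistent with honest play"
        print(f"{n_correct}/{n_trials} wins: p = {p:.1e} ({verdict})")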
The prospect of using methods for detecting these sorts of traits or behaviors in real-world situations raises a host of thorny issues. What level of reliability should be required for their employment? In what circumstances ought they to be admissible as evidence in the courtroom? For other purposes? Using lie detection or decoding techniques from neuroscience in legal contexts may raise constitutional concerns in the U.S.: Is brain imaging a search or seizure as protected by the 4th Amendment (Farahany 2012a)? Would its forcible use be precluded by 5th Amendment rights (Farahany 2012b)? These questions, though troubling, might not be immediately pressing: in a recent case (United States v. Semrau 2012) the court ruled that fMRI lie detection is inadmissible, given its current state of development. However, the opinion left open the possibility that it may be admissible in the future, if methods improve. Finally, to the extent that relevant activation patterns may be found to correlate significantly with activation patterns on other tasks, or with a task-free measure such as default-network activity, this raises the possibility that information about character could be inferred merely by scanning subjects doing something innocuous, without their knowledge of the kind of information being sought. Thus, there are multiple dimensions to the threat to privacy posed by imaging techniques.

3.1.3 Diagnosis

Increasingly, neuroimaging information can bear upon diagnoses for diseases, and in some instances may provide predictive information prior to the onset of symptoms. Work on the default network is promising for improving diagnosis of certain diseases without requiring that subjects perform specific tasks in the scanner (Buckner, Andrews-Hanna, and Schacter 2008). For some diseases, such as Alzheimer’s disease, MRI promises to provide diagnostic information that previously could only be established at autopsy. fMRI signatures have also been linked to a variety of psychiatric diseases, although not yet with the reliability required for clinical diagnosis. Neuroethical issues also arise regarding ways to handle incidental findings, that is, evidence of asymptomatic tumors or potentially benign abnormalities that appear in the course of scanning research subjects for non-medical purposes (Illes et al. 2006; Illes and Sahakian 2011). Because of the popularity of fMRI for basic research in cognitive neuroscience, this common issue in medical ethics has become significant for non-medical researchers.
The ability to predict future functional deficits through neuroimaging raises a host of issues, many of which have been previously addressed by genethics (the ethics of genetics), since both provide information about future disease risk. What may be different is that the diseases for which neurotechnologies are diagnostically useful are uniformly those that affect the brain, and thus potentially mental competence, mood, personality, or sense of self. As such they may raise peculiarly neuroethical questions (see next section).

3.1.4 Prediction

As discussed above, decoding methods allow one to associate observed brain activity with previously observed brain/behavior correlations. Such methods can also be used to predict future behaviors, insofar as those behaviors are correlated with observed brain activity patterns. Some studies have already reported predictive power over upcoming decisions (Soon et al. 2008). Increasingly, we will see neuroscience or neuroimaging data that give us some predictive power over longer-range future behaviors. For example, brain imaging may allow us to predict the onset of psychiatric symptoms such as psychotic or depressive episodes (Singh, Sinnott-Armstrong, and Savulescu 2013; Arbabshirani et al. 2013; Fryer et al. 2013). In cases in which the predicted behavior is indicative of mental dysfunction, such predictions raise questions about stigma but may also allow more effective interventions.
One confusion regarding neuroprediction should be clarified immediately: when neuroimages are said to “predict” future behavior, this means they provide some statistical information regarding its likelihood. Prediction in this sense does not imply that the predicted behavior will necessarily come to pass; it does not mean a person’s future is fated or determined. Although scientists occasionally make this mistake when discussing their results, the fact that brain function or structure may give us some information about future behaviors should not be interpreted as a strong challenge to free will. The prevalence of this mistake among both philosophers and scientists again illustrates the importance for neuroethicists of sophistication in both neuroscience and philosophy.
Perhaps the most consequential and most ethically difficult potential use of predictive information is in the criminal justice system. For example, there is evidence that structural brain differences are predictive of scores on the PCL-R, a tool developed to diagnose psychopathy (Hare 1991; Hart and Hare 1997). It is also well established that psychopaths have high rates of recidivism for violent offenses. Thus, in principle, neuroimaging could be used to provide information about an individual’s likelihood of recidivism. Indeed, a recent study has shown that brain scans have predictive value for recidivism, controlling for other risk factors (Aharoni et al. 2013). Should such data be admissible for determining sentences or parole decisions? Would that be equivalent to punishing someone for crimes they have not committed? Or is it just a neutral extension of current uses of actuarial information, such as age, gender, and income level? At an extreme, one could imagine using predictive information to detain people who have not yet committed a crime, arresting them before they do. This dystopian scenario, portrayed in the film Minority Report (Spielberg 2002), also illustrates how our abilities to predict can raise difficult ethical and policy questions when they collide with our intuitions about, and the value we place on, free will and autonomy. More generally, work in neuroethics could be of significant practical use for the law, and indeed such work is often called by another moniker, “neurolaw”.
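What “predictive value, controlling for other risk factors” amounts to can be sketched with a regression on synthetic data; the predictors, effect sizes, and the brain measure below are invented stand-ins, not the variables of Aharoni et al. A logistic model of recidivism includes both actuarial variables and a brain measure, and a reliable nonzero coefficient on the brain measure indicates added predictive value. Note that such a model outputs probabilities, not verdicts, in line with the point about prediction above.

    # Toy logistic regression: brain measure alongside actuarial predictors.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 500

    age = rng.normal(35, 10, size=n)
    priors = rng.poisson(2, size=n)        # prior offenses (actuarial factor)
    brain = rng.normal(0, 1, size=n)       # hypothetical task-activation score

    # Simulated ground truth: all three predictors influence recidivism risk.
    logit = -1.0 - 0.05 * (age - 35) + 0.3 * priors - 0.5 * brain
    recidivated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([age, priors, brain]))
    fit = sm.Logit(recidivated, X).fit(disp=0)
    print(fit.summary(xname=["const", "age", "priors", "brain"]))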
 
