Moral responsibility

In philosophy, moral responsibility is the status of morally deserving praise, blame, reward, or punishment for an act or omission, in accordance with one's moral obligations.[1][2] Deciding what (if anything) counts as "morally obligatory" is a principal concern of ethics.

Philosophers refer to people who have moral responsibility for an action as moral agents. Agents have the capability to reflect upon their situation, to form intentions about how they will act, and then to carry out that action. The notion of free will has become an important issue in the debate on whether individuals are ever morally responsible for their actions and, if so, in what sense. Incompatibilists regard determinism as at odds with free will, whereas compatibilists think the two can coexist.

Moral responsibility does not necessarily equate to legal responsibility. A person is legally responsible for an event when a legal system is liable to penalise that person for that event. Although it may often be the case that when a person is morally responsible for an act, they are also legally responsible for it, the two states do not always coincide.

Philosophical stance

Philosophical positions on moral responsibility differ chiefly over determinism and free will: depending on how a philosopher conceives of free will, they will have different views on moral responsibility.[3]

Incompatibilism

Metaphysical libertarianism

Metaphysical libertarians think actions are not always causally determined, allowing for the possibility of free will and thus moral responsibility. All libertarians are also incompatibilists; they think that if causal determinism were true of human action, people would not have free will. Accordingly, libertarians subscribe to the principle of alternate possibilities, which posits that moral responsibility requires that people could have acted differently.[4]

Phenomenological considerations are sometimes invoked by incompatibilists to defend a libertarian position. In daily life, we feel as though choosing otherwise is a viable option. Although this feeling does not firmly establish the existence of free will, some incompatibilists claim the phenomenological feeling of alternate possibilities is a prerequisite for free will.[5]

Jean-Paul Sartre suggested that people sometimes avoid incrimination and responsibility by hiding behind determinism: "...we are always ready to take refuge in a belief in determinism if this freedom weighs upon us or if we need an excuse".[6]

A similar view has it that individual moral culpability lies in individual character. That is, a person with the character of a murderer has no choice other than to murder, but can still be punished because it is right to punish those of bad character. How one's character was determined is irrelevant from this perspective. Robert Cummins, for example, argues that people should not be judged for their individual actions, but rather for how those actions "reflect on their character". If character (however defined) is the dominant causal factor in determining one's choices, and one's choices are morally wrong, then one should be held accountable for those choices, regardless of genes and other such factors.[7][8]

In law, there is a known exception to the assumption that moral culpability lies in either individual character or freely willed acts. The insanity defense—or its corollary, diminished responsibility (a sort of appeal to the fallacy of the single cause)—can be used to argue that the guilty deed was not the product of a guilty mind.[9] In such cases, the legal systems of most Western societies assume that the person is in some way not at fault, because his actions were a consequence of abnormal brain function (implying brain function is a deterministic causal agent of mind and motive).

The argument from luck

The argument from luck is a criticism of the libertarian conception of moral responsibility. It suggests that any given action, and even a person's character, is the result of various forces outside that person's control. It may not be reasonable, then, to hold that person solely morally responsible.[10] Thomas Nagel suggests that four different types of luck (including genetic influences and other external factors) end up influencing the way that a person's actions are evaluated morally. For instance, a person driving drunk may make it home without incident, yet the same act of drunk driving might seem more morally objectionable if someone happens to jaywalk along his path and is hit by the car.[11]

This argument can be traced back to David Hume. If physical indeterminism is true, then those events that are not determined are scientifically described as probabilistic or random. It is therefore argued that it is doubtful that one can praise or blame someone for performing an action generated randomly by his nervous system (without there being any non-physical agency responsible for the observed probabilistic outcome).[12]

Hard determinism

Hard determinists (not to be confused with fatalists) often appeal to liberty in practical moral considerations rather than to a notion of free will. Indeed, faced with the possibility that determinism requires a completely different moral system, some proponents say "So much the worse for free will!".[13] Clarence Darrow, the famous defense attorney, argued against the execution of his clients, Leopold and Loeb, by invoking such a notion of hard determinism.[14] During his summation, he declared:

What has this boy to do with it? He was not his own father; he was not his own mother; he was not his own grandparents. All of this was handed to him. He did not surround himself with governesses and wealth. He did not make himself. And yet he is to be compelled to pay.[14]

Paul the Apostle, in his Epistle to the Romans, addresses the question of moral responsibility as follows: "Hath not the potter power over the clay, of the same lump to make one vessel unto honour, and another unto dishonour?"[15] In this view, individuals can still be dishonoured for their acts even though those acts were ultimately completely determined by God.

Joshua Greene and Jonathan Cohen, researchers in the emerging field of neuroethics, argue, on the basis of such cases, that our current notion of moral responsibility is founded on libertarian (and dualist) intuitions.[16] They argue that cognitive neuroscience research (e.g. neuroscience of free will) is undermining these intuitions by showing that the brain is responsible for our actions, not only in cases of florid psychosis, but also in less obvious situations. For example, damage to the frontal lobe reduces the ability to weigh uncertain risks and make prudent decisions, and therefore leads to an increased likelihood that someone will commit a violent crime.[17] This is true not only of patients with damage to the frontal lobe due to accident or stroke, but also of adolescents, who show reduced frontal lobe activity compared to adults,[18] and even of children who are chronically neglected or mistreated.[19] In each case, the guilty party can, they argue, be said to have less responsibility for his actions.[16] Greene and Cohen predict that, as such examples become more common and well known, jurors’ interpretations of free will and moral responsibility will move away from the intuitive libertarian notion that currently underpins them.

David Eagleman explains that nature and nurture cause all criminal behavior. He likewise believes that science demands that change and improvement, rather than guilt, must become the focus of the legal justice system.[20]

Greene and Cohen also argue that the legal system does not require this libertarian interpretation. Rather, they suggest that only retributive notions of justice, in which the goal of the legal system is to punish people for misdeeds, require the libertarian intuition. Many forms of ethically realistic and consequentialist approaches to justice, which are aimed at promoting future welfare rather than retribution, can survive even a hard determinist interpretation of free will. Accordingly, the legal system and notions of justice can be maintained even in the face of emerging neuroscientific evidence undermining libertarian intuitions of free will.

Neuroscientist David Eagleman maintains similar ideas. Eagleman says that the legal justice system ought to become more forward-looking. He says it is wrong to ask narrow questions of culpability rather than to focus on what is important: what needs to change in a criminal's behavior and brain. Eagleman is not saying that no one is responsible for their crimes, but rather that the "sentencing phase" should correspond with modern neuroscientific evidence. To Eagleman, it is damaging to entertain the illusion that a person can make a single decision that is somehow, suddenly, independent of their physiology and history. He describes what scientists have learned from brain-damaged patients, and offers the case of a school teacher who exhibited escalating pedophilic tendencies on two occasions, each time as the result of a growing tumor.[21] Eagleman also warns that less attractive people and minorities tend to receive longer sentences, all of which he sees as symptoms that more science is needed in the legal system.[20]

Hard incompatibilism

Derk Pereboom defends a position he calls hard incompatibilism, according to which we cannot have free will (and thus cannot be morally responsible) whether determinism or indeterminism is true, supposing only events cause our actions.[22] While Pereboom acknowledges that agent causation is still a possibility, he regards it as unlikely against the backdrop of the most defensible physical theories. Without agent causation, Pereboom thinks the freedom required for moral responsibility is not in the offing.[23]

Compatibilism

Some forms of compatibilism suggest that the term "free will" should be used to mean only something more like liberty.

Compatibilists contend that even if determinism were true, it would still be possible for us to have free will. The Hindu text The Bhagavad Gita offers one very early compatibilist account. Facing the prospect of going to battle against kinsmen to whom he has bonds, Arjuna despairs. Krishna attempts to assuage Arjuna's anxieties. He argues that forces of nature come together to produce actions, and it is only vanity that causes us to regard ourselves as the agent in charge of these actions. However, Krishna adds this caveat: "... [But] the Man who knows the relation between the forces of Nature and actions, witnesses how some forces of Nature work upon other forces of Nature, and becomes [not] their slave..." When we are ignorant of the relationship between forces of Nature, we become passive victims of nomological facts. Krishna's admonition is intended to get Arjuna to perform his duty (i.e., fight in the battle), but he is also claiming that being a successful moral agent requires being mindful of the wider circumstances in which one finds oneself.[24] Paramahansa Yogananda also said, "Freedom means the power to act by soul guidance, not by the compulsions of desires and habits. Obeying the ego leads to bondage; obeying the soul brings liberation."[25]

In the Western tradition, Baruch Spinoza echoes the Bhagavad Gita's point about agents and natural forces, writing "men think themselves free because they are conscious of their volitions and their appetite, and do not think, even in their dreams, of the causes by which they are disposed to wanting and willing, because they are ignorant [of those causes]."[23] Krishna is hostile to the influence of passions on our rational faculties, speaking up instead for the value of heeding the dictates of one's own nature: "Even a wise man acts under the impulse of his nature. Of what use is restraint?"[24] Spinoza similarly identifies the taming of one's passions as a way to extricate oneself from merely being passive in the face of external forces and a way toward following our own natures.[26]

Other views

Daniel Dennett asks why anyone would care about whether someone had the property of responsibility and speculates that the idea of moral responsibility may be "a purely metaphysical hankering".[27]

Experimental research

Mauro suggests that a sense of personal responsibility does not operate or evolve universally among humankind. He argues that it was absent in the successful civilization of the Iroquois.[28]

In recent years, research in experimental philosophy has explored whether people's untutored intuitions about determinism and moral responsibility are compatibilist or incompatibilist.[29] Some experimental work has included cross-cultural studies.[30] However, the findings have not come out overwhelmingly in favor of one view or the other; there is evidence for both. For instance, when people are presented with abstract cases that ask whether a person could be morally responsible for an immoral act when they could not have done otherwise, people tend to say no, giving incompatibilist answers. When presented with a specific immoral act that a specific person committed, people tend to say that that person is morally responsible for their actions, even if those actions were determined (that is, people also give compatibilist answers).[31]

Experiments in the neuroscience of free will might also shed light on these questions.

Collective

When people attribute moral responsibility, they usually attribute it to individual moral agents.[32] However, Joel Feinberg, among others, has argued that corporations and other groups of people can have what is called ‘collective moral responsibility’ for a state of affairs.[33] For example, when South Africa had an apartheid regime, the country's government might have been said to have had collective moral responsibility for the violation of the rights of non-European South Africans.

Lack of a sense of responsibility in psychopaths

One of the attributes defined for psychopathy is "failure to accept responsibility for own actions".[34]

Artificial systems

The emergence of automation, robotics and related technologies prompted the question, 'Can an artificial system be morally responsible?'[35][36][37] The question has a closely related variant, 'When (if ever) does moral responsibility transfer from its human creator(s) to the system?'.[38][39]

These questions are closely related to, but distinct from, machine ethics, which is concerned with the moral behavior of artificial systems. Whether an artificial system's behavior qualifies it to be morally responsible has been a key focus of debate.

Arguments that artificial systems cannot be morally responsible

Batya Friedman and Peter Kahn Jr. posited that intentionality is a necessary condition for moral responsibility, and that computer systems, as conceivable in 1992 in material and structure, could not have intentionality.[40]

Arthur Kuflik asserted that humans must bear the ultimate moral responsibility for a computer's decisions, as it is humans who design the computers and write their programs. He further proposed that humans can never relinquish oversight of computers.[39]

Frances Grodzinsky et al. considered artificial systems that could be modelled as finite state machines. They posited that if the machine had a fixed state transition table, then it could not be morally responsible. If the machine could modify its table, then the machine's designer still retained some moral responsibility.[38]
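
The distinction can be illustrated with a minimal Python sketch (hypothetical; the class and method names below are not taken from Grodzinsky et al.): one finite state machine whose transition table is supplied entirely by its designer, and one that can rewrite entries of that table while it runs.

    # Illustrative sketch only: names are hypothetical, not from the cited paper.

    class FixedFSM:
        """A machine whose transition table is supplied entirely by its
        designer and never changes; on this view it cannot be morally
        responsible for what it does."""

        def __init__(self, table, start):
            self.table = dict(table)  # (state, input) -> next state
            self.state = start

        def step(self, symbol):
            self.state = self.table[(self.state, symbol)]
            return self.state

    class SelfModifyingFSM(FixedFSM):
        """A machine that can rewrite its own transition table at run time;
        even so, the designer who granted this ability is said to retain
        some moral responsibility for the resulting behaviour."""

        def rewrite(self, state, symbol, new_state):
            # The machine alters the rules governing its future behaviour.
            self.table[(state, symbol)] = new_state

    # Example: the self-modifying machine overrides a designer-supplied rule.
    table = {("idle", "go"): "moving", ("moving", "stop"): "idle"}
    m = SelfModifyingFSM(table, start="idle")
    m.step("go")                           # idle -> moving (designer's rule)
    m.rewrite("moving", "stop", "moving")  # machine changes its own rule
    m.step("stop")                         # "stop" now leaves it moving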

Patrick Hew argued that for an artificial system to be morally responsible, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. He further argued that such systems would be a substantial departure from the technologies and theory extant in 2014; an artificial system based on those technologies will carry zero responsibility for its behaviour, and moral responsibility is apportioned to the humans who created and programmed the system.[41]

(A more extensive review of these arguments may be found in Hew (2014).[41])

Arguments that artificial systems can be morally responsible

Colin Allen et al. proposed that an artificial system may be morally responsible if its behaviour is functionally indistinguishable from that of a moral person, coining the idea of a 'Moral Turing Test'.[35] They subsequently disavowed the Moral Turing Test in recognition of controversies surrounding the Turing Test.[36]

Andreas Matthias described a 'responsibility gap', where holding humans responsible for a machine would be an injustice, but holding the machine responsible would challenge 'traditional' ways of ascription. He proposed three conditions under which a machine's behaviour ought to be attributed to the machine and not to its designers or operators. First, modern machines are inherently unpredictable (to some degree), yet they perform tasks that need to be performed and that cannot be handled by simpler means. Second, there are increasing 'layers of obscurity' between manufacturers and the system, as hand-coded programs are replaced with more sophisticated means. Third, such systems have rules of operation that can be changed during the operation of the machine.[42]

(A more extensive review of these arguments may be found in Hew (2014).[41])

References

  1. Klein, Martha (2005). "responsibility". In Honderich, Ted. Oxford Companion to Philosophy. The term 'moral responsibility' covers (i) the having of a moral obligation and (ii) the fulfilment of the criteria for deserving blame or praise (punishment or reward) for a morally significant act or omission.
  2. Eshleman, Andrew (2009). "moral responsibility". In Zalta, Edward N. Stanford Encyclopedia of Philosophy. Many have held that one distinct feature of persons is their status [emphasis added] as morally responsible agents
  3. Peter Cane (2002). Responsibility in Law and Morality. Hart Publishing. p. 4. ISBN 1841133213. A common argument in the philosophical literature is that the essence of responsibility is to be found in what it means to be a human agent and to have free will...There is disagreement amongst philosophers about what freedom means, about whether human beings are free in the relevant sense, and about the relevance of freedom to responsibility... Nevertheless,...our responsibility practices have developed, and thrive, independently of ‘the truth’ about human freedom.
  4. Woolfolk, Robert L., Doris, John M., and Darley, John M. (2008). "Identification, Situational Constraint, and Social Cognition: Studies in the Attribution of Moral Responsibility". In Knobe, Joshua, and Nichols, Shaun. Experimental Philosophy. New York: Oxford University Press. pp. 61–80. ISBN 978-0-19-532326-9.
  5. Nahmias, Eddy, Morris, Stephen G., Nadelhoffer, Thomas N., and Turner, Jason (2008). "Is Incompatibilism Intuitive?". In Knobe, Joshua, and Nichols, Shaun. Experimental Philosophy. New York: Oxford University Press. pp. 81–104. ISBN 978-0-19-532326-9.
  6. Sartre, J.P. (1943). Being and Nothingness, reprint 1993. New York: Washington Square Press.
  7. Vuoso, G. (1987) "Background Responsibility and Excuse," Yale Law Journal, 96, pp. 1680–81
  8. Cummins, R. "Culpability and Mental Disorder", p. 244
  9. Goldstein, A. M., Morse, S. J. & Shapiro, D. L. 2003 "Evaluation of criminal responsibility". In Forensic psychology. vol. 11 (ed. A. M. Goldstein), pp. 381–406. New York: Wiley.
  10. "Moral Luck". Stanford Encyclopedia of Philosophy. January 26, 2004.
  11. Nagel, Thomas. 1976, "Moral Luck", Proceedings of the Aristotelian Society Supplementary vol. 50: 137–55.
  12. Hume, D. (1740). A Treatise of Human Nature (1967 edition). Oxford University Press, Oxford. ISBN 0-87220-230-5
  13. Benditt, Theodore (1998) Philosophy Then and Now with eds. Arnold and Graham. Oxford: Blackwell Publishing, 1998. ISBN 1-55786-742-9
  14. Darrow, Clarence, 1924, "The Plea of Clarence Darrow, in Defense of Richard Loeb and Nathan Leopold, Jr., On Trial for Murder"; page reference is to the reprint in Philosophical Explorations: Freedom, God, and Goodness, S. Cahn (ed.), New York: Prometheus Books, 1989.
  15. St. Paul, "Epistle to the Romans", 9:21, King James Bible. Tennessee: The Gideons International.
  16. Greene, J.; Cohen, J. (2004). "For the law, neuroscience changes nothing and everything". Philosophical Transactions of the Royal Society of London B, 359, 1775–1785.
  17. Brower M.C. and Price B.H. (2001). "Neuropsychiatry of frontal lobe dysfunction in violent and criminal behaviour: a critical review". Journal of Neurology, Neurosurgery and Psychiatry, 71: 720–726.
  18. Steinberg, L., Scott, E. S. (2003). "Less guilty by reason of adolescence: developmental immaturity, diminished responsibility, and the juvenile death penalty". American Psychologist 58, 1009–1018.
  19. Teicher, M. H., Anderson, S. L., Polcari, A., Anderson, C. M., Navalta, C. P., and Kim, D. M. (2003). "The neurobiological consequences of early stress and childhood maltreatment". Neuroscience and Behavioral Reviews, 27: 33–44.
  20. David Eagleman, Philosophy Bites Podcast, "David Eagleman on Morality and the Brain"
  21. "Brain tumour causes uncontrollable paedophilia"
  22. Pereboom, Derk (2001). Living without Free Will. Cambridge: Cambridge University Press.
  23. Pereboom, Derk (2005). "Defending Hard Incompatibilism". In Free Will and Moral Responsibility. Malden, MA: Blackwell Publishing. pp. 228–247. ISBN 1-4051-3810-6.
  24. The Bhagavad Gita. New York: Penguin Books. 1962.
  25. An Inspirational Thought for Each Day. California: Self-Realization Fellowship. 1977.
  26. "Baruch Spinoza". Stanford Encyclopedia of Philosophy. Retrieved 3 May 2011.
  27. Dennett, D., (1984) Elbow Room: The Varieties of Free Will Worth Wanting. Bradford Books. ISBN 0-262-54042-8
  28. Mauro, Frédéric (1964). L'expansion européenne (1680-1870) [European expansion (1680-1870)]. Nouvelle Clio: L'Histoire et ses problèmes (in French). 27. Paris: Presses Universitaires de France. p. 213. Ces langues nouvelles véhiculent des concepts nouveaux et des idées parfois fort difficiles à comprendre: par exemple le principe de responsabilité personnelle pour les Iroquiens. [Translation: These new languages introduce new concepts and new ideas, sometimes very difficult to understand, for example the principle of personal responsibility for the Iroquois.]
  29. Nahmias, Eddy; Stephen Morris; Thomas Nadelhoffer; Jason Turner (2006). "Is Incompatibilism Intuitive?". Philosophy and Phenomenological Research. 73 (1): 28–52. doi:10.1111/j.1933-1592.2006.tb00603.x.
  30. Hagop Sarkissian, Amita Chatterjee, Felipe De Brigard, Joshua Knobe, Shaun Nichols, Smita Sirker (forthcoming). "Is belief in free will a cultural universal?" Mind & Language.
  31. Nichols, Shaun; Joshua Knobe (2007). "Moral Responsibility and Determinism: The Cognitive Science of Folk Intuitions". Noûs. 41 (4): 663–685. doi:10.1111/j.1468-0068.2007.00666.x.
  32. Risser, David T. 2006. 'Collective Moral Responsibility'. Internet Encyclopedia of Philosophy (Accessed 8 Sept 2007)
  33. Harpur, T. J., Hare, R. D., & Hakstian, A. R. (1989). "Two-factor conceptualization of psychopathy: Construct validity and assessment implications". Psychological Assessment. 1 (1): 6–17. doi:10.1037/1040-3590.1.1.6.
  34. Allen, Colin; Varner, Gary; Zinser, Jason (2000). "Prolegomena to any future artificial moral agent". Journal of Experimental & Theoretical Artificial Intelligence. 12 (3): 251–261. doi:10.1080/09528130050111428.
  35. Allen, Colin; Smit, Iva; Wallach, Wendell (September 2005). "Artificial Morality: Top-down, Bottom-up and Hybrid Approaches". Ethics and Information Technology. 7 (3): 149–155. doi:10.1007/s10676-006-0004-4.
  36. Sparrow, Robert (2007). "Killer Robots". Journal of Applied Philosophy. 24 (1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x.
  37. Grodzinsky, Frances S.; Miller, Keith W.; Wolf, Marty J. (21 Jun 2008). "The ethics of designing artificial agents". Ethics and Information Technology. 10 (2–3): 115–121.
  38. Kuflik, Arthur (1999). "Computers in control: Rational transfer of authority or irresponsible abdication of autonomy?". Ethics and Information Technology. 1 (3): 173–184. doi:10.1023/A:1010087500508.
  39. Friedman, Batya; Kahn, Jr., Peter H. (January 1992). "Human Agency and Responsible Computing: Implications for Computer System Design". Journal of Systems and Software. 17 (1): 7–14. doi:10.1016/0164-1212(92)90075-u.
  40. Hew, Patrick Chisan (13 May 2014). "Artificial moral agents are infeasible with foreseeable technologies". Ethics and Information Technology. 16 (3): 197–206. doi:10.1007/s10676-014-9345-6.
  41. Matthias, Andreas (2004). "The responsibility gap: Ascribing responsibility for the actions of learning automata". Ethics and Information Technology. 6 (3): 175–183. doi:10.1007/s10676-004-3422-1.
