Medical ethics is primarily a field of applied ethics, the study of moral values and judgments as they apply to medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology.
Medical ethics tends to be understood narrowly as an applied professional ethics, whereas bioethics has more expansive concerns, touching upon the philosophy of science and the critique of biotechnology. Still, the two fields often overlap, and the distinction is more a matter of style than professional consensus.
Medical ethics shares many principles with other branches of healthcare ethics, such as nursing ethics.
Historically, Western medical ethics may be traced to guidelines on the duty of physicians in antiquity, such as the Hippocratic Oath, and early rabbinic and Christian teachings. In the medieval and early modern period, the field is indebted to Muslim physicians such as Ishaq bin Ali Rahawi (who wrote the Conduct of a Physician, the first book dedicated to medical ethics) and al-Razi (known as Rhazes in the West), Jewish thinkers such as Maimonides, Roman Catholic scholastic thinkers such as Thomas Aquinas, and the case-oriented analysis (casuistry) of Catholic moral theology. These intellectual traditions continue in Catholic, Islamic and Jewish medical ethics.
By the 18th and 19th centuries, medical ethics emerged as a more self-conscious discourse. For instance, authors such as the British physician Thomas Percival (1740-1804) of Manchester wrote about "medical jurisprudence" and reportedly coined the phrase "medical ethics." Percival's guidelines on physician consultations have been criticized as excessively protective of the home physician's reputation; Jeffrey Berlant is one such critic, who considers Percival's codes of physician consultations an early example of the anticompetitive, "guild"-like nature of the physician community.[1][2] In 1847, the American Medical Association adopted its first code of ethics, based in large part upon Percival's work.[2] While the secularized field borrowed largely from Catholic medical ethics, in the 20th century a distinctively liberal Protestant approach was articulated by thinkers such as Joseph Fletcher. In the 1960s and 1970s, building upon liberal theory and procedural justice, much of the discourse of medical ethics went through a dramatic shift and largely reconfigured itself into bioethics.[3]
Six of the values that commonly apply to medical ethics discussions are autonomy, beneficence, non-maleficence, justice, dignity, and truthfulness.
Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts.
When moral values are in conflict, the result may be an ethical dilemma or crisis. Writers on medical ethics have suggested many methods to help resolve such conflicts. Sometimes no good solution to a dilemma in medical ethics exists, and occasionally the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members. For example, the principles of autonomy and beneficence clash when a patient refuses a life-saving blood transfusion; likewise, truth-telling was not strongly emphasized before the HIV era.
In the United Kingdom, the General Medical Council provides clear overall modern guidance in the form of its 'Good Medical Practice' statement. Other organisations, such as the Medical Protection Society and a number of university departments, are often consulted by British doctors regarding issues relating to ethics.
How does one ensure that appropriate ethical values are being applied within hospitals? Effective hospital accreditation requires that ethical considerations are taken into account, for example with respect to physician integrity, conflicts of interest, research ethics and organ transplantation ethics.
Informed consent in ethics usually refers to the idea that a person must be fully informed about and understand the potential benefits and risks of their choice of treatment. An uninformed person is at risk of mistakenly making a choice not reflective of his or her values or wishes. It does not specifically mean the process of obtaining consent, nor the specific legal requirements, which vary from place to place, for capacity to consent. Patients can elect to make their own medical decisions, or can delegate decision-making authority to another party. If the patient is incapacitated, laws around the world designate different processes for obtaining informed consent, typically by having a person appointed by the patient or their next-of-kin make decisions for them. The value of informed consent is closely related to the values of autonomy and truth telling.
Confidentiality is commonly applied to conversations between doctors and patients. This concept is commonly known as patient-physician privilege.
Legal protections prevent physicians from revealing their discussions with patients, even under oath in court.
Confidentiality is mandated in America by HIPAA laws, specifically the Privacy Rule, and various state laws, some more rigorous than HIPAA.
Confidentiality is challenged in cases such as the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, or in the termination of a pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion.[3]
Beneficence is the concept of doing good to humanity in general: the principle of taking actions that benefit the patient and are in his or her best interest.
Many health care practitioners are driven by altruistic intentions: wishing to do good to others, or a desire to be helpful. This may be based on personal history, such as being born into a family with medical, psychological or social problems, and thence having learned to cope with this or having thereby become competent in helping others.
Sometimes this can be tainted by often unconscious selfish aspects: the "do-gooders". In the medical profession this is regarded with caution, as it may lead people to offer help in cases where help is not really needed, or where self-help would produce more benefit in the long term. There often exists a co-dependent relationship between such "do-gooders" and "health care shoppers" (people who go from one practitioner to another, sometimes in a quest to find real solace, but sometimes also because true help would eliminate their disease or symptoms, and with them the social benefits the sick person may derive).
Beneficence in health care also invites a more general interest in preventive health care, where the focus and modus operandi is not disease but health.
The principle of autonomy recognizes the rights of individuals to self-determination. This is rooted in society's respect for individuals' ability to make informed decisions about personal matters. Autonomy has become more important as social values have shifted to define medical quality in terms of outcomes that are important to the patient rather than to medical professionals. The increasing importance of autonomy can be seen as a social reaction to a "paternalistic" tradition within healthcare. Some have questioned whether the backlash against historically excessive paternalism in favor of patient autonomy has inhibited the proper use of soft paternalism, to the detriment of outcomes for some patients.[4] Respect for autonomy is the basis for informed consent and advance directives. Autonomy can often come into conflict with beneficence when patients disagree with recommendations that health care professionals believe are in the patient's best interest. Individuals' capacity for informed decision making may come into question during resolution of conflicts between autonomy and beneficence. The role of surrogate medical decision makers is an extension of the principle of autonomy.
Autonomy is also a general indicator of health: many diseases are characterised by a loss of autonomy in various ways. This makes autonomy an indicator both of personal well-being and of the well-being of the profession, which has implications for medical ethics: "is the aim of health care to do good, and benefit from it?"; or "is the aim of health care to do good to others, and have them, and society, benefit from this?". (Ethics, by definition, tries to find a beneficial balance between the activities of the individual and their effects on a collective.)
By considering autonomy as a gauge of (self) health care, both the medical and the ethical perspective benefit from the implied reference to health.
The concept of non-maleficence is embodied by the phrase "first, do no harm," or the Latin primum non nocere. Many consider that this should be the main or primary consideration (hence primum): that it is more important not to harm your patient than to do them good. This is partly because enthusiastic practitioners are prone to using treatments that they believe will do good, without first having evaluated them adequately to ensure they do no (or only acceptable levels of) harm. Much harm has been done to patients as a result. It is not only more important to do no harm than to do good; it is also important to know how likely it is that your treatment will harm a patient. So a physician should go further than not prescribing medications they know to be harmful: he or she should not prescribe medications (or otherwise treat the patient) unless the treatment is unlikely to be harmful, or at the very least the patient understands the risks and benefits and the likely benefits outweigh the likely risks.
In practice, however, many treatments carry some risk of harm. In some circumstances, e.g. in desperate situations where the outcome without treatment will be grave, risky treatments that stand a high chance of harming the patient will be justified, as the risk of not treating is also very likely to do harm. So the principle of non-maleficence is not absolute, and must be balanced against the principle of beneficence (doing good).
American physicians interpret this principle to exclude the practice of euthanasia, though not all concur. Probably the most extreme example in recent history of the violation of the non-maleficence dictum was Dr. Jack Kevorkian, who was convicted of second-degree murder in Michigan in 1999 after demonstrating active euthanasia on the television news show 60 Minutes.
In some countries euthanasia is accepted as standard medical practice, and legal regulations assign it to the medical profession. In such nations, the aim is to alleviate the suffering of patients from diseases known to be incurable by the methods known in that culture. In that sense, primum non nocere is based on the realisation that the inability of the medical expert to offer help creates a known, great, and ongoing suffering in the patient. "Not acting" in those cases is believed to be more damaging than actively relieving the suffering of the patient. Evidently the ability to offer help depends on the limitations of what the practitioner can do. These limitations are characteristic of each different form of healing and of the legal system of the specific culture. The aim to "do no harm" is still the same; it gives the medical practitioner a responsibility to help the patient through the intentional and active relief of suffering in those cases where no cure can be offered.
"Non-maleficence" is defined by its cultural context. Every culture has its own collective definitions of 'good' and 'evil'. These definitions depend on the degree to which the culture sets its cultural values apart from nature. In some cultures the terms "good" and "evil" are absent: for them these words lack meaning, as their experience of nature does not set them apart from it. Other cultures place humans in interaction with nature; some even place humans in a position of dominance over nature. Religions are the main means of expressing these considerations.
Depending on the cultural consensus (as expressed by a society's religious, political and legal systems), the legal definition of non-maleficence differs. Violation of non-maleficence is the subject of medical malpractice litigation; regulations differ over time and from nation to nation.
Some interventions undertaken by physicians can create a positive outcome while also potentially doing harm. The combination of these two circumstances is known as the "double effect." The classic example of this phenomenon is the use of morphine in the dying patient: such use can ease the pain and suffering of the patient while simultaneously hastening the patient's demise through suppression of the respiratory drive.
It has been argued that mainstream medical ethics is biased by the assumption of a framework in which individuals are not simply free to contract with one another to provide whatever medical treatment is demanded, subject to the ability to pay. Because a high proportion of medical care is typically provided via the welfare state, and because there are legal restrictions on what treatment may be provided and by whom, an automatic divergence may exist between the wishes of patients and the preferences of medical practitioners and other parties. Tassano[5] has questioned the idea that Beneficence might in some cases have priority over Autonomy. He argues that violations of Autonomy more often reflect the interests of the state or of the supplier group than those of the patient.
Practitioners in the field often feel that ethics is a back-door attempt by theological orthodoxy (a closed system) to encroach upon medicine, which is a natural science (an open system that values empirical observation, data collection, and speculative hypotheses revised on the best available evidence). Just as an applicant's religion, or lack thereof, is not a criterion for admission to medical school, it should not be a criterion in their professional labours; this, of course, is impossible to legislate.
For the secular physician, morality is pragmatic, not received wisdom. As such, upon licensing, just as professional competence is assumed on the basis of successful completion of training, exams, etc., an "average expectable ethics" should be assumed of physicians. For significant deviations from this, the routine regulatory professional bodies or the courts of law are the valid social recourses. On this view, "ethics committees" are not much more than special interest groups with didactic agendas used to obtain and hold on to power, and should thus be studied under political studies or the sociology of organizational behavior.
Many so-called "ethical conflicts" in medical ethics are traceable to a lack of communication. Communication breakdowns between patients and their healthcare team, between family members, or between members of the medical community can all lead to disagreements and strong feelings. These breakdowns should be remedied; many apparently insurmountable "ethics" problems can be solved with open lines of communication.
Many times, simple communication is not enough to resolve a conflict, and an ad hoc hospital ethics committee must convene to decide a complex matter. As ethical issues increase, permanent bodies (ethics boards) are established to a greater extent. These bodies are composed of health care professionals, philosophers, lay people, and sometimes clergy.
The assignment of philosophers or clergy reflects the importance a society attaches to the basic values involved. An example from Sweden, where Torbjörn Tännsjö has served on a couple of such committees, indicates that secular trends are gaining influence.
Culture differences can create difficult medical ethics problems. Some cultures have spiritual or magical theories about the origins of disease, for example, and reconciling these beliefs with the tenets of Western medicine can be difficult.
Some cultures do not place a great emphasis on informing the patient of the diagnosis, especially when cancer is the diagnosis. Even American culture did not emphasize truth-telling in a cancer case, up until the 1970s. In American medicine, the principle of informed consent takes precedence over other ethical values, and patients are usually at least asked whether they want to know the diagnosis.
The delivery of diagnoses online has led some patients to believe that doctors in some parts of the country are at the direct service of drug companies, finding a diagnosis as convenient as whichever drug still holds patent rights. Physicians and drug companies have been found competing for top-ten search engine rankings to lower the costs of selling these drugs, with little to no patient involvement.[6]
Physicians should not allow a conflict of interest to influence medical judgment. In some cases, conflicts are hard to avoid, and doctors have a responsibility to avoid entering such situations. Unfortunately, research has shown that conflicts of interest are very common among both academic physicians[7] and physicians in practice[8]. The Pew Charitable Trusts has announced the Prescription Project for "academic medical centers, professional medical societies and public and private payers to end conflicts of interest resulting from the $12 billion spent annually on pharmaceutical marketing".
For example, doctors who receive income from referring patients for medical tests have been shown to refer more patients for medical tests [9]. This practice is proscribed by the American College of Physicians Ethics Manual [10].
Fee splitting and the payments of commissions to attract referrals of patients is considered unethical and unacceptable in most parts of the world - while it is rapidly becoming routine in other countries, like India, where many urban practitioners currently pay a percentage of office-visit charges, lab tests as well as hospital care to unaccredited "quacks", or semi-accredited "practitioners of alternative medicine", who refer the patient. It is tolerated in some areas of US medical care as well.
Studies show that doctors can be influenced by drug company inducements, including gifts and food.[11] Industry-sponsored Continuing Medical Education (CME) programs influence prescribing patterns.[12] Many patients surveyed in one study agreed that physician gifts from drug companies influence prescribing practices.[13] A growing movement among physicians is attempting to diminish the influence of pharmaceutical industry marketing upon medical practice, as evidenced by Stanford University's ban on drug company-sponsored lunches and gifts. Other academic institutions that have banned pharmaceutical industry-sponsored gifts and food include the University of Pennsylvania and Yale University.[14]
Many doctors treat their family members. Doctors who do so must be vigilant not to create conflicts of interest or treat inappropriately.[15][16].
Sexual relationships between doctors and patients can create ethical conflicts, since sexual consent may conflict with the fiduciary responsibility of the physician. Doctors who enter into sexual relationships with patients face the threats of deregistration and prosecution. In the early 1990s it was estimated that 2-9% of doctors had violated this rule[17]. Sexual relationships between physicians and patients' relatives may also be prohibited in some jurisdictions, although this prohibition is highly controversial.[18].
The concept of medical futility has been an important topic in discussions of medical ethics. What should be done if there is no chance that a patient will survive but the family members insist on advanced care? Previously, some articles defined futility as the patient having less than a one percent chance of surviving. Some of these cases wind up in the courts. Advance directives include living wills and durable powers of attorney for health care. (See also Do Not Resuscitate and cardiopulmonary resuscitation.) In many cases, the "expressed wishes" of the patient are documented in these directives, and this provides a framework to guide family members and health care professionals in the decision-making process when the patient is incapacitated. Undocumented expressed wishes can also help guide decisions in the absence of advance directives, as in the Quinlan case in New Jersey.
"Substituted judgment" is the concept that a family member can give consent for treatment if the patient is unable (or unwilling) to give consent himself. The key question for the decision making surrogate is not, "What would you like to do?", but instead, "What do you think the patient would want in this situation?".
Courts have supported families' definitions of futility that extend to simple biological survival, as in the Baby K case (in which the courts ordered that an anencephalic infant be kept on mechanical ventilation at the mother's insistence, over the objections of physicians who considered such care futile).
A more in-depth discussion of futility is available at futile medical care. In some hospitals, medical futility is referred to as "non-beneficial care."
Critics claim that this is how the State, and perhaps the Church, through its adherents in the executive and the judiciary, interferes in order to further its own agenda at the expense of the patient's. In the Baby K litigation, the courts relied in part on federal statutes, including the Americans with Disabilities Act, to require continued treatment, an outcome critics have read as an effort to prop up "Right to Life" philosophies.
Many famous cases in medical ethics illustrate and helped define important issues.