The following excerpt is taken from Chapter 3 of Cancer Clinical
Trials: Experimental Treatments & How They Can Help You by Robert
Finn, copyright 1999, published by O'Reilly & Associates, Inc. For book
orders/information, call (800) 998-9938. Permission is granted to
print and distribute this excerpt for noncommercial use as long as the
above source is included. The information in this article is meant to
educate and should not be used as a substitute for professional medical advice.
Many people are reluctant to participate in clinical trials because they feel a sense of distaste for the idea of being experimented on, for being treated as if they were human guinea pigs. You've probably heard about the horrific medical experiments that were conducted on unwilling participants in Nazi concentration camps during World War II.1 You may also have heard about the shameful Tuskegee experiment, in which 400 African-American men with syphilis were left untreated for decades--even after a cure for syphilis became available--so that scientists could study the natural course of the disease.2 Those historical incidents along with several others--not to mention uncounted numbers of mad-scientist movies--have made many people wary of participating in clinical trials.
Fortunately, it's extremely rare these days for research subjects to be treated badly. Those past abuses led to the development of strict ethical codes for the conduct of clinical trials, and since the mid-1960s participants in clinical trials have been the beneficiaries of strong ethical, legal, and procedural protections. But that doesn't mean that all ethical questions surrounding clinical trials have been solved, or that you can assume that every clinical trial is being conducted in an exemplary fashion. You'll want to understand some of the competing ethical principles involved in clinical trials, and you'll want to know how the designers of clinical trials decide what's in the patient's best interests and what constitutes an ethical trial.
Human beings have probably been conducting clinical trials since before the dawn of civilization. The first person who realized that a wound that's cleaned and wrapped heals better than one that's left open and dirty conducted a kind of primitive clinical trial.
A passage in the Old Testament even describes a clinical trial. The first chapter of the Book of Daniel tells what happens after Nebuchadnezzar, king of Babylon, conquered Israel.
The king ordered that several Jewish youths be brought to his palace for three years, where they would be fed and taught just like his own children. Among the youths was Daniel, who did not wish to defile himself by eating the king's meat or drinking his wine. He proposed to Melzar, the king's head eunuch, that they be allowed to eat "pulse" (a term referring to peas and beans) and to drink water instead. But Melzar feared Nebuchadnezzar's wrath if the Jewish youths grew sick. So Daniel suggested an experiment:
"Prove thy servants, I beseech thee, ten days; and let them give us pulse to eat and water to drink. Then let our countenances be looked upon before thee, and the countenance of the children that eat of the portion of the king's meat: and as thou seest, deal with thy servants." So he consented to them in this manner and proved them ten days. And at the end of ten days their countenances appeared fairer and fatter in flesh than all the children which did eat the portion of the king's meat. Thus Melzar took away the portion of their meat, and the wine that they should drink; and gave them pulse.3
Clinical trials are also discussed in ancient Greek, Roman, and Arab medical works, but it wasn't until the 12th and 13th Centuries A.D. that any ethical codes regarding human experimentation were written down.4 Moses Maimonides (1135-1204), the Jewish physician, philosopher, and Rabbi of Cairo, taught that physicians should seek to help individual patients, and should not use them merely as a way of learning new facts. Roger Bacon (1214-1294), the English scientist, philosopher, and Franciscan monk, noted that it was difficult for the physician to conduct experiments on living humans, "because of the nobility of the material in which he works; for that body demands that no error be made in operating upon it."
It wasn't until the 18th and 19th Centuries that clinical trials became a fairly common way of testing new medical treatments. Often physicians would test potential remedies on themselves or on close friends and relatives. While developing the smallpox vaccine, the English physician Edward Jenner in 1789 first tried inoculating his own son, then just one year old, with swinepox, in the hope that the milder disease that affected pigs would prevent the child from developing the far more serious human disease. Unfortunately, Jenner's son caught smallpox anyway. Several years later, in 1796, Jenner inoculated a neighbor's child with cowpox, following up six weeks later with an inoculation of smallpox itself. The child didn't get the disease, proving that vaccination worked.
The famous French physician Louis Pasteur (1822-1895) was a brilliant practitioner of human experimentation, and he was also keenly aware of the ethical implications of his work. In one of his lines of research he worked for many years using animal experiments to develop an antidote for rabies, and in 1884 he finally had a remedy that he thought would be highly effective. Yet he agonized about when and whether to try his antidote on a person. Only after being begged by the mother of a nine-year-old boy who had been bitten by a rabid dog, and after consulting with two colleagues who assured him that the child would certainly die without treatment, was he persuaded to try the antidote. Feeling great anxiety Pasteur gave the child twelve inoculations. The boy lived.
During the course of the 19th Century, larger and more organized clinical trials became increasingly typical in medicine. Ethical protection of research subjects also became part of Anglo-American common law, which carefully distinguished between science and quack medicine, and which regarded clinical trials as legitimate--providing the researcher had the participant's agreement.
With the increasing prevalence of clinical trials, however, came some ethically questionable practices. While conducting his groundbreaking yellow fever research in Cuba, the American physician Walter Reed (1851-1902) sought volunteers among American soldiers. These soldiers would allow themselves to be bitten by mosquitoes, which Reed believed (correctly, as it turned out) were the source of the infection. Reed offered his volunteers the princely sum of $100 in gold for allowing themselves to be bitten, and those who actually contracted yellow fever were given a bonus of another $100. But he exaggerated greatly while recruiting volunteers. He enticed people to join the experiment by stating that a case of yellow fever endangers life only "to a certain extent," when in fact the disease was often fatal. And he also said that it would be "entirely impossible" for non-volunteers living in Cuba to avoid the infection, when in fact many people never caught the disease, even though it was epidemic.
In the years before World War II some of the world's most prominent physicians, including George Sternberg, the Surgeon General of the U.S. Army, believed that it was permissible to conduct experiments on vulnerable populations. Infants, condemned prisoners, and people who lived in large state institutions for the mentally retarded were frequently used in medical experimentation, including experiments that were clearly not designed to be therapeutic. To mention just one shocking example, orange juice was withheld from orphans at the Hebrew Infant Asylum of New York City so doctors could study the development of scurvy. Few if any of these experiments were regarded as unethical at the time, and hardly any of the investigators were even criticized for their practices.
The Nuremberg Code
In his article on the history of human research, David J. Rothman describes World War II as the "transforming event" in the conduct of clinical trials.5 While the odious experiments performed by the Nazis on concentration-camp inmates have received the most attention in that regard--and were the only ones to be prosecuted after the war--it's important to remember that under the pressure of the wartime emergency the Allies also conducted medical experiments that would be regarded as highly unethical today.
The Nazi experiments are almost too horrible to describe.6 Inmates were placed in decompression chambers to simulate the effects of extremely high altitudes. They were plunged into icy water to see how long downed pilots could survive. They were injected with toxins and with infectious agents including typhus. They were intentionally given mutilating wounds. Almost all the subjects of these experiments died in the course of the research. One of the many awful aspects of this history is that the majority of these studies were entirely without scientific merit.
Out of this horror came the first formalized set of ethical rules for the conduct of human experimentation. In the aftermath of the war the Nuremberg Tribunal prosecuted the perpetrators, and in 1947 developed a set of ethical principles that have come to be known as the Nuremberg Code. We've printed the Nuremberg Code in its entirety in Critical Public Documents.
In remarkably clear and definitive language the Code sets out ten ethical principles for the conduct of clinical trials. The first is the most important: "The voluntary consent of the human subject is absolutely essential." Moreover, this consent must be obtained "without the intervention of any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion."
The Code directs researchers to ensure that experiments on humans are well designed, are conducted by qualified personnel, are based on the results of animal experimentation, and carry a degree of risk commensurate with the humanitarian importance of the problem to be solved. In other words, the Code says that you may conduct an experiment with potentially dangerous side effects if you're trying to cure a deadly disease like cancer, but not if you're only trying to cure the common cold.
The Code also gives the participant the right to leave the trial at any time, "if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible."
Despite the clear language of the Nuremberg Code, and despite the fact that it is regarded by many as the gold standard for the conduct of clinical trials, it does have a number of problems. For one thing, if it's interpreted literally, the Code seems to prohibit any research involving children or the mentally incompetent, such as people with Alzheimer's disease. That's because children and the mentally incompetent do not have "the legal capacity to give consent," in the words of the Nuremberg Code. The Code makes no provision for consent by parents or legal guardians.7
Perhaps the biggest problem with the Nuremberg Code is that while it had some moral force, it did not have the force of law in the United States, and its provisions were widely ignored for at least 20 years.8 Rothman writes that from the point of view of most American investigators, "[T]he Code had nothing to do with science and everything to do with Nazis. The guilty parties were seen less as doctors than as Hitler's henchmen." This left American physicians free to conduct clinical trials simply in accordance with their consciences and with virtually no oversight or regulation.
During the postwar years the American Medical Association developed a research code, and the World Medical Association issued the Helsinki Declaration with detailed rules for human experimentation. Although these documents were the subject of a great deal of discussion in the medical community, neither proved to have much influence on the conduct of clinical trials.
Then, in June 1966, the tide turned. Henry K. Beecher, an anesthesiologist at Harvard Medical School, published a highly influential article entitled "Ethics and Clinical Research" in the New England Journal of Medicine.9 In his article he listed no fewer than twenty-two clinical trials that appeared to be highly unethical, in which researchers risked their patients' lives without fully informing them of the dangers and without obtaining their permission.
In one of these cases, investigators fed live hepatitis virus to mentally retarded residents of Willowbrook, a state institution in New York. In another case, investigators injected live cancer cells into senile patients at the Jewish Chronic Disease Hospital in Brooklyn to observe their immunological responses. In neither case were the experimental subjects properly informed of the dangers of the research. In neither case did the research have any potential therapeutic value to the patients under study.
Then, in 1972, came the revelation of the Tuskegee experiment. Beginning in 1932 and continuing for four long decades, investigators examined--but did not treat--a group of 400 African-American men who had contracted syphilis. The researchers were interested in watching the natural course of the disease as it developed.
In 1932 the existing treatments for syphilis were complex and not very effective, so the researchers felt they were justified in not treating the men. But what could the researchers have been thinking when they took steps to make sure the men would not be drafted into the army, where they would have received treatment? And how did the researchers rationalize leaving the men untreated even after penicillin, a highly effective cure for syphilis, became widely available in 1945? In fact, many of the men were left untreated until the scandal was uncovered in 1972.
The modern era
The uproar over the Tuskegee experiment and the Beecher article led directly to substantive changes in the way clinical trials were run in this country. The NIH quickly established rules requiring that committees called Institutional Review Boards (IRBs) be set up at each facility conducting clinical trials. IRBs were charged with conducting peer review of proposed research involving human beings. For the first time individual investigators were not permitted to decide for themselves whether their research was ethical. Instead it had to pass the muster of their colleagues.
The FDA, for its part, issued regulations that concentrated more on consumer protection. These regulations were the first to require that investigators obtain fully informed consent from potential subjects.
The US Congress followed in 1974 by creating the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Made up of eleven members, only a minority of whom were researchers (the remainder were experts in such fields as law, ethics, philosophy, and theology), the National Commission in 1979 issued a highly influential document known as the Belmont Report. The Belmont Report is included in its entirety in Critical Public Documents. Its language is clear and eloquent, and it still shapes how clinical trials are designed today.
The Belmont Report lays out three basic ethical principles for the conduct of clinical trials. They are:
Respect for Persons. Individuals should be regarded as autonomous agents, and their opinions and choices should be respected. Some people, such as children or individuals with mental incapacities, are not fully capable of self-determination, and those people should be subject to special protection.
Beneficence. The Belmont Report's definition of beneficence goes beyond its common meaning, which covers acts of charity or kindness. The National Commission regarded beneficence as an actual obligation, involving two rules: 1) do no harm, and 2) maximize benefits and minimize harms.
Justice. The benefits of research should be distributed fairly.
In applying those principles, the report's authors recommended that consideration be given to three requirements:
Informed consent. In order to provide fully informed consent, a potential research subject must first be given full information about the research project. Second, that information must be presented in a comprehensible way, taking into account the patient's intellectual capacities. If these capacities are limited, as they are in children or people who are mentally disabled, the consent of responsible third parties must be sought. However if this guardian agrees to the research but the patient objects, this objection must be respected, unless the study involves therapy that's unavailable outside the research setting. Third, the consent must be truly voluntary, and free from coercion and undue influence. Coercion occurs when there is a threat of harm. "You're going to die if you don't agree to participate," is an improperly coercive statement. Undue influence occurs through the offer of an excessive or inappropriate reward. "If you participate in this clinical trial, we'll cure your cancer," is an example of undue influence.
Assessment of risks and benefits. The dangers of any clinical trial must not exceed its potential benefits. Both the researcher and the IRB must explicitly consider not only the risks to a particular research subject, but also the risks to the subject's family and to society at large.
Selection of subjects. There must be fair procedures for the selection of research subjects. Investigators must not select certain patients merely because they like them. Conversely, investigators must not seek out undesirable people, such as prisoners, for especially risky experiments.
The federal regulations setting out all the rules for the conduct of clinical trials were revised most recently in 1991. They are part of the Code of Federal Regulations, Title 45, which deals with public welfare. The relevant section is Part 46, "Protection of Human Subjects." Although these regulations are too lengthy to print in this book, it's a good idea to take a look at them, especially if you suspect a violation of the rules. You can find the full text in many public libraries or online at: http://www.med.umich.edu/irbmed/FederalDocuments/hhs/
Adherence to these regulations has significantly improved protections for research subjects. Clinical trials are conducted far more ethically and are far safer now than they were thirty years ago. But this certainly does not mean that every ethical problem has been solved. On the contrary, the elimination of gross abuses tends to highlight more subtle ethical problems. In the following sections we'll discuss some of the ethical dilemmas that confront investigators and patients during the conduct of clinical trials.
Notes
1. Robert Jay Lifton, The Nazi Doctors: Medical Killing and the Psychology of Genocide (New York: Basic Books, 1986).
2. James H. Jones, Bad Blood: The Tuskegee Syphilis Experiment (New York: Free Press, 1993).
3. Daniel 1:12-16 (King James Version).
4. The discussion of the history of clinical trials owes a great deal to: David J. Rothman, "Research, Human: Historical Aspects," in Encyclopedia of Bioethics, Revised Edition (New York: Simon & Schuster Macmillan, 1995): 2248-2258, and Harold Y. Vanderpool, "Introduction and Overview: Ethics, Historical Case Studies, and the Research Enterprise," in Vanderpool, The Ethics of Research, 1-30.
5. Rothman, "Research, Human," 2251.
6. Lifton, The Nazi Doctors.
7. Albert R. Jonsen, "The Weight and Weighing of Ethical Principles," in Vanderpool, The Ethics of Research, 59-82.
8. Rothman, "Research, Human," 2253.
9. Henry K. Beecher, "Ethics and Clinical Research," New England Journal of Medicine 274 (1966): 1354-60.