| Food and Drug Administration | |
|---|---|
| Formed | 1906[1] |
| Preceding agencies | Food, Drug, and Insecticide Administration (July 1927 – July 1930); Bureau of Chemistry, USDA (July 1901 – July 1927); Division of Chemistry, USDA (established 1862) |
| Jurisdiction | Federal government of the United States |
| Headquarters | 5600 Fishers Lane, Rockville, MD |
| Employees | 9,300 (2008) |
| Annual budget | $2.3 billion (2008) |
| Agency executive | Andrew von Eschenbach, Commissioner |
| Parent agency | Department of Health and Human Services |
| Website | www.fda.gov |
The U.S. Food and Drug Administration (FDA) is an agency of the United States Department of Health and Human Services and is responsible for the safety regulation of most types of foods, dietary supplements, drugs, vaccines, biological medical products, blood products, medical devices, radiation-emitting devices, veterinary products, and cosmetics. The FDA also enforces section 361 of the Public Health Service Act and the associated regulations, including sanitation requirements on interstate travel as well as specific rules for control of disease on products ranging from pet turtles to semen donations for assisted reproductive medicine techniques.
The FDA is an agency within the United States Department of Health and Human Services responsible for protecting and promoting the nation's public health. The FDA is headquartered in Rockville, MD with 223 field offices[2] supported by 13 laboratories located throughout the United States, the U.S. Virgin Islands, and Puerto Rico.
The agency is organized into major subdivisions, each focused on a major area of regulatory responsibility, including the Center for Food Safety and Applied Nutrition, the Center for Drug Evaluation and Research, the Center for Biologics Evaluation and Research, the Center for Devices and Radiological Health, and the Center for Veterinary Medicine.
The FDA frequently works in conjunction with other federal agencies, including the Department of Agriculture, the Drug Enforcement Administration, Customs and Border Protection, and the Consumer Product Safety Commission. Local and state government agencies also often cooperate with the FDA on regulatory inspections and enforcement actions.
The FDA regulates more than $1 trillion worth of consumer goods, about 25 percent of consumer expenditures in the United States. This includes $466 billion in food sales, $275 billion in drugs, $60 billion in cosmetics, and $18 billion in vitamin supplements. Much of this spending is on goods imported into the United States; the FDA is responsible for monitoring a third of all imports.[3]
The FDA's federal budget request for fiscal year (FY) 2008 (October 2007 through September 2008) totaled $2.1 billion, a $105.8 million increase over what it received for FY 2007.[4] In February 2008, the FDA announced that the Bush Administration's FY 2009 budget request for the agency was just under $2.4 billion: $1.77 billion in budget authority (federal funding) and $628 million in user fees. The requested budget authority was $50.7 million more than the FY 2008 funding, an increase of about three percent. In June 2008, Congress gave the agency an emergency appropriation of $150 million for FY 2008 and another $150 million for FY 2009.[3]
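As a quick check of the figures quoted above, the following sketch (purely illustrative) reproduces the arithmetic behind the FY 2009 request; the dollar amounts are those stated in the text, and the variable names are not from any FDA document.

```python
# Sanity-check of the FY 2009 budget figures quoted above.
budget_authority = 1.77e9         # requested federal funding
user_fees = 628e6                 # requested user fees
increase_over_fy2008 = 50.7e6     # stated increase in budget authority

fy2008_budget_authority = budget_authority - increase_over_fy2008
total_request = budget_authority + user_fees
increase_pct = 100 * increase_over_fy2008 / fy2008_budget_authority

print(f"Total FY 2009 request: ${total_request/1e9:.2f} billion")      # ~$2.40 billion
print(f"Increase over FY 2008 budget authority: {increase_pct:.1f}%")  # ~2.9%, about three percent
```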
The FDA receives user fees submitted with New Drug Applications under the Prescription Drug User Fee Act (PDUFA); the company submitting an application pays a fee for the review of the new product. A similar process is used for medical devices under the Medical Device User Fee and Modernization Act (MDUFMA) and for animal drugs under a similar act. These fees are typically waived or reduced for small businesses.
Most federal laws administered by the FDA are codified in the Federal Food, Drug, and Cosmetic Act,[5] which appears as Title 21, Chapter 9 of the United States Code. Other significant laws enforced by the FDA include the Public Health Service Act, the Controlled Substances Act, and the Federal Anti-Tampering Act.
Important enabling legislation for the FDA includes the 1902 Biologics Control Act, the 1906 Pure Food and Drug Act, the 1938 Federal Food, Drug, and Cosmetic Act, the 1951 Durham-Humphrey Amendment, the 1962 Kefauver-Harris Amendment, the 1984 Drug Price Competition and Patent Term Restoration Act (Hatch-Waxman Act), the Prescription Drug User Fee Act, and the 1997 Food and Drug Administration Modernization Act.
The programs for FDA safety regulation vary widely by the type of product, its potential risks, and the regulatory powers granted to the agency. For example, the FDA regulates almost every facet of prescription drugs, including testing, manufacturing, labeling, advertising, marketing, efficacy, and safety, yet FDA regulation of cosmetics focuses primarily on labeling and safety. The FDA regulates most products with a set of published standards enforced by a modest number of facility inspections.
The Center for Food Safety and Applied Nutrition (CFSAN) is the branch of the FDA responsible for ensuring the safety and accurate labeling of nearly all food products in the United States.[6] One exception is meat products derived from traditional domesticated animals, such as cattle and chickens, which fall under the jurisdiction of the United States Department of Agriculture's Food Safety and Inspection Service. Products that contain minimal amounts of meat are regulated by the FDA, and the exact boundaries are listed in a memorandum of understanding between the two agencies. Medicines and other products given to domesticated animals, however, are regulated by the FDA through a different branch, the Center for Veterinary Medicine. Other consumables not regulated by the FDA include beverages containing more than 7% alcohol (regulated by the Bureau of Alcohol, Tobacco, Firearms and Explosives in the Department of Justice) and non-bottled drinking water (regulated by the United States Environmental Protection Agency (EPA)).
CFSAN's activities include establishing and maintaining food standards, such as standards of identity (for example, the requirements a product must meet to be labeled "yogurt"). CFSAN also sets the requirements for nutrition labeling of most foods. Both food standards and nutrition labeling requirements are part of the Code of Federal Regulations.
The Dietary Supplement Health and Education Act of 1994 mandated that the FDA regulate dietary supplements as foods rather than as drugs. Dietary supplements are therefore not subject to pre-market safety and efficacy testing, and there are no approval requirements; the FDA can take action against a dietary supplement only after it is shown to be unsafe. Manufacturers of dietary supplements are permitted to make specific claims of health benefits, referred to as "structure or function claims," on the labels of these products. They may not claim to treat, diagnose, cure, or prevent disease, and they must include a disclaimer on the label.[7]
Bottled water is regulated in the United States by the FDA;[8] state governments also regulate bottled water. Tap water is regulated by state and local authorities as well as the EPA. FDA regulation of bottled water generally follows the guidelines established by the EPA, and new EPA rules automatically apply to bottled water if the FDA does not issue an explicit rule of its own.[9] Water bottlers in the U.S. are subject to inspection like other food firms, but quality controls for the bottled water industry are not nearly as stringent as those for municipal water supplies.
The Center for Drug Evaluation and Research has different requirements for the three main types of drug products: new drugs, generic drugs and over-the-counter drugs. A drug is considered "new" if it is made by a different manufacturer, uses different excipients or inactive ingredients, is used for a different purpose, or undergoes any substantial change. The most rigorous requirements apply to "new molecular entities": drugs which are not based on existing medications.
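To make the classification above concrete, the sketch below restates the "new drug" criteria as a simple check. It is only an illustration of the paragraph's wording; the class and function names are hypothetical and do not correspond to any FDA system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrugProduct:
    manufacturer: str
    excipients: frozenset       # inactive ingredients
    indication: str             # intended medical use
    substantially_changed: bool = False

def is_new_drug(candidate: DrugProduct, reference: DrugProduct) -> bool:
    """A drug is treated as "new" if it differs from the reference product
    in manufacturer, inactive ingredients, or intended use, or if it has
    undergone any other substantial change."""
    return (
        candidate.manufacturer != reference.manufacturer
        or candidate.excipients != reference.excipients
        or candidate.indication != reference.indication
        or candidate.substantially_changed
    )
```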
New drugs receive extensive scrutiny before FDA approval in a process called a New Drug Application or NDA. New drugs are available only by prescription by default. A change to Over the Counter (OTC) status is a separate process and the drug must be approved through an NDA first.
A drug that is approved is said to be "safe and effective when used as directed."
The FDA reviews and regulates prescription drug advertising and promotion. (Other kinds of advertising, including for over-the-counter drugs, are regulated by the Federal Trade Commission.) The drug advertising regulation[10] contains two key requirements. Under most circumstances, a company may only advertise a drug for the specific indication or medical use for which it was approved. "Off-label use", using a drug for other than its approved purpose, is common in medical practice. Also, an advertisement must contain "fair balance" between the benefits and risks of a drug.
The term "off-label", often now used with a vaguely pejorative connotation, actually may mean only that the pharmaceutical company did not choose to invest in the lengthy and costly regimen of clinical trials required by the FDA in order to be able to add the particular application to the "uses" listed on the label. However it is entirely legal and professionally proper for physicians to prescribe the medication for any use for which there exists sufficient scientific research from any source, indicating it is or may be efficacious for that purpose. The test of propriety for off-label uses is whether the prescription meets the "standard of practice" among physicians.
In the waning decades of the 20th century, malpractice attorneys seeking to impugn physician defendants began implying that there was something wrong with "off-label" prescribing, warping the actual meaning of the term to suit litigious rather than medical purposes. Since medications are commonly prescribed for purposes not on the label, this was often an easy way to imply that a defendant doctor had done something wrong in the particular case being tried. Not long after that usage became common, insurance companies (particularly "managed care" companies, which are tasked with managing costs rather than care) began citing "off-label" prescribing as a reason not to pay for legitimate medical care, implying that if a use was not on the label it must be "experimental": a convenient and self-serving argument, but untrue. Finally, some younger academics who had accepted the altered meaning of the term developed an air of disapproval toward "off-label" prescribing, which in 2008 remains just as legal, and just as within the bounds of the standard of practice, as it ever was.
After approval of an NDA, the sponsor must review and report to the FDA every adverse drug experience it learns of. Unexpected serious and fatal adverse drug events must be reported within 15 days; other events are reported on a quarterly basis.[11] The FDA also receives adverse drug event reports directly through its MedWatch program.[12] These reports are called "spontaneous reports" because reporting by consumers and health professionals is voluntary. While this remains the primary tool of postmarket safety surveillance, FDA requirements for postmarketing risk management are increasing. As a condition of approval, a sponsor may be required to conduct additional clinical trials, called Phase IV trials. For some drugs, the FDA requires a risk management plan that may provide for other kinds of studies, restrictions, or safety surveillance activities.
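The reporting timelines in the preceding paragraph amount to a two-branch rule. The sketch below restates it for illustration only; the function and parameter names are hypothetical.

```python
from datetime import date, timedelta

def adverse_event_report_due(learned_on: date, serious: bool,
                             fatal: bool, unexpected: bool) -> str:
    """Unexpected serious or fatal adverse drug events must be reported
    within 15 days of the sponsor learning of them; all other events are
    included in the next quarterly report."""
    if unexpected and (serious or fatal):
        return f"15-day report due by {(learned_on + timedelta(days=15)).isoformat()}"
    return "include in next quarterly periodic report"

# Example: an unexpected, serious (non-fatal) event learned of on 1 March 2008
print(adverse_event_report_due(date(2008, 3, 1), serious=True, fatal=False, unexpected=True))
# -> 15-day report due by 2008-03-16
```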
Generic drugs are copies of prescription drugs whose patent protection has expired and which may therefore be manufactured and marketed by other companies. For approval of a generic drug, the FDA requires scientific evidence that it is interchangeable with, or therapeutically equivalent to, the originally approved drug.[13] The application for a generic is called an Abbreviated New Drug Application (ANDA).
Over-the-counter (OTC) drugs are drugs and drug combinations that do not require a doctor's prescription. The FDA maintains a list of approximately 800 approved ingredients that are combined in various ways to create more than 100,000 OTC drug products. Many OTC drug ingredients were previously approved prescription drugs now deemed safe enough for use without a medical practitioner's supervision.[14]
The Center for Biologics Evaluation and Research is the branch of the FDA responsible for ensuring the safety and efficacy of biological therapeutic agents.[15] These include blood and blood products, vaccines, allergenics, cell and tissue-based products, and gene therapy products. New biologics are required to go through a pre-market approval process similar to that for drugs. The original authority for government regulation of biological products was established by the 1902 Biologics Control Act, with additional authority established by the 1944 Public Health Service Act. Along with these Acts, the Federal Food, Drug, and Cosmetic Act applies to all biologic products as well. Originally, the entity responsible for regulation of biological products resided under the National Institutes of Health; this authority was transferred to the FDA in 1972.
The Center for Devices and Radiological Health (CDRH) is the branch of the FDA responsible for the premarket approval of all medical devices, as well as overseeing the manufacturing, performance and safety of these devices.[16] The definition of a medical device is given in the FD&C Act, and it includes products from the simple toothbrush to complex devices such as implantable brain pacemakers. CDRH also oversees the safety performance of non-medical devices which emit certain types of electromagnetic radiation. Examples of CDRH-regulated devices include cellular phones, airport baggage screening equipment, television receivers, microwave ovens, tanning booths, and laser products.
CDRH regulatory powers include the authority to require certain technical reports from the manufacturers or importers of regulated products, to require that radiation-emitting products meet mandatory safety performance standards, to declare regulated products defective, and to order the recall of defective or noncompliant products. CDRH also conducts limited amounts of direct product testing.
Cosmetics are regulated by the Center for Food Safety and Applied Nutrition, the same branch of the FDA that regulates food. Cosmetic products are not generally subject to pre-market approval by the FDA unless they make "structure or function claims" which make them into drugs (see Cosmeceutical). However, all color additives must be specifically approved by the FDA before they can be included in cosmetic products sold in the U.S. The labeling of cosmetics is regulated by the FDA, and cosmetics which have not been subjected to thorough safety testing must bear a warning to that effect.
The Center for Veterinary Medicine (CVM) is the branch of the FDA which regulates food, food additives, and drugs that are given to animals, including food animals and pets. CVM does not regulate vaccines for animals; these are handled by the USDA.
CVM's primary focus is on medications that are used in food animals and ensuring that they do not affect the human food supply. The FDA's requirements to prevent the spread of Mad Cow Disease are also administered by CVM through inspections of feed manufacturers.
On December 19, 2007, the FDA announced plans to create a database to track cloned animals through the food system and enable an effective labeling process.[21] This system will be part of the National Animal Identification System, which will track all livestock in the United States from farm to fork.[22]
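As a rough illustration of what a farm-to-fork tracking record might contain, the sketch below models an animal and its movement history. The field and class names are purely hypothetical and are not based on the actual design of the National Animal Identification System.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MovementEvent:
    date: str           # ISO date the animal changed premises
    premises_id: str    # identifier of the farm, feedlot, or processor

@dataclass
class AnimalRecord:
    animal_id: str      # unique identifier assigned at the farm of origin
    species: str
    cloned: bool        # flag supporting a downstream labeling decision
    movements: List[MovementEvent] = field(default_factory=list)

# Example: a cloned animal moving from its birth farm to a processor
record = AnimalRecord("US-000123", "cattle", cloned=True)
record.movements.append(MovementEvent("2008-01-15", "FARM-42"))
record.movements.append(MovementEvent("2008-06-02", "PROC-07"))
```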
Until the 20th century, there were few federal laws regulating the contents and sale of domestically produced food and pharmaceuticals, one exception being the short-lived Vaccine Act of 1813. A patchwork of state laws provided varying degrees of protection against unethical sales practices, such as misrepresenting the ingredients of food products or therapeutic substances. The history of the FDA can be traced to the latter part of the 19th century and the U.S. Department of Agriculture's Division of Chemistry (later the Bureau of Chemistry). Under Harvey Washington Wiley, appointed chief chemist in 1883, the Division began conducting research into the adulteration and misbranding of food and drugs on the American market. Although it had no regulatory powers, the Division published its findings from 1887 to 1902 in a ten-part series entitled Foods and Food Adulterants. Wiley used these findings, and alliances with diverse organizations such as state regulators, the General Federation of Women's Clubs, and national associations of physicians and pharmacists, to lobby for a new federal law setting uniform standards for food and drugs entering interstate commerce. Wiley's advocacy came at a time when the public had been aroused to hazards in the marketplace by muckraking journalists such as Upton Sinclair, and it became part of a general trend toward increased federal regulation in matters pertinent to public safety during the Progressive Era.[17] The 1902 Biologics Control Act was put in place after diphtheria antitoxin collected from a horse named Jim, which had contracted tetanus, caused several deaths.
In June 1906, President Theodore Roosevelt signed into law the Pure Food and Drug Act, also known as the "Wiley Act" after its chief advocate.[17] The Act prohibited, under penalty of seizure of goods, the interstate transport of food that had been "adulterated", a term referring to the addition of fillers of reduced "quality or strength", coloring to conceal "damage or inferiority", formulation with additives "injurious to health", or the use of "filthy, decomposed, or putrid" substances. The Act applied similar penalties to the interstate marketing of "adulterated" drugs, in which the "standard of strength, quality, or purity" of the active ingredient was neither stated clearly on the label nor listed in the United States Pharmacopoeia or the National Formulary. The Act also banned the "misbranding" of food and drugs.[18] Responsibility for examining food and drugs for such "adulteration" or "misbranding" was given to Wiley's USDA Bureau of Chemistry.[17]
Wiley used these new regulatory powers to pursue an aggressive campaign against the manufacturers of foods with chemical additives, but the Bureau of Chemistry's authority was soon checked by judicial decisions, as well as by the creation of the Board of Food and Drug Inspection and the Referee Board of Consulting Scientific Experts as separate organizations within the USDA in 1907 and 1908, respectively. A 1911 Supreme Court decision ruled that the 1906 Act did not apply to false claims of therapeutic efficacy,[19] in response to which a 1912 amendment added "false and fraudulent" claims of "curative or therapeutic effect" to the Act's definition of "misbranded." These powers continued to be narrowly defined by the courts, however, which set high standards for proof of fraudulent intent.[17] In 1927, the Bureau of Chemistry's regulatory powers were reorganized under a new USDA body, the Food, Drug, and Insecticide Administration. This name was shortened to the Food and Drug Administration (FDA) three years later.[20]
By the 1930s, muckraking journalists, consumer protection organizations, and federal regulators had begun mounting a campaign for stronger regulatory authority by publicizing a list of injurious products that had been ruled permissible under the 1906 law, including radioactive beverages, cosmetics that caused blindness, and worthless "cures" for diabetes and tuberculosis. The resulting proposed law was unable to get through the Congress of the United States for five years, but it was rapidly enacted following the public outcry over the 1937 Elixir Sulfanilamide tragedy, in which over 100 people died after using a drug formulated with a toxic, untested solvent. The FDA was able to seize the product only because of a misbranding violation: an "elixir" was defined as a medication dissolved in ethanol, not the diethylene glycol used in Elixir Sulfanilamide.
President Franklin Delano Roosevelt signed the new Food, Drug, and Cosmetic Act (FD&C Act) into law on June 24, 1938. The new law significantly increased federal regulatory authority over drugs by mandating a pre-market review of the safety of all new drugs, as well as banning false therapeutic claims in drug labeling without requiring that the FDA prove fraudulent intent. The law also authorized factory inspections and expanded enforcement powers, set new regulatory standards for foods, and brought cosmetics and therapeutic devices under federal regulatory authority. This law, though extensively amended in subsequent years, remains the central foundation of FDA regulatory authority to the present day.[17]
Soon after passage of the 1938 Act, the FDA began to designate certain drugs as safe for use only under the supervision of a medical professional, and the category of "prescription-only" drugs was securely codified into law by the 1951 Durham-Humphrey Amendment.[17] While pre-market testing of drug efficacy was not authorized under the 1938 FD&C Act, subsequent amendments such as the Insulin Amendment and Penicillin Amendment did mandate potency testing for formulations of specific lifesaving pharmaceuticals.[20] The FDA began enforcing its new powers against drug manufacturers who could not substantiate the efficacy claims made for their drugs, and the 1950 Court of Appeals ruling in Alberty Food Products Co. v. U.S. found that drug manufacturers could not evade the "false therapeutic claims" provision of the 1938 Act simply by omitting the intended use of a drug from its label. These developments confirmed extensive powers for the FDA to enforce post-marketing recalls of ineffective drugs.[17] Much of the FDA's regulatory attention in this era was directed toward abuse of amphetamines and barbiturates, but the agency also reviewed some 13,000 new drug applications between 1938 and 1962. While the science of toxicology was in its infancy at the start of this era, rapid advances in experimental assays for food additive and drug safety testing were made during this period by FDA regulators and others.[17]
In 1959, Senator Estes Kefauver began holding congressional hearings into concerns about pharmaceutical industry practices, such as the perceived high cost and uncertain efficacy of many drugs promoted by manufacturers. There was significant opposition, however, to calls for a new law expanding the FDA's authority. This climate was rapidly changed by the thalidomide tragedy, in which thousands of European babies were born deformed after their mothers took that drug - marketed for treatment of nausea - during their pregnancies. Thalidomide had not been approved for use in the U.S. due to the concerns of an FDA reviewer, Frances Oldham Kelsey. However, thousands of "trial samples" had been sent to American doctors during the "clinical investigation" phase of the drug's development, which at the time was entirely unregulated by the FDA. Individual members of Congress cited the thalidomide incident in lending their support to expansion of FDA authority.[21]
The 1962 Kefauver-Harris Amendment to the FD&C act represented a "revolution" in FDA regulatory authority.[22] The most important change was the requirement that all new drug applications demonstrate "substantial evidence" of the drug's efficacy for a marketed indication, in addition to the existing requirement for pre-marketing demonstration of safety. This marked the start of the FDA approval process in its modern form. Drugs approved between 1938 and 1962 were also subject to FDA review of their efficacy, and to potential withdrawal from the market. Other important provisions of the 1962 amendments included the requirement that drug companies use the "established" or "generic" name of a drug along with the trade name, the restriction of drug advertising to FDA-approved indications, and expansion of FDA powers to inspect drug manufacturing facilities.
One of the most important statutes in establishing the modern American pharmaceutical market was the 1984 Drug Price Competition and Patent Term Restoration Act, more commonly known as the "Hatch-Waxman Act" after its chief sponsors. This act was intended to correct two unfortunate interactions between the new regulations mandated by the 1962 amendments, and existing patent law (which is not regulated or enforced by the FDA, but rather by the United States Patent and Trademark Office). Because the additional clinical trials mandated by the 1962 amendments significantly delayed the marketing of new drugs, without extending the duration of the manufacturer's patent, "pioneer" drug manufacturers experienced a decreased period of lucrative market exclusivity. On the other hand, the new regulations could be interpreted to require complete safety and efficacy testing for generic copies of approved drugs, and "pioneer" manufacturers obtained court decisions which prevented generic manufacturers from even beginning the clinical trial process while a drug was still under patent. The Hatch-Waxman Act was intended as a compromise between the "pioneer" and generic drug manufacturers which would reduce the overall cost of bringing generics to market and thus, it was hoped, reduce the long-term price of the drug, while preserving the overall profitability of developing new drugs. The act extended the patent exclusivity terms of new drugs, and importantly tied those extensions, in part, to the length of the FDA approval process for each individual drug. For generic manufacturers, the Act created a new approval mechanism, the Abbreviated New Drug Application (ANDA), in which the generic drug manufacturer need only demonstrate that their generic formulation has the same active ingredient, route of administration, dosage form, strength, and pharmacokinetic properties ("bioequivalence") as the corresponding brand-name drug. This act has been credited with essentially creating the modern generic drug industry.[23]
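The ANDA showing described above reduces to a comparison of a few attributes against the brand-name ("reference") product. The sketch below restates those criteria; the names are hypothetical, and the pharmacokinetic comparison is collapsed into a single boolean for illustration.

```python
from dataclasses import dataclass

@dataclass
class DrugListing:
    active_ingredient: str
    route: str           # e.g. "oral"
    dosage_form: str     # e.g. "tablet"
    strength: str        # e.g. "20 mg"

def anda_criteria_met(generic: DrugListing, reference: DrugListing,
                      bioequivalent: bool) -> bool:
    """The generic must match the reference drug's active ingredient, route
    of administration, dosage form, and strength, and must demonstrate
    bioequivalence (the same pharmacokinetic properties)."""
    return (
        generic.active_ingredient == reference.active_ingredient
        and generic.route == reference.route
        and generic.dosage_form == reference.dosage_form
        and generic.strength == reference.strength
        and bioequivalent
    )
```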
Concerns about the length of the drug approval process were brought to the fore early in the AIDS epidemic. In the mid- and late 1980s, ACT-UP and other HIV activist organizations accused the FDA of unnecessarily delaying the approval of medications to fight HIV and opportunistic infections, and staged large protests, such as a confrontational October 11, 1988 action at the FDA campus which resulted in nearly 180 arrests.[24] In August 1990, Dr. Louis Lasagna, then chairman of a presidential advisory panel on drug approval, estimated that thousands of lives were lost each year due to delays in approval and marketing of drugs for cancer and AIDS.[25]
Partly in response to these criticisms, the FDA issued new rules to expedite approval of drugs for life threatening diseases, and expanded pre-approval access to drugs for patients with limited treatment options.[26] The first of these new rules was the "IND exemption" or "treatment IND" rule, which allowed expanded access to a drug undergoing phase II or III trials (or in extraordinary cases even earlier) if it potentially represented a safer or better alternative to treatments currently available for terminal or serious illness. A second new rule, the "parallel track policy", allowed a drug company to set up a mechanism for access to a new potentially lifesaving drug by patients who for various reasons would be unable to participate in ongoing clinical trials. The "parallel track" designation could be made at the time of IND submission. The accelerated approval rules were further expanded and codified in 1992.[27]
All of the initial drugs approved for the treatment of HIV/AIDS were approved through accelerated approval mechanisms. For example, a "treatment IND" was issued for the first HIV drug, AZT, in 1985, and approval was granted just two years later in 1987.[28] Three of the first five drugs targeting HIV were approved in the United States before they were approved in any other country.[29]
A 2006 court case, Abigail Alliance v. von Eschenbach, threatened to force radical changes in FDA regulation of unapproved drugs. The Abigail Alliance argued that the FDA must license drugs for use by terminally ill patients with "desperate diagnoses" once the drugs have completed Phase I testing.[30] The case won an initial appeal in May 2006, but that decision was reversed by a March 2007 rehearing. The U.S. Supreme Court declined to hear the case, and the final decision denied the existence of a right to unapproved medications.
The widely publicized recall of Vioxx, a non-steroidal anti-inflammatory drug now estimated to have contributed to fatal heart attacks in thousands of Americans, played a strong role in driving a new wave of safety reforms at both the FDA rulemaking and statutory levels. Vioxx was approved by the FDA in 1999 and was initially hoped to be safer than previous NSAIDs, due to its reduced risk of gastrointestinal tract bleeding. However, a number of pre- and post-marketing studies suggested that Vioxx might increase the risk of myocardial infarction, and this was conclusively demonstrated by results from the APPROVe trial in 2004.[31] Faced with numerous lawsuits, the manufacturer voluntarily withdrew it from the market. The example of Vioxx has been prominent in an ongoing debate over whether new drugs should be evaluated on the basis of their absolute safety, or their safety relative to existing treatments for a given condition. In the wake of the Vioxx recall, there were widespread calls by major newspapers, medical journals, consumer advocacy organizations, lawmakers, and FDA officials[32] for reforms in the FDA's procedures for pre- and post-market drug safety regulation.
In 2006, a congressionally requested committee was appointed by the Institute of Medicine to review pharmaceutical safety regulation in the U.S. and to issue recommendations for improvements. The committee was composed of 16 experts, including leaders in clinical medicine, medical research, economics, biostatistics, law, public policy, public health, and the allied health professions, as well as current and former executives from the pharmaceutical, hospital, and health insurance industries. The authors found major deficiencies in the FDA's system for ensuring the safety of drugs on the American market and called for an increase in the agency's regulatory powers, funding, and independence.[33][34] Some of the committee's recommendations were incorporated into drafts of the PDUFA IV bill, which Congress passed in 2007.
Prior to the 1990s, only 20% of all drugs prescribed for children in the United States had been tested for safety or efficacy in a pediatric population. This became a major concern of pediatricians as evidence accumulated that the physiological response of children to many drugs differed significantly from those drugs' effects on adults. The reasons for the dearth of clinical drug testing in children were multifactorial: for many drugs, children represented such a small proportion of the potential market that manufacturers did not see such testing as cost-effective, and because children were thought to be ethically restricted in their ability to give informed consent, there were greater governmental and institutional hurdles to approving these clinical trials, as well as greater concerns about legal liability. Thus, for decades, most medicines prescribed to children in the U.S. were prescribed in a non-FDA-approved, "off-label" manner, with dosages "extrapolated" from adult data through body weight and body-surface-area calculations.[35]
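The "extrapolation" mentioned above typically scales an adult dose by body weight or by body surface area. The sketch below illustrates the idea using the Mosteller formula for body surface area; the choice of formula and the reference values (a 70 kg adult, 1.73 m² adult body surface area) are illustrative assumptions, not FDA dosing guidance.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) via the Mosteller formula:
    sqrt(height[cm] * weight[kg] / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def dose_by_weight(adult_dose_mg: float, child_weight_kg: float,
                   adult_weight_kg: float = 70.0) -> float:
    """Scale an adult dose linearly by body weight (illustrative only)."""
    return adult_dose_mg * child_weight_kg / adult_weight_kg

def dose_by_bsa(adult_dose_mg: float, child_bsa_m2: float,
                adult_bsa_m2: float = 1.73) -> float:
    """Scale an adult dose by body surface area (illustrative only)."""
    return adult_dose_mg * child_bsa_m2 / adult_bsa_m2

# Example: a 110 cm, 20 kg child and a hypothetical 400 mg adult dose
child_bsa = bsa_mosteller(110, 20)          # ~0.78 m^2
print(round(dose_by_weight(400, 20)))       # ~114 mg
print(round(dose_by_bsa(400, child_bsa)))   # ~181 mg
```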
An initial attempt by the FDA to address this issue was the 1994 FDA Final Rule on Pediatric Labeling and Extrapolation, which allowed manufacturers to add pediatric labeling information but required drugs that had not been tested for pediatric safety and efficacy to bear a disclaimer to that effect. This rule failed to motivate many drug companies to conduct additional pediatric drug trials, however. In 1997, the FDA proposed a rule to require pediatric drug trials from the sponsors of New Drug Applications, but the rule was successfully challenged in federal court as exceeding the FDA's statutory authority. While this debate was unfolding, Congress used the 1997 Food and Drug Administration Modernization Act to pass incentives giving pharmaceutical manufacturers a six-month patent term extension on new drugs submitted with pediatric trial data. The act reauthorizing these provisions, the 2002 Best Pharmaceuticals for Children Act, allowed the FDA to request NIH-sponsored pediatric drug testing, although these requests are subject to NIH funding constraints. Most recently, in the Pediatric Research Equity Act of 2003, Congress codified the FDA's authority to mandate manufacturer-sponsored pediatric drug trials for certain drugs as a "last resort" if incentives and publicly funded mechanisms proved inadequate.[35]
Since the 1990s, many successful new drugs for the treatment of cancer, autoimmune diseases, and other conditions have been protein-based biotechnology drugs, regulated by the Center for Biologics Evaluation and Research. Many of these drugs are extremely expensive; for example, the anti-cancer drug Avastin costs $55,000 for a year of treatment, while the enzyme replacement therapy drug Cerezyme costs $200,000 per year, and must be taken by Gaucher's Disease patients for life. Biotechnology drugs do not have the simple, readily verifiable chemical structures of conventional drugs, and are produced through complex, often proprietary techniques, such as transgenic mammalian cell cultures. Because of these complexities, the 1984 Hatch-Waxman Act did not include biologics in the Abbreviated New Drug Application (ANDA) process, essentially precluding the possibility of generic drug competition for biotechnology drugs. In February 2007, identical bills were introduced into the House[36] and Senate[37] with bipartisan cosponsorship to create an ANDA process for the approval of generic biologics. The bills face opposition from biologic drug manufacturers, and other lawmakers are working to create compromise legislation.[38]
The FDA currently has regulatory oversight over a large array of products that affect the health and life of American citizens.[17] As a result, the FDA's powers and decisions are carefully monitored by several governmental and non-governmental organizations. There are many criticisms and complaints lodged against the FDA from patients, economists, regulatory bodies, and the pharmaceutical industry.
With acceptance of premarket notification 510(k) 033391 in January 2004, the FDA granted Dr. Ronald Sherman permission to produce and market medical maggots for use in humans or other animals as a prescription medical device in a medical procedure known as maggot therapy. The specific indications for which the FDA has allowed medical maggots to be employed are as follows:
"For debriding non-healing necrotic skin and soft tissue wounds, including pressure ulcers, venous stasis ulcers, neuropathic foot ulcers, and non-healing traumatic or post surgical wounds."
Medical maggots were the first living organism ever allowed by the Food and Drug Administration for production and marketing as a prescription medical device.
In June 2004, the FDA cleared leeches (Hirudo medicinalis) as the second living organism to be used in modern medicine as medical devices. Surgeons who do plastic and reconstructive surgery find leeches especially valuable when regrafting amputated appendages, such as fingers or toes. Severed veins in such cases often are too damaged or too small to reconnect. In these cases, pooled blood around a wound can threaten tissue survival. The primary function of leeches is to drain the pooled blood and increase the chances of survival of the tissue in reattached appendages.