When a drugmaker seeks approval for a new drug from FDA, it must prove that its medication is safe and effective—but it doesn't have to answer an equally critical question: Is the drug more effective than other ways of treating the same condition? Writing for the New York Times' "The Upshot," Aaron Carroll explains why such a simple, practical question so often goes unexplored in medicine.
Why comparative effectiveness research is critical
The FDA's requirement that a drug be proven safe and effective is a "meaningful bar," Carroll writes, but it offers only limited guidance in day-to-day medical practice.
Carroll cites the example of antibiotics: "Which drug is the best first-line therapy for which common illnesses? We don't know," he writes. "How long should we treat for different infections? We don't know. What are the relative trade-offs between benefits and side effects in different patients in different circumstances? We don't know."
That knowledge gap is especially significant given that FDA-approved treatments for the same condition may vary widely in cost and carry very different risks of side effects.
Despite the value of comparative effectiveness research, it remains relatively rare. While drugmakers generally must fund the studies required to prove to FDA that their medications are safe and effective, there's no such clear-cut source of funding for comparative effectiveness studies.
But Joe Selby, the executive director of the Patient-Centered Outcomes Research Institute, argued that it's time for that to change: "It's essential that we learn to ask and answer practical questions about the comparative clinical effectiveness of therapy options in the course of everyday care," he said.
A case study in how comparative effectiveness research can improve care
One of the most prominent examples of comparative effectiveness research was a 2002 study published in JAMA that compared four medications to treat hypertension.
Researchers recruited more than 33,000 participants at 623 centers in the United States and Canada and randomly assigned each participant to take one of four FDA-approved drugs:
- Amlodipine, a calcium channel blocker;
- Chlorthalidone, a diuretic;
- Doxazosin, an alpha-blocker that relaxes and widens blood vessels by blocking the effects of adrenaline; or
- Lisinopril, an ACE inhibitor that blocks the enzyme that produces angiotensin, a hormone that tightens blood vessels.
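The randomization at the heart of this kind of trial can be pictured with a short sketch. This is an illustrative toy example, not the study's actual protocol (the real trial stratified assignments by center and did not necessarily allocate arms equally); the function name and equal allocation are assumptions for demonstration.

```python
import random

# Hypothetical sketch of randomized treatment assignment across four arms,
# loosely modeled on the trial's design (not the actual allocation scheme).
DRUGS = ["amlodipine", "chlorthalidone", "doxazosin", "lisinopril"]

def assign_arms(n_participants, seed=0):
    """Randomly assign each participant to one of the four drug arms."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    return [rng.choice(DRUGS) for _ in range(n_participants)]

assignments = assign_arms(33000)
# With ~33,000 participants, chance alone puts roughly a quarter of the
# cohort in each arm, so the groups are comparable before treatment begins.
```

The point of assigning by chance rather than by physician or patient choice is that any later difference in outcomes between the arms can be attributed to the drugs themselves rather than to pre-existing differences between the groups.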
After observing the participants for an average of about four years, the researchers found no differences between the groups in rates of death from coronary heart disease or non-fatal heart attacks.
They did find, however, that chlorthalidone lowered systolic blood pressure more than two of the other drugs and that it was better at preventing heart failure and stroke, as well as at lowering rates of cardiovascular disease, Carroll writes. Notably, chlorthalidone was also the cheapest option.
But while the JAMA study illustrates the promise of comparative effectiveness research, it also reveals some of the limits of the technique, Carroll writes. The study was extremely complex and cost more than $100 million. It also attracted a "stunning" array of pushback from other researchers, and because many new drugs have entered the market since its publication, its guidance for doctors on the best drugs to use has become "somewhat murky once again."
Even so, Carroll writes, comparative effectiveness research can play a critical role in guiding medical treatment. Without it, he concludes, "too many important questions that concern patients will remain unanswered" (Carroll, "The Upshot," New York Times, 8/20).
Learn more: Your cheat sheets on evidence-based medicine
Been a while since your last statistics class? It can be difficult to judge the quality of studies, the significance of data, or the importance of new findings when you don't know the basics.
Download our cheat sheets to get a quick, one-page refresher on some of the foundational components of evidence-based medicine.
- Evidence-based practice (EBP)
- Observational studies
- Randomized controlled trials (RCTs)
- Systematic reviews
- Statistical significance