The idea that medical practice should be "evidence-based" may seem like common sense—but the term has sparked a surprisingly intense battle among physicians and health industry stakeholders, Aaron Carroll writes in the New York Times' "The Upshot."
According to Carroll, the term "evidence-based" is relatively new in health care. It was popularized in a 1997 handbook by David Sackett, titled "Evidence-Based Medicine," that explored the best ways to utilize statistics in medical treatment.
Before Sackett, Carroll writes, providers largely relied on clinical experience. "Doctors tried to figure out what worked by trial and error, and they passed their knowledge along to those who trained under them," Carroll writes.
Sackett's work introduced many physicians to rigorous mathematical ideas that have now become commonplace: the difference between absolute and relative risk, for instance, and the meaning of statistical terms such as "sensitivity" and "specificity."
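These statistical terms have precise definitions. As a rough sketch (the numbers below are invented for illustration and do not come from Carroll's article or Sackett's handbook), sensitivity and specificity describe how a diagnostic test performs on sick and healthy patients respectively, while absolute and relative risk describe the size of a treatment effect in two very different ways:

```python
# Hypothetical numbers for illustration only.

# Diagnostic test: sensitivity and specificity.
true_pos, false_neg = 90, 10    # 100 people who have the disease
true_neg, false_pos = 180, 20   # 200 people who do not

sensitivity = true_pos / (true_pos + false_neg)  # share of sick patients correctly flagged
specificity = true_neg / (true_neg + false_pos)  # share of healthy patients correctly cleared

# Treatment effect: absolute vs. relative risk reduction.
risk_control = 0.02   # 2% of untreated patients have the bad outcome
risk_treated = 0.01   # 1% of treated patients do

absolute_risk_reduction = risk_control - risk_treated             # 1 percentage point
relative_risk_reduction = absolute_risk_reduction / risk_control  # "cuts the risk in half"

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"ARR={absolute_risk_reduction:.3f}, RRR={relative_risk_reduction:.2f}")
```

The gap between the two risk measures is the point: "cuts risk in half" sounds dramatic, but here it means one fewer bad outcome per hundred patients treated.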
But Sackett's work also sparked a decades-long debate about the value of "evidence-based medicine"—a term, Carroll writes, that "causes more arguments than you might expect."
Optimists about evidence-based medicine, Carroll writes, argue that strong medical evidence can form the foundation of best practices that can ultimately "address the problems of cost, quality, and access that bedevil the health care system."
But not everyone is convinced that evidence, as currently used, is sufficient to drive most medical practices.
Critics, Carroll writes, often "point to weak evidence behind many guidelines" and raise concerns about stakeholders' influence on those guidelines. They also worry that requiring doctors to follow rigid treatment guidelines could produce "a cookbook approach [that] removes focus from the individual patient."
According to Carroll, "Everyone is a bit right here, and everyone is a bit wrong."
When used properly, Carroll writes, data can help doctors make better diagnoses, better treatment choices, and better widespread recommendations. But doctors should resist the temptation to over-extrapolate data beyond the realm where it was collected: "Just because something worked in a particular population doesn't mean we should do the same things to another group and say that we have evidence for it," he writes.
Further, Carroll adds, "There is a difference between statistical significance and clinical significance. Get a large enough cohort together, and you will achieve the former." But, he writes, "That by itself does not ensure that the result achieves clinical significance and should alter clinical practice."
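Carroll's point about cohort size can be made concrete. In this sketch (the rates and sample sizes are invented for illustration, not drawn from any study), a two-proportion z-test computed by hand shows a clinically trivial 0.3-percentage-point difference that is nowhere near significant in a 1,000-patient trial but sails past the conventional |z| > 1.96 threshold once a million patients are enrolled:

```python
import math

def two_proportion_z(p1, p2, n1, n2):
    """z statistic for the difference between two observed proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Outcome rates of 10.0% vs. 10.3%: a gap few clinicians would act on.
p_treated, p_control = 0.100, 0.103

z_small = two_proportion_z(p_treated, p_control, 1_000, 1_000)          # small trial
z_large = two_proportion_z(p_treated, p_control, 1_000_000, 1_000_000)  # huge cohort

print(f"n=1,000 per arm:     z = {z_small:.2f}")
print(f"n=1,000,000 per arm: z = {z_large:.2f}")
```

The effect size never changes; only the standard error shrinks. That is why a tiny p-value alone cannot tell you whether a result should alter clinical practice.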
And there's a danger that, as the term "evidence-based" grows more popular, some stakeholders may deploy it to describe recommendations that aren't truly grounded in evidence—whether because the evidence is ill-suited to the claim being made, or because the research cited isn't high-quality.
Carroll writes that, "If evidence-based medicine is to live up to its potential," then medical professionals must focus on properly explaining what data has revealed about risks and the specific actions that people can take to avoid them, rather than "taking best guesses and calling them evidence-based" (Carroll, "The Upshot," New York Times, 12/27/17).
Despite the shift toward broad acceptance of evidence-based practice (EBP) among medical staff, over half of physicians report not actually using guidelines day-to-day when they are available. As a result, organizations continue to see tremendous variation in clinical practice—as well as in costs and outcomes.
Our infographic outlines four principles you can use to support EBP at your organization, along with action steps to implement each one and pitfalls to avoid along the way.