How strong is the "evidence" in evidence-based practice?
It's a deceptively simple question, but a vitally important one: health care policymakers and reformers have generally proceeded under the belief that quality would improve and costs would fall if only health care providers adhered more consistently to the guidelines promulgated by medical societies and professional journals.
In the spirit of full disclosure, our Advisory Board research experts are in this camp as well. The Physician Executive Council has developed extensive resources (for example, this recent white paper) to help chief medical officers promote evidence-based practice at their organizations. And, for that matter, our Crimson analytical tools help physicians and clinical leaders assess guideline compliance as well as actual outcomes.
But the truth is—as clinical leaders well know—guidelines aren't unassailable.
Every so often, the medical community comes across a recommendation that seems virtually impossible to contest; quality guru Peter Pronovost's checklists for how to eliminate central line infections in the intensive care unit represent one example. In such cases, though, the outcomes of guideline compliance are tangible and time-limited. ICUs that implement the central line checklists can tell rapidly that the checklists work because the rate of central line infections drops precipitously—often to zero.
Even in those cases, real life often complicates our certainty. A JAMA study garnered widespread attention last week when it found that the use of surgical checklists doesn't always result in quality improvements.
For other medical scenarios, the evidence in favor of guidelines is even less ironclad, and there are legitimate reasons for physicians, patients, and health care leaders to be skeptical of even the best-endorsed clinical recommendation.
What's the best for the population? Or the individual?
First of all, it can be difficult to reconcile what is best for the population as a whole with what's best for an individual patient. Physicians, in particular, struggle to balance these sometimes opposing points of view.
This tension between the population and the individual, more than anything else, is what leads to intermittent anti-guideline backlashes, where physicians express their distaste for "cookbook medicine" in person and in print.
Writing in the New York Times recently, Harvard physicians Pamela Hartzband and Jerome Groopman argued that the guidelines associated with pay-for-performance programs are "population-based and generic, and do not take into account the individual characteristics and preferences of the patient or differing expert opinions on optimal practice." They contend, eloquently, that because real patients are complex and varied, caring for them properly demands the superior diagnostic skill of a trained physician, not the simplistic algorithm of a clinical guideline.
The tension between individual and population perspectives manifests itself even in something as simple as the annual well patient visit. Guidelines today state that an annual physician exam is not necessary for well patients, a position that stirred up another round of debate recently after Obamacare architect and medical ethicist (and celebrity sibling) Zeke Emanuel wrote an opinion piece disparaging the annual checkup.
The published evidence is on Emanuel's side: a 2012 meta-analysis of published trials concluded that annual physical exams do not improve outcomes at the population level. Yet many primary care doctors can point to compelling patient stories where they discovered potentially fatal but asymptomatic conditions in the course of doing a routine exam.
Who's right in that scenario?
It's easy for someone taking a population-level view of the health system to say that the annual physical is of insufficient value because its benefits don't outweigh its costs. But if you're the patient who avoided a serious health consequence because of something that a doctor discovered in a routine physical—or the doctor devoted to the interests of that patient—you'd probably feel quite differently.
One could debate whether the patient was really likely to suffer a serious health event, or whether the risk might have been detected another way. But it's hard to reject physicians' perspectives on this score altogether, because their concerns stem from a philosophical commitment to delivering the best possible care to each individual patient.
The limitations of clinical knowledge
There is another reason to be cautious about clinical guidelines: the limitations of our collective clinical knowledge, as represented by the major source of that knowledge, the "gold standard" of prospective, controlled clinical trials.
I was exposed to these limitations early in my career, when I was a reporter covering FDA reviews of new pharmaceutical products. Back then, I spent days on end watching drug companies present data to panels of scientific experts and FDA reviewers. In nearly every one of these FDA advisory committee discussions, even the ones where the drugs in question were ultimately recommended for approval, I left the meeting not completely sure of whether the drugs were safe or effective for their intended purpose.
That wasn't because these medicines just didn't work. Rather, it was because despite the hundreds of millions of dollars the drug companies spent on late-stage trials, the reviewers always seemed to find a flaw in the evidence. Sometimes the treatment and control groups didn't end up perfectly comparable. Or the manufacturer was studying a "surrogate marker" of effectiveness that was imperfectly correlated with the ultimate outcome. Or the study design was somehow mathematically questionable.
The pharmaceutical companies were hardly slacking off. These advisory committee presentations were the culmination of years of work by hundreds of people. But their evidence still seemed fragile.
The ensuing years have, if anything, reinforced my impression that even the most hard-won clinical evidence is imperfect and incomplete. To cite just one well-known example, Merck's painkiller Vioxx went through extensive clinical testing and analysis before it reached the market, but in the end it turned out to have dangerous cardiovascular side effects.
If a small army of researchers, years of work, and a large fortune's investment can't tell us for sure whether a treatment is safe and effective, what hope is there, really, for certainty in anything?
The case for skeptical review
I'm overstating the point, of course, but our track record should give us pause about uncritically accepting treatment recommendations and guidelines.
That can be especially challenging these days, since the government and insurers have waded into the fray, not just by endorsing certain guideline-based treatment recommendations, but also by incorporating them into payment schemes and incentives. In 2015, Medicare's Value-Based Purchasing program will reward or penalize hospitals based on how well they perform on 13 process measures, such as "prophylactic antibiotic received within one hour prior to surgical incision."
The good news is that these government-sanctioned protocols have been chosen in part because so many people agree that they are appropriate. But the endorsement of the federal government and widespread consensus are hardly an indication of infallibility. To pick one public health example: decades ago, the federal government played a leading role in the national campaign to reduce saturated fat intake—a recommendation that inadvertently led Americans to consume more sugars and simple carbohydrates, and that has since been reframed by various medical authorities.
So, even these seemingly uncontroversial pay-for-performance recommendations deserve a modicum of scrutiny.
What's more, it's appropriate in many cases for this scrutiny to come from local physicians, at the local level. I've often heard health care consultants and administrators explain the importance of involving physicians in developing guidelines on the grounds that physicians who don't participate are unlikely to comply with the result. That may be true, but it's not the most important reason to involve them.
Guidelines are not gospel, and I would argue that the quality of care will improve if even the most common guidelines are submitted to the informed review of knowledgeable clinicians before they make their way into practice.
Even more importantly, as physicians and health care leaders succeed in standardizing care processes, and also get access to data and analytics that allow them to correlate treatment choices with patient outcomes, they will be able to assess the real-life impact of their practice choices. That may lead them toward better compliance with published guidelines, or it may reinforce their principled resistance. Either way, I'm hoping it results in better care.