"Quality is more than just whether you live or die," challenged Cleveland Clinic CEO Toby Cosgrove at a Nashville Health Care Council briefing last week.
If we are to genuinely improve quality in the future, we need to confront some fundamental truths about the state of quality in health care today.
This is particularly evident in the area of clinical quality: how we talk about quality, how we measure it, how we present it, and what we believe about our own performance and that of our peers.
1. Most providers don’t know the level of quality they provide, but assume it’s excellent
I regularly visit hospitals devoid of useful quality data and processes for ongoing quality improvement. It’s not difficult to pick up on this after a day onsite: reports aren’t circulated; data elements are randomly selected; the retrospective data are ancient; there is no connection between data on a page and actionable steps toward improvement; quality committees meet monthly or quarterly with spotty attendance, and so on.
Yet I’m uniformly told by staff that the hospital provides superior care. It’s simply part of their culture. Something they’re proud of.
But they have no way of objectively demonstrating what they believe to be true.
2. External-agency quality awards are imperfect and sometimes misleading indicators of quality
Most hospital main entrances display banners, plaques, and lucite blocks in glass cases awarded for quality by external parties—everything from popular magazines to standing commissions. They are presented to the public as evidence of high-quality care determined by an objective reviewer using meaningful criteria.
Yet health care providers know these awards fall short of this ideal. Stacks of literature detail methodological shortcomings of popular quality awards. Even for the best-documented awards, nearly all their metrics relate to process, not outcomes. And the award selection processes are not beyond reproach: Many of these external agencies are in the "awards business" for commercial or organizational self-interests.
To be sure, efforts made to prepare for and pursue this form of recognition may contribute to improved quality performance. But process improvements may or may not translate into better actual outcomes.
In the worst cases, these awards—and the importance and legitimacy bestowed on them—lead providers to give too much credence to what they purport to represent and therefore to declare "mission accomplished" on quality, de-emphasizing critically important improvement efforts.
3. Even government-sanctioned core measures are highly imperfect indicators of quality
Most hospitals directly equate "quality" with performance on CMS core measure sets. But the core measures provide only a limited window into actual quality performance; it is entirely possible to score well on core measures but still provide substandard care. Here too, process measures significantly outnumber outcomes measures.
For example, hospitals report PCI mortality, which, besides being a rare occurrence, doesn’t tell us if the intervention was appropriate in the first place. They report hospital readmissions, which tell us little about effectiveness in managing chronic disease from a longitudinal perspective.
The prerequisite for quality: Physician performance analytics
Admittedly, clinical quality is difficult to capture and quantify, and we’re still a long way from getting it right. Still, we can do far better than observational assessments and process-oriented surrogate measures.
Physician behavior has the greatest influence on clinical quality. What physicians do, and don’t do, for a given patient can outweigh even other important factors, such as the caliber of nursing care.
Yet today in health care we have very little insight into what physicians really do. We have EMRs that house data. We hand-craft core measures. We bring select performance data to monthly department meetings or M&M conferences. But none of this tells us, by individual physician, on a case-by-case basis, across time, utilizing dozens of relevant metrics, refreshed continuously, with risk-adjustment and attribution, how that one physician practices medicine and to what ends.
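The per-physician, risk-adjusted view described above can be illustrated with a minimal sketch. This is not any vendor's method: the data are synthetic, the field names are hypothetical, and a simple observed-to-expected (O/E) ratio stands in for the far richer risk models, attribution logic, and continuous refresh a real analytics platform would use.

```python
# Illustrative sketch only: synthetic case-level data and a simple
# observed-to-expected (O/E) ratio as the risk-adjustment device.
from collections import defaultdict

# Each case: attributed physician, whether a complication occurred (observed),
# and a model-predicted probability of complication given patient risk (expected).
cases = [
    {"physician": "Dr. A", "complication": 1, "expected_risk": 0.10},
    {"physician": "Dr. A", "complication": 0, "expected_risk": 0.05},
    {"physician": "Dr. A", "complication": 0, "expected_risk": 0.20},
    {"physician": "Dr. B", "complication": 1, "expected_risk": 0.08},
    {"physician": "Dr. B", "complication": 1, "expected_risk": 0.12},
    {"physician": "Dr. B", "complication": 0, "expected_risk": 0.10},
]

def oe_ratios(cases):
    """Sum observed and expected events per physician; return O/E ratios.

    O/E > 1 suggests worse-than-expected outcomes after risk adjustment;
    O/E < 1 suggests better-than-expected.
    """
    observed = defaultdict(float)
    expected = defaultdict(float)
    for c in cases:
        observed[c["physician"]] += c["complication"]
        expected[c["physician"]] += c["expected_risk"]
    return {p: observed[p] / expected[p] for p in observed if expected[p] > 0}

for physician, ratio in sorted(oe_ratios(cases).items()):
    print(f"{physician}: O/E = {ratio:.2f}")
```

The point of the risk adjustment is in the denominator: a physician who takes sicker patients accumulates higher expected risk, so raw complication counts alone would unfairly penalize exactly the proceduralists handling the hardest cases.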
Unless we perform this type of profiling, and apply the learning to continuously improve clinical practice—with physicians themselves as the primary users and drivers—we will never make the gains in quality that we should. We won’t be able to identify, understand, and rein in variation; we won’t be able to segment gifted from mediocre proceduralists and take corrective actions; we won’t be able to understand if what we do to patients produces benefits in excess of costs and risks; and we won’t have the visibility necessary to implement evidence-based medicine.
Just a few years ago, providers did not employ physician performance analytics because the available technologies were rudimentary—incapable, really, of bringing to bear a critical mass of relevant and sufficiently accurate data to build a useful profile.
But that’s not the case today—these analytic tools exist and they are up to the task.
So why, then, despite the availability of breakthrough technology, do only a fraction of provider organizations currently employ these tools—and of this subgroup, why do fewer still employ them religiously and drive performance improvement as guided by the data?
In part, I believe, the answer goes back to where we began—with misguided notions of clinical excellence based on ungrounded or partial evidence. But beyond this, there’s a bigger misconception at work: the belief among providers that physician performance analytics are optional—a capability they can choose to invest in or not. As if the knowledge gained were only peripherally related to quality, rather than its bedrock.
The truth is, not investing in and embracing this capability means these providers lack the basic infrastructure necessary to deliver high-quality care. And it also signals a questionable commitment to quality by the organization’s leadership.
It’s really this simple—black and white, no equivocation. Will we allow ourselves to see the truth here? Or will we cling to the notion "we’re a high-quality provider—just believe me"?