They found that 844 different hospitals were ranked by at least one organization—but just 10% of those hospitals were named to all four lists. And no hospital was named a "top performer" by all four services.
In part, that's because the ranking services relied on significantly different metrics and weights, which creates frequent paradoxes. For instance, more than 40% of hospitals classified as having below-average mortality by one method were simultaneously considered to have above-average mortality by another method.
"The measures were so divergent that 27 hospitals were simultaneously rated among the nation's best by one service and among the worst by another," Melinda Beck writes at the Wall Street Journal.
This variation isn't a surprise. In a 2013 Kaiser Health News analysis, Jordan Rau found that more than 1,600 U.S. hospitals were honored by one service or another. In Baltimore, more than 85% of hospitals won awards.
But it increasingly matters, especially in a world where more consumers are shopping for their own care—and sorely need good, useful rankings to help steer their decisions.
"A consumer could consult four different lists and come up with four different results," lead author Matt Austin told Modern Healthcare. (Austin's an assistant professor at the Armstrong Institute for Patient Safety at Johns Hopkins.)
And the constant variation can confound providers, Austin points out. The range of scores can "become a challenge for where a hospital should focus improvement efforts," he says.
Complicating the issue is that more players are getting into the game all the time. Consumer finance site NerdWallet offers a brand-new tool that draws on Medicare data to rank "best hospitals." Insurers are releasing data about providers' prices in a push for transparency. In some cases, provider groups are issuing their own report cards, trying to get control of the ratings narrative.
Counterpoint: Why rankings still offer value
Ranking groups defend their services and say there's a rationale for the variation: The different rankings are designed to track different services and serve different purposes.
"They're not measuring the same thing," said Evan Marks, the chief strategy officer for HealthGrades, in an interview with Reed Abelson of the New York Times. "Why would they overlap?"
"There is no one-size-fits-all answer to which hospital is best," Ben Harder, chief of health analysis for U.S. News, told the Journal. "We are the only rating that looks at high complexity care."
Leah Binder, president and CEO of the Leapfrog Group, thinks the cluster of rankings is ultimately a benefit for the industry: The added pressure and range of reports pushes providers to get better. "Consumers are accustomed to reviewing a lot of reviewers and coming to their own conclusions," she told Modern Healthcare last year. "Hospitals shouldn't be exempt."