Workplace wellness programs don't work, a new study suggests. So why did past research show otherwise?

A unique study on workplace wellness programs suggests that the programs may not actually improve employee health—contradicting past research, and exposing the dangers of relying on so-called "observational studies" to determine whether a treatment works, Aaron Carroll writes for the New York Times' "The Upshot."

What the evidence shows about workplace wellness programs

Companies offer employee wellness programs to encourage employees to adopt healthy habits, Carroll explains. The programs may include a variety of services, such as access to gyms, screenings for reversible conditions, and weight-loss assistance. To date, many studies of wellness programs have shown encouraging results, suggesting that such programs actually make employees healthier.

But there's a catch: Nearly all of those analyses have been observational studies. That is, they've compared groups of people who join wellness programs to their colleagues who don't, and tried to determine whether participating in wellness programs actually leads to better health.

But there are many hazards in comparing workers who join wellness programs to those who don't, Carroll writes. For example, are the people participating in the wellness programs already healthier? Or perhaps they have higher incomes or are younger? Researchers do the best they can to adjust the findings of their studies for any such differences between the groups, but there's always the risk that outcomes may be skewed by factors that researchers can't observe or adjust for, Carroll writes.

A more reliable study, Carroll contends, would involve a "randomized controlled trial," in which employees are randomly assigned either to join a wellness program or not. Because the employees randomly assigned to each group should be statistically very similar, any differences between their outcomes would very likely be due to the effects of the wellness program itself.

But conducting a randomized controlled trial of an employee wellness program has proven difficult, Carroll notes. Such research can be costly, time-consuming, and hard to organize.

But the Illinois Workplace Wellness Study addressed these concerns—and did so in a way that illustrated the hazards of blindly trusting the results of observational studies.

A perplexing puzzle: Why do a randomized study's results seem to contradict past research?

For the study, researchers recruited nearly 5,000 University of Illinois at Urbana-Champaign employees to be randomly assigned to a wellness program or not. More than 1,500 volunteers were placed into the control group, while about 3,300 received a biometric health screening and an online health risk assessment. The researchers also gave the intervention group access to various wellness activities, including weight-loss classes, exercise, and smoking cessation programs.

After a year, the researchers found no effect of the wellness program on employees' health, productivity, or medical spending, Carroll writes.

That was surprising: It contradicts the findings of earlier, observational studies that suggested that wellness programs boosted health. So what accounts for the difference?

To get to the bottom of this mystery, researchers decided to re-analyze their own data as though it had been collected in an observational study. In other words, "they took the 3,300 who were offered the wellness program" and compared "those who participated with those who didn't"—adjusting for the observed differences in the group such as age, race, and sex just as they would have done if they'd been conducting an observational study.

The results were radically different. For example, the observational results implied that people who didn't use the wellness program went to the gym 3.8 times a year, while those who did use the program went 7.4 times. But the randomized controlled trial found no significant difference between the groups. 
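The mechanism behind this discrepancy, self-selection on an unobserved trait, can be illustrated with a toy simulation (the code and numbers below are an invented sketch, not data from the study): a hidden "motivation" variable drives both program participation and gym attendance, so the observational comparison shows a gap even though the program's true effect is set to zero, while random assignment shows essentially none.

```python
import math
import random

random.seed(0)

N = 10_000
TRUE_EFFECT = 0.0  # assume the program itself changes nothing

# Unobserved "health motivation": the confounder an observational
# study cannot adjust for.
motivation = [random.gauss(0, 1) for _ in range(N)]

def gym_visits(m, in_program):
    """Annual gym visits driven by motivation, not by the program."""
    return max(0.0, 4 + 2 * m + TRUE_EFFECT * in_program + random.gauss(0, 1))

def mean(xs):
    return sum(xs) / len(xs)

# Observational setting: more-motivated employees are more likely to join.
joined = [random.random() < 1 / (1 + math.exp(-m)) for m in motivation]
obs = [gym_visits(m, j) for m, j in zip(motivation, joined)]
obs_gap = (mean([v for v, j in zip(obs, joined) if j])
           - mean([v for v, j in zip(obs, joined) if not j]))

# Randomized setting: a coin flip decides assignment, so both groups
# have the same motivation on average.
assigned = [random.random() < 0.5 for _ in range(N)]
rct = [gym_visits(m, a) for m, a in zip(motivation, assigned)]
rct_gap = (mean([v for v, a in zip(rct, assigned) if a])
           - mean([v for v, a in zip(rct, assigned) if not a]))

print(f"Observational gap in gym visits: {obs_gap:+.2f}")
print(f"Randomized gap in gym visits:    {rct_gap:+.2f}")
```

The observational comparison reports a sizable "benefit" that is entirely an artifact of who chose to join; randomization removes it because assignment is independent of motivation.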

Why the results are so different

Julian Reif, one of the lead authors of the study, said, "The most likely explanation is that participants differ from nonparticipants in very important ways" that researchers who conduct observational studies cannot currently account for.

He added, "Therefore, when a wellness program is offered, the differences seen between those who take advantage of it and those who don't are due to differences in the people rather than differences from the program."

David Molitor, another lead author on the study, said that, had the researchers simply published the observational study, "the headline result could have been that even after controlling for a battery of confounding variables, participation in a wellness program was associated with a significant reduction in health care spending, an improvement in exercise, and a lower chance of ceasing employment."

But such a headline "would have led us astray," Carroll writes.

So what's the right role of observational studies?

This isn't to say that observational studies should never be used, Carroll notes. Many such studies show "significant results that stand the test of time," and they can be especially useful for studying population-level effects.

But it does mean that we need to consider the possibility of selection bias when interpreting observational studies—even those that seem to have statistically adjusted for every conceivable difference between the two observed groups.

While randomized controlled trials may have their flaws, Carroll writes, they are still "much more reliable and useful than other types of studies. In our eagerness to point out their flaws, we shouldn't overlook their benefits" (Carroll, "The Upshot," New York Times, 8/6).

Learn more about why study design matters

Been a while since your last statistics class? It can be difficult to judge the quality of studies, the significance of data, or the importance of new findings when you don't know the basics.

Download our cheat sheets to get a quick, one-page refresher on some of the foundational components of evidence-based medicine.
