June 28, 2018

Facebook and Google are hotbeds for 'medical quackery.' Should they have to police it?

Daily Briefing

    Social media algorithms help circulate misinformation related to AIDS, autism, and vaccines on Facebook and YouTube, but experts are divided on whether—and to what extent—companies should be responsible for regulating such content, Michael Schulson writes for Undark Magazine.


    Background

    Over the past two years, U.S. consumers, lawmakers, and others have raised concerns over the speed at which "incendiary, inaccurate, and often deliberately false content spreads on sites like Facebook and YouTube"—with those concerns focused primarily on conspiracy theories, fake news, and hate speech, Schulson writes. However, far less attention has been paid to the "medical quackery" spread through social media, according to Schulson.

    According to Schulson, misinformation related to diseases and health conditions poses real "public health threats." For example, Schulson writes, "dubious—and potentially deadly—cures for autism have found sanctuary on social media for years," and "desperate cancer patients have been lured online to baseless treatments peddled by shady 'experts'." In addition, "opponents of childhood vaccination thrive in an expanding and self-reinforcing internet bubble that researchers describe as 'more real and more true' for its inhabitants than anything coming from the outside," Schulson writes.

    AIDS denialism is another example of how social media can perpetuate untrue medical theories, Schulson writes. According to Schulson, "AIDS denialists have adapted readily to the internet age," using platforms such as YouTube to spread the theory that AIDS is an "elaborate hoax"—a claim that has prompted some AIDS patients to discontinue their medications, and has raised questions about whether social media platforms "bear any responsibility" for their role in "nudg[ing] people toward dangerous medical decisions." As Schulson explains, "Convincing someone that the 2001 terrorist attacks on the World Trade Center and other U.S. sites were an inside job can make them sound a little whacky at parties," but "[c]onvincing someone that AIDS is a hoax can kill them."

    Under Section 230 of the Communications Decency Act, social media platforms are not held liable for most of the content published on their sites, but they are allowed to regulate and remove content that violates their terms and conditions. For instance, Schulson notes that YouTube's terms and conditions prohibit users from uploading content intended "to incite violence or encourage dangerous or illegal activities that have an inherent risk of serious physical harm or death." Under those terms and conditions, YouTube has "in at least one high-profile case" removed a channel promoting "questionable medical advice," Schulson writes.

    Experts weigh in

    Ultimately, though, Schulson writes that experts are divided on whether social media platforms "have a responsibility to police this sort of misleading or dangerous content more closely."

    Lyrissa Lidsky, an expert on free speech online and dean of the University of Missouri School of Law, said companies should not be given too much power to regulate online content, noting that many public debates are less clear-cut than those about medical care.

    Lidsky, who advocates for people to learn how to become savvier media consumers, said the question comes down to "who gets to [be] the arbiter of what's true and what's false?" She said, "Some questions are easy, but most of them aren't easy, even in the realm of scientific fact. And how paternalistic do you want the government to be, and then how paternalistic do you want YouTube or Facebook or Google to be? How comfortable are you with ceding the power to them to determine what's good for us?"

    Frank Pasquale, a critic of Silicon Valley policies and a professor at the University of Maryland School of Law, said, "When there are clear examples of false … information that endanger individuals' lives, that is when the platforms' responsibilities are at their apex, where they really have to start thinking deeply about their role and their responsibility in highlighting this stuff." He said social media platforms "have the power already," adding, "The question is whether they're going to exercise that power responsibly, or are they going to hide behind algorithmic ordering, and just say, 'Well, it's all algorithmic, it's all done by some computer program in the sky, and we don't really have responsibility for that.'"

    Countries in Europe have been more willing than the United States to regulate content on social media. In Russia, for instance, Schulson writes that lawmakers have moved to ban social media content that misinforms people about AIDS.

    However, Peter Meylakhs—a researcher at the National Research University Higher School of Economics in St. Petersburg, Russia—said he is concerned that such censorship could bolster the claims of AIDS denialists, "mak[ing] them more martyr-like" and potentially "increas[ing] their credibility" (Schulson, Undark Magazine, 6/6).

