A new bill introduced Thursday would hold Facebook, Twitter, and other social media companies responsible for amplifying conspiracies and falsehoods about vaccines, COVID cures, and other health misinformation.
“For far too long, online platforms have not done enough to protect the health of Americans,” said Sen. Amy Klobuchar (D-MN), one of the bill’s sponsors. “These are some of the biggest, richest companies in the world, and they must do more to prevent the spread of deadly vaccine misinformation.”
If signed into law, the Health Misinformation Act would strip Facebook, Twitter, and other social media companies of some immunity under Section 230 of the Communications Decency Act, which currently prevents Internet firms from being held liable for most content posted on their platforms. The carveout proposed by Klobuchar and Sen. Ben Ray Luján (D-NM) would eliminate that legal shield in instances where a platform “promotes health misinformation through an algorithm,” the bill says.
The reduced immunity would only kick in during public health emergencies declared by the Department of Health and Human Services. That agency would also be responsible for defining what constitutes the “health misinformation” that platforms must exclude from algorithmic promotion.
Defining health misinformation is certain to be a hurdle both for the law’s passage and, if it is passed, for policymakers at HHS during implementation. In recent years, certain health matters, including COVID-19 and vaccines for the virus, have become politicized. Even if HHS were to comprehensively define health misinformation, dedicated conspiracy theorists and other misinformation spreaders would likely find ways around any bans. Take, for example, the recent trend among some anti-vaccination groups on Facebook. To avoid detection by the site’s algorithms, they’ve started cloaking their names and the language they use in posts, referring to themselves as a “Dance Party” or “Dinner Party.”
The legislation would not forbid people from posting falsehoods, and platforms would still enjoy Section 230 immunity if they display the posts using a “neutral mechanism,” the bill says, adding that posts shown in chronological order would be one such example. Still, the new law would make it less likely that people receptive to, but not dedicated to, anti-vax views would be inundated with misinformation.
Despite the bill’s limited scope, the sort of restriction it proposes would likely encounter resistance from tech companies, Olivier Sylvain, a law professor at Fordham University, told Ars. “They’re likely to raise the argument that ranking and displaying content through some automated system is, from their vantage point, the thing that distinguishes social media, and it’s arguably the kind of thing that’s protected,” he said.
“But I’m not terribly persuaded by this,” Sylvain added, “because this is not a bill that imposes sanction or criminal liability. It is instead a bill that would remove the immunity and just return us to the baseline that every other person on the planet has to abide by.”
“The challenge is determining whether any given reform to Section 230 is unconstitutionally restrictive of speech,” he said. “It can’t be that simply removing the immunity is what triggers First Amendment scrutiny. Because then, under that theory, when Congress passed the 1996 bill, it would have basically created a protection for the intermediaries for all time.”
While Section 230 shields platforms from liability for the content they host, the law was also written to encourage moderation by ensuring companies could not be held liable for moderating. An early ruling on the law said that platforms could not be sued for performing “a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content.”
Yet shielding tech companies from scrutiny over any form of moderation has allowed misinformation to flourish and spread on platforms. Social media, while not people’s only source of information on health matters, has played a major role in reducing COVID-19 vaccine acceptance, according to the Kaiser Family Foundation.
If the bill passes, it could force social media companies to think more carefully about the role their algorithms play in the spread of information, Sylvain said. “It would make these companies far more alert to the ways in which they impose harms and costs by way of their automated systems,” he said. “It’s not enough to say they’re not the source of the bad information—that’s a disingenuous argument because they are the ones spreading it by way of targeted delivery.”