No, it’s not just your relatives and old high school friends sharing fake coronavirus cures and anti-mask propaganda on Facebook.
The social media giant admitted the mind-boggling scale of COVID-19 misinformation flooding its platforms Tuesday in a statement to reporters. And, in what perhaps won't come as a surprise to people who rely on Facebook and Instagram for their daily news, the sites are drowning in it.
For starters, the company confirmed it removed more than 7 million "pieces of harmful COVID-19 misinformation" from Facebook and Instagram between April and June. Examples include posts pushing "exaggerated cures" and "fake preventative measures."
That might include, although Facebook didn't specify, incorrect and possibly dangerous claims that hydroxychloroquine is a cure for COVID-19. Or, perhaps, suggestions to inject oneself with bleach. Oh yeah, and then there's the false claim — made by Donald Trump — that children are "almost immune" to COVID-19.
But Facebook didn’t stop there. The company put “warning labels” on 98 million “pieces of COVID-19 misinformation on Facebook” in the same three months.
Notably, this is very much a global problem. That doesn’t mean, however, that Facebook and Instagram in the U.S. are beacons of truth. Facebook says that from March through July it removed 110,000 “pieces of content” in the U.S. for violating its coronavirus misinformation policies.
“Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” reads a statement from Emily Cain, a policy communications manager at Facebook, emailed to reporters.
Putting firm numbers to what many assumed — that Facebook has a coronavirus misinformation problem — was just one of the many announcements made by the company Tuesday. Facebook also published three posts to its Newsroom page, all addressing varying aspects of its community standards and content review policies.
The first post announced Facebook's sixth Community Standards Enforcement Report. This report attempts to clearly communicate how Facebook enforced its community standards from April through June of this year. The second post tries to explain how Facebook reviews content (as opposed to what content has been reviewed). The third post, meanwhile, announces Facebook's intention to launch an ostensibly independent, third-party audit of its annual community standards enforcement reports.
As Tuesday’s news makes clear, Facebook has either found or made itself the arbiter of public discourse for much of the internet. At least when it comes to COVID-19 misinformation, it claims to be trying to do the right thing.