Twitter is failing to rein in “superspreaders” of coronavirus misinformation on its platform, according to research detailing dozens of posts shared by high-profile accounts apparently flouting the social media company’s rules.
Tech companies including Twitter, Facebook and YouTube have all introduced new policies to clamp down on the so-called infodemic of false coronavirus information that has swept the Internet in recent weeks amid panic and national lockdowns.
But a report from NewsGuard, which monitors and rates news websites on trustworthiness, found multiple posts from accounts on Twitter with 100,000 or more followers—some of which are verified—promoting misinformation about questionable treatments or cures that appear to violate recent policies banning such content.
It also found numerous posts touting conspiracy theories—for example, linking 5G technology to coronavirus—although in most cases these do not fall foul of Twitter’s rules.
The posts were shared by 10 prolific misinformation spreaders with a combined following of more than 3 million people; all but one remain live on the platform, NewsGuard said.
Social media sites have scrambled to curb health misinformation that experts claim endangers lives, as conspiracy theorists known for propagating anti-vaccination and far-right narratives have shifted their attention to the pandemic.
Separately on Thursday, more than 100 senior doctors and global health experts—including former National Institute of Allergy and Infectious Diseases and CDC officials—published a full-page letter in the New York Times calling on the chief executives of the big tech groups to “stop giving oxygen” to the “tsunami” of bad content.
“We are calling on the tech giants to take immediate systemic action to stem the flow of health misinformation, and the public health crisis it has triggered,” the letter said, urging the groups to downgrade such content in users’ feeds and provide “retroactive corrections” to users who have seen harmful falsities.
While Facebook is among those facing criticism for not doing enough, Twitter has been a particular laggard. A study by fact-checkers at the University of Oxford between January and March found about 60 percent of false claims on the platform remained online, compared with 27 and 24 percent on YouTube and Facebook respectively.
According to the report from NewsGuard, one Twitter account, with 125,000 followers and links to the far-right conspiracy group QAnon, tweeted discredited studies claiming that hydroxychloroquine has a 100 percent success rate as a treatment. Other accounts endorse chlorine dioxide, licorice root, and zinc as cures for the virus, in apparent violation of Twitter’s policies banning the promotion of ineffective treatments.
Twitter personality Martin Geddes cited a blog saying social distancing was “ineffective,” while prominent conspiracy theorist David Icke, who was banned from Facebook last week, posted that the virus was a scam.
In response to requests for comment, Mr. Geddes told NewsGuard that “quoting a line from an article is not the same as promoting the idea,” while Mr. Icke accused the researchers of censorship, the report said.
Gordon Crovitz, NewsGuard’s co-founder, called for more transparency around the social media platforms’ moderation processes and for proactive “debunking” of false information, for example through the labeling of dubious websites. Social media groups are “randomly taking down accounts, which excites the conspiracy minded… They’re playing whack-a-mole without proper equipment,” he said.
Twitter said that since the March 18 introduction of its new coronavirus content policies, it had removed more than 2,400 tweets. While the group does not employ fact-checkers, it has both manual and automated moderation mechanisms.
It added that it was “prioritizing the removal of content when it has a call to action that could potentially cause harm,” but “will not take enforcement action on every tweet that contains incomplete or disputed information about COVID-19.”