NYU study: Facebook’s content moderation efforts are ‘grossly inadequate’

In a scathing examination of Facebook’s content moderation strategy, a new study identifies the company’s decision to outsource such work as a key reason its efforts are failing.

The NYU Stern Center for Business and Human Rights released a report today that calls on Facebook to end the outsourcing practice and commit to bringing the work in-house so moderation receives the resources and attention it deserves. The report also calls for a massive increase in the number of moderators, as well as improved working conditions that include better physical and mental health care for moderators who are subjected to disturbing content throughout the workday.

The report comes as Facebook’s reputation continues to degrade following years of controversy over its handling of disinformation, fake news, and other dangerous content on its platform. Such criticism has intensified in recent days, with CEO Mark Zuckerberg facing a backlash from employees over his failure to censure tweets by President Trump that appear to violate the platform’s policies against inciting violence.

While the problems facing Facebook’s content moderation have been widely reported, the study’s principal author, Paul Barrett, said he wanted to highlight that although moderation is fundamental to keeping the platform usable, the company has relegated the work to a secondary role by relying primarily on underpaid contractors in remote locations.

“One of the revelations for me was realizing just how central the function [of moderation] is to the companies, and therefore how anomalous it is that they hold it at arm’s length,” said Barrett, who is deputy director of the Stern Center. “The second surprise was the connection between the outsourcing issue and the problems they have experienced in what they call ‘at-risk countries.’”

The study acknowledged that all the major social media platforms suffer from the same content moderation problem. Facebook has about 15,000 content moderators, but most of them work for third-party vendors. That compares to about 10,000 moderators for YouTube and Google and 1,500 for Twitter, according to the study. And while Facebook has partnered with 60 journalism organizations to implement fact-checking, the number of items sent to these groups far exceeds their capacity, leaving most claims unverified.

“These numbers may sound substantial, but given the daily volume of what is disseminated on these sites, they’re grossly inadequate,” the report says.

Barrett decided to zoom in on Facebook as a case study in content moderation gone wrong. In part, he blames the company’s relentless focus on growth for its inability to keep up with dangerous content and disinformation. “You have a strategy to expand and grow,” he said in an interview. “But you don’t really have a parallel strategy for how to make sure that your offerings are not misused.”

The report estimates that users and the company’s artificial intelligence system flag more than 3 million items daily. With the company reporting an error rate of 10% by moderators spread across 20 sites, that means Facebook makes about 300,000 content moderation mistakes per day.
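
The arithmetic behind that figure is straightforward. Here is a minimal back-of-envelope sketch, using only the numbers cited above (variable names are illustrative):

    # Back-of-envelope reproduction of the report's estimate (illustrative only).
    flagged_items_per_day = 3_000_000  # items flagged daily by users and Facebook's AI, per the report
    moderator_error_rate = 0.10        # 10% error rate attributed to moderators

    daily_mistakes = flagged_items_per_day * moderator_error_rate
    print(f"Estimated moderation mistakes per day: {daily_mistakes:,.0f}")  # roughly 300,000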

To emphasize the central role that moderators play in keeping social platforms usable, Barrett imagines this scenario:

Picture what social media sites would look like without anyone removing the most egregious content posted by users. In short order, Facebook, Twitter, and YouTube (owned by Google) would be inundated not just by spam, but by personal bullying, neo-Nazi screeds, terrorist beheadings, and child sexual abuse. Witnessing this mayhem, most users would flee, advertisers right behind them. The mainstream social media business would grind to a halt.

And yet, even though moderators are crucial to keeping Facebook usable, they have been largely marginalized, the report says. Because these moderators are also physically distant, the company often fails to recognize the gravity of the content they are reviewing in places such as Myanmar, where pro-government forces have used the platform to spread propaganda that targets minorities for genocide.

Fundamentally, it’s all about saving money. Outsourced moderators working in developing countries are paid far less than full-time employees in Silicon Valley, where office spaces, benefits, and perks are also far more costly. As part of the study, Barrett interviewed a number of former content moderators.

“While the third-party vendors that oversee this activity on paper provide a fair amount of benefits related to mental health, this offering was consistently described as being not particularly serious in practice, given how potentially traumatic this activity is,” Barrett said.

To remedy the problem, the report proposes eight steps:

  • Bring content moderation in-house, with substantial pay and benefits.
  • Double the number of content moderators.
  • Appoint a high-ranking executive to oversee content moderation.
  • Invest more in content moderation for “at-risk countries” in Asia and Africa so there are teams working in the local language.
  • Provide on-site medical care.
  • Sponsor academic research into the health risks of content moderation.
  • Support government regulation that would focus on the “prevalence” of harmful content, or how often users are exposed to such content (sketched briefly after this list). In fact, Zuckerberg has expressed support for some version of this idea.
  • Expand the scale of fact-checking to attack disinformation.

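The “prevalence” measure mentioned above can be thought of as a simple ratio. Below is a minimal sketch of one plausible way to compute it, based on the article’s description of the metric; the function and the sample numbers are assumptions for illustration, not figures from the report.

    def prevalence(harmful_views: int, total_views: int) -> float:
        """Share of all content views in which users were exposed to harmful content."""
        return harmful_views / total_views

    # Hypothetical example: 5 harmful views out of every 10,000 content views.
    print(f"Prevalence: {prevalence(5, 10_000):.2%}")  # 0.05%
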
Barrett said he knows the cost of implementing such measures is a major deterrent. But he’s optimistic that Facebook could at least take some steps in this direction, in part because of the company’s response to the coronavirus. To fight misinformation, Zuckerberg brought content moderation for some sensitive categories of information in-house, at least temporarily.

The company has also begun recognizing that the AI it employs to flag offensive content still has severe limits. When Facebook had to send content moderators home and rely more on AI during quarantine, Zuckerberg said mistakes were inevitable because the system often fails to understand context.

In the end, Barrett said, people are the answer. And he believes the company is starting to acknowledge that.

“It is a very ambitious ask,” Barrett said of the proposal to scrap outsourcing. “But my attitude is if the current arrangement is inadequate, why not just go for it and urge [the company] to remedy the problem in a big way. I don’t think Mark Zuckerberg is going to [smack himself on the head] and say, ‘Oh my god, I never thought of that!’ But I do think it’s possible the company is ready to move in that direction.”
