Social media platforms say their artificial intelligence won’t moderate content as well as humans.

To adjust to the social distancing required by the Covid-19 coronavirus pandemic, social media platforms will lean more heavily on artificial intelligence to review content that potentially violates their policies. That means your next YouTube video or snarky tweet might be more likely to get taken down in error.
As they transition their operations to a primarily work-from-home model, platforms are asking users to bear with them while acknowledging that their automated technology will probably make some mistakes. YouTube, Twitter, and Facebook recently said that their AI-powered content moderators may be overly aggressive in flagging questionable content and encouraged users to be vigilant about reporting potential mistakes.
In a blog post on Monday, YouTube told its creators that the platform will turn to machine learning to help with some of the work normally done by reviewers. The company warned that the transition will mean that some content will be taken down without human review, and that both users and contributors to the platform might see videos removed from the site that don't actually violate any of YouTube's policies.
The company also warned that unreviewed content may not be available via search, on the homepage, or in recommendations.
Similarly, Twitter has told users that the platform will increasingly rely on automation and machine learning to remove abusive and manipulated content. Still, the company acknowledged that artificial intelligence would be no replacement for human moderators.
"We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes," the company said in a blog post.
To compensate for potential errors, Twitter said it won't "permanently suspend any accounts based solely on our automated enforcement systems." YouTube, too, is making adjustments. "We won't issue strikes on this content except in cases where we have high confidence that it's violative," the company said, adding that creators would have the chance to appeal these decisions.
Facebook, meanwhile, says it's working with its partners to send its content moderators home and to ensure that they're paid. The company is also exploring remote content review for some of its moderators on a temporary basis.
"We don't expect this to impact people using our platform in any noticeable way," the company said in a statement on Monday. "That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result."
The move toward AI moderators isn't a surprise. For years, tech companies have pushed automated tools as a way to supplement their efforts to fight the offensive and dangerous content that can fester on their platforms. Although AI can help content moderation move faster, the technology can also struggle to understand the social context for posts or videos and, as a result, make inaccurate judgments about their meaning. In fact, research has shown that algorithms designed to detect racist content can themselves be biased against black people, and the technology has been widely criticized for being vulnerable to discriminatory decision-making.
Normally, the shortcomings of AI have led us to rely on human moderators who can better understand nuance. Human content reviewers, however, are by no means a perfect solution either, especially since they can be required to work long hours analyzing incredibly traumatic, violent, and offensive words and imagery. Their working conditions have recently come under scrutiny.
But in the age of the coronavirus pandemic, having reviewers work side by side in an office could not only be dangerous for them, it could also risk further spreading the virus to the general public. Keep in mind that these companies may be hesitant to let content reviewers work from home, since those reviewers have access to lots of private user information, not to mention highly sensitive content.
Amid the novel coronavirus pandemic, content review is just another way we're turning to AI for help. As people stay indoors and look to move their in-person interactions online, we're bound to get a rare look at how well this technology fares when it's given more control over what we see on the world's most popular social platforms. Without the influence of human reviewers that we've come to expect, this could be a heyday for the robots.
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
