Sight Magazine – Online Content Moderation: Can AI Help Clean Up Social Media?

Thomson Reuters Foundation

Two days after being sued by Rohingya refugees from Myanmar over allegations it failed to take action against hate speech, social media company Meta, formerly known as Facebook, announced a new artificial intelligence system to combat harmful content.

Machine learning tools have increasingly become the go-to solution for tech companies to control their platforms, but questions have been raised about their accuracy and their potential threat to free speech.

The Facebook logo is displayed on a mobile phone in this photo illustration taken on December 2, 2019. PHOTO: Reuters / Johanna Geron / Illustration / file photo.

Here’s everything you need to know about AI and content moderation:

Why are social media companies criticized for moderating content?
The $150 billion Rohingya class action lawsuit filed this month came at the end of a tumultuous period for social media giants, which have come under fire for failing to effectively tackle hate speech online and growing polarization.

The complaint argues that calls for violence shared on Facebook contributed to real-world violence against the Rohingya community, which suffered a military crackdown in 2017 that the refugees said included mass killings and rapes.

The lawsuit follows a series of incidents that have subjected social media giants to scrutiny over their practices, including the murder of 51 people in two mosques in Christchurch, New Zealand, in 2019, which was live-streamed by the attacker on Facebook.

Following the deadly Jan. 6 assault on the U.S. Capitol, Meta CEO Mark Zuckerberg and his Google and Twitter counterparts appeared before the U.S. Congress in March to answer questions about extremism and disinformation on their services.



Why are businesses turning to AI?
Social media companies have long relied on human moderators and user reports to control their platforms. Meta, for example, said it has 15,000 content moderators reviewing material from its global users in more than 70 languages.

But the mammoth size of the task and the regulatory pressure to quickly remove harmful content have prompted companies to automate the process, said Eliska Pirkova, head of free speech at digital rights group Access Now.

There are “good reasons” to use AI for content moderation, said Mitchell Gordon, a PhD candidate in computer science at Stanford University.

“Platforms rarely have enough human moderators to review all or even most of the content. And when it comes to problematic content, it’s often best for everyone’s well-being if no human ever has to watch it,” Gordon said in emailed comments.

How does AI moderation work?
Like other machine learning tools, AI moderation systems learn to recognize different types of content after being trained on large data sets that have been previously categorized by humans.

Researchers who collect these datasets typically ask multiple people to review each piece of content, Gordon said.

“What they tend to do is take a majority vote and say, ‘Well, if most people say it’s toxic, we’re going to call it toxic,’” he said.
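For illustration only, and not a description of any company's actual pipeline, the hypothetical Python sketch below shows how that majority-vote step might turn several annotators' judgments into a single training label; the posts, votes and label names are all invented.

```python
# Hypothetical sketch of majority-vote labelling: several annotators rate
# each post, and the most common label becomes the "ground truth" used to
# train a moderation model.
from collections import Counter

# Invented annotations: each post was reviewed by three people.
annotations = {
    "post_1": ["toxic", "toxic", "not_toxic"],
    "post_2": ["not_toxic", "not_toxic", "not_toxic"],
    "post_3": ["toxic", "not_toxic", "not_toxic"],
}

def majority_label(votes):
    """Return the most common label among the annotators' votes."""
    return Counter(votes).most_common(1)[0][0]

training_labels = {post: majority_label(votes) for post, votes in annotations.items()}
print(training_labels)
# {'post_1': 'toxic', 'post_2': 'not_toxic', 'post_3': 'not_toxic'}
```

As the sketch makes plain, the two annotators who disagreed on post_3 simply lose the vote, which is the dynamic Gordon's later point about minority perspectives turns on.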

From Twitter to YouTube to TikTok, AI content moderation has become ubiquitous in the industry in recent years.

In March, Zuckerberg told Congress that AI was responsible for removing more than 90% of content deemed to violate Facebook guidelines.

And earlier this month, the company announced a new tool that requires fewer examples for each dataset, meaning it can be trained to take action on new or evolving types of harmful content in weeks instead of months.


What are the pitfalls?
Tech experts say one problem with these tools is that algorithms struggle to understand the context and intricacies that allow them to discern, say, satire from hate speech.

“Computers, no matter how sophisticated the algorithm they use, are always essentially dumb,” said David Berry, professor of digital humanities at the University of Sussex in Britain.

“[An algorithm] can only really process what it has been taught, and it does so in a very simplistic way. So the nuances of human communication … [are] very rarely captured.”

This can lead to harmless content being censored and harmful posts being left online, with profound ramifications for free speech, said Pirkova of Access Now.

Earlier this year, Instagram and Twitter faced backlash for removing posts mentioning the possible deportation of Palestinians from East Jerusalem, which the companies blamed on technical errors in their automated moderation systems.

Another problem is the variety of languages.

Documents leaked in October suggested that as of 2020 Meta lacked screening algorithms for languages used in some of the countries the company considered most “at risk” of potential real-world harm, including Myanmar and Ethiopia.

Finally, since algorithms are largely trained on what a majority thinks about a certain type of content, minorities with a different perspective risk having their voices automatically erased, Gordon said.

What can we improve?
No matter how good AI systems get, deciding what is acceptable and what isn’t will always be a matter of opinion.

In a 2017 study on hate speech detection by researchers in New York and Doha, human coders reached a unanimous verdict in just 1.3% of cases.

“Until people agree on what crosses the line, no AI will be able to make a decision that everyone sees as legitimate and correct,” Gordon said.

He and his team are working on a solution: training AI to take different perspectives into account and building interfaces that let moderators choose which views they want the system’s decisions to reflect.
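As a rough illustration of that idea, the hypothetical sketch below keeps separate verdicts per annotator group instead of collapsing everything into one majority label, and lets a moderator choose which perspectives a decision should reflect. The group names, verdicts and decision rule are invented for the example and are not taken from Gordon's work.

```python
# Hypothetical per-group verdicts for one post, e.g. produced by models
# trained separately on each group's annotations.
group_verdicts = {
    "general_population": "not_toxic",
    "targeted_community": "toxic",
    "professional_moderators": "toxic",
}

def decide(verdicts, perspectives):
    """Flag the post if any of the chosen perspectives considers it toxic."""
    return any(verdicts[p] == "toxic" for p in perspectives)

# The moderator chooses which perspectives the decision should reflect.
chosen = ["targeted_community", "professional_moderators"]
print(decide(group_verdicts, chosen))  # True -> the post would be flagged
```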

Given the power that automated moderation systems have in shaping public discourse, companies should be more transparent about the tools they deploy, how they operate and how they are trained, said Access Now’s Pirkova.

Lawmakers should also impose due diligence safeguards such as human rights impact assessments and independent audits, taking into account how algorithms affect minorities, she added.

“That doesn’t mean we should get rid of automated decision-making processes altogether, but we need to understand them better,” she said.
