Should Artificial Intelligence Be Used to Moderate Online Content?

December 12, 2018

Innovations in Artificial Intelligence have prompted an outpouring of questions in the last few years: How will AI affect or replace jobs? How will AI be used in warfare? How can AI make products and services more accessible? These questions are undoubtedly important to ask. But in our internet-ruled world, we’ve seen that information and sources can be falsified, data can be manipulated, free speech can be censored, and political messages can be used to manipulate citizens. This tension between disinformation and democracy raises an important question: Should Artificial Intelligence be used to moderate online content? And how?

Last month, the Council on Foreign Relations set out to answer just this. Researchers and academics Robyn Caplan (Researcher, Data & Society), Tiffany Li (Resident Fellow, Information Society Project, Yale Law School), and Sarah Roberts (Assistant Professor of Information Studies, UCLA) convened to discuss content moderation online, and specifically, on social media sites.

In a recent report by Data & Society titled Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches, social media platforms were divided into three categories: artisanal (smaller companies that have a localized team moderating content), industrial (sites like Facebook and Twitter, with tens of thousands of employees around the world), and community-reliant (sites like Reddit and Wikimedia that rely on contributors to crowdsource information and moderate content). Within all of these platforms, there is a lingering debate about free speech: what type of speech must be regulated? What free-speech protections should users have? How should speech be moderated?

The first roadblock in moderating online content is regulatory practice. When social media platforms reach a global market, they face vastly different regulatory rules in each country. The United States, for example, highly values the protection of free speech, while many European countries are stricter about removing any content that is considered hate speech. To further complicate the issue, it is not feasible for social media sites to hire, or even crowdsource, enough moderators in each country who know the language and understand local norms well enough to moderate content. This is where Artificial Intelligence enters the scene.

Social media platforms are only at the beginning of using AI to moderate content. According to Robyn Caplan, AI is currently being used, for the most part, by larger companies to flag content that is spam, child pornography, or content that promotes terrorism. In most cases, automated detection technology identifies such content, which then goes to human review to determine what happens next.
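
To make the workflow Caplan describes concrete, here is a minimal, hypothetical sketch of how an automated detector might route posts: high-confidence matches are removed outright, borderline cases are queued for human review, and everything else is left up. The thresholds, the spam_score stand-in, and all names here are illustrative assumptions, not any platform's actual system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class ModerationQueue:
    """Holds posts that automated detection flagged for human review."""
    pending: List[Post] = field(default_factory=list)

def spam_score(post: Post) -> float:
    """Stand-in for a trained classifier; returns a confidence in [0, 1].
    A real system would use a model trained on labeled spam/abuse data."""
    suspicious_phrases = ("free money", "click here", "wire transfer")
    hits = sum(phrase in post.text.lower() for phrase in suspicious_phrases)
    return min(1.0, hits / len(suspicious_phrases))

def triage(post: Post, queue: ModerationQueue,
           auto_remove_at: float = 0.9, review_at: float = 0.5) -> str:
    """Route a post by detector confidence: remove clear matches,
    send borderline cases to humans, leave the rest up."""
    score = spam_score(post)
    if score >= auto_remove_at:
        return "removed"
    if score >= review_at:
        queue.pending.append(post)
        return "sent to human review"
    return "left up"

queue = ModerationQueue()
print(triage(Post("1", "Click here for free money via wire transfer!"), queue))  # removed
print(triage(Post("2", "Click here to read the full article."), queue))          # left up
```

The point of the split is the one Caplan makes: automation handles the obvious, high-volume cases, while humans decide the ambiguous ones.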

This type of detection has not, however, been very successful at identifying harassment, violence, or hate speech. Part of the reason is that this content is highly context-specific, making it difficult for AI to learn what type of content is acceptable in what situation. Sarah Roberts pointed out that if we, as humans, do not yet have a baseline understanding of what content needs to be regulated, AI will not be able to understand content regulation either.
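
As a toy illustration of the context problem (an assumed example, not one raised in the discussion), a simple keyword filter treats a direct attack, a news report quoting that attack, and a victim describing the abuse identically, because it sees only the words and not the situation:

```python
# Toy keyword filter: flags any post containing a listed term,
# regardless of who is speaking or why. The term list is a placeholder.
BLOCKED_TERMS = {"<slur>"}

def naive_flag(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKED_TERMS)

posts = [
    "You are a <slur>.",                                                # a direct attack
    "The councilman was criticized for calling a resident a <slur>.",   # news reporting
    "Someone called me a <slur> today and it really hurt.",             # a victim's account
]

# All three are flagged identically, even though only the first is abuse.
print([naive_flag(p) for p in posts])  # [True, True, True]
```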

Roberts also explained that the reason AI is good at identifying content such as terrorist messages and spam is because the machine has already seen this content. According to her, “What makes AI good at retrieving it is the fact that, for better or for worse, much of that information is recirculated… And so what the AI is doing is it’s really just a complex matching process, looking for the repetition of this material and taking it out beforehand. And that’s why… things like hate speech, which are discrete instances of behavior that are nuanced and difficult to understand, cannot just be automated away.” In other words, there is not yet an algorithm that can predict something like harassment.
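
Roberts' "complex matching process" can be sketched in a few lines: material that has already been identified and removed is fingerprinted, and new uploads are checked against those fingerprints before they circulate again. Real systems rely on far more robust perceptual hashing and shared industry hash databases; the exact-hash matching below is only an illustrative assumption.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Toy fingerprint: an exact SHA-256 hash of the content.
    Production systems use perceptual hashes that survive re-encoding,
    cropping, and other small edits; exact hashing is only illustrative."""
    return hashlib.sha256(content).hexdigest()

# Database of fingerprints taken from material that was already
# identified and removed (spam, known extremist propaganda, etc.).
known_bad_hashes = {
    fingerprint(b"previously removed propaganda video bytes ..."),
    fingerprint(b"previously removed spam image bytes ..."),
}

def is_recirculated(upload: bytes) -> bool:
    """Return True if the upload matches material the system has seen before.
    This is the repetition matching Roberts describes: it catches copies of
    known content, but says nothing about novel harassment or hate speech."""
    return fingerprint(upload) in known_bad_hashes

print(is_recirculated(b"previously removed spam image bytes ..."))  # True: an exact copy
print(is_recirculated(b"a brand-new harassing message"))            # False: novel content slips through
```

An exact copy of known material is caught, while a novel harassing message passes through untouched, which is precisely the gap Roberts describes.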

Another issue automation poses is accountability, which comes up in nearly every conversation about Artificial Intelligence, and for good reason: without humans in charge of a task, who is to blame if something goes wrong? We have not yet reached a point where we understand how AI can be held legally and socially accountable. Moderating content is no different, and social media platforms will continue to grapple with this question as they implement AI.

Despite these drawbacks, there are still benefits to using Artificial Intelligence for online moderation. For one, AI can remove bot accounts, which have presented major problems in recent political campaigns. AI can also be useful for detecting and removing content that infringes copyright or other intellectual property rights.

Above all, one thing is certain: with millions of users on social media and a constant stream of content flowing into these sites every second, humans will not be able to regulate online content without the help of Artificial Intelligence. There are benefits to AI when it is used correctly, but we have to come to an agreement about content regulation before we can teach AI what to moderate, and how. As with any implementation of AI, the key point to remember is that AI should augment human work rather than replace humans entirely. If we work together with AI, we can reach a balance between protecting free speech and moderating harmful content.

About Hannah Bergstrom:
Hannah Bergstrom is a Diplomatic Courier Correspondent and Brand Ambassador for the Learning Economy.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.