Leveraging AI to uphold news authenticity in the deepfake era

May 28, 2024

Verifying the authenticity of digital content is already a challenge, one that will only get worse as AI evolves. Yet AI is also the solution to those challenges, writes Reuters’ Yulia Pavlova.

As AI evolves, it is expected that by 2025–2026 it could assist in creating nearly 90% of online material, presenting challenges in managing content authenticity and exposing society to digital manipulations like deepfakes. The media and news industry will play a crucial role in combating misinformation, and we can expect higher demand for accuracy in content. While concerns about the increasing volume of AI-generated content are valid, especially when it is created with harmful intent, it is important to recognize the positive potential of AI. Even though deepfake generation is becoming one of the dominant drivers of misinformation, it is essential to collect and analyze those manipulations so that we can build new AI tools, or adjust existing ones, to address the challenge. For example, here are a couple of ways in which AI can do more to help:

  • Deepfake Detection: When well-crafted, deepfakes blend seamlessly into genuine content, especially on social media, and have become increasingly difficult to identify because of their realistic appearance. To address this, researchers are building AI tools that employ multimodal neural network analysis, assessing visual, audio, text, and metadata signals for indicators of manipulation. These networks learn to spot anomalies in data patterns and to detect forgeries in audio and visual content; a minimal illustration of how the per-modality scores are fused appears after this list.
  • AI-powered Fact-Checking: Many fact-checking teams will likely face increased demand and workload as they verify content authenticity. AI can help here as well: algorithms can be continuously retrained on extensive databases of previously flagged misinformation, such as false narratives injected into social media, and then cross-reference new claims against those databases for verification (see the claim-matching sketch below). Another valuable aspect is collaborative AI learning: systems that continually learn from shared data can stay ahead of new misinformation tactics and improve their detection accuracy.
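
To make the fusion idea concrete, here is a minimal, illustrative Python sketch of how per-modality anomaly scores might be combined into a single verdict. The scores, weights, and threshold are hypothetical placeholders standing in for the outputs of trained models; this is not part of any real detection system:

    # Hypothetical multimodal fusion: combine per-modality anomaly scores
    # (0.0 = clean, 1.0 = clearly manipulated) into a single verdict.
    from dataclasses import dataclass

    @dataclass
    class ModalityScore:
        modality: str   # "visual", "audio", "text", or "metadata"
        score: float    # anomaly score in [0, 1], assumed to come from a trained model
        weight: float   # how much this modality is trusted for this content type

    def fuse(scores: list[ModalityScore], threshold: float = 0.5) -> tuple[float, bool]:
        """Weighted average of modality scores; flag content at or above the threshold."""
        total_weight = sum(s.weight for s in scores)
        combined = sum(s.score * s.weight for s in scores) / total_weight
        return combined, combined >= threshold

    # Example: a visually convincing deepfake betrayed by its audio and metadata.
    scores = [
        ModalityScore("visual", 0.20, weight=0.4),
        ModalityScore("audio", 0.85, weight=0.3),
        ModalityScore("text", 0.10, weight=0.1),
        ModalityScore("metadata", 0.90, weight=0.2),
    ]
    combined, flagged = fuse(scores)
    print(f"combined anomaly score: {combined:.2f}, flagged: {flagged}")  # flagged: True

The point of weighted fusion is that no single modality has to give the forgery away; weak signals across several channels can add up to a confident detection.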
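
The cross-referencing step can likewise be pictured as a retrieval problem. The sketch below matches an incoming claim against a tiny stand-in database of flagged narratives using simple token overlap; a production system would use trained embeddings over a far larger corpus, so treat this purely as an illustration:

    # Hypothetical claim matcher: cross-reference an incoming claim against a
    # small stand-in database of previously flagged false narratives.
    FLAGGED_NARRATIVES = [
        "vaccine microchips track citizens",
        "election ballots destroyed in warehouse fire",
    ]

    def jaccard(a: str, b: str) -> float:
        """Token-set overlap between two claims, in [0, 1]."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb)

    def check_claim(claim: str, threshold: float = 0.5) -> list[tuple[str, float]]:
        """Return flagged narratives whose overlap with the claim meets the threshold."""
        return [(n, round(jaccard(claim, n), 2))
                for n in FLAGGED_NARRATIVES
                if jaccard(claim, n) >= threshold]

    # A reworded version of a known false narrative still matches.
    print(check_claim("warehouse fire destroyed election ballots"))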

In addition to AI, another technology worth mentioning is blockchain for content capture and transfer (“digital provenance”). This approach has been implemented by a growing number of organizations, including Reuters. It creates a transparent history of a piece of content from the moment a photo is taken, through the editorial desk, to publication. Digital provenance is also consistent with initiatives like the C2PA (Coalition for Content Provenance and Authenticity), which provide verifiable “content credentials.”
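
To illustrate why a chained record is tamper-evident, here is a toy Python sketch in which each editorial step is hashed together with the previous record, so altering any earlier entry (or the content it describes) invalidates the whole chain. Real provenance systems such as C2PA content credentials rely on cryptographically signed manifests rather than this simplified chain:

    # Toy provenance chain: each record commits to the content hash and to the
    # previous record, so any retroactive edit breaks verification.
    import hashlib, json

    def add_record(chain: list[dict], step: str, content_hash: str) -> None:
        prev = chain[-1]["record_hash"] if chain else "genesis"
        record = {"step": step, "content_hash": content_hash, "prev": prev}
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)

    def verify(chain: list[dict]) -> bool:
        """Recompute every link; any edit to content or history fails."""
        prev = "genesis"
        for r in chain:
            body = {"step": r["step"], "content_hash": r["content_hash"], "prev": r["prev"]}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if r["prev"] != prev or r["record_hash"] != expected:
                return False
            prev = r["record_hash"]
        return True

    chain: list[dict] = []
    add_record(chain, "captured", hashlib.sha256(b"raw photo bytes").hexdigest())
    add_record(chain, "edited", hashlib.sha256(b"cropped photo bytes").hexdigest())
    add_record(chain, "published", chain[-1]["content_hash"])
    print("chain valid:", verify(chain))  # True; editing any record flips this to False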

For AI to benefit society, it must be developed with a commitment to transparency and ethics, and through collaboration among the media, research, and technology sectors. By integrating multimodal analysis, real-time fact-checking, collective learning, and blockchain-based verification, the news industry can establish a strong defense against disinformation.

About Yulia Pavlova:
Yulia Pavlova is the Head of the Applied Innovation team at the Reuters news agency.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.