

Engineering Generative AI to Ease Information Overload


January 13, 2024

To ensure democratic resilience in 2024, a year featuring numerous major elections, we must grapple with large language models and information overload. We can no longer rely on influencers, journalists, and close friends to sift through the noise, writes Thomas Plant.

Staring down the lineup of critical elections in 2024—the United States, Taiwan, Russia, India, and the United Kingdom, among others—we must ensure that voters are not manipulated, a cornerstone of civic resilience. Here, large language models (LLMs) like ChatGPT find themselves in the hot seat. Despite the focus on concerns about adversaries exploiting LLMs for targeted disinformation, a more substantial impact is their role in accelerating information overload. LLMs are flooding the internet with excessive, repetitive, and often irrelevant content—driven by their widespread use for automated content production across various industries.

Our typical reliance on influencers, journalists, and close circles to sift through the noise, coupled with the surge of LLM output, deepens our dependence on LLM-generated content. This dependency creates ripe conditions for manipulation: ill-intentioned sources can omit critical details to sway opinions and perspectives, using information overload to hide the gaps. Under these conditions, it is difficult to sort through the chaos and boil it down to the essential information. That takes work. Information overload threatens to sap society's attention span before it has time to fully grasp a situation.

However, this hurricane has an eye. In addition to generating text, LLMs can summarize texts that are clear and unambiguous. Of course, the internet is rarely clear and unambiguous, which causes LLMs to generate falsehoods or fabrications. Yet there is potential to engineer LLMs that summarize complex issues and cut back on information overload. These advanced logical models, part of the journey toward "artificial general intelligence," would use logical weights to assess the factual accuracy, relevance, and importance of information—not word association, the current basis of LLMs.
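To make the idea of "logical weights" concrete, consider a purely illustrative sketch: instead of surfacing whatever text is most fluent, a summarizer could re-rank candidate claims by explicit scores for accuracy, relevance, and importance. The Python below is not any existing system; the claims, scores, and weight values are all hypothetical placeholders, chosen only to show the contrast with word-association ranking.

```python
# Illustrative sketch only: hypothetical claims, scores, and weights,
# not a description of any real model or product.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    accuracy: float    # hypothetical fact-check score, 0-1
    relevance: float   # hypothetical topical relevance, 0-1
    importance: float  # hypothetical editorial importance, 0-1
    fluency: float     # stand-in for word-association likelihood, 0-1

# Weights an engineer might tune; the values here are arbitrary.
WEIGHTS = {"accuracy": 0.5, "relevance": 0.3, "importance": 0.2}

def logical_score(c: Claim) -> float:
    """Score a claim by explicit accuracy/relevance/importance weights."""
    return (WEIGHTS["accuracy"] * c.accuracy
            + WEIGHTS["relevance"] * c.relevance
            + WEIGHTS["importance"] * c.importance)

def summarize(claims: list[Claim], k: int = 2) -> list[str]:
    """Keep the top-k claims by logical score, not by fluency alone."""
    ranked = sorted(claims, key=logical_score, reverse=True)
    return [c.text for c in ranked[:k]]

if __name__ == "__main__":
    claims = [
        Claim("Polling opens at 7 a.m. nationwide.", 0.95, 0.9, 0.8, 0.6),
        Claim("A viral rumor says ballots arrive pre-filled.", 0.05, 0.9, 0.4, 0.9),
        Claim("Turnout in the last election was the highest in decades.", 0.9, 0.5, 0.5, 0.7),
    ]
    # A fluency-only ranker would surface the viral rumor first;
    # the weighted ranker suppresses it because its accuracy score is low.
    print(summarize(claims))
```

The point of the sketch is the ordering criterion, not the numbers: the hard engineering problem is producing trustworthy accuracy, relevance, and importance signals in the first place.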

Building such a model is a tall order. Flaws in its design might amplify false information because of the inherent trust people place in technology. These potential risks, however, pale in comparison to the predictable consequences of succumbing to information overload. With intelligent and ethical design, such a tool could foster populations that are more resilient to mis/disinformation and better equipped to make informed political decisions at the polls.

About Thomas Plant:
Thomas Plant is an Associate Product Manager at Accrete AI and co–founder of William & Mary’s DisinfoLab, the nation’s first undergraduate disinformation research lab.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.