Healthier digital ecosystems via provenance, transparency

July 31, 2024

Members of World in 2050’s Senior Fellows cohort and Brain Trust weigh in on what conditions need to be in place for AI to be used as a tool for societal good.

This short report was compiled from a collective intelligence gathering of World in 2050’s (W2050) Senior Fellows (Exponential Technologies). The meeting took place under the Chatham House Rule, so specific ideas will not be directly attributed to any particular Fellow. W2050 Senior Fellows and Brain Trust members attending the committee meeting were: Joseph Toscano, Mario Vasilescu, Nikos Acuna, Srujana Kaddevarmuth, and Stacey Rolland. Also present were Diplomatic Courier Editor Melissa Metos and W2050 Executive Director Dr. Shane C. Szarkowski.

When we talk about artificial intelligence and societal health today, we are almost always talking about ways AI is being, or could be, used recklessly or harmfully—and what sort of regulations we should have in place to mitigate those harms. However, we’ve generally paid less attention to how AI can be used actively as a force for good in societal health. We asked members of W2050’s Senior Fellows cohort and Brain Trust to consider what conditions need to be in place for AI to be used as a tool for societal good.

Our unhealthy relationship with information 

The best ways to use artificial intelligence as a tool are widely debated, yet that debate tends to leap over an imperative first conversation: identifying the constraints that hinder our ability to tap into AI as a force for good. Addressing those constraints requires a better understanding of the roots of our ‘AI problem,’ which some committee members traced back to the rise of social media. During the rapid onset of social media, users and policymakers alike had difficulty placing a value on, and thus assessing, both personal information (and how it is used) and the information we consume. It has become evident that our relationship with information has eroded alongside technological advancement due, in part, to the lack of strong innovation surrounding data provenance—transparent insight into the origin, chain of custody, and changes made to a digital object. Without adequate provenance there is little to no transparency, creating a digital landscape ripe for manipulation and “fundamentally reorienting” our relationship with information. Along the way, our trust in expertise and in one another decays. 
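Provenance in this sense can be made concrete in a small amount of code. The sketch below is a minimal illustration in Python, with hypothetical names and structure rather than any existing standard (industry efforts such as C2PA address this in far more depth): it models a provenance record as an origin plus a hash-chained log of custody events, so that later tampering with the chain becomes detectable.

# Minimal, illustrative sketch of a data provenance record: an origin plus a
# tamper-evident chain of custody. Names and fields are hypothetical.
import hashlib
import json
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProvenanceEvent:
    actor: str         # who touched the object (publisher, editor, AI model, ...)
    action: str        # what happened: "created", "edited", "ai_generated", ...
    content_hash: str  # hash of the object after this action
    prev_hash: str     # hash of the previous event, linking the chain


@dataclass
class ProvenanceRecord:
    origin: str                                            # original source of the object
    events: List[ProvenanceEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, content: bytes) -> None:
        """Append a custody event, linked to the previous one by hash."""
        prev = self._event_hash(self.events[-1]) if self.events else "genesis"
        self.events.append(ProvenanceEvent(
            actor=actor,
            action=action,
            content_hash=hashlib.sha256(content).hexdigest(),
            prev_hash=prev,
        ))

    def verify_chain(self) -> bool:
        """True if no event in the chain of custody has been altered or removed."""
        prev = "genesis"
        for event in self.events:
            if event.prev_hash != prev:
                return False
            prev = self._event_hash(event)
        return True

    @staticmethod
    def _event_hash(event: ProvenanceEvent) -> str:
        return hashlib.sha256(
            json.dumps(event.__dict__, sort_keys=True).encode()
        ).hexdigest()


# Example use: record the origin and each change, then check the chain.
record = ProvenanceRecord(origin="local_newsroom")
record.record("reporter_a", "created", b"original draft")
record.record("editor_b", "edited", b"edited draft")
print(record.verify_chain())  # True unless an event in the chain was tampered with

The point of the sketch is simply that origin, chain of custody, and changes can all be captured in a verifiable form; real provenance systems add signatures, identity, and interoperability on top of this idea.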

Further, this absence of transparency impacts policymakers’ ability to define ‘truth’ and ‘misinformation,’ which, provenance aside, is already a monumental undertaking when information itself has various purposes—anything from persuasion to education. Without bipartisan consensus on such important terminology, or on what our information is worth, it becomes increasingly difficult to identify the root of the AI problem and assess risk. Unfortunately, a common response to this difficulty is compulsive mania—the behavioral urge to hastily fix symptoms rather than stepping back to discover the true problem. Admittedly, creating such provenance tools, and thus shaping policy that makes AI a force for good, is a daunting task. 

Bipartisan consensus is particularly difficult in today's charged environment, and it is further complicated by the complexity of the issue itself. As we experienced with climate change and the Paris Agreement, on a particularly complex issue clarity (and the attendant will to take action) often comes only as the risks associated with the problem increase. Worse, even if we arrive at a consensus on defining and valuing our information, that doesn't necessarily lead to a more equitable playing field—primarily due to the lack of private sector guardrails and widespread digital literacy. Digital literacy in particular has become increasingly difficult to incentivize when platforms are motivated by the monetization of information, a slippery slope that leads to an overabundance of low-quality ‘information creation.’ What we are left with are users and policymakers struggling to catch up as these barriers reinforce both the regulation problem and the user attention problem.

To reorient society, first simplify the problem

Perhaps it’s time to simplify this ‘AI problem’: take a step back and sit in the stage of discovery long enough to establish the meaning of ‘truth’ and ‘misinformation’ and the value of information. As many march forward with innovation, the risk is falling into the trap of compulsive mania along the way—hastily putting a bandage on the symptoms that emerge from AI use rather than delving into the root of the problem. Instead, by tapping into a first principles approach, we can revisit the lessons learned (good and bad) from previous attempts to address misinformation, and use those learnings to work toward bipartisan consensus on AI-related terminology. Only then can software engineers innovate new solutions like provenance software. This first principles approach is undoubtedly a better alternative than waiting for things to get worse before the root of the problem reveals itself. 

In tandem with software, this approach emphasizes the need for accessible education with a focus on digital literacy. This kind of education can replace compulsive mania and intersects well with provenance software: accurate provenance tools coupled with digital literacy would allow us to repair our currently problematic relationship with information. In fact, incentivizing public engagement with a better understanding of AI and information is a public good. Even if it doesn’t necessarily lead to a specific policy outcome, it will certainly lead to more open and informed policy debates. For example, on the policy front, some are discussing tools that would disclose information as being “Generated by AI,” letting people decide what they want to see rather than controlling the flow of information. Giving users the power to assess the value of their own information, and that of others, through provenance software would reorient society by building a healthier, more transparent digital ecosystem. 
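As a rough illustration of that disclosure-first idea, the sketch below attaches a “Generated by AI” label to each content item and leaves the filtering decision to the user's own preferences rather than to the platform. The field and function names are assumptions made for illustration only, not a reference to any particular product or proposal.

# Illustrative sketch: disclosure-based filtering. Content carries an AI label;
# the user, not the platform, decides whether to see labeled items.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class ContentItem:
    title: str
    body: str
    generated_by_ai: bool  # disclosure label attached by the publisher or tool


@dataclass
class UserPreferences:
    show_ai_generated: bool = True  # default: show everything; the user opts out


def visible_feed(items: Iterable[ContentItem], prefs: UserPreferences) -> List[ContentItem]:
    """Hide AI-generated items only if the user has opted out of seeing them."""
    return [item for item in items
            if prefs.show_ai_generated or not item.generated_by_ai]


# Example: a user who opts out of AI-generated content sees only the human-written item.
feed = [
    ContentItem("Weather report", "...", generated_by_ai=True),
    ContentItem("Local election recap", "...", generated_by_ai=False),
]
print([item.title for item in visible_feed(feed, UserPreferences(show_ai_generated=False))])

The design choice here mirrors the policy discussion above: the label controls disclosure, not distribution, so the flow of information is shaped by individual choice rather than by a central gatekeeper.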

Priorities for a healthier digital ecosystem 

We don’t have an AI problem; we have an information problem: We need to change how we think about AI and stop treating it as a completely new problem. Instead, we need to recognize that most AI problems are evolutions of challenges around personal data and information we’ve been struggling with since the age of social media. Looking at it this way, we can learn from our past mistakes and improve our relationship with information. 

Simplify the problem rather than oversimplifying the solution: By embracing first principles thinking we can simplify the problem itself and avoid falling into the all-too-familiar trap of compulsive mania, which by nature simplifies the solution rather than the problem. Getting bipartisan consensus on how we define ‘misinformation,’ ‘truth,’ ‘AI-generated,’ and other AI-related terminology is the first step. As one of our committee members articulated, we need to solve information pollution before we can solve pollution. 

Innovate a healthier digital ecosystem with digital provenance tools: Once we have consensus on terminology, we can move toward greater transparency by creating accurate digital provenance tools. These tools will allow us to discover the value of our data and assess information on an individual basis. Such an approach involves both complex code and community engagement, because even with, for example, check marks verifying information as ‘true,’ truth will always be somewhat relative. Thus, building a relational model, in addition to verifying where information came from and assessing its quality, is key, as the brief sketch after this list illustrates.

Rethink how we frame our digital information infrastructure: Markets incentivize the manipulation of our personal data and other information, while consumers are not incentivized to be good information caretakers. We need to put more effort into exploring alternative incentive structures, perhaps drawing on behavioral economics to encourage healthier actions. In a way, we need to be deprogrammed. Instead of doubting the validity of information while lacking any incentive to verify provenance, there should be a healthy balance of education (digital literacy) and incentive structures in place so that, whether out of self-interest or not, the average person cares about what happens to their information and about the quality of the external information they consume.
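As a loose illustration of pairing technical verification with a relational model, the sketch below blends a provenance check (for example, the verify_chain result from the earlier sketch) with a simple relational signal based on whom the reader already trusts. The weights, names, and scoring scheme are illustrative assumptions, not a proposed standard.

# Illustrative sketch: a trust score that combines (a) whether an item's
# provenance chain verifies and (b) how the item's source relates to the
# reader's own web of trusted relationships. Weights are arbitrary examples.
from dataclasses import dataclass
from typing import Dict, Set


@dataclass
class InformationItem:
    source: str
    provenance_verified: bool  # e.g. the output of a provenance chain check


def trust_score(item: InformationItem,
                reader_trusts: Set[str],
                endorsements: Dict[str, Set[str]]) -> float:
    """Blend a technical provenance check with a relational signal (0.0 to 1.0)."""
    technical = 1.0 if item.provenance_verified else 0.0
    # Relational signal: direct trust, or endorsement by a source the reader trusts.
    if item.source in reader_trusts:
        relational = 1.0
    elif any(item.source in endorsements.get(trusted, set()) for trusted in reader_trusts):
        relational = 0.5
    else:
        relational = 0.0
    return 0.6 * technical + 0.4 * relational  # illustrative weights only


# Example: a verified item from a source endorsed by one the reader already trusts.
print(trust_score(InformationItem("local_newsroom", True),
                  reader_trusts={"public_library"},
                  endorsements={"public_library": {"local_newsroom"}}))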

About Melissa Metos: Melissa Metos is a Diplomatic Courier Editor and Writer.

About Shane Szarkowski: Dr. Shane C. Szarkowski is Editor-in-Chief of Diplomatic Courier and the Executive Director of World in 2050.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.
