AI Could Be Salvation or Doom of Our Institutions
January 10, 2024
World in 2050 Senior Fellows met in December to discuss the impacts that AI will have on the health of our institutions in 2024 and beyond. Fellows highlighted the likely impact of AI-empowered private actors, as well as the pivotal role of democracies in promulgating AI regulation and best practices.
This report was compiled from a collective intelligence gathering of World in 2050’s (W2050) Senior Fellows (Societal & Governance Institutions Committee). The meeting took place under the Chatham House Rule, so ideas will not be directly attributed to any specific Fellow. W2050 Senior Fellows attending the committee meeting were: Christopher Karwacki, Lisa Gable, Thomas Garrett, and Dr. Tina Managhan.
For some time, W2050 has been monitoring a trend of declining faith in our societal and governance institutions—this is concerning given the increasingly global nature of problems facing us today. The rapid development of AI has the potential to worsen this institutional crisis, or to provide a key to fix what ails our institutions. W2050’s Senior Fellows committee on Societal and Governance Institutions met recently to discuss the impacts AI could have on our institutions and what steps we can take to make sure AI helps us build more effective and resilient institutions.
Eroding Institutional Trust, Efficacy Lead to Private Actor Involvement
Trust in the effectiveness and legitimacy of our social and governance institutions has been on the decline for some time. One side effect is that private actors have begun to fill roles where institutions are under-performing, or are perceived to be. One recent example is the 2021 Taliban takeover of Afghanistan, when charities and NGOs stepped in to evacuate Afghans who had aided NATO once it became clear governments were failing to do so.
This example gives us some reason for hope: private actors can step in successfully when our institutions falter. But our institutions aren’t going anywhere, and we need them to be robust to be future-ready. As AI becomes more widely available and its applications proliferate, private actors will become more capable of filling institutional gaps. However, those actors are not regulated or beholden to the public, leaving room for self-interested action. Further, some institutional functions are not well suited to private actors, and if private actors appear to make institutions obsolete, public buy-in to the institutions we still need will likely erode further.
Role of Democracies in Regulating AI, Combating Bias
All technologies tend to be more enabling for some people than for others. One hope for AI is that it will help us address issues of equity and empower marginalized groups; for instance, AI can aid the remote diagnosis of rare diseases, creating better healthcare access for individuals who lack access to modern facilities. However, there are concerns that prevailing issues, such as bias in AI training data and uncertainty about which innovation pathways will be both profitable and beneficial, will deepen inequality rather than help solve it.
In recent years, summits on how to regulate AI have been dominated by undemocratic governments, which has inhibited open and critical conversations about regulation and best practice. Democracies provide a better space for conversations about how to regulate AI and combat bias, given the democratic tradition of open expression and freedom of thought. That openness is what makes possible more inclusive conversations that bring in marginalized groups. Yet we must also be mindful of the inequalities that exist within democracies and work at inclusion as we discuss regulation and bias.
Priorities for Ensuring AI Bolsters Our Institutions
Institutions, Private Actors Must Collaborate: For now, private actors are better suited for some tasks traditionally carried out by institutions, and their reach will expand as they learn to leverage AI, which brings issues of its own. Looking at Afghanistan again, a lack of coordination between private actors and institutions led to breakdowns. Afghans who could have been evacuated by charities were stuck under “shelter in place” orders from embassies. Meanwhile, private actors evacuated some Afghans who were likely human rights violators because those actors lacked access to the information institutions held. Institutions and private actors will inhabit many of the same spaces for the immediate future, and they must learn to coordinate and empower one another.
Adapting AI to Institutions: One reason institutions face a crisis of public trust is that they are slow to adapt to changing needs and suffer from access gaps. AI, properly tailored to the needs of individual institutions and their constituencies, can make institutions more responsive and more accessible. Layers of bureaucracy can make it difficult for global publics to access an institution’s services, and those same layers make institutions slower to respond to evolving situations. Many bureaucratic functions are precisely the sort of thing AI is good at streamlining, given its ability to digest masses of information quickly.
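To make the streamlining claim concrete, the sketch below is a hypothetical illustration of an automated triage step that could sort incoming public service requests before a human reviews them. The routing categories, keywords, and office names are our own assumptions for demonstration, not drawn from the committee discussion; a real deployment would use a vetted model and an institution’s actual categories.

```python
# Hypothetical sketch: automated triage for incoming service requests.
# Every category, keyword, and office name here is an illustrative assumption.
from dataclasses import dataclass

# Assumed routing table mapping keywords to offices. Purely illustrative.
ROUTES = {
    "passport": "Consular Services",
    "visa": "Consular Services",
    "benefit": "Social Programs",
    "permit": "Licensing Office",
}
DEFAULT_ROUTE = "General Intake"


@dataclass
class Request:
    request_id: int
    text: str


def triage(request: Request) -> str:
    """Route a request to an office based on simple keyword matching."""
    lowered = request.text.lower()
    for keyword, office in ROUTES.items():
        if keyword in lowered:
            return office
    return DEFAULT_ROUTE


if __name__ == "__main__":
    samples = [
        Request(1, "I need to renew my passport before travel in March."),
        Request(2, "How do I apply for a construction permit?"),
        Request(3, "General question about office hours."),
    ]
    for req in samples:
        print(f"request {req.request_id} -> {triage(req)}")
```

Even a simple routing layer like this, placed in front of human caseworkers rather than replacing them, is the sort of responsiveness gain described above.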
Inclusive Consultation: One of the biggest roles of institutions will continue to be laying down regulations and best practice guidelines, both of which are crucial to the healthy development of AI. To address AI bias and ensure innovations work for everyone, not just privileged segments, institutions must actively seek consultation with marginalized groups as well as the usual slate of recognized stakeholders. These groups will be key to getting fuller perspectives on bias and ensuring innovators have a deeper understanding of community needs.
Fast Tracking Innovation: Institutions typically promulgate regulations and best practice guidelines well behind the tech innovation curve. In some cases, it may be best to slow or curtail certain types of innovation (see below). In other cases, there is a clear case that AI can have immediate and profound impacts on the public good. In such instances, such as the remote diagnosis of disease mentioned above, a productive role for institutions would be to help specific applications of AI clear regulatory hurdles as rapidly as is safely possible.
Bifurcated Regulatory Pathways: The EU recently published guidance on AI regulation that sorts uses of AI into four risk tiers: unacceptable, high, limited, and minimal. These tiers help shape what sort of regulatory hurdles must be cleared for a given application of AI. The same should be considered by sector: areas with clear public goods and well-understood or limited risks, such as education and medicine, should have different regulatory hurdles and best practice guidance than, for instance, AI in the security space (law enforcement or military).
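One way to picture such bifurcated pathways is as a lookup from sector and risk tier to a review track. The sketch below is purely illustrative: the tier names loosely echo the EU’s risk-based approach, but the sectors, track names, and routing rules are assumptions made for demonstration, not actual EU rules.

```python
# Illustrative sketch of sector- and risk-aware regulatory routing.
# Tier names loosely echo the EU's risk-based approach; the sectors and
# review tracks are hypothetical placeholders, not actual regulation.

# Assumption: sectors with clear public goods and well-understood risks.
FAST_TRACK_SECTORS = {"education", "medicine"}


def review_track(sector: str, risk_tier: str) -> str:
    """Map a (sector, risk tier) pair to a hypothetical review pathway."""
    if risk_tier == "unacceptable":
        return "prohibited"  # no pathway: the use is disallowed outright
    if risk_tier == "high":
        return "full conformity review"
    # Limited or minimal risk: public-good sectors get a lighter pathway.
    if sector in FAST_TRACK_SECTORS:
        return "expedited review"
    return "standard review"


if __name__ == "__main__":
    cases = [
        ("medicine", "limited"),
        ("law enforcement", "high"),
        ("education", "minimal"),
        ("security", "unacceptable"),
    ]
    for sector, tier in cases:
        print(f"{sector} / {tier} -> {review_track(sector, tier)}")
```

The design point is the two axes working together: risk tier sets the floor for scrutiny, while sector membership determines whether a fast-track pathway is even on the table.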