Artificial Intelligence (AI) offers enormous opportunity across a vast array of sectors: healthcare, food safety, energy conservation, transportation, and the list goes on. AI systems can also be applied to a variety of business processes such as HR, consumer engagement, manufacturing, and sales to drive efficiency and productivity for businesses. Despite these benefits, risks associated with AI have taken center stage in recent years. AI poses the potential for abuse, including mass surveillance by governments, manipulation of democratic processes, bias reinforcement through the use of non-diverse data sets, and discrimination. Systems can also fail, whether through security breaches or through flawed override mechanisms like those implicated in the Boeing 737 MAX disasters.
So, What Is the Problem?
These risks must be anticipated and mitigated to fully harness the benefits of AI and build trust in its use. Over the past several years, organizations around the world have issued a plethora of guidelines and frameworks on the ethical development and use of AI. After sifting through the details, several principles common to the vast majority of these guidelines and frameworks become apparent, all pointing toward the need for AI to amplify human capabilities whilst ensuring that human well-being and safety are safeguarded. These principles highlight the need to protect privacy and human rights; ensure the transparency and traceability of AI algorithms; promote security, fairness, justice, and freedom; and clearly assign responsibility and liability for decisions made through AI. Recent years have also seen a growing number of private-sector ethical initiatives around AI.
All this looks good so far, but there is one big problem. These guidelines, principles, and ethical initiatives are not binding. This means that there are no sanctions for those who breach these principles, and no redress for those who face negative consequences from a “bad” decision by an AI-driven application. Entrepreneurs typically focus on maximizing profit rather than social utility, which can erode the credibility of their motives when they claim to promote ethical practices. This is made worse by the non-binding nature of the principles and the absence of any transparent mechanism for monitoring and reporting, calling into question how the sector can be held accountable. Lastly, the current dominance of a handful of players, with a large part of the sector merely “following” the lead of the “giants”, is also perceived as an issue, especially from a social justice and equality perspective.
Moves Toward AI Governance
Governance has an important role to play in this debate. US lawmakers recently moved to ban the use of facial recognition by federal law enforcement agencies. This came after intense lobbying by activists, while giants like IBM, Microsoft, and Amazon acted on their own to limit the sale of their AI-driven technologies. However, outright bans should be the last resort, not the first option. To maximize the benefits and manage the risks of novel technologies, regulators need to get ahead of the curve. The real questions are: 1) what type of governance do we want, and 2) how do we go about it?
Whilst pockets of legislation regulating AI do exist, 2019 saw a clear move toward regulating AI technologies at a regional and global level. Europe is the clear leader in working toward comprehensive regulation of AI technologies. The EU’s approach is to set minimum regulatory standards based on certain criteria. To start with, all laws currently applicable to humans also apply to AI technologies; the use of AI technologies needs to be fully disclosed; confidential information should be protected by AI systems; AI should not be weaponized; and all AI systems should have an “off switch”. In terms of accountability, it is clear that the buck must stop with the human and not the machine.
There is also a consensus that not all AI applications need the same level of regulation. The recent EU White Paper on AI proposes a risk-based approach based on the sensitivity of the sector and the type of AI application. For example, an appointment-booking AI tool in a hospital needs a different set of regulatory requirements than a diagnostic tool, even though both applications are within the health sector. The overall approach is therefore toward sector-specific but flexible regulation. The EU is also looking at policies beyond AI, for example data governance policies that ensure quality data use and protection. Going forward, the EU is contemplating the creation of AI centers of excellence where solutions for AI governance can be tried and tested.
Another major player in the creation of rules for AI is the Council of Europe. It is complementing the work of the European Commission by developing legal standards aimed at protecting fundamental human rights through its Ad hoc Committee on Artificial Intelligence (CAHAI). There is general consensus that instead of reinventing the wheel, the first step should be to examine current regulations in the context of AI applications and identify gaps that can then be addressed.
Another issue is intergovernmental cooperation, which currently lags behind scientific cooperation. Recognizing this, the Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, supported by a Secretariat hosted by the OECD in Paris. With the notable exception of China, GPAI already includes many of the leading AI countries, including EU member states, the USA, India, Singapore, and Japan. While it is too early to judge the success of this initiative, it is certainly a good start.
Coming up with a full set of rules will take time. Moreover, better-regulation principles tell us that enforcement through sanctions and fines alone does not lead to optimal compliance. We therefore need to look beyond binding rules. This includes taking a closer look at organizational culture and processes within companies and providing positive reinforcement for those doing a good job in terms of ethical behavior. Creating benchmarks and certifications for “trustworthy AI” is another way forward, though these certifications should be dynamic and outcome-based.
However, ethical outcomes can be subjective, and are very much a function of individual and collective cultures and beliefs. This adds to the complexity of this issue.
The above approach highlights the need for governments to understand technology and increase their ability to measure, analyze, benchmark, and forecast the deployment of AI systems.
Where Does the Industry Really Stand and What Should It Do?
Businesses need clear rules and common standards to minimize their risks, safeguard their reputation, secure their investments, and build trust in their products. From a company’s perspective, voluntarily incorporating ethics into the business process makes business sense, though some would argue that the private sector only really moves toward trustworthy AI when shareholders and customers demand it!
The best way for businesses to build trust is to act before governance mandates it. Incorporating ethical practices from the start goes a long way toward building trust. Practically, this can be achieved by embedding ethicists in design teams, just as members of marketing, legal, and supply chain teams are already included. Another tactic is to establish external advisory bodies that can review practices and provide unbiased advice. The key is to stay consistent and to coordinate efforts with as many external stakeholders as possible, including regulators. It is important to take the pulse of society and act in line with its expectations. Industry should also engage frequently and comprehensively with governments and academia to track developments and evaluate the impact of its products on society.
…and Finally
We need to work together to build an ecosystem of trust, driven by cross-disciplinary collaboration and constant reflection. It is our joint responsibility. Today there is an information asymmetry between the private sector and policymakers. Regulators need industry to make their job easier, and industry needs regulators to set rules that provide stability and incentives for innovation and growth. Ethicists need to help us understand the evolving risks and threats of AI applications, and academics need to conduct interdisciplinary, targeted research to offer unbiased and neutral solutions. As citizens, we need to increase our understanding of AI applications and their implications. After that, it is just a matter of connecting the dots. Once you scratch the surface, you see that there is immense willingness on all sides to do exactly that!