EU Seeks to Limit High-Risk AI

June 11, 2021

A new proposal would offer sweeping legislation to ensure a market for the development and use of trustworthy artificial intelligence in the European Union, while protecting individual privacy and limiting harmful bias.

With the revelation that Tesla is activating in-car cameras to monitor motorists using the self-driving feature, the European Commission’s proposal to regulate high-risk AI systems offers a glimpse at what the future of artificial intelligence (AI) could look like in the European Union––for companies like Tesla, and for AI in general.

On April 21, 2021, the European Commission released the “Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence,” which, if approved, will become the Artificial Intelligence Act. In the text, the European Commission proposes standardized definitions and restrictions for AI, alongside calls for new regulatory bodies to test and promote safe AI systems in the EU. The proposal’s stated objective is to ensure a market for “the development and use of trustworthy artificial intelligence in the Union.”

AI is complex: it is a group of technologies that perceive patterns, calculate decisions, and predict outcomes by weighing hundreds of variables. It seeks to approach the computational abilities of the human mind––all without consciousness. But because AI is a tool, its outcomes––whether positive or negative––depend on intent. A programmer can steer the system toward societal benefit or great harm.

With respect to the benefits afforded by AI, the European Commission identifies pressing societal issues that the technology could help address: climate change, education, health, the public sector, finance, mobility, home affairs, and agriculture. Beyond these sectors, the Commission recognizes that encouraging AI innovation could give the EU a strategic competitive advantage in international markets.

However, AI technologies come with potentially dangerous drawbacks. The Commission notes that in the case of system failures or even small errors, AI can jeopardize the health and safety of European citizens. Think back to Tesla’s AI in self-driving vehicles: what could be the cost of an error? Conversely, AI can be dangerous if it works too well. When assessing human activity or characteristics, AI can gather information to the point that it oversteps the user’s fundamental rights to human dignity, privacy, and protection of personal data.

Given the split between benefits and concerns, the proposed Artificial Intelligence Act pursues two main aims: encouraging innovation and restricting high-risk AI systems. To bolster Union-wide innovation, the Commission plans to offer AI-specific resources to all member states through what it terms “an innovation sandbox.” This AI regulatory sandbox will serve as a database of personal information “lawfully collected for other purposes” in the EU. Member states can use these data to develop, improve, and test their AI systems before they enter the market.

Although it calls for innovation, the bulk of the European Commission’s proposal aims to regulate and limit AI use, namely by identifying and categorizing the characteristics of “high-risk” AI. As outlined in the proposal, these systems “pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence.”
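That definition weighs two factors together. As a purely hypothetical illustration––not a formula found anywhere in the proposal––a provider triaging its own systems might combine severity and probability along these lines, sketched here in Python; the scoring rule, threshold, and example values are invented for this sketch:

# Purely illustrative: the proposal prescribes no scoring formula.
# The threshold and example values below are hypothetical.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    severity: float     # severity of the possible harm, 0 (negligible) to 1 (severe)
    probability: float  # probability of that harm occurring, 0 to 1

def risk_tier(system: AISystem, threshold: float = 0.5) -> str:
    """Weigh both factors named in the definition: severity and probability."""
    score = system.severity * system.probability
    return "high-risk" if score >= threshold else "lower-risk"

# A hypothetical credit-scoring system with severe, fairly likely harms.
print(risk_tier(AISystem("credit scoring", severity=0.9, probability=0.7)))  # high-risk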

Conveniently, the Commission lists examples of high-risk AI, which tend to fall into three groups based on their negative outcomes. First, AI that could cause physical harm if it malfunctions in sectors like traffic, water supply, gas, heating, and electricity. Second, human-selection AI that could inherit its programmer’s biases when used for certain purposes in areas like job recruitment, educational evaluations, credit scoring, and immigration documentation. Third, AI designed for certain tasks in criminal justice, including individual risk assessment, polygraph analysis, deep fake detection (manipulated media), personality assessment, and criminality assessment.

What can the EU expect in light of these classifications? Restrictions, but safety. If approved, the Artificial Intelligence Act would establish a new European Artificial Intelligence Board (EAIB) composed of representatives of the European Commission and the member states. This board would oversee Union-wide rules for AI systems, enforced by fines.

Under these new rules, some AI technologies will be off the table. The Commission plans to prohibit all systems intended to distort human behavior or exploit the vulnerabilities of children and of people “due to their age, physical or mental incapacities.” Additionally, the EU will see the prohibition of systems that violate the rights to dignity and non-discrimination. In other words, AI may not calculate the trustworthiness of people according to their past social behavior or predicted personality characteristics. Finally, there will be restrictions on law-enforcement surveillance that uses biometric identification––the evaluation of physical or behavioral human characteristics––in public spaces. This last restriction is qualified, however: the Commission permits biometric identification in public spaces in the contexts of threats to life, terrorist attacks, identification of targeted victims, and identification of missing children.

If a specific AI system does not violate these rules but is still classified as high-risk, its provider must meet a series of requirements. Perhaps most obviously, the AI must work: it should perform consistently and without errors. The provider must also ensure that the AI’s functions are transparent, recorded, and controllable in order to gain approval from the EAIB. Even once the EAIB approves the AI, the process isn’t over: the provider must continuously disclose to users that the technology with which they are interacting is an AI system.

Systems with limited risk need only disclose that they make use of AI, and systems with minimal risk may go unregulated by the Artificial Intelligence Act.

According to the proposal’s financial statement, implementation of the provisions for innovation, restrictions, and prohibitions will occur within one and a half years of the Regulation’s adoption. But adoption isn’t guaranteed to happen quickly––it may take years before the rules become law. In the meantime, the proposal serves as an early model for other countries considering regulations for AI technology.

About Thomas Plant:
Thomas Plant is an Associate Product Manager at Accrete AI and co-founder of William & Mary’s DisinfoLab, the nation’s first undergraduate disinformation research lab.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.