Computer Vision Innovation Outpacing U.S. Policymakers
February 3, 2022
The transformational potential of computer vision (CV)—the tech behind facial recognition, object detection, and image segmentation—is astounding but also fraught with danger. The U.S. lead in CV innovation is at risk, as a lack of policy guidance holds back implementation, writes Lulio Vargas-Cohen.
A zoo began requiring pass-holders to submit to a face scan to enter the venue. One of the park’s visitors successfully sued the park after a court agreed with the visitor’s contention that this requirement was a violation of privacy and consumer rights. One might have assumed this litigation took place in Austin, Vancouver, or Marseille. In fact, it took place in Zhejiang, a province on China’s eastern coast.
The Promise of CV
Facial recognition, the technique employed by the zoo, is just one of computer vision’s (CV) numerous capabilities. China has invested significantly in the development and commercialization of the technology, a subset of AI, and is also at the forefront of developing public policy around its use. The U.S., meanwhile, lags with a patchwork of underdeveloped guidelines at the local, state, and federal levels.
As CV moves from the realm of science fiction to real-world applications—both beneficent and nightmarish—this regulatory vacuum becomes less tenable. The current situation in the U.S. creates uncertainty for citizens and businesses while hindering the country’s ability to lead as a standard-setter. The U.S. should match its research and industry might with a focus on building strong rules for a future that will almost certainly be shaped by CV.
CV refers to related software and hardware that captures and extracts information about the physical world, using that data to achieve a specific task — much as the human eye might. Unlike a person, these algorithmic ‘eyes’ are tireless and, when trained appropriately, often more accurate than their biological kin. With increasing reliability, CV techniques such as facial recognition, object detection, and image segmentation are enhancing our capabilities across critical sectors including healthcare, government, agriculture, and manufacturing. While autonomous driving is probably the most well-known use case, CV could also empower frictionless shopping, secure and high-efficiency manufacturing, improved healthcare outcomes, and help shape fashion. CV startups have raised billions of dollars in recent years, a figure likely exceeded by investments from established firms and governments.
CV Around the World
China leads the world in applying CV technology, with the government prioritizing investment in AI and supporting CV-focused firms such as SenseTime, Megvii, and Yitu. While Beijing previously allowed industry players to ideate, train, and deploy CV throughout the country with little scrutiny, recent public blowback has prompted the government to place guardrails on how data can be collected and used by industry. (Naturally, the authoritarian state’s security organs remain unrestrained in their use of the technology.) China’s Personal Information Protection Law (PIPL), which came into force late last year, sets parameters around the collection and use of people’s data, with clear implications for CV. The PIPL sets stringent requirements that organizations operating in China will need to observe, providing a measure of predictability even as the cost of doing business has grown.
Europe is also taking regulatory steps, having proposed a legal framework for AI whose reach for organizations worldwide would be akin to that of the GDPR. The EU approach, which focuses on so-called "high-risk" uses of AI, will likely cover many applications of CV. As with the GDPR, the bloc’s landmark data privacy legislation, the EU hopes its proposal will become the de facto global standard. Brussels has said as much, expressing an intent to “export its values across the world.”
In contrast, the U.S. exhibits an uneven approach to data privacy and protection with little focus on CV. Some states—including California, Colorado, and Virginia—have enacted comprehensive data protection laws, and some localities have voted to restrict facial recognition. To date, however, no federal law regulates the commercial use of CV. U.S. federal law enforcement routinely employs CV in contexts ranging from everyday crime-fighting to cross-border travel, though this use is subject to protections afforded by the Privacy Act. Despite these protections, the sheer amount of data that can be inferred using this technology raises questions about how much information authorities need to retain to meet their missions. This disparate approach makes it difficult for citizens and businesses alike to understand what ground rules, if any, apply to their use of CV.
The Future of CV Regulation
Too much regulation hinders technological innovation. Yet for both the public and private sector, many potential uses will sit in an ethical gray zone. There are pressing questions for policymakers to consider. Is CV-powered work monitoring appropriate? Who is at fault when a CV algorithm results in a miscarriage of justice? To what standard must algorithms be able to offer human-interpretable ‘explainability’ into their decision-making? Most potential use cases will not be clearly classifiable as benign or impermissible.
Recent polling suggests that Americans are divided on the potential of AI and whether it will be misused. While certain consumer-focused CV use cases, such as Apple’s Face ID, have entered the public consciousness, most have not. With AI adoption accelerating, new applications are certain to become the subject of public and legal scrutiny, particularly given the incentives of polemicists intent on inflaming issues of public policy.
These uncertainties make it difficult for citizens to understand their rights. They also complicate companies’ efforts to ensure regulatory compliance and measure business risk. Over time, they dull the U.S.’ ability to remain an attractive host for innovation. There are a few key areas where policymakers can make a real impact in addressing these challenges.
As a first measure, federal agencies should play a far more active role as a clearinghouse to achieve consensus on frameworks for ethical, robust, and authoritative AI and CV standards. The AI Risk Management Framework, spearheaded by the National Institute of Standards and Technology (NIST), is a good start. So is the creation of a top-level National Artificial Intelligence Advisory Committee housed within the Commerce Department. These activities should be accelerated and serve as the bedrock for how Congress may approach subsequent legislation related to AI. Furthermore, research suggests that scientific organizations such as NIST enjoy the greatest confidence from AI stakeholders, underscoring the critical role they must play.
Second, more regulatory sandboxes should be identified and stood up at the federal and state levels. These would be regulatory ‘safe grounds’ where specific innovations in CV can be tested and evaluated transparently—potentially in partnership with public, private, and academic stakeholders. These fora would provide insight into the viability of new applications or assess sensitive research areas, such as measuring algorithmic bias, in a calibrated environment. They would also encourage risk-taking, inform regulatory best practices, and foster cross-sectoral exchange. This is a focus area highlighted in the EU’s proposed AI framework, piggybacking off similar vehicles focused on GDPR compliance in areas such as healthcare data. A robust testbed already exists in the U.S. for autonomous vehicles; it should be expanded to many other areas of CV innovation.
Finally, the U.S. should be at the vanguard in promoting and harmonizing global standards for CV and AI. With the EU and others still developing legislation, the U.S. has an opportunity to influence the design of universal policy frameworks that promote innovation while restricting dignity-violating use cases. Regulatory codification of AI systems is not a trivial matter; it may determine how easy it is for CV-powered surveillance tools, such as those exported by China, to be used by governments bent on far-reaching social control.
The transformative potential of CV is too great for policy passivity from U.S. lawmakers. Industry, academic, and public experts should be shaping the contours of what would eventually become national legislation, with a vision to influence global standards that promote commercial and technological innovation while respecting civil liberties and privacy. To do otherwise is to prolong uncertainty about the technology. Worse yet, it risks ceding ground to policymakers in Beijing, who show no reluctance in shaping global AI standards to mirror what we see within China’s borders.