Human-centric AI: Consensus building towards AI innovation

Image courtesy of Ecole polytechnique from Paris, CC BY-SA 2.0, via Wikimedia Commons.
March 26, 2025
A human-centric AI approach rooted in trust, ethics, and shared vision is key to fostering innovation and responsible adoption, writes Daniel Shin.
As governments around the world pursue different approaches to foster AI innovation while also regulating AI development, deployment, and use, public distrust of AI has been growing due to the technology’s unmanaged, disruptive effects on various sectors. If such distrust of AI continues to go unchecked, market demand for AI adoption may decline, potentially frustrating the progress of AI innovation and its implementation across communities.
What is required in this increasingly divisive environment is a shared vision of AI that respects the public’s genuine aspirations for the technology while pairing them with norms that can properly actualize such a vision. Government and key stakeholders should not regard AI as a mere utilitarian tool; instead, they should consider how the technology could enhance society’s cultural, educational, and other standards.
For instance, the early internet spurred public excitement because of its seemingly endless potential to enhance the information society with democratized benefits for all. If a shared vision of AI can inspire similar widespread enthusiasm, then strong foundational AI norms can lead the way toward realizing the public’s expectations.
Furthermore, any pursuit of AI regulation, guidance, and best practices should always principally support society at large. Much of the public’s distrust of AI appears to stem from the growing perception that the technology chiefly serves everyone but the people. This perception can harden into the cynical belief that AI is merely a disruptive force that benefits only a few. If the public’s distrust of AI is left unaddressed, widespread adoption of the technology will stall, and society will lose its opportunity to harness the benefits of AI for all.
Fortunately, the OECD AI Principles and NIST’s AI trustworthiness characteristics provide a roadmap for policymakers and industry personnel to pursue a human-centric approach to AI development, deployment, and use. Even in the absence of a mandatory regulatory regime, there is a ripe opportunity for government, industry, academia, and other organizations to collaborate on a shared AI standard that supports the people. Intersectoral collaboration is key to building consensus on AI policy.