a global affairs media network

www.diplomaticourier.com

No good AI governance without good measurement

April 15, 2026

AI governance requires better workforce data, with partnerships helping governments measure talent flows and close policy blind spots, writes Siddhi Pal.

There is a basic problem with how governments approach AI policy: They are making decisions about a labor market they cannot see. National employment surveys were not designed to track where AI talent is being trained, where it moves, or why it leaves. By the time official surveys are even redesigned to ask the right questions, the landscape has already shifted again. And so the governance gap in emerging technology is not really about missing rules, it is about missing information.

I constantly run into this in my own work. At Interface, where I lead research on AI workforce policy across the EU, we built a cross-country dataset tracking AI professionals across Europe because nothing comparable existed in the public domain. To make sense of it, we developed a three-tiered classification framework that distinguishes between the technical layers of the AI workforce, from applied users to deep infrastructure builders. It sounds straightforward, but most existing datasets lump these roles together, which makes it nearly impossible to pinpoint where specific talent gaps actually sit. The framework is now being adopted by researchers and organizations globally, which speaks less to our ingenuity than to how basic the gap was. What our data reveals is often uncomfortable: Europe is training world-class AI talent and watching it walk out the door. These findings should be shaping immigration policy, industrial strategy, and education reform. Closing that visibility gap is a precondition for credible AI policymaking at the national level.

This is the first role non-government actors play in closing governance gaps: producing the evidence base. Think tanks, research organizations, and cross-border coalitions can move faster, dig deeper, and work across borders in ways that national statistical offices simply were not built to do.

But measurement is only part of it. Consider AI literacy. Article 4 of the EU's AI Act requires organizations deploying AI systems to ensure their staff are adequately trained, a provision effective from February 2025. Reasonable enough. Except nobody had agreed on what "AI literacy" actually means. That work fell to a partnership between the European Commission, the OECD, and Code.org, which together developed a framework now being finalized for 2026. The regulation created the obligation. It took a multi-track partnership to make it meaningful.

The same pattern holds for upskilling. The Commission's AI Skills Academy, part of its AI Continent Action Plan, is built explicitly on collaboration with industry through the Pact for Skills, alongside universities, digital innovation hubs, and AI Factories. Governments set the direction. Delivery required everyone else. 

The instinct in AI policy circles is still to treat partnerships as a nice complement to institutional action, when in fact they are fundamental. They are the mechanism through which multiple institutions can work together to govern a space that moves faster than any one of them alone.

About Siddhi Pal:
Siddhi Pal is Lead, AI Workforce & Innovation at Interface, and a member of World in 2050’s TEN.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.