
a global affairs media network

www.diplomaticourier.com

What next for AI, what next for us?

Image via Adobe Stock.

April 1, 2025

The future of AI has become both very complex, and hotly contested—but it has not yet arrived. Diplomatic Courier today launched a new anthology of commentaries from our W2050 expert network on what the future of AI should look like, why, and how to get there.

The future of AI is here, and it's shocking. Except that it isn't, really. From AI agents that can do deep research tasks and understand context, to genAI content that's increasingly hard to distinguish from human-generated work, to super agents that not only understand missions and act independently but can themselves create new AI agents to carry out those missions, much of this is unexpected and, at the same time, very expected. The pace of disruptive innovation is a surprise, perhaps, but we always knew things would move deceptively quickly. That's why, for years, experts (including many of ours) have called for standard setting and best practice norms, if not outright regulatory guardrails, for AI.

It was always going to feel a bit like a hamster wheel: running without ever quite arriving, always chasing innovation with best practice while looking for more broadly applicable standards to give innovators signposts to guide development. Now that task looks more complex than ever, because we have at least three distinct and increasingly divergent approaches to AI. China is leaning into state control of AI as a strategic asset, a path many autocratic states have followed and will continue to follow. The EU is building regulatory guidelines meant to be flexible and able to pivot with the times, but which critics say are too restrictive and will stifle innovation. Meanwhile, the U.S. has pivoted toward an "innovate, innovate, innovate!" approach to stay ahead of its competitors in the AI race, which leaves many concerned about what a lack of standards and safety measures means for the future of society.

There are echoes here of a prisoners' dilemma, or of a tragedy of the commons, though it fits neither exactly. Without an internationally recognized set of standards (not regulations per se, but standards), we risk a race to the bottom, sacrificing AI that is safe and fair for AI that is more powerful than our competitors'.

Truly global standards may be beyond us in the near term. As a society, or as a set of global societies, we have understood for some time that exponential technologies would bring us to an inflection point. Having failed to agree on standards for AI development, we are now entering a period of AI uncertainty we foresaw but could not prevent.

What will this mean for our shared digital future? Regardless of what you may have read or heard, that’s unclear. To gain some insights into what scenarios we might see play out, or what we can do to steer toward scenarios we may prefer, Diplomatic Courier asked its expert community for their analysis. The response was exactly what we expected—overwhelming in volume and representative of a wide sampling of perspectives. 

We hope this digital compilation of commentaries gives you some insight into what the future of AI could be, what you would like it to be, and what you might be able to do to help that future come into being.

About Shane Szarkowski:
Dr. Shane C. Szarkowski is Editor-in-Chief of Diplomatic Courier and the Executive Director of World in 2050.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.