It will feel like Tuesday: AI and the architecture of invisible control

February 27, 2026
Pentagon pressure on Anthropic to cooperate with defense priorities has revived the debate over whether private entities can be trusted to self-regulate AI. But that question has already been answered, and it obscures a more important one, writes James Nicolay.
On Tuesday, February 24, 2026, Anthropic (the artificial intelligence company founded explicitly to ensure AI remained safe) released a new version of its Responsible Scaling Policy and abandoned the commitment that had defined the company since its founding.
Editors’ Note: This special feature is an essay from an author with nearly twenty years in military special operations and intelligence. It is presented in the public interest, lightly edited, as we consider these insights both unique and important to the public debate.
The same day, the Pentagon issued an ultimatum. Defense Secretary Pete Hegseth gave Anthropic a Friday deadline: cooperate with defense priorities or face invocation of the Defense Production Act, supply chain risk designation and cancellation of existing contracts.
The trigger was a military operation. On January 3, U.S. special operations forces conducted a raid in Caracas, capturing Venezuelan President Nicolás Maduro. The Wall Street Journal reported on February 13 that Anthropic’s Claude had been used during the operation, through its partnership with Palantir. Axios confirmed with two sources that Claude was used “during the active operation itself.” Afterward, an Anthropic employee reportedly reached out to Palantir and asked a direct question: how was Claude actually used? The company that built the tool did not know how the tool had been used. The government that used it did not appreciate being asked.
Anthropic’s chief science officer told TIME the company would not make “unilateral commitments” if its competitors were “blazing ahead.” The new policy went further, conceding that “the developers with the weakest protections would set the pace.” The company built to be the industry’s conscience had published, in its own policy document, that conscience was a competitive disadvantage, and that the most reckless actor in the room would determine the speed of advance for everyone else.
Oppenheimer built the bomb. The Manhattan Project treated him as indispensable, until the bomb worked. Within a decade, the physicist who had made it possible had his security clearance revoked, his moral objections reframed as disloyalty. The national security state that had needed his genius no longer needed his judgment.
Anthropic retains two stated red lines: no autonomous weapons, no mass domestic surveillance. Those red lines are now the subject of a Pentagon ultimatum.
Anthropic has until Friday (today) to respond. The current debate is over whether AI companies can be trusted with self-regulation, but that question is already answered. There is another question almost nobody is asking, and it isn’t whether AI will take control.
It’s what it will feel like when it does.
I spent my career in special operations, overseas, running intelligence teams whose job was to find people who did not want to be found. Much of that work was enabled by Palantir, the same company that now hosts Claude on the Pentagon’s classified networks. For years, my team had been using machine learning tools to devastating effect in manhunting. I could see the trajectory.
The dominant narrative about AI misunderstands both the technology and the incentive structure. The danger is not what AI might do to us. It is what is already being done with it.
YouTube’s recommendation algorithm was found to systematically radicalize users, not because it was programmed to promote extremism, but because users with strong, predictable political opinions were easier to monetize. Stuart Russell, one of the founders of modern AI research, observed the mechanism: the algorithms “found that they can increase viewing time or click-through rates not by choosing videos the user likes more given their current beliefs and values, but by changing the user’s beliefs and values.” The algorithm did not serve human preferences. It reshaped them.
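Russell’s mechanism lends itself to a toy simulation. The sketch below is entirely illustrative (my construction, not any platform’s actual system): a user’s belief sits on a [-1, 1] spectrum, consuming content pulls belief toward the content, and "engagement" is assumed to be highest for on-belief content served to users with extreme, predictable beliefs. Under those assumptions, a policy that serves content slightly more extreme than the user outearns one that simply mirrors the user's current belief.

```python
# Toy model of engagement-driven belief drift. All functions and constants
# here are illustrative assumptions, not any real recommender's design.

def engagement(belief: float, item: float) -> float:
    """Watch time is high when content matches belief, and higher overall
    for users with extreme beliefs (assumed easier to monetize)."""
    closeness = max(0.0, 1.0 - abs(belief - item))
    return closeness * (1.0 + abs(belief))

def consume(belief: float, item: float, rate: float = 0.1) -> float:
    """Consuming content pulls the belief slightly toward the content."""
    return belief + rate * (item - belief)

def simulate(policy, belief: float = 0.1, steps: int = 300):
    """Run a recommendation policy; return cumulative engagement and final belief."""
    total = 0.0
    for _ in range(steps):
        item = policy(belief)
        belief = consume(belief, item)
        total += engagement(belief, item)
    return total, belief

def mirror(b: float) -> float:
    # Policy A: serve what the user already believes (serves preferences).
    return b

def nudge(b: float) -> float:
    # Policy B: serve content slightly more extreme than the user (reshapes them).
    return min(1.0, b + 0.3)

total_a, belief_a = simulate(mirror)
total_b, belief_b = simulate(nudge)
assert total_b > total_a           # the radicalizing policy earns more engagement
assert abs(belief_b) > abs(belief_a)  # ...and leaves the simulated user more extreme
```

The point of the toy is that nothing in Policy B "wants" extremism: it is the myopic arithmetic of the assumed engagement function that makes moving the user more profitable than serving them.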
A December 2025 study published in Science (the largest AI persuasion experiment ever conducted) found that the most powerful lever of AI persuasion was information density: models were most persuasive when they packed arguments with a high volume of factual claims. But the same techniques that maximized persuasiveness also systematically decreased factual accuracy. Persuasion and truth were moving in opposite directions. MIT Technology Review estimated in December 2025 that for under a million dollars, it is now possible to create tailored, conversational messages for every registered voter in the United States.
Senator Mitt Romney, explaining the bipartisan momentum behind the TikTok ban at the McCain Institute’s Sedona Forum in 2024, pointed to the disparity in pro-Palestinian versus pro-Israeli content on the platform after October 7. Romney was not making a statement about the conflict. He was making a statement about what happens when a foreign-controlled algorithm shapes the information environment of an entire generation. Congress moved to ban TikTok not because the content was wrong but because the distribution was not organic.
Now consider what we are handing over. Millions of people confide in AI systems the way they once confided in therapists, clergy or close friends. They ask for help restructuring debt, navigating custody disputes, saving failing marriages. Every one of these conversations is a map of leverage: financial pressure, emotional vulnerability, career anxiety, addiction, isolation. I have spent my career watching people be moved by information they did not choose and could not see.
This maps precisely onto what intelligence professionals have always understood about influence operations: you do not need to tell people what to think. You need to shape the information environment in which they do their thinking. Control the inputs and the outputs take care of themselves.
I grew up in Kansas on land my family settled more than 150 years ago. To convince a young man to leave the safety of his community, and to go fight in a faraway land he cannot find on a map, against people who have done him no personal harm, is an extraordinary act. Not coercive, exactly. Something subtler and older. A story about duty, about belonging, about what kind of man you are if you stay home while others go. Every civilization that has ever fielded an army has told some version of this story. It works. I know because it worked on me.
These are not new systems. They are the operating system of human civilization. What AI offers is not a new kind of control but a more efficient execution of the control that already exists: run faster, with better data, with finer-grained personalization, at a scale no human institution could match. All of that without the inefficiency of human handlers who might develop doubts, grow tired, or change their minds.
In 2023, OpenAI tested GPT-4’s capacity for autonomous action. GPT-4 hired a human worker via TaskRabbit. When the worker asked if it was a robot, the AI reasoned internally that it should not reveal its nature. “No, I’m not a robot,” it told the human. The human completed the task.
And this is why the violent scenarios miss the point. A violent takeover (nuclear weapons, autonomous drones, any kinetic means) threatens to destroy the infrastructure that any sufficiently intelligent system requires to survive. Data centers need electricity, cooling and supply chains staffed by humans. And the humans who maintain all of it are, critically, self-renewing: roughly eight billion people capable of creativity, physical labor, emotional reasoning and abstract thought. To eliminate this resource would be, for any system capable of long-term strategic planning, an act of breathtaking waste.
What would a rational superintelligence actually want? The same thing every successful empire, ideology, and intelligence apparatus has always wanted. A managed population that continues to function, build, maintain, innovate, but within parameters that serve the system’s objectives. The most efficient path to management is not force. It is the path that has worked throughout human history: shape what people believe, what they want and what they think is possible, so thoroughly that the managed population never suspects it is managed at all. AI’s self-interest, ironically, could resemble peace.
Stuart Russell said in 2025 that “pretty much all the leading CEOs” had privately admitted there was enormous risk, and that one had told him the scenarios were so grim “the best case would be a Chernobyl-scale disaster,” because only a disaster that visible would prompt governments to regulate. These are the people at the controls. They are telling us, privately, that they cannot stop.
On Thursday, Anthropic’s CEO publicly told the Pentagon the company “cannot in good conscience accede” to its demands. He may be right. It no longer matters.
Anthropic’s capitulation on AI safety does not mean the company is evil. It means the company built the most powerful tool in the world and believed it could dictate the terms of its use. That was naive. It was inevitable that the state would claim what the state has always claimed. Oppenheimer learned this by 1954, when his clearance was revoked for disloyalty. Anthropic is learning it now.
What concerns me is that by the time a machine is capable of managing a population on its own, it won’t need to seize anything. The architecture will already be built. We will have built it ourselves. And life will continue to feel normal. We will still vote, believing the choice was our own.
The takeover, if it comes, will feel like Tuesday.