Control over AI is no longer an abstraction. In February, the Pentagon demanded that Anthropic grant full, unrestricted access to Claude and issued an ultimatum: lift all restrictions or lose the $200 million contract and be labeled a "supply chain risk." Anthropic refused.
Nor is this hypothetical. In January 2026, Claude, via Palantir, was used in the operation to capture Maduro: analyzing satellite imagery, processing intercepts, and coordinating the assault in real time. The model was in the loop at the most critical moment, as special forces pulled the dictator out of Caracas. Just a month later, the same model was supporting American strikes on Iran in a joint special operation with Israel.
Developers now face a stark choice: build models for people, or for those who wield power over them. OpenAI signed its own contract with the DoD, and #CancelChatGPT took off almost immediately. Thousands of users moved to Claude, whose maker chose to lose the money rather than hand the military full control.
Can they hold the line when the other side offers military budgets, state power, and strategic advantage? And can the state resist the temptation to use AI to control its own citizens?
A Change of Era
We have already seen countries demand encryption keys from messaging platforms and threaten bans when the platforms refused to hand over user data. But the stakes are no longer private correspondence: governments now want the full functionality of AI models.
Telegram and WhatsApp survived that pressure: lawsuits, accusations, threats of blocking. AI developers are more vulnerable. They depend on investment, permits to build data centers, export licenses, and government contracts. That makes leverage easier to apply: a contract can be terminated, a company can be labeled a "supply chain risk," and access to infrastructure can be restricted.
Artificial intelligence replaces entire analytical teams. Speed of processing, precision, and uninterrupted analysis turn a model into a force multiplier for the military machine. In the Caracas operation, Claude was used not only to analyze satellite imagery and intercepts. In the critical phase, it also helped coordinate actions in real time.
Control over AI means control over the speed of the OODA loop — observe, orient, decide, act. In geopolitics, the winner is the one who can orient faster and decide faster.
That is why this is not about "technical settings." It is about removing red lines. Any built-in ethical constraint is a potential delay at a critical moment. For the military machine, that is a risk. For developers, it is a principle.
Why the Pentagon Wants Full Access
The success of the January 2026 Maduro operation, in which Claude coordinated the assault in real time via Palantir, gave the Pentagon a taste of superiority. Now the goal is to make that superiority systemic. In modern warfare, victory belongs to whoever can process information and make decisions faster. That raises the question: should the state accept any red lines on its use of AI, especially in areas where the human factor leads to mistakes?
A U.S. president gets only four years, and approval ratings are fickle. The White House is also under pressure from a domestic crisis of trust: the return of the Epstein files to the headlines has even spawned a "diversionary war" theory, reinforcing the sense that Trump is making decisions under intense political time pressure.
The Pentagon, for its part, has reasons to demand full submission from AI developers. The conflict with Iran is not fading; it is expanding. Tehran is resisting with everything still left in its arsenal. Khamenei's removal only brought his son, Mojtaba Khamenei, to power, reportedly an even harder-line and more uncompromising leader. Russia, which backs Iran in this war, is benefiting from the spike in oil prices, which at one point hit $120 a barrel and may not have reached its ceiling. On this front, every delay in analysis costs money, allies, and room for maneuver.
Ukraine has produced no result either. That conflict also creates strong demand for a system that can connect satellite data, intercepts, logistics, troop movements, and consequence forecasting faster than any human team.
And then there is China. Beijing does not ask developers for permission when it comes to "red lines." The PLA is integrating models — including DeepSeek and Huawei chips — into its decision chain without ethical debate. There are no restrictions on autonomous systems, no public arguments over surveillance. If American models remain bound by Anthropic-style prohibitions — mass surveillance inside the U.S., fully autonomous weapons — then falling behind will become inevitable, not only technologically, but operationally as well.
A Talent Split
OpenAI's military turn did not just damage the company's reputation. It also shook its internal stability. Over the past few months, the company has lost several prominent figures, and the reasons behind those departures show that the conflict in the AI sector is no longer just between governments and companies. It is now running through the labs themselves.
Some left because of OpenAI's broader drift away from its original mission toward a harsher commercial logic. Researcher Zoe Hitzig left the company in February, criticizing the push toward ads in ChatGPT and comparing that turn to Facebook's mistakes. Her argument was simple: a model into which users pour their thoughts should not become an advertising engine.
Then the conflict sharpened. After OpenAI's deal with the Pentagon, hardware chief Caitlin Kalinowski left the company. She made no secret that what troubled her was the lack of serious deliberation around red lines: mass surveillance without judicial oversight and lethal autonomy without human authorization. OpenAI later began emphasizing its safeguards more explicitly, but the resignation itself made one thing clear: even inside the company, not everyone is willing to calmly accept AI's integration into the defense apparatus.
That is an important signal. This is no longer about disagreements on X or another wave of user outrage. When people working on models, safety, and hardware start leaving, it means the fault line runs through the very core of the industry. One camp accepts the logic of the state, military contracts, and "lawful use." The other is still trying to hold the line — even at the cost of walking away. And that is no longer just OpenAI's private crisis. It is a symptom of a much broader split across the AI industry.
China and Distillation
In the new five-year plan adopted in March 2026, Beijing made artificial intelligence one of the central pillars of its economic and technological policy. The document explicitly calls for scaling AI+ across the entire economy, building hyper-scale computing clusters, and developing open-source ecosystems as a tool of competition with the United States. Open source here is not about altruism. It is about speed, scale, and bypassing export controls.
Distillation, too, has long since left the lab and become a weapon in the race. On February 23, Anthropic said that DeepSeek, Moonshot AI, and MiniMax had created more than 24,000 fake accounts and carried out over 16 million interactions with Claude in order to extract agentic reasoning, tool use, and coding capabilities at industrial scale. Almost at the same time, OpenAI, in a memorandum to the House Select Committee, accused DeepSeek of bypassing access restrictions, using obfuscated routers, and distilling ChatGPT outputs — in effect, free-riding on American advances.
These are no longer isolated violations or gray-zone techno-hacking. For China, distillation is a way to accelerate a catch-up race without years of research cycles and billions of dollars in frontier R&D. In this logic, safeguards do not matter: they are bypassed, and distilled models often end up stripped of built-in constraints.
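To make the mechanics concrete, here is a minimal sketch of classic logit distillation in the style of Hinton et al., assuming white-box access to a frozen teacher; the `student`, `teacher`, and `batch` names are illustrative stand-ins, not anyone's actual pipeline. API-scale distillation of the kind alleged above works on sampled text rather than logits, but the principle is the same: the student learns to imitate whatever the teacher emits, and safety behavior not reflected in those outputs simply does not transfer.

```python
# Minimal logit-distillation sketch (teacher -> student), PyTorch.
# Assumes white-box access to teacher logits; all names are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)     # target probabilities
    log_student = F.log_softmax(student_logits / t, dim=-1)  # input log-probabilities
    # Scaling by t**2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t ** 2

def train_step(student, teacher, batch, optimizer):
    """One optimization step: the student imitates the frozen teacher."""
    with torch.no_grad():
        teacher_logits = teacher(batch)  # teacher weights stay fixed
    student_logits = student(batch)
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Notice what is absent: nothing in the objective rewards refusals, guardrails, or any other safety behavior unless the teacher's outputs happen to contain it, which is why distilled copies so often arrive stripped of their constraints.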
This is where the real open-source dilemma emerges. Closed models offer more control: safeguards, access limits, and a higher barrier to distillation. But that same closedness slows innovation and leaves room for those catching up through state mobilization of resources, fake accounts, and systematic circumvention of restrictions.
Open models accelerate the Western ecosystem and give startups and researchers an edge, but they also spread instantly — to military contractors, authoritarian regimes, security structures, and states that recognize no red lines at all. Distillation makes that openness even more dangerous: the capabilities of the strongest models can be extracted quickly and transplanted into systems with no ethical brakes.
The argument is no longer about "openness" versus "corporate greed." The real question is different: what is more dangerous — centralized control over the strongest models, with the risk of falling behind in speed, or their rapid spread across the world through distillation, open repositories, and state-backed ecosystems, with the risk of losing control forever?
Three Trajectories
The future of AI will not be decided in laboratories or on X. It will be decided in the clash of three forces already in play.
The first trajectory is ethical restraint. Developers are trying to preserve red lines, even knowing that doing so will cost them contracts, access, and speed. Anthropic refuses to remove guardrails against mass surveillance and fully autonomous weapons, despite Pentagon pressure and the risk of losing hundreds of millions. This is the path in which models remain tools for people, not machines without brakes. But the price of that choice is growing isolation from the state and from military budgets.
The second trajectory is the full integration of AI into the security machine. States do not want limited access; they want the right to lawful use without external prohibitions imposed by the developer. In this logic, safeguards are not ethics but an obstacle. Red lines are not principles but delays. AI becomes part of the kill chain, where speed matters more than hesitation and strategic advantage matters more than corporate conscience.
The third trajectory is closed state systems without external oversight. China is already building this model: AI+ as an axis of the economy, distillation as a tool of acceleration, open source as a means of scaling, and integration into surveillance and military decision-making without public ethical debate. Here, AI is not debated. It is mobilized.
These trajectories do not coexist peacefully. They are fighting over what the logic of the technology itself will become. One tries to build in constraints so that AI does not turn into a weapon with no reverse gear. Another subordinates it to military efficiency. The third turns it into a closed instrument of the state.
The main question is no longer whether AI will become dangerous, but which logic prevails first: the logic of limits or the logic of force?
In 2026, the answer does not depend on elegant declarations. It depends on who moves faster: those trying to build brakes into the models, or those who see any brakes as weakness in a great-power game.
