Trump Blacklisted Anthropic. OpenAI Took the Deal Hours Later. Here's the Line That Split Them.
The deal that Anthropic refused
In late February 2026, the Pentagon made what it described as its final offer to Anthropic for a contract worth up to $200 million. Anthropic CEO Dario Amodei reviewed the terms and said the company could not, "in good conscience," accept them.
The specific sticking point: the Pentagon required the ability to use Claude for mass surveillance of populations and to power systems that make autonomous lethal decisions. Anthropic's usage policies prohibit both. The Pentagon wanted those restrictions removed or overridden for government deployments. Anthropic said no.
The next day, President Trump announced a six-month phase-out of Anthropic's models across all U.S. government agencies, designating the company a national security risk.
Hours after that announcement, OpenAI signed a deal with the Defense Department.
What OpenAI actually agreed to
OpenAI's agreement with the Pentagon is structured around three "red lines" — use cases OpenAI says it will not enable even for government customers. The company published a blog post outlining these constraints.
What's notable: the red lines are deliberately vague. OpenAI described them in terms of "indiscriminate harm," "weapons of mass destruction," and "undermining human oversight of AI" — language that sounds restrictive but leaves significant operational flexibility around surveillance, autonomous systems, and decision-making at scale.
Critics pointed out immediately that none of OpenAI's stated red lines would have blocked the specific capabilities Anthropic refused to provide. MIT Technology Review summed up the difference: "OpenAI's compromise is exactly what Anthropic feared."
OpenAI later adjusted the deal after critics raised alarms about potential surveillance use cases. The exact modifications weren't fully disclosed.
The military's uncomfortable admission
A remarkable detail emerged in the days after the ban: senior Pentagon officials privately acknowledged that Claude had become operationally indispensable.
Fortune reported that a Pentagon official described the internal reaction when defense leadership realized how deeply Claude had been embedded in active workflows: summarizing intelligence reports, drafting analysis, processing classified documents. The official called it the "whoa moment." Losing access to Anthropic would create real operational disruption, not just a vendor swap.
Defense Secretary Hegseth granted the six-month transition window specifically because an immediate cutoff would have been too disruptive. Even while banning Anthropic, the government was acknowledging the ban would hurt.
That detail matters for enterprise buyers: the Pentagon — with effectively unlimited procurement resources — found itself operationally dependent on a vendor it then had to ban. Any organization building on a single AI vendor is exposed to the same dynamic at a smaller scale.
xAI enters the equation
The plan to replace Anthropic doesn't end with OpenAI. As part of the transition, xAI's Grok models are also being phased into government deployments — giving Elon Musk's company significant federal contract exposure.
This creates an uncomfortable dynamic: Musk has been deeply involved in the current administration's AI policy through DOGE and advisory roles, while simultaneously positioning xAI as a direct beneficiary of decisions that disadvantage Anthropic. The appearance of conflict isn't subtle.
For developers, Grok's inclusion signals that the federal government AI stack is now being actively shaped by political relationships as much as technical capability. Anthropic's exclusion appears to be as much ideological as technical — a company that publicly champions AI safety found itself at odds with an administration that views safety constraints as obstacles.
What each company's position actually means
It's worth being precise about what Anthropic's refusal represents — and what OpenAI's acceptance represents.
Anthropic's position: Claude should not be used to identify individuals in mass surveillance databases or to make autonomous decisions about lethal action. These aren't edge cases in Anthropic's usage policies — they're explicitly prohibited in the published Acceptable Use Policy. Dario Amodei has argued publicly that AI systems that remove humans from lethal decision loops represent a categorical risk, not a matter of contractual discretion.
OpenAI's position: the three stated red lines exclude WMD development, indiscriminate attacks on civilians, and undermining AI oversight mechanisms broadly — but don't explicitly prohibit the Pentagon from using GPT models for surveillance infrastructure or as components in autonomous targeting chains, provided humans remain "in the loop" by some definition.
The gap between these positions is real, and it maps directly onto the question every enterprise buyer should ask: what will your AI vendor not do, even if you pay them?
The AI Accountability Act adds another layer
This situation arrived in the same month Congress passed the AI Accountability Act, requiring companies that deploy AI in consequential decisions — hiring, lending, healthcare, criminal justice — to conduct and publish regular bias audits.
The law doesn't cover government military deployments directly, but the timing highlights the growing tension between AI companies trying to embed safety constraints into their products and governments (and enterprises) that want those constraints removed or made optional.
Anthropic built usage restrictions into its core product. The Pentagon wanted an exception. The exception was the whole point.
What this means for enterprise AI buyers
The Anthropic-Pentagon situation is the most visible instance yet of a pattern that will repeat: AI vendors have usage policies, and large institutional buyers will push against those policies when they conflict with operational requirements.
A few practical takeaways:
First, vendor selection is increasingly a values choice, not just a capability choice. Anthropic's willingness to walk away from a $200 million contract suggests its usage restrictions are not negotiable for enterprise customers either. If your use case involves the specific capabilities Anthropic prohibits (population-level monitoring, autonomous decision systems, certain types of content), Claude probably isn't the right tool, however capable it is.
Second, OpenAI's more flexible stance on government use has a flip side: the vagueness of its red lines means less certainty about what it won't do. That may be fine for most enterprise buyers, or it may be exactly the concern, depending on your organization's own risk tolerance.
Third, multi-vendor strategy is now an operational risk management question, not just a price negotiation. The Pentagon discovered this the hard way. Organizations that have deeply embedded a single LLM provider's models into workflows should model what a sudden loss of that vendor would cost.
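What that modeling can look like in practice is a thin abstraction layer that keeps the provider swappable. Below is a minimal Python sketch of one such pattern: an ordered fallback chain behind a single completion interface. Everything here is illustrative, not prescriptive: the `Provider` wrapper, `complete_with_fallback`, and the two stub adapters are hypothetical placeholders, and a production version would wire in each vendor's actual client library and catch its specific error types.

```python
from dataclasses import dataclass
from typing import Callable

# Any callable that takes a prompt and returns completion text.
# Real adapters would wrap a vendor SDK call; these are stand-ins.
CompletionFn = Callable[[str], str]


@dataclass
class Provider:
    name: str
    complete: CompletionFn


class VendorLossError(RuntimeError):
    """Raised only when every configured provider has failed."""


def complete_with_fallback(prompt: str, providers: list[Provider]) -> str:
    """Try each provider in priority order, falling through on failure.

    Losing a vendor (policy change, ban, outage) then degrades the app
    to the next provider instead of taking it down entirely.
    """
    errors: list[str] = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:  # narrow to SDK-specific errors in production
            errors.append(f"{provider.name}: {exc}")
    raise VendorLossError("all providers failed: " + "; ".join(errors))


# Hypothetical stubs standing in for real SDK clients.
def _primary_stub(prompt: str) -> str:
    raise ConnectionError("vendor access revoked")  # simulate a sudden cutoff


def _secondary_stub(prompt: str) -> str:
    return f"[secondary model] {prompt[:40]}..."


if __name__ == "__main__":
    chain = [
        Provider("primary", _primary_stub),
        Provider("secondary", _secondary_stub),
    ]
    print(complete_with_fallback("Summarize the weekly status report.", chain))
```

The plumbing itself is trivial; the point is that switching vendors becomes a configuration change rather than a rewrite, which is exactly the position the Pentagon discovered it wasn't in.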
The immediate impact on the developer ecosystem
For developers building applications that serve U.S. government customers, the picture shifted significantly in late February 2026. Claude is now off the table for government contracts during the transition period, and potentially indefinitely depending on how the political situation evolves.
Anthropic's broader commercial business is separate from the government sector and remains unaffected. The company's enterprise customers in finance, healthcare, and tech are not covered by the executive order. Claude 4.6 Opus remains the technical leader on SWE-bench and a number of other benchmarks, and its commercial trajectory is independent of the Pentagon dispute.
But the episode clarifies something that had been abstract: when AI companies say their models have safety constraints, those constraints are real enough to cost a major contract. Anthropic isn't just publishing acceptable use policies as compliance theater. OpenAI's constraints are more permissive, and the market — in this case, the U.S. government — is actively selecting for that permissiveness.
Whether that's good or bad depends entirely on what you're building and for whom.
Key Takeaways
- ✓ Trump ordered a six-month phase-out of Anthropic across all U.S. government agencies after Anthropic refused to remove usage restrictions on mass surveillance and autonomous weapons
- ✓ OpenAI signed a Pentagon deal hours after the Anthropic ban, agreeing to terms Anthropic said it could not accept in good conscience, with three stated "red lines" that remain deliberately vague
- ✓ The core dispute: Anthropic's usage policy prohibits using Claude for systems that surveil populations or make autonomous lethal decisions; the Pentagon required the ability to do both
- ✓ During the six-month transition, the Pentagon acknowledged Claude had become operationally indispensable; a senior defense official called losing it a genuine operational risk
- ✓ xAI's Grok is being phased in alongside OpenAI as a government-approved alternative, giving Elon Musk's company significant federal AI contract exposure
- ✓ The episode forces a clarifying question for every enterprise using AI: what is your vendor actually willing to do with your data and use cases?
Skila AI Editorial Team
The Skila AI editorial team researches and writes original content covering AI tools, model releases, open-source developments, and industry analysis. Our goal is to cut through the noise and give developers, product teams, and AI enthusiasts accurate, timely, and actionable information about the fast-moving AI ecosystem.