Anthropic Just Became the First US Tech Company Blacklisted as a National Security Risk. Then Their Competitors Sided With Them.
On February 27, 2026, the Trump administration did something that had never been done to an American technology company: it designated Anthropic as a supply chain security risk. The same classification framework historically reserved for Chinese state-linked enterprises like Huawei and SMIC was applied to a San Francisco AI startup that, just seven months earlier, had been given classified access to Pentagon networks.
The $200 million Department of Defense contract signed in July 2025 — the one that made Claude the first frontier AI model approved for classified government networks — was ordered terminated. Federal agencies were given a six-month phase-out window. And the company that refused to remove safeguards against autonomous weapons and mass surveillance systems found itself on the wrong side of the largest AI policy confrontation since the field became strategically significant.
Then something unexpected happened.
Why Anthropic Refused
The dispute traces back to two specific clauses in Anthropic's enterprise usage policy that the Pentagon demanded be waived. The first prohibited using Claude in fully autonomous weapons systems — systems that identify and engage targets without meaningful human oversight. The second prohibited using Claude for mass surveillance of American citizens.
Anthropic refused both waivers. Not because of commercial risk, but because removing them would have violated the company's stated mission: developing AI that is safe and beneficial to humanity. CEO Dario Amodei said in a statement that the company "cannot and will not remove safety restrictions that exist to protect people from catastrophic misuse, regardless of who is asking."
From the Pentagon's perspective, a contractor refusing to comply with mission requirements is disqualifying. From Anthropic's, shipping Claude into autonomous weapons systems crosses a line the company drew when it was founded.
The administration chose escalation. Defense Secretary Pete Hegseth invoked supply chain risk authority to bar not only federal agencies but every government contractor from doing business with the company. It was a designation with teeth: TSMC, Lockheed Martin, Palantir, and every other company in the federal supply chain suddenly faced compliance pressure.
The Lawsuit
On March 9, Anthropic filed in two federal courts simultaneously: the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit. The core argument was First Amendment retaliation: that the government designated Anthropic as a security risk not because of a genuine supply chain vulnerability, but because the company refused to comply with demands that would violate its values.
Legal scholars noted the structural problem immediately. The supply chain risk framework has never been successfully applied to a domestic company on the basis of a policy disagreement. China-based entities face it because of state ownership ties and laws that can compel cooperation with Chinese intelligence agencies. Anthropic is American, venture-backed, and has no foreign government relationships. The legal theory the administration is relying on is largely untested in this context.
Anthropic is asking for an injunction to suspend the designation while litigation proceeds. If granted, the six-month phase-out order would pause and federal agencies could continue using Claude during the legal process.
Then OpenAI and Google Engineers Filed a Brief
Here's where the story shifts from corporate legal dispute to something larger.
On March 10, more than 30 employees from OpenAI and Google DeepMind — including Google's chief scientist Jeff Dean — filed an amicus brief supporting Anthropic's legal challenge. This is not something that happens in competitive industries. The companies are direct competitors. OpenAI and Anthropic are battling for the same enterprise contracts, the same developer relationships, the same cloud partnerships.
Their argument in the brief: if the government can use supply chain risk designation to punish an AI company for refusing to remove safety restrictions, then every AI lab in America is now exposed to the same pressure. The brief framed the question as whether federal agencies can use procurement power to coerce AI companies into abandoning safety policies. The signatories argued the answer has to be no, regardless of who is asking.
When Jeff Dean, who built Google's AI research division over more than a decade, signs onto a competitor's legal case against the federal government, that is not a routine event.
What Enterprise AI Buyers Need to Know Right Now
The business implications are more immediate than the legal ones.
First, the six-month phase-out clock is running. Federal agencies, government contractors, and any company that handles government-classified data or sits in the federal supply chain now has a compliance question to answer. Using Claude in systems that touch federal government work could create liability during the phase-out period. Most legal teams are advising a wait-and-see posture pending the injunction ruling.
Second, this is not an Anthropic-specific risk. The mechanism the administration used is broad. If the legal theory holds, any AI vendor that refuses government demands for feature modifications could face the same designation. That creates a new category of AI procurement risk that didn't exist in 2025: geopolitical alignment risk. Enterprise buyers now have a new question to add to their vendor due diligence: what happens when a government demands this vendor change how their model behaves?
Third, the AI safety argument just got a lot more complicated. For years, "AI safety" meant technical alignment work — preventing models from behaving in unintended ways. This case introduced a new definition: institutional safety, meaning the organizational policies that determine what a model will and won't do. Anthropic's refusal to waive its weapons and surveillance restrictions is now the front line of a federal lawsuit, not an academic discussion.
The Three Scenarios
Based on the legal landscape as of March 15, three outcomes look most plausible:
Anthropic wins the injunction: The six-month phase-out pauses. The core legal question goes to full litigation, likely taking 12-18 months. Federal agencies continue using Claude. The designation has no immediate practical effect. This is the scenario most favorable for enterprise AI adoption stability.
Injunction denied, but Anthropic wins at trial: The designation takes effect. Federal agencies stop using Claude. Anthropic loses the government market for 1-2 years while the case works through the courts. If they ultimately prevail, the precedent cuts hard against future supply chain coercion of AI vendors — but the short-term damage is real.
Administration settles: Diplomatic resolution, likely involving some compromise on specific use cases. The autonomous weapons and mass surveillance prohibitions remain in place for those scenarios, but carve-outs are created for other defense applications. This is the politically expedient path if the administration's legal position looks weak after the injunction ruling.
The Capability Context
It's worth noting what was actually cut off when the Pentagon contract ended. Claude wasn't being used for weapons targeting. It was deployed on Palantir's classified networks at the "secret" classification level for intelligence analysis, document processing, and structured data queries — tasks where large context windows and instruction-following are valuable, and where the safety restrictions the Pentagon wanted removed would never have been triggered.
The autonomous weapons and surveillance prohibitions are use-case restrictions, not capability limitations. Claude's actual performance on classified intelligence tasks was reportedly strong enough that multiple Pentagon officials advocated internally against the termination decision. The contract ended over policy disagreement, not capability inadequacy.
What This Means for the AI Industry
The Anthropic case is the first major test of whether AI companies can maintain ethical constraints under government pressure. Every AI lab is watching the outcome.
If the administration prevails, the implicit message to the industry is clear: safety restrictions are negotiable when the government wants them removed. Labs that want government contracts will need to choose between safety commitments and federal revenue.
If Anthropic prevails, the precedent cuts the other way: that AI companies have First Amendment protection for their deployment policies, and that supply chain designation cannot be used as a coercion mechanism against domestic companies for policy disagreement.
The 30-plus OpenAI and Google DeepMind employees who signed that amicus brief clearly believe the second outcome matters more than competitive advantage. In an industry where OpenAI, Google, and Anthropic spend most of their energy competing for the same customers, watching technical and research staff from rival organizations rally around a shared legal principle is genuinely unusual.
The injunction ruling is expected in the next few weeks. If you're making enterprise AI procurement decisions that involve any government-adjacent work, that ruling — not the trial outcome — is the immediate thing to watch.
Key Takeaways
- Anthropic refused to remove safety restrictions on autonomous weapons and mass surveillance, triggering the first-ever supply chain risk designation against an American tech company
- The $200M Pentagon contract signed in July 2025 was terminated; federal agencies have six months to phase out Claude
- Anthropic filed suit on March 9 in two federal courts, arguing First Amendment retaliation
- More than 30 OpenAI and Google DeepMind employees filed an amicus brief supporting Anthropic: competitors backing a competitor in a federal lawsuit
- The injunction ruling, expected within weeks, determines whether the six-month phase-out pauses; it is the key event for enterprise AI buyers in government-adjacent work
- This case is the first major legal test of whether AI companies can maintain safety policies under federal government pressure