Microsoft has led the technology industry’s defense of Anthropic, filing a brief in San Francisco federal court that frames the Pentagon’s supply-chain risk designation as itself a threat to national security. The brief seeks a temporary restraining order and was backed by a separate filing from Amazon, Google, Apple, and OpenAI. The case is being watched closely across the technology and defense sectors as a defining legal battle over the future of AI governance.
The Pentagon’s designation followed the collapse of a $200 million contract after Anthropic refused to allow its AI to be used for mass surveillance or autonomous lethal weapons. Defense Secretary Pete Hegseth applied the supply-chain risk label, which had never before been used against a US company, triggering the cancellation of Anthropic’s government contracts. Anthropic responded by simultaneously filing lawsuits in California and Washington, DC, arguing the designation was unconstitutional retaliation.
Microsoft’s court filing is informed by its own integration of Anthropic’s AI into military systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional federal agreements spanning defense, intelligence, and civilian agencies. Microsoft publicly argued that the government and the tech sector must collaborate to ensure advanced AI serves national security while being governed responsibly to prevent misuse.
Anthropic’s lawsuits argued that the supply-chain risk designation amounted to unconstitutional ideological punishment for the company’s publicly held views on AI safety. Its court filings disclosed that it does not consider Claude safe or reliable enough for lethal autonomous decision-making, which it said was the genuine basis for the restrictions it sought in the contract. The Pentagon’s technology chief publicly stated that there was no chance of renegotiation.
Congressional Democrats have formally asked the Pentagon whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school, raising questions about human oversight and AI targeting tools. These inquiries are adding legislative pressure to an already intense legal confrontation. Together, the legal and legislative battles are shaping what may become the most consequential debate over AI governance in American history.
Microsoft Leads the Charge for Anthropic as Pentagon’s AI Blacklist Sets Up a Defining Legal Battle