AI Lab Anthropic Sues Pentagon Over Blacklisting for Refusing Unrestricted Military AI Use
Quick summary: AI lab Anthropic has filed lawsuits against the US Pentagon after being designated a “supply chain risk” for refusing to allow unrestricted military use of its AI system, Claude. The company argues the designation is “unprecedented and unlawful,” penalizing it for advocating safe and responsible AI use, while the Pentagon insists it needs full AI functionality for any lawful purpose and that tech companies cannot dictate defense matters.
AI lab Anthropic is engaged in a legal dispute with the Trump administration’s Pentagon, challenging its designation as a “supply chain risk.” The move came after Anthropic refused to remove guardrails on its AI system, Claude, which would have allowed its unrestricted use for functions such as autonomous weapons or domestic surveillance. Anthropic CEO Dario Amodei had warned US Defense Secretary Pete Hegseth about the risks of untested AI in autonomous warfare and declined to lift the use restrictions before a February 27 deadline. The Pentagon, in turn, argued that technology companies are not in a position to dictate matters of warfare, with Hegseth stating that US troops “will never be held hostage by the ideological whims of Big Tech.”

Anthropic’s lawsuits, filed in California and Washington DC, aim to undo the designation and block its enforcement. The company asserts that the Trump administration’s actions are “unprecedented and unlawful,” penalizing it for upholding the principle that AI should maximize positive outcomes for humanity and be used in the safest and most responsible manner. Anthropic also argues that even advanced AI models are not reliable enough for automated weapons systems, that surveillance use would violate fundamental rights, and that the government is “seeking to destroy” its economic value. The Pentagon, conversely, maintains that it needs full use of AI-powered functionality for “any lawful” purpose, viewing Anthropic’s refusal as a private company imposing policy restrictions on defense matters.

The designation is notable as the first time a US-based tech company has been labeled a supply chain risk, a label previously applied only to foreign entities such as Huawei. It effectively bars Anthropic from doing business with federal agencies and could affect its dealings with contractors and suppliers.
Although a $200 million contract signed in July 2025 was canceled last month over the disagreement, Claude remains heavily embedded in the Defense Department’s operational intelligence systems. US media have reported that it played a significant role in planning the recent US-Israel attack on Iran.
Source: www.dw.com
