Anthropic Sues US Defense Department Over "Supply Chain Risk" Blacklisting

On Monday, Anthropic filed two lawsuits against the Department of Defense (DoD), alleging that the government's designation of the company as a "supply chain risk" is unlawful and violates its First Amendment rights. The legal battle caps a prolonged and contentious relationship between the two parties, driven largely by Anthropic's insistence on safeguards that prevent the military from using its AI technologies for intrusive domestic surveillance or for the development of fully autonomous weapons.
The lawsuits were filed in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit. The legal action follows the Pentagon's decision last Thursday to formally label Anthropic a supply chain risk, the first time such a blacklisting measure has been applied to a U.S. company. Anthropic had previously signaled its intent to contest the designation, which also requires any contractors working with the government to sever ties with the company, a mandate that poses a serious threat to its operations and business model.
In its lawsuits, Anthropic argues that the Trump administration's actions are punitive measures intended to suppress the firm for refusing to comply with the government's ideological demands, and that they violate its right to free speech. "These actions are unprecedented and unlawful. The Constitution does not permit the government to exercise its vast powers to penalize a company for its protected speech," the firm stated in its California filing.
Over the past year, Anthropic's AI model, Claude, has been increasingly integrated into the operations of the Department of Defense. Until recently, Claude was the only AI model authorized for use on classified government systems. Reports indicate that the DoD has relied on Claude in various military operations, including critical missile-targeting decisions during the conflict with Iran.
Despite the ongoing legal disputes, Anthropic has reiterated its commitment to providing AI that enhances national security. In its filings, the firm cited a history of productive collaboration with the DoD, including work to tailor its systems for specialized applications, and said it remains open to constructive negotiations with the government.
“Pursuing judicial review does not diminish our long-standing commitment to utilizing AI for the protection of national security. However, this is a necessary step to safeguard our business interests, our clientele, and our partnerships,” stated a spokesperson from Anthropic in an interview with the Guardian. “We remain open to exploring all avenues for resolution, including conversations with government officials.”
In its legal filings, the firm claims that the punitive measures enacted by the Trump administration and the Pentagon are causing it "irreparable harm." That assertion appears to contrast with recent remarks by Anthropic CEO Dario Amodei, who told CBS News that "the impact of this designation is fairly small" and that the company anticipates a positive outlook moving forward.
“The actions taken by the defendants aim to undermine the economic potential of one of the fastest-growing private enterprises globally, which is at the forefront of responsibly developing a technology of significant importance to our nation,” the firm claimed in its legal arguments.
As of now, the Department of Defense has not responded to requests for comment on the allegations and lawsuits.
