
Microsoft supports AI company Anthropic in its legal fight with the Pentagon.

Microsoft has endorsed Anthropic's legal battle against the Pentagon, submitting a court brief supporting the AI firm's effort to overturn a restrictive designation that effectively bars it from government contracts. The move highlights escalating tensions within the tech industry over government involvement and AI ethics.

This week, Microsoft filed an amicus brief in federal court in San Francisco. Microsoft, which incorporates Anthropic's AI into systems it supplies to the US military, argued that a temporary restraining order was essential to avert significant disruption for suppliers dependent on Anthropic's technology. Google, Amazon, Apple, and OpenAI have also rallied behind Anthropic by signing a similar brief.

In a statement to the Guardian, Microsoft emphasized the Department of War's need for reliable access to cutting-edge technology: "Everyone wants to ensure AI is not used for mass domestic surveillance or to start a war without human control. The government, the entire tech sector, and the American public need a path to achieve all these goals together." The statement underscores the fraught intersection of AI technology and ethics in military applications.

Microsoft has solidified its position as one of the Pentagon's most integral tech partners, holding a substantial share of the military's Biden-era $9 billion Joint Warfighting Cloud Capability contract alongside Amazon, Google, and Oracle. It has also signed numerous separate contracts worth billions spanning defense, intelligence, and civilian services, and during the Trump administration secured another multibillion-dollar deal to advance cloud services and AI capabilities across the federal government.

Microsoft's filing came shortly after Anthropic initiated two lawsuits on Monday, one in federal court in California and the other in the DC Circuit Court of Appeals. The suits challenge the Pentagon's decision to classify the company as a supply-chain risk, a designation unprecedented for any US company, and signal a potentially transformative moment in the relationship between technology firms and the government.

The controversy began when negotiations over a $200 million contract to deploy Anthropic's AI on classified military systems collapsed last month, just as the United States was preparing for military action against Iran.

The talks broke down over Anthropic's insistence that its technology not be used for mass surveillance of US citizens or to operate autonomous lethal weapons. In response, Defense Secretary Pete Hegseth designated the company a supply-chain risk. After the Pentagon formally notified Anthropic of the designation last week, the company said it had already begun losing government contracts. The Pentagon's chief technology officer, Emil Michael, subsequently told CNBC there was "no chance" of renegotiating with Anthropic.

In its legal complaint, Anthropic outlined the limitations and concerns associated with its technology: "Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare." The company said its usage restrictions stem from its distinctive understanding of the risks and limitations of its own product.

Anthropic also contends that its First Amendment rights have been infringed, asserting that the Pentagon weaponized the supply-chain risk designation, customarily applied to firms tied to foreign adversaries such as China, as ideological retaliation for the company's public stance on AI safety and ethical use.

Separately, the Pentagon is investigating an incident in which a Tomahawk missile reportedly struck the Shajarah Tayyebeh elementary school, killing at least 175 people according to Iranian reports. Preliminary findings suggest Washington may be culpable. It remains unclear whether AI played any role; the strike appears to have stemmed from targeting errors based on outdated information from the US Defense Intelligence Agency, as reported by The New York Times.

