Trump directs US agencies to halt use of Anthropic technology amid ethical concerns over AI.

Donald Trump announced on Friday that he had instructed all federal agencies to “IMMEDIATELY CEASE” their use of Anthropic technology. The statement marks a significant stance against the growing influence of artificial intelligence in national security matters.

Tensions escalated between the Department of Defense and Anthropic as negotiations reached an impasse, with neither party willing to compromise before a deadline for an agreement expired on Friday afternoon. The Pentagon had demanded that the AI company relax the ethical standards governing its systems, threatening severe repercussions if it did not comply.

Just an hour before the deadline lapsed, Trump made his views known via Truth Social, stating: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.” His post indicated a clear discontent with the ongoing negotiations and the role of AI in governmental decisions.

Trump continued, reinforcing his belief in national sovereignty: “WE will decide the fate of our Country – NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.” His strong rhetoric highlighted the growing concerns among some political leaders regarding the ethical implications of AI technologies in defense.

Following the expiration of the deadline, Defense Secretary Pete Hegseth declared that the Pentagon would classify Anthropic as a supply-chain risk to national security, arguing that the company’s stances were “fundamentally incompatible with American principles.” This classification is often reserved for foreign threats and could jeopardize Anthropic’s collaborations with other suppliers and contractors.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote on X, expressing an unyielding commitment to ensuring that military operations would not be influenced by what he termed ideological whims of technology firms.

Despite this strong stance, Hegseth noted that the Pentagon would maintain a transitional period of up to six months during which it would still utilize Anthropic’s AI services. Additionally, the General Services Administration (GSA) echoed Hegseth’s directives and announced on Friday that it would terminate its contracts with Anthropic as well.

The situation remains dynamic, and despite the federal government’s withdrawal, there is still the possibility that Anthropic and the Pentagon might find a way to reach an agreement. Other tech firms could also step in to replace Anthropic in the military’s contracts if negotiations fail to produce results.

Anthropic did not immediately respond to requests for comment on the developments. Earlier in the week, the company’s CEO, Dario Amodei, issued a statement saying the company could not in good conscience yield to the Pentagon’s demands for unrestricted use of its AI tools for military purposes.

The public confrontation between the Department of Defense and Anthropic first emerged earlier this week, when the two sides began discussing the military’s use of Anthropic’s Claude AI system. Talks quickly deteriorated after the parties failed to agree on safety protocols governing AI applications.

Anthropic has positioned itself as a leader in safety-centric AI development, yet it faced criticism and pressure from the Pentagon even before this week’s public dispute. U.S. defense officials have advocated for unrestricted access to Claude’s capabilities, citing their potential to enhance national security, while Anthropic has firmly opposed the use of its technology for mass surveillance or autonomous weapons systems.

“Anthropic understands that the Department of War, not private companies, makes military decisions,” Amodei stated during Thursday’s remarks. “We have never objected to specific military operations nor attempted to limit the use of our technology arbitrarily. However, we believe that in some cases, AI could undermine, rather than uphold, democratic values.”

In contrast, Pentagon spokesperson Sean Parnell emphasized that the defense department “has no interest” in deploying AI for mass surveillance or autonomous weapon systems. “This narrative is fake and being peddled by leftists in the media,” he argued, aiming to discredit any claims suggesting otherwise.

The turmoil has not gone unnoticed in Silicon Valley, where Anthropic’s competitors have publicly rallied to its support. Notably, OpenAI CEO Sam Altman expressed solidarity with Anthropic, indicating that OpenAI shares the same ethical principles regarding military applications as its rival. This reflects a broader unity among AI firms about the limits they are willing to accept.

Close to 500 employees from OpenAI and Google have signed an open letter declaring that “we will not be divided.” Both companies maintain their own contracts with the military, further complicating the landscape of defense-related AI development. The letter suggests that the Pentagon is actively negotiating with other companies, including Google and OpenAI, to fulfill the needs Anthropic has resisted.
