Don’t assume that the Pentagon or Anthropic is prioritizing public welfare | Bruce Schneier and Nathan E. Sanders

In a significant shift for the U.S. defense sector, OpenAI is now the Pentagon's main supplier of AI technology, and Anthropic is out. The change follows a week of heated exchanges between top U.S. government officials and leading figures in the technology industry. Underlying the transition are deep concerns about the existential risks of advanced technology, with the Pentagon treating AI as crucial to national security. At the center of the controversy is Anthropic’s insistence that its models not be used for tasks such as “mass surveillance” or “fully autonomous weapons,” a restriction that Defense Secretary Pete Hegseth dismissed as “woke” ideology.

The standoff reached its climax on Friday evening, when Donald Trump directed federal agencies to stop using Anthropic’s models. Soon after, OpenAI stepped in, potentially securing hundreds of millions of dollars in government contracts by partnering with the administration to equip classified systems with AI.

Despite the heated rhetoric, this outcome may suit both Anthropic and the Pentagon. In a free-market economy, both parties should be free to decide whether to do business with each other, within the established federal rules governing contracting and acquisition. The one element that does not fit this picture is the Pentagon’s punitive threats.

As AI models become commoditized, the leading products perform at increasingly similar levels, making differentiation difficult. Top models from Anthropic, OpenAI, and Google leapfrog one another with incremental advances every few months. In this environment, users prefer even the best-regarded models only about six times out of ten, a remarkably close competition.

In this competitive landscape, branding matters enormously. Anthropic and its CEO, Dario Amodei, have positioned the company as a moral and trustworthy AI provider, a brand that carries significant value with both consumers and corporate customers. In stepping into the void Anthropic left in government contracting, OpenAI’s CEO, Sam Altman, promised to uphold ethical standards much like the ones Anthropic was criticized for. How it can genuinely do so remains unclear, particularly given the statements from Hegseth and Trump, which have amplified the political dimensions of OpenAI’s offerings in the public’s eyes.

Taking a principled stand against the Pentagon can work to Anthropic’s advantage; the company may be better off sacrificing the contracts to preserve its moral stance. OpenAI, by taking on those contracts, may face the opposite problem. The Pentagon, for its part, has alternatives: even without major tech companies willing to collaborate, the Department of Defense has already deployed numerous open-weight models, which are publicly available and typically licensed for government use.

While Amodei’s principled approach is admirable, such posturing is also largely self-serving. Anthropic understood the implications of its earlier $200 million partnership with the Department of Defense, just as it did with its 2024 collaboration with Palantir.

Amodei’s formal statement and his January essay on AI risks are laced with references to “democracy” and “autocracy,” while often sidestepping what collaboration with federal agencies actually entails. Amodei has also embraced the idea of using AI to bolster military superiority on behalf of democracies in response to threats from authoritarian regimes. It sounds like an inspiring vision, but it presupposes that democracies share a commitment to public welfare and peace.

Nevertheless, the Pentagon has legitimate grounds to require that the AI systems it acquires meet its operational requirements. Unlike typical consumers, the Pentagon routinely procures products designed to kill, such as tanks and weaponry. Its demands are therefore inherently tied to instruments of deadly force, which continue to develop along a disturbing yet consistent trajectory toward ever greater automation.

On the surface, the conflict appears to be a standard case of give-and-take in a market economy. The Pentagon’s unique requirements inform its purchasing decisions regarding products and services, leading companies to choose how to meet these demands and at what cost. In this respect, the situation mirrors the usual operations of a procurement office.

However, the dynamics are complicated under the Trump administration. Hegseth has not only threatened Anthropic with the loss of government contracts but has also, at least temporarily, classified the company as a “supply-chain risk to national security,” a label typically reserved for foreign entities. This designation prohibits not only government agencies but also their contractors from engaging with Anthropic.

Additionally, the government has hinted at invoking the Defense Production Act, potentially compelling Anthropic to retract contractual provisions or alter its AI models to eliminate built-in safety measures. The evolving demands from the government, alongside Anthropic’s responses, will shape the legal landscape in the coming weeks.

Autonomous weapon systems, notably, are not going away. They have been with us since primitive traps evolved into more sophisticated mechanical devices, and society is still grappling with the ethical implications of technologies like landmines. The U.S. military has long employed systems such as the Phalanx CIWS, a 1980s shipboard defense system that operates fully autonomously. Contemporary military drones can identify and engage targets without direct human control. AI will inevitably find its way into military applications, as it has into countless other domains.


The takeaway from this situation should not be that one corporation is inherently more ethical than another or that a solitary corporate figure can halt the government’s move towards using AI for warfare or surveillance. Sadly, such barriers in our current climate are neither reliable nor enduring.

Instead, the focus should be on the need for robust democratic frameworks and the urgent case for reform in the United States. If the Pentagon is pursuing AI applications for mass surveillance or autonomous warfare that the public deems unacceptable, that is a clarion call for stronger legal limits on such military practices. If the public is wary of government power dictating when and how companies navigate the ethical dilemmas surrounding their technologies, then we must strengthen the legal protections around government procurement.

Maximizing military capabilities within legal constraints is essential for the Pentagon. At the same time, companies such as Anthropic should work diligently to win trust from consumers and clients. However, we should not assume that either group operates solely in the public’s interest.

  • Nathan E Sanders is a data scientist affiliated with the Berkman Klein Center at Harvard University. He is the co-author, with Bruce Schneier, a security technologist at the Harvard Kennedy School, of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.
