Trump is leveraging AI in his battles—this marks a perilous shift | Chris Stokel-Walker

The capabilities of artificial intelligence (AI) are vast and varied. It can help manage your shopping list, entertain children with personalized bedtime stories, increase workplace efficiency, and enhance government operations. Yet, a critical aspect that deserves urgent attention is the militarisation of AI and the inherent risks that come with it.
In recent months, the Trump administration has reportedly used AI in regime change operations, a worrying escalation. AI was reportedly employed in attempts to remove Nicolás Maduro from power in Venezuela, with the technology guiding operations carried out by forces on the ground. More recently, AI tools assisted in an attack on Iran, supporting intelligence analysis and target identification.
The implications of these developments are significant. The employment of AI in military contexts has already resulted in casualties and increased tensions in the Middle East. Understandably, this raises serious ethical concerns.
Dario Amodei, the CEO of Anthropic, has found himself in a heated public disagreement with President Trump. The conflict arose over Amodei’s insistence on maintaining two red lines: prohibiting the use of AI for widespread domestic surveillance and ensuring that fully autonomous weapons never operate without human oversight. While Anthropic held its ground, OpenAI quickly forged an agreement with the Pentagon, claiming its deal contained stronger ethical protections than those Anthropic had proposed.
Despite the technicalities of these agreements, the underlying reality is alarming. Tools initially designed for mundane tasks, such as drafting emails or writing cover letters, are now being integrated into frameworks that can incite violence and warfare.
In the past, discussions centered around the control of AI, particularly its military applications, took place mostly in academic circles. Concerns felt abstract, as they were based on hypothetical scenarios. With the recent actions concerning Maduro and the missile strikes in Iran, this previously theoretical fear has transformed into a disturbing reality.
Historically, the philosophy surrounding armed conflict dictated that the most formidable weapons serve as deterrents rather than tools of engagement: the concept of mutually assured destruction has been a significant factor in preventing nuclear warfare. Disturbingly, early simulations suggest that AI systems may be prone to deploying nuclear options hastily. We are entering a period in which nations will increasingly build AI into military strategy not because the ethical questions surrounding automated warfare have been resolved, but in spite of the fact that they remain fiercely contested.
As military historians reflect on these recent events, they may well conclude that the evolution of AI in military contexts marks a pivotal moment akin to the atomic bomb’s use in Japan—a clear demarcation between the past and an uncertain future.
What actions can we take moving forward? Unfortunately, the options appear limited. Ideally, a comprehensive ban on military applications of AI would have been put in place long ago, but momentum has shifted in the opposite direction over the past decade. Demis Hassabis, for example, once refused to sell DeepMind to Google unless the company committed not to use its AI for military purposes. Last year, Alphabet quietly rescinded that promise. Trump’s administration has further eroded any remaining notions of ethical restraint in this area.
In light of these developments, the international community must exert pressure to pull the United States back from this perilous path. Allies need to demand that the Trump administration not only exercise responsibility over military AI but also accept binding international agreements, encompassing procurement standards and oversight measures that promote accountability rather than sideline it. If the world’s leading military normalizes the deployment of consumer-grade AI in regime change operations, we risk entering a vastly more precarious era of AI governance.
