Experts caution about the danger to democracy posed by ‘AI bot swarms’ infiltrating social media.

A coalition of prominent experts has issued a stark warning that political leaders might soon harness swarms of AI agents designed to mimic human behavior, posing a significant threat to democratic processes. This alarming prediction comes amidst increasing concerns about the manipulation of public opinion through sophisticated AI-driven misinformation campaigns.
Among the key voices in this global consortium are Maria Ressa, a Nobel Peace Prize laureate and free-expression advocate, alongside AI and social science scholars from prestigious institutions like Berkeley, Harvard, Oxford, Cambridge, and Yale. These experts emphasize the emerging “disruptive threat” of sophisticated, hard-to-trace “AI swarms” that could infiltrate social media and messaging platforms.
The authors caution that a would-be autocrat could exploit these AI swarms to manipulate public sentiment, for instance by engineering acceptance of canceled elections or overturned election results. They forecast that the technology could be widely deployed by the time of the 2028 U.S. presidential election.
This warning has been detailed in a recent publication in Science, which not only highlights the risks involved but also calls for a unified global response to mitigate these threats. Suggestions include the development of “swarm scanners” and the application of watermarks to content as countermeasures against AI-generated misinformation. Early iterations of these advanced influence operations have already been noticed in elections in Taiwan, India, and Indonesia.
The authors write: “A disruptive threat is emerging: swarms of collaborative, malicious AI agents. These systems are capable of coordinating autonomously, infiltrating communities and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy.”
Inga Trauthig, a recognized expert in propaganda technology, noted that political leaders might be hesitant to rely fully on such advanced AI, as they prefer to maintain control over their campaigns. There is also skepticism about how effective these tactics would be, since voters tend to be swayed more strongly by offline influences than by online ones.
The experts backing these warnings include Gary Marcus from New York University, a noted skeptic of the purported capabilities of contemporary AI models, who describes himself as a “generative AI realist.” Audrey Tang, Taiwan’s first digital minister, lends her voice, cautioning that “those in the pay of authoritarian forces are undermining electoral processes, weaponizing AI, and employing our societal strengths against us.”
Other notable experts in the consortium include David Garcia, a professor specializing in social and behavioral data science at the University of Konstanz; Sander van der Linden, the director of Cambridge University’s social decision-making lab; and Christopher Summerfield, an AI researcher at Oxford. Together, they project that political figures could deploy an effectively unlimited number of AIs to impersonate humans online, infiltrating communities and learning their vulnerabilities over time. Armed with this knowledge, the swarms could disseminate increasingly persuasive falsehoods to sway public opinion.
The evolving capabilities of AI to analyze tone and content further enhance this threat, allowing these systems to effectively reproduce human interactions. They can employ relevant slang, maintain irregular posting patterns, and adaptively refine their strategies to avoid detection. The advancement of “agentic” AI augments their ability to autonomously devise and execute coordinated actions.
These AI entities are not restricted to merely social media operations; they might leverage messaging systems or even compose blogs and emails to optimize their influence based on the most effective communication channel, as observed by Daniel Thilo Schroeder, a research scientist affiliated with the Sintef research institute in Oslo.
Schroeder expressed concerns, stating: “It’s just frightening how easy these things are to vibe code and just have small bot armies that can navigate online social media platforms, emails, and utilize these tools effectively.”
Jonas Kunst, a professor of communication at the BI Norwegian Business School, echoed this sentiment, suggesting that if these bots evolve into a collective unit capable of exchanging information for malicious purposes, such as discerning community weaknesses, then the coordination of these AI swarms would make them more accurate and efficient in pursuing their objectives.
The effects of such AI-driven propaganda are already being observed in regions like Taiwan, where citizens are unknowingly subjected to targeted Chinese propaganda. Puma Shen, a Democratic Progressive Party MP in Taiwan fighting against disinformation, noted that AI bots have recently intensified interactions with voters on platforms like Threads and Facebook.
Shen remarked that these AIs often present an overwhelming amount of unverifiable information, leading to an “information overload.” For instance, they might spread fake news articles suggesting that the U.S. is planning to abandon Taiwan. Another tactic employed by these bots is to encourage younger individuals to adopt a neutral stance on the complex China-Taiwan issue, subtly suggesting that undecided voters should refrain from forming strong opinions.
Shen further articulated the dangers posed by these bots, stating: “It’s not telling you that China’s great, but it’s encouraging them to be neutral … this is very dangerous, because it leads to the label of people like me being radical.”
Independent AI experts consulted to evaluate these swarm-related warnings noted, however, that AI is not advancing as rapidly as Silicon Valley firms such as OpenAI and Anthropic suggest.
Trauthig pointed out that although the technology for AI-driven microtargeting was ready during the election-heavy year of 2024, it was conspicuously little used, contrary to earlier predictions. Most political propagandists still seem to rely on older technologies, hesitant to fully embrace these cutting-edge developments.
Michael Wooldridge, a professor specializing in AI foundations at Oxford University, believes that the potential for misuse is real: “I think it is entirely plausible that bad actors will try to mobilize virtual armies of LLM-powered agents to disrupt elections and manipulate public opinion. It’s technologically feasible … the technology has progressed significantly and has become much more accessible.”
Illustration: Guardian Design / Rich Cousins