
7th February 2026

AI swarms could hijack democracy – without anyone noticing

A new study warns that coordinated swarms of AI-generated personas could quietly manipulate public opinion online, creating the illusion of widespread agreement.
They don't march in the streets or storm the polls, but a new breed of AI-controlled personas could be the next big threat to democracy. In a new Science study, researchers warn of a coming generation of AI-driven personas that can coordinate and adapt in real time to infiltrate online groups and influence public opinion.

The authors of the paper, including Dr. Kevin Leyton-Brown from the University of British Columbia, describe how the combination of large language models and coordinated AI agents can allow malicious AI swarms to imitate human social dynamics.

"The danger isn't only false content – it's synthetic consensus: the illusion that everyone agrees, engineered at scale," explains Dr. Daniel Thilo Schroeder, research scientist at the SINTEF foundation in Norway and first author of the paper. "Instead of repeating a script, swarms iterate. They probe audiences with many variants, measure responses and amplify the winners."

Swarms of AI personas mimic humans so well that they can infiltrate online communities, shape conversations, and tilt elections – all at machine speed. Unlike old-school botnets, these agents coordinate in real time, adapt to feedback, and sustain coherent narratives across thousands of accounts.

"We shouldn't imagine that society will remain unchanged as these systems emerge," says Dr. Leyton-Brown. "A likely result is decreased trust of unknown voices on social media. This might sound like a good thing, but it could empower celebrities and make it harder for grassroots messages to break through."
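The "probe, measure, amplify" loop Schroeder describes works much like a multi-armed bandit experiment. The sketch below is purely illustrative and not taken from the paper: the message variants, engagement rates, and function name are all hypothetical, and real swarms would be far more sophisticated.

```python
import random

def amplify_winners(variants, engagement, rounds=10000, eps=0.1, seed=0):
    """Epsilon-greedy sketch of the 'probe, measure, amplify' loop:
    post many message variants, track responses, favour the winners.
    `engagement` maps each variant to a hypothetical response rate."""
    rng = random.Random(seed)
    counts = {v: 0 for v in variants}
    wins = {v: 0 for v in variants}
    for _ in range(rounds):
        if rng.random() < eps:
            v = rng.choice(variants)          # probe: try a random variant
        else:                                 # amplify: reuse the current best
            v = max(variants, key=lambda x: wins[x] / counts[x] if counts[x] else 0)
        counts[v] += 1
        wins[v] += rng.random() < engagement[v]   # measure: did it get engagement?
    return counts

# Hypothetical engagement rates for three message framings.
rates = {"fear": 0.02, "humour": 0.05, "outrage": 0.08}
posted = amplify_winners(list(rates), rates)
```

After a short exploration phase, almost all posts go to whichever framing draws the most engagement, which is the sense in which the swarm "amplifies the winners" without any human choosing the message.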
Advances in large language models and multi-agent systems allow a single operator to deploy thousands of AI "voices" that look authentic and talk like locals. They can run millions of micro-tests to find the most persuasive messages, creating a synthetic consensus that feels grassroots-driven but is engineered to manipulate democratic discourse.

Full-scale AI swarms – potentially involving millions of autonomous agents with persistent, human-like personas – remain theoretical for now. But early warning signs already include AI-generated deepfakes and fabricated news outlets that influenced recent election debates in the U.S., Taiwan, Indonesia, and India, says Dr. Leyton-Brown. Monitoring groups also report pro-Kremlin networks flooding the web with content intended to poison future AI training data. Researchers say the next election could be the proving ground for this technology.

The paper outlines several ways to safeguard against AI swarms, such as monitoring coordination patterns in real time, implementing stronger account verification, and publishing incident reports. The authors also propose the use of agent-based simulations: computational models that give insight into how autonomous agents interact. Lastly, the authors outline policy changes that may help, such as reducing the monetisation of inauthentic engagement and increasing accountability.

"We're not predicting outcomes, but the capability curve is clear: coordinated AI systems lower the cost of influence and raise the stakes for democracy," explains Dr. Jonas R. Kunst, Professor of Communication at BI Norwegian Business School and paper co-author.
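To give a flavour of the agent-based simulations the authors propose, here is a minimal toy model, again not from the paper: "human" agents copy the opinion of a randomly encountered agent, while a small group of coordinated bot agents never changes its mind. Every number and name is an illustrative assumption.

```python
import random

def simulate_opinion(n_humans=1000, n_bots=50, steps=10000, seed=1):
    """Toy agent-based simulation (illustrative only): humans adopt the
    opinion of a random agent they meet; bots always hold opinion 1,
    modelling a small swarm that relentlessly pushes one view.
    Returns the fraction of humans ending up with the bot-pushed opinion."""
    rng = random.Random(seed)
    humans = [rng.choice([0, 1]) for _ in range(n_humans)]  # start roughly 50/50
    bots = [1] * n_bots
    for _ in range(steps):
        i = rng.randrange(n_humans)      # pick a human to update
        pool = humans + bots             # bots mingle indistinguishably
        humans[i] = rng.choice(pool)     # adopt a random agent's opinion
    return sum(humans) / n_humans

share = simulate_opinion()
```

Even though the bots make up under 5% of the population, the human majority drifts well past an even split toward the bot-pushed opinion, which is the kind of dynamic such simulations are meant to expose before it plays out on a real platform.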