
How AI Bots Could Sabotage 2024 Elections around the World

AI-generated disinformation will target voters on a near-daily basis in more than 50 countries, according to a new analysis


Hate speech, political propaganda and outright lies are hardly new problems online—even if election years such as this one exacerbate them. Bots, or automated social media accounts, have made it much easier to spread deliberate falsehoods (disinformation) as well as inaccurate rumors and other kinds of misinformation. But the bots that afflicted past voting seasons often churned out poorly constructed, grammatically incorrect sentences. Now, as large language models (artificial intelligence systems that generate text) become accessible to ever more people, some researchers fear that automated social media accounts will soon get a lot more convincing.

Disinformation campaigns, online trolls and other “bad actors” are set to increasingly use generative AI to fuel election falsehoods, according to a new study published in PNAS Nexus. In it, researchers project that—based on “prior studies of cyber and automated algorithm attacks”—AI will help spread toxic content across social media platforms on a near-daily basis in 2024. The potential fallout, the study authors say, could affect election results in more than 50 countries holding elections this year, from India to the U.S.

This research mapped the connections between bad actor groups across 23 online platforms that included Facebook and Twitter as well as niche communities on Discord and Gab, says the study’s lead author Neil Johnson, a physics professor at George Washington University. Extremist groups that post a lot of hate speech, the study found, tend to form and survive longer on smaller platforms—which generally have fewer resources for content moderation. But their messages can have a much wider reach.


Many small platforms are “incredibly well connected, to each other and internally,” Johnson says. This allows disinformation to bounce like a pinball across 4chan forums and other laxly moderated websites. If malicious content seeps out of these networks onto mainstream social sites such as YouTube, Johnson and his colleagues estimate that one billion people are potentially vulnerable to it.

“Social media lowered the cost for disseminating misinformation or information. AI is lowering the cost for producing it,” says Zeve Sanderson, executive director of New York University’s Center for Social Media and Politics, who was not involved in the new study. “Now, whether you’re a foreign malign actor or a part of a smaller domestic campaign, you’re able to use these technologies to produce multimedia content that’s going to be somewhat compelling.”

Studies of disinformation in previous elections have pinpointed how bots can spread malicious content across social media at scale, manipulating online discussions and eroding trust. In the past, bots would take messages created by a person or program and repeat them, but today’s large language models (LLMs) are enhancing those bots with a new feature: machine-written text that sounds convincingly human. “Generative AI alone is not more dangerous than bots. It’s bots plus generative AI,” says computational social scientist Kathleen Carley of Carnegie Mellon University’s School of Computer Science. Generative AI and large language models can also be used to write software, making it faster and easier for programmers to code bots.

Many early bots were limited to relatively short posts, but generative AI can make realistic, paragraphs-long comments, says Yilun Du, a Ph.D. student studying generative AI modeling at the Massachusetts Institute of Technology’s Computer Science & Artificial Intelligence Laboratory. Currently, AI-generated images or videos are easier to detect than text; with images and videos, Du explains, “you have to get every pixel perfect, so most of these tools are actually very inaccurate in terms of lighting or other effects on images.” Text, however, is the ultimate challenge. “We don’t have tools with any meaningful success rate that can identify LLM-generated texts,” Sanderson says.

Still, there are some tells that can tip off experts to AI-generated writing: grammar that is too perfect, for example, or a lack of slang, emotional words or nuance. “Writing software that shows what is made by humans and what is not, and doing that kind of testing, is very costly and very hard,” Carley says. Although her team has worked on programs to identify AI bot content on specific social media platforms, she says the tools are imperfect. And each program would have to be completely redone to function on a different website, Carley adds, because people on X (formerly Twitter), for instance, communicate in ways that are distinct from those of Facebook users.
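
To make those tells concrete, here is a minimal, purely illustrative sketch of how such a heuristic check might look. It is not Carley’s team’s software; the word lists, weights and scoring rule are assumptions invented for this example, and a real detector would be trained on labeled data from a specific platform.

```python
import re

# Purely illustrative word lists (assumptions, not drawn from any real detector).
SLANG = {"lol", "tbh", "ngl", "imo", "smh", "lmao"}
EMOTION_WORDS = {"love", "hate", "angry", "furious", "amazing", "awful"}

def bot_text_score(post: str) -> float:
    """Return a rough 0-to-1 score; higher means more 'machine-like' by these tells."""
    tokens = re.findall(r"[a-z']+", post.lower())
    if not tokens:
        return 0.0
    has_slang = any(t in SLANG for t in tokens)
    has_emotion = any(t in EMOTION_WORDS for t in tokens)
    # Proxy for "too perfect" grammar: every sentence starts with a capital letter.
    sentences = [s.strip() for s in re.split(r"[.!?]+", post) if s.strip()]
    tidy = sum(s[0].isupper() for s in sentences) / max(len(sentences), 1)
    # No slang, no emotional words and tidy sentences all push the score up.
    return 0.4 * (not has_slang) + 0.3 * (not has_emotion) + 0.3 * tidy

print(bot_text_score("ngl this policy is awful lol"))                           # low score
print(bot_text_score("The proposed policy has several notable implications."))  # higher score
```

Even a toy scorer like this shows why such tools have to be rebuilt per platform: the slang and emotional vocabulary that signal a human author on X look very different from the language used on Facebook.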

Many experts doubt that AI detection programs—those that analyze text for signs of a large language model’s involvement—can adequately identify AI-generated content. Adding watermarks to such material, or building filters and guardrails into the AI models themselves, can’t cover all the bases either. “In the area of using AI and disinformation, we’re in an arms race” with bad actors, Carley says. “As soon as we come up with a way of detecting it, they come up with a way of making it better.” Johnson and his colleagues also found that bad actors are likely to abuse base versions of generative AI, such as GPT-2, which are publicly available and have looser content filters than current models. Other researchers predict that much of the coming malicious content won’t be made with big companies’ sophisticated AI but will instead be generated with open-source tools built by small groups or individual programmers.

But bots can evolve in tandem with even these simpler versions of AI. In previous election cycles, bot networks remained near the fringe of social media. Experts predict that AI-generated disinformation will spread much more widely this time around. It’s not just because AI can produce content faster; the dynamics of social media use have changed, too. “Up until TikTok, most of the social media that we saw were friend-, follower-, social graph-based networks. It tended to be that people followed people who they were aligned with,” Sanderson explains. TikTok instead uses an algorithmic feed that injects content from accounts that users don’t follow, and other platforms have altered their algorithms to follow suit. Such feeds also surface topics, Sanderson points out, “that the platform is trying to discover if you like or not,” leading to “a much broader net of content consumption.”

In Sanderson’s previous studies of bots on Twitter, research assistants often labeled an account as a bot or not by looking at its activity, including the photos and text it posted or reposted. “It was essentially like this kind of Turing test for accounts,” he says. But as AI generation gets steadily better at removing grammatical irregularities and other signifiers of bot content, Sanderson believes that the responsibility of identifying these accounts will have to fall to social media companies, which can check account metadata that external researchers rarely have access to.

Rather than going after false content itself, some disinformation experts think that finding and containing the people who make it would be a more practical approach. Effective countermeasures, Du suggests, could function by detecting activity from certain IP addresses or identifying when there’s a suspiciously large number of posts at a certain time of day.
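
As a rough illustration of what such behavioral countermeasures might look like, the sketch below flags (IP address, hour) buckets whose posting volume crosses a threshold. The record format, field names and cutoff are all assumptions made for this example; platforms would work from their own logs and tuned thresholds.

```python
from collections import Counter
from datetime import datetime

# Hypothetical post log; in practice only the platforms hold this metadata.
posts = [
    {"ip": "203.0.113.7", "timestamp": "2024-03-01T02:05:00"},
    {"ip": "203.0.113.7", "timestamp": "2024-03-01T02:06:30"},
    {"ip": "198.51.100.2", "timestamp": "2024-03-01T14:20:00"},
]

POSTS_PER_HOUR_LIMIT = 50  # assumed cutoff, purely illustrative

def flag_suspicious_activity(posts, limit=POSTS_PER_HOUR_LIMIT):
    """Count posts per (IP address, hour) and flag buckets that exceed the limit."""
    buckets = Counter()
    for post in posts:
        hour = datetime.fromisoformat(post["timestamp"]).replace(
            minute=0, second=0, microsecond=0
        )
        buckets[(post["ip"], hour)] += 1
    return {bucket: count for bucket, count in buckets.items() if count > limit}

for (ip, hour), count in flag_suspicious_activity(posts).items():
    print(f"{ip}: {count} posts in the hour starting {hour}")
```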

This could work because there are “fewer bad actors than bad content,” Carley says. And disinformation peddlers are concentrated in certain corners of the Internet. “We know that a bunch of the stuff comes from a few main websites that link to each other, and the content of those websites is generated by LLMs,” she adds. “If we can detect the bad website as a whole, we’ve suddenly captured tons of bad information.” Additionally, Carley and Johnson agree that moderating content at the level of small social media communities (posts by members of specific Facebook pages or Telegram channels, for instance) would be more effective than sweeping policies that ban entire categories of content.
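
Carley’s point about interlinked websites lends itself to a simple network view. The sketch below, which uses the networkx library on made-up domain names, finds clusters of sites that all link back to one another; it illustrates the general idea of catching a “bad website as a whole,” not the method used in the study.

```python
import networkx as nx

# Made-up cross-link data: (source site, linked site) pairs.
links = [
    ("siteA.example", "siteB.example"),
    ("siteB.example", "siteC.example"),
    ("siteC.example", "siteA.example"),
    ("siteD.example", "siteA.example"),
]

graph = nx.DiGraph(links)

# Clusters in which every site can reach every other by following links.
clusters = [c for c in nx.strongly_connected_components(graph) if len(c) > 1]
for cluster in clusters:
    print("Tightly interlinked cluster:", sorted(cluster))
```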

Not all is lost to the bots yet, however. Despite reasonable concerns about AI’s impact on elections, Sanderson and his colleagues recently argued against overstating potential harms. The actual effects of increased AI content and bot activity on human behaviors—including polarization, vote choice and cohesion—still need more research. “The fear I have is that we’re going to spend so much time trying to identify that something is happening and assume that we know the effect,” Sanderson says. “It could be the case that the effect isn’t that large, and the largest effect is the fear of it, so we end up just eroding trust in the information ecosystem.”

Charlotte Hu is a science and technology journalist based in Brooklyn, N.Y. She's interested in stories at the intersection of science and society. Her work has appeared in Popular Science, GenomeWeb, Business Insider and Discover magazine.
