AI affects the speed and scale of election disinformation by going well beyond traditional bot farms. Instead of producing only generic fake content, AI can now generate culturally relevant, localized messages that target the specific fears of different voter groups at a fraction of the cost of traditional methods. The result is a volume of noise that routinely outpaces fact-checking. A clear example is the Slovakian election of September 2023. Two days before the vote, an AI-generated recording of a candidate appeared online, in which the candidate seemed to discuss plans to rig the vote. It was released during a mandatory pre-election media blackout, which prevented the candidate from rebutting it. That timing was deliberate: the regulatory blackout window left no sanctioned channel for a response, giving the fabrication room to sow doubt. More broadly, the goal of AI-driven disinformation is increasingly not to make people believe one specific lie, but to make them doubt everything they see and hear. When a voter can no longer tell a real speech from a synthetic one, the resulting distrust and apathy toward the democratic process can be just as damaging as the disinformation itself.
If you look closely, AI isn't changing elections in one dramatic way—it's changing them in a hundred quiet, hard-to-notice ones. One example that really stands out is how voice cloning has started to blur the line between what's real and what only sounds real. In 2024, voters in New Hampshire received robocalls that sounded like President Joe Biden telling them not to vote in the primary. It wasn't him. It was an AI-generated voice—convincing enough that, if you weren't paying close attention, you'd believe it. What's unsettling isn't just the technology itself—it's how easily it fits into existing habits. People already trust what they hear on the phone, especially if it sounds familiar. AI just lowers the cost and effort to exploit that trust at scale. And that's really the shift. Elections have always had misinformation, but AI makes it faster, cheaper, and more personalized. Instead of one misleading message broadcast to everyone, you can have thousands of slightly different versions, each tailored to a specific group, each feeling oddly credible. It doesn't mean elections are suddenly broken. But it does mean the burden has shifted a bit more onto voters—to pause, to question, to double-check even things that sound authentic. That's a subtle but meaningful change in how democracy feels on the ground.
AI is transforming elections through both threat and defense. One clear instance is AI-powered deepfake detection during the 2024 Indian elections. The Election Commission deployed machine learning systems to identify synthetic media in real-time, flagging over 500 manipulated videos within the first week. This technology analyzed facial inconsistencies, audio mismatches, and metadata anomalies faster than human fact-checkers ever could. However, the same AI tools can generate disinformation at scale—creating a technological arms race. The key insight: AI does not just threaten elections; it is essential for protecting them. Electoral bodies must invest in AI defense systems before adversaries weaponize AI offense. In modern democracy, the algorithm is both the poison and the antidote.
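The multi-signal approach described above can be sketched in miniature. This is a hypothetical illustration, not the Election Commission's actual system: the signal names, weights, and threshold are all assumptions, and a real detector would compute these scores from the media itself rather than receive them as inputs.

```python
# Hypothetical sketch: combining weak detection signals into one score.
# Signal names, weights, and the threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MediaSignals:
    facial_inconsistency: float   # 0..1, higher = more suspicious
    audio_visual_mismatch: float  # 0..1, lip-sync / audio drift
    metadata_anomaly: float       # 0..1, e.g. missing device info, AI-tool tags


def synthetic_media_score(s: MediaSignals) -> float:
    """Weighted combination of weak signals; the weights are assumptions."""
    return (0.40 * s.facial_inconsistency
            + 0.35 * s.audio_visual_mismatch
            + 0.25 * s.metadata_anomaly)


def flag_for_review(s: MediaSignals, threshold: float = 0.6) -> bool:
    """Route suspicious media to human fact-checkers above a cutoff."""
    return synthetic_media_score(s) >= threshold
```

The design point is that no single heuristic is reliable on its own; combining several weak signals and keeping a human in the loop above a threshold mirrors how detection pipelines are usually structured.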
One of the ways AI is beginning to affect elections is by lowering the cost of tools that campaigns used to have to buy from political consulting firms. Traditionally, well-funded candidates had a clear advantage because they could hire professionals to develop strategy, write speeches, analyze voter data, and craft messaging. Today, many of those same functions can be handled by AI tools quickly and at very little cost. A candidate with limited resources can now take voter turnout data from past elections, drop it into a spreadsheet, and use AI to identify precincts with low turnout or neighborhoods where outreach might matter most. AI can help draft speeches, generate talking points, prepare debate responses, and even organize door-to-door walking routes based on demographic data and past voting behavior. Tasks that once required teams of consultants and days of work can now be done by a candidate or volunteer in a matter of minutes. We've already seen versions of this approach in recent campaigns. In Argentina's 2023 presidential race, outsider candidate Javier Milei relied heavily on digital tools and AI-assisted content generation to expand his reach without the traditional campaign infrastructure that established parties rely on. His campaign used AI-supported messaging and rapid-response digital content to test ideas, respond to news quickly, and maintain a strong online presence with relatively limited resources. Regardless of political views, it demonstrated how technology can help a candidate compete with far larger campaign operations. Looking ahead, this could have important implications for third-party and independent candidates. Historically, those campaigns have struggled less because of a lack of ideas and more because they couldn't afford the consultants and infrastructure required to compete with the major parties. 
AI doesn't remove every barrier, but it does narrow the gap by giving smaller campaigns access to strategic analysis, messaging support, and voter targeting tools that were once limited to well-funded political machines.
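As a rough illustration of the turnout triage described above, here is a minimal sketch of what an AI tool (or a short script it generates) might do with a spreadsheet of past-election data. The precinct names and figures are invented for the example.

```python
# Illustrative sketch of low-turnout precinct triage; data is made up.
def low_turnout_precincts(turnout: dict[str, tuple[int, int]],
                          threshold: float = 0.5) -> list[str]:
    """Return precincts whose turnout rate (votes cast / registered
    voters) fell below the threshold, worst first."""
    rates = {p: cast / registered for p, (cast, registered) in turnout.items()}
    return sorted((p for p, r in rates.items() if r < threshold),
                  key=lambda p: rates[p])


data = {
    "Precinct 1": (420, 1000),  # 42% turnout
    "Precinct 2": (780, 1000),  # 78% turnout
    "Precinct 3": (310, 1000),  # 31% turnout
}
print(low_turnout_precincts(data))  # ['Precinct 3', 'Precinct 1']
```

A few years ago this kind of analysis would have been a deliverable from a consulting firm; today a volunteer can produce it in minutes, which is the cost shift the paragraph describes.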
AI is reshaping how information spreads during elections, influencing both perception and engagement. For instance, in recent campaigns, AI-driven content recommendation systems amplified certain narratives, making it easier for voters to encounter highly targeted messaging based on their interests and behaviors. This can accelerate awareness but also risks creating echo chambers where diverse perspectives are limited. The key insight is that AI's role in elections is not just about efficiency; it highlights the responsibility of platforms and individuals to ensure information reaches people thoughtfully and without distortion.
One clear instance is how AI-generated deepfake audio and video impacted the 2024 elections across multiple countries. In the US presidential primaries, a robocall using AI-cloned audio of President Biden's voice was sent to thousands of New Hampshire voters telling them not to vote in the primary. It sounded convincingly like Biden and was designed to suppress voter turnout. The incident led to the FCC explicitly banning AI-generated voice calls in political campaigns. From a technology perspective, what made this alarming wasn't the sophistication of the AI; it was how cheap and accessible it had become. You can clone someone's voice with a few minutes of publicly available audio and generate convincing fake calls for pennies each. The barrier to creating election disinformation has essentially dropped to near zero. The broader pattern we're seeing is that AI doesn't need to change how people vote on a massive scale to affect an election outcome. It just needs to create enough confusion, suppress turnout in a few key areas, or erode trust in the information ecosystem. Elections in India, Indonesia, and several European countries all dealt with AI-generated misinformation campaigns in their recent cycles. The technology is outpacing regulation everywhere, and most electoral commissions are still figuring out how to even detect AI-generated content, let alone enforce rules against it.
AI has started to leave a real mark on elections, mostly through the spread of fake but convincing media. With today's tech, anyone can whip up realistic audio, images, or videos that look and sound authentic, even if the events never happened. These fakes travel fast on social media, shaping opinions before fact-checkers even have a chance to respond. Take what happened during the 2024 U.S. election. People in New Hampshire got robocalls that used an AI-generated version of Joe Biden's voice. The message urged them not to vote in the primary and instead "save" their vote for later. This wasn't something Biden or his campaign did. Someone just used AI to pull off a convincing fake and blasted it out to thousands of phones. What really makes this kind of incident alarming isn't just the technology. It's how effortlessly it spreads. Anyone with the right software and access to robocalls can reach massive audiences in no time. Sure, plenty of people might sense something's off, but some will buy it or at least get thrown off. That's the big risk here, and it's not limited to the U.S. As these AI tools become easier to access, election officials, tech companies, and the media all need to move faster to flag and fight back against fakes. Otherwise, synthetic media could really mess with how people see the candidates and the issues right when it matters most.
AI is changing elections by making it easier to create and spread convincing false media that undermines trust. In my work I have seen attackers use AI to produce highly convincing deepfake videos and audio that impersonate candidates or officials. One example is a deepfake that falsely depicts a candidate making inflammatory statements; such content can be produced and circulated quickly and influence voters before verification is possible. That dynamic increases the need for robust verification, media literacy, and human review by campaigns and news organizations.
One interesting way AI is affecting elections isn't just through misinformation, which gets most of the headlines. It's through how campaigns understand and adapt to voter sentiment in real time. In the past, campaigns relied heavily on polling, focus groups, and slower forms of analysis to understand what voters cared about. That information could take days or weeks to collect and interpret. AI has changed that dramatically. Today, campaigns can analyze enormous volumes of online discussion—social media posts, local news comments, community forums—and detect patterns in how voters are reacting to certain issues almost immediately. For example, natural language models can identify when a particular topic suddenly spikes in emotional intensity within a region. Maybe housing costs, education policy, or a local infrastructure issue suddenly becomes a focal point of discussion. Campaign teams can see those shifts early and adjust messaging, outreach, or policy emphasis far faster than traditional polling would allow. What's fascinating is that this creates a new dynamic in elections. Instead of campaigns broadcasting fixed messages for months, they're increasingly operating like adaptive systems—continuously adjusting based on how public sentiment evolves. Of course, that also raises important questions about transparency and fairness. When AI allows campaigns to respond to micro-level sentiment almost instantly, political messaging can become highly tailored and fluid in ways voters may not even notice. In many ways, AI is turning elections into something closer to a real-time feedback loop between candidates and the public. The technology itself isn't inherently good or bad—but it's fundamentally changing how quickly political narratives can form, shift, and spread.
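The spike detection described above can be sketched with a simple baseline-and-threshold rule. This is a hedged illustration: real campaign tooling would layer sentiment models on top of volume counts, and the mention figures and z-score cutoff here are assumptions.

```python
# Sketch of topic-volume spike detection; the data and threshold are
# illustrative, not from any real campaign system.
from statistics import mean, stdev


def is_spike(history: list[float], today: float,
             z_threshold: float = 3.0) -> bool:
    """True if today's volume sits more than z_threshold standard
    deviations above its recent baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > z_threshold


# Hypothetical daily mentions of "housing costs" in one region.
housing_mentions = [120, 130, 125, 118, 128, 122, 131]
print(is_spike(housing_mentions, 410))  # prints True: chatter tripled
```

The point of the sketch is the feedback-loop dynamic: once a topic trips the threshold, messaging can be adjusted the same day rather than after the next poll.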
AI is starting to affect elections mainly through the way information spreads online. One major concern is how easily realistic-looking content can now be created and shared at scale. A clear example is the use of AI-generated audio during the 2024 United States election cycle. In one incident, voters received automated phone calls that sounded like a well-known political leader telling people not to vote. The voice was created using AI to mimic the real person, which confused many voters until officials later confirmed it was fake. Situations like this show how AI can be used to influence public opinion or create doubt during an election. Because of this, governments and technology platforms are now paying closer attention to detecting AI-generated content and warning voters when something might not be authentic.
Artificial Intelligence (AI) negatively influences elections by generating false or misleading information faster than traditional sources can, which confuses voters, spreads falsehoods that appear legitimate and professional, and forces election officials to expend ever more resources counteracting it. One example of AI impersonating a public figure to erode public confidence in the electoral process occurred during the 2024 New Hampshire primary, when voters received robocalls in an AI-generated imitation of President Biden's voice, demonstrating how AI can be used to confuse voters and undermine their trust in the electoral process.
In the electoral process, artificial intelligence is changing how election campaigns are run by allowing the fast production and dissemination of false information to voters. Misinformation in the form of fabricated text, audio, and video can confuse voters, harm political candidates, and breed distrust toward the information voters rely on when casting their ballots. One example of AI-based disinformation was seen in Slovakia during the 2023 election, when an AI-generated fraudulent audio recording, disseminated shortly before the vote, created the impression that a candidate was discussing how to manipulate the election. This use of AI to deceive voters at a critical moment is another way the technology can adversely affect the electoral process.
AI is affecting elections by lowering the cost of believable deception, so a fake message can look and sound official long enough to change behaviour. One clear instance was the AI voice-cloned robocall that mimicked President Biden ahead of the 2024 New Hampshire primary and told people not to vote. Even when it gets debunked, it burns trust and forces campaigns and election bodies to spend time on verification instead of persuasion.
AI is fundamentally transforming the electoral landscape worldwide by reshaping how campaigns target voters and disseminate information, a double-edged sword marked by both innovation and risk. On one hand, AI-driven data analytics enable unprecedented precision in voter outreach, tailoring messages that resonate personally and significantly boost engagement. On the other hand, the misuse of AI for disinformation campaigns poses a serious threat to democratic integrity. A notable example occurred during the 2019 Indian general election, where AI-powered social media bots amplified divisive narratives and influenced public discourse at scale. This highlights the urgent need for transparency and robust regulatory frameworks to safeguard electoral processes. As AI continues to evolve, balancing its strategic advantages with ethical governance remains critical to sustaining trust in democratic processes worldwide. Steven Mitts, CEO, Founder & Entrepreneurial Coach https://stevenmitts.com
AI doesn't create the problem of election manipulation—it dramatically lowers the cost and speed of doing it. We've already seen how data and digital platforms can influence political outcomes. In 2018, I wrote a Forbes piece examining the Cambridge Analytica scandal and how targeted data and behavioral profiling were used to shape voter messaging. That moment showed how powerful digital persuasion could be even without synthetic media. AI raises the stakes because it allows realistic but false content to be produced at scale. Deepfake videos, for example, can show a candidate appearing to say or do something that never happened. When that kind of content spreads rapidly across social platforms, the challenge isn't just misinformation—it's the speed at which it moves compared to the time required to verify what's real.
AI is reshaping elections primarily through the speed and scale at which information can be created and distributed. One clear instance is the rise of AI-generated political messaging and synthetic media that can imitate voices or visuals of public figures. This makes it easier for misleading content to circulate quickly across digital platforms before it can be verified. At the same time, election bodies and platforms are increasingly using AI tools to detect manipulated media and coordinated campaigns. The broader lesson is that AI is influencing both the spread of political narratives and the systems built to safeguard electoral integrity.
Artificial intelligence has begun to influence elections in ways that go far beyond the obvious concern about deepfakes. The larger shift is the speed at which narratives can spread and be reinforced through automated content, micro-targeted messaging, and algorithm-driven amplification. Political groups now have the ability to generate thousands of variations of the same message, test which ones gain traction, and then rapidly scale the most effective versions across different audiences. That creates a communication environment where voters may see very different versions of the same issue depending on what algorithms believe will hold their attention. From a digital strategy perspective, the pattern looks similar to what marketers have experienced in search and social platforms for years. Teams that understand how algorithms distribute information tend to dominate visibility. Work we do at Scale by SEO often highlights this dynamic in a different context. When analyzing search results we see how structured content, topical authority, and engagement signals influence which information surfaces first. Elections are increasingly shaped by the same mechanics. The challenge for voters and institutions is learning how to distinguish authentic information from algorithmically amplified noise. Transparency in data sources and stronger digital literacy will likely become as important to democratic processes as traditional campaign messaging.