
Elections and AI: A Boon, a Curse, and Fatigue

Aleksei Chesnokov

Lincoln Memorial, West Potomac Park. Photo by Sarathewise. CC BY-SA 3.0

So far, the loudest fears that artificial intelligence would steal free and fair elections have not really materialized. At least not head-on. The mass deepfakes that were supposed to flip electoral outcomes tend to provoke irritation rather than trust. The term AI slop has settled into public consciousness almost as quickly as spam — something intrusive, cheap, and best ignored on sight.

That is precisely why campaign teams today think twice before deploying AI tools in the public-facing parts of their strategies. This is especially true when it comes to deepfakes targeting opponents. Any exposure in this game usually benefits the victim of the attack and damages the credibility of those willing to wade into technological mud.

In the practical, behind-the-scenes work of campaigns, however, AI looks almost unequivocally like a blessing. Models and chatbots have already become standard tools for targeting, tailoring campaign messages to different audience segments, analyzing competitors, and monitoring public sentiment. In this sense, AI often functions as a barometer — filtering out emotion and cooling the overheated instincts of political strategists.

But there is a less obvious side to this story. As voters grow accustomed to dismissing AI-generated content as background noise, they increasingly discard the very impulse to engage at all. Fact-checking requires effort, time, and attention — resources that are chronically scarce. Trust gives way not to informed skepticism, but to weary cynicism: the sense that “everyone is lying,” and therefore no version of reality deserves serious consideration.

And this is where the real danger emerges. Cynicism and apathy become the perfect environment for manipulation — not because people believe falsehoods, but because they stop caring where the truth lies.

A Cold Shower Instead of an Apocalypse

Despite all the scare stories, there is still no convincing evidence that AI is radically reshaping election outcomes. Deepfakes are not stealing millions of votes. They do something far worse: they erode the desire to vote at all. People increasingly feel powerless in political decision-making, sliding into apathy and irritation.

This is what the “erosion of trust” in elections actually feels like: the sense that nothing can be trusted, not even the president’s voice on the phone. In New Hampshire in 2024, Democratic voters received robocalls allegedly from Joe Biden urging them not to participate in the primaries. The episode triggered a media panic about the threat of AI. But its main effect was far more banal and corrosive: a lingering feeling of unease among voters.

There were similar cases in Slovakia and Nigeria in 2023. In Slovakia, a deepfake audio recording surfaced in which liberal leader Michal Šimečka appeared to discuss election fraud and bribes with journalist Monika Tódová. It spread on Telegram during the 48-hour pre-election silence period, slipping past Meta’s moderation. In Nigeria, an AI-generated audio clip falsely attributed plans for vote rigging to Atiku Abubakar. The recording contained obvious artifacts, yet still went viral in the final hours before the election.

Even after these recordings were debunked, many voters were left in a state of uncertainty. Some went further, using the very existence of fakes as proof that “everything is manipulation anyway.”

Turkey offers a revealing counterpoint. There, a candidate caught in a compromising video claimed it was a deepfake, despite forensic analysis confirming its authenticity. But by that point, the audience was already polarized. The case illustrates a crucial point: even without synthetic media, the mere narrative of deepfakes can be enough to deny reality and undermine basic trust in visual evidence.

Research on information overload and “flood the zone” strategies shows that under constant conflict and noise, people deliberately reduce their news consumption. In the United States, 65 percent of adults report cutting back on news because of stress and fatigue. Social media fatigue compounds the effect: chaotic feeds simultaneously encourage the sharing of unverified content and withdrawal from political engagement altogether.

The result is subtle but significant. The louder the noise and the deeper the exhaustion, the lower the motivation to engage critically. Falsehoods begin to pass on autopilot, encountering little resistance simply because fewer people are paying attention.

Analysts who track AI in elections increasingly note that today’s “dirty tech” is less about direct persuasion and more about demobilization. AI is deployed to deliver personalized, emotionally charged messages that do not need to be fully credible. It is enough to blur a sense of meaning and responsibility — and that alone can be politically decisive.

A War of Memes Instead of a War of Deepfakes

In a world of simulations, memes hit harder than deepfakes. They do not pretend to be true and they do not require evidence. They burrow straight into emotion: laughter, anger, disgust. A meme does not ask you to believe. It gives you the feeling of “I get it” or “I’m in on it.” That is precisely why memes have proven more effective than all the apocalyptic warnings about AI.

Research by the Brookings Institution between 2023 and 2025 captures this shift clearly. Memes have become one of the main channels for spreading disinformation disguised as satire. Their power lies not in plausibility, but in virality. In 2024, AI-generated parodies — from Kamala Harris portrayed as a “deep state puppet” to images of Donald Trump in handcuffs — amassed tens and even hundreds of millions of views. But they circulated as jokes, not as “evidence.”

Instead of a war of deepfakes, what emerged was a war of memes. In 2024, supporters of Donald Trump pushed an absurd meme reading “Vote Trump or Haitians eat us,” featuring AI-generated cats and ducks. According to NewsGuard and the Center for Countering Digital Hate, its combined reach exceeded 150 million impressions on X and TikTok. Framed as a “joke,” the meme slipped past moderation despite its overtly conspiratorial and racist content.

In 2025, this memetic logic escalated into a direct political duel. Trump and California Governor Gavin Newsom began systematically trolling each other, mirroring each other’s style and visual language. Trump posted AI-generated videos on Truth Social showing Newsom in a Trump 2028 cap and clips where Newsom punches a punching bag with Trump’s face. Newsom fired back with all-caps posts in Trump’s trademark style — “TACO TRUMP” (Trump Always Chickens Out), AI parodies featuring J.D. Vance and a couch, and memes depicting Trump in diapers or with ketchup standing in for blood.

Newsom signed his posts as Gavin Newsom (GCN), deliberately echoing Trump’s DJT branding, and within hours individual posts were pulling in between 5 and 20 million views. This was no longer improvisation or random “internet humor.” It was a test run of memetic style as a tool for a future 2028 presidential campaign. The exchange played out simultaneously on X, Truth Social, and TikTok, with minimal moderation, because humor is easily classified as “satire.”

AI can mass-produce memes, but it struggles with context. Grok or Midjourney can effortlessly generate images of Trump in prison or Harris with a hammer and sickle. That works inside echo chambers, but outside them it often comes across as cringe and alienating. As NPR journalists have noted, a meme is a wink, not a lie. That is precisely why the promised “deepfake apocalypse” never arrived.

Irony does not cure democracy. It helps people survive an overloaded reality. Memes offer catharsis: in an age of fake-fatigue, they allow a reaction to absurdity without the effort of fact-checking. The cost, however, is high. Memes disguise racism and conspiracy thinking as jokes, intensify polarization, and blur the line between satire and disinformation. Voters may not believe memes — but they react to them emotionally. And that emotional shift is enough to change the tone of campaigns, even if the ballots themselves remain unchanged.

Key Mechanisms

It may sound almost paradoxical, but Zohran Mamdani’s victory in New York was driven primarily by direct, offline work with voters. Thousands of supporters went door to door, speaking to people face to face. Mamdani himself was constantly present in the city — on the streets, at workplaces, in local communities. He was the central figure of his own social media and YouTube channel, not an avatar assembled by a campaign staff.

His opponents, by contrast, relied on the model that has dominated recent years: campaigning on X and other platforms, automation, and the mass production of slop content. The contrast was stark. Mamdani’s campaign demonstrated a crucial point: in the age of AI, physical presence does not disappear — it becomes a scarce resource. People still respond to contact, authenticity, and participation, especially where everything else feels like a simulation.

This inspires cautious optimism. Even in an era of automated politics, people seem instinctively drawn to the real: to live texts, live voices, live encounters. One wants to believe that this logic can extend beyond elections — that we will read authors rather than applications, watch films with actors rather than avatars, and at least occasionally rely on our own critical thinking instead of the answers of AI models governed by their creators’ algorithms and their sponsors’ interests.

But this example does not negate technology. It simply shows how it is actually being used today.

AI has already transformed electoral campaigning — not through any “magic of persuasion,” but through scale and cost. Large language models have taken over the work of copywriters. ChatGPT, Claude, and their peers generate millions of campaign messages tailored to psychographics — age, interests, fears, emotional triggers — and adjust them in real time based on audience response. What once required large teams, A/B testing, and weeks of work is now automated and nearly free.
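To make that workflow concrete, here is a minimal sketch of a segment-tailored generation loop using the OpenAI Python client. The model name, audience segments, and prompt wording are illustrative assumptions, not a reconstruction of any actual campaign’s tooling.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical audience segments; a real campaign would derive these from
# voter-file and engagement data.
segments = {
    "young_urban_renters": "housing costs and public transit",
    "suburban_parents": "school funding and neighborhood safety",
}

def draft_variant(topic: str) -> str:
    """Ask the model for one short message variant on the given topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{
            "role": "user",
            "content": f"Write one upbeat, two-sentence campaign message about {topic}.",
        }],
    )
    return response.choices[0].message.content

variants = {name: draft_variant(topic) for name, topic in segments.items()}
# Measured response (clicks, replies, shares) would then feed back into which
# variants get reused: the "real-time adjustment" described above.
```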

Over the past two years, a key shift has taken place: from blunt spam (40–50 messages a day) to precision “surgery” — 3–5 messages per week, embedded in a specific context. In 2024, 64 percent of voters in battleground regions reported information fatigue, and attention dropped off accordingly. By 2025, campaign teams had moved to frequency capping, contextual targeting without cookies, and micro-segmentation. This approach rarely changes minds, but it can hold wavering voters or boost turnout by 2–5 percent — and precisely for that reason, it barely registers as manipulation.
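The capping logic itself is almost trivial, which is part of why it spread so quickly. Below is a hypothetical sketch, with a made-up weekly cap standing in for whatever real campaign systems actually use.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical frequency-capping sketch: limit each voter to a handful of
# contextual messages per week instead of daily blasts.
WEEKLY_CAP = 4                 # assumed cap, in the 3-5 per week range
WINDOW = timedelta(days=7)

sent_log = defaultdict(list)   # voter_id -> timestamps of messages sent

def can_send(voter_id: str, now: datetime) -> bool:
    """True if the voter has received fewer than WEEKLY_CAP messages
    in the trailing seven-day window."""
    recent = [t for t in sent_log[voter_id] if now - t < WINDOW]
    sent_log[voter_id] = recent            # drop stale entries
    return len(recent) < WEEKLY_CAP

def record_send(voter_id: str, now: datetime) -> None:
    sent_log[voter_id].append(now)

# Usage: contact a wavering voter only if the cap allows it.
now = datetime.now()
if can_send("voter-123", now):
    record_send("voter-123", now)
```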

Alongside this, bots operate not as debaters, but as architects of the environment. Coordinated networks of AI agents amplify hashtags and trends, creating the impression that “everyone thinks this way.” Social media wargame experiments show that around 15 percent of such agents in a discussion is enough to shift the perceived consensus. Radical positions begin to look normal, moderate ones marginal, and the legitimacy of elections is undermined before voting even begins.
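The 15 percent figure is easier to grasp with a toy model. The sketch below assumes a crude measure of perceived consensus (the share of visible posts on each side of a discussion); it is not drawn from any published wargame code.

```python
import random

# Toy model: a discussion with organic users split roughly 50/50, plus a
# coordinated block of agents all pushing position "A". Perceived consensus
# is simply the share of visible posts taking each side.
def perceived_consensus(n_organic: int = 1000, agent_share: float = 0.15,
                        seed: int = 0) -> float:
    random.seed(seed)
    organic = [random.choice(["A", "B"]) for _ in range(n_organic)]
    n_agents = int(n_organic * agent_share / (1 - agent_share))
    agents = ["A"] * n_agents        # every coordinated agent amplifies A
    posts = organic + agents
    return posts.count("A") / len(posts)

print(f"visible support for A: {perceived_consensus():.0%}")
# A near-even organic split now reads as a solid majority for A.
```

With roughly 15 percent coordinated agents, an evenly split organic discussion reads as about a 57 to 43 majority, which is enough to make one side look like the default position.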

Deepfake technologies add a final layer. Their main effect is not direct influence on outcomes, but the destruction of evidentiary value. Audio and video have ceased to function as hard evidence. Even debunked fakes leave behind a residue of doubt, while real kompromat is easily dismissed with the phrase “this is AI too.” The liar’s dividend kicks in — a cultural erasure of the boundary between fact and simulation.

The Tension Map, 2026–2027

The next two years will not be a series of isolated elections, but a single global stress test for democracy under AI pressure. Nearly 50 national campaigns in 2026, followed by key electoral cycles in 2027, will unfold across radically different cultural contexts — yet within an almost identical technological environment. Deepfake attacks, next-generation bot networks, and algorithmic microtargeting are now available to everyone.

The main pressure points on this map are already visible:

United States: Post-Election Retaliation.
The 2026 midterms (November 3) will be the first major testing ground for AI tools operating under a Trump administration. What’s at stake is not just seats in Congress, but the legitimacy of results themselves — fought through meme warfare and automated campaigning amid extreme social polarization.

Europe: A Laboratory of Regimes.
Attention is focused on Hungary, Sweden, and the Baltic states. Here, AI tools operate in different registers: from reinforcing Viktor Orbán’s “illiberal democracy” to stress-testing the resilience of Scandinavian institutions and defending post-Soviet information spaces against aggressive external interference.

Latin America: A Zone of Vulnerability.
Presidential races in Peru, Colombia, Brazil, and Costa Rica represent a high-risk zone. Weak institutions, political violence, and near-total reliance on social media create ideal conditions for dark targeting and destabilizing deepfake campaigns.

Eastern Europe and Asia: AI as a Tool of Control.
In Russia, Armenia, and Japan, technology is used less to persuade undecided voters and more to simulate legitimacy and maintain control. Very different political systems rely on a similar technological stack to filter reality.

Africa and Crisis Zones: Digital Chaos.
In countries like Zambia or Haiti, AI-driven disinformation is being tested under conditions of extreme institutional fragility. In such “failed states,” even a minor information injection amplified by neural networks can trigger disproportionately severe offline consequences.

The key takeaway of this period is not where AI will “steal” a victory. What matters is that the technology is being stress-tested simultaneously across all ecosystems — under freedom and censorship, stability and collapse. The same stack — TikTok, Telegram, X, and generative models — has become a universal weapon, equally effective in the hands of established leaders and anonymous operators of chaos.

Big Tech: Arbiters Without a Mandate

AI does not “interfere” in elections on its own. Behind every filter, label, and redirect stand specific corporations that, over the past few years, have shifted from neutral infrastructure to private arbiters of the electoral process. Increasingly, the rules of political competition are set not by laws or parliaments, but by decisions made in boardrooms in Silicon Valley.

The political landscape has already absorbed this reality. Tech leaders are no longer lobbyists or “invited experts.” They are embedded in power itself. They move freely through the White House, sit at the same table with Donald Trump, and discuss not only markets but the boundaries of what is permissible. Elon Musk flies with the president on the same jet for Thanksgiving, with a fully set table and no protocol distance between them. They are received by King Charles III and Queen Camilla as strategic partners, not merely as wealthy businessmen. This is the new hierarchy: platform owners have become core infrastructure actors, shaping information flows, legitimacy, and the very form of public reality.

The poles of this power are clearly defined. Meta, under Mark Zuckerberg and Nick Clegg, has built a model of hard control: AI-content labeling under the C2PA standard, bans on AI-generated political advertising without disclosure, and the removal of more than 20 million disinformation posts in 2024 alone. This is no longer moderation. It is a private company unilaterally deciding which narratives are acceptable in the global public sphere.

On the opposite flank stands Elon Musk’s X. Here, the owner is not an arbiter but an active participant in the conflict. Musk personally amplifies AI parodies and deepfakes targeting political opponents, including a video featuring Kamala Harris that reached a record 150 million views. The platform has turned from a space for debate into a tool of personalized information warfare.

Between these extremes lies the “soft power” of Google and OpenAI. The teams led by Sundar Pichai and Sam Altman have chosen a strategy of algorithmic avoidance. Gemini and GPT-4o refuse to generate images of candidates or answer sensitive political queries, instead redirecting users to official sources. This is a form of technocratic silence: if an algorithm deems a topic dangerous, the discussion is simply switched off — quietly, without public scandals or overt censorship.

Deeper still, at the infrastructure level, control becomes nearly invisible. Companies like Cloudflare and coalitions such as C2PA embed watermarks and filters directly into the fabric of the internet. Content without the proper markers may not be explicitly blocked, but it gradually disappears from search results and recommendation systems. As the Trump administration in 2026 begins a campaign against what it calls “burdensome” AI regulation, these actors emerge as key lobbyists, investing hundreds of millions of dollars to protect their technological interests.

Formally, Big Tech signed the “AI Elections Accord,” pledging to safeguard democracy. In practice, the level of that protection depends not on voters’ will but on market priorities. The United States and the EU receive sophisticated control systems; the Global South is left with maximum exposure to risk. Big Tech has become a set of political actors without a mandate — unelected and unaccountable to the electorate.

Today, the conditions under which choice is even possible are determined not by law or election commissions, but by software code and algorithmic architecture. And it is at this invisible level that far more is now decided than on paper ballots themselves.
