2026/02/24 Andreas Jungherr

New Journal Article: Artificial Intelligence in Election Campaigns

Political campaigns worldwide are experimenting with AI. But how does the public perceive different electoral uses of AI, and with what consequences?

In a new study in Political Communication, Adrian Rauchfleisch, Alexander Wuttke, and I address these questions.

Our first contribution is conceptual: we identify three distinct types of AI use in election campaigns:

– Campaign operations
– Voter outreach
– Deception

This set accounts for the wide variety of AI use in campaigning and moves the debate beyond its myopic focus on deepfakes.

Empirically, we draw on a representative survey and two preregistered survey experiments (n = 7,635) to map public reactions across these AI use types, including perceptions of norm violations, democratic harm, and governance preferences.

Deceptive AI uses (e.g., deepfakes, impersonation, interactive astroturfing) are consistently seen as violating norms of legitimate political competition, while operational and outreach uses are evaluated more ambivalently.

Importantly, and counterintuitively: Normative disapproval does not translate into electoral penalties. Even when people see deceptive AI use as norm-breaking, party favorability remains unchanged among supporters, opponents, and independents.

This shows a misalignment between public norms and electoral incentives, likely driven by motivated reasoning and polarization. The study thus speaks directly to classic debates in political communication about norm enforcement, negativity, and democratic accountability.

Instead, the consequences of deceptive AI use emerge elsewhere. Information about AI deception increases feelings of lost control and support for restrictive AI regulation, including calls for halting AI development more broadly.

This shows how campaign practices can function as exemplars, shaping public attitudes toward AI governance far beyond elections.

For campaigners and regulators, the findings suggest that deceptive AI use may be electorally low-risk but systemically costly, accelerating demand for blunt regulation, while more mundane AI uses face far less public resistance.

As AI becomes more embedded in political campaigns and everyday party operations, understanding and regulating its use is crucial. It is especially important not to paint all AI use with the same broad brush.

Some AI uses could be democratically helpful, like enabling resource-strapped campaigns to compete. But fears of AI-enabled deception can overshadow these benefits, potentially stifling positive uses. Thus, discussing AI in elections requires considering its full spectrum.

The integration of AI in campaign operations and voter outreach is evolving rapidly and will become a core concern within the conduct, regulation, and study of campaigns worldwide. There remains much to do. We will be back.

Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications

Abstract: As political parties around the world experiment with Artificial Intelligence (AI) in election campaigns, concerns about deception and manipulation are rising. This article examines how the public reacts to different uses of AI in elections and the potential consequences for party evaluations and regulatory preferences. Across three preregistered studies with over 7,600 American respondents, we identify three categories of AI use: campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. However, parties engaging in AI-enabled deception face no significant drop in favorability, whether among supporters, opponents, or independents. Instead, deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development. These findings indicate that public disapproval of deceptive uses of AI does not directly translate into incentives for parties to forgo them, at least in the polarized political environment of the US.

  • Andreas Jungherr, Adrian Rauchfleisch and Alexander Wuttke. 2026. Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications. Political Communication (Online First). doi: 10.1080/10584609.2025.2611913.
