I spoke with Ramona Westhof on Deutschlandfunk Kultur about the role and influence of election campaigns.
New Journal Article: Artificial Intelligence in Election Campaigns
Political campaigns worldwide experiment with AI. But how does the public perceive different electoral uses of AI, and with what consequences?
In a new study in Political Communication, Adrian Rauchfleisch, Alexander Wuttke, and I address these questions.
Our first contribution is conceptual: we identify three distinct types of AI use in election campaigns:
– Campaign operations
– Voter outreach
– Deception
This set accounts for the wide variety of AI use in campaigning and moves the debate beyond its myopic focus on deepfakes.
Empirically, we draw on a representative survey and two preregistered survey experiments (n = 7,635) to map public reactions across these AI use types, including perceptions of norm violations, democratic harm, and governance preferences.
Deceptive AI uses (e.g., deepfakes, impersonation, interactive astroturfing) are consistently seen as violating norms of legitimate political competition, while operational and outreach uses are evaluated more ambivalently.
Importantly, and counterintuitively, normative disapproval does not translate into electoral penalties. Even when people see deceptive AI use as norm-breaking, party favorability remains unchanged among supporters, opponents, and independents.
This shows a misalignment between public norms and electoral incentives, likely driven by motivated reasoning and polarization. The study thus speaks directly to classic debates in political communication about norm enforcement, negativity, and democratic accountability.
Instead, the consequences of deceptive AI use emerge elsewhere. Information about AI deception increases feelings of lost control and support for restrictive AI regulation, including calls for halting AI development more broadly.
This shows how campaign practices can function as exemplars, shaping public attitudes toward AI governance far beyond elections.
For campaigners and regulators, the findings suggest that deceptive AI use may be electorally low-risk but systemically costly, accelerating demand for blunt regulation, while more mundane AI uses face far less public resistance.
As AI becomes more embedded in political campaigns and everyday party operations, understanding and regulating its use is crucial. It is especially important not to paint all AI use with the same broad brush.
Some AI uses could be democratically helpful, like enabling resource-strapped campaigns to compete. But fears of AI-enabled deception can overshadow these benefits, potentially stifling positive uses. Thus, discussing AI in elections requires considering its full spectrum.
The integration of AI in campaign operations and voter outreach is evolving rapidly and will become a core concern within the conduct, regulation, and study of campaigns worldwide. There remains much to do. We will be back.
Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications
Abstract: As political parties around the world experiment with Artificial Intelligence (AI) in election campaigns, concerns about deception and manipulation are rising. This article examines how the public reacts to different uses of AI in elections and the potential consequences for party evaluations and regulatory preferences. Across three preregistered studies with over 7,600 American respondents, we identify three categories of AI use: campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. However, parties engaging in AI-enabled deception face no significant drop in favorability among supporters, opponents, or independents. Instead, deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development. These findings indicate that public disapproval of deceptive uses of AI does not directly translate into incentives for parties to forgo them, at least in the polarized political environment of the US.
New publication: Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change
I’m happy to share a new article published in Telematics and Informatics, co-authored with Adrian Rauchfleisch, Joshua Philip Suarez, and Nikka Marie Sales:
The public launch of OpenAI’s ChatGPT in late 2022 was more than a technological milestone. It became a global focusing event, a moment when people around the world articulated hopes, fears, and expectations about Artificial Intelligence in ways that were public, visible, and highly consequential.
Our study asks: What can this digital conversation tell us about how societies understand technological change?
To answer this, we analyzed 3.8 million tweets from 1.6 million users across 117 countries during the first months of ChatGPT’s public availability.
We examined:
- Who participated in the debate
- When different groups joined
- How users evaluated ChatGPT
Rather than assuming “the public” reacts uniformly, our aim was to map patterns of participation, evaluation, and change over time.
Key Findings
1. Professional Background Shaped Early Participation
Users with technical skills (e.g., coding, math) were among the earliest to engage with ChatGPT. Their reactions were, on average, more positive.
By contrast, users whose professional skills are writing- or creativity-centered joined the conversation later and were more negative in tone.
Early optimism was not evenly distributed: it aligned with skillsets that could capitalize on the technology.
2. Cultural Context Influenced Reactions
Patterns of engagement varied across countries and cultural environments.
- Users from more individualistic cultures engaged earlier but showed greater skepticism and criticism.
- Users from cultures with high uncertainty avoidance were less likely to express positive views about ChatGPT overall.
Public discourse about AI reflects not only technological affordances, but also cultural values, norms, and expectations.
3. Aggregated Trends Hide Important Dynamics
At a global level, conversation about ChatGPT became increasingly critical over time.
However, this wasn’t because early adopters changed their minds. Instead, later entrants were systematically more skeptical than early ones.
What looks like “opinion change” is often “composition change”: different groups entering the debate at different times.
This challenges simplistic narratives that public sentiment inevitably “sours” as novelty fades. The story is more about who speaks when, not merely about how attitudes shift.
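The difference between opinion change and composition change can be made concrete with a toy calculation (the numbers below are invented for illustration, not taken from the study): both cohorts keep their average sentiment constant, yet the aggregate trend turns more negative as the mix of speakers shifts.

```python
# Toy illustration with invented numbers (not the study's data): each
# cohort's average sentiment is fixed, yet the aggregate declines as
# later, more skeptical entrants come to dominate the conversation.

def aggregate_sentiment(n_early, n_late, mean_early=0.6, mean_late=0.2):
    """Weighted average sentiment for a mix of early and late entrants."""
    total = n_early + n_late
    return (n_early * mean_early + n_late * mean_late) / total

# Month 1: early, tech-affine voices dominate.
print(round(aggregate_sentiment(n_early=900, n_late=100), 2))  # 0.56
# Month 3: later, more skeptical entrants dominate; no one changed their mind.
print(round(aggregate_sentiment(n_early=300, n_late=700), 2))  # 0.32
```

The aggregate falls from 0.56 to 0.32 even though neither cohort became more negative.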
Broader Implications
Our findings suggest that debates about emerging technologies are shaped by:
- Economic interests
- Professional identities
- Cultural beliefs
- Social expectations
Public discourse about AI is not just a reaction to an innovation—it is an early signal of future societal fault lines, including:
- Who is positioned to benefit
- Who fears displacement
- How societies negotiate uncertainty
- Where resistance may emerge
In this sense, digital public debate acts as a window into the social meaning of technological change, revealing cross-national variation in values, politics, and aspirations.
Why This Matters
AI systems are being adopted at unprecedented speed, often faster than institutions or policymakers can respond. Online conversations are one of the earliest and most accessible indicators of how different groups interpret and evaluate these shifts.
Studying these debates helps us understand:
- Emerging opportunities and anxieties
- Sources of resistance or enthusiasm
- The social distribution of technological “winners” and “losers”
- How norms and expectations are being renegotiated
In short, public talk about AI is not noise. It provides important information about public concerns that can inform governance.
The article is available open access in Telematics and Informatics: Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change.
Abstract: Public product launches in Artificial Intelligence can serve as focusing events for collective attention, surfacing how societies react to technological change. Social media provide a window into the sensemaking around these events, surfacing hopes and fears and showing who chooses to engage in the discourse and when. We demonstrate that public sensemaking about AI is shaped by economic interests and cultural values of those involved. We analyze 3.8 million tweets posted by 1.6 million users across 117 countries in response to the public launch of ChatGPT in 2022. Our analysis shows how economic self-interest, proxied by occupational skill types in writing, programming, and mathematics, and national cultural orientations, as measured by Hofstede’s individualism, uncertainty avoidance, and power distance dimensions, shape who speaks, when they speak, and their stance toward ChatGPT. Roles requiring more technical skills, such as programming and mathematics, tend to engage earlier and express more positive stances, whereas writing-centric occupations join later with greater skepticism. At the cultural level, individualism predicts both earlier engagement and a more negative stance, and uncertainty avoidance reduces the prevalence of positive stances but does not delay when users first engage with ChatGPT. Aggregate sentiment trends mask the dynamics observed in our study. The shift toward a more critical stance regarding ChatGPT over time stems primarily from the entry of more skeptical voices rather than a change of heart among early adopters. Our findings underscore the importance of both the occupational background and cultural context in understanding public reactions to AI.
- Adrian Rauchfleisch, Joshua Philip Suarez, Nikka Marie Sales, and Andreas Jungherr. 2025. Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change. Telematics and Informatics 103: 102344. doi: 10.1016/j.tele.2025.102344
Guest article in the FAZ: Digitalization carries risks – but so does regulation
Together with Thomas Hess, I published a guest article in the Frankfurter Allgemeine Zeitung. In it, we address the often-overlooked but highly relevant costs and risks that regulation itself can create.
Teaching Winter Term 2025/26
This winter, the Chair for Political Science, esp. Digital Transformation offers a new selection of courses exploring how politics, technology, and society intersect.
On the Bachelor level, Jon Meyer teaches two seminars:
- Digital Governance: Data and Platforms introduces key concepts of governance and digitalisation, with a focus on the rise and regulation of digital platforms. The course combines theoretical perspectives with case studies on how digitalisation shapes politics and society.
- Digital Sovereignty explores how states, organisations, and individuals seek to maintain control over data, infrastructures, and technologies in an interconnected world.
In the BA program Computation, Economics, and Politics (CEP), Florian Herold and I co-teach the lecture Algorithms for Economics and Politics & Economics and Politics of Algorithms, introducing algorithmic concepts and their applications in markets and political processes — from matching and networks to decision-making, mechanism design, and cryptography.
On the Master level, I offer two courses:
- Digital Media in Politics and Society examines how digital technologies, data practices, and algorithms shape political communication, public discourse, and democracy. The course follows a flipped-classroom format combining lecture materials with in-class discussions.
- The Research Project Seminar in Computational Social Science guides students in developing and executing their own empirical projects. Possible research questions can focus on the governance of digital media, framing of AI, or public opinion on AI. Participants design research questions, collect and analyse data, and present their findings.
Finally, the Thesis Seminar supports BA and MA students in preparing and completing their final theses.
We look forward to an engaging and inspiring semester with you!
New Journal Article: Explaining public preferences for regulating Artificial Intelligence in election campaigns – Evidence from the U.S. and Taiwan
Artificial Intelligence (AI) is fast becoming a core feature of political campaigns worldwide. As campaign uses of AI grow, the importance of clear campaign rules that enable responsible uses of AI grows with them.
In a series of international studies, Adrian Rauchfleisch, Alexander Wuttke, and I examine different facets of AI in campaigning—covering public opinion on its uses as well as public preferences for regulating those uses.
In a new article in Telecommunications Policy, we analyze public opinion on regulating AI in political campaigns in the United States and Taiwan. Our focus is on explaining when and why people demand stronger AI regulation in campaigns.
People in both countries generally support regulating AI in election campaigns. However, the factors associated with this support differ. In the United States, higher demand for regulation correlates with perceptions that others are more influenced by campaign persuasion than oneself (a classic third-person effect). In Taiwan, by contrast, support is associated with perceptions that campaign persuasion affects both oneself and others (a second-person effect). These cross-national differences suggest that how people locate themselves in relation to others matters—and that this relationship can vary systematically across contexts. This points to promising avenues for future research.
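As a rough illustration of how such perceptual gaps are commonly scored in survey research (the function, item framing, and 1-7 scale below are hypothetical assumptions, not the article's actual measurement), the third-person effect appears as a positive gap between perceived influence on others and on oneself:

```python
# Hypothetical sketch of scoring a perceptual gap from two survey items
# (item framing and the 1-7 scale are illustrative assumptions, not the
# article's actual measurement).

def third_person_effect(influence_on_others: int, influence_on_self: int) -> int:
    """Gap between perceived campaign influence on others and on oneself.
    Positive values indicate a third-person pattern; both ratings high
    with little gap suggests a second-person pattern."""
    return influence_on_others - influence_on_self

# Classic third-person respondent: others seem far more influenced.
print(third_person_effect(6, 2))  # 4
# Second-person pattern: persuasion is seen to affect self and others alike.
print(third_person_effect(6, 5))  # 1
```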
Looking more closely, we find that general attitudes toward AI moderate support for regulation. In the United States, people who perceive greater AI risks in other domains are more supportive of regulating AI in elections, whereas those who see broader AI benefits are more opposed. In Taiwan, in contrast, both those who perceive AI risks and those who perceive AI benefits tend to support stronger regulation. In other words, in Taiwan, general perceptions of AI’s benefits do not translate into preferences for laxer rules in campaigning.
The article underscores the value of international comparisons for understanding public preferences on campaign regulation. We also show that the same variables and cognitions can connect differently to regulatory preferences across contexts—highlighting the need for context-aware, culturally informed comparative work.
More broadly, this study contributes to the growing attitudinal turn in digital governance scholarship. It reminds us that beyond written rules and governance frameworks, it is crucial to account for what people think about these issues—and which kinds of regulation they support, demand, or oppose.
Read the full article here.
For another paper from our study of AI in campaigning, see: Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications.
Explaining public preferences for regulating Artificial Intelligence in election campaigns – Evidence from the U.S. and Taiwan
Abstract: The increasing use of Artificial Intelligence (AI) in election campaigns, such as AI-generated political ads, automated messaging, and the widespread availability of AI-assisted photorealistic content, is transforming political communication. This new era of AI-enabled election campaigns presents regulatory challenges for digital media ecosystems, prompting calls for an updated governance framework. While research has begun mapping AI’s role in digital media ecosystems, an often-overlooked factor is public attitudes toward regulating AI in election campaigns. Understanding these attitudes is essential as regulatory debates unfold at national and international levels, where public opinion often constrains the leeway of political decision-makers. We analyze data from two cross-sectional surveys conducted in the United States and Taiwan—two democracies with relatively lenient campaign regulations, which held presidential elections in the same year. We examine the role of general attitudes toward AI, psychological dispositions, and partisan alignments in shaping public support for AI regulation during elections. Our findings underscore the significance of psychological and attitudinal perspectives in predicting regulatory preferences. These insights contribute to broader discussions on AI governance within digital media ecosystems and its implications for democratic processes.
Adrian Rauchfleisch, Andreas Jungherr, and Alexander Wuttke. 2025. Explaining public preferences for regulating Artificial Intelligence in election campaigns: Evidence from the U.S. and Taiwan. Telecommunications Policy (Online First). doi: 10.1016/j.giq.2025.102079
New Journal Article: Artificial Intelligence in Deliberation – The AI Penalty and the Emergence of a New Deliberative Divide
Recent advances in artificial intelligence (AI) have renewed hopes that these tools can support digital deliberation. Much of the discussion focuses on evaluating people’s reactions to AI-enabled deliberation once they encounter it. That’s useful—but it misses a prior question:
Does merely signaling that deliberation will use AI affect people’s willingness to participate? Given widespread skepticism toward AI, announcing its use in deliberative formats may deter participation. Thus, even if AI can technically enhance deliberation, it may also weaken it by keeping people from taking part in the first place.
Together with Adrian Rauchfleisch, I examine this question in a new article just published in Government Information Quarterly.
Using a preregistered survey experiment with a representative sample in Germany (n = 1,850), we test how people respond when told that specific deliberative tasks will be performed by AI versus by humans.
We find a clear AI penalty in deliberation: informing people that AI will be involved lowers both willingness to participate and expectations of deliberative quality.
This AI penalty is moderated by prior attitudes toward AI—people already skeptical of AI react more negatively—highlighting the emergence of a new deliberative divide rooted in views about AI rather than traditional factors like demographics or education.
As democratic practices move online and increasingly leverage AI, understanding and addressing public perceptions and hesitancy will be critical.
Read the full article here.
Abstract: Advances in Artificial Intelligence (AI) promise help for democratic deliberation, such as processing information, moderating discussion, and fact-checking. But public views of AI’s role remain underexplored. Given widespread skepticism, integrating AI into deliberative formats may lower trust and willingness to participate. We report a preregistered within-subjects survey experiment with a representative German sample (n = 1850) testing how information about AI-facilitated deliberation affects willingness to participate and expected quality. Respondents were randomly assigned to descriptions of identical deliberative tasks facilitated by either AI or humans, enabling causal identification of information effects. Results show a clear AI penalty: participants were less willing to engage in AI-facilitated deliberation and anticipated lower deliberative quality than for human-facilitated formats. The penalty shrank among respondents who perceived greater societal benefits of AI or tended to anthropomorphize it, but grew with higher assessments of AI risk. These findings indicate that AI-facilitated deliberation currently faces substantial public skepticism and may create a new “deliberative divide.” Unlike traditional participation gaps linked to education or demographics, this divide reflects attitudes toward AI. Efforts to realize AI’s affordances should directly address these perceptions to offset the penalty and avoid discouraging participation or exacerbating participatory inequalities.
Andreas Jungherr and Adrian Rauchfleisch. 2025. Artificial Intelligence in deliberation: The AI penalty and the emergence of a new deliberative divide. Government Information Quarterly 42(4): 102079. doi: 10.1016/j.giq.2025.102079
Games & Politics discussion on the role of politics in digital communication environments
For the Games & Politics format of the Konrad-Adenauer-Stiftung and the esports player foundation, I discussed the role of politics in new communication environments with Joachim Ebmeyer, Johannes Steup, Just Johnny, and LvciaLive.
AI and Power: New Podcast Conversation
What does Artificial Intelligence have to do with power? In the Servus KI! podcast, Wolfram Burgard, Lennart Peters, and I explore this question.
New Working Paper: Artificial Intelligence in Government
In a new working paper with Alexander Wuttke and Adrian Rauchfleisch, we examine how people feel about the use of AI by governments.
We use Principal-Agent Theory (PAT) to explore a paradox emerging as governments adopt AI technologies: what we call a “failure-by-success.”
The Efficiency–Control Paradox
At first glance, adopting AI for governmental tasks seems like an unequivocal win. Our survey experiment shows that when people hear about AI improving efficiency (e.g., speeding up tax assessments, reducing welfare administration costs, or processing bail decisions faster), trust in government increases compared to scenarios describing only human decision-making.
However, there’s a catch.
Even when AI is framed in the most beneficial light, citizens report feeling less in control. This is deeply concerning because feeling in control is not a trivial preference; it is a cornerstone of democratic legitimacy. Democracies are built on the idea that citizens can understand, influence, and contest decisions affecting their lives.
Delegation through the PAT Lens
We conceptualize AI adoption as a form of task delegation within Principal-Agent Theory. Traditionally, PAT explores how principals (citizens or politicians) delegate tasks to agents (civil servants or organizations). When AI becomes the agent, this delegation faces unique tensions:
- Assessability: can decisions be understood?
- Dependency: can the delegation be reversed?
- Contestability: can decisions be challenged?
Our Findings
We tested these tensions in a pre-registered factorial survey experiment with 1,198 participants in the UK. Respondents read vignettes describing AI use (versus human-only decisions) in tax, welfare, and bail contexts. Some vignettes emphasized AI’s efficiency benefits, while others highlighted PAT risks such as opacity, lock-in, or lack of recourse.
The results were striking:

[Figure: Density and bar plots for all three dependent variables with treatment effects. The dashed line in the first two panels indicates the mean score of the human condition; for each condition, estimates with 95% CIs are shown.]
When treatments emphasized efficiency gains of AI, they boosted trust. But even when only benefits were highlighted, perceived control declined compared to human decision-making.
When treatments made PAT risks explicit alongside benefits, both trust and perceived control dropped sharply, and support for AI use fell significantly.
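For readers unfamiliar with how such treatment effects are read, here is a minimal sketch (with invented toy data, not the study's) of estimating a difference in mean trust between an AI-efficiency condition and a human-only baseline, with a normal-approximation 95% confidence interval:

```python
import math
import statistics

# Illustrative sketch with invented toy data (not the study's): a
# treatment effect read as the difference in mean trust between an
# AI-efficiency vignette and a human-only baseline, with a
# normal-approximation 95% confidence interval.

def diff_in_means_ci(treated, control, z=1.96):
    """Difference in means and its 95% CI (normal approximation)."""
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / len(treated)
                   + statistics.variance(control) / len(control))
    return diff, (diff - z * se, diff + z * se)

# Toy trust ratings on a 1-7 scale.
human = [4, 4, 5, 3, 4, 5, 4, 3, 4, 5]
ai_efficiency = [5, 5, 6, 4, 5, 6, 5, 4, 5, 6]

effect, (lo, hi) = diff_in_means_ci(ai_efficiency, human)
print(round(effect, 2), round(lo, 2), round(hi, 2))  # 1.0 0.35 1.65
```

A confidence interval that excludes zero, as in this toy case, is what the figure's 95% CIs convey for each condition.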
Implications for Democratic Governance
These findings suggest that initial AI efficiency gains may drive wider adoption, but if issues of transparency, reversibility, and accountability are not proactively addressed, governments risk eroding citizens’ sense of control. In the long run, this could undermine the legitimacy of democratic institutions themselves—a true failure by success.
Limitations and Next Steps
Of course, our study used hypothetical vignettes and a non-probability sample. Real-world implementations and representative studies are needed to confirm these dynamics. Still, our theoretical contribution demonstrates that PAT offers a valuable framework for navigating the structural tensions of AI delegation in public administration.
We hope our conceptual approach helps policymakers, computer scientists, and social scientists better understand the subtle risks of AI adoption in government – and design systems that enhance rather than erode democratic legitimacy.
Abstract: The use of Artificial Intelligence (AI) in public administration is expanding rapidly, moving from automating routine tasks to deploying generative and agentic systems that autonomously act on goals. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory (PAT) to conceptualize AI adoption as a special case of delegation, highlighting three core tensions: assessability (can decisions be understood?), dependency (can the delegation be reversed?), and contestability (can decisions be challenged?). These structural challenges may lead to a “failure-by-success” dynamic, where early functional gains obscure long-term risks to democratic legitimacy. To test this framework, we conducted a pre-registered factorial survey experiment across tax, welfare, and law enforcement domains. Our findings show that although efficiency gains initially bolster trust, they simultaneously reduce citizens’ perceived control. When the structural risks come to the foreground, institutional trust and perceived control both drop sharply, suggesting that hidden costs of AI adoption significantly shape public attitudes. The study demonstrates that PAT offers a powerful lens for understanding the institutional and political implications of AI in government, emphasizing the need for policymakers to address delegation risks transparently to maintain public trust.