What's new?

New publication: Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change

I’m happy to share a new article published in Telematics and Informatics, co-authored with Adrian Rauchfleisch, Joshua Philip Suarez, and Nikka Marie Sales:

“Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change.”

The public launch of OpenAI’s ChatGPT in late 2022 was more than a technological milestone. It became a global focusing event, a moment when people around the world articulated hopes, fears, and expectations about Artificial Intelligence in ways that were public, visible, and highly consequential.

Our study asks: What can this digital conversation tell us about how societies understand technological change?

To answer this, we analyzed 3.8 million tweets from 1.6 million users across 117 countries during the first months of ChatGPT’s public availability.

We examined:

  • Who participated in the debate
  • When different groups joined
  • How users evaluated ChatGPT

Rather than assuming “the public” reacts uniformly, our aim was to map patterns of participation, evaluation, and change over time.
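As a rough illustration of this kind of analysis, the sketch below shows how entry timing and average stance could be compared across occupational skill groups. It is a minimal, hypothetical example rather than the paper's actual pipeline; the file name, column names, and stance coding are assumptions.

```python
# Minimal, hypothetical sketch -- not the paper's actual pipeline.
# Assumes a tweet-level table with columns: user_id, created_at,
# skill_type (e.g., 'programming', 'math', 'writing'), and
# stance (-1 = negative, 0 = neutral, 1 = positive).
import pandas as pd

tweets = pd.read_csv("chatgpt_tweets.csv", parse_dates=["created_at"])  # hypothetical file

LAUNCH = pd.Timestamp("2022-11-30")  # ChatGPT's public launch

# Per user: first entry into the conversation and average stance
users = tweets.groupby("user_id").agg(
    first_tweet=("created_at", "min"),
    stance=("stance", "mean"),
    skill_type=("skill_type", "first"),
)
users["days_to_entry"] = (users["first_tweet"] - LAUNCH).dt.days

# Compare skill groups on how early they joined and how positive they were
summary = users.groupby("skill_type").agg(
    median_days_to_entry=("days_to_entry", "median"),
    mean_stance=("stance", "mean"),
    n_users=("stance", "size"),
)
print(summary.sort_values("median_days_to_entry"))
```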

Key Findings

1. Professional Background Shaped Early Participation

Users with technical skills (e.g., coding, math) were among the earliest to engage with ChatGPT. Their reactions were, on average, more positive.

By contrast, users whose professional skills are writing- or creativity-centered joined the conversation later and were more negative in tone.

Early optimism was not evenly distributed: it aligned with skillsets that could capitalize on the technology.

2. Cultural Context Influenced Reactions

Patterns of engagement varied across countries and cultural environments.

  • Users from more individualistic cultures engaged earlier but showed greater skepticism and criticism.
  • Users from cultures with high uncertainty avoidance were less likely to express positive views about ChatGPT overall.

Public discourse about AI reflects not only technological affordances, but also cultural values, norms, and expectations.

3. Aggregated Trends Hide Important Dynamics

At a global level, conversation about ChatGPT became increasingly critical over time.

However, this wasn’t because early adopters changed their minds. Instead, later entrants were systematically more skeptical than early ones.

What looks like “opinion change” is often “composition change”: different groups entering the debate at different times.

This challenges simplistic narratives that public sentiment inevitably “sours” as novelty fades. The story is more about who speaks when, not merely about how attitudes shift.
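The distinction is easy to see in a toy simulation (hypothetical numbers, not our data): if early adopters' stance stays constant and later entrants are simply more skeptical, the aggregate weekly average still drifts downward.

```python
# Toy simulation (hypothetical numbers, not the study's data) of how
# composition change can masquerade as opinion change.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
frames = []
for week in range(12):
    n_early = 1000           # early adopters tweet at a steady rate
    n_late = 200 * week      # more skeptical late entrants keep joining
    frames.append(pd.DataFrame({
        "week": week,
        "cohort": ["early"] * n_early + ["late"] * n_late,
        "stance": np.concatenate([
            rng.normal(0.4, 1.0, n_early),    # early: stable, mildly positive
            rng.normal(-0.3, 1.0, n_late),    # late: stable, mildly negative
        ]),
    }))
df = pd.concat(frames, ignore_index=True)

# Aggregate stance declines week by week ...
print(df.groupby("week")["stance"].mean().round(2))
# ... while stance within each cohort stays flat.
print(df.groupby(["week", "cohort"])["stance"].mean().round(2))
```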

Broader Implications

Our findings suggest that debates about emerging technologies are shaped by:

  • Economic interests
  • Professional identities
  • Cultural beliefs
  • Social expectations

Public discourse about AI is not just a reaction to an innovation—it is an early signal of future societal fault lines, including:

  • Who is positioned to benefit
  • Who fears displacement
  • How societies negotiate uncertainty
  • Where resistance may emerge

In this sense, digital public debate acts as a window into the social meaning of technological change, revealing cross-national variation in values, politics, and aspirations.

Why This Matters

AI systems are being adopted at unprecedented speed–often faster than institutions or policymakers can respond. Online conversations are one of the earliest and most accessible indicators of how different groups interpret and evaluate these shifts.

Studying these debates helps us understand:

  • Emerging opportunities and anxieties
  • Sources of resistance or enthusiasm
  • The social distribution of technological “winners” and “losers”
  • How norms and expectations are being renegotiated

In short, public talk about AI is not noise. It carries important information about public concerns and provides early signals for governance.

The article is available open access in Telematics and Informatics: Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change.

Abstract: Public product launches in Artificial Intelligence can serve as focusing events for collective attention, surfacing how societies react to technological change. Social media provide a window into the sensemaking around these events, surfacing hopes and fears and showing who chooses to engage in the discourse and when. We demonstrate that public sensemaking about AI is shaped by economic interests and cultural values of those involved. We analyze 3.8 million tweets posted by 1.6 million users across 117 countries in response to the public launch of ChatGPT in 2022. Our analysis shows how economic self-interest, proxied by occupational skill types in writing, programming, and mathematics, and national cultural orientations, as measured by Hofstede’s individualism, uncertainty avoidance, and power distance dimensions, shape who speaks, when they speak, and their stance toward ChatGPT. Roles requiring more technical skills, such as programming and mathematics, tend to engage earlier and express more positive stances, whereas writing-centric occupations join later with greater skepticism. At the cultural level, individualism predicts both earlier engagement and a more negative stance, and uncertainty avoidance reduces the prevalence of positive stances but does not delay when users first engage with ChatGPT. Aggregate sentiment trends mask the dynamics observed in our study. The shift toward a more critical stance regarding ChatGPT over time stems primarily from the entry of more skeptical voices rather than a change of heart among early adopters. Our findings underscore the importance of both the occupational background and cultural context in understanding public reactions to AI.

Guest Article in the FAZ: Digitalisierung birgt Risiken – Regulierung aber auch

Together with Thomas Hess, I have published a guest article in the Frankfurter Allgemeine Zeitung. In it, we address the often overlooked but highly relevant costs and risks that can arise from regulation itself.

Ja, Digitalisierung birgt Risiken – Regulierung aber auch

  • Andreas Jungherr, Thomas Hess: Ja, Digitalisierung birgt Risiken – Regulierung aber auch. Frankfurter Allgemeine Zeitung. 3.11.2025. S. 18.
New Journal Article: Explaining public preferences for regulating Artificial Intelligence in election campaigns – Evidence from the U.S. and Taiwan

    Artificial Intelligence (AI) is fast becoming a core feature of political campaigns worldwide. As campaign uses of AI grow, the importance of clear campaign rules that enable responsible uses of AI grows with them.

    In a series of international studies, Adrian Rauchfleisch, Alexander Wuttke, and I examine different facets of AI in campaigning—covering public opinion on its uses as well as public preferences for regulating those uses.

    In a new article in Telecommunications Policy, we analyze public opinion on regulating AI in political campaigns in the United States and Taiwan. Our focus is on explaining when and why people demand stronger AI regulation in campaigns.

    People in both countries generally support regulating AI in election campaigns. However, the factors associated with this support differ. In the United States, higher demand for regulation correlates with perceptions that others are more influenced by campaign persuasion than oneself (a classic third-person effect). In Taiwan, by contrast, support is associated with perceptions that campaign persuasion affects both oneself and others (a second-person effect). These cross-national differences suggest that how people locate themselves in relation to others matters—and that this relationship can vary systematically across contexts. This points to promising avenues for future research.
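As an illustration of how such patterns can be probed (variable names and the toy data below are hypothetical, and the published models are richer), a second- versus third-person pattern can show up in whether perceived influence on oneself, on others, or both predicts support for regulation:

```python
# Hedged sketch: toy data and variable names are hypothetical, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
survey = pd.DataFrame({
    "influence_self": rng.integers(1, 8, n),    # perceived campaign influence on oneself (1-7)
    "influence_others": rng.integers(1, 8, n),  # perceived campaign influence on others (1-7)
})
# Toy outcome wired to mimic a third-person pattern: only influence on others matters.
survey["support_regulation"] = 0.5 * survey["influence_others"] + rng.normal(0, 1, n)

fit = smf.ols("support_regulation ~ influence_self + influence_others", data=survey).fit()
print(fit.params.round(2))
# Third-person pattern: influence_others predicts support while influence_self does not;
# a second-person pattern would show both coefficients as positive.
```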

    Looking more closely, we find that general attitudes toward AI moderate support for regulation. In the United States, people who perceive greater AI risks in other domains are more supportive of regulating AI in elections, whereas those who see broader AI benefits are more opposed. In Taiwan, in contrast, both those who perceive AI risks and those who perceive AI benefits tend to support stronger regulation. In other words, in Taiwan, general perceptions of AI’s benefits do not translate into preferences for laxer rules in campaigning.

    The article underscores the value of international comparisons for understanding public preferences on campaign regulation. We also show that the same variables and cognitions can connect differently to regulatory preferences across contexts—highlighting the need for context-aware, culturally informed comparative work.

    More broadly, this study contributes to the growing attitudinal turn in digital governance scholarship. It reminds us that beyond written rules and governance frameworks, it is crucial to account for what people think about these issues—and which kinds of regulation they support, demand, or oppose.

    Read the full article here.

    For another paper from our study of AI in campaigning, see: Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications.

    Explaining public preferences for regulating Artificial Intelligence in election campaigns – Evidence from the U.S. and Taiwan

    Abstract: The increasing use of Artificial Intelligence (AI) in election campaigns, such as AI-generated political ads, automated messaging, and the widespread availability of AI-assisted photorealistic content, is transforming political communication. This new era of AI-enabled election campaigns presents regulatory challenges for digital media ecosystems, prompting calls for an updated governance framework. While research has begun mapping AI’s role in digital media ecosystems, an often-overlooked factor is public attitudes toward regulating AI in election campaigns. Understanding these attitudes is essential as regulatory debates unfold at national and international levels, where public opinion often constrains the leeway of political decision-makers. We analyze data from two cross-sectional surveys conducted in the United States and Taiwan—two democracies with relatively lenient campaign regulations, which held presidential elections in the same year. We examine the role of general attitudes toward AI, psychological dispositions, and partisan alignments in shaping public support for AI regulation during elections. Our findings underscore the significance of psychological and attitudinal perspectives in predicting regulatory preferences. These insights contribute to broader discussions on AI governance within digital media ecosystems and its implications for democratic processes.

    Adrian Rauchfleisch, Andreas Jungherr, and Alexander Wuttke. 2025. Explaining public preferences for regulating Artificial Intelligence in election campaigns: Evidence from the U.S. and Taiwan. Telecommunications Policy (Online First).

    New Journal Article: Artificial Intelligence in Deliberation – The AI Penalty and the Emergence of a New Deliberative Divide

    Recent advances in artificial intelligence (AI) have renewed hopes that these tools can support digital deliberation. Much of the discussion focuses on evaluating people’s reactions to AI-enabled deliberation once they encounter it. That’s useful—but it misses a prior question:

    Does merely signaling that deliberation will use AI affect people’s willingness to participate? Given widespread skepticism toward AI, announcing its use in deliberative formats may deter participation. Thus, even if AI can technically enhance deliberation, it may also weaken it by keeping people from taking part in the first place.

    Together with Adrian Rauchfleisch, I examine this question in a new article just published in Government Information Quarterly.

    Using a preregistered survey experiment with a representative sample in Germany (n = 1,850), we test how people respond when told that specific deliberative tasks will be performed by AI versus by humans.

    We find a clear AI penalty in deliberation: informing people that AI will be involved lowers both willingness to participate and expectations of deliberative quality.

    This AI penalty is moderated by prior attitudes toward AI—people already skeptical of AI react more negatively—highlighting the emergence of a new deliberative divide rooted in views about AI rather than traditional factors like demographics or education.
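A minimal sketch of how such an AI penalty and its moderation might be estimated follows; the column names and toy data are assumptions, and the published analysis follows the preregistered within-subjects design rather than this simplification.

```python
# Hedged sketch with toy data; column names are assumptions, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
resp = pd.DataFrame({
    "id": np.arange(n),
    "ai_skepticism": rng.normal(0, 1, n),  # standardized prior skepticism toward AI
})

# Long format: each respondent rates a human- and an AI-facilitated format
long = resp.loc[resp.index.repeat(2)].reset_index(drop=True)
long["ai_condition"] = np.tile([0, 1], n)
long["willingness"] = (
    5.0
    - 0.8 * long["ai_condition"]                          # baseline AI penalty
    - 0.5 * long["ai_condition"] * long["ai_skepticism"]  # larger penalty for skeptics
    + rng.normal(0, 1, 2 * n)
)

# OLS with respondent-clustered SEs as a simple stand-in for the within-subjects design
fit = smf.ols("willingness ~ ai_condition * ai_skepticism", data=long).fit(
    cov_type="cluster", cov_kwds={"groups": long["id"]}
)
print(fit.params.round(2))  # negative ai_condition and interaction terms
```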

    As democratic practices move online and increasingly leverage AI, understanding and addressing public perceptions and hesitancy will be critical.
    Read the full article here.

    Abstract: Advances in Artificial Intelligence (AI) promise help for democratic deliberation, such as processing information, moderating discussion, and fact-checking. But public views of AI’s role remain underexplored. Given widespread skepticism, integrating AI into deliberative formats may lower trust and willingness to participate. We report a preregistered within-subjects survey experiment with a representative German sample (n = 1850) testing how information about AI-facilitated deliberation affects willingness to participate and expected quality. Respondents were randomly assigned to descriptions of identical deliberative tasks facilitated by either AI or humans, enabling causal identification of information effects. Results show a clear AI penalty: participants were less willing to engage in AI-facilitated deliberation and anticipated lower deliberative quality than for human-facilitated formats. The penalty shrank among respondents who perceived greater societal benefits of AI or tended to anthropomorphize it, but grew with higher assessments of AI risk. These findings indicate that AI-facilitated deliberation currently faces substantial public skepticism and may create a new “deliberative divide.” Unlike traditional participation gaps linked to education or demographics, this divide reflects attitudes toward AI. Efforts to realize AI’s affordances should directly address these perceptions to offset the penalty and avoid discouraging participation or exacerbating participatory inequalities.

    Andreas Jungherr and Adrian Rauchfleisch. 2025. Artificial Intelligence in deliberation: The AI penalty and the emergence of a new deliberative divide. Government Information Quarterly 42(4): 102079. doi: 10.1016/j.giq.2025.102079

    Games & Politics: Discussion on the Role of Politics in Digital Communication Environments

    For the Games & Politics format of the Konrad-Adenauer-Stiftung and the esports player foundation, I discussed the role of politics in new communication environments with Joachim Ebmeyer, Johannes Steup, Just Johnny, and LvciaLive.


    AI and Power: New Podcast Conversation

    What does Artificial Intelligence have to do with power? In the Servus KI! podcast, Wolfram Burgard, Lennart Peters, and I explore this question.


    New Working Paper: Artificial Intelligence in Government

    In a new working paper with Alexander Wuttke and Adrian Rauchfleisch, we examine how people feel about the use of AI by governments.

    We use Principal-Agent Theory (PAT) to explore a paradox emerging as governments adopt AI technologies: what we call a “failure-by-success.”

    The Efficiency–Control Paradox

    At first glance, adopting AI for governmental tasks seems like an unequivocal win. Our survey experiment shows that when people hear about AI improving efficiency (e.g., speeding up tax assessments, reducing welfare administration costs, or processing bail decisions faster), trust in government increases compared to scenarios describing only human decision-making.

    However, there’s a catch.

    Even when AI is framed in the most beneficial light, citizens report feeling less in control. This is deeply concerning because feeling in control is not a trivial preference; it is a cornerstone of democratic legitimacy. Democracies are built on the idea that citizens can understand, influence, and contest decisions affecting their lives.

    Delegation through the PAT Lens

    We conceptualize AI adoption as a form of task delegation within Principal-Agent Theory. Traditionally, PAT explores how principals (citizens or politicians) delegate tasks to agents (civil servants or organizations). When AI becomes the agent, this delegation faces unique tensions:

  • Assessability: Can AI decisions be understood by humans?
  • Dependency: Is it possible to reverse AI delegation and return to human-led processes?
  • Contestability: Can citizens challenge AI decisions effectively?
    Our Findings

    We tested these tensions in a pre-registered factorial survey experiment with 1,198 participants in the UK. Respondents read vignettes describing AI use (versus human-only decisions) in tax, welfare, and bail contexts. Some vignettes emphasized AI’s efficiency benefits, while others highlighted PAT risks such as opacity, lock-in, or lack of recourse.

    The results were striking:

    [Figure: Density and bar plots for all three dependent variables with treatment effects. The dashed line in the first two panels marks the mean score of the human condition; estimates with 95% CIs are shown for each condition.]

    When treatments emphasized efficiency gains of AI, they boosted trust. But even when only benefits were highlighted, perceived control declined compared to human decision-making.

    When treatments made PAT risks explicit alongside benefits, both trust and perceived control dropped sharply, and support for AI use fell significantly.
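To make the pattern concrete, here is a hedged sketch with toy numbers (not our data) of how the three outcomes could be summarized by condition; the condition labels, scales, and magnitudes are assumptions chosen only to reproduce the qualitative pattern described above.

```python
# Toy illustration only; numbers, labels, and scales are assumptions, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
conditions = ["human_only", "ai_benefits", "ai_benefits_plus_risks"]
n_per = 400
df = pd.DataFrame({
    "condition": np.repeat(conditions, n_per),
    # Toy outcomes wired to the qualitative pattern described above:
    # benefits framing lifts trust but lowers control; adding risks lowers all three.
    "trust":   np.concatenate([rng.normal(m, 1, n_per) for m in (4.0, 4.4, 3.5)]),
    "control": np.concatenate([rng.normal(m, 1, n_per) for m in (4.2, 3.8, 3.2)]),
    "support": np.concatenate([rng.normal(m, 1, n_per) for m in (4.0, 4.1, 3.3)]),
})

def mean_ci(x: pd.Series) -> str:
    """Mean with a normal-approximation 95% confidence interval."""
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    return f"{m:.2f} [{m - 1.96 * se:.2f}, {m + 1.96 * se:.2f}]"

print(df.groupby("condition")[["trust", "control", "support"]].agg(mean_ci))
```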

    Implications for Democratic Governance

    These findings suggest that initial AI efficiency gains may drive wider adoption, but if issues of transparency, reversibility, and accountability are not proactively addressed, governments risk eroding citizens’ sense of control. In the long run, this could undermine the legitimacy of democratic institutions themselves—a true failure-by-success.

    Limitations and Next Steps

    Of course, our study used hypothetical vignettes and a non-probability sample. Real-world implementations and representative studies are needed to confirm these dynamics. Still, our theoretical contribution demonstrates that PAT offers a valuable framework for navigating the structural tensions of AI delegation in public administration.

    We hope our conceptual approach helps policymakers, computer scientists, and social scientists better understand the subtle risks of AI adoption in government – and design systems that enhance rather than erode democratic legitimacy.

    Abstract: The use of Artificial Intelligence (AI) in public administration is expanding rapidly, moving from automating routine tasks to deploying generative and agentic systems that autonomously act on goals. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory (PAT) to conceptualize AI adoption as a special case of delegation, highlighting three core tensions: assessability (can decisions be understood?), dependency (can the delegation be reversed?), and contestability (can decisions be challenged?). These structural challenges may lead to a “failure-by-success” dynamic, where early functional gains obscure long-term risks to democratic legitimacy. To test this framework, we conducted a pre-registered factorial survey experiment across tax, welfare, and law enforcement domains. Our findings show that although efficiency gains initially bolster trust, they simultaneously reduce citizens’ perceived control. When the structural risks come to the foreground, institutional trust and perceived control both drop sharply, suggesting that hidden costs of AI adoption significantly shape public attitudes. The study demonstrates that PAT offers a powerful lens for understanding the institutional and political implications of AI in government, emphasizing the need for policymakers to address delegation risks transparently to maintain public trust.

  • Alexander Wuttke, Adrian Rauchfleisch, and Andreas Jungherr. 2025. Artificial Intelligence in Government: Why People Feel They Lose Control. arXiv. Working Paper. doi: 10.48550/arXiv.2505.01085
New preprint: Political Disinformation: ‘Fake News’, Bots, and Deep Fakes

    I’m happy to share that my preprint Political Disinformation: ‘Fake News’, Bots, and Deep Fakes is now available. It is forthcoming in the Oxford Research Encyclopedia of Communication.

    You can read the preprint here: Political Disinformation: ‘Fake News’, Bots, and Deep Fakes

    The piece is an attempt to take a step back from the often alarmist public discourse on disinformation and look more carefully at what we actually know. Concerns about political disinformation—whether in the form of fake news, bots, deepfakes, or foreign interference—have become increasingly prominent, especially since 2016. But these concerns don’t always align with what empirical research tells us.

    In the article, I try to do three things:

  • Clarify what we’re talking about: I revisit some of the definitional challenges around disinformation and related terms. A central point is that political speech is often contested, interpretive, and not easily reduced to true vs. false categories.
  • Summarize the current evidence: Drawing on a broad range of studies, I look at what we know about the actual reach and effects of disinformation. While the issue is real, the available data suggest that its impact is more limited than many fear.
  • Reflect on how we respond: Efforts to fight disinformation can come with their own risks, especially if they concentrate power over speech or suppress disagreement. I argue for responses that are proportionate, evidence-based, and attentive to democratic openness.
    This piece is meant as a contribution to ongoing conversations across research, policy, and civil society. It doesn’t dismiss the challenges posed by disinformation, but it does suggest we might better address them by being more precise in our terms, more rigorous in our methods, and more careful about the trade-offs involved.

    Abstract: Political disinformation has become a central concern in both public discourse and scholarly inquiry, particularly following the electoral successes of right-wing populist movements in Western democracies since 2016. These events have been widely interpreted through the lens of manipulation, deviance, and disruption; all closely linked to the spread of false information, automated amplification, and foreign interference in digital communication environments. More recently, advances in Artificial Intelligence have added to anxieties about the growing sophistication and scale of disinformation campaigns. Collectively, these concerns are often framed under the concept of digital disinformation and viewed as posing existential threats to democratic systems. This entry provides a comprehensive and critical overview of the disinformation debate. It traces the definitional and conceptual challenges inherent in the term “disinformation,” highlights how digital infrastructures shape both the problem and its perceived urgency, and synthesizes empirical evidence on the actual reach, distribution, and impact of political disinformation. The article distinguishes between individual, collective, and discursive harms, while cautioning against inflated threat narratives that outpace empirical findings. Importantly, the entry addresses the risks of regulatory overreach and centralized control. Efforts to counter disinformation may themselves undermine democratic openness, suppress dissent, and weaken societies’ capacity for collective information processing. In response, the article outlines a research agenda that prioritizes conceptual clarity, empirical rigor, and systemic analysis over alarmism. It advocates for a shift away from overstretching framings of disinformation toward more precise and differentiated understandings of digital political communication and its challenges.

  • Andreas Jungherr (2025). Political Disinformation: “Fake News”, Bots, and Deep Fakes. In Oxford Research Encyclopedia of Communication. Oxford: Oxford University Press. (Forthcoming).
New working paper: What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States

    With the rapid adoption of generative AI tools like ChatGPT and Midjourney, societies are increasingly grappling with questions of control, trust, and responsibility. As AI systems become more integrated into everyday life, public expectations around how these systems are governed – and for what purposes – take on growing importance.

    In our new working paper, we investigate a central question in AI ethics and policy:

    What do people expect from moderation in AI-enabled systems, and how do these expectations vary across countries?

    To answer this, we developed a new framework for conceptualizing different moderation approaches and for explaining why people demand or oppose them. To test this framework, we ran two large-scale surveys – one in the United States (n = 1,756) and one in Germany (n = 1,800). We asked respondents to evaluate four core goals of AI output moderation:

    • Accuracy and Reliability: Does the AI provide factual and trustworthy content?
    • Safety: Does it avoid producing harmful or illegal outputs?
    • Bias Mitigation: Are efforts made to reduce unfairness in its responses?
    • Aspirational Imaginaries: Does the AI help envision a better, more inclusive society?

    Key Findings at a Glance

    Broad support for accuracy and safety

    These two goals enjoy the strongest backing across both countries. The public clearly wants AI systems that are factually reliable and prevent harm.

    Mixed support for fairness and aspirational goals

    Respondents are more cautious about interventions to mitigate bias or to promote idealistic visions of society, especially in Germany.

    National differences in preferences

    U.S. respondents are more open to AI interventions across the board, reflecting greater familiarity with the technology and a more innovation-oriented culture. German respondents, by contrast, are more skeptical and differentiate more between moderation goals.

    What explains these differences?

    We propose a three-level model of “involvement” that shapes attitudes toward AI:

    • Individual-level: AI experience and free speech values
    • Group-level: Gender and political affiliation
    • System-level: The broader national context (U.S. as high-involvement, Germany as low-involvement)

    In the U.S., where AI is more widely used and publicly debated, expectations are more consistent and ideologically structured. In Germany, individual values and experience with AI play a bigger role in shaping attitudes.

    Why This Matters for AI Governance

    Our findings suggest that public support for AI regulation is goal-dependent. People are not simply “for” or “against” moderation — they care about why an intervention is being made.

    This has clear implications for policymakers and developers:

    • Don’t assume consensus. The public holds nuanced, context-sensitive views.
    • Communicate clearly. Explain not just how AI systems work, but what they are designed to achieve.
    • Build trust through transparency. Especially in low-exposure contexts like Germany, trust depends on open communication and user engagement.

    Read the full paper

    What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States
    By Andreas Jungherr & Adrian Rauchfleisch

    Abstract:

    Recent advances in generative Artificial Intelligence have raised public awareness, shaping expectations and concerns about their societal implications. Central to these debates is the question of AI alignment — how well AI systems meet public expectations regarding safety, fairness, and social values. However, little is known about what people expect from AI-enabled systems and how these expectations differ across national contexts. We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany (n = 1800) and the United States (n = 1756). We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries. U.S. respondents report significantly higher AI use and consistently greater support for all alignment features, reflecting broader technological openness and higher societal involvement with AI. In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals — like fairness and aspirational imaginaries — receive more cautious backing, particularly in Germany. We also explore how individual experience with AI, attitudes toward free speech, political ideology, partisan affiliation, and gender shape these preferences. AI use and free speech support explain more variation in Germany. In contrast, U.S. responses show greater attitudinal uniformity, suggesting that higher exposure to AI may consolidate public expectations. These findings contribute to debates on AI governance and cross-national variation in public preferences. More broadly, our study demonstrates the value of empirically grounding AI alignment debates in public attitudes and of explicitly developing normatively grounded expectations into theoretical and policy discussions on the governance of AI-generated content.

    Read the full paper on arXiv