What's new?


New Working Paper: Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban

Political campaigns worldwide are increasingly using Artificial Intelligence (AI) to gain an edge. But how does this affect our democratic processes? In a new study with Adrian Rauchfleisch and Alexander Wuttke, I show what the American public thinks about AI use in elections.

We propose a framework that categorizes AI’s electoral uses into three main areas: campaign operations, voter outreach, and deception. Each of these has different implications and raises unique concerns.

Public Perception: Through a representative survey and two survey experiments (n=7,635), the study shows that while people generally view AI’s role in elections negatively, they are particularly opposed to deceptive AI practices.

Deceptive uses of AI, such as deepfakes or misinformation, are not only seen as clear norm violations by campaigns but also increase public support for banning AI altogether. The same is not true for AI used in campaign operations or voter outreach.

But despite the strong public disapproval of deceptive AI use, our study finds that these tactics don’t significantly harm the political parties that use them. This creates a troubling misalignment between public opinion and political incentives.

We can’t rely on public opinion alone to curb AI misuse in elections. There’s a critical need for public and regulatory oversight to monitor and control AI’s electoral use. At the same time, regulation must be nuanced to account for the diverse applications of AI.

As AI becomes more embedded in political campaigns and everyday party operations, understanding and regulating its use is crucial. It is especially important not to paint all AI use with the same broad brush.

Some AI uses could be democratically helpful, like enabling resource-strapped campaigns to compete. But fears of AI-enabled deception can overshadow these benefits, potentially stifling positive uses. Thus, discussing AI in elections requires considering its full spectrum.

Abstract: All over the world, political parties, politicians, and campaigns explore how Artificial Intelligence (AI) can help them win elections. However, the effects of these activities are unknown. We propose a framework for assessing AI’s impact on elections by considering its application in various campaigning tasks. The electoral uses of AI vary widely, carrying different levels of concern and need for regulatory oversight. To account for this diversity, we group AI-enabled campaigning uses into three categories — campaign operations, voter outreach, and deception. Using this framework, we provide the first systematic evidence from a preregistered representative survey and two preregistered experiments (n=7,635) on how Americans think about AI in elections and the effects of specific campaigning choices. We provide three significant findings: (1) the public distinguishes between different AI uses in elections, seeing AI uses as predominantly negative but objecting most strongly to deceptive uses; (2) deceptive AI practices can have adverse effects on relevant attitudes and strengthen public support for stopping AI development; (3) although deceptive electoral uses of AI are intensely disliked, they do not result in substantial favorability penalties for the parties involved. There is a misalignment of incentives for deceptive practices and their externalities. We cannot count on public opinion to provide strong enough incentives for parties to forgo tactical advantages from AI-enabled deception. There is a need for regulatory oversight and systematic outside monitoring of electoral uses of AI. Still, regulators should account for the diversity of AI uses and not completely disincentivize their electoral use.

  • Andreas Jungherr, Adrian Rauchfleisch, and Alexander Wuttke. 2024. Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban. arXiv. Working Paper. doi: 10.48550/arXiv.2408.12613
  • New article: Blame and obligation

    Who do people blame for disinformation and who do they see as obligated to fix associated problems?

    In the new article Blame and obligation: The importance of libertarianism and political orientation in the public assessment of disinformation in the United States, Adrian Rauchfleisch and I ask who people in the US blame for disinformation and who they feel is obligated to fix it.

    Respondents blame the media (27.65%), politicians (19.37%), and other people (15.73%) for the spread of disinformation. Foreign actors are mentioned by only 6%, and just 8.77% blame social media companies for the spread of disinformation.

    When asked who is obligated to fix the problem of disinformation, respondents name people (29.97%), social media companies (22.02%), news media and journalists (18.21%), and government and regulators (15.73%). Politicians are almost never mentioned in this context (3.97%).

    Americans think of disinformation predominantly as a problem created by news media and individuals. They feel that individuals, along with social media companies, news media, and journalists, should fix the corresponding problems. Only a minority sees this as the obligation of the state.

    Conservatives and liberals differ in their assignment of blame and obligation. The more conservative a person, the less likely they are to assign primary obligation for halting the spread of false information to social media companies and politicians.

    The more conservative a person is, the more likely they are to attribute blame for the spread of false information to the government and regulators as well as the general public.

    The more libertarian a person, the greater the likelihood they will name the general public as obligated to stop the spread of disinformation. They are also less likely to name social media companies, news media, or the government as obligated to fix associated problems.

    Libertarians assess disinformation in line with their underlying attitudes. They emphasize the responsibility of individuals, leave companies free from associated burdens, and prefer not to further empower institutional actors to rule over and interfere in communication spaces.

    Political orientation primarily shapes blame attributions, aligning with individuals’ pre-existing biases and the politicized nature of disinformation. Views on obligation reflect deeper ideas about societal governance, such as state interference and individual freedoms.

    Accordingly, the discussion of disinformation and appropriate reactions is not purely about the best way to fix information problems. Instead, respective discourses are clearly connected with underlying worldviews.

    Achieving agreement on how to deal with disinformation is no longer simply about the issue of how to improve information quality in digital communication environments. Diagnoses and proposals are deeply connected to views on how societies should be run.

    These findings reinforce the challenge of correctly assessing the dangers of disinformation and finding appropriate responses, given the inherently political and value-laden nature of the problem.

    Abstract: Disinformation concerns have heightened the importance of regulating content and speech in digital communication environments. Perceived risks have led to widespread public support for stricter control measures, even at the expense of individual speech rights. To better understand these preferences in the US context, we investigate public attitudes regarding blame for and obligation to address digital disinformation by drawing on political ideology, libertarian values, trust in societal actors, and issue salience. A manual content analysis of open-ended survey responses in combination with an issue salience experiment shows that political orientation and trust in actors primarily drive blame attribution, while libertarianism predominantly informs whose obligation it is to stop the spread. Additionally, enhancing the salience of specific aspects of the issue can influence people’s assessments of blame and obligation. Our findings reveal a range of attributions, underlining the need for careful balance in regulatory interventions. Additionally, we expose a gap in previous literature by demonstrating libertarianism’s unique role vis-à-vis political orientation in the context of regulating content and speech in digital communication environments.

  • Adrian Rauchfleisch and Andreas Jungherr. 2024. Blame and obligation: The importance of libertarianism and political orientation in the public assessment of disinformation in the United States. Policy & Internet. Online first. doi: 10.1002/poi3.407
  • New article: Foundational questions for the regulation of digital disinformation

    What do we need to know before we boldly venture forth and solve the problem of disinformation?

    In a new article for the peer-reviewed Journal of Media Law, I pose questions that those asking for greater control of digital communication spaces by corporations, experts, and governments should answer:

    First, they need to be clear about what they mean by disinformation, how they go about establishing it, and how they make sure not to become participants in legitimate political competition favoring one side or the other.

    Second, public discourse tends to treat disinformation largely as a problem of unruly, unreliable, and unstable publics that lend themselves to manipulation. But what if disinformation is a problem of unruly, unreliable, and unstable political elites?

    Third, what are the reach and effects of digital disinformation anyway? If disinformation is such a threat that we severely restrict important structures of the public arena and political speech, there should be evidence. But current evidence largely points to limited reach and effects.

    The risks of regulatory overreach are well documented. Overreach can stifle unruly but legitimate political speech. Alarmist warnings that exaggerate the dangers of disinformation can lower satisfaction with democracy, increase support for restrictive regulation, and erode confidence in news and information, true or false.

    So yes, there are severe risks in overplaying the actual dangers of disinformation. Before risking these adverse effects, we should be very sure about the actual impact of digital disinformation.

    To be clear: this is not a claim that disinformation does not exist. There clearly is disinformation in digital communication environments, and it is somewhat harder to police there than in traditional ones.

    But, as with any form of communication, the effects of disinformation appear to be limited and highly context dependent. Harms are possible but likely embedded in other factors, such as economic or cultural insecurity or relative deprivation.

    We must make sure that the empirical basis for the supposed dangers of disinformation is sound before we boldly innovate and increase central corporate and governmental control, which can result in chilling legitimate speech that contests authority and the powerful.

    Looking back at the conceptual shambles and empirical elusiveness of prior digital fears like echo chambers, filter bubbles, bots, or psychometric targeting, we need to start demanding better conceptual and empirical foundations for the regulation of digital communication spaces.

    Abstract: The threat of digital disinformation is a staple in discourse. News media feature examples of digital disinformation prominently. Politicians accuse opponents regularly of slinging disinformation. Regulators justify initiatives of increasing corporate and state control over digital communication environments with threats of disinformation to democracies. But responsible regulation means establishing a balance between the risks of disinformation and the risks of regulatory interventions. This asks for a solid, empirically grounded understanding of the reach and effects of digital disinformation and underlying mechanisms. This article provides a set of questions that a responsible approach to the regulation of disinformation needs to address.

  • Andreas Jungherr. 2024. Foundational questions for the regulation of digital disinformation. The Journal of Media Law. Online first. doi: 10.1080/17577632.2024.2362484
  • AI and Democracy: Talk in the CIVICA Data Science Seminar Series

    By invitation of the LSE Data Science Institute, I gave a talk on the role of Artificial Intelligence in Democracy in the CIVICA Data Science Seminar Series.

    [Video: talk recording available on YouTube]

    Abstract: The success and widespread deployment of artificial intelligence (AI) have raised awareness of the technology’s economic, social, and political consequences. Each new step in the development and application of AI is accompanied by speculations about a supposedly imminent but largely fictional artificial general intelligence (AGI) with (super-)human capacities, as seen in the unfolding discourse about capabilities and impact of large language models (LLMs) in the wake of ChatGPT. These far-reaching expectations lead to a discussion on the societal and political impact of AI that is largely dominated by unfocused fears and enthusiasms. In contrast, this talk provides a framework for a more focused and productive analysis and discussion of AI’s likely impact on one specific social field: democracy.

    First, it is necessary to be clear about the workings of AI. This means differentiating between what is at present a largely imaginary AGI and narrow artificial intelligence focused on solving specific tasks. This distinction allows for a critical discussion of how AI affects different aspects of democracy, including its effects on the conditions of self-rule and people’s opportunities to exercise it, equality, the institution of elections, and competition between democratic and autocratic systems of government.

    The talk will show that the consequences of today’s AI are more specific for democracy than broad speculation about AGI capabilities implies. Focusing on these specific aspects will account for actual threats and opportunities and thus allow for better monitoring of AI’s impact on democracy in an interdisciplinary effort by computer and social scientists.

    The talk is based on two recent articles:

  • Artificial Intelligence and Democracy: A Conceptual Framework
  • Artificial Intelligence and the Public Arena
  • Why exaggerated warnings about disinformation can be dangerous: Interview with @mediasres on Deutschlandfunk

    In an interview with Christoph Sterz for the program @mediasres on Deutschlandfunk, I talked about disinformation in election campaigns and the dangers of exaggerated warnings and fears.

    The study discussed in the interview is:

  • Andreas Jungherr and Adrian Rauchfleisch. 2024. Negative Downstream Effects of Alarmist Disinformation Discourse: Evidence from the United States. Political Behavior. Online first. doi: 10.1007/s11109-024-09911-3
  • Discussion: What role does disinformation play in the European election?

    At the invitation of the Science Media Center Germany, I discussed with Josephine Schmitt and Philipp Müller what role disinformation plays in the European election campaign.

    [Video: discussion recording available on YouTube]

  • Why the fear of disinformation is exaggerated: Interview for Bayern 2

    With Verena Fiebiger, I talked on Bayern 2 about disinformation in election campaigns and why current fears are probably exaggerated.

    [Content available at www.br.de]

  • Assessment of AI in election campaigns

    For a segment on the ZDF heute journal, I gave an assessment of the role of AI in election campaigns.

    [Content available at ngp.zdf.de]

  • New Course: Misinformation, disinformation and other digital fakery

    For this summer semester, I designed a new research seminar on misinformation, disinformation and other digital fakery.

    Course Description: Threats of misinformation, disinformation, and other digital fakery are prominent in academic and public discourse. News media feature examples of digital disinformation prominently. Politicians accuse opponents regularly of slinging disinformation. Regulators justify initiatives of increasing corporate and state control over digital communication environments with threats of disinformation to democracies. But responsible regulation means establishing a balance between the risks of disinformation and the risks of regulatory interventions. This asks for a solid, empirically grounded understanding of the reach and effects of digital disinformation and underlying mechanisms. It also makes it important for the social sciences to reliably conceptualize, measure, and analyze the nature, spread, and impact of disinformation in digital communication environments. This course provides students with a solid understanding of core concepts related to misinformation, disinformation, and other digital fakery and supports them in the independent development of related research projects.

    [Syllabus]