With BR24, I talked about the conversation between Elon Musk and Alice Weidel on X.
How does campaigning with digital media work?
For the Science Media Center Germany, I discussed campaigning with digital media with Judith Möller and Philipp Müller.
On Deutschlandfunk Kultur's program Studio 9, I talked with Axel Rahmlow about the election campaign of Robert Habeck and the Greens.
Jan Höhne was nice enough to invite me to give a talk in his excellent CS3 Meeting lecture series. As elections are very much at the top of everybody’s mind right now, I was happy to talk about my recent work with Adrian Rauchfleisch (NTU) and Alexander Wuttke (LMU) on public opinion on AI uses in elections. Unfortunately, there is no recording of the talk, but in case you missed it, here are the slides. The talk is based on a recent preprint available on arXiv: Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban.
At the invitation of the bidt, I had the opportunity yesterday evening to discuss the role of disinformation in democracy and election campaigns.
The kind folks from the Solaris project invited me to share some thoughts on the role of generative AI in democracy and elections.
The talk builds on recent empirical work with Adrian Rauchfleisch and Alexander Wuttke on how people think about AI uses in elections, my earlier conceptual work on AI and democracy, and work with Ralph Schroeder on the impact of AI on the public arena.
Slides to the presentation are available here.
Political campaigns worldwide are increasingly using Artificial Intelligence (AI) to gain an edge. But how does this affect our democratic processes? In a new study, Adrian Rauchfleisch, Alexander Wuttke, and I show what the American public thinks about AI use in elections.
We propose a framework that categorizes AI’s electoral uses into three main areas: campaign operations, voter outreach, and deception. Each of these has different implications and raises unique concerns.
Public Perception: Through a representative survey and two survey experiments (n=7,635), the study shows that while people generally view AI’s role in elections negatively, they are particularly opposed to deceptive AI practices.
Deceptive uses of AI, such as deepfakes or misinformation, are not only seen as clear norm violations by campaigns but also increase public support for banning AI altogether. This is not true for AI use for campaign operations or voter outreach.
But despite the strong public disapproval of deceptive AI use, our study finds that these tactics don’t significantly harm the political parties that use them. This creates a troubling misalignment between public opinion and political incentives.
We can’t rely on public opinion alone to curb AI misuse in elections. There’s a critical need for public and regulatory oversight to monitor and control AI’s electoral use. At the same time, regulation must be nuanced to account for the diverse applications of AI.
As AI becomes more embedded in political campaigns and everyday party operations, understanding and regulating its use is crucial. It is especially important not to paint all AI use with the same broad brush.
Some AI uses could be democratically helpful, like enabling resource-strapped campaigns to compete. But fears of AI-enabled deception can overshadow these benefits, potentially stifling positive uses. Thus, discussing AI in elections requires considering its full spectrum.
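For readers who prefer a concrete illustration, here is a minimal, purely illustrative sketch of the three-category framework as a simple lookup structure. The category labels mirror the framework described above, but the specific task names are hypothetical placeholders, not items from our survey instrument.

```python
# Illustrative sketch only: a toy mapping of hypothetical campaign AI tasks
# to the three framework categories (campaign operations, voter outreach, deception).
FRAMEWORK = {
    "campaign operations": [
        "drafting internal strategy memos",        # hypothetical example task
        "analyzing polling data",
    ],
    "voter outreach": [
        "personalizing campaign emails",
        "chatbot answering voter questions",
    ],
    "deception": [
        "deepfake video of an opponent",
        "fabricated quotes attributed to a candidate",
    ],
}


def categorize(task: str) -> str | None:
    """Return the framework category for a given (hypothetical) task, if listed."""
    for category, tasks in FRAMEWORK.items():
        if task in tasks:
            return category
    return None


if __name__ == "__main__":
    print(categorize("deepfake video of an opponent"))  # -> "deception"
```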
Abstract: All over the world, political parties, politicians, and campaigns explore how Artificial Intelligence (AI) can help them win elections. However, the effects of these activities are unknown. We propose a framework for assessing AI’s impact on elections by considering its application in various campaigning tasks. The electoral uses of AI vary widely, carrying different levels of concern and need for regulatory oversight. To account for this diversity, we group AI-enabled campaigning uses into three categories — campaign operations, voter outreach, and deception. Using this framework, we provide the first systematic evidence from a preregistered representative survey and two preregistered experiments (n=7,635) on how Americans think about AI in elections and the effects of specific campaigning choices. We provide three significant findings: 1) the public distinguishes between different AI uses in elections, seeing AI uses as predominantly negative but objecting most strongly to deceptive uses; 2) deceptive AI practices can have adverse effects on relevant attitudes and strengthen public support for stopping AI development; 3) although deceptive electoral uses of AI are intensely disliked, they do not result in substantial favorability penalties for the parties involved. There is a misalignment of incentives for deceptive practices and their externalities. We cannot count on public opinion to provide strong enough incentives for parties to forgo tactical advantages from AI-enabled deception. There is a need for regulatory oversight and systematic outside monitoring of electoral uses of AI. Still, regulators should account for the diversity of AI uses and not completely disincentivize their electoral use.
Who do people blame for disinformation and who do they see as obligated to fix associated problems?
In the new article Blame and obligation: The importance of libertarianism and political orientation in the public assessment of disinformation in the United States with Adrian Rauchfleisch, we ask who people in the US blame for disinformation and who they feel is obligated to fix it.
Respondents blame the media (27.65%), politicians (19.37%), and other people (15.73%) for the spread of disinformation. Foreign actors are mentioned by only 6%. Just 8.77% blame social media companies for the spread of disinformation.
When asked who is obligated to fix the problem of disinformation, respondents name people (29.97%), social media companies (22.02%), news media and journalists (18.21%), and government and regulators (15.73%). Politicians are almost never mentioned in this context (3.97%).
Americans think of disinformation predominantly as a problem created by news media and individuals. They feel that people themselves, along with social media companies, news media, and journalists, should fix the corresponding problems. Only a minority sees this as the obligation of the state.
Conservatives and liberals differ in their assignment of blame and obligation. The more conservative a person, the less likely they are to assign primary obligation for halting the spread of false information to social media companies and politicians.
The more conservative a person is, the more likely they are to attribute blame for the spread of false information to the government and regulators as well as the general public.
The more libertarian a person, the greater the likelihood they will name the general public as obligated to stop the spread of disinformation. They are also less likely to name social media companies, news media, or the government as obligated to fix associated problems.
Libertarians assess disinformation in line with their underlying attitudes. They emphasize the responsibility of individuals, leave companies free from associated burdens, and do not want to further empower institutional actors to rule over and interfere in communication spaces.
Political orientation primarily shapes blame attributions, aligning with individuals’ pre-existing biases and the politicized nature of disinformation. Views on obligation reflect deeper ideas about societal governance, such as state interference and individual freedoms.
Accordingly, the discussion of disinformation and appropriate reactions is not purely about the best way to fix information problems. Instead, respective discourses are clearly connected with underlying worldviews.
Achieving agreement on how to deal with disinformation is no longer simply about the issue of how to improve information quality in digital communication environments. Diagnoses and proposals are deeply connected to views on how societies should be run.
These findings reinforce the challenge of assessing the dangers of disinformation correctly and finding appropriate responses due to the inherently politically and value-laden nature of the problem.
Abstract: Disinformation concerns have heightened the importance of regulating content and speech in digital communication environments. Perceived risks have led to widespread public support for stricter control measures, even at the expense of individual speech rights. To better understand these preferences in the US context, we investigate public attitudes regarding blame for and obligation to address digital disinformation by drawing on political ideology, libertarian values, trust in societal actors, and issue salience. A manual content analysis of open-ended survey responses in combination with an issue salience experiment shows that political orientation and trust in actors primarily drive blame attribution, while libertarianism predominantly informs whose obligation it is to stop the spread. Additionally, enhancing the salience of specific aspects of the issue can influence people’s assessments of blame and obligation. Our findings reveal a range of attributions, underlining the need for careful balance in regulatory interventions. Additionally, we expose a gap in previous literature by demonstrating libertarianism’s unique role vis-à-vis political orientation in the context of regulating content and speech in digital communication environments.
What do we need to know before we boldly venture forth and solve the problem of disinformation?
In a new article for the peer-reviewed Journal of Media Law, I pose questions that those asking for greater control of digital communication spaces by corporations, experts, and governments should answer:
First, they need to be clear about what they mean by disinformation, how they go about establishing it, and how they make sure not to become participants in legitimate political competition favoring one side or the other.
Second, public discourse tends to treat disinformation largely as a problem of unruly, unreliable, and unstable publics that lend themselves to manipulation. But what if disinformation is a problem of unruly, unreliable, and unstable political elites?
Third, what are the reach and effects of digital disinformation anyway? If disinformation is such a threat that we should severely restrict important structures of the public arena and political speech, there should be evidence. But current evidence largely points to limited reach and effects.
The risks of regulatory overreach are well documented. For one, overreach can stifle unruly but legitimate political speech. In addition, alarmist warnings that exaggerate the dangers of disinformation can lead to a loss of satisfaction with democracy, increase support for restrictive regulation, and erode confidence in news and information, true or false.
So yes, there are severe risks in overplaying the actual dangers of disinformation. Before risking these adverse effects, we should be very sure about the actual impact of digital disinformation.
To be clear: this is not a claim that disinformation does not exist. There clearly is disinformation in digital communication environments and it is somewhat harder to police there than in traditional ones.
But, as with any form of communication, the effects of disinformation appear to be limited and highly context dependent. Harms are possible but likely embedded in other factors, such as economic or cultural insecurity or relative deprivation.
We must make sure that the empirical basis for the supposed dangers of disinformation is sound before we boldly innovate and increase central corporate and governmental control, which can result in chilling legitimate speech that contests authority and the powerful.
Looking back at the conceptual shambles and empirical elusiveness of prior digital fears like echo chambers, filter bubbles, bots, or psychometric targeting, we need to start demanding better conceptual and empirical foundations for the regulation of digital communication spaces.
Abstract: The threat of digital disinformation is a staple in discourse. News media feature examples of digital disinformation prominently. Politicians accuse opponents regularly of slinging disinformation. Regulators justify initiatives of increasing corporate and state control over digital communication environments with threats of disinformation to democracies. But responsible regulation means establishing a balance between the risks of disinformation and the risks of regulatory interventions. This asks for a solid, empirically grounded understanding of the reach and effects of digital disinformation and underlying mechanisms. This article provides a set of questions that a responsible approach to the regulation of disinformation needs to address.
By invitation of the LSE Data Science Institute I gave a talk on the role of Artificial Intelligence in Democracy in the CIVICA Data Science Seminar Series.
Abstract: The success and widespread deployment of artificial intelligence (AI) have raised awareness of the technology’s economic, social, and political consequences. Each new step in the development and application of AI is accompanied by speculations about a supposedly imminent but largely fictional artificial general intelligence (AGI) with (super-)human capacities, as seen in the unfolding discourse about the capabilities and impact of large language models (LLMs) in the wake of ChatGPT. These far-reaching expectations lead to a discussion on the societal and political impact of AI that is largely dominated by unfocused fears and enthusiasms. In contrast, this talk provides a framework for a more focused and productive analysis and discussion of AI’s likely impact on one specific social field: democracy.
First, it is necessary to be clear about the workings of AI. This means differentiating between what is at present a largely imaginary AGI and narrow artificial intelligence focused on solving specific tasks. This distinction allows for a critical discussion of how AI affects different aspects of democracy, including its effects on the conditions of self-rule and people’s opportunities to exercise it, equality, the institution of elections, and competition between democratic and autocratic systems of government.
The talk will show that the consequences of today’s AI are more specific for democracy than broad speculation about AGI capabilities implies. Focusing on these specific aspects will account for actual threats and opportunities and thus allow for better monitoring of AI’s impact on democracy in an interdisciplinary effort by computer and social scientists.
The talk is based on two recent articles on Artificial Intelligence and Democracy and Artificial Intelligence in the Public Arena: