2025/09/29 Andreas Jungherr

New Journal Article: Explaining public preferences for regulating Artificial Intelligence in election campaigns – Evidence from the U.S. and Taiwan

Artificial Intelligence (AI) is fast becoming a core feature of political campaigns worldwide. As campaign uses of AI grow, so does the need for clear rules that enable their responsible use.

In a series of international studies, Adrian Rauchfleisch, Alexander Wuttke, and I examine different facets of AI in campaigning—covering public opinion on its uses as well as public preferences for regulating those uses.

In a new article in Telecommunications Policy, we analyze public opinion on regulating AI in political campaigns in the United States and Taiwan. Our focus is on explaining when and why people demand stronger AI regulation in campaigns.

People in both countries generally support regulating AI in election campaigns. However, the factors associated with this support differ. In the United States, higher demand for regulation correlates with perceptions that others are more influenced by campaign persuasion than oneself (a classic third-person effect). In Taiwan, by contrast, support is associated with perceptions that campaign persuasion affects both oneself and others (a second-person effect). These cross-national differences suggest that how people locate themselves in relation to others matters—and that this relationship can vary systematically across contexts. This points to promising avenues for future research.

Looking more closely, we find that general attitudes toward AI moderate support for regulation. In the United States, people who perceive greater AI risks in other domains are more supportive of regulating AI in elections, whereas those who see broader AI benefits are more opposed. In Taiwan, in contrast, both those who perceive AI risks and those who perceive AI benefits tend to support stronger regulation. In other words, in Taiwan, general perceptions of AI’s benefits do not translate into preferences for laxer rules in campaigning.

The article underscores the value of international comparisons for understanding public preferences on campaign regulation. We also show that the same variables and cognitions can connect differently to regulatory preferences across contexts—highlighting the need for context-aware, culturally informed comparative work.

More broadly, this study contributes to the growing attitudinal turn in digital governance scholarship. It reminds us that beyond written rules and governance frameworks, it is crucial to account for what people think about these issues—and which kinds of regulation they support, demand, or oppose.

Read the full article here.

For another paper from our study of AI in campaigning, see: Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications.

Explaining public preferences for regulating Artificial Intelligence in election campaigns – Evidence from the U.S. and Taiwan

Abstract: The increasing use of Artificial Intelligence (AI) in election campaigns, such as AI-generated political ads, automated messaging, and the widespread availability of AI-assisted photorealistic content, is transforming political communication. This new era of AI-enabled election campaigns presents regulatory challenges for digital media ecosystems, prompting calls for an updated governance framework. While research has begun mapping AI’s role in digital media ecosystems, an often-overlooked factor is public attitudes toward regulating AI in election campaigns. Understanding these attitudes is essential as regulatory debates unfold at national and international levels, where public opinion often constrains the leeway of political decision-makers. We analyze data from two cross-sectional surveys conducted in the United States and Taiwan—two democracies with relatively lenient campaign regulations, which held presidential elections in the same year. We examine the role of general attitudes toward AI, psychological dispositions, and partisan alignments in shaping public support for AI regulation during elections. Our findings underscore the significance of psychological and attitudinal perspectives in predicting regulatory preferences. These insights contribute to broader discussions on AI governance within digital media ecosystems and its implications for democratic processes.

Adrian Rauchfleisch, Andreas Jungherr, and Alexander Wuttke. 2025. Explaining public preferences for regulating Artificial Intelligence in election campaigns: Evidence from the U.S. and Taiwan. Telecommunications Policy (Online First). doi: 10.1016/j.giq.2025.102079
