With the rapid adoption of generative AI tools like ChatGPT and Midjourney, societies are increasingly grappling with questions of control, trust, and responsibility. As AI systems become more integrated into everyday life, public expectations around how these systems are governed – and for what purposes – take on growing importance.
In our new working paper, we investigate a central question in AI ethics and policy:
What do people expect from moderation in AI-enabled systems, and how do these expectations vary across countries?
To answer this, we developed a new framework for conceptualizing different moderation approaches and for explaining why people demand or oppose them. To test this framework, we ran two large-scale surveys – one in the United States (n = 1,756) and one in Germany (n = 1,800). We asked respondents to evaluate four core goals of AI output moderation:
- Accuracy and Reliability: Does the AI provide factual and trustworthy content?
- Safety: Does it avoid producing harmful or illegal outputs?
- Bias Mitigation: Are efforts made to reduce unfairness in its responses?
- Aspirational Imaginaries: Does the AI help envision a better, more inclusive society?
Key Findings at a Glance
Broad support for accuracy and safety
These two goals enjoy the strongest backing in both countries. The public clearly wants AI systems that provide reliable information and avoid causing harm.
Mixed support for fairness and aspirational goals
Support for interventions to mitigate bias or promote idealistic visions of society is more cautious – especially in Germany.
National differences in preferences
U.S. respondents are more open to AI interventions across the board, reflecting greater familiarity with the technology and a more innovation-oriented culture. German respondents, by contrast, are more skeptical and differentiate more sharply between moderation goals.
What explains these differences?
We propose a three-level model of “involvement” that explains how attitudes toward AI are shaped:
- Individual-level: AI experience and free speech values
- Group-level: Gender and political affiliation
- System-level: The broader national context (U.S. as high-involvement, Germany as low-involvement)
In the U.S., where AI is more widely used and publicly debated, expectations are more consistent and ideologically structured. In Germany, individual values and experience with AI play a bigger role in shaping attitudes.
Why This Matters for AI Governance
Our findings suggest that public support for AI regulation is goal-dependent. People are not simply “for” or “against” moderation – they care about why an intervention is being made.
This has clear implications for policymakers and developers:
- Don’t assume consensus. The public holds nuanced, context-sensitive views.
- Communicate clearly. Explain not just how AI systems work, but what they are designed to achieve.
- Build trust through transparency. Especially in low-exposure contexts like Germany, trust depends on open communication and user engagement.
Read the full paper
What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States
By Andreas Jungherr & Adrian Rauchfleisch
Abstract:
Recent advances in generative Artificial Intelligence have raised public awareness, shaping expectations and concerns about the technology's societal implications. Central to these debates is the question of AI alignment – how well AI systems meet public expectations regarding safety, fairness, and social values. However, little is known about what people expect from AI-enabled systems and how these expectations differ across national contexts. We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany (n = 1,800) and the United States (n = 1,756). We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries. U.S. respondents report significantly higher AI use and consistently greater support for all alignment features, reflecting broader technological openness and higher societal involvement with AI. In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals – such as fairness and aspirational imaginaries – receive more cautious backing, particularly in Germany. We also explore how individual experience with AI, attitudes toward free speech, political ideology, partisan affiliation, and gender shape these preferences. AI use and free speech support explain more variation in Germany. In contrast, U.S. responses show greater attitudinal uniformity, suggesting that higher exposure to AI may consolidate public expectations. These findings contribute to debates on AI governance and cross-national variation in public preferences. More broadly, our study demonstrates the value of empirically grounding AI alignment debates in public attitudes and of explicitly integrating normatively grounded expectations into theoretical and policy discussions on the governance of AI-generated content.