2025/06/27 Andreas Jungherr

New Working Paper: Artificial Intelligence in Government

In a new working paper with Alexander Wuttke and Adrian Rauchfleisch, we examine how people feel about the use of AI by governments.

We use Principal-Agent Theory (PAT) to explore a paradox emerging as governments adopt AI technologies: what we call a “failure-by-success.”

The Efficiency–Control Paradox

At first glance, adopting AI for governmental tasks seems like an unequivocal win. Our survey experiment shows that when people hear about AI improving efficiency (e.g., speeding up tax assessments, reducing welfare administration costs, or processing bail decisions faster), trust in government increases compared to scenarios describing only human decision-making.

However, there’s a catch.

Even when AI is framed in the most beneficial light, citizens report feeling less in control. This is deeply concerning because feeling in control is not a trivial preference; it is a cornerstone of democratic legitimacy. Democracies are built on the idea that citizens can understand, influence, and contest decisions affecting their lives.

Delegation through the PAT Lens

We conceptualize AI adoption as a form of task delegation within Principal-Agent Theory. Traditionally, PAT explores how principals (citizens or politicians) delegate tasks to agents (civil servants or organizations). When AI becomes the agent, this delegation faces unique tensions:

  • Assessability: Can AI decisions be understood by humans?
  • Dependency: Is it possible to reverse AI delegation and return to human-led processes?
  • Contestability: Can citizens challenge AI decisions effectively?
Our Findings

We tested these tensions in a pre-registered factorial survey experiment with 1,198 participants in the UK. Respondents read vignettes describing AI use (versus human-only decisions) in tax, welfare, and bail contexts. Some vignettes emphasized AI’s efficiency benefits, while others highlighted PAT risks such as opacity, lock-in, or lack of recourse.
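To make the design concrete, here is a minimal sketch of how the vignette assignment could be encoded. The condition labels, the assumption of a full domain-by-framing factorial, and all variable names are illustrative; they are not taken from the paper.

```python
# Illustrative encoding of the vignette design (hypothetical labels; the
# paper's exact conditions and wording may differ).
import itertools
import random

domains = ["tax", "welfare", "bail"]                      # policy contexts
framings = ["human_only", "ai_efficiency", "ai_efficiency_plus_risks"]

# All domain-by-framing cells of the (assumed) factorial design.
conditions = list(itertools.product(domains, framings))

def assign_condition(rng: random.Random) -> dict:
    """Randomly assign one respondent to a vignette cell."""
    domain, framing = rng.choice(conditions)
    return {"domain": domain, "framing": framing}

rng = random.Random(42)
sample = [assign_condition(rng) for _ in range(1198)]     # sample size reported above
```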

The results were striking:

[Figure: Density and bar plots for the three dependent variables across treatment conditions. The dashed line in the first two panels marks the mean score of the human condition; estimates with 95% CIs are shown for each condition.]

When treatments emphasized AI’s efficiency gains, they boosted trust. But even when only benefits were highlighted, perceived control declined compared to human decision-making.

When treatments made PAT risks explicit alongside benefits, both trust and perceived control dropped sharply, and support for AI use fell significantly.
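For readers curious how per-condition estimates of this kind are typically produced, the sketch below fits a simple OLS of each outcome on treatment indicators, with the human-only condition as the reference. The file name, column names, and model specification are assumptions for illustration, not the paper’s analysis code.

```python
# Hedged sketch: per-condition treatment effects with 95% CIs via OLS,
# human-only vignettes as the reference category. All names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

results = {}
for outcome in ["trust", "control", "support"]:
    model = smf.ols(
        f"{outcome} ~ C(framing, Treatment(reference='human_only')) + C(domain)",
        data=df,
    ).fit()
    estimates = model.params.filter(like="framing").rename("estimate")
    cis = model.conf_int().filter(like="framing", axis=0)   # 95% CIs by default
    results[outcome] = pd.concat(
        [estimates, cis.rename(columns={0: "ci_low", 1: "ci_high"})], axis=1
    )

# A positive coefficient for the efficiency framing on trust, alongside a
# negative coefficient on perceived control, would mirror the pattern above.
print(results["trust"])
```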

Implications for Democratic Governance

These findings suggest that initial AI efficiency gains may drive wider adoption, but if issues of transparency, reversibility, and accountability are not proactively addressed, governments risk eroding citizens’ sense of control. In the long run, this could undermine the legitimacy of democratic institutions themselves: a true failure-by-success.

Limitations and Next Steps

Of course, our study used hypothetical vignettes and a non-probability sample. Real-world implementations and representative studies are needed to confirm these dynamics. Still, our theoretical contribution demonstrates that PAT offers a valuable framework for navigating the structural tensions of AI delegation in public administration.

We hope our conceptual approach helps policymakers, computer scientists, and social scientists better understand the subtle risks of AI adoption in government – and design systems that enhance rather than erode democratic legitimacy.

Abstract: The use of Artificial Intelligence (AI) in public administration is expanding rapidly, moving from automating routine tasks to deploying generative and agentic systems that autonomously act on goals. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory (PAT) to conceptualize AI adoption as a special case of delegation, highlighting three core tensions: assessability (can decisions be understood?), dependency (can the delegation be reversed?), and contestability (can decisions be challenged?). These structural challenges may lead to a “failure-by-success” dynamic, where early functional gains obscure long-term risks to democratic legitimacy. To test this framework, we conducted a pre-registered factorial survey experiment across tax, welfare, and law enforcement domains. Our findings show that although efficiency gains initially bolster trust, they simultaneously reduce citizens’ perceived control. When the structural risks come to the foreground, institutional trust and perceived control both drop sharply, suggesting that hidden costs of AI adoption significantly shape public attitudes. The study demonstrates that PAT offers a powerful lens for understanding the institutional and political implications of AI in government, emphasizing the need for policymakers to address delegation risks transparently to maintain public trust.

  • Alexander Wuttke, Adrian Rauchfleisch, and Andreas Jungherr. 2025. Artificial Intelligence in Government: Why People Feel They Lose Control. arXiv Working Paper. doi: 10.48550/arXiv.2505.01085