Evaluating Institutional Trust in AI-Based Public Decision-Making Systems

  • Zarnish Sultan, University of Haripur
Keywords: Artificial Intelligence Governance, Institutional Trust, Algorithmic Transparency, Automated Decision-Making, Digital Public Administration, AI Ethics

Abstract

Artificial intelligence (AI) technologies are increasingly integrated into public-sector decision-making. Governments and public institutions are adopting AI-based tools to improve efficiency, reduce administrative costs, and support data-driven policy development. Applications such as predictive analytics, automated eligibility assessment, fraud detection, and resource-allocation algorithms are now used in sectors including healthcare, public safety, social welfare administration, and urban governance. While these technologies promise improved operational efficiency and evidence-based policymaking, their adoption raises important questions about public trust and institutional legitimacy. Institutional trust plays a critical role in determining whether citizens accept automated decision-making systems used by governments. Trust in AI-based governance systems depends on factors such as the transparency of algorithms, the perceived fairness of automated decisions, accountability mechanisms, and the reliability of the underlying technological infrastructure. When citizens perceive AI systems as opaque or biased, trust in public institutions may decline, undermining the legitimacy of digital governance initiatives. This study evaluates the level of institutional trust in AI-based public decision-making systems. It develops a conceptual framework that examines the relationships between algorithmic transparency, perceived fairness, technological reliability, and institutional trust in AI governance. Data were collected from citizens, public administrators, and information technology professionals involved in digital governance initiatives. Partial least squares structural equation modeling (PLS-SEM), performed in SmartPLS, was employed to analyze the relationships between the constructs.
The results indicate that algorithmic transparency and perceived fairness significantly influence institutional trust in artificial intelligence-based public decision-making systems. Technological reliability also plays an important role in strengthening citizens' confidence in automated governance. The findings highlight the importance of transparent governance frameworks, ethical artificial intelligence design, and robust accountability mechanisms for maintaining public trust in digital governance. This study contributes to research on artificial intelligence governance and public administration by providing empirical insights into the factors that shape institutional trust in automated decision-making systems.
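The structural relationships described above can be illustrated with a toy calculation. The sketch below is not the study's SmartPLS analysis: it simulates hypothetical Likert-scale survey responses, averages items into construct scores, and estimates the path trust ~ transparency + fairness + reliability by ordinary least squares, a deliberate simplification of PLS-SEM's iterative indicator weighting. All item counts, coefficients, and data are invented for illustration.

```python
# Toy sketch (not the authors' analysis): construct scores are the mean of
# hypothetical 1-5 Likert items, and the structural path
#     trust ~ transparency + fairness + reliability
# is estimated with ordinary least squares via the normal equations.
import random

random.seed(42)

def construct_score(items):
    """Average several 1-5 Likert items into one construct score."""
    return sum(items) / len(items)

# Simulate respondents: three predictor constructs driving institutional
# trust, with invented "true" path weights 0.4, 0.3, and 0.2.
n = 200
rows = []
for _ in range(n):
    transparency = construct_score([random.uniform(1, 5) for _ in range(3)])
    fairness = construct_score([random.uniform(1, 5) for _ in range(3)])
    reliability = construct_score([random.uniform(1, 5) for _ in range(3)])
    trust = (0.5 + 0.4 * transparency + 0.3 * fairness
             + 0.2 * reliability + random.gauss(0, 0.3))
    rows.append((transparency, fairness, reliability, trust))

def ols(rows):
    """Solve (X'X) b = X'y by Gaussian elimination with partial pivoting."""
    X = [[1.0, t, f, r] for t, f, r, _ in rows]
    y = [row[3] for row in rows]
    k = len(X[0])
    xtx = [[sum(xi[a] * xi[b] for xi in X) for b in range(k)] for a in range(k)]
    xty = [sum(xi[a] * yi for xi, yi in zip(X, y)) for a in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda i: abs(xtx[i][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for i in range(col + 1, k):
            factor = xtx[i][col] / xtx[col][col]
            for c in range(col, k):
                xtx[i][c] -= factor * xtx[col][c]
            xty[i] -= factor * xty[col]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (xty[i] - sum(xtx[i][c] * b[c] for c in range(i + 1, k))) / xtx[i][i]
    return b

intercept, b_transp, b_fair, b_rel = ols(rows)
print(f"transparency path: {b_transp:.2f}, fairness: {b_fair:.2f}, "
      f"reliability: {b_rel:.2f}")
```

With 200 simulated respondents, the recovered path coefficients land close to the invented weights, mirroring how the study's structural model quantifies each construct's contribution to institutional trust.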

Published
2026-03-22