http://thebijri.com/index.php/bijri/issue/feed Bolan International Journal of Research Insights (BIJRI) 2026-03-23T01:35:28+00:00 Dr. Hafiz Muhammad Irshadullah editor@thebijri.com Open Journal Systems <p>The Bolan International Journal of Research Insights (BIJRI) is a prestigious and multidisciplinary scholarly publication that serves as a platform for rigorous academic research and critical analysis within the realm of <strong>Social Sciences, Arts &amp; Humanities, Physical Sciences, Natural Sciences, Engineering &amp; Technology, Health Sciences, and Business &amp; Management</strong>. This journal provides a forum for academics, researchers, policymakers, and practitioners to contribute to and engage in discussions about contemporary societal challenges and the policy responses needed to address them.</p> <p><strong>Aims and Scope:</strong> The BIJRI aims to foster insightful discourse and disseminate high-quality research across various domains within the Social Sciences, Arts &amp; Humanities, Physical Sciences, Natural Sciences, Engineering &amp; Technology, Health Sciences, and Business &amp; Management. The journal's scope encompasses a wide range of disciplines, including but not limited to:</p> <ol> <li class="show"><strong>Social Sciences</strong></li> <li class="show"><strong>Arts &amp; Humanities</strong></li> <li class="show"><strong>Physical Sciences</strong></li> <li class="show"><strong>Natural Sciences</strong></li> <li class="show"><strong>Engineering &amp; Technology</strong></li> <li class="show"><strong>Health Sciences</strong></li> <li class="show"><strong>Business &amp; Management</strong></li> </ol> <p><strong>Editorial Process and Quality Standards:</strong> The BIJRI upholds rigorous academic standards, employing a robust peer-review process to ensure the quality, validity, and originality of published articles. 
Submissions undergo thorough evaluation by experts in the field, who provide constructive feedback to authors to enhance the scholarly contribution of their work.</p> <p><strong>Contributions:</strong>&nbsp;The journal welcomes various types of contributions, including original research articles, review papers, case studies, policy briefs, book reviews, and commentaries. Each submission should present novel insights, theoretical frameworks, empirical evidence, or practical implications relevant to these fields.</p> <p><strong>Audience and Impact:</strong> Targeted at scholars, policymakers, practitioners, and students in the above-mentioned fields, the BIJRI strives to bridge the gap between academic research and policy practice. By providing evidence-based insights and innovative perspectives, the journal aims to influence policy discourse and contribute to evidence-informed decision-making.</p> http://thebijri.com/index.php/bijri/article/view/37 Analyzing the Role of Generative Artificial Intelligence in Shaping Public Opinion and Democratic Processes 2026-03-23T01:35:28+00:00 Zubair Safi safi336565@gmail.com Shah Alam shag224578@gmail.com <p>Generative artificial intelligence has rapidly emerged as a transformative technology capable of producing highly realistic text, images, audio, and video content. Advanced generative models such as large language models and diffusion-based image generators are increasingly integrated into digital platforms including social media, news generation systems, and automated communication tools. While these technologies offer opportunities for innovation, creativity, and information accessibility, they also raise significant concerns regarding their influence on public opinion and democratic processes. In contemporary digital societies, public discourse and political participation are heavily shaped by online information environments. 
Generative artificial intelligence systems can produce persuasive political narratives, automated commentary, and synthetic media that may influence citizens’ perceptions of political issues, candidates, and public policies. The growing ability of generative artificial intelligence to produce large volumes of realistic content introduces potential risks related to misinformation, manipulation of public discourse, and erosion of trust in democratic institutions. Automated generation of political messages, deepfake videos, and targeted information campaigns may alter the dynamics of political communication and electoral processes. Consequently, understanding the impact of generative artificial intelligence on democratic governance has become an important area of interdisciplinary research. This study analyzes the role of generative artificial intelligence in shaping public opinion and democratic processes. The research develops a conceptual model that examines the relationships between generative artificial intelligence content exposure, perceived information credibility, misinformation risk, and democratic engagement. Data were collected from digital media users, political communication experts, and information technology professionals. Structural Equation Modeling using SmartPLS (Partial Least Squares) was applied to analyze the relationships between constructs. The results indicate that exposure to content generated by generative artificial intelligence significantly influences perceptions of information credibility and increases the risk of misinformation within digital communication environments. However, media literacy and regulatory governance mechanisms play important roles in mitigating the negative effects of automated content generation. 
The study contributes to research on digital democracy and artificial intelligence governance by providing empirical insights into how generative artificial intelligence technologies shape political communication and democratic participation in the digital age.</p> 2026-03-22T19:23:23+00:00 Copyright (c) 2026 Bolan International Journal of Research Insights (BIJRI) http://thebijri.com/index.php/bijri/article/view/38 Evaluating Institutional Trust in AI-Based Public Decision-Making Systems 2026-03-23T01:35:18+00:00 Zarnish Sultan s.zarnish22478@gmail.com <p>Artificial intelligence technologies are increasingly being integrated into public-sector decision-making systems. Governments and public institutions are adopting artificial intelligence-based tools to improve efficiency, reduce administrative costs, and enhance data-driven policy development. Applications such as predictive analytics, automated eligibility assessment, fraud detection, and resource allocation algorithms are now used in sectors including healthcare, public safety, social welfare administration, and urban governance. While these technologies promise improved operational efficiency and evidence-based policymaking, their adoption has raised important questions regarding public trust and institutional legitimacy. Institutional trust plays a critical role in determining whether citizens accept automated decision-making systems used by governments. Trust in artificial intelligence-based governance systems depends on factors such as transparency of algorithms, perceived fairness of automated decisions, accountability mechanisms, and the reliability of technological infrastructure. When citizens perceive artificial intelligence systems as opaque or biased, trust in public institutions may decline, which could undermine the legitimacy of digital governance initiatives. This study evaluates the level of institutional trust in artificial intelligence-based public decision-making systems. 
The research develops a conceptual framework that examines the relationships between algorithmic transparency, perceived fairness, technological reliability, and institutional trust in artificial intelligence governance. Data were collected from citizens, public administrators, and information technology professionals involved in digital governance initiatives. Structural Equation Modeling using SmartPLS (Partial Least Squares) was employed to analyze the relationships between constructs. The results indicate that algorithmic transparency and perceived fairness significantly influence institutional trust in artificial intelligence-based public decision-making systems. Technological reliability also plays an important role in strengthening citizens’ confidence in automated governance systems. The findings highlight the importance of transparent governance frameworks, ethical artificial intelligence design, and robust accountability mechanisms for maintaining public trust in digital governance. This study contributes to research on artificial intelligence governance and public administration by providing empirical insights into the factors that shape institutional trust in automated decision-making systems.</p> 2026-03-22T19:25:53+00:00 Copyright (c) 2026 Bolan International Journal of Research Insights (BIJRI) http://thebijri.com/index.php/bijri/article/view/39 Designing Socio-Technological Models for Enhancing Digital Trust and Transparency in E-Government Systems 2026-03-23T01:35:08+00:00 Naheed Qamar naheed.qq2218@gmail.com <p>The rapid digital transformation of public administration has led to the widespread adoption of electronic government systems across the world. E-government systems enable governments to deliver services, share information, and interact with citizens through digital platforms. These systems have significantly improved administrative efficiency, reduced bureaucratic delays, and enhanced access to public services. 
However, the success of e-government initiatives depends heavily on citizens’ trust in digital platforms and the transparency of governmental processes. Without sufficient trust in digital systems and confidence in the integrity of government institutions, citizens may be reluctant to engage with electronic governance services. Digital trust refers to citizens’ confidence that digital systems operate securely, reliably, and ethically while protecting personal data and ensuring fairness in administrative processes. Transparency refers to the openness of government processes, the accessibility of information, and the ability of citizens to understand how decisions are made within digital governance frameworks. Socio-technological models that integrate technological infrastructure with social and institutional factors can play an important role in strengthening trust and transparency in e-government systems. This study aims to design and evaluate a socio-technological model for enhancing digital trust and transparency in e-government systems. The research develops a conceptual framework that examines the relationships between technological reliability, institutional transparency, citizen participation, and digital trust in e-government services. Data were collected from citizens, public administrators, and information technology professionals involved in digital governance initiatives. Structural Equation Modeling using SmartPLS (Partial Least Squares) was employed to analyze the relationships between constructs. The results indicate that technological reliability, institutional transparency, and citizen participation significantly influence digital trust in e-government systems. The findings highlight the importance of integrating technological capabilities with social and governance mechanisms to strengthen public confidence in digital public administration. 
The study contributes to research on digital governance by providing empirical insights into the socio-technological factors that shape trust and transparency in e-government systems.</p> 2026-03-22T19:27:36+00:00 Copyright (c) 2026 Bolan International Journal of Research Insights (BIJRI) http://thebijri.com/index.php/bijri/article/view/40 The Strategic Impact of Generative Artificial Intelligence on Organizational Decision-Making 2026-03-23T01:34:55+00:00 Muhammad Nawaz Khan nawazkhan@awkum.edu.pk <p>Generative Artificial Intelligence (AI) has emerged as a transformative technology with profound implications for organizational decision-making processes. Unlike traditional AI systems, generative AI can autonomously produce novel content, including text, images, models, and scenarios, enabling organizations to analyze complex data, forecast trends, and simulate strategic alternatives. Its application spans business intelligence, strategic planning, risk assessment, marketing, and innovation management. While generative AI offers substantial benefits for improving decision quality, speed, and efficiency, it also raises challenges related to trust, interpretability, and ethical deployment within corporate environments. Organizational decision-making is increasingly data-driven, and generative AI systems facilitate the synthesis of structured and unstructured information from diverse sources. These systems enable managers to explore multiple decision pathways, identify potential risks, and optimize strategic choices. Furthermore, generative AI can augment human creativity by producing innovative solutions and scenario planning alternatives that may not emerge through conventional analytical approaches. However, reliance on automated content generation also introduces risks of cognitive overreliance, algorithmic bias, and strategic misalignment. 
This study examines the strategic impact of generative AI on organizational decision-making by developing a conceptual framework that investigates the relationships between generative AI adoption, decision quality, strategic agility, and organizational performance. Empirical data were collected from senior managers, decision-makers, and AI adoption specialists across multiple industries. Structural Equation Modeling using SmartPLS (Partial Least Squares) was applied to assess the relationships between constructs. The results indicate that generative AI adoption significantly enhances decision quality and strategic agility, which in turn positively influence organizational performance. Moreover, organizational culture and technological readiness moderate the effectiveness of generative AI integration in decision-making processes. This study contributes to the literature on AI-driven strategic management by providing empirical evidence of generative AI’s role in shaping organizational decision-making outcomes and offering actionable insights for successful AI integration in corporate strategy.</p> 2026-03-22T19:30:06+00:00 Copyright (c) 2026 Bolan International Journal of Research Insights (BIJRI) http://thebijri.com/index.php/bijri/article/view/41 The Effect of Digital Surveillance Systems and Their Implications for Civil Liberties in Smart Societies 2026-03-23T01:34:45+00:00 Jamrooz Ayan ayan.jam5589@gmail.com <p>The rapid development of smart technologies and digital infrastructure has transformed the way governments and organizations monitor and manage urban environments. Smart societies rely on advanced technologies such as artificial intelligence, Internet of Things devices, biometric identification systems, and large-scale data analytics to improve public services, enhance security, and optimize urban management. 
Digital surveillance systems have become a central component of smart city governance by enabling real-time monitoring of public spaces, transportation systems, communication networks, and citizen activities. While these technologies provide numerous benefits in terms of public safety, crime prevention, and efficient resource management, they also raise significant concerns regarding privacy protection and civil liberties. Digital surveillance technologies collect and analyze vast amounts of personal and behavioral data, which may create risks related to unauthorized monitoring, data misuse, and excessive governmental control. Civil liberty advocates argue that widespread surveillance may undermine fundamental democratic principles such as freedom of expression, freedom of association, and the right to privacy. As smart societies continue to expand the use of surveillance technologies, it becomes increasingly important to understand how these systems influence public perceptions of privacy, governance transparency, and civil rights protection. This study analyzes the effect of digital surveillance systems on civil liberties within smart societies. The research develops a conceptual model that examines the relationships between digital surveillance intensity, perceived privacy risk, governance transparency, and protection of civil liberties. Data were collected from citizens, technology professionals, and policy analysts involved in smart city initiatives. Structural Equation Modeling using SmartPLS (Partial Least Squares) was employed to test the relationships between constructs. The results indicate that increased digital surveillance intensity significantly raises perceived privacy risks among citizens. However, governance transparency and regulatory oversight mechanisms play a critical role in mitigating negative perceptions and protecting civil liberties. 
The study contributes to digital governance and technology policy research by providing empirical insights into the complex relationship between surveillance technologies and civil liberty protection in smart societies. The findings highlight the need for balanced governance frameworks that ensure security benefits while safeguarding individual rights and democratic values.</p> 2026-03-22T19:32:00+00:00 Copyright (c) 2026 Bolan International Journal of Research Insights (BIJRI)