Generative AI’s Impact on Business and Technology

Author: Denis Avetisyan


A new review synthesizes the rapidly evolving landscape of generative artificial intelligence within the field of Information Systems.

The analysis categorizes examined publications according to their application sectors, revealing the diverse landscape of research within the field.

This paper presents a systematic review of Generative AI research, focusing on socio-technical implications and future research directions in AI governance and ethics.

Despite the transformative potential of Generative AI, its rapid deployment often outpaces our understanding of its systemic consequences. This paper, ‘The Landscape of Generative AI in Information Systems: A Synthesis of Secondary Reviews and Research Agendas’, systematically reviews recent scholarship to reveal a persistent misalignment between rapidly evolving technical capabilities and slower-adapting organizational and societal structures. Our analysis of 28 studies since 2023 highlights critical challenges – from technical unreliability and ethical risks to governance vacuums – that constrain GenAI’s benefits. How can Information Systems research proactively shape the co-evolution of these technologies to foster responsible innovation and achieve genuine socio-technical alignment?


Navigating the Promise and Peril of Generative AI

Generative artificial intelligence is poised to reshape industries through its capacity to automate complex tasks and accelerate innovation. Beyond simply streamlining existing processes, this technology fosters the creation of entirely new products, services, and experiences. In manufacturing, it designs optimized components and predicts equipment failures, while in healthcare, it assists in drug discovery and personalizes treatment plans. The creative industries are also experiencing a shift, with generative AI tools enabling artists, designers, and writers to explore uncharted territories and produce content at an unprecedented scale. This potential for increased efficiency isn’t limited to specific sectors; it extends to data analysis, software development, and even scientific research, promising a broad-based economic impact and a future where human ingenuity is amplified by intelligent machines.

While generative AI presents remarkable opportunities, its successful integration necessitates a rigorous examination of associated risks. Concerns surrounding the reliability of outputs – including the potential for hallucinations or inaccuracies – demand robust validation techniques and ongoing monitoring. Simultaneously, security vulnerabilities, such as prompt injection and data breaches, pose significant threats that require proactive mitigation strategies. Beyond these technical hurdles, ethical considerations – encompassing bias amplification, intellectual property rights, and potential job displacement – are paramount. Consequently, research dedicated to addressing these challenges has experienced substantial growth between 2023 and 2025, evidenced by a surge in publications and dedicated funding initiatives focused on responsible AI development and deployment, highlighting a collective push to harness the benefits of this technology while minimizing its potential harms.

This section outlines the key challenges and limitations encountered during the study.

Understanding GenAI Through a Socio-Technical Lens

A Socio-Technical Systems (STS) approach to Generative AI (GenAI) deployment recognizes that technical components and social elements – including organizational processes, user skills, and societal norms – are interdependent and mutually constitutive. This perspective moves beyond evaluating GenAI solely on its technical performance, instead emphasizing the need to analyze how the technology interacts with, and is shaped by, its operational environment. STS considers that GenAI’s success or failure is determined not simply by algorithmic capabilities, but by the alignment of the technology with human needs, work practices, and broader social values; therefore, comprehensive evaluation requires examination across multiple levels of analysis, from individual users to organizational structures and wider societal impacts.

The Socio-Technical GenAI Outcomes Matrix (SGOM) is a framework designed to synthesize evidence regarding Generative AI (GenAI) implementations by acknowledging outcomes are not solely determined by the technology itself. The SGOM facilitates analysis across multiple levels, including individual user experience, organizational processes, and broader societal impacts. This multi-level approach recognizes that GenAI’s effects are co-produced through interactions between the technical system, the people utilizing it, and the organizational and social context in which it is deployed. By mapping outcomes to these different levels, the SGOM enables a more comprehensive understanding of GenAI’s influence and facilitates identification of factors contributing to both positive and negative results.
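
The paper does not spell out the SGOM’s exact schema here; as a minimal sketch, assuming the matrix simply crosses level of analysis with outcome valence (every class and field name below is hypothetical), evidence extracted from reviewed studies could be encoded and tabulated like this:

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    INDIVIDUAL = "individual"
    ORGANIZATIONAL = "organizational"
    SOCIETAL = "societal"

class Valence(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

@dataclass(frozen=True)
class Outcome:
    """One piece of evidence extracted from a reviewed study."""
    study_id: str
    level: Level
    valence: Valence
    description: str

def tabulate(outcomes: list[Outcome]) -> Counter:
    """Count evidence falling in each (level, valence) cell of the matrix."""
    return Counter((o.level, o.valence) for o in outcomes)

# Hypothetical extraction records, for illustration only.
evidence = [
    Outcome("S01", Level.INDIVIDUAL, Valence.POSITIVE, "faster drafting"),
    Outcome("S01", Level.ORGANIZATIONAL, Valence.NEGATIVE, "workflow disruption"),
    Outcome("S07", Level.SOCIETAL, Valence.NEGATIVE, "misinformation risk"),
]
for (level, valence), n in tabulate(evidence).items():
    print(f"{level.value}/{valence.value}: {n}")
```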

Successful Generative AI (GenAI) implementation requires consideration beyond technical specifications; it is fundamentally shaped by organizational structures and prevailing societal values. A systematic review employing the Socio-Technical GenAI Outcomes Matrix (SGOM) demonstrates that GenAI outcomes are co-produced across multiple levels of analysis, necessitating an integrated approach. Data extraction for this review achieved 74% inter-rater agreement, indicating a reasonable degree of consistency in the identified evidence base and supporting the validity of the framework’s application in assessing these complex interdependencies.

Proactive Risk Management: Building Trust and Reliability

Trust Calibration, in the context of Generative AI, refers to the alignment between a user’s reliance on an AI system’s output and the actual accuracy and reliability of that output. Because GenAI models are probabilistic and can produce incorrect or misleading results – often referred to as “hallucinations” – it is crucial to establish appropriate levels of trust. Over-reliance on inaccurate outputs can lead to flawed decision-making, while under-reliance can negate the potential benefits of the technology. Effective Trust Calibration involves providing users with clear indications of model confidence, limitations, and potential biases, and enabling them to critically evaluate the generated content. This process is not a fixed setting, but rather a dynamic adjustment based on the specific application, the user’s expertise, and ongoing performance monitoring of the AI system.
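
One common way to operationalize this gating of reliance is by the model’s expressed confidence. The sketch below is illustrative only – the function name and thresholds are placeholders that would be tuned against measured accuracy for each application and user group, and revised as the system is monitored:

```python
def route_output(text: str, confidence: float,
                 auto_accept: float = 0.9, needs_review: float = 0.6) -> str:
    """Gate reliance on a generated output by model confidence.

    The thresholds are placeholders; in practice they are calibrated
    against measured accuracy for the task and adjusted over time.
    """
    if confidence >= auto_accept:
        return f"ACCEPT: {text}"   # reliance is warranted
    if confidence >= needs_review:
        return f"REVIEW: {text}"   # flag for human judgment
    return f"WITHHOLD: {text}"     # likely unreliable

print(route_output("Q3 revenue grew 12%", confidence=0.72))  # -> REVIEW: ...
```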

Contestability by Design establishes mechanisms for users to actively examine and, if necessary, correct or bypass AI-generated outputs. This involves providing access to the data and reasoning behind an AI’s conclusions, allowing for audit trails and explanations of decision-making processes. Users should be able to query the AI system to understand why a specific output was generated, and possess the ability to override the AI’s suggestion with their own judgment or alternative data. Implementing these features is critical for accountability, as it allows for the identification and correction of errors or biases within the AI system, and ultimately fosters user trust through increased transparency and control.
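
As a minimal sketch of these mechanisms (the class and method names are invented for illustration, not taken from the paper), an audit trail with an explicit human override path might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    query: str
    ai_output: str
    rationale: str                   # explanation surfaced to the user
    final_output: str = ""
    overridden: bool = False
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class ContestableAssistant:
    """Wraps a model so every output can be inspected and overridden."""

    def __init__(self) -> None:
        self.audit_log: list[Decision] = []

    def record(self, query: str, ai_output: str, rationale: str) -> Decision:
        d = Decision(query, ai_output, rationale, final_output=ai_output)
        self.audit_log.append(d)     # audit trail: every decision is kept
        return d

    def override(self, d: Decision, human_output: str) -> None:
        d.final_output = human_output  # human judgment supersedes the AI
        d.overridden = True
```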

The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) offers a systematic process for managing risks inherent in Generative AI deployments, encompassing the functions Govern, Map, Measure, and Manage. Evaluation of the studies included in this review demonstrated moderate agreement among reviewers (Cohen’s κ = 0.47), indicating variability in assessment and the necessity for standardized, rigorous evaluation methodologies when applying the framework. This suggests that consistent interpretation and application of the RMF require well-defined metrics and clear guidelines to ensure reliable risk assessment and mitigation across different implementations and contexts.
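
For reference, Cohen’s κ corrects raw percent agreement for agreement expected by chance: κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement from the raters’ marginals. The short check below uses made-up counts, not the review’s data:

```python
def cohens_kappa(table: list[list[int]]) -> float:
    """Cohen's kappa for two raters, from a square agreement table.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from the marginals.
    """
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(len(table))) / n
    p_e = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

# Illustrative 2x2 table of include/exclude judgments by two reviewers;
# these counts are invented, not taken from the review.
print(round(cohens_kappa([[20, 10], [8, 12]]), 2))  # -> 0.26
```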

Charting a Course for the Future: Research and Societal Alignment

Future investigations must address the necessary shifts within organizations to fully leverage generative AI’s capabilities; simply implementing the technology isn’t enough. Research indicates that successful integration demands a reconfiguration of workflows, roles, and responsibilities to facilitate effective human-AI collaboration. This involves not merely task allocation, but the development of entirely new organizational structures that prioritize synergy between human ingenuity and artificial intelligence. Studies suggest a move away from traditional hierarchical models towards more fluid, adaptive systems where AI acts as a collaborative partner, augmenting human skills and decision-making. This organizational restructuring will require investment in employee training, the creation of new leadership roles focused on AI integration, and a fundamental rethinking of how work is designed and evaluated to unlock the full potential of GenAI and ensure a productive, ethically sound partnership.

The successful integration of generative artificial intelligence demands proactive attention to societal alignment, a critical process ensuring these powerful technologies adhere to established ethical principles and reflect shared human values. Research indicates that neglecting this alignment carries substantial risk, potentially leading to unintended consequences like bias amplification, misinformation dissemination, and erosion of trust. Therefore, ongoing investigation focuses on developing robust frameworks for ethical AI governance, incorporating diverse stakeholder perspectives, and fostering transparency in algorithmic decision-making. This isn’t merely a matter of preventing harm, but of actively shaping GenAI’s trajectory to promote fairness and inclusivity and to benefit society as a whole, ultimately enabling responsible innovation and maximizing the positive impact of these transformative tools.

The sustained development of GenAI artifacts – the very systems and models driving this technological wave – demands rigorous attention to reliability, safety, and ethical considerations. Current research emphasizes that long-term success isn’t simply about increasing computational power or algorithmic complexity, but about proactively building AI systems that are inherently trustworthy and aligned with human values. This necessitates advancements in areas like adversarial robustness, explainable AI, and formal verification, ensuring that these systems perform as intended, even in unforeseen circumstances. Moreover, artifact design must incorporate mechanisms for detecting and mitigating bias, promoting fairness, and safeguarding against malicious use, ultimately fostering public trust and responsible innovation in the field of generative artificial intelligence.

Recent systematic reviews of research conducted between 2023 and 2025 strongly suggest that the most effective approach to leveraging generative AI lies in the development of hybrid human-AI ensembles. These collaborative systems are not intended to replace human capabilities, but rather to augment them, combining the analytical power and efficiency of artificial intelligence with uniquely human traits such as critical thinking, creativity, and complex problem-solving. The studies analyzed consistently demonstrate that tasks performed by these ensembles achieve higher accuracy, greater innovation, and improved outcomes compared to either humans or AI operating in isolation. This synergistic partnership allows for a division of labor where AI handles data processing and pattern recognition, while humans provide contextual understanding, ethical oversight, and strategic direction – creating a robust and adaptable framework for future innovation. The rapidly evolving nature of the field indicates a continued need for investigation into optimal ensemble designs and implementation strategies.
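
The reviewed studies do not prescribe an implementation, but a minimal sketch of this division of labor might route low-confidence outputs to a human reviewer for contextual and ethical oversight; every name and threshold below is hypothetical:

```python
from typing import Callable

def hybrid_ensemble(task: str,
                    ai_propose: Callable[[str], tuple[str, float]],
                    human_review: Callable[[str, str], str],
                    escalate_below: float = 0.8) -> str:
    """AI handles first-pass generation; a human reviews low-confidence
    outputs. The escalation threshold is an illustrative knob."""
    draft, confidence = ai_propose(task)
    if confidence < escalate_below:
        return human_review(task, draft)  # human adds context and oversight
    return draft                          # AI output used directly

# Stand-in callables for demonstration only.
result = hybrid_ensemble(
    "Summarize this quarter's policy changes",
    ai_propose=lambda t: (f"[draft summary of: {t}]", 0.65),
    human_review=lambda t, d: d + " (verified by analyst)",
)
print(result)
```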

This work identifies key research gaps and proposes directions for future investigation in the field.

The exploration of Generative AI within Information Systems demands a holistic view, recognizing that technology doesn’t exist in isolation. This aligns perfectly with the assertion by Grace Hopper: “It’s easier to ask forgiveness than it is to get permission.” The rapid evolution of these models, as detailed in the systematic review, necessitates a proactive, yet adaptable approach to governance and ethical considerations. Rather than attempting to foresee and regulate every potential outcome – effectively ‘getting permission’ upfront – a more pragmatic path involves deploying cautiously, learning from real-world application, and adjusting course as needed. This iterative process, mirroring the evolution of socio-technical systems, acknowledges the inherent complexity and unpredictability of innovation.

What Lies Ahead?

This synthesis of generative AI within information systems reveals, predictably, that the technology outpaces understanding. The field has largely focused on what these models can do, while neglecting the more pressing question of what they should do, and for whom. Every optimization – increased efficiency, novel outputs – creates new tension points within the socio-technical system. The apparent benefits, so readily proclaimed, invariably redistribute power, introduce biases, and demand constant recalibration of established norms.

The future, therefore, isn’t about building ‘better’ algorithms, but about cultivating a more holistic view. Architecture isn’t a diagram on paper; it’s the system’s behavior over time. Research must shift toward understanding these emergent properties – the unintended consequences, the subtle shifts in organizational structure, the erosion of trust – that inevitably accompany large language model deployment.

The most fruitful avenues lie in exploring the feedback loops between technology and society. The challenge is not simply to ‘govern’ AI, but to design systems that are inherently aligned with human values and organizational goals. This requires moving beyond technical fixes and embracing the messy, iterative process of socio-technical co-evolution. The landscape of generative AI is not a problem to be solved, but a complex adaptive system to be navigated.


Original article: https://arxiv.org/pdf/2603.11842.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
