Author: Denis Avetisyan
A comprehensive review of over 1,200 studies reveals a complex picture of artificial intelligence, showing potential gains in efficiency alongside emerging risks to well-being and the environment.

This systematic review highlights a critical need for holistic impact assessments that account for systemic effects and the full life cycle of AI technologies, including environmental justice concerns.
Despite the transformative potential of artificial intelligence, a comprehensive understanding of its broader consequences remains elusive. This systematic review, ‘The impacts of artificial intelligence on environmental sustainability and human well-being’, analyzes over 1,200 studies and reveals a fragmented research landscape dominated by narrow assessments of energy efficiency, alongside emerging concerns about AI’s potential to exacerbate social inequalities. While environmental analyses largely portray positive impacts, well-being research presents a more nuanced picture, highlighting risks to employment and social cohesion. Can a truly holistic assessment, one that incorporates systemic effects, life-cycle impacts, and both environmental and social dimensions, unlock AI’s potential for genuine sustainability and human flourishing?
The Inevitable Cascade: Mapping AI’s Systemic Reach
Artificial intelligence is rapidly gaining recognition not merely as a tool for specific tasks, but as a General Purpose Technology – a class of innovations with the potential to reshape the entire economic and social landscape, much like the steam engine or electricity did in prior eras. This categorization signifies a fundamental shift in perspective, suggesting AI’s impacts will extend far beyond automating existing processes and will instead drive the creation of entirely new industries, redefine work itself, and fundamentally alter how society functions. The implications are vast, extending to areas as diverse as healthcare, transportation, and communication, promising unprecedented levels of efficiency and innovation while simultaneously posing complex challenges regarding equity, access, and societal control. This transformative power necessitates a thorough examination of both the opportunities and risks associated with widespread AI adoption, moving beyond incremental improvements to consider systemic and long-term consequences.
The transformative potential of artificial intelligence, increasingly recognized as a General Purpose Technology, is contingent upon a nuanced grasp of its far-reaching consequences. Realizing benefits comparable to those derived from past technological revolutions – like the steam engine or electricity – demands more than simply celebrating innovation; it necessitates a rigorous evaluation of both positive advancements and potential drawbacks. This includes anticipating unintended consequences across diverse domains, from economic disruption and workforce displacement to shifts in social structures and ethical considerations. A thorough understanding of these interconnected impacts is crucial for proactive mitigation of risks and the responsible development of AI systems that truly benefit society, rather than exacerbate existing inequalities or create new challenges.
A critical evaluation of artificial intelligence’s potential hinges on robust impact assessments, yet existing analyses frequently fall short by failing to account for the intricate web of interconnected consequences. To address this deficiency, a comprehensive review synthesized findings from 1,291 studies, offering a broader perspective than typically found in isolated analyses. This extensive survey encompassed 523 investigations into environmental impacts – ranging from resource consumption to pollution – alongside 768 focused on the multifaceted dimensions of human well-being, including economic effects, social equity, and psychological consequences. By aggregating and analyzing this substantial body of work, the review aims to move beyond fragmented understandings and provide a more systemic view of AI’s far-reaching implications, enabling a more informed approach to its development and deployment.

Deconstructing the Black Box: A Rigorous Examination
A Systematic Literature Review was conducted to address identified knowledge gaps, adhering to the PRISMA 2020 guidelines to ensure methodological rigor and transparency throughout the process. This framework mandated pre-defined eligibility criteria, a comprehensive search strategy, and standardized data extraction procedures. The PRISMA 2020 checklist was utilized for reporting, detailing search strategies, study selection, data extraction, and risk of bias assessment. Adherence to this framework facilitated a reproducible and unbiased synthesis of existing research, enhancing the validity and reliability of the review’s findings.
The systematic literature review utilized a multi-database search strategy to maximize the capture of relevant research. Specifically, the search included Scopus, a comprehensive abstract and citation database of peer-reviewed literature; arXiv, a pre-print server covering physics, mathematics, computer science, quantitative biology, quantitative finance, and statistics; and NBER Working Papers, providing access to ongoing economic research. This combined approach ensured coverage of both published and pre-publication research, as well as a broad disciplinary scope, contributing to a more complete and unbiased assessment of the existing literature.
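As a concrete illustration of one practical step in such a multi-database strategy, the sketch below merges records retrieved from Scopus, arXiv, and NBER and removes duplicates by DOI, falling back to a normalized title when no DOI is present. The field names and sample entries are hypothetical, not the review’s actual pipeline.

```python
# Hypothetical deduplication step for a multi-database literature search.
# Record fields and sample data are invented for illustration.

def dedupe(records):
    """Keep one record per DOI (or per normalized title when DOI is absent)."""
    seen, unique = set(), []
    for r in records:
        key = r.get("doi") or r["title"].casefold().strip()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

records = [
    {"title": "AI and energy use", "doi": "10.1/abc", "source": "Scopus"},
    {"title": "AI and Energy Use", "doi": "10.1/abc", "source": "arXiv"},
    {"title": "AI and well-being", "doi": None, "source": "NBER"},
]
print(len(dedupe(records)))  # → 2
```

The DOI-first key reflects a common heuristic: identifiers are reliable where they exist, while title normalization catches pre-prints that lack one.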
The synthesis of findings was achieved through a mixed-methods approach, incorporating both quantitative and qualitative research methodologies alongside conceptual analysis. This involved the critical evaluation and integration of data from a total of 1,291 studies identified through the systematic literature review process. Quantitative analysis facilitated the identification of statistically significant trends and relationships, while qualitative research provided nuanced insights into the contextual factors influencing observed phenomena. Conceptual analysis was used to establish a cohesive framework for interpreting and categorizing the diverse range of findings across the included studies.

The Entropy of Efficiency: Unpacking AI’s Environmental Costs
Artificial intelligence systems contribute significantly to environmental impact, primarily through substantial energy consumption and the resulting carbon dioxide emissions. This energy is utilized throughout the entire AI lifecycle, encompassing data collection, model training, and deployment – with computationally intensive deep learning models being particularly energy-demanding. The scale of this impact is growing with the increasing prevalence of AI applications and the rising complexity of algorithms. Quantifying this footprint requires detailed analysis of energy usage across hardware, data centers, and network infrastructure, as well as consideration of the energy sources powering these systems.
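The kind of accounting this requires can be sketched with simple arithmetic: operational energy is roughly hardware power multiplied by runtime and data-centre overhead (PUE), and emissions follow from the grid’s carbon intensity. All figures below are illustrative assumptions, not measurements from the review.

```python
# Back-of-the-envelope estimate of the operational carbon footprint of
# training an AI model. All input values are illustrative assumptions.

def training_co2_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_co2_per_kwh):
    """Estimate CO2 emissions (kg) for a training run.

    energy (kWh) = GPUs x power x hours x PUE (data-centre overhead)
    emissions    = energy x grid carbon intensity
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 64 GPUs at 0.4 kW for 200 h, PUE 1.2,
# grid intensity 0.4 kg CO2 per kWh.
print(round(training_co2_kg(64, 0.4, 200, 1.2, 0.4), 1))  # → 2457.6
```

Note that this captures only the operational phase; embodied emissions from hardware manufacturing would need a separate life-cycle term.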
Life Cycle Assessment (LCA) and Environmentally-Extended Input-Output Analysis (EEIOA) are essential methodologies for comprehensively evaluating the environmental impacts associated with artificial intelligence systems. LCA meticulously tracks the resource consumption and emissions throughout the entire AI lifecycle – from raw material extraction and manufacturing of hardware, through the operational energy use of training and deployment, to end-of-life treatment and disposal. EEIOA, conversely, adopts an economy-wide perspective, modeling the interdependencies between AI-related sectors and quantifying the indirect environmental burdens embedded within the supply chain. While LCA provides detailed process-level data, EEIOA offers a broader, system-level understanding, and the combined application of both techniques provides a more robust and complete assessment of AI’s environmental footprint than either method alone.
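A minimal sketch of the EEIOA side of this pairing uses the standard Leontief-inverse formulation: total output x = (I - A)^-1 y, and embodied emissions = f · x. The two-sector economy, emission factors, and demand vector below are invented purely for illustration.

```python
import numpy as np

# Minimal environmentally-extended input-output (EEIO) sketch.
# All matrices and coefficients are invented for illustration.
A = np.array([[0.1, 0.2],    # inter-industry requirements matrix
              [0.3, 0.1]])
f = np.array([0.5, 2.0])     # direct emissions per unit output (kg CO2)
y = np.array([100.0, 50.0])  # final demand attributed to AI services

# The Leontief inverse captures every upstream supply-chain round:
# x = (I - A)^-1 y, then embodied emissions = f . x
x = np.linalg.solve(np.eye(2) - A, y)
embodied = f @ x
print(round(embodied, 1))  # → 266.7
```

The instructive point is that embodied emissions (266.7 kg here) exceed the direct emissions of meeting final demand alone (0.5·100 + 2.0·50 = 150 kg), because the inverse pulls in indirect burdens from the whole supply chain.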
Analysis of environmental impact studies reveals a frequent discrepancy between projected efficiency gains from AI implementation and actual reductions in overall environmental burden, a pattern known as the rebound effect. While 83% of reviewed studies reported positive environmental impacts, these were largely confined to the application level, that is, improvements in the efficiency of a single task or process. This indicates that gains made through AI-driven optimization are often offset by increased consumption or activity enabled by those same efficiencies; for example, optimized logistics may reduce fuel consumption per delivery but simultaneously facilitate a larger volume of deliveries, negating the initial environmental benefit. Consequently, lifecycle assessments must account for these systemic effects to accurately measure the net environmental impact of AI technologies.
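The rebound mechanism reduces to simple arithmetic: a per-unit efficiency gain can be more than cancelled by induced growth in activity. The delivery figures in this toy example are invented for illustration.

```python
# Toy illustration of the rebound effect: per-unit efficiency gains
# offset by induced growth in activity. All numbers are invented.

baseline_deliveries = 1000
baseline_kg_per_delivery = 2.0

efficiency_gain = 0.20   # AI routing cuts fuel per delivery by 20%...
demand_rebound = 0.30    # ...but cheaper delivery grows volume by 30%

baseline = baseline_deliveries * baseline_kg_per_delivery
optimized = (baseline_deliveries * (1 + demand_rebound)
             * baseline_kg_per_delivery * (1 - efficiency_gain))

print(baseline, optimized)  # net emissions RISE from 2000.0 to 2080.0 kg
```

With these assumed rates, a 20% per-delivery improvement is wiped out by a 30% volume increase, which is exactly why application-level metrics alone overstate the benefit.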
![Analysis of impact studies reveals that environmental research heavily focuses on energy use and CO₂ emissions, whereas well-being studies exhibit a more diverse consideration of impact categories.](https://arxiv.org/html/2602.24091v1/2602.24091v1/x1.png)
The Human Algorithm: Mapping AI’s Ripple Effects on Well-being
Artificial intelligence is reshaping societal well-being in multifaceted ways, extending beyond simple economic gains to influence the very fabric of human connection and equity. The technology’s impact on employment is a primary concern, with potential for both job displacement and the creation of new roles demanding different skillsets. However, the effects are not solely economic; AI-driven algorithms can also subtly erode social cohesion through filter bubbles and the spread of misinformation, while simultaneously exacerbating existing inequalities. Access to the benefits of AI – and protection from its harms – remains unevenly distributed, potentially creating a widening gap between those who thrive in an AI-powered world and those left behind. Consequently, a holistic assessment of well-being, encompassing economic security, social relationships, and equitable opportunity, is crucial to navigate this technological transition effectively.
A comprehensive understanding of artificial intelligence’s societal impact necessitates investigation at multiple scales. Macro-level analyses reveal broad trends – shifts in national employment rates or alterations in economic productivity – but these figures often mask critical localized effects. Micro-level studies, conversely, pinpoint how AI specifically alters individual experiences, from changes in job roles and skill requirements to impacts on personal data privacy and social interactions within communities. Recognizing the interplay between these scales is paramount; a national increase in automation might coincide with concentrated job displacement in specific regions or among particular demographic groups. Therefore, researchers emphasize that a truly nuanced assessment requires integrating both macro and micro perspectives to accurately capture the diverse and often uneven distribution of AI’s consequences and inform targeted interventions.
A comprehensive understanding of artificial intelligence’s societal impact necessitates examining not just its immediate effects, but also the consequences rippling through entire supply chains and the distribution of both benefits and harms. Current research, however, demonstrates a significant oversight in this area; a review indicates that only 11% of environmental studies adequately address these systemic impacts. This scarcity of holistic analysis creates a critical gap in knowledge, hindering efforts to ensure equitable outcomes as AI technologies become increasingly integrated into daily life. Without a broader perspective encompassing upstream effects and distributional consequences, interventions risk exacerbating existing inequalities and failing to deliver genuinely inclusive progress.

Cultivating the Ecosystem: Charting a Course for Responsible AI
The trajectory of artificial intelligence demands a shift toward extended, longitudinal research initiatives. Current evaluations frequently focus on immediate impacts, offering limited insight into the cascading societal and environmental consequences that unfold over years or even decades. Comprehensive studies are needed to move beyond short-term metrics and capture the full lifecycle effects of AI systems – from resource consumption during training and deployment, to shifts in labor markets, and the potential for unforeseen biases to become entrenched. These investigations should embrace nuanced methodologies, accounting for complex interactions between technological advancements and the social-ecological systems they inhabit, ultimately enabling a more proactive and responsible approach to AI development and integration.
While artificial intelligence tools like Microsoft Copilot demonstrably accelerate research processes – particularly in exhaustive tasks such as literature reviews – their environmental impact warrants careful consideration. Recent analyses reveal that employing these large language models for data extraction generates a significant carbon footprint; a single instance of such a process can produce approximately 39 kilograms of carbon dioxide emissions. This highlights the crucial need for researchers to adopt a critical perspective when integrating AI into their workflows, balancing the benefits of increased efficiency with a responsible awareness of the associated energy consumption and environmental costs. A thoughtful approach requires not only verifying the accuracy of AI-generated information, but also actively seeking strategies to minimize its ecological impact, ensuring that technological advancement aligns with sustainability goals.
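The order of magnitude of such figures can be reproduced with back-of-the-envelope arithmetic. The per-query energy and grid-intensity values below are assumptions chosen for illustration, not measurements from the cited analysis.

```python
# Rough arithmetic behind footprint figures like the ~39 kg CO2 cited
# above for an LLM-assisted extraction run. The per-query energy and
# grid intensity values are assumptions for illustration only.

queries = 1291 * 5        # e.g. ~5 extraction prompts per reviewed study
kwh_per_query = 0.015     # assumed energy per long-context LLM query (kWh)
grid_kg_per_kwh = 0.4     # assumed grid carbon intensity (kg CO2/kWh)

co2_kg = queries * kwh_per_query * grid_kg_per_kwh
print(round(co2_kg, 1))
```

With these assumed values the estimate lands near the ~39 kg figure; the point is not the exact number but that footprint scales linearly with query count, per-query energy, and grid intensity, each of which is a lever for mitigation.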
The development of artificial intelligence demands a shift towards comprehensive, cross-disciplinary investigation to navigate its complex implications. Truly sustainable and equitable AI systems cannot emerge from isolated technological advancements; instead, careful consideration of both foreseen benefits and potential unintended consequences is paramount. This requires collaboration between computer scientists, ethicists, social scientists, environmental experts, and policymakers to proactively identify and mitigate risks. Such a holistic approach extends beyond simply maximizing efficiency or profit, encompassing a thorough evaluation of AI’s broader societal and ecological footprint, ensuring that its integration genuinely benefits all stakeholders and respects planetary boundaries. Only through this interconnected lens can innovation foster a future where AI serves as a catalyst for positive, lasting change.

The pursuit of efficiency, so often touted as AI’s environmental boon, feels increasingly like a rearranging of deck chairs. This systematic review illuminates a troubling pattern: a focus on localized gains while obscuring the systemic impacts rippling through complex supply chains. It echoes a sentiment articulated by John von Neumann: “There is no long-term security in any system that is not fundamentally open.” The article demonstrates that AI’s environmental footprint isn’t merely about operational energy use, but a web of resource extraction, manufacturing, and disposal – a closed system masquerading as progress. The fragmented nature of current research, highlighted by the assessment of over 1,200 studies, only reinforces the need for holistic Life Cycle Assessments, recognizing that every optimization comes at the cost of future flexibility and potential unforeseen consequences.
The Looming Silhouette
The aggregation of over twelve hundred studies reveals not a landscape of understanding, but a fractured terrain. Efficiency gains, diligently cataloged in environmental research, appear as local optimizations – bright spots on a darkening horizon. Each calculation of reduced energy consumption masks a deeper truth: the system, by its very nature, demands expansion. The promise of ‘sustainable AI’ resembles a gardener pruning a vine while simultaneously fertilizing the root.
The more pressing concerns regarding well-being, while acknowledged, remain largely reactive. The studies highlight risks, but rarely trace the full arc of consequence. Focusing on algorithmic bias is akin to treating a fever without addressing the infection. The true vulnerability lies not in the tool itself, but in the predictable amplification of existing societal fault lines. Every connection forged, every efficiency achieved, tightens the chains of dependency.
Future assessment must abandon the illusion of isolated impact. The exercise is not to measure what AI does, but to map the shadow it casts across the entire system – from resource extraction to obsolescence. The current literature offers glimpses of the problem; what is needed is a cartography of decay. A reckoning with the fact that every line of code is a prophecy of eventual entropy.
Original article: https://arxiv.org/pdf/2602.24091.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/
2026-03-02 13:16