AI Inside the Machine: Navigating the EU AI Act for Internal Systems

Author: Denis Avetisyan


A new analysis clarifies how the EU AI Act applies to artificial intelligence deployed within organizations, impacting everything from HR to research and development.

This paper examines the scope of the EU AI Act’s requirements for internally deployed AI systems, including risk assessment and compliance obligations.

Determining the scope of the EU AI Act remains a complex undertaking, particularly concerning the deployment of artificial intelligence systems within organizations for internal use. This memorandum, ‘Internal Deployment in the EU AI Act’, rigorously analyzes arguments for and against extending the Act’s regulations to such internally deployed models, navigating potentially conflicting interpretations of Articles 2(1), 2(6), and 2(8). Our analysis suggests that, despite available exemptions, including those for scientific research, many internal deployments will likely fall under the Act’s purview, demanding proactive compliance strategies. Will a nuanced understanding of these provisions be sufficient to balance innovation with the imperative of responsible AI governance?


The Inevitable Integration: Navigating AI’s Regulatory Landscape

The proliferation of general-purpose artificial intelligence models is fundamentally reshaping the landscape of numerous industries, from healthcare and finance to transportation and entertainment. This rapid deployment, while promising substantial gains in efficiency and innovation, simultaneously introduces complex challenges that demand proactive governance. Unlike traditional software with defined parameters, these AI systems exhibit adaptability and autonomy, creating potential for unforeseen consequences and systemic risks. A clear and comprehensive regulatory framework is therefore becoming increasingly vital, not to impede progress, but to ensure responsible development and deployment: fostering public trust, mitigating potential harms, and establishing accountability as these powerful technologies become further integrated into the fabric of daily life. The current pace of innovation necessitates a dynamic approach to regulation, one that can adapt to evolving capabilities and address emerging ethical and societal implications.

The European Union’s AI Act, while ambitious in its attempt to mitigate the risks posed by artificial intelligence, faces significant hurdles in precisely defining the scope of its regulation. Establishing clear boundaries is complicated by the rapidly evolving nature of AI technology and the diverse ways it can be implemented; a system designed for image recognition, for example, may also contribute to complex decision-making processes, blurring the lines between acceptable use and potentially harmful application. This necessitates a nuanced approach that avoids overly broad definitions, which could stifle innovation, while simultaneously ensuring adequate oversight of high-risk systems. The challenge lies in identifying those AI applications that genuinely warrant strict regulation, distinguishing them from those that offer societal benefits without posing substantial threats to fundamental rights or safety, a task complicated by the potential for unintended consequences and the difficulty of anticipating future technological developments.

Establishing effective AI regulation hinges on a precise understanding of when a system is considered ‘put into service’ and, crucially, determining accountability when harm occurs. Current legal frameworks often struggle to address the unique characteristics of AI, particularly its capacity for autonomous operation and continuous learning. The question is not simply whether an AI caused damage, but when its deployment crossed the threshold that requires adherence to safety standards and liability protocols. Determining responsibility requires tracing the causal chain from the AI’s actions back to the developers, deployers, or operators, a task complicated by the opacity of many AI algorithms and the potential for unforeseen emergent behaviors. Successfully navigating these complexities necessitates a shift toward proactive risk assessment and the establishment of clear lines of responsibility throughout the AI lifecycle, ensuring that innovation is balanced with robust safeguards against potential harms.

The implementation of artificial intelligence within an organization’s own operations, known as internal deployment, presents a distinctive regulatory challenge, as highlighted by recent analysis of the EU AI Act. While offering considerable gains in efficiency and decision-making, applying broad regulations designed for publicly facing AI systems to these internal tools risks hindering innovation and creating unnecessary bureaucratic burdens. This paper details how a rigid interpretation of the Act’s scope could inadvertently stifle beneficial internal applications, such as AI-powered data analysis for research and development or automated internal reporting. A nuanced approach is therefore critical, one that acknowledges the reduced risk profile of internally used AI and fosters responsible development without impeding an organization’s ability to leverage these powerful technologies for internal improvements and growth.

Preserving the Seed: Exemptions for Scientific Advancement

The European Union’s AI Act contains a specific exemption for AI systems created and utilized exclusively for scientific research and development. This exclusion, codified within the Act, acknowledges the fundamental importance of unhindered inquiry and innovation within the scientific community. The rationale is to avoid impeding the progress of research by subjecting early-stage AI experimentation to the same regulatory requirements as deployed, market-ready AI products. This exemption applies to systems used for basic scientific investigation, allowing researchers to explore AI technologies without immediate legal constraints, provided the systems remain within a purely research context and are not intended for commercial application.

Article 2(6) of the EU AI Act exempts AI systems developed and used solely for scientific research and development; however, the applicability of this exemption is contingent upon a demonstrable separation between research activities and product development. The Act does not define these terms, necessitating a case-by-case analysis to determine if a system is genuinely employed for expanding knowledge or is instead being utilized to create a commercially viable product. Activities focused on theoretical investigation, experimentation, and the generation of new hypotheses are considered research, while efforts directed towards the creation of a finalized, marketable product, even if iterative, fall outside the scope of the exemption. This distinction is critical for ensuring that only systems genuinely dedicated to advancing scientific understanding benefit from the Act’s relaxed regulatory requirements.

Article 2(8) of the EU AI Act provides a specific exemption for AI systems engaged in research, testing, and development phases that occur before a product is released to market. This means that the Act’s regulatory requirements, such as conformity assessments and ongoing monitoring, do not immediately apply to AI systems used exclusively within these pre-release activities. The intention is to avoid hindering innovation by imposing obligations on early-stage development and experimentation. This exemption applies regardless of whether the research is conducted by private companies, academic institutions, or public bodies, provided the AI system remains within the scope of research and is not yet deployed as a finalized product or service.

The EU AI Act’s exemptions for research and development specifically encompass internal deployment of AI systems used solely for scientific inquiry, thereby avoiding regulatory burdens that could impede innovation. This interpretation, detailed through a systematic analysis of Articles 2(6) and 2(8), confirms that AI utilized as a tool within a contained research environment – prior to any deployment as a marketable product – is excluded from the Act’s immediate obligations. This allows for continued experimentation and iterative development without triggering compliance requirements intended for fully deployed AI solutions, fostering progress in fields reliant on AI-driven research methodologies.
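
To make the reasoning above concrete, the following is a minimal, hypothetical Python sketch of how an organization might triage a system against the research exemptions discussed in Articles 2(6) and 2(8). The class, field names, and decision rule are illustrative assumptions introduced here, not text from the Act or the paper, and any real determination would require documented legal analysis.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative facts about an internally developed AI system (hypothetical fields)."""
    sole_purpose_is_research: bool        # developed and used only for scientific R&D (cf. Art. 2(6))
    pre_market_testing_only: bool         # confined to research/testing before market release (cf. Art. 2(8))
    outputs_leave_research_context: bool  # results feed operational or commercial decisions
    intended_for_commercial_product: bool # work is directed at a marketable product

def research_exemption_may_apply(p: AISystemProfile) -> bool:
    """Rough triage only: a True result is a prompt for legal review, not a compliance conclusion."""
    if p.intended_for_commercial_product or p.outputs_leave_research_context:
        return False  # both exemptions presuppose a contained, non-commercial research context
    return p.sole_purpose_is_research or p.pre_market_testing_only

# Example: an internal model used only for hypothesis testing, with outputs kept inside the lab.
profile = AISystemProfile(True, True, False, False)
print(research_exemption_may_apply(profile))  # True -> candidate for exemption, pending review
```

The point of the sketch is the asymmetry it encodes: a single disqualifying fact (commercial intent, outputs escaping the research context) defeats the exemption, whereas qualifying for it requires an affirmative, documented limitation.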

The Adaptive Framework: Balancing Regulation with Deployment

Successful integration of AI systems, including general-purpose AI models, into existing workflows necessitates thorough consideration of the stipulations outlined in the EU AI Act. The Act imposes obligations on organizations deploying these systems, extending beyond developers to encompass those utilizing AI within their operational processes. Compliance requires a detailed assessment of how the AI model functions within the specific workflow, and how its outputs are utilized, to ensure adherence to the Act’s requirements regarding risk management, transparency, and human oversight. Failure to adequately address these stipulations can result in significant legal and financial repercussions, as the Act introduces a tiered system of prohibited, high-risk, and limited-risk AI applications, each subject to specific regulatory demands.

Article 2(1)(b) of the EU AI Act establishes that entities deploying AI systems fall under the regulatory framework, extending responsibility beyond developers and providers. This means any organization utilizing an AI system within its operational processes, even if the system was created by a third party, is legally accountable for its use and must demonstrate compliance with the Act’s requirements. Deployment is defined broadly and isn’t limited to direct public-facing applications; internal use cases, such as AI-powered tools for employee management or data analysis, are also subject to the Act. Organizations must therefore assess the risk level of deployed AI systems and implement appropriate risk management measures, including documentation, transparency, and human oversight, to avoid potential penalties for non-compliance.

Article 2(1)(c) of the EU AI Act establishes a significant jurisdictional scope by applying its regulations whenever the output generated by an AI system is utilized within the European Union, regardless of where the system itself is developed or deployed. This means any organization using AI-generated results – whether for internal decision-making, external services, or incorporated into other products – falls under the Act’s purview. Consequently, entities need not directly deploy or manage the AI system to be subject to compliance requirements; simply leveraging its output within the EU triggers obligations related to risk management, transparency, and accountability. This broad definition aims to capture a wide range of AI usage scenarios and ensure responsible AI practices throughout the Union.
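
As a brief illustration of how the two scope triggers discussed above interact, the sketch below flags when either deployment in the EU (cf. Article 2(1)(b)) or use of a system’s output in the EU (cf. Article 2(1)(c)) would bring an organization within the Act’s reach. The function and its simplified either-or logic are assumptions made for exposition only.

```python
def act_scope_triggered(deployed_in_eu: bool, output_used_in_eu: bool) -> bool:
    """Simplified reading of Arts. 2(1)(b) and 2(1)(c): either trigger alone suffices."""
    return deployed_in_eu or output_used_in_eu

# A model hosted entirely outside the EU whose reports inform decisions inside the Union
# still falls within scope under this simplified reading.
print(act_scope_triggered(deployed_in_eu=False, output_used_in_eu=True))  # True
```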

The EU AI Act allows for mitigation of certain obligations when AI systems are deployed solely for internal research and development purposes. Specifically, organizations utilizing AI systems exclusively within a controlled research environment are not subject to the full extent of the Act’s requirements. However, this exemption is contingent upon demonstrably limiting deployment to research activities and maintaining thorough documentation to evidence this limitation. This documentation must clearly outline the scope of the research, the internal controls in place, and confirm that any outputs generated are not utilized beyond the confines of the research environment, as detailed in this paper’s analysis of Article 4 and related provisions within the Act.
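
The documentation burden described above can be thought of as a structured, regularly reviewed record. The sketch below shows one hypothetical shape such a record could take, assuming Python dataclasses; the field names and example values are invented for illustration and do not reproduce any template from the Act or the paper.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InternalResearchDeploymentRecord:
    """Hypothetical evidence record that a deployment stays within a research context."""
    system_name: str
    research_scope: str                                           # the question the system is used to investigate
    internal_controls: list[str] = field(default_factory=list)    # access limits, logging, review gates
    outputs_confined_to_research: bool = True                     # no use of outputs beyond the research environment
    last_reviewed: date = field(default_factory=date.today)       # exemption claims should be re-evaluated over time

record = InternalResearchDeploymentRecord(
    system_name="internal literature-mining assistant",           # hypothetical example system
    research_scope="benchmarking retrieval quality on internal corpora",
    internal_controls=["restricted access group", "output logging", "quarterly review"],
)
print(record.outputs_confined_to_research)  # True, to be confirmed again at each review
```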

The analysis of internal AI deployment within the EU AI Act reveals a predictable tension: systems, even those contained within organizational boundaries, are subject to the inevitable creep of regulation. This mirrors the natural world, where all structures eventually face entropy. As Isaac Newton observed, “If I have seen further it is by standing on the shoulders of giants.” This resonates with the Act’s intent: building upon existing frameworks while acknowledging the inherent complexities of artificial intelligence. The paper rightly highlights that claiming exemptions requires demonstrable rigor, a process that, like all things, will eventually require re-evaluation and adaptation. Stability, in this context, isn’t permanence, but a temporary deferral of the challenges posed by evolving technology and its governance.

The Horizon Recedes

The applicability of the EU AI Act to internal deployments, as this analysis demonstrates, is not a question of whether regulation applies, but of how swiftly existing systems will accommodate it. The exceptions carved out for research and internal administration offer temporary respite, but these are precisely the areas where abstraction carries the weight of the past: early design choices will prove either remarkably adaptable or become brittle liabilities. The current focus on risk assessment, while necessary, risks becoming a static snapshot of capability rather than an ongoing appraisal of systemic drift.

Future work must move beyond the checklist approach to compliance. The Act rightly identifies high-risk applications, but the true challenge lies in anticipating how ostensibly ‘low-risk’ internal tools will, over time, contribute to emergent, system-level risks. Every optimization, every automation, subtly alters the landscape.

The enduring task isn’t simply to meet a regulatory standard, but to build systems that age gracefully. Slow change preserves resilience. The Act provides a framework, but the longevity of any AI deployment will depend on a continuous, skeptical appraisal: not of what the system can do today, but of what it will become tomorrow.


Original article: https://arxiv.org/pdf/2512.05742.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
