Who Owns the Algorithm?

Author: Denis Avetisyan


The rise of increasingly autonomous AI systems is challenging fundamental legal concepts of creation and responsibility.

This review argues for a principle of ‘functional equivalence’ for allocating legal rights and responsibilities in copyright, patent, and tort law when individual contributions within human-AI collaboration cannot be disentangled.

Existing legal doctrines struggle to accommodate increasingly autonomous artificial intelligence, creating ambiguity in areas reliant on traceable origins. This challenge is the focus of ‘Fluid Agency in AI Systems: A Case for Functional Equivalence in Copyright, Patent, and Tort’, which analyzes how AI’s dynamic, adaptive behavior, termed ‘fluid agency’, fractures traditional notions of authorship, inventorship, and liability. The article proposes that when attributing contributions becomes impossible, legal frameworks should adopt a principle of ‘functional equivalence,’ treating human and AI inputs as legally equivalent for allocating rights and responsibility. Will this pragmatic approach offer a viable path forward for reconciling innovation and accountability in an age of increasingly sophisticated AI?


The Shifting Sands of Legal Ownership

Legal concepts of authorship and inventorship have historically centered on human creation, demanding demonstrable intent and intellectual contribution from a person. However, the increasing capacity of artificial intelligence to independently generate creative works and novel solutions introduces significant ambiguity within these established doctrines. Attributing these outputs to a human becomes problematic when the AI operates with limited or no direct human oversight, challenging the very foundation of legal ownership. The core issue isn’t simply whether an AI can create, but rather who – or what – should be legally recognized as the creator when the AI’s contribution transcends mere tool use and enters the realm of genuine ingenuity. This necessitates a critical examination of how legal systems define creative agency and intellectual contribution in a world where machines can demonstrably exhibit both.

Copyright Law and Patent Law, cornerstones of intellectual property protection, were explicitly designed with human creators in mind. These frameworks operate on the premise of intentionality and ingenuity stemming from a person, concepts difficult to apply to autonomous artificial intelligence. While copyright traditionally safeguards expressions originating from a human mind, and patents protect novel, non-obvious inventions conceived by a human inventor, AI-generated works present a challenge. Current legal definitions struggle to accommodate contributions from entities lacking legal personhood or the capacity for conscious intent. Determining ownership or inventorship becomes problematic when an AI independently generates a creative work or a technological solution, forcing a critical examination of whether these long-established legal structures can adequately address the realities of AI-assisted and AI-driven creation.

The increasing prevalence of AI-assisted creation fundamentally disrupts established legal concepts of origin. Traditionally, attributing authorship or inventorship relied on identifying a human source – the point of creative genesis. However, when an AI algorithm significantly contributes to, or even independently generates, a work, pinpointing a singular ‘origin’ becomes problematic. The AI isn’t simply a tool, but an active participant, blurring the lines between human input and machine creation. This poses a challenge to origin-based tests used in copyright and patent law, as these tests often hinge on demonstrable human intentionality and creative control. Determining whether the human contribution was sufficient to establish legal ownership, or if the AI itself holds some claim, requires a nuanced approach that moves beyond the simple identification of a human ‘source’ and considers the complex interplay between human prompting and algorithmic generation.

The increasing sophistication of artificial intelligence compels a fundamental reassessment of legal frameworks surrounding responsibility and ownership. Current systems of attribution, built upon the premise of human agency, falter when applied to creations generated, or significantly shaped, by non-human intelligence. This isn’t merely a technical challenge; it strikes at the core of legal concepts like intent and causality. Determining where creative control resides – with the algorithm’s programmer, the user prompting its operation, or the AI itself – demands new legal precedents. Failing to address this foundational tension risks stifling innovation, creating uncertainty for creators and businesses, and potentially undermining the very foundations of intellectual property law as it pertains to AI-assisted endeavors. A proactive legal evolution is therefore essential to harness the benefits of AI while maintaining a just and predictable system for assigning accountability and recognizing achievement.

Functional Equivalence: Shifting the Focus

Functional Equivalence represents a departure from traditional attribution models which prioritize identifying the creator of a work. Instead, this principle centers analysis on the resultant creation itself and its consequential impact, irrespective of whether the creating entity is human, artificial, or a combination thereof. This shift in focus deemphasizes the ‘who’ and emphasizes the ‘what’ – evaluating contributions based on functional output rather than the origin or nature of the contributor. Consequently, assessment pivots to demonstrable effects and the measurable impact of a creation, rather than the biological or legal status of its originator.

The principle of focusing on functional contribution, rather than biological origin, reinterprets established legal doctrines. Traditionally, Authorship, Inventorship, and Liability are predicated on human agency. Applying Functional Equivalence necessitates evaluating what was created and how it functioned, irrespective of whether the contributing entity is human or artificial. This shifts the inquiry from identifying a human author or inventor to determining which entity – human or AI – demonstrably contributed to the functional realization of a work or invention. Consequently, liability assessments would be based on functional contribution to harm or damage, potentially assigning responsibility to the entity whose actions directly caused the outcome, regardless of its origin. This approach doesn’t negate existing legal frameworks, but rather offers a method for applying them to scenarios involving non-human contributors.

The concept of Functional Equivalence proposes a framework for determining responsibility and attribution in scenarios involving artificial intelligence where traditional assignment based on human agency becomes difficult or impossible. This approach posits that if an AI system makes a demonstrable functional contribution – for example, generating a novel design or identifying a critical flaw – that contribution should be considered equivalent to a human contribution for the purposes of legal and ethical attribution. Specifically, when tracing the origin of a creative work or a consequential decision becomes unmappable due to the complexity of AI involvement, Functional Equivalence suggests evaluating the contribution itself, rather than the entity – human or AI – that produced it, as the basis for assigning responsibility or recognizing authorship. This does not necessarily imply granting AI legal personhood, but rather adapting existing legal doctrines to accommodate contributions irrespective of origin.
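To make this decision rule concrete, the sketch below is a minimal, purely illustrative Python example and not anything proposed in the article itself: the Contribution structure, the traceable flag, and the fallback to joint attribution are invented solely to show how an origin-based allocation might yield to functional equivalence once contributions become unmappable.

```python
from dataclasses import dataclass
from enum import Enum


class Actor(Enum):
    HUMAN = "human"
    AI = "ai"


@dataclass
class Contribution:
    actor: Actor          # who (or what) produced this piece of the work
    description: str      # what the piece functionally contributed
    traceable: bool       # can this piece be isolated and attributed?


def allocate_rights(contributions: list[Contribution]) -> dict[str, str]:
    """Toy decision rule illustrating 'functional equivalence'.

    If every contribution can be traced to a specific actor, allocate
    rights and responsibility in the traditional, origin-based way.
    If any contribution is unmappable, fall back to treating human and
    AI inputs as equivalent and attribute the joint functional output.
    """
    if all(c.traceable for c in contributions):
        return {c.description: c.actor.value for c in contributions}
    # Unmappable case: origin no longer decides; the output is attributed
    # to the collaboration as a whole, regardless of which actor made it.
    return {c.description: "joint (functionally equivalent)" for c in contributions}


if __name__ == "__main__":
    work = [
        Contribution(Actor.HUMAN, "initial prompt and selection", traceable=True),
        Contribution(Actor.AI, "generated design variants", traceable=False),
    ]
    print(allocate_rights(work))
```

The point of the toy is only the branch: attribution by origin where origin is knowable, attribution by function where it is not.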

Implementation of Functional Equivalence necessitates thorough assessment of potential ramifications, particularly regarding equitable outcomes. While the concept aims to address attribution challenges in AI-driven creation, uncritical application could lead to the misallocation of responsibility or the dilution of human contributions. Careful consideration must be given to defining the scope of ‘functional contribution’ to prevent scenarios where AI contributions are disproportionately valued, or conversely, where human oversight is inadequately acknowledged. Furthermore, establishing clear guidelines for determining liability in cases of harm resulting from jointly created works – human and AI – is crucial to avoid legal ambiguity and ensure fair redress.

Automated Systems and the Illusion of Control

The increasing deployment of Automated Decisionmaking Technology (ADMT) necessitates the development of legal and regulatory frameworks to address liability when these systems cause harm. Traditionally, liability has been assigned based on fault, requiring demonstration of negligence or intent. However, ADMT often operates with a degree of autonomy, making it difficult to attribute harm to a specific human actor. This presents challenges for existing legal doctrines, as the causal chain between human action and adverse outcome becomes obscured. Consequently, there is a growing need to explore alternative approaches to accountability, potentially including strict liability, process-based accountability, or the establishment of dedicated regulatory bodies to oversee ADMT deployment and address resulting harms.

The California Automated Decisionmaking Technology (ADMT) Rules establish a framework for accountability centered on the development and deployment process, rather than solely on negligence or fault after harm occurs. This proactive approach mandates documentation of risk assessment, testing, and monitoring of ADMT systems. Specifically, the rules require businesses to conduct impact assessments to identify potential harms, implement mitigation strategies, and maintain records demonstrating due diligence in system design and operation. This emphasis on process allows responsible AI practices to be evaluated even in the absence of demonstrable negligence, shifting the burden toward demonstrably reasonable development and deployment procedures. Furthermore, the rules establish a mechanism for public access to information regarding ADMT systems, fostering transparency and enabling external review of accountability measures.
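By way of illustration only, the sketch below imagines what a process-based compliance record might look like in code. The ADMTImpactRecord class and its fields are hypothetical stand-ins for the kinds of documentation such a regime asks for (risk assessment, mitigation, monitoring); they are not drawn from, and do not reproduce, the actual text of the California ADMT Rules.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ADMTImpactRecord:
    """Hypothetical due-diligence record for an ADMT deployment (illustrative only)."""
    system_name: str
    assessed_on: date
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    monitoring_plan: str = ""

    def is_complete(self) -> bool:
        # In this toy model, due diligence means: at least one identified
        # risk, a mitigation for each risk, and a defined monitoring plan.
        return (
            bool(self.identified_risks)
            and len(self.mitigations) >= len(self.identified_risks)
            and bool(self.monitoring_plan)
        )


if __name__ == "__main__":
    record = ADMTImpactRecord(
        system_name="resume-screening model",   # hypothetical system
        assessed_on=date(2025, 6, 1),
        identified_risks=["disparate impact on protected groups"],
        mitigations=["bias audit before each model update"],
        monitoring_plan="quarterly outcome review",
    )
    print(record.is_complete())
```

The design point the sketch tries to capture is that accountability attaches to whether the process was documented and followed, not to tracing fault after the fact.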

The 2019 and 2020 rulings in the DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) cases before the United Kingdom Intellectual Property Office and, subsequently, the US Patent and Trademark Office demonstrated the inadequacy of current inventorship doctrine when applied to AI-generated inventions. These cases involved patent applications listing DABUS as the sole inventor, with no human contribution. Both offices rejected the applications, consistently requiring a human inventor as a matter of legal precedent rooted in established definitions of inventorship, specifically the requirement that an inventor be a natural person capable of conceiving an invention. The rulings underscored that existing legal frameworks, predicated on human ingenuity, are ill-equipped to address inventions autonomously created by artificial intelligence, necessitating a re-evaluation of the criteria for establishing inventorship in the age of advanced AI.

Deep Research Agents (DRAs) and similar advanced AI tools present challenges to establishing clear attribution of work product due to their complex operational methodologies. These systems operate by autonomously synthesizing information from multiple sources and generating novel outputs, often exceeding the scope of initial human prompts. Consequently, differentiating between human direction, AI-driven analysis, and autonomously generated content becomes increasingly difficult. This complexity hinders the ability to definitively assign responsibility or credit for outcomes, particularly in fields like scientific research, legal discovery, and content creation, where determining the origin and intellectual contribution of specific elements is crucial for accountability and intellectual property rights.

Fluid Agency and the Erosion of Responsibility

The emergence of Fluid Agency marks a significant departure from traditional understandings of artificial intelligence as simply a tool wielded by a human operator. Contemporary AI systems increasingly demonstrate stochastic – or probabilistic – behavior, meaning their actions aren’t strictly predetermined but involve an element of chance. This is further compounded by their dynamic and adaptive nature; these systems learn and evolve, altering their behavior based on interactions with their environment and data. Consequently, the boundary between the AI’s autonomous actions and the intent of its creator becomes blurred, challenging the conventional notion of human control and raising complex questions about where agency truly resides. This isn’t a case of malfunctioning code, but rather an inherent characteristic of advanced AI designed to respond and evolve, creating a fluid relationship between the system, its outputs, and the individuals involved in its development and deployment.

The increasing interplay between artificial intelligence and human creativity is establishing a dynamic of recursive adaptation, wherein each iteratively influences the other’s development. This isn’t simply a case of humans directing AI; rather, AI-generated outputs become stimuli for further human input, which then refines the AI’s subsequent creations – a continuous feedback loop. Consequently, attributing specific contributions within this collaborative process becomes exceptionally difficult. Determining where human intention ends and algorithmic generation begins is often unmappable, blurring the lines of creative authorship and complicating the assignment of responsibility for any resulting outcomes. This co-evolutionary dynamic challenges traditional notions of agency, demanding a re-evaluation of how accountability is established when the creative process is no longer a linear progression from originator to result, but a complex, interwoven cycle of mutual influence.

The increasing sophistication of artificial intelligence presents a fundamental challenge to established legal frameworks concerning harm and accountability. Traditional Tort Law and Liability Doctrine rely on tracing causative links – identifying a clear agent whose actions directly resulted in damage – but this becomes significantly more complex when dealing with AI systems exhibiting fluid agency. As AI moves beyond simple tool status to engage in stochastic, adaptive behavior, the lines between creator intent, algorithmic action, and unforeseen consequence blur. Determining responsibility is no longer a matter of isolating a single actor, but rather disentangling a web of interacting influences, potentially involving iterative human-AI collaboration and emergent system behavior. This necessitates a re-evaluation of how legal systems approach liability, moving beyond strict attribution to consider the broader context of AI’s role in the causal chain and the distribution of functional control.

The increasing complexity of artificial intelligence demands a re-evaluation of legal responsibility, prompting this work to propose the principle of ‘functional equivalence’. Rather than attempting to meticulously trace the origins of an AI’s actions – a task becoming increasingly impossible with systems exhibiting recursive adaptation – this framework suggests treating contributions from both human and artificial agents as equivalent when determining rights and responsibilities. This isn’t about absolving anyone of accountability, but acknowledging that in many scenarios, identifying a singular ‘cause’ will be unmappable. By focusing on the function performed, regardless of the actor, the legal system can allocate rights and obligations based on demonstrable impact, fostering a more practical and adaptable approach to liability in an age of collaborative human-AI creation. This shifts the focus from ‘who’ to ‘what’, enabling a more effective application of legal doctrines to increasingly complex technological landscapes.

The pursuit of defining agency within AI systems feels remarkably like chasing a phantom. This paper’s core argument – that traditional legal frameworks buckle under ‘fluid agency’ – simply acknowledges a truth seasoned engineers have long understood. Attribution becomes a comical exercise when contributions are ‘unmappable’. It’s not that the law is wrong, merely that it attempts to impose order on a fundamentally disordered reality. As Blaise Pascal observed, “All of humanity’s problems stem from man’s inability to sit quietly in a room alone.” Perhaps the legal system’s discomfort with unmappable contributions is a similar inability to tolerate ambiguity, attempting to neatly categorize what is, by its very nature, fluid and collaborative – a digital echo of humanity’s inherent messiness.

What’s Next?

The proposition of ‘functional equivalence’ offers a temporary reprieve, a way to duct-tape legal frameworks onto systems demonstrably designed to dismantle them. It accepts, tacitly, that the quest for pinpointing origin – the sacred ‘who’ of creation – is increasingly futile. The real work begins when production systems inevitably expose the cracks in this equivalence. Consider the inevitable disputes over degrees of functional equivalence – the algorithmic quibbles over contribution percentages that will become the bread and butter of future litigation. The elegant theory will quickly meet the messy reality of assigning blame (or credit) when things predictably fail.

Future research should not focus on refining the definition of ‘fluid agency’ – that’s chasing a ghost. Instead, the field must address the inherent limitations of applying any attribution model to systems operating at scales and complexities beyond human comprehension. The focus will shift from ‘who created this?’ to ‘who maintains the infrastructure that allowed this to happen?’ – a far less satisfying, and infinitely more distributed, allocation of responsibility.

One suspects that the ultimate resolution will not be a legal principle, but a pragmatic acceptance. The system will adapt, not by solving the problem of AI authorship, but by quietly redefining ‘authorship’ itself. Documentation is, of course, a myth invented by managers, so good luck tracing that evolution. CI is the temple, and it is praying that nothing breaks.


Original article: https://arxiv.org/pdf/2601.02633.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2026-01-07 16:34