Beyond the Algorithm: Art’s Critical Response to AI

Author: Denis Avetisyan


Artists are increasingly using machine learning not just as a tool for creation, but as a medium for critical inquiry into its societal implications.

This review examines the emerging field of tactical AI art and its engagement with issues of algorithmic bias, computational creativity, and AI ethics.

Despite increasing reliance on artificial intelligence, critical engagement with its underlying assumptions remains surprisingly limited within artistic practice. This paper, ‘Lures of Engagement: An Outlook on Tactical AI Art’, examines how artists are uniquely positioned to leverage and critique machine learning technologies, addressing sociopolitical concerns through innovative aesthetic strategies. Analysis of work across sociocultural, existential, and political themes reveals that tactical AI art offers vital insights into the ambiguities of our increasingly algorithmically mediated world. Can these artistic interventions not only expose the limitations of AI, but also chart a course toward more responsible and epistemologically grounded computational creativity?


Unmasking the Machine: Bias Embedded in the Algorithmic Mirror

Artificial intelligence systems, frequently presented as objective arbiters, are demonstrably susceptible to bias – a phenomenon stemming directly from the data used in their development. These systems learn patterns from the information they are fed, and if that data reflects existing societal prejudices – regarding race, gender, socioeconomic status, or any other characteristic – the AI will inevitably internalize and perpetuate those biases. This isn’t a matter of malicious intent on the part of the algorithms, but rather a consequence of their reliance on imperfect and often skewed datasets. Consequently, an AI designed to assess loan applications, for example, might unfairly discriminate against certain demographics if the historical loan data used for training contained pre-existing biases in lending practices. The illusion of objectivity, therefore, masks a critical reality: AI is a mirror reflecting – and often amplifying – the inequalities inherent in the data it consumes.
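The loan example above can be made concrete with a minimal numpy sketch. The data, groups, and score thresholds below are entirely invented for illustration: two groups have identical score distributions, but the historical decision rule held one group to a higher bar. A naive "model" that simply learns the historical approval rate per group reproduces that disparity exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical" loan decisions: two groups with identical
# creditworthiness, but group B was historically held to a higher bar.
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
score = rng.normal(600, 50, n)           # same score distribution for both
# The bias lives in the historical rule, not in the applicants:
approved = score > np.where(group == 0, 580, 620)

# A naive "model" that just memorizes the historical approval rate
# per group will reproduce the disparity in its own decisions.
rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
print(f"historical approval rate, group A: {rate_a:.2f}")
print(f"historical approval rate, group B: {rate_b:.2f}")
```

Nothing in the learning step is "malicious": the gap is inherited entirely from the training labels, which is the point the paragraph above makes.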

Artificial intelligence systems, while presented as objective tools, frequently encode and exacerbate pre-existing societal inequalities. These aren’t simply random errors in calculation; rather, biases embedded within algorithms can systematically disadvantage already marginalized groups. For example, facial recognition software exhibiting higher error rates for people of color doesn’t represent a neutral technological failing, but a reinforcement of discriminatory patterns. Similarly, AI used in loan applications or hiring processes can perpetuate historical biases, denying opportunities based on factors like race or gender. The consequence is a feedback loop where biased data leads to biased outcomes, solidifying and amplifying inequalities across various aspects of life, effectively automating discrimination at scale.

The foundation of nearly all artificial intelligence lies in meticulously labeled datasets, a process known as data annotation. However, this seemingly objective task is profoundly shaped by human perspectives and inherent limitations. Annotators, despite best efforts, inevitably bring their own cultural biases, assumptions, and understandings to the labeling process, unintentionally embedding these perspectives into the data. Moreover, the demographics of annotators often lack sufficient representation from diverse groups, leading to underrepresentation or misrepresentation of certain populations within the training data. Consequently, AI systems learn to recognize patterns that reflect these skewed datasets, perpetuating and even amplifying existing societal biases in areas like facial recognition, natural language processing, and predictive algorithms – demonstrating that the quality of AI is inextricably linked to the inclusivity and objectivity of its foundational data labeling practices.
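The effect of annotator demographics on labels can be sketched in a few lines. The two "cultures" and their labeling probabilities below are hypothetical numbers chosen only to make the mechanism visible: when the annotator pool is homogeneous, majority-vote "ground truth" simply encodes that pool's convention.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 ambiguous items; annotators from culture X label them "positive"
# 80% of the time, annotators from culture Y only 30% of the time.
n_items = 100
p_pos = {"X": 0.8, "Y": 0.3}

def majority_labels(pool):
    """Majority vote over a pool of annotators (list of culture tags)."""
    votes = np.stack([rng.random(n_items) < p_pos[c] for c in pool])
    return votes.sum(axis=0) > len(pool) / 2

skewed   = majority_labels(["X"] * 5)                 # homogeneous pool
balanced = majority_labels(["X", "X", "Y", "Y", "Y"])  # mixed pool
print("share labeled positive, homogeneous pool:", skewed.mean())
print("share labeled positive, mixed pool:      ", balanced.mean())
```

The "ground truth" a downstream model trains on is very different in the two cases, even though the items themselves are identical.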

The pursuit of unbiased artificial intelligence extends far beyond the realm of technical adjustments and algorithmic refinements. Recognizing and mitigating AI bias is fundamentally an ethical and societal challenge, demanding careful consideration of the values embedded within these systems. As AI increasingly influences critical decisions, from loan applications and hiring processes to criminal justice and healthcare, the potential for perpetuating and amplifying existing societal inequalities becomes profoundly real. Therefore, addressing bias requires a multidisciplinary approach, encompassing not only computer scientists and engineers, but also ethicists, social scientists, policymakers, and the communities most likely to be impacted. Ignoring the ethical dimensions of AI development risks creating systems that reinforce discrimination, erode trust, and ultimately undermine the promise of a more equitable future.

Subverting the Code: AI Art as Tactical Intervention

Contemporary artistic practice demonstrates a growing trend of utilizing AI as a medium for critical inquiry, shifting away from predominantly positive portrayals of artificial intelligence. This manifests as “Tactical AI Art,” where artists actively address the sociopolitical ramifications of AI technologies. Rather than focusing on the potential benefits of AI, these works frequently examine issues such as algorithmic bias, data privacy, and the potential for automated systems to reinforce existing power structures. This critical approach distinguishes itself from earlier AI art which often prioritized demonstrating technical capabilities or exploring aesthetic possibilities, and instead prioritizes a nuanced investigation of AI’s societal impact and potential harms.

Tactical AI Art builds upon the established history of Tactical Media, a practice originating in the late 20th century that utilized communication technologies for social and political activism. Historically, Tactical Media employed tools like radio, video, and the early internet to circumvent mainstream media and enable direct action, often focusing on issues of control, surveillance, and corporate power. Contemporary artists working with AI extend this approach by critically engaging with the specific infrastructures and algorithms of artificial intelligence, framing AI not as a neutral technology but as a site of contestation. This continuation involves employing AI tools themselves – image generation, data visualization, and machine learning – to expose and intervene in systems of power, mirroring the historical use of media for disruption and resistance.

Tactical AI Art actively contests prevalent perspectives on Surveillance Technology by visualizing and interrogating its operational logic and societal impact. Artists employing this practice often focus on the asymmetries of power inherent in surveillance systems, demonstrating how data collection and algorithmic processing can reinforce existing inequalities and enable new forms of control. This critique extends beyond theoretical concerns, manifesting in artworks that expose the limitations of surveillance – such as algorithmic bias or vulnerabilities to manipulation – and highlight the potential for misuse by state and corporate entities. By making the mechanisms of surveillance visible and understandable, Tactical AI Art aims to foster public awareness and encourage critical engagement with these technologies, moving beyond acceptance of surveillance as a neutral or beneficial force.

Exploitation Forensics, as practiced within Tactical AI Art, involves the visual representation of algorithmic processes and the underlying infrastructure that supports them. Artists employ techniques to deconstruct and map these systems, making visible the data flows, computational logic, and resource allocation that are typically obscured. This visualization isn’t merely aesthetic; it aims to reveal the power dynamics embedded within these technologies, specifically how algorithms can perpetuate bias, enable control, and concentrate authority. By rendering these normally invisible mechanisms, artists highlight areas of potential exploitation and challenge the perception of algorithms as neutral or objective tools, instead demonstrating their constructed nature and inherent political implications.

Beyond the Human: Questioning Agency in the Age of Machines

The increasing capabilities of artificial intelligence are prompting reassessment of long-held assumptions about human identity and agency. Traditionally, concepts like consciousness, intentionality, and self-awareness have been considered uniquely human attributes. However, the development of AI systems exhibiting complex behaviors – including learning, problem-solving, and even creative output – challenges this anthropocentric view. This challenge has fueled growing interest in Posthumanism, a philosophical and cultural movement that questions the privileged status of humans and explores the potential for expanded, non-human forms of intelligence and existence. Posthumanist thought doesn’t necessarily posit the replacement of humanity, but rather a critical examination of the boundaries defining it, and consideration of the ethical and societal implications of increasingly sophisticated AI.

Machine Learning (ML) utilizes algorithms that allow computer systems to improve performance on a specific task without explicit programming, relying instead on data to learn patterns and make predictions. A significant advancement within ML is Deep Learning, which employs artificial neural networks with multiple layers – hence “deep” – to analyze data with greater complexity. These networks, inspired by the structure of the human brain, can automatically extract features from raw data, eliminating the need for manual feature engineering. The increased computational power and availability of large datasets have facilitated the development of increasingly complex Deep Learning models capable of achieving high levels of autonomy in tasks such as object detection, natural language processing, and game playing, demonstrating a shift towards systems that can operate with minimal human intervention.
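The claim that layered networks extract their own features, rather than relying on manual feature engineering, can be illustrated with the standard XOR example. This is a from-scratch toy sketch, not any system discussed in the paper: XOR is not linearly separable, so a model with no hidden layer cannot learn it, while even one hidden layer lets the network build the intermediate features it needs.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so the network must learn its own
# intermediate representation in the hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)                       # learned features
    return h, 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output

for step in range(5000):                           # full-batch gradient descent
    h, p = forward(X)
    grad_out = (p - y) / len(X)                    # cross-entropy gradient
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * (1 - h**2)        # backprop through tanh
    grad_W1 = X.T @ grad_h
    W2 -= 0.5 * grad_W2; b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * grad_W1; b1 -= 0.5 * grad_h.sum(0)

_, p = forward(X)
print(np.round(p.ravel(), 2))   # trained predictions for the four XOR inputs
```

No one hand-designed the "if exactly one input is on" feature; the hidden layer discovers it from data, which is the shift the paragraph describes at much larger scale.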

Generative Adversarial Networks (GANs) are a class of machine learning systems designed to generate new data instances that resemble the training data. A GAN consists of two neural networks: a generator, which creates new data, and a discriminator, which evaluates the authenticity of the generated data and distinguishes it from real data. These networks are trained in an adversarial process, where the generator attempts to fool the discriminator, and the discriminator attempts to correctly identify generated data. This iterative process leads to the generator producing increasingly realistic outputs, demonstrating a capacity for creative expression in areas like image synthesis, music composition, and text generation. The success of GANs in these domains challenges traditional distinctions between human creativity and machine computation, as the generated outputs often exhibit qualities previously considered uniquely human.
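The adversarial objective itself can be shown without a full training loop. The sketch below is a deliberate simplification with invented parameters: "real" data is a shifted Gaussian, the generator is a single shift parameter theta, and the discriminator is a fixed linear score. It demonstrates only the loss structure, a generator whose samples land on the real data fools this discriminator far better than one that ignores it.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# Toy setup: "real" data ~ N(3, 1). The "generator" shifts unit Gaussian
# noise by a single parameter theta; the "discriminator" is a fixed
# linear score D(x) = sigmoid(w * x + b).
real = rng.normal(3.0, 1.0, 256)
w, b = 1.0, -1.5

def g_loss_for(theta):
    """Non-saturating generator loss: -E[log D(G(z))]."""
    fake = rng.normal(0.0, 1.0, 256) + theta
    return -np.log(sigmoid(w * fake + b)).mean()

def d_loss_for(theta):
    """Discriminator loss: -E[log D(real)] - E[log(1 - D(fake))]."""
    fake = rng.normal(0.0, 1.0, 256) + theta
    return -(np.log(sigmoid(w * real + b)).mean()
             + np.log1p(-sigmoid(w * fake + b)).mean())

# A generator that matches the real distribution (theta = 3) achieves a
# much lower loss against this discriminator than one at theta = 0.
print(f"G loss at theta=0: {g_loss_for(0.0):.2f}")
print(f"G loss at theta=3: {g_loss_for(3.0):.2f}")
```

In an actual GAN both networks are deep models updated in alternation against these two objectives; the iterative tug-of-war the paragraph describes is exactly gradient descent on `d_loss_for` and `g_loss_for` in turn.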

Computer Vision encompasses a range of techniques enabling automated analysis of visual data. Core to this field are methodologies like Image Classification, which assigns predefined labels to images based on their content, and Facial Recognition, a specialized form of image classification focused on identifying or verifying individuals from digital images or video. These systems function by utilizing algorithms, often based on Deep Learning, to process pixel data and extract relevant features. The success of these technologies relies on the availability of large, labeled datasets used to train the algorithms; however, inherent biases within these datasets can lead to inaccuracies or discriminatory outcomes. Consequently, the application of Computer Vision raises critical questions regarding the objectivity of ‘seeing’ and the potential for misrepresentation in automated perception systems.
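Image classification in its simplest form, assigning a label from pixel patterns learned from a labeled dataset, can be sketched with a toy nearest-centroid classifier. Everything here is synthetic and invented for illustration (8x8 "images", two classes defined by where the bright region sits); real systems use deep networks, but the dependence on the training set is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "image classification" task: 8x8 grayscale images of two classes,
# distinguished by whether the bright region sits in the top or bottom half.
def make_image(label):
    img = rng.random((8, 8)) * 0.2          # faint background noise
    rows = slice(0, 4) if label == 0 else slice(4, 8)
    img[rows, :] += 0.8                     # bright region encodes the class
    return img.ravel()

# Labeled training set: 50 examples per class.
X_train = np.stack([make_image(l) for l in (0, 1) for _ in range(50)])
y_train = np.repeat([0, 1], 50)

# Nearest-centroid classifier: predict the class whose mean image
# (in raw pixel space) is closest to the input.
centroids = np.stack([X_train[y_train == c].mean(0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

print([predict(make_image(l)) for l in (0, 1)])
```

The classifier "sees" only what its training data let it see: if one class (or one population, in the facial-recognition case) is underrepresented or mislabeled, the learned centroids, and every prediction made from them, inherit that skew.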

Deconstructing Value: Challenging the NFT Paradigm

Despite promises of decentralization and empowerment, the NFT ecosystem frequently mirrors and even amplifies existing societal inequalities. The financial barriers to entry – including gas fees, the cost of minting, and the inherent volatility of cryptocurrencies – often exclude artists and collectors from marginalized communities. Furthermore, the concentration of wealth within the NFT space tends to benefit a small number of established collectors and platforms, reinforcing pre-existing power dynamics in the art world. Algorithmic biases embedded within NFT marketplaces and promotional strategies can also systematically disadvantage certain creators, while speculative bubbles driven by hype and exclusivity often prioritize financial gain over artistic merit, ultimately hindering broader access and genuine innovation.

Tactical AI art frequently operates in direct opposition to the foundational tenets of the NFT market, deliberately undermining systems of speculative value and commodification. Artists employing these techniques often prioritize open-source distribution, infinite reproducibility, and the dismantling of artificial scarcity – concepts fundamentally at odds with the NFT emphasis on unique, limited-edition digital assets. Rather than seeking to establish monetary worth through blockchain-verified ownership, these projects frequently leverage AI to generate works that are freely available, infinitely mutable, or even designed to actively resist collection and resale. This approach positions AI not as a tool for enhancing the value of digital objects, but as a means of critiquing the very notion of digital ownership and challenging the economic structures that underpin the NFT ecosystem, fostering a counter-narrative focused on access, collaboration, and the de-commodification of creative expression.

The prevailing emphasis on digital scarcity within the NFT landscape presents a fundamental paradox when considered alongside the rapidly expanding capabilities of artificial intelligence. AI tools, increasingly accessible and user-friendly, hold the potential to dramatically lower the barriers to creative production, enabling a far wider range of individuals to generate sophisticated art, music, and literature. This democratization of creative tools directly challenges the NFT model, which often relies on artificially limited supply to drive up value. Where NFTs thrive on exclusive ownership, AI suggests a future where creative abundance is the norm, potentially undermining the economic logic of digital collectibles and fostering a more inclusive, knowledge-based creative ecosystem. The tension between these forces compels a re-evaluation of how value is assigned in the digital realm and whether current systems truly serve the interests of artistic expression and broad participation.

Artists now face a critical juncture, requiring deliberate engagement with technologies like AI not simply as tools for creation, but as subjects for critical examination. The prevailing NFT model, with its emphasis on digital scarcity and ownership, presents a significant counterpoint to the democratizing potential of AI-driven art. Rather than passively adopting these technologies within existing market structures, artists are increasingly called upon to actively resist pressures that prioritize commodification over creative exploration and accessibility. This resistance isn’t necessarily about rejecting NFTs entirely, but about consciously shaping their application to align with values that promote open access, collaboration, and the free dissemination of knowledge, ultimately redefining the relationship between art, technology, and the market.

The exploration of Tactical AI Art, as detailed in this paper, reveals a fascinating tension between creation and critique. The artist, much like a systems engineer, doesn’t merely accept the tools at hand but actively probes their limits. This resonates deeply with the sentiment expressed by Claude Shannon: “The most important thing in a complex system is managing the interfaces.” Indeed, the ‘interfaces’ here aren’t merely technical, but conceptual – the points where algorithmic bias meets artistic intent, where generative models confront sociopolitical realities. By deliberately stressing these interfaces, artists expose the underlying architecture of machine learning, revealing both its power and its potential pitfalls – a process akin to reverse-engineering reality itself.

What’s Next?

The exploration of tactical AI art, as presented, inevitably circles back to the foundational rules governing the very systems artists attempt to subvert. One might ask: what happens when the critique becomes the canon? If generative adversarial networks are repeatedly prodded to reveal their biases (racial, gendered, or otherwise), does that merely expose the flaw, or does it, through iterative training on that very exposure, encode the critique into the algorithm itself? The result could be a self-aware bias, a formalized prejudice presented as progress.

Further investigation demands a dismantling of the ‘black box’ not through transparency alone, but through deliberate, constructive interference. Artists are positioned to introduce controlled errors, to ‘break’ the machine learning process in ways that highlight not just what the algorithm learned, but how it learned it. This isn’t simply about identifying bias; it’s about understanding the architecture of that bias, mapping the pathways of its reinforcement.

Ultimately, the field must confront the inherent paradox of using tools built on flawed datasets to critique those same datasets. The goal isn’t to eliminate the flaws (that’s likely impossible) but to externalize them, to make them visible and mutable. The true art lies not in generating novel images, but in engineering a controlled demolition of the algorithmic assumptions underpinning them.


Original article: https://arxiv.org/pdf/2602.20221.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2026-02-25 15:16