Coding the Battlefield: When AI Researchers Become Arms Dealers

Author: Denis Avetisyan


A new analysis explores the ethical tightrope walked by artificial intelligence scientists whose work increasingly fuels the development of autonomous weapons systems.

This review examines the role of AI researchers in military funding and argues for a critical approach to dual-use technology, emphasizing solidarity as a means to resist algorithmic warfare.

The accelerating integration of artificial intelligence into military technologies presents a paradox: innovation promising security simultaneously amplifies the potential for unprecedented harm. This paper, ‘The implicated scientist: on the role of AI researchers in the development of weapons systems’, investigates the ethical position of AI researchers whose work contributes, often indirectly, to systems capable of mass destruction and escalating global inequalities. We argue that recognizing one’s status as an ‘implicated subject’ is a crucial first step, but insufficient without a commitment to fostering differentiated, long-distance solidarity with those most vulnerable to technologically enhanced injustices. Can a shift in perspective and practice transform the role of the AI researcher from one of complicity to one of meaningful resistance?


The Origins of Control: Military Roots of Modern AI

The narrative of artificial intelligence as a neutral, universally beneficial technology often obscures its origins. A significant portion of the foundational research that birthed the field, especially within the realm of cybernetics and early neural networks, was directly financed by military agencies. Driven by Cold War anxieties and the pursuit of technological superiority, organizations like the Defense Advanced Research Projects Agency (DARPA) provided crucial funding and direction to pioneering scientists. This early investment wasn’t aimed at creating broadly accessible tools, but rather at developing systems for command and control, automated targeting, and information processing relevant to national security. Consequently, the initial parameters and priorities of AI research were heavily influenced by strategic military objectives, laying the groundwork for a development path intrinsically linked to defense applications.

The development of artificial intelligence has been indelibly marked by its origins in military funding, a pattern that continues to shape its priorities today. Early research into cybernetics and related fields received substantial investment from defense agencies seeking technological superiority, effectively steering the initial trajectory of AI toward strategic applications like surveillance, targeting, and autonomous systems. This historical dependence persists, as evidenced by the dramatic increase in U.S. Department of Defense spending on AI – a sixteen-fold surge from 2022 to 2023, now totaling $4.323 billion. This financial commitment suggests a continued prioritization of AI for military purposes, potentially overshadowing development focused on broader societal benefits such as healthcare, education, or environmental sustainability, and raising questions about the ultimate direction of this powerful technology.

The pursuit of military dominance is rapidly accelerating the development of artificial intelligence, fostering an “AI arms race” where nations prioritize speed and capability over careful consideration of ethical implications. This competitive environment incentivizes the creation of increasingly autonomous weapons systems and surveillance technologies, often with limited public oversight or international regulation. The emphasis on strategic advantage means that potential risks – such as algorithmic bias, unintended consequences, and the erosion of human control – are frequently sidelined in favor of achieving a decisive technological edge. Consequently, innovation is heavily skewed toward applications with immediate military value, potentially hindering the development of AI for peaceful purposes and exacerbating global security concerns as these technologies proliferate.

Many everyday technologies owe their existence to initial funding from military research initiatives. Concepts initially explored for battlefield applications (advanced algorithms for target recognition, neural networks for data analysis, even the foundations of the internet itself) have since been repurposed and refined for civilian use. This isn’t simply a case of ‘spin-off’ technology; the underlying architecture and core functionalities of numerous commercial products stem directly from projects conceived within defense programs. Consequently, a substantial portion of the modern technological landscape is built upon a foundation of military-funded innovation, creating a complex interdependence in which advancements in civilian sectors frequently rely on technologies originally intended for warfare, and raising questions about the long-term implications of this historical reliance.

The Dual-Use Dilemma: Expanding Applications, Limited Oversight

Dual-use technology, inherent in much AI research, refers to algorithms and systems with applications spanning both civilian and military sectors. For example, computer vision algorithms developed for enhancing security camera footage and enabling facial recognition in public spaces can be readily adapted for use in autonomous targeting systems or drone-based surveillance. Similarly, natural language processing techniques designed for virtual assistants and data analysis are also applicable to intelligence gathering and automated threat detection. This inherent ambiguity means that a single research investment can yield capabilities applicable to both beneficial and harmful ends, complicating efforts to regulate and control the development of potentially dangerous AI systems.
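To make the ambiguity concrete, consider a minimal sketch (hypothetical code, not drawn from the paper): a generic object-detection routine contains nothing that binds it to either deployment, and the end use is determined entirely by the calling context.

```python
# Hypothetical sketch of dual use: one generic detector, two callers.
# Nothing in detect_objects() encodes, or constrains, its end use.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str
    confidence: float
    box: Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def detect_objects(frame: bytes, threshold: float = 0.8) -> List[Detection]:
    """Stand-in for any off-the-shelf vision model (YOLO, Faster R-CNN, ...).
    Real inference would run here; a canned result keeps the sketch runnable."""
    return [Detection("person", 0.93, (40, 60, 120, 240))]

def count_visitors(frame: bytes) -> int:
    """Civilian caller: foot-traffic analytics for a retail store."""
    return sum(1 for d in detect_objects(frame) if d.label == "person")

def select_targets(frame: bytes) -> List[Detection]:
    """Military caller: the very same detections, fed to a targeting pipeline."""
    return [d for d in detect_objects(frame) if d.label == "person"]
```

From the model’s point of view the two callers are indistinguishable, which is precisely why controls applied at the level of the algorithm, rather than the deployment, tend to fail.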

The involvement of large technology companies in artificial intelligence research is increasingly coupled with direct collaboration with weapons manufacturers. These partnerships manifest through joint research ventures, contract work, and the provision of AI platforms and infrastructure for military applications. Companies specializing in cloud computing, machine learning, and data analytics are actively supplying tools and expertise to defense contractors and government agencies. This trend represents a shift from indirect contributions to AI development – through foundational research and general-purpose technologies – towards a more direct role in the creation and deployment of military AI systems, blurring the lines between civilian innovation and defense capabilities.

Fundamental AI research, while frequently framed as objective and non-applied, possesses inherent adaptability for military use. This is due to the foundational nature of the algorithms and techniques involved; core advances in machine learning, computer vision, and natural language processing transfer readily across domains, including those with direct military relevance. Funding mechanisms and research collaborations often facilitate this transition, creating a demonstrable pipeline whereby theoretical work quickly informs applied technologies such as autonomous systems, enhanced surveillance capabilities, and improved targeting algorithms. The blurred boundary between basic and applied research, coupled with the inherent dual-use nature of AI, allows military applications to be rapidly prototyped and deployed on the basis of publicly available research.

The increasing proliferation of artificial intelligence across multiple sectors introduces significant accountability challenges and the risk of unforeseen outcomes. Federal investment patterns highlight this concern: 95% of all federal AI-related funding in 2023 was allocated to the U.S. Department of Defense. This concentration of resources indicates a strong emphasis on military applications and raises questions about the development and deployment of AI technologies outside defense contexts, as well as the potential for these technologies to be used in ways not originally intended or adequately considered.

The Weight of Power: Accountability and the Distribution of Benefit

The economic advantages stemming from artificial intelligence are currently concentrated within a limited sphere of influence, notably large corporations and military organizations. While these entities realize substantial financial gains and strategic benefits from AI technologies, the wider implications for societal well-being, specifically regarding data privacy and cybersecurity, are frequently marginalized. This disparity creates a situation where the pursuit of innovation and profit often overshadows crucial safeguards necessary to protect individuals and communities from potential harms. The current trajectory suggests a reinforcement of existing power imbalances, in which the benefits of AI accrue to those already privileged while the risks are disproportionately borne by the vulnerable, demanding a re-evaluation of priorities and a more inclusive approach to development.

The architecture of modern AI development frequently positions individuals as Implicated Subjects, meaning even those with positive intentions can inadvertently contribute to harmful outcomes. This arises from complex systems where the full implications of one’s work are obscured by layers of abstraction and specialization; a programmer focused on algorithmic efficiency, for example, might not be aware of how that algorithm will be deployed in a biased or discriminatory context. The issue isn’t necessarily malicious intent, but rather a lack of comprehensive understanding regarding the broader societal impact of the technology being created. This dynamic highlights a critical responsibility for those within the AI ecosystem to actively seek out and address potential harms, demanding greater transparency and accountability throughout the development process, and fostering a culture of ethical foresight.

The development of artificial intelligence isn’t a neutral process; rather, it’s deeply interwoven with existing power dynamics, a reality illuminated by Critical Theory. This analytical framework reveals how societal inequalities – stemming from factors like class, race, and gender – aren’t simply reflected in AI systems, but actively embedded within their design and deployment. Algorithms are created by individuals operating within specific institutional contexts, inheriting and perpetuating biases present in the data they utilize and the objectives they prioritize. Consequently, AI technologies often serve to reinforce these pre-existing structures, potentially amplifying discrimination and limiting access to opportunities for marginalized groups. Understanding AI development through this lens necessitates a shift from viewing technology as objective and impartial, to recognizing it as a social construct shaped by, and contributing to, complex power relations.
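A toy demonstration of this inheritance, using invented numbers rather than anything from the paper: a model fitted to imitate historical decisions reproduces the disparity those decisions encoded, even when every applicant is equally qualified.

```python
# Toy sketch with invented numbers: bias in, bias out.
import random
random.seed(0)

def make_history(n: int = 10_000):
    """Synthetic past hiring decisions: skill is identically distributed
    across groups, but group B was historically approved far less often."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.gauss(0.0, 1.0)                 # same for both groups
        approve_rate = 0.70 if group == "A" else 0.30  # the embedded bias
        rows.append((group, skill, random.random() < approve_rate))
    return rows

history = make_history()

# "Training" is just learning the historical approval frequency per observed
# attribute. The attribute is the group itself here for clarity; in practice
# it is usually a correlated proxy (zip code, school, phrasing in a resume).
counts = {}
for group, _, hired in history:
    n, k = counts.get(group, (0, 0))
    counts[group] = (n + 1, k + hired)

model = {g: round(k / n, 2) for g, (n, k) in counts.items()}
print(model)  # roughly {'A': 0.70, 'B': 0.30}: the disparity survives training
```

No one in this pipeline intends harm; an optimizer minimizing imitation error on such data is doing exactly what it was asked to do, which is the sense in which inequality is embedded rather than merely reflected.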

A transformative recalibration of artificial intelligence development is essential, moving beyond purely technological advancement to embrace a core foundation of ethical responsibility and social justice. This necessitates a proactive integration of diverse perspectives – encompassing ethicists, social scientists, and impacted communities – throughout the entire AI lifecycle, from initial design to deployment and ongoing monitoring. Such a shift demands a move away from solely optimizing for efficiency or profit, and instead prioritizing fairness, transparency, and accountability in algorithmic systems. Ultimately, the future of AI hinges not just on what it can do, but on how it aligns with equitable societal values and contributes to a more just and inclusive world.

Toward Collective Action: Science, Resistance, and Value-Based Research

Organizations such as Science for the People promote a critical analysis of science, asserting that research priorities are not neutral but are shaped by existing power structures and funding sources. This perspective argues that scientific inquiry often serves the interests of corporations, governments, and military entities, potentially leading to the neglect of research beneficial to marginalized communities or focused on addressing social problems. Advocates for this approach emphasize the importance of scientists engaging in social and political activism, prioritizing research that promotes social responsibility, and actively challenging the influence of external stakeholders on research agendas. They contend that a truly objective science requires transparency in funding, democratic control over research directions, and a commitment to using scientific knowledge for the public good, rather than private profit or political gain.

Effective resistance to systems of oppression and the pursuit of accountability from those profiting from detrimental technologies necessitate coordinated group effort. Individual actions, while potentially impactful, are significantly amplified through solidarity and collective action. This approach facilitates resource pooling, expands the reach of advocacy, and increases the political pressure on decision-makers. Collective organizing enables the formulation of unified demands, coordinated strategies – including legal challenges, public campaigns, and economic disruption – and mutual support for participants facing repercussions. The scale and visibility of collective action are key factors in influencing policy changes and shifting public perception regarding harmful technologies and the entities responsible for their development and deployment.

Direct action, encompassing tactics like protests and boycotts, functions as a mechanism for both publicizing concerns and actively impeding business-as-usual operations. Protests leverage public assembly to draw media attention to specific issues and demonstrate the scale of opposition, while boycotts aim to economically pressure organizations or industries by discouraging consumers from purchasing their products or services. The efficacy of these strategies lies in their potential to disrupt established power dynamics by increasing the costs – reputational, financial, or logistical – associated with maintaining the status quo. Historical examples demonstrate that sustained direct action can compel policy changes, alter corporate behavior, and ultimately contribute to broader systemic shifts, particularly when coordinated with other forms of advocacy and organizing.

Value-Based Research (VBR) in Artificial Intelligence utilizes insights from Feminist and Decolonial Scholarship to address biases and power imbalances embedded within AI systems. This approach moves beyond purely technical solutions by explicitly incorporating ethical considerations and social justice principles into the research design and development process. VBR critically examines the assumptions and values that underpin data collection, algorithm design, and model evaluation, seeking to mitigate harms disproportionately affecting marginalized communities. It emphasizes participatory research methods, centering the voices and experiences of those most impacted by AI technologies, and prioritizes the development of AI applications that promote equity, liberation, and social well-being rather than reinforcing existing inequalities.

The exploration of AI researchers’ ‘implicated subject’ position demands rigorous self-assessment, a process mirroring the minimization of unnecessary complexity. The article posits that acknowledging one’s role within systems of harm is the initial step toward meaningful resistance. This aligns with Arthur C. Clarke’s famous observation: “Any sufficiently advanced technology is indistinguishable from magic.” This ‘magic’, the transformative power of AI, carries inherent ethical weight, and researchers must actively deconstruct the illusion of neutrality. Density of meaning is achieved not through technical sophistication, but through honest reckoning with the dual-use nature of technology and the potential for algorithmic warfare. Unnecessary obfuscation serves only to amplify the risks.

The Remaining Questions

The argument for acknowledging an ‘implicated subject’ position, while intuitively resonant, reveals the enduring difficulty of assigning moral responsibility within complex systems. The paper rightly identifies the problem of dual-use technology, but the solution – a call for solidarity – remains, necessarily, a starting point, not a destination. What constitutes effective solidarity in a field so deeply entwined with institutional funding, and what compromises are inevitable in its pursuit? These are not questions easily answered by further ethical analysis, but by the messy, iterative process of practice.

Future work must move beyond identifying the ethical contours of the AI arms race and confront the material conditions that sustain it. A focus on the logistical networks, funding flows, and bureaucratic processes which enable algorithmic warfare offers a more precise – and perhaps more effective – lever for change. The emphasis should shift from individual moral failings to systemic vulnerabilities.

Ultimately, the value of this line of inquiry lies not in discovering new ethical principles, but in stripping away the layers of abstraction that obscure the fundamental problem: power. The remaining question is not whether AI researchers should resist, but whether they possess the leverage, and the will, to do so effectively. The answer, predictably, remains to be seen.


Original article: https://arxiv.org/pdf/2604.18380.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
