The Stories We Tell About AI: Unpacking the Metaphors Shaping Policy

Author: Denis Avetisyan


A new framework for analyzing the narrative language used in artificial intelligence ethics and governance is critical for understanding the underlying assumptions driving policy debates.

This paper introduces ‘Narrative Frames,’ a system for categorizing metaphors in AI discourse to improve analysis and foster interdisciplinary collaboration.

Despite the pervasive influence of metaphor in shaping discussions around artificial intelligence, current analytical approaches suffer from inconsistent definitions and limited comparability. This paper introduces ‘Narrative Frames,’ a novel categorization system detailed in ‘Narrative Frames: A New Approach to Analysing Metaphors in AI Ethics and Policy Discourse’, designed to standardize the analysis of metaphor in AI policy debates. Developed through inductive coding of a large dataset and cross-referencing with existing scholarship, this framework yields 49 distinct narrative frames for understanding how metaphors construct perceptions of AI governance. By offering a shared vocabulary and revealing both present and absent frames, can this approach unlock a more transparent understanding of the underlying assumptions and power dynamics shaping the future of AI?


The Echo of Assumptions: Framing the AI Conversation

The pursuit of effective artificial intelligence governance is frequently hampered not by a lack of technical expertise, but by deficiencies in communication that subtly mask foundational assumptions. Current discussions surrounding AI policy are often laden with jargon, abstract concepts, and unacknowledged premises regarding agency, control, and even the very definition of intelligence. This obscures the core values at stake and creates a significant barrier to productive dialogue, as stakeholders operate with differing, often unstated, understandings of the problem space. Consequently, proposed solutions may address symptoms rather than root causes, and fail to gain broad acceptance due to underlying conceptual mismatches; a seemingly technical debate may, in reality, be a disagreement over fundamental philosophical viewpoints disguised as pragmatic concerns.

The pervasive use of metaphor in discussions surrounding artificial intelligence extends far beyond simple rhetorical flourish; it fundamentally structures cognition regarding the technology and its implications. Cognitive science demonstrates that metaphors aren’t just figures of speech, but cognitive tools that allow humans to understand abstract concepts by mapping them onto more familiar domains. Consequently, framing AI as a ‘black box’, a ‘powerful tool’, or even a ‘thinking machine’ isn’t neutral; each metaphor subtly emphasizes certain aspects of the technology while obscuring others. This can lead to skewed perceptions of risk, responsibility, and potential benefits, creating blind spots in policy debates and hindering the development of truly effective governance strategies. By shaping how problems are defined and solutions are envisioned, ingrained metaphorical frames can inadvertently limit the scope of inquiry and perpetuate unintended consequences.

The framing of artificial intelligence through ingrained metaphors significantly impacts policy discussions, often limiting the scope of debate and hindering effective governance. These deeply embedded conceptual frames (AI as a ‘black box,’ a ‘powerful tool,’ or a ‘thinking machine’) are not neutral descriptions; they actively shape perceptions of risk, responsibility, and potential solutions. Consequently, policymakers may unknowingly operate within constrained cognitive landscapes, prioritizing certain interventions while overlooking others, or even misdiagnosing the core challenges. A critical awareness of these metaphorical underpinnings is therefore essential to move beyond superficial arguments and foster a more comprehensive, nuanced, and ultimately productive dialogue regarding the responsible development and deployment of AI technologies.

The Architecture of Understanding: Conceptual Tools

Conceptual Metaphor Theory proposes that human understanding of abstract concepts – such as time, love, or arguments – is systematically structured by metaphorical mappings from more concrete, embodied experiences. These mappings are not merely rhetorical devices, but cognitive mechanisms where we understand one domain of experience (the target domain, e.g., argument) in terms of another (the source domain, e.g., war). This process relies on correlations established through physical interactions and bodily sensations; for example, the metaphor “ARGUMENT IS WAR” manifests in language through expressions like “He attacked my points” or “I defended my position”. The theory posits that these consistent mappings reveal underlying cognitive structures and are fundamental to how we reason and communicate about abstract ideas.

Conceptual Metaphor Theory challenges the traditional view of metaphor as a purely rhetorical device; instead, it proposes that metaphor is a fundamental aspect of cognition. The theory asserts that the human mind understands and reasons about abstract concepts, such as time, love, or arguments, by mapping them onto more concrete, sensorimotor experiences. This cognitive process is not limited to language; metaphorical structuring occurs across modalities, influencing thought, reasoning, and action. Consequently, metaphors are not merely ways of talking about concepts, but rather constitute the very basis of how those concepts are understood and mentally represented, shaping our perception and interpretation of reality.

Critical Metaphor Analysis (CMA) extends Conceptual Metaphor Theory by systematically examining how metaphorical language constructs and reinforces specific ideological positions. CMA doesn’t simply identify metaphors; it investigates how these metaphors function to normalize certain perspectives, obscure alternative viewpoints, and justify particular social practices. This involves analyzing the systematic patterns of metaphorical framing, tracing the historical and social contexts of these patterns, and revealing the power dynamics embedded within them. The methodology focuses on demonstrating that seemingly neutral or natural language choices are, in fact, ideologically loaded and contribute to the construction of social reality, often serving to maintain existing power structures or promote specific agendas.

Tracing the Patterns: Mapping Metaphorical Structures

The research utilizes Qualitative Content Analysis as its primary methodology, specifically a Directed Approach. This means the analysis is guided by an existing theoretical framework – Conceptual Metaphor Theory – to inform the coding process. Rather than beginning with a completely open coding scheme, initial codes are derived from the core principles of Conceptual Metaphor Theory, allowing for a focused investigation of metaphorical expressions within the data. This approach balances the need for systematic analysis with the theoretical grounding necessary to interpret the function and prevalence of conceptual metaphors, ensuring the findings are anchored in established linguistic theory.

The process of identifying metaphorical patterns within the text data utilizes inductive coding, an iterative approach where initial theoretical expectations, derived from Conceptual Metaphor Theory, are continuously refined. This involves a close reading of the corpus to identify instances of metaphorical language, followed by the development of codes based on emergent themes. These codes are then applied systematically across the data, allowing for the identification of recurring patterns and the modification of the initial theoretical framework as needed. The inductive nature of the coding process ensures that the analysis remains grounded in the data itself, rather than being solely driven by pre-existing assumptions, and allows for the discovery of unexpected metaphorical patterns.

The analytical process incorporated both the MetaNet Database and FrameNet to provide a comprehensive linguistic resource for metaphor identification and validation. MetaNet, a lexical database of English metaphors, assisted in recognizing established metaphorical mappings, while FrameNet, a database centered on semantic frames, helped to contextualize the identified metaphors within broader conceptual structures. A total of 685 instances of metaphorical language were subjected to analysis, utilizing these databases to verify the presence of conventional metaphorical expressions and to determine the underlying conceptual frames activated by the language used in the source material.
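To make the frequency side of this analysis concrete, the sketch below shows how coded metaphor instances might be tallied into narrative frames. The expressions and frame labels are hypothetical illustrations, not data from the paper's actual corpus of 685 instances.

```python
from collections import Counter

# Hypothetical coded instances: (metaphorical expression, assigned frame).
# Labels here are illustrative stand-ins, not the paper's real codes.
coded_instances = [
    ("AI arms race", "WAR"),
    ("attack on jobs", "WAR"),
    ("roadmap to AGI", "JOURNEY"),
    ("milestones ahead", "JOURNEY"),
    ("black box", "OPACITY"),
]

# Tally how often each narrative frame is activated across the corpus,
# mirroring the kind of frequency analysis that follows coding.
frame_counts = Counter(frame for _, frame in coded_instances)

for frame, count in frame_counts.most_common():
    print(f"{frame}: {count}")
```

A real pipeline would also record absent frames, i.e., frames attested in prior scholarship whose count here is zero, since the paper treats what is missing from discourse as analytically significant.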

The Stories We Tell: Dominant Frames in AI Discourse

The conceptualization of artificial intelligence through a “WAR” narrative frame fundamentally shapes approaches to its governance, often casting development as a series of competitive battles for dominance. This framing emphasizes strategic advantage and national security, prioritizing rapid innovation and deployment over careful consideration of ethical implications or societal impact. Such a perspective fosters an environment where AI research becomes intrinsically linked to geopolitical rivalry, incentivizing a focus on capabilities that outperform competitors rather than prioritizing safety or equitable access. Consequently, discussions surrounding regulation and oversight frequently encounter resistance, framed as impediments to progress in a critical arena of competition, and hindering collaborative efforts towards responsible AI development.

The conceptualization of artificial intelligence as a journey presents a markedly different approach to its development, framing progress not as a series of competitive victories, but as a gradual and iterative movement toward a predetermined, often utopian, future. This narrative emphasizes the obstacles encountered along the way – technical hurdles, ethical dilemmas, and societal integration challenges – not as defeats, but as essential components of the overall progression. By positioning AI development as an expedition, this frame encourages collaborative problem-solving and a long-term perspective, focusing on the transformative potential realized through sustained effort and adaptation. This contrasts sharply with frames that prioritize immediate gains or dominance, instead highlighting the continuous refinement and incremental improvements inherent in pursuing a desired technological horizon.

A comprehensive investigation into the language surrounding artificial intelligence yielded a novel typology of 49 distinct ‘Narrative Frames’ used to discuss and understand its development. This framework wasn’t built in isolation; researchers systematically cross-referenced these frames with findings from 82 existing critical metaphor analysis studies. The resulting standardized system allows for a more nuanced examination of AI discourse, moving beyond simple keyword analysis to reveal the underlying stories and assumptions that shape perceptions of the technology. By identifying these recurring narrative patterns, the typology offers a powerful tool for analyzing how AI is framed, debated, and ultimately, governed – revealing not just what is said about AI, but how it is said, and the implications of those choices.

The pursuit of categorization, as demonstrated in this paper’s proposal of ‘Narrative Frames,’ echoes a timeless human impulse – to impose order upon complexity. One might recall Alan Turing’s observation: “Sometimes people who are unaware that their reasoning is based on assumptions are too proud to admit it.” This categorization isn’t about achieving a perfect map of the discourse, but rather illuminating the inherent assumptions within AI policy debates. The system, as it grows, reveals not just what is being said about AI governance, but how it’s being framed, and therefore, what remains unsaid. Each identified frame begins as an attempt to capture a truth, and inevitably ends in a recognition of its limitations.

What’s Next?

The categorization proposed within this work, ‘Narrative Frames’, is not an arrival, but a map drawn before the territory fully reveals itself. Attempts at taxonomy are, inevitably, prophecies of failure; the emergent nature of discourse guarantees that any fixed system will require constant renegotiation. The value lies not in capturing a static landscape of metaphor, but in creating a shared vocabulary for charting its shifts and drifts. This standardization, however fragile, may allow for more robust interdisciplinary conversation – a temporary bulwark against the isolation of specialized thought.

The deeper challenge remains unaddressed. Identifying how these Narrative Frames propagate, the mechanisms by which assumptions become embedded in policy, demands a move beyond simple cataloging. The study of metaphor, after all, isn’t about the figures of speech themselves, but about the power dynamics they conceal and reinforce. There are no best practices – only survivors.

Order is just a cache between two outages. Future work must therefore confront the uncomfortable truth that even the most rigorous analysis is a localized phenomenon, a momentary glimpse of coherence within a fundamentally chaotic system. The task is not to build a perfect model, but to cultivate a persistent awareness of its inevitable imperfections, and to track the fault lines as they appear.


Original article: https://arxiv.org/pdf/2603.17192.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
