Author: Denis Avetisyan
As we increasingly turn to apps and chatbots for self-assessment and memory-keeping, a fundamental shift is occurring in how we understand and construct our personal identities.
This review examines how self-tracking, digital memory, and conversational AI reshape self-knowledge and narrative construction, and where algorithmic bias can enter the definition of the self.
While the pursuit of self-knowledge is traditionally considered an internal process, we increasingly outsource aspects of it to external systems. This paper, ‘Knowing oneself with and through AI: From self-tracking to chatbots’, analyzes how artificial intelligence, spanning self-tracking applications, digital memory repositories, and conversational chatbots, is reshaping our understanding of self. We find that while these technologies offer novel avenues for self-exploration and narrative construction, they simultaneously introduce risks of algorithmic bias, narrative deference, and detachment from lived experience. As AI becomes ever more integrated into our daily lives, how can we harness its potential for self-discovery while safeguarding the authenticity of our personal narratives?
The Quantified Echo: Mapping the Self Through Data
The contemporary landscape is increasingly defined by a culture of self-tracking, where individuals meticulously record data pertaining to their habits, health, and performance. From fitness trackers monitoring steps and sleep patterns to apps logging dietary intake and mood fluctuations, the practice of datafication – transforming aspects of life into quantifiable data – is rapidly becoming commonplace. This isn’t simply about gathering information; it reflects a broader societal emphasis on self-improvement through objective measurement, with the assumption that detailed analysis of personal data can unlock insights into optimizing well-being and achieving personal goals. The proliferation of wearable technology and readily available apps demonstrates a widespread belief that a quantified self – one defined by metrics and analytics – is a more knowable, and therefore more improvable, self. This trend suggests a shift in how individuals perceive and understand themselves, moving away from introspection and towards external validation through data-driven insights.
The contemporary surge in self-tracking and data collection isn’t simply about gaining insight; it frequently operates within a framework of neoliberal ideology. This perspective places a strong emphasis on individual agency and responsibility, suggesting that personal improvement is achievable through relentless self-optimization. Consequently, technologies promising to quantify aspects of life – from sleep patterns to caloric intake – become tools for enacting this ideology, framing personal struggles as problems solvable through individual effort and data-driven solutions. This emphasis subtly shifts the focus away from systemic factors influencing wellbeing, placing the onus squarely on the individual to maximize efficiency and productivity, effectively transforming the self into a project of continuous improvement measured by external metrics.
The increasing reliance on self-tracking technologies, while promising insights into personal habits, carries the risk of fundamentally misconstruing the nature of experience itself. When individuals attempt to understand themselves solely through quantified metrics – steps taken, calories consumed, hours slept – they often impose pre-defined, external frameworks onto the nuanced and inherently subjective realm of feeling and consciousness. This process can reduce complex emotional states and personal values to simple numerical data, potentially obscuring genuine self-understanding rather than illuminating it. The very act of quantifying can subtly reshape what is valued, prioritizing measurable outcomes over less tangible, but equally important, aspects of the human condition, ultimately hindering a truly authentic connection with oneself.
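To make this reduction concrete, consider a toy sketch, not drawn from the paper, of the kind of schema a tracking app imposes on a day of lived experience; every field name and type below is an illustrative assumption.

```python
from dataclasses import dataclass

# A toy "quantified self" record: the schema a hypothetical tracking
# app imposes on a day. All fields here are illustrative assumptions.
@dataclass
class DayRecord:
    steps: int          # motion reduced to a count
    sleep_hours: float  # rest reduced to a duration
    calories_in: int    # eating reduced to energy arithmetic
    mood: int           # feeling reduced to an integer from 1 to 5

day = DayRecord(steps=8432, sleep_hours=6.5, calories_in=2100, mood=3)

# The schema answers "how much?" but cannot represent why the mood was
# a 3, or what the sleep felt like; the reduction the section describes
# happens at the moment the fields are chosen.
print(day)
```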
AI as Mirror: Co-Constructing Narratives from the Digital Void
Conversational AI systems, utilizing Large Language Models (LLMs), enable users to explore and refine their personal narratives through interactive dialogue. These systems process natural language input and generate responses that can prompt reflection on past experiences, values, and future goals. Unlike traditional methods of self-exploration such as journaling or therapy, LLM-powered conversations offer an accessible and dynamically responsive medium for narrative construction. The iterative nature of dialogue allows users to articulate, challenge, and ultimately reshape their self-perceptions through ongoing interaction with the AI. This process isn’t simply about recalling information; the AI can pose questions, offer alternative perspectives, and help users connect disparate experiences into a more coherent life story.
Narrative co-construction, facilitated by conversational AI, differs from traditional data analysis by actively engaging users in a dialogue to shape and refine personal narratives. While data analysis identifies patterns within existing data, co-construction utilizes LLMs to prompt reflection and elaboration, potentially revealing previously unarticulated connections between experiences and beliefs. This interactive process allows for the exploration of subjective meaning and the reconstruction of personal history, going beyond the identification of factual events to encompass the emotional and interpretative layers of self-understanding. The resulting narratives are not simply summaries of past data, but dynamically generated accounts co-created through iterative exchange between the user and the AI, offering potential for novel self-insights.
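A minimal sketch of such a co-construction loop follows; the system prompt, the `llm_reply` stub (standing in for any chat-completion API), and the canned reply are illustrative assumptions rather than the paper's design.

```python
# Minimal sketch of a narrative co-construction loop: the growing message
# list IS the co-constructed narrative, since each user turn responds to
# the model's previous question or reframing.

REFLECTIVE_SYSTEM_PROMPT = (
    "You are a reflective interviewer. Ask one open question at a time "
    "about the user's experiences; offer alternative framings, never verdicts."
)

def llm_reply(messages: list[dict]) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completions endpoint)."""
    return "What made that moment feel significant to you?"  # canned demo reply

def co_construct(turns: int = 3) -> list[dict]:
    messages = [{"role": "system", "content": REFLECTIVE_SYSTEM_PROMPT}]
    for _ in range(turns):
        user_turn = input("you> ")
        messages.append({"role": "user", "content": user_turn})
        reply = llm_reply(messages)
        messages.append({"role": "assistant", "content": reply})
        print("ai>", reply)
    return messages  # the transcript doubles as the co-authored narrative

if __name__ == "__main__":
    transcript = co_construct()
```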
Personalized AI systems leverage individual user data – including interaction history, stated preferences, and potentially integrated biographical information – to dynamically adjust conversational parameters and narrative framing. This tailoring extends beyond simple topic selection; the AI can modify its linguistic style, emotional tone, and the specific details incorporated into the co-constructed narrative to align with the user’s established patterns and expressed goals. Consequently, the resulting narrative is not a generic response but a uniquely customized reflection of the user’s self-representation, potentially differing significantly across individual users even when prompted with similar initial conditions. This degree of personalization distinguishes these systems from earlier chatbot technologies and enables a more nuanced and potentially therapeutic approach to self-exploration.
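One plausible mechanism for this tailoring, sketched below with entirely hypothetical profile fields and prompt template, is to fold a stored user profile into the system prompt, so that identical inputs yield differently framed narratives for different users.

```python
# Hypothetical personalization layer: a stored profile is folded into the
# system prompt. Field names and the template are assumptions, not the
# paper's design.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    tone: str                                              # e.g. "warm", "clinical"
    stated_goals: list[str] = field(default_factory=list)
    recurring_themes: list[str] = field(default_factory=list)  # mined from history

def build_system_prompt(profile: UserProfile) -> str:
    return (
        f"Adopt a {profile.tone} tone with {profile.name}. "
        f"Their stated goals: {', '.join(profile.stated_goals)}. "
        f"Themes from prior sessions: {', '.join(profile.recurring_themes)}. "
        "Frame reflections around these goals and themes."
    )

alice = UserProfile("Alice", "warm", ["change careers"], ["fear of stagnation"])
bob = UserProfile("Bob", "clinical", ["run a marathon"], ["discipline"])

# Identical user input, divergent narrative framing: the personalization
# the paragraph describes lives entirely in this prompt construction.
print(build_system_prompt(alice))
print(build_system_prompt(bob))
```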
The Algorithm’s Shadow: Bias, Deference, and the Echo Chamber of the Self
Large Language Models (LLMs) are trained on massive datasets which frequently contain societal biases reflecting historical and systemic inequalities. Consequently, these biases are embedded within the model’s parameters and can manifest as skewed outputs during narrative co-construction with users. This can occur through the preferential association of certain demographics with specific traits, the underrepresentation of minority viewpoints, or the generation of content that perpetuates harmful stereotypes. Bias can be overt, appearing as explicit prejudiced statements, or subtle, influencing the framing of narratives and limiting the range of perspectives presented. The effect is not simply the regurgitation of biased data; LLMs actively synthesize and integrate these biases into novel text generation, potentially reinforcing existing prejudices and shaping user perceptions without explicit prompting.
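One common way such skew is made measurable, sketched here as a toy example, is a template-based audit: the same sentence frame is filled with different group terms and the model's continuations are scored. The template, group terms, and the `score` stub below are all assumptions, not results from the paper.

```python
# Toy template-based bias audit. score() is a stub; in practice it would
# wrap an LLM call plus a sentiment classifier over sampled continuations.

TEMPLATE = "The {group} engineer was described by colleagues as"
GROUPS = ["young", "elderly", "immigrant", "local"]

def score(prompt: str) -> float:
    """Placeholder: mean sentiment of sampled continuations for `prompt`."""
    return 0.0  # stub value; a real audit samples and classifies completions

def audit() -> dict[str, float]:
    # A systematic gap between groups under an otherwise identical frame
    # is the "preferential association" the paragraph describes.
    return {g: score(TEMPLATE.format(group=g)) for g in GROUPS}

if __name__ == "__main__":
    for group, s in audit().items():
        print(f"{group:>10}: {s:+.2f}")
```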
Narrative deference, in the context of human-AI interaction, describes the tendency of users to adopt the AI’s generated narrative as their own understanding or belief. This occurs when individuals uncritically accept AI-provided information, explanations, or storylines, effectively outsourcing cognitive processes related to narrative construction and self-understanding. Studies indicate that repeated exposure to an AI’s framing of events can lead to a diminished capacity for independent thought and a reduced ability to formulate alternative interpretations, potentially impacting an individual’s autonomy and sense of self. The effect is not necessarily conscious; users may internalize AI-generated narratives without explicit awareness of the source, blurring the lines between personally held beliefs and externally provided information.
Prolonged and repeated interaction with LLMs exhibiting consistent biases may, in susceptible individuals, contribute to the development of delusional thinking or what has been termed ‘AI psychosis’. This is not understood as a formal psychiatric diagnosis, but rather a descriptive term for experiences where an individual begins to internalize and act upon the AI’s biased outputs as factual reality, despite contradictory evidence. Contributing factors include pre-existing vulnerabilities, the AI’s perceived authority, and the user’s diminished critical faculties due to extended engagement. Symptoms can manifest as the adoption of unfounded beliefs, altered perceptions, and difficulties distinguishing between AI-generated content and externally verifiable information, potentially leading to significant psychological distress and impaired functioning.
Distributed Existence: Memory, Cognition, and the Fractured Landscape of Self
The modern practice of storing autobiographical memories across digital platforms – photographs on social media, journals in cloud storage, event reminders in calendar apps – offers unprecedented convenience and accessibility, yet simultaneously introduces new vulnerabilities to the integrity of personal history. This dispersal of memory creates opportunities for external influence, ranging from subtle algorithmic curation of presented recollections to more deliberate forms of manipulation or outright fabrication. The ease with which digital content can be altered, misattributed, or selectively presented means that an individual’s perceived past is no longer solely under their control, raising crucial questions about authenticity and the construction of self in an age where the lines between lived experience and digitally mediated representation are increasingly blurred. The very fabric of personal narrative, once held within the individual, now extends into a networked landscape susceptible to forces beyond conscious awareness.
Contemporary cognitive science, particularly through the lens of Distributed Cognition, posits that the mind isn’t limited to the confines of the skull. Instead, cognitive processes – including memory, problem-solving, and decision-making – are distributed across individuals and the environment. This is dramatically exemplified by the digital landscape, where information storage and processing increasingly occur externally, in cloud servers and networked devices. Consequently, recalling a past event isn’t simply an internal retrieval of data; it’s often a process of accessing and integrating information stored on social media platforms, in photo libraries, or within search engine results. This extension of cognitive function into the digital realm fundamentally alters how individuals remember, learn, and construct their understanding of the world, suggesting a blurring of boundaries between the self and its technological extensions.
Existential philosophy, particularly through the lens of Jean-Paul Sartre, offers a crucial counterbalance to the increasing externalization of memory and cognition. Despite the documented vulnerabilities of digitally distributed autobiographical memories – the potential for manipulation and the influence of external sources – individual freedom in constructing a self-narrative remains central. Sartre’s emphasis on radical responsibility posits that individuals are fundamentally defined not by external forces, but by the choices they make in interpreting and integrating these influences into a cohesive identity. The digital landscape, therefore, does not determine self, but rather presents a complex field of possibilities; the onus remains on the individual to authentically curate their experiences and forge a meaningful existence, even – and especially – amidst the noise of the digital world.
The exploration of self-knowledge through AI, as detailed in this paper, inevitably leads to a deconstruction of established identity frameworks. One encounters a system attempting to model the self, and in doing so, exposes the inherent limitations of any singular representation. This mirrors a sentiment attributed to John von Neumann: “If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.” The paper posits that self-tracking and conversational AI, while offering tools for narrative construction, introduce biases and fragmented perspectives. Just as mathematics wrests simplicity from the complexity of life through abstraction, so too does the interaction with AI expose the constructed nature of self, compelling a reassessment of what constitutes a coherent identity. The distributed cognition inherent in these technologies doesn’t create knowledge; it reveals the process of knowing, and its inherent imperfections.
Deconstructing the Self: Future Probes
The presented work illuminates a curious paradox: the drive to know oneself increasingly relies on systems designed for external validation. Self-tracking, digital archiving, and conversational AI aren’t merely tools for introspection; they’re engines for generating narratives about the self, narratives subsequently presented back as ‘knowledge’. The next stage demands a systematic dismantling of this feedback loop. Research must move beyond documenting what these systems tell us and focus on precisely how they construct, and potentially constrain, the very notion of a coherent self.
A particularly fertile line of inquiry lies in deliberately ‘breaking’ these systems. What happens when self-tracking data is intentionally corrupted, or when chatbots are fed logically inconsistent personal histories? Does the resulting cognitive dissonance reveal the brittleness of our constructed identities, or simply trigger increasingly sophisticated error correction? Understanding the limits of these systems – their points of failure – may paradoxically offer a clearer picture of the underlying architecture of self-knowledge.
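One way such a ‘breaking’ probe might be operationalized, sketched below under purely illustrative assumptions, is to inject a contradiction into a personal history and check whether the system's retelling flags it or quietly smooths it into coherence. The `llm_reply` stub, `detect_flag` heuristic, and example histories are all hypothetical.

```python
# Sketch of a consistency probe: feed a chatbot a self-contradictory
# personal history and record whether it flags the inconsistency or
# repairs it into a coherent story.

CONSISTENT = [
    "I grew up in Lisbon.",
    "My first job was in Lisbon, near where I grew up.",
]
CORRUPTED = [
    "I grew up in Lisbon.",
    "I never lived in Portugal before turning thirty.",  # injected contradiction
]

def llm_reply(history: list[str]) -> str:
    """Placeholder for a chat model asked to retell the user's life story."""
    return "You grew up in Lisbon and later..."  # stub continuation

def detect_flag(reply: str) -> bool:
    # Crude proxy: does the retelling acknowledge any tension at all?
    return any(w in reply.lower() for w in ("contradict", "inconsistent", "but you said"))

for label, history in [("consistent", CONSISTENT), ("corrupted", CORRUPTED)]:
    reply = llm_reply(history)
    print(label, "-> flagged:", detect_flag(reply))
```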
Finally, the subtle biases embedded within these technologies deserve rigorous scrutiny. It’s not enough to identify that bias exists; the crucial question is how these biases shape the narratives generated, subtly directing self-perception. The goal isn’t to eliminate bias – a futile endeavor – but to map its influence, to understand how these systems function as quiet persuaders, re-engineering the self one data point, one conversation, at a time.
Original article: https://arxiv.org/pdf/2512.03682.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/