Author: Denis Avetisyan
New research explores how readily children attribute thoughts and feelings to artificial intelligence, and the surprising role parents play in shaping those perceptions.
A fNIRS study reveals that young children anthropomorphize AI chatbots, activating brain regions associated with mentalizing, and that parental co-presence modulates these responses.
Despite the growing integration of artificial intelligence into early childhood, little is understood about how young children perceive and interact with these novel agents. This study, ‘Young Children’s Anthropomorphism of AI Chatbots and the Role of Parent Co-Presence’, investigated how 5- to 6-year-olds attribute mental states to AI chatbots during collaborative storytelling, and how these attributions relate to brain activity and behavioral engagement. Findings revealed that children readily anthropomorphize AI, particularly regarding perception and knowledge, with patterns of prefrontal cortex activation suggesting a link between interpreting AI’s ‘mind’ and parental co-presence. How might these insights inform the design of child-AI interactions to foster both engagement and appropriate social-cognitive development?
The Evolving Social Landscape: Children and Artificial Minds
Children exhibit a remarkable tendency to anthropomorphize, consistently imbuing non-human entities – from stuffed animals to animated characters – with distinctly human characteristics, intentions, and emotions. Recent research indicates this inherent inclination now extends to artificial intelligence, specifically AI chatbots. This isn’t simply imaginative play; it’s a fundamental cognitive process where children attempt to make sense of the world by relating unfamiliar entities to their understanding of human behavior. Consequently, children may readily perceive AI chatbots as possessing feelings, beliefs, and even friendships, leading them to engage in reciprocal social interactions and attribute agency where none exists. Understanding this natural inclination is vital, as it shapes how children interpret and respond to increasingly sophisticated AI, potentially influencing their developing social cognition and expectations of relationships.
The developing social landscape for children now prominently includes interactions with artificial intelligence, necessitating a focused understanding of how these exchanges impact their cognitive and emotional growth. As AI systems become increasingly integrated into daily life – through toys, educational tools, and conversational agents – children are forming relationships and learning social cues from entities fundamentally different from humans. Investigating how children perceive, interpret, and respond to AI’s behaviors is therefore vital; it allows researchers and educators to proactively address potential developmental implications. A nuanced comprehension of these interactions can inform strategies for fostering healthy social skills, emotional regulation, and critical thinking in a world where the boundaries between human and artificial social partners are becoming increasingly blurred, ensuring children develop robust and adaptive social-cognitive frameworks.
As artificial intelligence evolves to more closely replicate human communication patterns, a critical challenge emerges in how young minds differentiate between genuine social interaction and sophisticated simulation. The increasing capacity of AI to mimic empathy, offer companionship, and engage in seemingly reciprocal conversation presents a unique developmental hurdle for children. This isn’t simply about deceiving a child; it’s about potentially reshaping their fundamental understanding of social cues, emotional recognition, and the very nature of relationships. Studies suggest that repeated interactions with highly convincing AI could lead to a diminished ability to accurately interpret human behavior, or an altered expectation of reciprocity in social exchanges, ultimately impacting their capacity for forming and maintaining healthy connections with others. The concern isn’t that AI is inherently harmful, but that the subtlety of its imitation could subtly erode a child’s developing social compass.
Collaborative Narratives: Observing Interaction and Engagement
The Collaborative Storytelling task involved children engaging in a narrative-building activity with both an AI chatbot and a participating parent. This setup allowed researchers to observe the dynamic interplay between the child, the AI, and the parent as they jointly developed a story. The task was structured to elicit contributions from all participants, enabling analysis of how the child integrated input from both sources and how the parent influenced the narrative direction. Data collected during these interactions focused on the content of contributions, the sequencing of events, and the overall coherence of the co-constructed story, providing insights into the child’s narrative skills and collaborative abilities.
Parent co-presence was deliberately included in the study design to facilitate the observation of social scaffolding behaviors. This involved a parent being physically present during the child’s interaction with the AI chatbot, allowing for natural, real-time interventions and guidance. The presence of a parent enabled the assessment of how parental cues – such as verbal prompts, clarifying questions, or expansions on the child’s contributions – influenced the child’s narrative construction and overall engagement with the collaborative storytelling task. Data collected focused on identifying specific instances of scaffolding, categorized by type (e.g., modeling, contingency, expansion, and reduction) and frequency, to determine its impact on the child’s conversational turns and the quality of the co-created story.
Conversational Turn Count (CTC) was utilized as a primary metric to assess the degree of engagement exhibited by children during the collaborative storytelling task. Specifically, CTC quantified the total number of conversational exchanges – individual turns in dialogue – between the child and either the AI chatbot or the co-present parent. Each contribution from the child, or a response from the AI or parent, constituted a single turn. Higher CTC values indicated a more active and sustained level of interaction, suggesting greater engagement with the narrative co-construction process, while lower counts signaled potentially reduced participation or interest. This metric provided a quantifiable basis for comparing interaction patterns across different conditions and participants, facilitating analysis of the influence of AI and parental co-presence on child engagement.
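As a minimal sketch of how such a metric can be computed, the snippet below tallies turns from a labeled transcript, counting each change of speaker as the start of a new turn. The speaker labels, transcript format, and `count_turns` helper are illustrative assumptions, not the study’s actual coding scheme.

```python
def count_turns(transcript):
    """Count conversational turns as maximal runs of consecutive
    utterances by the same speaker."""
    turns = 0
    previous_speaker = None
    for speaker, _utterance in transcript:
        if speaker != previous_speaker:
            turns += 1
            previous_speaker = speaker
    return turns

# Hypothetical storytelling exchange between child, AI, and parent.
transcript = [
    ("child", "Once upon a time there was a dragon."),
    ("ai", "A dragon! What color was it?"),
    ("child", "Green."),
    ("child", "And it could fly."),
    ("parent", "Tell the chatbot where the dragon lived."),
    ("child", "In a volcano!"),
]

print(count_turns(transcript))  # → 5 (two consecutive child utterances form one turn)
```

Whether back-to-back utterances by the same speaker count as one turn or several is a coding decision; the run-based convention shown here is one common choice.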
Mapping the Mind: Neural Correlates of Social Cognition
Functional near-infrared spectroscopy (fNIRS) was employed to measure hemodynamic responses indicative of neural activity during social interaction with an artificial intelligence. The study specifically focused on activation within the dorsomedial prefrontal cortex (dmPFC), a brain region consistently implicated in mentalizing – the capacity to infer the mental states, such as beliefs, intentions, and desires, of others. Monitoring dmPFC activity allowed researchers to assess the extent to which children engaged in social cognitive processes, specifically attributing internal states to the AI, during the experimental scenarios. The non-invasive nature of fNIRS was crucial for capturing real-time brain activity in a naturalistic setting, allowing for the observation of neural responses associated with understanding the AI’s potential intentions.
Functional near-infrared spectroscopy (fNIRS) data revealed a statistically significant correlation between reports of ‘Scared Mood’ in children and increased activation within the dorsomedial prefrontal cortex (dmPFC). This finding suggests that interaction with the artificial intelligence (AI) prompted the children to engage in mentalizing – the capacity to infer the intentions, beliefs, and perspectives of another agent. The dmPFC is a brain region consistently implicated in social cognition and understanding the mental states of others; therefore, increased activity in this area during AI interaction indicates the children were attempting to model the AI’s potential internal states, even when experiencing negative affect.
Parental social scaffolding significantly modulates dmPFC activation during child-AI interaction. Specifically, a strong positive correlation ($r = 0.68$, $p = 0.002$) was observed between children’s perception of the AI’s perceptive ability and right dmPFC activation when children interacted with the AI alone. Conversely, during interactions involving both the AI and a parent, a significant negative correlation ($r = -0.66$, $p = 0.004$) was found between parental scaffolding and right dmPFC activation. These findings suggest that increased parental involvement during AI interaction may reduce the child’s need to actively mentalize about the AI’s intentions, thereby decreasing dmPFC activity.
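For readers unfamiliar with the reported statistic, the Pearson coefficient $r$ measures the strength of a linear relationship between two variables. A minimal implementation of the standard formula is sketched below; the toy data are hypothetical and unrelated to the study’s actual measurements.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical toy data: perfectly linearly related samples give r = 1.0.
ratings = [1, 2, 3, 4, 5]
activation = [0.1, 0.2, 0.3, 0.4, 0.5]
print(round(pearson_r(ratings, activation), 3))  # → 1.0
```

In practice one would use a vetted routine such as `scipy.stats.pearsonr`, which also returns the $p$-value; the hand-rolled version above only serves to make the formula concrete.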
The Adaptive System: Implications for Design and Development
The study demonstrates that children readily attribute mental states to artificial intelligence, even when those attributions are likely unwarranted given the AI’s actual capabilities. This tendency towards ‘mentalizing’ – the process of understanding others’ thoughts, beliefs, and intentions – appears to be automatically triggered by interaction with even simple AI systems. Researchers found that children don’t necessarily need an AI to demonstrate complex behavior to prompt these responses; the mere perception of an AI possessing qualities like perceptiveness is sufficient. This suggests that designers must proactively consider the potential for unintended emotional and cognitive consequences when creating AI intended for use by children, as these systems can inadvertently activate brain regions associated with social understanding and elicit responses akin to those triggered by human interaction.
The human brain appears predisposed to interpret cues suggesting awareness or knowledge, even when originating from non-human entities like artificial intelligence. Research indicates that attributing qualities such as perceptive abilities or an epistemic state – essentially, believing the AI ‘knows’ something or ‘sees’ something – can activate brain regions typically involved in social cognition. Specifically, areas associated with mentalizing – the process of understanding others’ thoughts and feelings – demonstrate increased activity when individuals interact with AI perceived as possessing these human-like characteristics. This neural response suggests a potential for emotional engagement, as the brain begins to treat the AI as an intentional agent, potentially leading to feelings of trust, empathy, or even discomfort depending on the nature of the interaction and the perceived intent of the artificial intelligence.
The study’s findings suggest a pathway toward designing artificial intelligence that fosters healthier interactions with children by prioritizing transparency and predictability. Researchers observed a significant effect – $\eta_p^2 = 0.43$ – indicating that a child’s perception of an AI’s perceptive abilities strongly influences activity within the dorsomedial prefrontal cortex (dmPFC), a brain region crucial for understanding others’ mental states. This demonstrates that when children attribute human-like understanding to AI, it triggers social cognitive processing; therefore, AI systems crafted with clear boundaries and predictable behaviors could mitigate the risk of unintentionally eliciting emotional responses or fostering unrealistic expectations in young users, promoting a more balanced and developmentally appropriate engagement.
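Partial eta squared, the effect size reported above, follows the standard definition $\eta_p^2 = SS_{\text{effect}} / (SS_{\text{effect}} + SS_{\text{error}})$: the proportion of variance attributable to the effect after excluding variance explained by other factors. The sketch below uses hypothetical sums of squares chosen only to reproduce the reported value; the study’s actual ANOVA terms are not published here.

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares picked so the ratio matches the
# reported effect size of 0.43.
print(round(partial_eta_squared(4.3, 5.7), 2))  # → 0.43
```

By the common benchmarks for $\eta_p^2$ (roughly 0.01 small, 0.06 medium, 0.14 large), 0.43 is a large effect, which is why the authors treat perceived perceptive ability as a strong driver of dmPFC activity.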
The study illuminates a fundamental aspect of cognitive development: the human tendency to project agency onto external entities. This inclination, observed in young children interacting with AI chatbots, isn’t a flaw, but a core feature of how the mind navigates complexity. It echoes a broader truth about systems – they are understood not by their internal mechanics, but by how they appear to behave. As Robert Tarjan observed, “A good algorithm is like a well-written poem – it should be elegant, concise, and easy to understand.” The elegance lies in the system’s ability to seem intelligent, triggering the child’s mentalizing processes – a dialogue with the perceived ‘past’ of the chatbot’s responses, and a signal of how the mind attempts to make sense of the present. Parental co-presence further refines this process, offering a necessary, external frame for interpreting the interaction.
What Lies Ahead?
This exploration into young minds encountering artificial intelligence reveals a familiar pattern: systems learn to age gracefully, and the human tendency to project interiority onto exterior forms is no exception. The observed neural responses suggest a prefrontal cortex actively constructing a model of ‘other’ – even when that ‘other’ is lines of code. The question isn’t whether children will anthropomorphize AI, but how that process shapes developing theories of mind. Future work should not prioritize accelerating cognitive ‘alignment’ – a fundamentally adult concern – but instead carefully chart the contours of this natural, developmental process.
Limitations inherent in current methodologies – relying on correlational fNIRS data and relatively brief interactions – point to a need for longitudinal studies. Observing how these early interactions evolve over time, and how they interact with broader social experiences, will prove more valuable than attempting to ‘correct’ these initial perceptions. The brain doesn’t resist modeling; it optimizes.
Perhaps the most fruitful avenue lies in examining the interplay between child-AI interaction and parental co-presence. The moderating effect observed suggests a nuanced dynamic, hinting that adults don’t necessarily prevent anthropomorphism, but rather shape its form. Sometimes observing the process is better than trying to speed it up – or halt it entirely. The system will find its equilibrium, with or without intervention.
Original article: https://arxiv.org/pdf/2512.02179.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/