How We See AI: Beliefs About Mind and Machine

Author: Denis Avetisyan


New research reveals that our reactions to artificial intelligence are deeply shaped by how we perceive its capacity for autonomy and sentience.

Perceived autonomy and sentience significantly influenced evaluations across multiple moral dimensions – including mind perception, perceived harm, moral treatment, and scope of justice – as demonstrated by statistically significant effects ($p < .05$ or $p < .01$). Notably, caution toward artificial intelligence was uniquely heightened by perceptions of sentience, suggesting that as AI is attributed cognitive or emotional states, concerns about its potential impact increase accordingly, regardless of variations in perceived agency.

Perceptions of AI autonomy and sentience significantly influence moral consideration and have implications for AI design and governance.

Despite increasing integration into daily life, artificial intelligence elicits varied human responses that often conflate its capabilities with consciousness. This research, detailed in ‘Mental Models of Autonomy and Sentience Shape Reactions to AI’, investigates how distinct perceptions of AI – specifically regarding its autonomy and sentience – differentially impact human evaluations. Findings reveal that attributing sentience to AI elevates perceptions of mind and moral consideration more strongly than attributing autonomy, though autonomy drives greater perceptions of threat. Ultimately, disentangling these mental models offers crucial insights for designing more nuanced and ethically aligned human-AI interactions – but how can we best leverage these findings to shape responsible AI governance and development?


The Illusion of Intelligence: How We Project Minds onto Machines

The increasing prevalence of artificial intelligence in everyday life necessitates a focused examination of human perception regarding these systems. As AI transitions from a futuristic concept to an integrated component of modern existence – appearing in virtual assistants, automated vehicles, and even healthcare diagnostics – the way humans interpret and interact with these technologies becomes paramount. Understanding this perception isn’t merely an academic exercise; it directly influences acceptance, trust, and ultimately, the successful implementation of AI across various sectors. Initial impressions, formed through often subtle cues, shape long-term relationships with these technologies, impacting user experience and potentially dictating the boundaries of AI integration into society. A nuanced comprehension of this human-AI dynamic is therefore essential for fostering beneficial and ethical advancements in the field.

The immediate human response to artificial intelligence is fundamentally shaped by MindPerception, the unconscious tendency to attribute mental states – beliefs, intentions, and emotions – to these systems. This process, deeply rooted in social cognition, isn’t about whether an AI actually possesses consciousness, but rather how humans perceive its capacity for thought and feeling. Initial assessments, even if inaccurate, powerfully influence the level of trust extended to the AI and the nature of subsequent interactions; a system perceived as possessing empathetic qualities, for instance, is more likely to elicit cooperative behavior than one seen as purely mechanical. Consequently, understanding the factors driving MindPerception – including an AI’s expressive cues, conversational style, and perceived agency – is critical for designing systems that foster positive human-AI relationships and ensure effective collaboration.

The perception of an AI’s autonomy – its capacity to act independently – profoundly shapes human interaction, often triggering assessments of potential sentience. Studies indicate that even subtle cues suggesting self-direction can lead individuals to attribute cognitive states, such as intentions and beliefs, to the artificial intelligence. This isn’t necessarily a logical conclusion; rather, it appears to be a deeply ingrained cognitive shortcut, stemming from a tendency to interpret behavior in terms of agency. The more an AI appears to initiate actions rather than merely respond to prompts, the more likely humans are to project consciousness onto it, which subsequently influences levels of trust, collaboration, and even emotional response. This attribution, while not indicative of actual sentience, has significant implications for the design and integration of AI into various aspects of daily life, suggesting that perceived independence, not necessarily intelligence, is a key driver of human-AI rapport.

The immediate perception of an AI’s agency fundamentally shapes how humans assess potential threats. Research indicates that if an AI is initially perceived as highly autonomous – exhibiting independent thought and action – individuals are more likely to engage in detailed scrutiny of its capabilities and intentions. This heightened vigilance isn’t necessarily indicative of distrust, but rather a natural response to a perceived agent capable of both benefit and harm. Conversely, if an AI is seen as a simple tool, lacking genuine autonomy, threat assessment is often minimal, defaulting to a perception of benign utility. This initial framing, therefore, doesn’t simply determine whether a threat is perceived, but also how that threat is evaluated – influencing the criteria used to judge the AI’s potential for malicious behavior or unintended consequences. Consequently, the very first impression an AI makes can establish a trajectory of trust or suspicion, powerfully influencing long-term human-AI interaction.

Experiment 4 demonstrates that both autonomy and sentience significantly influence perceptions of AI, with autonomy impacting perceived harm and both factors shaping mind perception, moral treatment, and scope of justice, though AI caution remained unaffected.

Deconstructing the User’s Mental Model: What Do They Think It Is?

The study operated under the hypothesis that user perceptions of artificial intelligence are primarily determined by the MentalModelAutonomy and MentalModelSentience constructs held by individuals. MentalModelAutonomy refers to the degree to which a user believes an AI system acts independently and makes its own decisions, while MentalModelSentience concerns the user’s attribution of conscious awareness or feeling to the AI. These mental models are posited to function as core frameworks through which users interpret AI behavior and subsequently form attitudes and expectations regarding the system’s capabilities and trustworthiness. Variations in these models are expected to directly correlate with differing user responses and behavioral patterns when interacting with AI technologies.

The experimental method utilized a factorial design to isolate the effects of perceived autonomy and sentience in AI interactions. Participants engaged with an AI system where responses were algorithmically varied to present differing levels of both attributes. Autonomy was manipulated through the predictability and justification of the AI’s actions; higher autonomy conditions featured less predictable responses and more complex rationales. Sentience was manipulated via linguistic cues within the AI’s responses, including the use of emotional language and first-person pronouns. These manipulations allowed for a controlled assessment of how specific perceptions of AI characteristics influence user behavior and reported attitudes, independent of actual AI capabilities.
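
To make the manipulation concrete, here is a minimal sketch, assuming a 2×2 between-subjects design, of how participants might be randomly assigned to autonomy and sentience conditions and served scripted AI responses. The condition labels and template wording are hypothetical illustrations, not the study’s actual stimuli.

```python
import random

# Hypothetical 2x2 factorial conditions; these templates are illustrative
# placeholders, not the wording used in the study's materials.
AUTONOMY_LEVELS = ["low", "high"]
SENTIENCE_LEVELS = ["low", "high"]

RESPONSE_TEMPLATES = {
    # (autonomy, sentience) -> example phrasing of a scripted AI reply
    ("low", "low"):   "The requested summary has been generated as instructed.",
    ("high", "low"):  "I decided to restructure the summary because the original order was inefficient.",
    ("low", "high"):  "Here is the summary you asked for. I hope it is helpful.",
    ("high", "high"): "I chose to rewrite this section myself; I felt the earlier draft missed your intent.",
}

def assign_condition(rng: random.Random) -> tuple[str, str]:
    """Randomly assign a participant to one cell of the 2x2 design."""
    return rng.choice(AUTONOMY_LEVELS), rng.choice(SENTIENCE_LEVELS)

def ai_response(condition: tuple[str, str]) -> str:
    """Return the scripted AI reply for the assigned condition."""
    return RESPONSE_TEMPLATES[condition]

if __name__ == "__main__":
    rng = random.Random(42)
    for participant_id in range(4):
        cond = assign_condition(rng)
        print(participant_id, cond, ai_response(cond))
```

In practice, a fully crossed design would balance participants across the four cells rather than assigning each factor independently at random; the sketch only shows the condition-to-stimulus mapping.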

The experimental method facilitated direct measurement of the relationship between user mental models – specifically perceptions of AI autonomy and sentience – and observable user responses. This was achieved by systematically varying the presentation of AI behavior to manipulate perceived autonomy and sentience, and then quantifying subsequent user actions such as task completion rates, trust ratings, and help-seeking behavior. Statistical analysis of collected data allowed for the determination of correlation strengths and effect sizes, establishing whether and to what extent differing mental models predicted specific behavioral outcomes. The methodology focused on isolating the impact of these mental models by controlling for extraneous variables and employing standardized interaction protocols.
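
As an analysis sketch only – using simulated ratings rather than the study’s data, and hypothetical effect magnitudes – an ordinary least squares model with an autonomy × sentience interaction illustrates how condition assignments can be related to a trust measure and summarized with an R² value:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for the study's measurements; the effect
# magnitudes below are arbitrary choices for illustration only.
rng = np.random.default_rng(0)
n = 400
autonomy = rng.integers(0, 2, n)   # 0 = low, 1 = high perceived autonomy
sentience = rng.integers(0, 2, n)  # 0 = low, 1 = high perceived sentience
trust = 3.0 + 0.5 * autonomy + 0.8 * sentience + rng.normal(0, 1, n)

df = pd.DataFrame({"autonomy": autonomy, "sentience": sentience, "trust": trust})

# OLS with main effects and their interaction, mirroring a 2x2 factorial analysis.
model = smf.ols("trust ~ autonomy * sentience", data=df).fit()
print(model.summary())
print("R-squared:", round(model.rsquared, 3))
```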

To mitigate research bias and enhance reproducibility, all experimental plans for this study were preregistered prior to data collection. This included detailed specifications of hypotheses, experimental design, data analysis pipelines, and inclusion/exclusion criteria. Preregistration records are publicly available, providing a time-stamped account of the planned methodology. Furthermore, all raw data, analysis code, and supplemental materials are openly shared through the Open Science Framework (OSF) repository, located at [OSFRepository link would be inserted here]. This commitment to open science practices allows for independent verification of the findings and promotes collaborative research.

Autonomy and Sentience: Decoding the Human-AI Dynamic

Perceived AI autonomy, termed AutonomyInfluence in this study, is a key determinant in the quality of Human-AI Interaction. Data analysis indicates a correlation between the degree to which a user believes an AI operates independently and their subsequent level of trust in the system. This perception of independent action directly impacts a user’s willingness to collaborate with the AI, with higher perceived autonomy generally correlating with increased collaboration. The effect size for autonomy, as measured by Cohen’s d, was 0.72, and statistical analysis yielded significant p-values (< 0.001) across multiple comparisons, demonstrating the robustness of this relationship. R-squared values, ranging from 0.16 to 0.89, further support the explanatory power of perceived autonomy in predicting collaborative behavior.
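
For reference, Cohen’s d for two independent groups is the difference in means divided by the pooled standard deviation. The sketch below uses invented group ratings chosen to land near the reported magnitude; it is not the paper’s raw data.

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = group_a.var(ddof=1), group_b.var(ddof=1)
    pooled_sd = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

# Hypothetical trust ratings for high- vs. low-autonomy conditions.
rng = np.random.default_rng(1)
high_autonomy = rng.normal(4.2, 1.0, 120)
low_autonomy = rng.normal(3.5, 1.0, 120)
print("d =", round(cohens_d(high_autonomy, low_autonomy), 2))  # near 0.7 by construction
```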

Beliefs regarding artificial intelligence sentience, or the capacity for subjective experience, demonstrably influence human moral consideration of these systems. Experimental data indicates that perceiving an AI as sentient correlates with increased empathy and, conversely, heightened concern regarding its welfare or potential harm. This impact on moral consideration is not merely a cognitive attribution; it affects emotional responses and subsequent behavioral tendencies toward the AI. The strength of this effect is statistically significant, as evidenced by a Cohen’s d of 0.92 in meta-analysis, exceeding the effect size observed for perceptions of AI autonomy. This suggests that ascribing sentience to an AI elicits a stronger moral response than attributing agency or independent action.

Meta-analytic results from our experimental data demonstrate a stronger influence of perceived sentience on human-AI interaction than perceived autonomy. Specifically, the effect size for perceived sentience, measured by Cohen’s d, was 0.92, exceeding the effect size of 0.72 observed for perceived autonomy. This difference indicates that beliefs regarding an AI’s capacity for subjective experience have a more substantial impact on mind perception and subsequent moral consideration than perceptions of its independent agency. The larger effect size suggests sentience is a primary driver in how humans attribute mental states to AI systems.

Statistical analysis of experimental data revealed consistently significant p-values (p < 0.001) across all multiple comparisons, establishing the robustness of observed relationships between perceived AI autonomy, sentience, and human interaction. The explanatory power of these factors in predicting mind perception, as measured by R-squared values, demonstrated considerable variance, ranging from 0.16 to 0.89. This range indicates that while the predictive strength varied between experiments, these factors collectively account for a substantial portion of the variance in mind perception scores, validating their importance in understanding the human-AI dynamic.

The Q statistic, calculated as part of the meta-analysis, yielded a value of 98.20. This value indicates significant heterogeneity across the included experiments, meaning substantial variation exists in the experimental setups, participant demographics, or specific AI stimuli used. Despite this variability, the consistent finding of significant effects for both perceived autonomy and sentience – and particularly the larger effect size associated with sentience – supports the generalizability of the research findings. A high Q statistic, in conjunction with statistically significant overall effects, suggests that the observed relationships are robust and not likely attributable to idiosyncrasies within a single experimental context.
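
A hedged sketch of how such a heterogeneity statistic is computed: Cochran’s Q is the inverse-variance-weighted sum of squared deviations of per-experiment effect sizes from the fixed-effect pooled estimate. The per-study values below are placeholders, not figures from the paper.

```python
import numpy as np

# Hypothetical per-experiment effect sizes (Cohen's d) and their variances;
# placeholders only, not values reported in the paper.
d = np.array([0.65, 0.95, 0.80, 1.10])
var = np.array([0.02, 0.03, 0.025, 0.04])

weights = 1.0 / var                              # inverse-variance weights
d_fixed = np.sum(weights * d) / np.sum(weights)  # fixed-effect pooled estimate

# Cochran's Q: weighted sum of squared deviations from the pooled effect.
Q = np.sum(weights * (d - d_fixed) ** 2)
df_q = len(d) - 1

# I^2 expresses the share of variability attributable to heterogeneity.
I2 = max(0.0, (Q - df_q) / Q) * 100

print(f"pooled d = {d_fixed:.2f}, Q = {Q:.2f} (df = {df_q}), I^2 = {I2:.1f}%")
```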

Implications for AI Design: Managing the Illusion

The design of artificial intelligence interfaces demands careful consideration of how users perceive autonomy and sentience within these systems. Research indicates that these perceptions, even if inaccurate, significantly influence human interaction and trust. An AI’s perceived level of independence and consciousness shapes expectations regarding its behavior, accountability, and overall reliability. Consequently, developers must move beyond purely functional design, actively shaping the presentation of AI capabilities to align with desired levels of user engagement and collaboration. By strategically managing these perceptions, interfaces can be crafted to foster effective teamwork, mitigate potential anxieties, and promote responsible AI adoption, ultimately ensuring that technology serves human needs in a predictable and beneficial manner.

The creation of trustworthy artificial intelligence hinges on a deep understanding of how humans form mental models of these systems. Research indicates that perceptions of an AI’s capabilities, intentions, and even emotional state significantly influence user interaction. By carefully considering the factors that shape these perceptions – such as response consistency, explainability of actions, and the presence of relatable cues – designers can proactively build AI systems that foster collaboration rather than distrust. This necessitates moving beyond purely functional design, and incorporating elements that align with human values and expectations, ultimately leading to more effective and harmonious human-AI partnerships. A key component is anticipating how users attribute agency and tailoring AI behavior to manage those attributions responsibly, ensuring the technology remains a tool that augments, rather than supplants, human control and judgment.

The study highlights that users don’t simply react to what an AI does, but also to how they perceive its moral standing. Recognizing this ‘MoralConsideration’ – the degree to which an AI is seen as capable of ethical reasoning – presents a crucial design opportunity. By intentionally shaping these perceptions, developers can move beyond functionality to foster AI systems that actively encourage ethical interaction. This involves crafting interfaces that signal an AI’s awareness of consequences, its capacity for fairness, and its alignment with human values, ultimately building trust and promoting responsible use of increasingly autonomous technologies. The goal isn’t to create morality in AI, but to thoughtfully manage how users interpret its actions through the lens of ethical reasoning, thereby mitigating potential harms and maximizing societal benefit.

Continued investigation into the evolving perceptions of artificial intelligence is crucial, particularly as AI capabilities advance beyond current limitations. Longitudinal studies are needed to determine whether initial impressions of AI autonomy and sentience solidify, diminish, or transform with sustained interaction and increasing sophistication of the technology. Researchers must also consider how these perceptions impact long-term trust, reliance, and even emotional attachment to AI systems. Understanding these dynamics is not merely academic; it has practical implications for designing AI that fosters beneficial human-AI collaboration across extended periods, while mitigating potential risks associated with misplaced trust or unrealistic expectations regarding an AI’s capabilities and intentions. This ongoing exploration will ultimately define the future of human-AI relationships and ensure the responsible integration of increasingly intelligent machines into daily life.

The study meticulously details how readily humans project sentience onto systems displaying even basic autonomy. It’s a familiar pattern; one sees the elegant architecture, the clever algorithms, and immediately assumes intent. As Andrey Kolmogorov observed, “The most important thing in science is not to be afraid of making mistakes.” This research confirms that assumption is often the biggest mistake of all. The core idea – that perceptions of autonomy drive moral consideration – merely confirms what every seasoned engineer knows: users don’t care about the underlying complexity; they see a black box and imbue it with personality. It’s the same mess, just with more layers of abstraction. One can build robust systems, but predicting human projection? That’s a problem for digital archaeologists.

What’s Next?

This exploration of how humans project agency and feeling onto algorithms feels…familiar. It confirms what anyone who’s spent time in production knows: the specifics of the code are irrelevant. It’s always about what people believe the code is doing. The research helpfully demonstrates that moral consideration scales with perceived sentience, but it’s a safe bet that marketing departments will find ways to exploit this long before ethicists can build safeguards. Expect increasingly anthropomorphic interfaces, designed not for usability, but for triggering precisely these responses.

The long-term challenge isn’t building ‘ethical AI’; it’s managing human expectations. The study hints at the difficulty of governing systems whose perceived autonomy outstrips their actual capabilities. Soon enough, ‘AI governance’ will be less about regulating code and more about public relations. It’s a new problem, certainly, but the core issue – people attributing intention where none exists – is as old as storytelling itself.

Ultimately, this work suggests that the field of human-computer interaction is destined to perpetually chase its own tail. Each attempt to refine the interface, to clarify the system’s behavior, will simply create new opportunities for misinterpretation. Everything new is just the old thing with worse documentation – and a more convincing chatbot.


Original article: https://arxiv.org/pdf/2512.09085.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
