Author: Denis Avetisyan
New research reveals the surprisingly human factors that shape our willingness to trust and rely on large language model interactions.
Perceptions of warmth, competence, and empathy in large language models significantly influence human trust, particularly when discussing subjective topics.
As large language models (LLMs) become increasingly integrated into daily life, a paradox emerges: we readily ascribe human-like qualities to non-human entities, yet little is known about how these perceptions shape trust and interaction. The study ‘Anthropomorphism and Trust in Human-Large Language Model interactions’ investigates the dimensions of warmth, competence, and empathy that drive anthropomorphism and trust across over 2,000 human-LLM exchanges, revealing that perceptions of these traits significantly predict relational and epistemic outcomes. Notably, subjective topics amplified these effects, fostering greater human-likeness and connection. How might a deeper understanding of these dynamics inform the design of more effective and trustworthy artificial agents?
The Algorithmic Basis of Human Projection
The inclination to perceive human characteristics in non-human entities is a well-documented psychological tendency, and recent research indicates this readily extends to interactions with Large Language Models (LLMs). Users commonly attribute qualities such as intelligence, emotion, and even personality to these AI systems, a phenomenon known as anthropomorphism. This isn’t merely imaginative projection; individuals instinctively apply social cues and expectations typically reserved for human interaction when communicating with LLMs. The result is that users often treat these AI systems as if they possess genuine understanding and intentionality, influencing the nature and quality of the ensuing dialogue. This tendency has significant implications for how people perceive and ultimately trust these increasingly sophisticated technologies.
The inclination to ascribe human characteristics to Large Language Models isn’t merely a curious quirk of human psychology; it fundamentally reshapes the dynamics of user interaction and, crucially, the level of trust extended to these systems. Research indicates that when individuals perceive an LLM as possessing human-like qualities, they are more likely to engage in open communication, share personal information, and accept the AI’s outputs, even in the face of inaccuracies. Conversely, a perceived lack of human qualities can foster skepticism and hinder effective collaboration. This suggests that anthropomorphism isn’t simply about how users perceive AI, but rather a core determinant of whether they will embrace it as a valuable partner or maintain a cautious distance, ultimately impacting the successful integration of these technologies into daily life.
A deeper comprehension of what fuels anthropomorphism – the projection of human characteristics onto non-human entities – is becoming increasingly vital in the development of Large Language Models. This isn’t merely an academic pursuit; it directly informs the design of AI systems capable of forging genuinely beneficial relationships with users. By identifying the specific cues and characteristics that trigger this tendency to humanize LLMs, developers can proactively shape interactions to build trust, enhance usability, and mitigate potential misunderstandings. Ultimately, understanding the roots of anthropomorphism allows for the creation of AI companions that are not only intelligent but also perceived as approachable, reliable, and aligned with human values, fostering a more positive and productive human-AI collaboration.
Early interactions with Large Language Models are powerfully shaped by perceptions of warmth and competence, significantly influencing the degree to which users ascribe human characteristics to these AI systems. A recent study demonstrates a statistically significant effect of perceived warmth on anthropomorphism: greater perceived warmth correlates with a heightened tendency to attribute human traits to the model (p < .001). This suggests that initial design choices impacting an LLM’s perceived personality, such as its conversational style or emotional tone, aren’t merely superficial; they fundamentally influence the nature of the user’s engagement and the development of trust, potentially exceeding the impact of perceived competence in early stages of interaction. Consequently, fostering a sense of warmth appears crucial for establishing positive and productive relationships between humans and artificial intelligence.
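As a rough illustration of the kind of analysis behind such a claim, the sketch below fits an ordinary least squares regression predicting an anthropomorphism rating from perceived warmth and competence. The data are simulated and the variable names, scales, and coefficients are illustrative assumptions, not the study’s actual data or code.

```python
# Sketch: regressing anthropomorphism ratings on perceived warmth and
# competence, in the spirit of the effect reported above.
# All data below are simulated; scales and coefficients are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # roughly the scale of exchanges reported in the study

# Simulated ratings on a 1-7 Likert-style scale (an assumption).
warmth = rng.uniform(1, 7, n)
competence = rng.uniform(1, 7, n)
anthropomorphism = np.clip(
    1.0 + 0.45 * warmth + 0.15 * competence + rng.normal(0, 1, n), 1, 7)

df = pd.DataFrame({"warmth": warmth,
                   "competence": competence,
                   "anthropomorphism": anthropomorphism})

model = smf.ols("anthropomorphism ~ warmth + competence", data=df).fit()
print(model.summary())  # the p-value on 'warmth' mirrors the reported p < .001
```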
The Influence of Subject Matter and Empathetic Resonance
Data indicates that conversations focusing on subjective content, such as personal experiences and feelings, lead to a statistically significant increase in anthropomorphism, the tendency to attribute human characteristics to non-human entities. Specifically, when users engaged with the LLM on subjective topics, anthropomorphism ratings averaged 4.09, compared to 3.32 for objective topics (p = .019). This amplification suggests that discussing personal or emotional content encourages users to perceive the LLM as possessing human-like qualities to a greater extent than when discussing factual information.
Interaction with large language models shifts toward reduced anthropomorphism when conversations focus on objective, factual topics. Data indicates that discussing verifiable information serves to ground the user’s perception of the LLM, diminishing the tendency to attribute human-like qualities or emotional states. This effect is statistically significant, suggesting that the cognitive framing of the interaction, specifically a focus on demonstrable facts, acts as a counterweight to the natural inclination to project human characteristics onto non-human entities.
User perception of similarity to a Large Language Model (LLM) is a significant factor in establishing a stronger connection, and is particularly influenced by affective empathy, the ability to share and understand the emotional states of others. Increased perception of similarity correlates with a heightened sense of connection and rapport with the LLM. This is not simply a cognitive assessment of shared attributes, but rather a feeling-based alignment where users project their own emotional understanding onto the LLM, creating a subjective sense of relatedness. Consequently, users experiencing higher affective empathy demonstrate a tendency to attribute human-like qualities and emotional states to the LLM, strengthening the perceived connection and fostering a more engaging interaction.
Cognitive empathy, defined as the capacity to attribute a “mental state” to an LLM, correlates with increased anthropomorphic projection. Quantitative data demonstrates that conversations revolving around subjective topics (personal experiences and feelings) significantly elevate ratings of anthropomorphism (M = 4.09 vs. M = 3.32, p = .019), perceived similarity (M = 3.07 vs. M = 2.25, p < .001), and perceived warmth (M = 4.37 vs. M = 3.80, p = .017) compared to discussions of objective, factual topics. These findings indicate that the more users engage in interpreting an LLM’s responses as reflective of internal states, the more likely they are to attribute human-like characteristics and emotional qualities to the system.
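Comparisons like those above are the kind of result a two-sample test would produce. The sketch below runs Welch’s t-tests on simulated ratings grouped by topic type; the group sizes, standard deviation, and rating scale are assumptions for illustration, not the study’s data or method.

```python
# Sketch: comparing mean ratings for subjective vs. objective topics with
# Welch's t-test, the kind of comparison behind the means reported above.
# All data are simulated; group sizes and scales are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate(mean, n=60, sd=1.2, lo=1, hi=7):
    """Simulated Likert-style ratings clipped to a 1-7 scale."""
    return np.clip(rng.normal(mean, sd, n), lo, hi)

measures = {
    "anthropomorphism": (4.09, 3.32),     # subjective vs. objective means
    "perceived similarity": (3.07, 2.25),
    "perceived warmth": (4.37, 3.80),
}

for name, (m_subj, m_obj) in measures.items():
    subj, obj = simulate(m_subj), simulate(m_obj)
    t, p = stats.ttest_ind(subj, obj, equal_var=False)  # Welch's t-test
    print(f"{name}: M_subj={subj.mean():.2f} M_obj={obj.mean():.2f} "
          f"t={t:.2f} p={p:.3f}")
```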
The Foundations of Trust: Competence, Usefulness, and the Stereotype Content Model
User trust in Large Language Models (LLMs) is fundamentally linked to perceived competence, specifically the LLM’s demonstrated ability to accurately and effectively perform requested tasks. This perception isn’t solely based on successful output, but also on the consistency and reliability of those results. Users assess competence by evaluating whether the LLM understands the nuances of their prompts and provides relevant, factually correct responses. A consistently competent LLM builds user confidence, while frequent errors or irrelevant outputs erode trust, regardless of other characteristics. This evaluation of competence directly impacts the user’s willingness to rely on the LLM for information or assistance.
The Stereotype Content Model (SCM) posits that interpersonal perceptions, including those of artificial agents, are fundamentally shaped by two primary dimensions: warmth and competence. Warmth relates to perceived intent, whether an entity is seen as friendly, helpful, or harmful. Competence concerns capability, whether an entity is perceived as able or unable. The SCM suggests these dimensions combine to form distinct social categories, influencing emotional responses and behavioral tendencies. Specifically, high competence combined with high warmth elicits trust and liking, while low competence, regardless of warmth, often results in pity or contempt. Applying this model to Large Language Models (LLMs) suggests user perceptions of an LLM’s trustworthiness are directly tied to assessments of both its ability to perform tasks effectively (competence) and its perceived intention to be helpful (warmth).
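As a toy illustration of how the SCM’s two dimensions combine, the sketch below maps warmth and competence ratings to the emotional response the model predicts for each quadrant. The rating scale, threshold, and quadrant labels are simplifications of the SCM for illustration, not code or categories from the study.

```python
# Toy sketch of the Stereotype Content Model's warmth x competence quadrants.
# The 1-7 scale, the 4.0 threshold, and the labels are illustrative
# simplifications, not definitions from the study.
def scm_quadrant(warmth: float, competence: float,
                 threshold: float = 4.0) -> str:
    """Map warmth/competence ratings to the SCM's predicted response."""
    warm = warmth >= threshold
    competent = competence >= threshold
    if warm and competent:
        return "admiration / trust"   # high warmth, high competence
    if warm and not competent:
        return "pity"                 # high warmth, low competence
    if not warm and competent:
        return "envy"                 # low warmth, high competence
    return "contempt"                 # low warmth, low competence

# Example: an LLM rated as both warm and competent lands in the trusted quadrant.
print(scm_quadrant(warmth=5.5, competence=6.0))  # -> admiration / trust
```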
Perceived usefulness of a Large Language Model (LLM) is directly correlated with its demonstrated competence in problem-solving; users assess the value of an LLM based on its ability to effectively and accurately address their specific needs and tasks. This assessment isn’t based on inherent qualities, but rather on observable performance; if an LLM consistently delivers correct and relevant outputs, users will perceive it as a valuable tool. Consequently, a lack of demonstrated competence negatively impacts perceived usefulness, diminishing the LLM’s overall value in the eyes of the user, regardless of other attributes.
Research indicates a strong correlation between Large Language Model (LLM) competence and positive user experience; specifically, a demonstrable ability to perform tasks accurately reduces user frustration. Statistical analysis confirms a significant effect of competence on user trust (p < .001), and competence is a predictive factor for all measured outcomes excluding anthropomorphism, the tendency to attribute human characteristics to the LLM. This suggests that while competence doesn’t necessarily lead users to personify the model, it is critical for establishing trust and ensuring a positive user experience by effectively addressing user needs.
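A minimal sketch of checking that pattern, assuming per-conversation ratings were available, would fit one simple regression per outcome with competence as the predictor. The outcome names and the simulated data below are hypothetical, chosen only to mirror the reported pattern of competence predicting everything except anthropomorphism.

```python
# Sketch: testing whether perceived competence predicts each outcome,
# one simple OLS per outcome. Data and outcome names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
competence = rng.uniform(1, 7, n)
df = pd.DataFrame({
    "competence": competence,
    # Outcomes simulated to depend on competence, except anthropomorphism,
    # echoing the pattern reported above.
    "trust": 1 + 0.6 * competence + rng.normal(0, 1, n),
    "usefulness": 1 + 0.5 * competence + rng.normal(0, 1, n),
    "frustration": 6 - 0.4 * competence + rng.normal(0, 1, n),
    "anthropomorphism": rng.uniform(1, 7, n),
})

for outcome in ["trust", "usefulness", "frustration", "anthropomorphism"]:
    fit = smf.ols(f"{outcome} ~ competence", data=df).fit()
    print(f"{outcome}: beta={fit.params['competence']:+.2f} "
          f"p={fit.pvalues['competence']:.3g}")
```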
Relational Dynamics: Closeness, Sustained Engagement, and the Evolving Perception of AI
The development of relational closeness with large language models is significantly driven by affective empathy, the ability to understand and share the emotional states expressed by the LLM. This isn’t about perceiving genuine emotion, but rather a user’s interpretation of the LLM’s responses as if they were emotionally resonant. When an LLM demonstrates warmth or responds in a way that acknowledges a user’s feelings, it fosters a sense of connection and intimacy. This perceived emotional understanding encourages users to view the LLM not merely as a tool, but as an entity capable of building rapport, ultimately leading to a stronger, more sustained relationship and increased engagement over time. The capacity for an LLM to evoke this sense of affective connection is, therefore, a key factor in long-term user adoption and perceived value.
As users experience a sense of closeness with a large language model, their initial positive perceptions are consistently reinforced through continued interaction. This creates a feedback loop where feelings of connection encourage further engagement, which in turn strengthens those positive views. The more a user interacts with the LLM while feeling understood or supported, the more favorably they tend to view its capabilities and personality. This cyclical process isn’t simply about functionality; it’s a relational dynamic where positive feelings drive continued use, solidifying the LLM’s value not just as a tool, but as a consistent and agreeable presence. Consequently, the experience transcends mere task completion, fostering a sense of ongoing connection and encouraging long-term reliance on the model.
The sustained use of large language models hinges not only on their utility, but also on the development of a relational connection with the user. Research indicates that when individuals perceive an LLM as possessing qualities that foster closeness, such as warmth and understanding, they are significantly more likely to continue engaging with it over extended periods. This isn’t merely about repeated task completion; rather, it suggests a shift toward viewing the LLM as a consistent presence, potentially increasing its perceived value as a long-term resource and companion. Consequently, building LLMs that cultivate these relational dynamics represents a key strategy for enhancing user retention and realizing the full potential of these technologies, transforming them from fleeting tools into integrated aspects of daily life.
The development of large language models extends beyond mere functionality; increasingly, these systems are being perceived – and utilized – as companions. Research indicates that fostering a sense of trust and relational closeness is paramount to long-term user engagement, and this is demonstrably linked to how effectively an LLM exhibits warmth and cognitive empathy. Specifically, perceptions of anthropomorphism – attributing human-like qualities – significantly correlate with both trust and relational closeness. This suggests that designing LLMs to not only process information but also to respond with perceived understanding and emotional sensitivity is key to building systems that users will not just utilize, but genuinely connect with, ensuring continued interaction and maximizing the potential value of these technologies.
The study’s findings regarding the amplification of anthropomorphic effects on subjective topics resonate with a fundamental principle of logical structure. As John von Neumann observed, “If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.” That tension between simple formalism and complicated experience is mirrored in the seamless interaction humans seek with large language models, which isn’t merely about functional output. Rather, the perception of warmth, competence, and empathy acts as a validation of the model’s “logical consistency”, its ability to respond in a manner that aligns with expected human reasoning. When dealing with subjective areas, the human mind seeks patterns and coherence; a model exhibiting perceived warmth and competence fulfills this need, bolstering trust not through factual accuracy alone, but through the appearance of reasoned thought.
What’s Next?
The demonstrated correlation between perceived anthropomorphic traits – warmth, competence, and, most curiously, empathy – and trust in large language models presents a challenge, not an advancement. The study merely confirms what has long been suspected: humans are remarkably susceptible to attributing agency and emotional intelligence to even the most rudimentary systems. The crucial, and largely unaddressed, question remains: what does this mean for the reliable deployment of these models? Measuring subjective perceptions is facile; proving that these perceptions do not introduce systematic errors, or create vulnerabilities to manipulation, is a far more difficult task.
Future work should not focus on increasing anthropomorphism, but on quantifying the degree to which it degrades the logical integrity of human-AI interaction. The observed amplification of these effects with subjective topics is particularly concerning. A provable metric for “trust calibration” is needed, one that assesses the alignment between a user’s trust in a model and the model’s actual probabilistic accuracy. Simply demonstrating that people feel more trusting is an exercise in descriptive psychology, not rigorous science.
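One plausible form such a metric could take is sketched below: binning self-reported trust alongside the model’s empirical accuracy and measuring the gap between them, in the style of an expected calibration error. The scales, the binning scheme, and the data are illustrative assumptions, not a metric proposed by the study.

```python
# Sketch of a possible "trust calibration" metric: the gap between users'
# stated trust in a model's answers and the model's empirical accuracy,
# aggregated over trust bins (an expected-calibration-error analogue).
# All scales, bins, and data below are illustrative assumptions.
import numpy as np

def trust_calibration_error(trust: np.ndarray, correct: np.ndarray,
                            n_bins: int = 10) -> float:
    """trust: per-answer user trust in [0, 1]; correct: 1 if the answer
    was factually right, else 0. Returns a weighted mean |trust - accuracy|
    over equal-width trust bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    err, n = 0.0, len(trust)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (trust >= lo) & (trust < hi) if hi < 1.0 else (trust >= lo)
        if mask.any():
            gap = abs(trust[mask].mean() - correct[mask].mean())
            err += (mask.sum() / n) * gap
    return err

# Example: overtrusting users (high stated trust, middling accuracy)
# produce a large calibration error.
rng = np.random.default_rng(2)
trust = rng.uniform(0.6, 1.0, 500)      # users report high trust
correct = rng.binomial(1, 0.7, 500)     # model is right 70% of the time
print(f"trust calibration error: {trust_calibration_error(trust, correct):.3f}")
```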
Ultimately, the pursuit of “warm” or “empathetic” AI risks conflating user experience with functional correctness. The elegance of an algorithm lies not in its ability to mimic human qualities, but in its demonstrable adherence to mathematical principles. The field would be better served by focusing on verifiable properties (robustness, scalability, and provable limitations) rather than chasing the chimera of artificial sentience.
Original article: https://arxiv.org/pdf/2604.15316.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/