Author: Denis Avetisyan
New research explores how a robot’s physical appearance influences our expectations of its explanations and whether we perceive it as simply processing information or exhibiting genuine mental capacity.
![The study leverages three distinct robotic appearances, as defined by the ABOT framework [Phillips2018], to investigate the impact of physical embodiment on human-robot interaction.](https://arxiv.org/html/2512.11746v1/img/robots.png)
A study reveals that increased anthropomorphism in robots correlates with expectations for explanations rooted in mental states rather than purely mechanistic processes.
Despite growing sophistication in robotics, a fundamental disconnect remains between how robots operate and how humans expect them to justify their actions. This study, ‘The Influence of Human-like Appearance on Expected Robot Explanations’, investigates whether a robot’s physical form, specifically its degree of human-likeness, shapes the kind of explanations people anticipate receiving. Our findings demonstrate a positive correlation between human-like appearance and the tendency to expect anthropomorphic explanations from robots, even for purely mechanical behaviors. As robots become increasingly integrated into daily life, how can we design explanations that align with user expectations and foster appropriate levels of trust and understanding?
The Imperative of Intent: Understanding Robotic Action
The increasing presence of robots in everyday life, particularly as domestic service robots, necessitates a deeper understanding of their actions for successful human-robot interaction. As these machines transition from factory floors to homes and public spaces, their behaviors are subject to increased scrutiny and demand greater transparency. Effective collaboration hinges not simply on what a robot does, but on a human’s ability to anticipate and interpret why a particular action is being performed. This comprehension builds trust, reduces anxiety, and ultimately allows people to work alongside robots seamlessly, sharing spaces and tasks with confidence. Without this capacity to understand robotic intent, interactions can feel unpredictable and even unsettling, hindering the potential benefits of these increasingly sophisticated machines.
A significant hurdle in seamless human-robot collaboration lies in the current inability of robotic systems to articulate the reasoning behind their actions. While robots can successfully perform tasks, a lack of transparency regarding why a particular behavior was chosen fosters mistrust and hinders effective teamwork. This isn’t merely a matter of user experience; without understanding a robot’s motivations – its internal goals and the logic driving its decisions – humans struggle to predict future actions, coordinate effectively, and intervene safely when errors occur. Consequently, even demonstrably competent robots can be perceived as unpredictable or unreliable, limiting their acceptance and integration into complex, collaborative environments. The challenge isn’t simply building robots that can act, but robots that can clearly communicate why they act, paving the way for genuine partnership.

Explainable Robotics: Modeling the Robot Mind
Explainable Robotics is a research area focused on developing robotic systems whose actions are readily understandable by humans. The primary goal is to facilitate the creation of accurate Mental Models in users – internal representations of the robot’s behavior, capabilities, and intentions. Transparency in robotic operation is achieved through techniques that reveal the reasoning behind actions, allowing users to anticipate future behavior and build trust. These models are crucial for effective human-robot collaboration, enabling users to predict outcomes, diagnose errors, and effectively intervene when necessary. The field moves beyond simply observing what a robot does, to understanding why it is doing it, fostering a more intuitive and predictable interaction.
Effective robot explanations necessitate the ascription of internal mental states – specifically, Perception of the environment, internal Desire for specific outcomes, and the process of Thinking to determine actions – to facilitate human understanding. This approach leverages the human tendency to interpret behavior based on inferred motivations and beliefs; by attributing these states to the robot, explanations move beyond simply detailing what the robot did to conveying why it acted in a given manner. Successfully communicating these attributed mental states allows users to build predictive models of robot behavior, anticipating future actions based on perceived goals and sensory input, and thereby improving human-robot interaction and trust.
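To make this distinction concrete, the sketch below contrasts a mentalistic explanation assembled from Perception, Desire, and Thinking fields with a mechanistic one that cites only sensor input and a triggered routine. This is a minimal illustrative sketch; the data structure, field names, and wording are hypothetical and do not reproduce any system described in the paper.

```python
from dataclasses import dataclass

@dataclass
class MentalState:
    """Hypothetical attributed mental states for one robot action."""
    perception: str  # what the robot is taken to perceive
    desire: str      # the outcome it is taken to want
    thinking: str    # the inference linking perception to action

def mentalistic_explanation(state: MentalState) -> str:
    # Frames the action in terms of attributed mental states ("why").
    return (f"I saw {state.perception}, I wanted {state.desire}, "
            f"so I decided {state.thinking}.")

def mechanistic_explanation(sensor: str, routine: str) -> str:
    # Frames the same action as sensor input triggering a routine ("what").
    return f"Sensor reading '{sensor}' triggered routine '{routine}'."

state = MentalState(perception="a cup near the table edge",
                    desire="to keep the table tidy",
                    thinking="to move the cup to the shelf")
print(mentalistic_explanation(state))
print(mechanistic_explanation("cup_at_edge", "relocate_object"))
```

The same underlying event yields two very different justifications; the paper’s finding is that human-like appearance shifts which of these two framings observers expect.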
The application of ‘Theory of Mind’ in robotics leverages the human capacity to infer the mental states of others – beliefs, desires, and intentions – to understand behavior. Specifically, by attributing these states to a robot, users can move beyond simply observing actions to predicting future behavior and interpreting the reasoning behind those actions. This predictive capability is crucial for effective human-robot interaction, as it allows users to anticipate a robot’s next step and understand why it is performing a given task. Successful implementation relies on the robot communicating information that supports the construction of these mental state attributions, effectively simulating an understanding of its internal motivations and goals.
The Illusion of Agency: Human-Like Form and Anthropomorphism
The extent to which a robot exhibits human-like appearance is a key determinant in the degree to which humans engage in anthropomorphism – the attribution of human characteristics, emotions, or intentions to non-human entities. This relationship suggests that visual cues associated with humanity directly influence our tendency to perceive agency and mental states in robots. While not a simple linear progression, increased human-likeness generally correlates with stronger anthropomorphic projections, impacting how users interpret a robot’s actions and anticipate its behavior. This is supported by observational data indicating a statistically significant positive correlation ($r = 0.176$, $p = 0.018$) between human-like appearance and anthropomorphic explanations of robotic behavior.
The robot selection for this study leveraged the ‘ABOT Database’, a resource cataloging robots with diverse physical designs and capabilities. Three robots were chosen to represent a spectrum of human-likeness: the ‘HSR (Human Support Robot)’, which has a utilitarian, non-humanoid form; ‘Nao’, a smaller, more stylized humanoid robot; and ‘Nadine’, a highly realistic humanoid robot designed to resemble a human in appearance and behavior. This selection allowed for a comparative analysis of how varying degrees of physical realism correlate with human tendencies to attribute human characteristics and intentions.
Research indicates a positive relationship between the degree of human-likeness in robots and the propensity of observers to attribute mental states and infer intentionality. Specifically, increased human-like features correlate with stronger explicit beliefs regarding a robot’s cognitive abilities – a conscious assessment of its ‘mind’ – and heightened implicit perception of its underlying intentions, even without conscious reasoning. This suggests that as robots more closely resemble humans in appearance, individuals are more likely to both consciously believe they possess mental capacities and subconsciously interpret their actions as goal-directed behaviors.
Statistical analysis of collected data revealed a significant positive correlation between the degree of human-like appearance in robots and the tendency to provide anthropomorphic explanations for their behaviors ($r = 0.176$, $p = 0.018$). Conversely, a significant inverse correlation was observed between human-like appearance and non-anthropomorphic explanations ($r = -0.185$, $p = 0.013$). These findings indicate that as robots exhibit more human-like features, observers are more likely to attribute human-like qualities and intentions to them, and less likely to explain their actions through purely mechanical or functional means. Both correlation coefficients demonstrate statistical significance at the $p < 0.05$ level.
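For readers who want to run this kind of analysis on their own data, a Pearson correlation with its $p$-value can be computed as below. The arrays are hypothetical placeholders for per-participant human-likeness scores and counts of anthropomorphic explanations; the reported $r = 0.176$, $p = 0.018$ comes from the study’s actual data, which this sketch does not reproduce.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical placeholder data: one value per participant.
rng = np.random.default_rng(0)
humanlikeness = rng.uniform(0, 100, size=180)  # e.g., ABOT-style appearance scores
anthro_explanations = 0.02 * humanlikeness + rng.normal(0, 1, size=180)

# Pearson's r and its two-sided p-value.
r, p = pearsonr(humanlikeness, anthro_explanations)
print(f"r = {r:.3f}, p = {p:.3f}")  # significant at the p < 0.05 level if p < 0.05
```

The same call, applied to the non-anthropomorphic explanation counts, would yield the inverse correlation the study reports ($r = -0.185$, $p = 0.013$).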
The ‘Uncanny Valley’ is a hypothesized relationship between the degree of an object’s resemblance to a human being and the emotional response to that object; as realism increases, so does empathy until a point is reached where slight imperfections cause a feeling of eeriness and revulsion. This negative emotional response, manifesting as decreased trust and acceptance, occurs when a robot’s appearance approaches, but does not perfectly achieve, human likeness. The effect is thought to stem from the detection of subtle anomalies that violate expectations of human appearance and movement, triggering a subconscious aversion. While increased realism generally correlates with positive anthropomorphism, surpassing this threshold can therefore yield counterproductive results, hindering positive human-robot interaction.
Methodological Rigor: Data Acquisition and Analysis
Participants for this study were recruited through Prolific, a platform for online research participant recruitment that supports demographically diverse samples. Following recruitment, participants completed the study entirely online using Qualtrics, a secure survey and data collection tool. This online administration enabled data collection from a geographically distributed participant pool and allowed for standardized presentation of stimuli and collection of responses. Participants accessed the study materials and completed all tasks via a web browser, with data automatically recorded and stored within the Qualtrics platform, maintaining data integrity and confidentiality.
Participants viewed video footage of three distinct robots – the HSR (Human Support Robot), Nao, and Nadine – engaged in pre-defined actions. Following each video, a corresponding explanation of the robot’s behavior was presented. The video stimuli were designed to depict observable actions, and the accompanying explanations provided details regarding the robot’s internal state or goals relating to those actions. This paired presentation of action and explanation formed the basis for subsequent analysis of participant responses and perceptions.
A coding framework was employed to systematically categorize both the robot-provided explanations of their actions and the corresponding responses from study participants. This framework facilitated a quantitative analysis of the relationship between three key variables: the degree of human-likeness exhibited by each robot (HSR, Nao, Nadine), the clarity of the robot’s explanations as judged by coders, and participant perceptions of the robot’s intentionality. Coders were trained to identify specific linguistic and behavioral cues indicative of these variables, ensuring inter-rater reliability. The resulting coded data allowed for statistical examination of correlations and predictive relationships between robot appearance, explanation quality, and attributed intentionality.
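The article mentions inter-rater reliability for the coders but does not specify the agreement metric used; Cohen’s kappa is a common choice for two-coder categorical judgments, sketched below on hypothetical labels. The category names and data are illustrative only, not drawn from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two coders classifying the same 12 explanations.
coder_a = ["anthropomorphic", "mechanistic", "anthropomorphic", "mechanistic",
           "anthropomorphic", "anthropomorphic", "mechanistic", "mechanistic",
           "anthropomorphic", "mechanistic", "anthropomorphic", "mechanistic"]
coder_b = ["anthropomorphic", "mechanistic", "anthropomorphic", "anthropomorphic",
           "anthropomorphic", "anthropomorphic", "mechanistic", "mechanistic",
           "mechanistic", "mechanistic", "anthropomorphic", "mechanistic"]

# kappa = 1 means perfect agreement; 0 means chance-level agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")
```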

Implications for Collaborative Systems: Towards Transparent Intelligence
Research indicates that a robot’s physical design significantly influences how readily humans trust and comprehend its intentions. The study demonstrates that subtle adjustments to a robot’s appearance – encompassing elements like its degree of human likeness, the presence of visual cues suggesting intentionality, and even the consistency of its movements – can dramatically alter perceptions of reliability and predictability. These findings suggest that optimizing robot appearance isn’t merely about aesthetics; it’s a critical factor in establishing effective human-robot collaboration, as individuals are more likely to accept assistance and coordinate effectively with entities they perceive as trustworthy and understandable. Consequently, careful calibration of these visual characteristics may prove essential for the successful integration of robots into everyday life and complex work environments.
Effective human-robot collaboration hinges on a robot’s ability to provide explanations that resonate with human cognitive frameworks, specifically leveraging what is known as ‘Theory of Mind’ – the capacity to attribute beliefs, desires, and intentions to others. Research indicates that humans don’t simply evaluate what a robot does, but why it does it, and they expect these justifications to align with their own understanding of intentionality and causality. Consequently, explanations framed in terms of goals, beliefs, and knowledge – mirroring how humans explain their own actions – significantly enhance trust and predictability. Robots that can articulate their reasoning in a human-compatible manner aren’t merely providing information; they are demonstrating an understanding of the collaborator’s mental state, fostering a sense of shared understanding crucial for seamless teamwork and minimizing potential errors arising from misinterpretations.
Investigations into human-robot collaboration are increasingly focused on the potential of tailored communication strategies. Future research should prioritize exploring how personalized explanations – adapting the complexity and style of communication to an individual’s knowledge and cognitive style – can significantly improve trust and performance. Complementing these verbal approaches, a deeper understanding of non-verbal cues, such as robot gaze, posture, and proxemics, is critical. These cues powerfully influence human perception and can dramatically enhance a robot’s perceived transparency, allowing humans to more accurately anticipate its actions and intentions. By integrating both personalized verbal explanations and nuanced non-verbal signals, robots can move beyond simply executing tasks to fostering genuine, intuitive partnerships with people.
The successful integration of robots into daily life hinges not merely on their capacity to perform tasks autonomously, but on their ability to articulate the reasoning behind those actions. Current robotic systems often operate as “black boxes,” executing commands without offering insight into their decision-making processes, which limits human trust and effective collaboration. For robots to become truly seamless partners, they must move beyond skillful action and embrace transparent communication; explaining why a particular course of action was chosen is critical for building understanding, anticipating behavior, and allowing humans to effectively oversee and correct robotic systems. This necessitates advancements in areas like explainable AI and the development of intuitive communication interfaces that bridge the gap between robotic logic and human cognition, ultimately fostering a collaborative dynamic where humans and robots can work together with mutual comprehension and confidence.
The study’s findings regarding the correlation between a robot’s human-like appearance and the expectation of anthropomorphic explanations resonate with a fundamental principle of algorithmic design. As Robert Tarjan once stated, “The most efficient algorithms are those that exploit the inherent structure of the problem.” Similarly, human perception appears to exploit the inherent ‘structure’ of human-like forms, projecting mental capacities onto them. This isn’t merely about aesthetics; it’s about implicit perception and the brain’s tendency to seek patterns. The expectation of explanations mirroring human thought processes, heightened by anthropomorphic features, suggests a deeply rooted cognitive bias, a shortcut taken when assessing the ‘structure’ of the interacting agent. The article demonstrates that as a robot’s appearance approaches full human likeness, the expected explanations shift from mechanistic to mentalistic.
The Road Ahead
The observed correlation between a robot’s human-like appearance and the expectation of anthropomorphic explanations reveals a peculiar truth: the aesthetic precedes the logical. It is not sufficient to demonstrate intelligence; the illusion of intelligence, fostered by superficial resemblance, appears to fundamentally alter the criteria by which intelligence is judged. Future work must address whether this preference for narrative coherence over mechanistic accuracy represents a cognitive shortcut, or a deeper, perhaps unavoidable, bias in the perception of agency. The elegance of a provably correct algorithm seems, regrettably, less compelling than a story well told.
A critical limitation lies in the implicit assumption that ‘explanation’ itself is a monolithic construct. The study measures expectation, not necessarily satisfaction. It remains unclear whether humans genuinely prefer explanations framed in terms of mental states, or merely tolerate them when offered by entities possessing a human-like form. A rigorous investigation should move beyond behavioral observation to directly assess the cognitive effort required to process different explanatory styles – to determine if anthropomorphism is a heuristic convenience, or a source of computational burden.
Ultimately, the pursuit of explainable robotics demands a more fundamental inquiry into the nature of explanation itself. The field should not merely strive to mimic human-like reasoning, but to define the necessary and sufficient conditions for genuine intelligibility – a standard that, ideally, transcends the vagaries of appearance and the seductive power of narrative. Only then can the promise of truly transparent and accountable artificial intelligence be realized.
Original article: https://arxiv.org/pdf/2512.11746.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/
See also:
- Clash Royale Best Boss Bandit Champion decks
- Brawl Stars December 2025 Brawl Talk: Two New Brawlers, Buffie, Vault, New Skins, Game Modes, and more
- Best Hero Card Decks in Clash Royale
- Clash Royale December 2025: Events, Challenges, Tournaments, and Rewards
- Call of Duty Mobile: DMZ Recon Guide: Overview, How to Play, Progression, and more
- Best Arena 9 Decks in Clash Royale
- Clash Royale Witch Evolution best decks guide
- All Boss Weaknesses in Elden Ring Nightreign