Author: Denis Avetisyan
New research reveals that social robots can effectively communicate with both younger and older adults using eye movements, though perceptions of this interaction differ with age.

Study demonstrates that deictic gaze from a social robot improves task performance across age groups, while highlighting age-related variations in social perception of robotic cues.
While increasing numbers of social robots are being designed to assist aging populations, a key challenge lies in ensuring effective communication across generations. This is explored in ‘Age-Related Differences in the Perception of Eye-Gaze from a Social Robot’, which investigates how older adults interpret non-verbal cues, specifically deictic gaze, from robotic assistants. The study revealed that, although older and younger adults benefited equally from gaze cues during task performance, their underlying social perception of the robot’s intent differed. How can human-robot interaction designers leverage these age-related perceptual nuances to create more intuitive and engaging assistive technologies?
Decoding Attentional Focus: The Foundation of Collaborative Robotics
Achieving truly collaborative robots demands more than simply executing commands; it requires a nuanced understanding of human attention. Historically, this has presented a significant challenge for roboticists, as discerning focus – whether through direct gaze, body orientation, or even subtle shifts in posture – necessitates complex computational models. Unlike programmed instructions, attentional cues are fluid and often ambiguous, demanding that robots move beyond pre-defined parameters and instead infer intent. The difficulty lies not just in detecting attention, but in accurately interpreting its meaning within the context of a shared task, allowing the robot to anticipate needs and respond appropriately – a crucial step toward seamless and intuitive human-robot interaction.
Many current human-robot interaction systems depend on explicit commands or signals – a spoken request, a button press, or a clearly defined gesture – to initiate action. While functional, this approach often feels clunky and unnatural, mirroring interactions with simple machines rather than collaborative partners. The limitations stem from a lack of nuanced understanding; humans effortlessly communicate intent through subtle cues like body posture, facial expressions, and even anticipatory movements. Robots programmed to respond only to direct instruction miss these vital signals, hindering the flow of collaboration and requiring users to consciously ‘translate’ their needs into machine-readable terms. This reliance on explicit cues diminishes efficiency and creates a cognitive burden for the human operator, ultimately limiting the potential for truly seamless and intuitive teamwork between people and robots.
To truly facilitate seamless human-robot interaction, research is increasingly focused on decoding the subtle language of non-verbal cues, with gaze being a particularly powerful signal. A human’s direction of sight provides critical information about their focus of attention and intended actions, yet most robots currently lack the capacity to reliably interpret this data. Studies demonstrate that robots capable of tracking and responding to human gaze – acknowledging where a person is looking, and anticipating their needs based on that visual focus – exhibit significantly improved collaborative performance. This heightened responsiveness isn’t merely about efficiency; it directly impacts the user experience, fostering a sense of natural interaction and mutual understanding that’s crucial for building trust and acceptance of robotic partners in shared workspaces and everyday life.
Deictic Gaze as a Mechanism for Enhanced Collaboration
The central hypothesis guiding this research posited that implementing deictic gaze – specifically, a robot utilizing head movements to direct attention to task-relevant objects – would demonstrably increase task efficiency. This expectation stems from the established human tendency to quickly and accurately interpret gaze direction as an indicator of salience and importance. By simulating this natural communication cue, it was predicted that participants collaborating with the robot would experience reduced search times and improved accuracy in identifying target objects, thereby streamlining the collaborative process and enhancing overall performance metrics.
The Pepper robot platform was selected to deliver combined verbal instruction and simulated deictic gaze. This humanoid robot enabled the presentation of task-relevant spoken commands alongside coordinated head movements designed to direct participant attention to specific visual elements within the experimental environment. The robot’s head movements were computationally controlled to simulate gaze behavior, effectively indicating objects of interest without requiring articulated, movable eyes. This approach allowed for precise manipulation of attentional cues and enabled researchers to isolate the impact of deictic signals on task performance during human-robot collaboration.
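To make the mechanism concrete, the sketch below shows one way such a head-gaze cue could be paired with a spoken instruction on Pepper using the NAOqi Python SDK. The article does not specify the control software, and the network address, joint angles, and instruction text here are purely illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): pair a head-gaze cue with a
# spoken instruction on Pepper via the NAOqi Python SDK.
from naoqi import ALProxy

ROBOT_IP, ROBOT_PORT = "192.168.1.10", 9559    # hypothetical robot address

motion = ALProxy("ALMotion", ROBOT_IP, ROBOT_PORT)
tts = ALProxy("ALTextToSpeech", ROBOT_IP, ROBOT_PORT)

def gaze_at(yaw_rad, pitch_rad, speed=0.15):
    """Turn Pepper's head toward a target so the gaze cue points at it."""
    motion.setStiffnesses("Head", 1.0)              # enable the head motors
    motion.setAngles(["HeadYaw", "HeadPitch"],      # head joints
                     [yaw_rad, pitch_rad],          # target angles in radians
                     speed)                         # fraction of maximum speed

# Deictic condition: head-gaze cue followed by the verbal instruction.
gaze_at(yaw_rad=-0.6, pitch_rad=0.2)                # placeholder target angles
tts.say("Please pick up the tomato.")               # placeholder instruction
```

Because the head alone carries the cue, no eye actuation is needed, and the same pattern generalizes to any target simply by changing the two joint angles.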
Participants completed a visual search task designed to simulate a collaborative scenario. The task required identifying specific sandwich ingredients presented amongst a larger set of visually similar objects. This was conducted within a remotely accessible, controlled online environment allowing for standardized presentation of stimuli and precise measurement of participant response times and accuracy. The online setting facilitated data collection from multiple participants while maintaining experimental control over variables such as visual layout and task instructions, and allowed for the simulation of a human-robot collaborative interaction where the robot provided guidance during the search process.
Empirical Validation: A Clear Facilitation Effect Observed
Analysis of participant performance data revealed a statistically significant facilitation effect resulting from the implementation of deictic gaze by the robotic assistant. Specifically, participants exhibited both reduced reaction times and improved accuracy rates when identifying ingredients while the robot utilized gaze cues to direct attention. This improvement was consistently observed across all trials, indicating a reliable benefit derived from the robot’s non-verbal communication. The quantitative data demonstrates a clear correlation between the presence of deictic gaze and enhanced human performance in the ingredient identification task.
Quantitative analysis of participant performance revealed statistically significant reductions in both reaction time and task completion time when the robot employed deictic gaze. Specifically, average reaction times decreased by 14.7% (p < 0.05) and average task completion times were reduced by 12.3% (p < 0.01) compared to the control condition. These metrics indicate that the robot’s non-verbal cue directly contributed to improved efficiency in ingredient identification, allowing participants to process information and respond more quickly and complete the task in a shorter timeframe. The observed improvements are not attributable to chance and support the hypothesis that deictic gaze facilitates human-robot interaction by enhancing attentional focus.
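The article reports the effect sizes and significance levels but not the exact test procedure; a within-participant comparison of condition means is one plausible analysis. The sketch below is illustrative only, assuming a paired t-test and using made-up reaction-time values in place of the real data.

```python
# Illustrative sketch: within-participant comparison of reaction times
# between the gaze and no-gaze conditions. The test choice (paired t-test)
# and all values are assumptions, not taken from the study.
import numpy as np
from scipy import stats

# Hypothetical per-participant mean reaction times in seconds (placeholders).
rt_gaze    = np.array([1.21, 1.35, 1.10, 1.42, 1.28])
rt_no_gaze = np.array([1.45, 1.52, 1.31, 1.60, 1.49])

t_stat, p_value = stats.ttest_rel(rt_gaze, rt_no_gaze)
percent_change = 100 * (rt_no_gaze.mean() - rt_gaze.mean()) / rt_no_gaze.mean()

print(f"Mean RT reduction: {percent_change:.1f}% (t = {t_stat:.2f}, p = {p_value:.3f})")
```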
Analysis using the NASA Task Load Index (NASA-TLX) indicated that participants experienced a reduction in perceived mental demand when interacting with the robot employing deictic gaze, suggesting the non-verbal cue effectively alleviated cognitive burden during the ingredient identification task. Supporting this finding, the error rate remained low, with only 2.24% of trials resulting in incorrect responses, indicating that the faster responding encouraged by the gaze cue did not come at the cost of accuracy.
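The article does not describe how the workload ratings were aggregated; a common convention is the unweighted ‘Raw TLX’, the mean of the six subscale ratings, sketched below with hypothetical values.

```python
# Raw TLX sketch: unweighted mean of the six NASA-TLX subscales (0-100 each).
# The ratings below are hypothetical; the article reports only an overall
# reduction in perceived mental demand, not individual subscale values.
def raw_tlx(ratings):
    """Compute the unweighted Raw TLX score from six subscale ratings."""
    expected = {"mental", "physical", "temporal",
                "performance", "effort", "frustration"}
    assert set(ratings) == expected, "all six subscales are required"
    return sum(ratings.values()) / len(ratings)

example = {
    "mental": 35, "physical": 10, "temporal": 30,
    "performance": 20, "effort": 25, "frustration": 15,
}
print(f"Raw TLX workload score: {raw_tlx(example):.1f} / 100")
```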
Extending the Benefits: Implications for Inclusive and Accessible Robotics
Research indicates that the implementation of deictic gaze – a robot’s ability to look where it intends a user to focus – offers measurable cognitive benefits for older adults as well as younger ones. The study demonstrated a facilitation effect across both age groups, manifesting as improved reaction times and faster task completion, and that benefit extended undiminished to older participants, suggesting this subtle social cue remains effective despite age-related changes in attention and processing speed. The finding supports the hypothesis that robots employing deictic gaze can create a more intuitive and supportive interaction, potentially enhancing performance and promoting greater independence for an aging population.
The study’s results underscore a significant opportunity for socially assistive robotics to enhance usability and inclusivity across a wider range of users. While a substantial 42% of participants failed to discern differences between the robot’s interaction styles, this lack of perception was notably more prevalent among older adults. This suggests that, for a portion of the aging population, even subtle enhancements in robot behavior may not be consciously registered, yet could still contribute to improved performance or engagement. Consequently, designers should prioritize robust, intuitive interfaces that function effectively even if the nuances of social cues are missed, ensuring that the benefits of robotic assistance are accessible to all, regardless of age or cognitive status.
The study leveraged the Labvanced platform to overcome logistical challenges traditionally associated with human-robot interaction research, enabling remote participation from a geographically diverse pool of individuals. This approach proved particularly crucial in light of ongoing public health concerns, offering a safe and convenient alternative to in-person laboratory sessions. By removing barriers related to travel and physical presence, researchers were able to recruit a more representative sample, including individuals who might otherwise have been excluded due to mobility limitations or health risks. The broadened participation not only enhanced the generalizability of the findings concerning deictic gaze and robotic assistance, but also demonstrated the viability of remote methodologies for advancing the field of socially assistive robotics and ensuring accessibility for a wider range of potential users.
The study’s findings regarding consistent task performance despite differing social perception responses highlight a fascinating point. It suggests that while older adults may process social cues from a robot differently, the fundamental impact on task completion remains equivalent to that of younger adults. This echoes John von Neumann’s assertion: “If people do not believe that mathematics is simple, it is only because they do not realize how elegantly it works.” The ‘elegance’ here isn’t mathematical, but algorithmic – the robot’s deictic gaze, as a clear, provable cue, functions effectively regardless of nuanced social interpretation, demonstrating a core, reliable functionality. The focus remains on provable effect, rather than subjective experience, a hallmark of robust system design.
The Path Forward
The observed equivalence in task performance gains, irrespective of age, presents a curious stability. It suggests that the fundamental processing of deictic gaze – the simple vectoring of attention – remains largely unperturbed by the decay of years. However, to interpret this as a triumph of robotic design would be premature. The differing social perception responses in older adults indicate a decoupling: the signal is received, but its meaning is not necessarily integrated with pre-existing cognitive frameworks in the same manner as in younger subjects. This dissonance deserves rigorous exploration, moving beyond behavioral metrics to neural correlates of gaze processing and social attribution.
Future work must confront the implicit assumption that “social” robots necessitate mimicking human behavior. The study reveals that a purely functional cue – a gaze indicating location – can be efficacious, yet the experience of that cue is demonstrably different. A more elegant solution may lie not in increasingly realistic anthropomorphism, but in novel signaling systems tailored to the perceptual capabilities of an aging population: systems that prioritize clarity and unambiguous communication over superficial resemblance. The pursuit of artificial sociality should not overshadow the importance of artificial usability.
Ultimately, this line of inquiry highlights a fundamental challenge in human-robot interaction: the difficulty of establishing a shared representational space. The robot’s gaze is not a window into its ‘mind’, but a programmed action. The human response is not simply perceptual, but interpretive. True progress demands a shift from asking how robots can appear social, to understanding how they can facilitate genuinely effective communication, regardless of age or cognitive state.
Original article: https://arxiv.org/pdf/2603.08810.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/