Reading the Room: Robots Learn to Understand Human Interactions
![The proposed framework achieves robust pairwise interaction recognition through a two-stage process: initially detecting potential interactions using a 7D geometric feature vector derived from bounding box configurations, and subsequently classifying these interactions via a relation network that integrates frozen visual appearance features from EfficientNet with geometric-motion features computed from optical flow, enabling efficient deployment on resource-constrained robotic platforms.](https://arxiv.org/html/2602.22346v1/2602.22346v1/x1.png)
New research details a computationally efficient approach for mobile robots to detect and interpret social cues from human-human interactions.
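The first stage of the pipeline scores candidate person pairs from a 7D geometric feature vector computed purely from bounding-box configurations. The paper summary does not specify which seven components are used, so the sketch below is a plausible assumption: normalised centre offset, distance, relative angle, area ratio, aspect ratio, and intersection-over-union for a pair of `(x, y, w, h)` boxes.

```python
import math

def pairwise_geometric_features(box_a, box_b):
    """Hypothetical 7D geometric descriptor for a pair of person
    bounding boxes given as (x, y, w, h). The exact components of the
    paper's 7D vector are not public here; these are illustrative."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    # Box centres
    cax, cay = xa + wa / 2.0, ya + ha / 2.0
    cbx, cby = xb + wb / 2.0, yb + hb / 2.0
    # Centre offset, normalised by mean box height to reduce scale sensitivity
    scale = (ha + hb) / 2.0
    dx, dy = (cbx - cax) / scale, (cby - cay) / scale
    dist = math.hypot(dx, dy)
    angle = math.atan2(dy, dx)
    # Relative size cues
    area_ratio = (wb * hb) / (wa * ha)
    aspect_a = wa / ha
    # Intersection-over-union as an overlap/proximity cue
    ix = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))
    iy = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = ix * iy
    union = wa * ha + wb * hb - inter
    iou = inter / union
    return [dx, dy, dist, angle, area_ratio, aspect_a, iou]
```

A vector like this is cheap to compute per candidate pair, which is what makes the detection stage viable on resource-constrained robot hardware before the heavier appearance and optical-flow features are extracted.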