Decoding Feelings: A Multi-Agent System for Smarter Human-Robot Interaction

Researchers are developing new frameworks for robots to better understand human emotions by combining insights from video and other sensory data.

Researchers are investigating how AI agents can personalize and preserve individual preferences in advance care planning, potentially bridging the gap between current wishes and future medical decisions.

This review unpacks the rapidly evolving field of Vision-Language-Action models, exploring how AI is learning to understand the world and interact with it physically.

A new framework aims to make affective AI systems demonstrably transparent and trustworthy by securing explanations on a blockchain.

Researchers are harnessing the power of video generation to create realistic training data for robots, allowing them to learn complex manipulation tasks from limited human examples.

New research demonstrates how equipping AI agents with access to current medical data and robust retrieval methods significantly improves their ability to tackle complex therapeutic decision-making.

A new framework leverages multi-agent systems to tackle the growing complexity of data processing in the age of large language models and heterogeneous data pipelines.

Researchers have developed a new framework that allows humanoid robots to perform complex, coordinated movements while interacting with their environment.

A new interface leverages artificial intelligence to help medicinal chemists move beyond traditional methods and efficiently generate compelling hypotheses for identifying promising drug targets.

New research explores how combining the power of artificial intelligence with logical reasoning can create more capable and reliable robots.