Igniting Curiosity: How to Build AI Tutors That Learn What Students Love
New research explores the art of weaving student interests into personalized learning, offering crucial insights for designing truly engaging AI-powered educational tools.

A new reinforcement learning framework empowers robots to autonomously assemble stable structures from individual blocks, bypassing the need for pre-programmed plans.
A new approach allows robots to master complex manipulation tasks simply by observing human demonstrations, bypassing the need for explicit programming or reward signals.
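Learning from demonstrations without an explicit reward signal is, in its simplest form, behavioral cloning: record (state, action) pairs from a human and imitate them. The sketch below is an illustrative stand-in, not the paper's method; the demonstration data, the one-dimensional state encoding, and the nearest-neighbor lookup are all assumptions for clarity.

```python
# Hedged sketch: behavioral cloning via nearest-neighbor lookup over
# recorded (state, action) pairs. No reward function is ever defined.

demos = [(0.0, "reach"), (0.5, "grasp"), (1.0, "lift")]  # (state, action)

def cloned_policy(state):
    """Imitate the demonstrator: copy the action taken at the closest
    observed state."""
    nearest_state, action = min(demos, key=lambda sa: abs(sa[0] - state))
    return action

print(cloned_policy(0.4))  # closest demonstrated state is 0.5
```

Real systems replace the lookup with a learned regressor over high-dimensional observations, but the contract is the same: the policy's supervision comes entirely from what the human did.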

New research reveals how adolescents envision artificial intelligence supporting their health journeys, prioritizing understanding and control over simple efficiency.

A new study explores the potential of teleoperated humanoid robots to provide stable endoscopic visualization and versatile assistance during surgical procedures.
![The system navigates the inherent limitations of base decision models-whether directly trained ($DT$), globally approximated, or locally surrogated-by generating explanations constrained not by the model itself, but by a meta-interpretation of query language, informed by both user input characteristics (background and distance) and the model’s own embedded representations, acknowledging that any explanation is fundamentally a prophecy of future inadequacy.](https://arxiv.org/html/2602.23810v1/2602.23810v1/workflow.png)
A new framework allows users to not just see why a model made a decision, but to actively reason about those explanations and explore alternative scenarios.
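"Exploring alternative scenarios" typically means counterfactual queries: given a decision, what minimal change to the input would have flipped it? The toy model, feature names, and linear search below are hypothetical illustrations of that idea, not the framework's actual machinery.

```python
# Hypothetical sketch: a counterfactual query against a toy decision model.

def model(income, debt):
    """Toy loan-approval rule: approve when income sufficiently exceeds debt."""
    return income - 0.5 * debt > 30

def find_counterfactual(income, debt, step=1.0, max_iter=1000):
    """Search for the smallest income increase that flips a rejection
    into an approval (one simple 'alternative scenario' a user might pose)."""
    for i in range(max_iter):
        candidate = income + i * step
        if model(candidate, debt):
            return candidate
    return None

# A rejected applicant asks: "what income would have changed the decision?"
print(model(40, 30))              # rejected
print(find_counterfactual(40, 30))  # income at which the decision flips
```

The point of an interactive framework is that such queries become first-class: the user reasons *about* the explanation rather than merely reading it.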

A new approach to pneumatic actuator design leverages geometric principles and constraint layers to achieve precise and reliable soft robotic movement.
![An agent learns to navigate a complex world not by directly modeling its dynamics, but by constructing a verifiable world model-a learned representation assessed by a dedicated verifier-that simultaneously optimizes performance and guarantees adherence to a user-defined specification $\varphi$, effectively decoupling policy learning from precise environmental knowledge and enabling runtime certification of both behavioral correctness and model abstraction quality.](https://arxiv.org/html/2602.23997v1/2602.23997v1/x1.png)
A new framework combines reinforcement learning with formal verification to create AI agents capable of reliable performance in dynamic, real-world environments.
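One concrete way to combine a learned policy with a formal specification is runtime shielding: before executing an action, check whether the predicted next state satisfies the spec, and fall back to a safe action if not. Everything below (the spec `phi`, the toy dynamics, the no-op fallback) is an illustrative assumption, far simpler than the paper's verifier, but it shows the shape of runtime certification.

```python
# Minimal sketch of runtime certification against a user-defined spec phi.

def phi(state):
    """Safety specification: the agent must stay within [0, 10]."""
    return 0 <= state <= 10

def policy(state):
    """Toy learned policy: always move right."""
    return 1

def verified_step(state, action, spec):
    """Apply the action only if the predicted next state satisfies the
    spec; otherwise hold position (a simple shielding scheme) and report
    that the proposed action failed certification."""
    predicted = state + action
    if spec(predicted):
        return predicted, True
    return state, False

state, certified = 9, True
for _ in range(3):
    state, ok = verified_step(state, policy(state), phi)
    certified = certified and ok
print(state, certified)  # the shield stops the agent at the boundary
```

The decoupling the caption describes is visible even here: the policy is free to be wrong, because correctness is enforced by the verifier at execution time rather than baked into training.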

A comprehensive study reveals how the way robots are told to move significantly impacts their performance and ability to generalize to new tasks.
A new analysis explores how generative AI technologies could reshape legal conflict resolution, presenting both opportunities and challenges for the justice system.