Robots Learn to Navigate From Sight Alone

A legged robot's exploration of an environment establishes a traversability prior (a learned model of which space is navigable), which an aerial robot then leverages to plan safe passage through the same volume. This cross-embodiment transfer of knowledge about spatial feasibility enables trajectory generation for a different robotic platform.

A new diffusion-based framework allows robots to infer safe paths and navigate complex environments directly from visual input, without relying on predefined maps or extensive training data.
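To make the idea concrete, the sketch below illustrates the general mechanics of diffusion-style trajectory generation: a noisy candidate path is iteratively refined toward trajectories that reach a goal while staying in traversable space. It is not the paper's implementation; the hand-coded `prior_score` and `goal_score` functions, the waypoint count, and the step schedule are all illustrative assumptions standing in for a learned, vision-conditioned model.

```python
# Minimal illustrative sketch (assumptions throughout, not the authors' method):
# reverse-diffusion-style sampling of a 2-D waypoint trajectory, guided by a mock
# "traversability prior". A real system would replace the hand-coded score
# functions with a learned network conditioned on visual features.
import numpy as np

rng = np.random.default_rng(0)

N_WAYPOINTS = 32          # trajectory length (assumed)
T_STEPS = 50              # number of reverse-diffusion steps (assumed)
START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 0.0])
OBSTACLE_C, OBSTACLE_R = np.array([5.0, 0.0]), 1.5   # mock non-traversable region

def prior_score(traj):
    """Stand-in for a learned traversability prior: pushes waypoints out of the
    non-traversable region (a real model would score traversability from vision)."""
    diff = traj - OBSTACLE_C
    dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-8
    inside = (dist < OBSTACLE_R * 1.5).astype(float)
    return inside * diff / dist        # unit push away from the obstacle centre

def goal_score(traj):
    """Pulls the trajectory toward a straight start-to-goal line (goal-reaching term)."""
    alphas = np.linspace(0.0, 1.0, len(traj))[:, None]
    straight = START + alphas * (GOAL - START)
    return straight - traj

def denoise_step(traj, t):
    """One reverse-diffusion update: drift toward safe, goal-reaching trajectories
    plus annealed noise that shrinks to zero as t approaches 0."""
    step = 0.1
    noise_scale = 0.3 * t / T_STEPS
    drift = goal_score(traj) + 2.0 * prior_score(traj)
    return traj + step * drift + noise_scale * rng.normal(size=traj.shape)

# Start from pure noise and iteratively denoise into a feasible trajectory.
traj = rng.normal(scale=3.0, size=(N_WAYPOINTS, 2))
for t in range(T_STEPS, 0, -1):
    traj = denoise_step(traj, t)
    traj[0], traj[-1] = START, GOAL   # clamp endpoints (conditioning on start/goal)

min_clearance = np.linalg.norm(traj - OBSTACLE_C, axis=1).min()
print(f"min distance to obstacle: {min_clearance:.2f} (obstacle radius {OBSTACLE_R})")
```

In this toy setup, the same traversability term could guide trajectories for different platforms (for example, a legged explorer and an aerial follower), which is the intuition behind the cross-embodiment transfer described above.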