Legged Robotics Takes a Step Forward: Introducing GuangMing-Explorer

Author: Denis Avetisyan


A new four-legged robot platform and hierarchical planning framework enables robust and efficient autonomous exploration in challenging real-world environments.

The system enables a quadrupedal robot to independently navigate and investigate environments through an automated exploration pipeline.

This paper details the design and validation of GuangMing-Explorer, a fully integrated system for LiDAR-based SLAM, localization, and mapping.

Despite advances in robotics, creating a truly integrated system for robust autonomous exploration remains challenging. This paper introduces "GuangMing-Explorer: A Four-Legged Robot Platform for Autonomous Exploration in General Environments," a fully integrated hardware and software platform designed to address this need. We demonstrate a hierarchical exploration framework deployed on a quadrupedal robot, enabling efficient and accurate operation in complex, real-world environments. Could this platform serve as a foundational tool for deploying robots in challenging scenarios like search and rescue, environmental monitoring, or infrastructure inspection?


Decoding the Unknown: Autonomous Navigation as Reverse Engineering

The pursuit of genuinely autonomous robots necessitates a sophisticated interplay between perception, localization, and planning, particularly when operating within uncharted territories. These robots must not simply execute pre-programmed routes, but actively interpret sensory data to build a representation of their surroundings. This involves identifying objects, understanding terrain, and simultaneously determining the robot’s own position within that environment – a process known as simultaneous localization and mapping (SLAM). Effective planning then leverages this environmental understanding to chart a safe and efficient path toward a designated goal, dynamically adjusting to unforeseen obstacles or changes. The complexity arises from the inherent uncertainties in sensor readings and the computational demands of processing information in real-time, pushing the boundaries of robotic intelligence and algorithm design. Ultimately, a robot’s ability to navigate and operate independently hinges on its capacity to perceive, understand, and react to the unknown, mirroring aspects of biological intelligence.

Conventional robotic navigation systems, frequently reliant on pre-programmed maps or highly structured environments, demonstrate limited adaptability when confronted with the unpredictable nature of real-world settings. These approaches often falter due to challenges like dynamic obstacles, varying lighting conditions, and the sheer complexity of unstructured spaces. For instance, Simultaneous Localization and Mapping (SLAM) algorithms, while powerful, can become computationally expensive and prone to drift in large or feature-poor environments. This inefficiency stems from the need to constantly update and refine maps while simultaneously determining the robot’s position within them, creating a significant bottleneck for truly autonomous and sustained exploration. Consequently, research is increasingly focused on developing more robust and efficient algorithms – including those inspired by biological systems – that can enable robots to navigate reliably in challenging, real-world conditions.

Successful autonomous exploration hinges critically on a robot’s ability to accurately determine its position within an environment – a process known as localization – and to construct a representation of that environment – mapping. Without a dependable understanding of where it is and what surrounds it, a robot cannot effectively plan paths, avoid obstacles, or achieve complex goals. Current research focuses on Simultaneous Localization and Mapping (SLAM) techniques, employing sensors like cameras and lidar to build these internal representations. However, achieving robust and efficient SLAM remains a significant challenge, particularly in dynamic or visually ambiguous environments where sensor data can be noisy or incomplete. The fidelity of the map directly impacts the robot’s ability to navigate and interact with the world, meaning improvements in localization and mapping are foundational to unlocking truly autonomous capabilities.

The robotic platform features a specific configuration designed for its intended functionality.

GuangMing-Explorer: A System for Dissecting Reality

GuangMing-Explorer is a fully integrated autonomous exploration system designed and implemented using the Unitree Go2 mobile robotic platform. This quadrupedal base provides the locomotion and physical structure for the system’s onboard sensors and computing hardware. The Go2 platform was selected for its demonstrated capabilities in navigating complex terrains and its payload capacity, which accommodates the LiDAR sensors, processing unit, and power systems required for autonomous operation. The complete system facilitates untethered, real-world exploration and data collection without the need for remote control or pre-mapped environments.

GuangMing-Explorer utilizes a dual LiDAR sensor configuration to address the limitations inherent in single-sensor systems. The HESAI XT16 provides long-range detection, with a maximum range of up to 200 meters and a 360° horizontal field of view, enabling the system to perceive distant obstacles and map large areas. Complementing this, the DJI Livox MID-360 offers a shorter range, optimized for detailed close-range perception and improved performance in environments with reflective surfaces. This sensor provides a 360° field of view with higher point density for enhanced object recognition and accurate mapping of immediate surroundings, particularly in complex or cluttered spaces. Data fusion between the two sensors allows for robust and reliable environmental understanding across varying distances and conditions.
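Fusing the two LiDAR streams ultimately means expressing both point clouds in a common robot frame before merging them. A minimal NumPy sketch of that step, using placeholder extrinsic transforms (the real values would come from the system's calibration, not these identity-plus-offset matrices):

```python
import numpy as np

def transform_points(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :3]

def fuse_clouds(cloud_a, T_base_a, cloud_b, T_base_b):
    """Express both clouds in the robot base frame and concatenate them."""
    return np.vstack([
        transform_points(cloud_a, T_base_a),
        transform_points(cloud_b, T_base_b),
    ])

# Hypothetical extrinsics: sensor B mounted 0.2 m ahead of sensor A
# along the base frame's x axis; real values come from calibration.
T_a = np.eye(4)
T_b = np.eye(4)
T_b[0, 3] = 0.2

merged = fuse_clouds(np.array([[1.0, 0.0, 0.0]]), T_a,
                     np.array([[1.0, 0.0, 0.0]]), T_b)
```

With accurate extrinsics and synchronized timestamps, the merged cloud behaves like the output of a single sensor with both long-range and high-density coverage.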

The GuangMing-Explorer system utilizes the NVIDIA Jetson Orin NX as its primary onboard computer to facilitate real-time data processing and control. This embedded computing platform pairs a multi-core Arm CPU with an Ampere-architecture GPU, delivering up to 100 TOPS of AI performance within a 10–25 W power envelope. The Jetson Orin NX enables the execution of computationally intensive algorithms, including LiDAR odometry via Fast-LIO2 and simultaneous localization and mapping (SLAM), directly on the robot, eliminating the need for external computing resources and minimizing latency. The system’s architecture is designed to leverage the Jetson Orin NX’s capabilities for sensor data fusion, path planning, and autonomous navigation tasks in complex environments.

GuangMing-Explorer utilizes Fast-LIO2, a LiDAR odometry and mapping algorithm, to achieve accurate robot localization and pose estimation. Fast-LIO2 is a LiDAR-inertial odometry framework that operates directly on raw point cloud data, enabling real-time performance on embedded systems like the NVIDIA Jetson Orin NX. Rather than extracting features or managing keyframes, the algorithm registers raw points against a map stored in an incremental k-d tree (ikd-Tree), which keeps map updates cheap and limits drift over extended trajectories. By fusing LiDAR measurements with inertial measurements from an onboard IMU in a tightly coupled iterated extended Kalman filter, Fast-LIO2 provides robust and accurate six-degree-of-freedom pose estimates, essential for autonomous navigation and mapping in complex environments.
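Fast-LIO2 itself propagates a full state on a manifold and corrects it with an iterated EKF update against the map; as a toy illustration of just the IMU-propagation half of LiDAR-inertial fusion, here is a planar dead-reckoning step. This is not the actual Fast-LIO2 filter, and all rates and inputs are invented:

```python
import numpy as np

def propagate(state, gyro_z, accel_body, dt):
    """Toy planar IMU propagation: integrate heading, then rotate the
    body-frame acceleration into the world frame and integrate velocity
    and position. Fast-LIO2 does the 3-D analogue on a manifold,
    followed by an iterated EKF update against the LiDAR map."""
    x, y, theta, vx, vy = state
    theta = theta + gyro_z * dt
    c, s = np.cos(theta), np.sin(theta)
    ax = c * accel_body[0] - s * accel_body[1]
    ay = s * accel_body[0] + c * accel_body[1]
    vx, vy = vx + ax * dt, vy + ay * dt
    return np.array([x + vx * dt, y + vy * dt, theta, vx, vy])

# One second at a hypothetical 100 Hz IMU rate, constant forward accel.
state = np.zeros(5)
for _ in range(100):
    state = propagate(state, 0.0, (1.0, 0.0), 0.01)
```

Pure propagation like this drifts without bound, which is exactly why the LiDAR update step (and accurate time synchronization between the sensors, discussed next) matters.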

Precision Through Synchronization: Calibrating the System’s Nervous System

Precise calibration of LiDAR sensors is achieved through the implementation of the Precision Time Protocol (PTP), IEEE 1588. PTP synchronizes clocks across the network, enabling the system to accurately timestamp LiDAR point clouds. This temporal alignment is critical because it compensates for the inherent delays and offsets between multiple sensors and the processing unit. Without accurate time synchronization, point clouds from different sensors would be misaligned, leading to inaccuracies in the reconstructed environment and negatively impacting odometry and mapping performance. The system relies on hardware-level time stamping to minimize latency and achieve sub-microsecond synchronization, which is essential for maintaining data consistency and enabling reliable operation in dynamic environments.
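The offset PTP corrects for can be illustrated with the standard IEEE 1588 two-way message exchange. A sketch of the offset/delay arithmetic, assuming a symmetric network path; the timestamps are invented for the example:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic IEEE 1588 exchange:
    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes the path delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Slave clock 5 us ahead of the master, 2 us path delay each way
# (all times in microseconds, in each clock's own timebase).
offset, delay = ptp_offset_and_delay(t1=0.0, t2=7.0, t3=10.0, t4=7.0)
```

The slave then slews its clock by `-offset`; with hardware timestamping this loop converges to the sub-microsecond agreement the text describes.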

The calibration process is directly linked to the performance of Fast-LIO2, as accurate alignment of LiDAR data is fundamental to reliable LiDAR odometry. Errors in calibration introduce systematic biases in the estimated robot trajectory, negatively impacting the accuracy of subsequent pose estimation and mapping. Specifically, precise calibration minimizes distortions in point cloud registration, enabling Fast-LIO2 to more effectively extract features and maintain consistent localization. This is achieved by ensuring that time synchronization between LiDAR sensors and the inertial measurement unit is maintained within acceptable tolerances, reducing drift and improving the overall robustness of the odometry solution.

Quantitative analysis demonstrates the mapping accuracy of the system, yielding an Average Absolute Error of 1.0 cm when compared to ground truth data obtained from manually measured lines. This indicates a typical deviation of 1.0 cm between the system’s estimated map and the physical environment. Furthermore, the Maximum Error recorded was 2.5 cm, representing the largest observed discrepancy in any single measurement. These metrics establish a high degree of precision in the generated maps, with errors consistently remaining within a narrow range around the average value.
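Both reported metrics are simple to compute once estimated and ground-truth line measurements are paired up. A sketch with invented measurements (the paper's ground truth comes from manually measured lines; these numbers are placeholders):

```python
def error_stats(estimated, ground_truth):
    """Average absolute error and maximum error over paired measurements."""
    errors = [abs(e - g) for e, g in zip(estimated, ground_truth)]
    return sum(errors) / len(errors), max(errors)

# Hypothetical line lengths in metres vs. tape-measured ground truth.
aae, max_err = error_stats([4.012, 2.495, 6.021], [4.000, 2.500, 6.000])
```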

Accurate robot pose estimation is fundamental to autonomous navigation and operation within dynamic environments. The system’s ability to precisely determine the robot’s position and orientation – including $x$, $y$, and $\theta$ values – allows for the creation of reliable path plans that avoid obstacles and efficiently traverse the workspace. This capability extends to effective exploration, where the robot can build a map of its surroundings while simultaneously localizing itself, even as the environment changes due to moving objects or people. Without precise pose estimation, path planning becomes unpredictable and exploration is severely limited, hindering the robot’s ability to operate safely and effectively.

Unveiling the Unknown: Planning and Exploration as a Form of Intelligence

GuangMing-Explorer achieves robust navigation through a hierarchical planning system, integrating both global and local approaches. The robot first employs TARE, a global planner, to establish a broad, long-term path toward unexplored areas, effectively mapping out the overall trajectory. However, TARE’s output is then refined by Pure Pursuit, a local planner, which focuses on immediate obstacle avoidance and precise trajectory tracking. This combination allows the robot to efficiently navigate complex environments; TARE provides the ‘where to go’ while Pure Pursuit determines ‘how to get there’ safely and accurately, enabling dynamic adjustments in response to unforeseen obstacles and ensuring smooth, real-time operation. This synergistic approach is critical for maximizing exploration speed and completeness.
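Pure Pursuit's core reduces to a one-line curvature command toward a lookahead point on the global path. A minimal planar sketch of that geometry (lookahead-point selection and velocity control are omitted; the standard textbook formulation, not the paper's exact implementation):

```python
import math

def pure_pursuit_curvature(pose, lookahead_point):
    """Curvature command kappa = 2 * y_l / L^2, where y_l is the
    lookahead point's lateral offset in the robot frame and L its
    distance. Positive kappa turns left, negative turns right."""
    x, y, theta = pose
    dx, dy = lookahead_point[0] - x, lookahead_point[1] - y
    # Rotate the displacement into the robot frame (lateral component).
    y_l = -math.sin(theta) * dx + math.cos(theta) * dy
    L2 = dx * dx + dy * dy
    return 2.0 * y_l / L2

# Robot at the origin facing +x; a point straight ahead needs no turn.
k = pure_pursuit_curvature((0.0, 0.0, 0.0), (2.0, 0.0))
```

The global planner only needs to keep feeding waypoints; this local law converts them into smooth arcs at control rate, which is what makes the TARE-plus-Pure-Pursuit split effective.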

To navigate unfamiliar spaces, GuangMing-Explorer leverages sampling-based methods, prominently featuring the Rapidly-exploring Random Tree (RRT) algorithm. These techniques enable the robot to efficiently explore and map its surroundings by constructing a tree of possible paths from a starting point. RRT works by randomly sampling points in the environment and connecting them to the nearest node in the tree, iteratively expanding the search space. This probabilistic approach is particularly effective in high-dimensional spaces and complex environments, allowing the robot to quickly identify promising areas for further investigation and avoid getting trapped in local optima. By intelligently sampling the configuration space, GuangMing-Explorer can efficiently discover feasible paths and build a comprehensive map of its surroundings, forming the foundation for successful autonomous navigation and exploration.
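The sample-steer-extend loop described above fits in a short function. A minimal 2-D RRT sketch, with obstacle checking stubbed out as always-free and illustrative parameters (not the paper's planner configuration):

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, iters=2000,
        goal_tol=0.5, seed=0):
    """Minimal 2-D RRT: steer the nearest tree node a fixed step toward
    a random sample; return a path once a node lands near the goal."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:           # walk parents back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (4.0, 4.0), lambda p: True,
           [(-1.0, 5.0), (-1.0, 5.0)])
```

A real deployment replaces `is_free` with a collision check against the occupancy map and typically post-processes the path (shortcutting, smoothing) before handing it to the local planner.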

GuangMing-Explorer employs a frontier-based exploration strategy to navigate and map unknown spaces, prioritizing areas that offer the most potential for new information. This approach identifies boundaries between explored and unexplored regions – the “frontiers” – and evaluates each one using an information gain metric. Essentially, the robot doesn’t simply move forward randomly; it actively seeks out locations where sensing will dramatically reduce uncertainty about the environment. By quantifying the value of exploring each frontier – considering factors like size, distance, and potential visibility – the system directs the robot towards the most informative areas first. This targeted exploration, driven by maximizing information gain, allows GuangMing-Explorer to efficiently build a complete map and achieve comprehensive environmental coverage, surpassing the performance of simpler navigation methods.
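A frontier scorer of the kind described might weigh expected information gain against travel cost. The sketch below uses frontier size (in cells) as a proxy for information gain and straight-line distance as the cost; both the proxy and the weights are illustrative assumptions, not the paper's actual metric:

```python
import math

def score_frontier(frontier, robot_pos, info_weight=1.0, dist_weight=0.2):
    """Score = information gain (approximated by frontier size)
    minus a travel-cost penalty to the frontier's centroid."""
    cx = sum(p[0] for p in frontier) / len(frontier)
    cy = sum(p[1] for p in frontier) / len(frontier)
    gain = info_weight * len(frontier)
    cost = dist_weight * math.dist(robot_pos, (cx, cy))
    return gain - cost

# Two candidate frontiers in grid coordinates: small-and-near vs. large-and-far.
frontiers = [
    [(5, 5), (5, 6), (5, 7)],
    [(20, 0), (20, 1), (20, 2), (20, 3), (20, 4), (20, 5)],
]
best = max(frontiers, key=lambda f: score_frontier(f, (0, 0)))
```

Tuning `dist_weight` trades exploration thoroughness against travel time; with these weights the larger, farther frontier wins, matching the "most informative first" behavior in the text.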

Evaluations within standard office environments reveal that GuangMing-Explorer attains a remarkable 92.39% in environment coverage completeness. This figure represents a substantial advancement over the foundational TARE algorithm, which previously achieved 78.27% under identical conditions. The improvement underscores the efficacy of the integrated planning and exploration strategies employed by GuangMing-Explorer, allowing for more thorough and efficient mapping of previously unknown spaces. Such a high degree of coverage is critical for applications requiring comprehensive environmental understanding, like autonomous cleaning, security patrols, and detailed inspection tasks, demonstrating a significant step toward robust and reliable robotic autonomy.

A critical component of GuangMing-Explorer’s navigational success lies in its computational efficiency; the system consistently computes a feasible path plan in under one second per iteration. This rapid processing speed is achieved through a streamlined integration of both global and local planning algorithms, allowing for real-time adaptation to dynamic environments. The low latency ensures the robot can react swiftly to unexpected obstacles or changes in the environment, maintaining continuous progress during exploration. Such quick calculations are especially valuable in time-sensitive applications and contribute directly to the robot’s ability to achieve high environment coverage, exceeding the performance of previous algorithms by a significant margin.

Test 3 demonstrates that explored volume, traveled distance, and runtime per planning iteration all increase over time.

Beyond Automation: Towards Truly Intelligent Exploration

GuangMing-Explorer’s future development centers on integrating reinforcement learning algorithms to refine both its physical movement and its capacity for independent judgment. This approach moves beyond pre-programmed behaviors, allowing the robot to learn through trial and error within complex environments, ultimately optimizing its gait and navigational choices. By rewarding successful actions and penalizing failures, the system will progressively enhance the robot’s ability to traverse challenging terrain and make informed decisions regarding exploration routes. The anticipated outcome is a more adaptable and resourceful robot capable of navigating unfamiliar landscapes with greater efficiency and autonomy, representing a significant step toward truly intelligent robotic exploration.

The capacity for environmental adaptation represents a significant leap forward for GuangMing-Explorer. Rather than relying on pre-programmed responses to static conditions, the robot will leverage ongoing sensory input to dynamically adjust its movements and investigative priorities. This means navigating previously unseen obstacles, responding to shifts in terrain, and even altering its exploration patterns based on detected anomalies – effectively learning how to explore more efficiently. By continuously refining its understanding of the surroundings, the robot moves beyond simple traversal and toward an optimized strategy for maximizing data collection and achieving its objectives in complex, unpredictable landscapes.

GuangMing-Explorer represents a convergence of critical robotic competencies designed to redefine the limits of independent environmental investigation. The system doesn’t simply react to stimuli; it integrates robust perception – allowing for accurate environmental modeling – with sophisticated planning algorithms that chart optimal exploration routes. Crucially, this is coupled with continuous learning through advanced techniques, enabling the robot to refine its understanding of the environment and adapt its strategies in real-time. This synergistic approach moves beyond pre-programmed behaviors, fostering a system capable of genuine autonomy and allowing it to tackle increasingly complex and unpredictable terrains – ultimately paving the way for more versatile and efficient robotic explorers across diverse fields like disaster response, planetary science, and infrastructure inspection.

The development of GuangMing-Explorer signifies a leap towards robotic systems capable of far more than pre-programmed tasks. This advancement isn’t limited to enhanced exploration; it foreshadows a new generation of robots poised to optimize performance across diverse fields. From streamlining logistical operations in warehouses and enhancing search-and-rescue efforts in unpredictable terrains, to facilitating detailed environmental monitoring and even contributing to infrastructure inspection, the principles guiding GuangMing-Explorer – robust perception, adaptable planning, and continuous learning – are broadly applicable. The resulting systems promise not just increased efficiency, but also a heightened ability to operate autonomously in complex, real-world scenarios, ultimately reducing human risk and maximizing operational potential.

The development of GuangMing-Explorer embodies a relentless pursuit of understanding through deconstruction and reconstruction, a principle keenly articulated by Carl Friedrich Gauss: “If I were to wish for anything, I should wish for more time.” This platform doesn’t simply accept the limitations of existing robotic systems; it actively challenges them through a hierarchical planning framework and robust LiDAR odometry. Its ability to perform autonomous exploration in general environments isn’t about flawless execution from the start, but rather a process of iterative refinement: testing, breaking down complexities, and rebuilding a more capable system. The platform’s core idea, efficient and accurate autonomous exploration, relies on precisely this willingness to dissect and improve upon existing methodologies, embodying Gauss’s sentiment regarding the value of time dedicated to thorough investigation.

Where to Next?

The GuangMing-Explorer platform, in its pursuit of autonomous exploration, neatly sidesteps several long-standing challenges – but only by defining the problem in a specific way. What happens when the ‘general environments’ become genuinely adversarial? The current framework relies on successful LiDAR odometry and hierarchical planning; both falter predictably when presented with deliberately misleading sensory data, or environments designed to exploit the planner’s assumptions. A truly robust explorer requires not just mapping, but belief in its map – a system for quantifying and mitigating uncertainty, and for actively seeking information that resolves it.

Furthermore, the emphasis on efficient exploration begs the question of purpose. The platform navigates and maps; it does not, inherently, understand. Integrating semantic understanding – recognizing objects, inferring affordances, and building a causal model of the environment – is not merely an extension of the current framework, but a fundamental shift. Can a robot, even one capable of flawless SLAM, truly explore without a hypothesis to test, a question to answer?

The logical endpoint of this line of inquiry isn’t a more accurate map, but a robot that discards the map when it becomes demonstrably false – or, more provocatively, a system that creates maps designed to deceive potential adversaries. The real challenge isn’t building a robot that can explore any environment, but one that can redefine the environment itself.


Original article: https://arxiv.org/pdf/2512.15309.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2025-12-19 00:56