Author: Denis Avetisyan
This review details the architecture and implementation of ARTEMIS, a humanoid robot system that recently secured a championship win at RoboCup 2024 through advanced integration of perception, planning, and control.

A hierarchical, model-based system enables robust locomotion, whole-body control, and behavior execution for high-performance humanoid soccer.
Achieving truly dynamic and robust performance remains a central challenge in humanoid robotics, particularly within the complex environment of competitive soccer. This paper details ‘A Hierarchical, Model-Based System for High-Performance Humanoid Soccer’, presenting the integrated hardware and software innovations behind ARTEMIS, the champion of the RoboCup 2024 Adult-Sized Humanoid Soccer Competition. Through a tightly coupled perception, planning, and control framework, encompassing novel hardware design and a sophisticated behavior tree, we demonstrate a significant advancement in autonomous athletic capability. Can this model-based approach pave the way for increasingly sophisticated humanoid robots capable of competing with, and learning from, human athletes?
The Inevitable Chaos of Embodied Intelligence
The RoboCup Humanoid League pushes the boundaries of robotics by simulating the complexities of human soccer. Unlike industrial automation or controlled laboratory environments, a robotic soccer match demands constant adaptation to an ever-changing field – opponents, ball position, and unpredictable collisions all require immediate responses. This isn’t simply a test of locomotion; it’s a challenge in real-time perception, dynamic balance, and strategic maneuvering. Robots must not only walk and kick, but also maintain stability while being jostled, predict the ball’s trajectory, and collaborate – or compete – with other autonomous agents. The league, therefore, serves as a proving ground for robust and adaptable behaviors crucial for deploying robots in other unstructured, real-world scenarios, demanding far more than pre-programmed sequences and precise movements.
Conventional robotic control systems, designed for predictable environments and precise, pre-programmed motions, frequently falter when confronted with the chaotic dynamism of soccer. The sport’s inherent unpredictability – rapid changes in direction, collisions with other robots or the ball, and uneven playing surfaces – introduces significant disturbances that challenge a robot’s ability to maintain balance and execute complex maneuvers. Unlike industrial robots operating in controlled settings, a soccer-playing robot must continuously adapt to unforeseen circumstances, requiring sophisticated algorithms to compensate for external forces and maintain postural stability. This often manifests as jerky movements, loss of balance, or an inability to quickly recover from perturbations, hindering a robot’s performance and limiting its capacity for agile, human-like play. The core difficulty lies in decoupling precise motor control from the constant need for reactive adjustments, demanding a new generation of control architectures capable of seamlessly integrating both.
The pursuit of truly capable humanoid robots in soccer demands a convergence of low-level motor skills and sophisticated cognitive abilities. Current robotic systems often excel at executing pre-programmed movements, but falter when confronted with the rapidly changing conditions of a match. Reaching human-level performance requires more than just precise joint control; it necessitates the integration of planning, perception, and learning algorithms that allow a robot to anticipate opponent actions, adapt to unexpected events, and formulate effective strategies. This involves developing systems that determine not only how to move, but also when and why, bridging the gap between reactive execution and proactive, intelligent gameplay. Such a holistic approach promises robots that don’t merely react to the ball, but actively participate in the dynamic flow of the game, mirroring the intuitive decision-making of their human counterparts.

Synergistic Design: Beyond Modular Assembly
ARTEMIS employs a fully integrated design philosophy, moving beyond modular component assembly to achieve synergistic performance. This involves co-design of hardware – specifically actuators, joints, and structural elements – with corresponding software control systems. The platform utilizes custom-designed actuators and torque-controlled joints, paired with whole-body control algorithms, to facilitate complex movements and maintain dynamic stability. This holistic approach extends to sensor integration, data processing, and real-time decision-making, allowing ARTEMIS to coordinate movements and adapt to changing environmental conditions. The goal is not simply to assemble capable components, but to create a system where each element’s performance is optimized by its interaction with the others.
ARTEMIS utilizes quasi-direct-drive actuators and torque-controlled joints to achieve high-performance locomotion and manipulation. Unlike traditional geared systems, quasi-direct-drive minimizes backlash and maximizes responsiveness, enabling precise control of joint positions and velocities. Torque control, implemented at each joint, allows for the application of specific forces, crucial for maintaining balance during dynamic movements and absorbing impacts. This configuration results in a system with a high power-to-weight ratio and improved agility, allowing for rapid changes in direction and forceful interactions with the environment. The actuators are designed to handle both the sustained loads of locomotion and the peak torques required for kicking and collision recovery, contributing to the robot’s resilience and overall athletic capability.
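Joint-level torque control of the kind described above is commonly implemented as a PD law around a desired trajectory plus a feedforward term, saturated at the actuator limit. The sketch below illustrates that pattern; the gains, limits, and the feedforward input are hypothetical choices, not values from the ARTEMIS paper.

```python
# Illustrative joint-level torque law (PD plus feedforward) for one
# torque-controlled joint. All numeric values here are hypothetical.

def joint_torque(q, qd, q_des, qd_des, tau_ff, kp=80.0, kd=2.0, tau_max=40.0):
    """Commanded torque for one joint, saturated at the actuator's peak torque."""
    tau = kp * (q_des - q) + kd * (qd_des - qd) + tau_ff
    # Clamp to respect the actuator's torque limit.
    return max(-tau_max, min(tau_max, tau))
```

A quasi-direct-drive transmission makes this loop effective in practice: with little backlash or reflected inertia, the commanded torque closely matches the torque actually delivered at the joint.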
Whole-body control algorithms within the ARTEMIS platform utilize a centralized model predictive control (MPC) approach to coordinate the robot’s numerous degrees of freedom. These algorithms process sensor data – including inertial measurement unit (IMU) readings, foot force sensor data, and joint position feedback – to compute optimal joint trajectories. The resulting trajectories ensure dynamic stability during locomotion and manipulation, enabling the robot to maintain balance while simultaneously executing complex actions like kicking. Furthermore, the system incorporates real-time replanning capabilities, allowing it to adapt to unexpected disturbances and changes in the environment, such as contact with the ball or opposing players, thereby maintaining consistent performance throughout gameplay.
ARTEMIS achieves continuous locomotion during dynamic actions through in-gait kicking capabilities. This functionality bypasses the typical robotic halting or significant deceleration required for kicking motions by integrating kick execution directly into the robot’s walking gait. The system predicts and compensates for disturbances caused by the kick, maintaining balance and forward momentum via coordinated adjustments in joint trajectories and center of mass control. This is accomplished through a combination of predictive control algorithms and the robot’s quasi-direct-drive actuators, enabling rapid torque adjustments and precise force control during the swing leg’s impact with the ball, all while the robot continues to step with its supporting leg.

Perception as Dynamic Cartography
The Navigation System employs Dynamic Augmented Visibility Graphs (DAVGs) as the foundational method for path planning. Traditional visibility graphs are static, calculated once based on a fixed environment map. DAVGs, however, are continuously updated to account for both the fixed geometry of the environment – representing static obstacles – and the real-time positions of dynamic agents, specifically the opposing players. This dynamic augmentation involves recalculating graph edges and node connectivity as player positions change, allowing the system to generate paths that avoid predicted collisions. The graph construction prioritizes efficiency by only considering relevant nodes and edges within a defined search radius, and utilizes heuristics to estimate the cost of traversing each edge, leading to optimized path solutions.
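The core idea, rebuilding edge validity against the current opponent positions and then searching the resulting graph, can be sketched as follows. This toy version treats opponents as point obstacles with a fixed clearance radius and runs Dijkstra over all mutually visible points; the paper's DAVG construction (search radii, edge heuristics) is more elaborate.

```python
import heapq
import math

def segment_clear(a, b, obstacles, clearance=0.5):
    """True if segment a-b keeps at least `clearance` from every obstacle point."""
    (ax, ay), (bx, by) = a, b
    for (ox, oy) in obstacles:
        dx, dy = bx - ax, by - ay
        L2 = dx * dx + dy * dy
        # Project the obstacle onto the segment, clamped to its endpoints.
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((ox - ax) * dx + (oy - ay) * dy) / L2))
        px, py = ax + t * dx, ay + t * dy
        if math.hypot(ox - px, oy - py) < clearance:
            return False
    return True

def plan_path(nodes, start, goal, obstacles):
    """Dijkstra over a visibility graph rebuilt for the current obstacle set."""
    pts = list(nodes) + [start, goal]
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v in pts:
            if v != u and segment_clear(u, v, obstacles):
                nd = d + math.hypot(v[0] - u[0], v[1] - u[1])
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Because the graph is rebuilt from the latest opponent positions on every planning cycle, a path that was valid a moment ago is simply recomputed when a defender moves into it.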
The Perception System employs the YOLOv8 object detection model to provide real-time identification of critical elements within the robot’s operational environment. Specifically, YOLOv8 is trained to accurately locate the ball, all active robots (both teammate and opponent), and pre-defined landmarks. This identification process generates bounding box coordinates and confidence scores for each detected object, enabling the system to differentiate between object types and assess detection reliability. The resulting data stream provides a continuous, updated representation of the surrounding environment, which is essential for downstream navigation, localization, and strategic decision-making.
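A typical downstream step, shown here as a sketch, is to threshold the detector's confidence scores and index the surviving detections by class before handing them to navigation and localization. The tuple layout (label, confidence, bounding box) mirrors what a detector like YOLOv8 emits; the threshold value is a hypothetical choice.

```python
# Sketch of detection post-processing: keep detections above a confidence
# threshold and group them by class label for downstream modules.

def filter_detections(detections, min_conf=0.5):
    """Group confident detections by class label.

    detections: iterable of (label, confidence, bounding_box) tuples.
    Returns {label: [(confidence, bounding_box), ...]}.
    """
    by_class = {}
    for label, conf, box in detections:
        if conf >= min_conf:
            by_class.setdefault(label, []).append((conf, box))
    return by_class
```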
The Localization System determines the robot’s position and orientation using the CLAP (Continuous Localization and Pose estimation) algorithm. Isolated testing of CLAP demonstrates high accuracy, quantified by a Mean Squared Error (MSE) of $0.0357 m^2$ and a Mean Absolute Error (MAE) of $0.1651 m$. These metrics indicate that the system’s pose estimation consistently remains within a small margin of error, facilitating precise navigation and interaction with the environment. Input for CLAP is derived from the Perception System’s identification of environmental features like the ball, other robots, and designated landmarks.
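The reported metrics follow the standard definitions: MSE averages squared position errors (hence units of $m^2$) and MAE averages absolute errors (units of $m$). A minimal computation over paired 2-D estimates, using illustrative sample values rather than the paper's evaluation data:

```python
import math

def position_errors(estimates, ground_truth):
    """Return (MSE in m^2, MAE in m) for paired 2-D position estimates."""
    sq, ab = [], []
    for (ex, ey), (gx, gy) in zip(estimates, ground_truth):
        err = math.hypot(ex - gx, ey - gy)  # Euclidean position error
        sq.append(err ** 2)
        ab.append(err)
    n = len(sq)
    return sum(sq) / n, sum(ab) / n
```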
Mid-Level Planning and High-Level Behavior function as a hierarchical control system. Mid-Level Planning receives desired waypoints from the High-Level Behavior module and generates dynamically feasible, collision-free trajectories using optimization-based methods. These trajectories account for robot kinematics and dynamics, ensuring smooth and achievable movements. The High-Level Behavior module, responsible for strategic decision-making, determines overall mission goals and sequences waypoints for the Mid-Level Planner, effectively orchestrating actions based on the perceived environment and game state. This division of labor allows for both reactive obstacle avoidance and proactive tactical execution, contributing to overall system effectiveness.
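The interface between the two layers can be sketched as: the high-level behavior hands down a waypoint list, and the mid-level planner expands it into a time-sampled trajectory obeying a speed limit. The paper's mid-level planner is optimization-based and dynamics-aware; the linear interpolation below only illustrates the division of labor, with hypothetical speed and sampling parameters.

```python
import math

def timestamped_trajectory(waypoints, v_max=0.6, dt=0.1):
    """Linearly interpolate waypoints at speed v_max, sampled every dt seconds."""
    traj = [waypoints[0]]
    for (ax, ay), (bx, by) in zip(waypoints, waypoints[1:]):
        d = math.hypot(bx - ax, by - ay)
        # Number of dt-sized steps needed to cover this leg at v_max.
        steps = max(1, int(math.ceil(d / (v_max * dt))))
        for k in range(1, steps + 1):
            t = k / steps
            traj.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return traj
```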

The Illusion of Stability: Robust Control and Estimation
Collision-free Model Predictive Control (MPC) operates by iteratively solving an optimization problem to determine the optimal sequence of control actions. This optimization minimizes a cost function, typically representing factors like time, energy, and deviation from a desired trajectory, while simultaneously satisfying a set of constraints. These constraints explicitly define the robot’s dynamic limitations – such as joint limits, velocity restrictions, and torque capabilities – and, critically, enforce collision avoidance. The MPC algorithm predicts the future behavior of the system over a finite time horizon, evaluating potential control inputs for collision risk using the robot’s kinematic and dynamic models. By incorporating these collision constraints directly into the optimization process, MPC generates trajectories that ensure safe and efficient movement, effectively preventing impacts with the environment and other agents. The resulting control inputs are then applied, and the process repeats at each time step, allowing the controller to react to changing conditions and maintain collision-free operation.
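A toy receding-horizon loop captures the shape of this process: evaluate candidate commands over a short horizon, discard any rollout that violates the clearance constraint, and apply the first command of the cheapest feasible rollout. The single-integrator dynamics, sampled candidate set, and cost below are deliberate simplifications; the actual controller solves a constrained optimization over the robot's full dynamics at every step.

```python
import math

def mpc_step(pos, goal, obstacles, candidates, horizon=5, dt=0.2, clearance=0.4):
    """Return the best velocity command (vx, vy) for the current state,
    or None if every candidate rollout violates the clearance constraint."""
    best, best_cost = None, math.inf
    for (vx, vy) in candidates:
        x, y = pos
        feasible, cost = True, 0.0
        for _ in range(horizon):
            x, y = x + vx * dt, y + vy * dt  # predict one step ahead
            if any(math.hypot(x - ox, y - oy) < clearance for ox, oy in obstacles):
                feasible = False  # collision constraint violated: discard rollout
                break
            cost += math.hypot(goal[0] - x, goal[1] - y)  # stage cost: distance to goal
        if feasible and cost < best_cost:
            best, best_cost = (vx, vy), cost
    return best
```

Only the first command of the winning rollout is executed; the whole evaluation repeats at the next time step, which is what lets the controller react to a moving ball or opponent.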
The Invariant Extended Kalman Filter (IEKF) provides a robust solution for state estimation in dynamic systems by maintaining consistent filter covariance, preventing issues common in traditional Extended Kalman Filters where covariance can become non-positive definite. This is achieved through a specific covariance update scheme that ensures positive semi-definiteness. Further enhancing robustness, the IEKF is coupled with Contact-Aided Estimation, which incorporates data from contact sensors to refine the robot’s pose estimate. This fusion of inertial measurements and contact data mitigates the effects of sensor noise and external disturbances, improving localization and tracking accuracy, particularly in scenarios with wheel slippage or uncertain ground contact.
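The contact-aided idea can be illustrated with a scalar filter: integrate a noisy velocity in the prediction step, and whenever foot contact is detected, fuse a pseudo-measurement that the stance foot's height is zero. This is a plain Kalman filter on one state, not the invariant EKF used on ARTEMIS, and all noise values are illustrative.

```python
def contact_aided_kf(z0, p0, vel_samples, contacts, dt=0.01, q=1e-3, r=1e-4):
    """Track foot height z; fuse z = 0 pseudo-measurements during contact.

    vel_samples: measured vertical velocities; contacts: booleans per sample.
    Returns the final (state, covariance) pair.
    """
    z, p = z0, p0
    for v, in_contact in zip(vel_samples, contacts):
        # Predict: integrate velocity, inflate covariance by process noise.
        z, p = z + v * dt, p + q
        if in_contact:
            # Update with pseudo-measurement 0.0 (stance foot on the ground).
            k = p / (p + r)                      # Kalman gain
            z, p = z + k * (0.0 - z), (1.0 - k) * p
    return z, p
```

Even this scalar version shows why contact helps: the zero-height pseudo-measurement pins down drift that pure inertial integration would accumulate.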
The robot’s decision-making process is structured around a Behavior Tree (BT) architecture, allowing for flexible and reactive responses to dynamic game conditions. BTs facilitate modularity by organizing behaviors into reusable nodes, enabling complex actions to be built from simpler components. This hierarchical structure permits the robot to evaluate multiple potential actions based on real-time sensor data and game state, selecting the most appropriate behavior based on prioritized conditions. The BT continuously replans and adapts its actions, enabling it to handle unexpected events and maintain robust performance throughout the game, unlike static, pre-programmed sequences.
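The two workhorse composite nodes of any behavior tree, Sequence (succeed only if all children succeed) and Fallback (try children until one succeeds), are easy to sketch. The node names below (kick when close, otherwise search) are hypothetical stand-ins for the paper's actual behaviors, and this minimal version omits the RUNNING status that full BT implementations use for long-lived actions.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    """Tick children in order; fail on the first child failure."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Tick children in order; succeed on the first child success."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, pred): self.pred = pred
    def tick(self, state): return SUCCESS if self.pred(state) else FAILURE

class Action:
    def __init__(self, fn): self.fn = fn
    def tick(self, state): self.fn(state); return SUCCESS

# Re-evaluated every control cycle: kick when the ball is close, else search.
tree = Fallback(
    Sequence(Condition(lambda s: s["ball_dist"] < 0.3),
             Action(lambda s: s.update(action="kick"))),
    Action(lambda s: s.update(action="search")),
)
```

Because the tree is re-ticked from the root every cycle, a change in the game state (the ball rolling away mid-approach, say) automatically reroutes execution without any explicit transition logic.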
During testing, the ARTEMIS robot achieved a kick success rate of 46 out of 50 attempted kicks. Furthermore, control of the kick direction remained within a ±15° tolerance for 46 of those same 50 kicks. These results demonstrate the effectiveness of the robot’s specialized Foot Attachment Design, which utilizes a PETG-CF (Carbon Fiber reinforced Polyethylene Terephthalate Glycol) material. The high success and accuracy rates suggest the foot attachment contributes significantly to stable ball contact and predictable trajectory control during the kicking motion.

Beyond Competition: The Ecosystem of Autonomous Teams
The achievement of championship status by the ARTEMIS humanoid robot at the RoboCup 2024 competition signifies a pivotal advancement in the field of robotics. This success wasn’t merely a demonstration of engineering prowess, but a validation of the underlying principles guiding the development of adaptable and highly capable robotic systems. ARTEMIS’s performance showcased an ability to navigate complex, dynamic environments and execute intricate tasks with a level of autonomy previously unseen in humanoid robots competing at this level. The platform’s success suggests that the longstanding goal of creating robots capable of operating effectively in unstructured, real-world scenarios is increasingly within reach, offering a tangible glimpse into a future where humanoid robots can reliably assist – and even collaborate with – humans in a diverse range of applications.
The ARTEMIS humanoid robot wasn’t simply built for competition; it functioned as a crucial proving ground for complex algorithms governing robotic autonomy. Researchers leveraged the platform to rigorously test and refine innovations in three core areas: perception, enabling the robot to accurately interpret its environment through sensor data; planning, allowing for the creation of dynamic and efficient movement strategies; and control, ensuring precise and stable execution of those plans. This iterative process of development and validation, performed within the demanding context of RoboCup, yielded substantial progress in each field and demonstrated the feasibility of integrating these advanced capabilities into a single, functional robotic system. The resulting advancements represent a significant step towards robots capable of operating independently and adapting to unforeseen challenges in real-world scenarios.
Ongoing research endeavors are concentrating on refining the collaborative capabilities of robotic systems, moving beyond individual performance to focus on cohesive team dynamics. This involves developing algorithms that enable robots to not only perceive their environment and plan individual actions, but also to anticipate the actions of teammates, negotiate tasks, and adapt strategies in real-time. The goal is to create robotic teams capable of complex, coordinated behaviors without direct human intervention, fostering truly autonomous operation. This necessitates advancements in areas such as distributed sensing, decentralized control, and robust communication protocols, ultimately allowing robots to function as a unified, intelligent entity capable of tackling multifaceted challenges in dynamic and unpredictable environments.
The advancements demonstrated by the ARTEMIS platform extend significantly beyond the RoboCup competition, holding considerable promise for real-world applications demanding robust and adaptable robotic systems. Specifically, the algorithms refined through this research – focusing on perception, planning, and control in dynamic environments – are directly transferable to scenarios like search and rescue operations, where robots can navigate complex and dangerous terrain to locate and assist individuals. Similarly, disaster response efforts could be dramatically enhanced through the deployment of robotic teams capable of assessing damage, delivering aid, and supporting recovery efforts in environments unsafe for human responders. Furthermore, this work lays a crucial foundation for improved human-robot collaboration, enabling robots to function not as simple tools, but as intelligent partners capable of anticipating needs and working alongside humans in complex and unpredictable situations, ultimately expanding the scope of what’s possible in fields ranging from manufacturing to healthcare.

The system detailed in this work isn’t merely assembled; it evolves. The architecture, a tightly integrated perception-action loop culminating in demonstrable success at RoboCup 2024, isn’t a triumph of foresight, but a careful acceptance of inevitable adaptation. It echoes Andrey Kolmogorov’s sentiment: “The shortest way to learn is through intuition.” The developers didn’t build a soccer-playing robot; they fostered an ecosystem capable of learning and responding to the unpredictable chaos of the game. Each component, from the motion planning to the whole-body control, isn’t a solved problem, but a prediction of where failure might occur, and a method for graceful recovery. It’s a controlled apocalypse, repeated with every kickoff.
The Long Game
The pursuit of autonomous soccer, embodied here in ARTEMIS, reveals less about dominion over robotics and more about the inherent limits of prediction. Each successful pass, each deftly avoided collision, is not a victory over complexity, but a temporary truce. The system functions, yes, and even achieves a championship – a localized maximum in a vast, chaotic search space. But the underlying fragility remains. Dependencies accumulate, the weight of accumulated choices pressing down on future iterations. Technologies change, dependencies remain.
Future work will inevitably focus on increased robustness – more sensors, faster processors, more sophisticated algorithms. Yet, these are merely attempts to delay the inevitable erosion of performance as the environment shifts and unforeseen circumstances arise. The true challenge lies not in building better robots, but in designing systems that can gracefully degrade, that can adapt and improvise when the predicted world diverges from the actual one.
Architecture isn’t structure – it’s a compromise frozen in time. The field will likely move away from monolithic, tightly coupled systems towards more modular, decentralized architectures. Perhaps the future of humanoid soccer, and of robotics generally, lies not in striving for perfect control, but in embracing controlled instability, in cultivating systems that are resilient not by virtue of their precision, but by their capacity for unexpectedness.
Original article: https://arxiv.org/pdf/2512.09431.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/