Author: Denis Avetisyan
A new robotic system combines remote operation with shared control to tackle the nuanced demands of retail environments where full automation isn’t always feasible.

This review details a dual-arm omnidirectional mobile robot leveraging shared control teleoperation with haptic feedback for enhanced performance in retail applications.
Despite advances in retail automation, fully autonomous robots still struggle with the unpredictable nature of real-world store environments. This paper introduces the "Teleoperated Omni-directional Dual Arm Mobile Manipulation Robotic System with Shared Control for Retail Store," designed to address this limitation through remote human guidance. By combining omnidirectional mobility, dual-arm manipulation, and a VR-based shared control interface, the system enables efficient handling of diverse retail products, even in dynamic scenarios. Could this approach represent a crucial step towards seamlessly integrating robotic assistance into the complex demands of modern retail operations?
The Evolving Landscape of Retail Automation
The retail sector is currently navigating a confluence of challenges that are rapidly accelerating the need for automation. Persistent labor shortages, exacerbated by shifting workforce demographics and increased competition for skilled employees, are straining operational capacity and driving up costs. Simultaneously, customers are demanding increasingly personalized and seamless shopping experiences – from rapid order fulfillment and convenient delivery options to interactive in-store assistance. This dual pressure is forcing retailers to seek innovative solutions beyond traditional methods, exploring technologies like robotics, artificial intelligence, and computer vision to bridge the gap between limited resources and escalating consumer expectations, ultimately ensuring both profitability and customer satisfaction.
Conventional automation, largely built upon rigidly programmed machines and pre-defined pathways, frequently falters when applied to retail environments. These spaces are inherently dynamic – layouts shift for promotions, product arrangements change rapidly, and unpredictable human behavior is the norm. Unlike the controlled conditions of a factory floor, retail presents a constant stream of unstructured data – varying lighting, cluttered aisles, and diverse object shapes – that overwhelms systems designed for predictability. This inability to adapt to unforeseen circumstances and navigate complex, real-world scenarios significantly hinders the broad implementation of traditional robotic solutions, leaving many retail tasks still reliant on manual labor despite increasing demands and persistent staffing challenges.
Retail environments present unique challenges for robotics, demanding systems far beyond the capabilities of traditional industrial automation. Unlike the highly structured settings of factories, stores are characterized by unpredictable layouts, constantly shifting displays, and the presence of numerous people. Consequently, successful automation requires robots equipped with advanced perception – utilizing technologies like computer vision and depth sensors – to dynamically map their surroundings and identify objects. Furthermore, these systems must demonstrate dexterity in handling a wide range of products, from delicate produce to bulky items, and possess robust navigation capabilities to maneuver safely through crowded aisles. The development of such adaptable robotic systems is not merely about replacing human labor; it’s about augmenting it, enabling retailers to optimize operations, enhance customer experiences, and address the growing demands of a rapidly evolving market.

Introducing GriffinX: A Mobile Robotic Platform
GriffinX is a fully in-house developed mobile robotic platform designed for autonomous operation. It utilizes an omni-directional base, allowing for movement in any direction without the need for turning. The robot is equipped with two collaborative robotic arms, providing a redundant and versatile manipulation capability. These arms are engineered to replicate human-like picking motions, enabling the robot to grasp and manipulate a variety of objects with dexterity. This dual-arm configuration increases efficiency and allows for complex task execution in unstructured environments.
GriffinX utilizes a holonomic Omni-Directional Platform, allowing for movement in any direction without requiring the robot to rotate. This is achieved through the implementation of mecanum wheels, providing enhanced maneuverability in constrained environments. Complementing this mobility are the integrated perception systems: a LiDAR unit provides 360° environmental mapping and distance measurements, while the Intel RealSense Depth Camera offers high-resolution depth sensing for object localization and obstacle avoidance. These sensors collectively enable robust perception and autonomous navigation, allowing GriffinX to operate effectively in dynamic and complex spaces.
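The holonomic motion that mecanum wheels provide can be illustrated with the standard inverse-kinematics mapping from a desired body velocity to the four wheel speeds. The wheel radius and base geometry below are placeholder values, not GriffinX's actual dimensions:

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.25, ly=0.20):
    """Inverse kinematics for a 4-mecanum-wheel base.

    vx, vy : desired body-frame velocities (m/s); wz : yaw rate (rad/s)
    r : wheel radius (m); lx, ly : half wheelbase / half track width (m)
    Returns angular speeds (rad/s) for [front-left, front-right,
    rear-left, rear-right] wheels.
    """
    k = lx + ly
    return np.array([
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    ])

# Pure sideways translation (vy only): the wheels spin in the
# alternating pattern characteristic of mecanum drive, so the base
# strafes without rotating.
speeds = mecanum_wheel_speeds(0.0, 0.5, 0.0)
```

Because any combination of `vx`, `vy`, and `wz` maps to a valid set of wheel speeds, the base can translate in any direction while independently controlling its heading, which is what makes tight retail aisles navigable without turn-in-place maneuvers.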
GriffinX utilizes the YOLOv6 object detection system to identify target items within its operational environment. This system provides real-time identification, enabling the robot to locate objects necessary for manipulation. Following object detection, Inverse Kinematics (IK) calculations are employed to determine the necessary joint angles for each of the dual-arm manipulators. The IK solver computes these angles based on the desired end-effector pose – position and orientation – required to grasp and manipulate the identified object. This precise control of arm movements, driven by the IK solution, ensures accurate and reliable object handling throughout the manipulation process.
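The IK step can be sketched with the closed-form solution for a planar two-link arm; this is a deliberately simplified stand-in for the paper's full dual-arm solver, and the link lengths are assumed values rather than GriffinX's real geometry:

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Closed-form IK for a planar 2-link arm (illustrative only;
    l1, l2 are assumed link lengths, not GriffinX's).

    Returns (shoulder, elbow) joint angles in radians for the
    elbow-up solution, or None if the target is unreachable.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target lies outside the arm's workspace
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A detected object's pose from YOLOv6 plus the depth camera would feed the target `(x, y)` here; a production solver additionally handles orientation, the third dimension, joint limits, and redundancy across the two arms.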

Advanced Control for Robust and Adaptive Operation
GriffinX implements a shared control architecture, allowing a human operator to collaborate with the robot in real-time. This is achieved through two primary interfaces: Teleoperation, which provides direct remote control of the robot’s movements, and Virtual Reality (VR) integration. The VR interface immerses the operator in a simulated environment mirroring the robot’s surroundings, enhancing situational awareness and facilitating intuitive control. Shared control distributes tasks between the human and the robot, leveraging human expertise for complex decision-making and the robot’s precision for accurate execution, resulting in a synergistic control scheme.
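One common way to realize such task distribution is linear command arbitration, blending the operator's teleoperation input with the robot's autonomous command according to an authority weight. The paper does not specify GriffinX's exact arbitration policy, so the sketch below is a generic illustration:

```python
import numpy as np

def blend_commands(human_cmd, auto_cmd, alpha):
    """Linear shared-control arbitration (generic sketch, not the
    paper's specific policy).

    human_cmd : velocity/pose command from the teleoperator
    auto_cmd  : command proposed by the autonomous controller
    alpha     : authority weight in [0, 1];
                0 = fully autonomous, 1 = fully teleoperated
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    human_cmd = np.asarray(human_cmd, dtype=float)
    auto_cmd = np.asarray(auto_cmd, dtype=float)
    return alpha * human_cmd + (1.0 - alpha) * auto_cmd

# Equal authority: the executed command is the midpoint of the two.
cmd = blend_commands([1.0, 0.0], [0.0, 1.0], 0.5)
```

In practice `alpha` would vary with context, for example shifting authority to the autonomy during precise grasp alignment and back to the operator when an unexpected obstacle appears.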
Model Predictive Control (MPC) within GriffinX functions by repeatedly solving an optimization problem to determine the optimal sequence of control actions over a finite time horizon. This optimization is driven by a Cost Function, which quantifies the desirability of different trajectories and actions, typically including terms for tracking error, control effort, and collision avoidance. The MPC algorithm predicts the future behavior of the robot based on a dynamic model, and selects control inputs that minimize the Cost Function while satisfying system constraints, such as joint limits and obstacle distances. At each time step, only the first control input from the optimized sequence is applied, and the optimization is repeated with updated state information, providing a receding horizon control strategy that adapts to changing conditions and disturbances.
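The receding-horizon pattern described above can be shown on a toy problem: a 1-D double integrator that must reach a target position. This is a minimal sketch, not the paper's controller; the real system optimizes full input sequences under joint-limit and obstacle constraints rather than scanning constant-input candidates:

```python
import numpy as np

def mpc_step(x, v, target, horizon=10, dt=0.1, u_max=1.0, n_u=21):
    """One receding-horizon step for a 1-D double integrator (toy
    illustration of MPC's cost-minimizing, first-input-only pattern).

    Evaluates constant-acceleration candidates over the horizon,
    scores each rollout with a tracking + effort cost, and returns
    only the first control of the best candidate.
    """
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(-u_max, u_max, n_u):
        xi, vi, cost = x, v, 0.0
        for _ in range(horizon):          # forward-simulate the model
            vi += u * dt
            xi += vi * dt
            cost += (xi - target) ** 2 + 0.01 * u ** 2  # track + effort
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u  # apply this input, then re-plan at the next step
```

Calling `mpc_step` in a loop with freshly measured state reproduces the receding-horizon behavior: plans are discarded and recomputed each cycle, which is what lets the controller absorb disturbances.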
GriffinX incorporates both a Two-Fingered Rigid Gripper and a Three-Fingered Soft Gripper to address a diverse range of manipulation tasks. The Two-Fingered Rigid Gripper provides a secure and precise grasp suitable for objects with well-defined surfaces and stable geometries. Conversely, the Three-Fingered Soft Gripper utilizes compliant materials and adaptable finger placement to conform to objects with irregular shapes, delicate surfaces, or those requiring a gentler handling approach. This dual-gripper system allows GriffinX to manipulate objects varying in size, weight, and fragility without requiring tool changes, increasing operational efficiency and broadening the scope of applicable tasks.

Demonstrating Impact and Future Potential
GriffinX has undergone rigorous evaluation within a purpose-built retail mockup, consistently demonstrating a capacity for intricate picking operations with notable precision and speed. This testing environment, replicating the complexities of a modern retail space – including varied shelf heights, product shapes, and ambient lighting – proved instrumental in validating the robot’s core functionalities. Results indicate GriffinX reliably identifies, locates, and retrieves a diverse range of items, minimizing errors and maximizing throughput. The system’s ability to navigate cluttered environments and adapt to unforeseen obstacles further highlights its potential to significantly streamline order fulfillment and reduce operational costs within dynamic retail settings.
GriffinX’s cloud integration represents a significant advancement in robotic adaptability for retail environments. By continuously transmitting operational data – including pick rates, error occurrences, and environmental factors – to a centralized cloud platform, the system enables real-time analysis and optimization of picking strategies. This data-driven approach allows for the identification of bottlenecks, predictive maintenance scheduling, and the dynamic adjustment of robotic workflows to meet fluctuating demands. Furthermore, the cloud connectivity facilitates remote software updates and the deployment of new functionalities, ensuring GriffinX remains at the forefront of automation technology and readily adapts to evolving retail needs without costly on-site interventions. This continuous learning loop not only enhances operational efficiency but also unlocks the potential for proactive problem-solving and optimized resource allocation within the retail space.
GriffinX distinguishes itself through a highly adaptable architecture, featuring a modular design and the innovative Universal Vacuum Gripper (UVG). This combination allows for swift reconfiguration to handle diverse retail products and environments, ensuring scalability as operational demands evolve. Rigorous testing reveals a significant performance advantage with shared control, demonstrating a 30% reduction in task completion time when compared to traditional, purely remote teleoperation methods. This improvement stems from the system’s ability to intelligently balance human oversight with automated precision, paving the way for more efficient and responsive fulfillment processes within dynamic retail landscapes.

The presented system navigates a landscape of necessary reduction. It isn’t about automating every facet of retail, but rather augmenting human capability where autonomy falters. The dual-arm omnidirectional robot, through shared control teleoperation, embodies this principle – a focused intervention for complex tasks. As Blaise Pascal observed, “The dignity of man lies in thought.” This robotic system doesn’t aim to replace thought, but to extend its reach, handling the physical complexities while preserving human oversight and cognitive control, thereby truly dignifying the operator’s role within the retail environment. The system’s core strength resides not in what it does autonomously, but in what it enables a human to accomplish remotely.
What Lies Ahead?
The presented system, while a demonstrable exercise in applied kinematics, merely shifts the problem. True autonomy in retail, a space defined by unpredictable human behavior and chaotic object arrangements, remains stubbornly elusive. This work acknowledges that failure, offering a remote intervention; yet it skirts the fundamental question of why autonomy consistently falters in these environments. The focus on haptic feedback, commendable as it is, feels almost palliative: a refinement of control for a task that perhaps should not require such granular direction in the first place.
Future iterations should not prioritize adding layers of complexity – more sensors, more sophisticated control algorithms – but rather a ruthless simplification of the task itself. Can retail environments be shaped to meet the limitations of robotic systems, rather than the other way around? Standardized shelving, predictable product placement, and dedicated robotic zones represent a more promising, if less glamorous, avenue of research. Intuition suggests that reducing environmental entropy will yield greater gains than perfecting the robotic response to it.
Ultimately, the success of such systems will be measured not by their ability to mimic human dexterity, but by their capacity to avoid requiring it. The pursuit of general-purpose robotic retail workers feels increasingly like a category error. The ideal robot is the one that renders its own operation unnecessary, by quietly restructuring the world to suit its limitations – a silent, efficient architect of order amidst the inherent chaos.
Original article: https://arxiv.org/pdf/2602.23923.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/