Author: Denis Avetisyan
Researchers have developed a framework that combines the reasoning power of large AI models with the efficiency of smaller, edge-based systems to enhance multi-robot coordination and exploration.

This paper introduces LSAI, a codesign framework for splitting large AI models to enable efficient, agentic behavior in robotic search scenarios, leveraging federated learning and optimized path planning.
Achieving both high accuracy and low latency remains a significant challenge when deploying artificial intelligence in complex, real-world robotic applications. This paper introduces LSAI (a Large Small AI Model Codesign Framework for Agentic Robot Scenarios) to address this limitation through a novel approach to model partitioning and collaboration. By combining the global understanding of large AI models with the efficient execution of small, edge-deployed models, LSAI demonstrably improves sensing accuracy and reduces cooperation latency in multi-robot search scenarios. Could this codesign paradigm unlock new levels of autonomy and efficiency in future robotic deployments across diverse environments?
Navigating Complexity: The Rise of Intelligent Robotics
The proliferation of robotics extends beyond automated manufacturing and into increasingly complex and unpredictable environments, most notably within the healthcare sector. Modern hospitals and care facilities now utilize robots for tasks ranging from dispensing medication and sterilizing equipment to assisting in surgery and providing companionship. This expansion necessitates a shift from robots performing pre-programmed, repetitive actions to systems capable of adaptable and reliable performance in dynamic situations. Successfully navigating crowded hallways, interacting safely with patients, and responding to unforeseen circumstances demands robust sensing, advanced perception, and intelligent decision-making capabilities – effectively pushing the boundaries of robotic autonomy and requiring a new generation of adaptable machines.
Conventional artificial intelligence systems, while potent in controlled settings, frequently encounter limitations when tasked with real-time decision-making in unpredictable environments. The computational demands of processing sensory input, predicting future states, and formulating appropriate responses can quickly overwhelm traditional algorithms, particularly as complexity increases. This is because many established AI techniques rely on exhaustive searches or computationally intensive calculations, rendering them impractical for applications requiring immediate action, such as surgical robotics or rapid response in emergency situations. Consequently, researchers are actively exploring alternative paradigms – including neuromorphic computing and edge AI – to distribute processing, reduce latency, and enable robots to operate effectively in the face of dynamic, real-world challenges.

Hybrid Intelligence: A Codesign Approach with LSAI
The Large and Small AI (LSAI) framework implements a codesign approach that integrates Large AI Models (LAMs) and Small AI Models (SAMs) to address limitations in both model types. LAMs, while possessing strong reasoning and generalization capabilities, are computationally expensive and resource-intensive. Conversely, SAMs offer efficient processing and reduced resource demands but may lack the complex reasoning abilities of LAMs. LSAI seeks to optimize overall performance and resource utilization by strategically combining these models; tasks are decomposed and distributed such that LAMs handle complex reasoning while SAMs manage routine computations and data processing, resulting in a hybrid system designed for both accuracy and efficiency.
LSAI employs model splitting and fusion to strategically distribute computational tasks between Large AI Models (LAMs) and Small AI Models (SAMs). This process involves decomposing complex problems into sub-tasks, assigning those requiring extensive reasoning to LAMs, and delegating computationally intensive or repetitive tasks to SAMs. The results from both model types are then fused, often using weighted averaging or more complex gating mechanisms, to generate a final output. This hybrid approach aims to mitigate the limitations of each model type – the high computational cost and latency of LAMs, and the limited reasoning capabilities of SAMs – resulting in a system that balances performance, efficiency, and resource utilization.
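The splitting and fusion described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the fixed gate weight, and the complexity-threshold router are assumptions chosen for clarity; the paper mentions weighted averaging or gating without specifying values.

```python
import numpy as np

def fuse_outputs(lam_logits: np.ndarray, sam_logits: np.ndarray,
                 gate: float = 0.7) -> np.ndarray:
    """Blend LAM and SAM predictions by weighted averaging.

    `gate` weights the large model's output; (1 - gate) weights the
    small model's. A learned gating network could replace the constant.
    """
    return gate * lam_logits + (1.0 - gate) * sam_logits

def route_task(complexity: float, threshold: float = 0.5) -> str:
    """Toy dispatcher: sub-tasks above the complexity threshold go to
    the LAM; routine ones stay on the edge-deployed SAM."""
    return "LAM" if complexity > threshold else "SAM"
```

In practice the gate would be tuned (or learned) per task, trading the LAM's accuracy against the SAM's latency.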
Attention mechanisms within the LSAI framework dynamically weight input data based on its relevance to the task at hand, thereby directing computational resources to the most informative features. This selective focusing improves both the accuracy and efficiency of the hybrid AI system. Specifically, attention weights are calculated to prioritize data points that contribute most significantly to the decision-making process, effectively filtering out noise and reducing the computational burden on both Large AI Models (LAMs) and Small AI Models (SAMs). The implementation of attention allows the system to adaptively allocate resources, concentrating processing power on critical information while minimizing expenditure on less relevant data, ultimately optimizing performance and reducing latency.
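A minimal sketch of this attention-based weighting, assuming relevance scores are already available for each feature vector (how LSAI computes those scores is not specified here, so the softmax-over-scores form is an illustrative assumption):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

def attend(features: np.ndarray, relevance: np.ndarray) -> np.ndarray:
    """Weight each feature row by its softmax-normalised relevance score,
    returning one context vector that emphasises informative inputs
    and suppresses noise."""
    weights = softmax(relevance)   # non-negative, sums to 1
    return weights @ features      # weighted average of feature rows
```

High-relevance rows dominate the context vector, which is how attention concentrates computation on the most informative data.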

Empirical Validation: LSAI in Robotic Applications
Effective robotic navigation is heavily dependent on efficient path planning. Testing demonstrates that LSAI achieves an average path planning efficiency value of 0.936. This represents an absolute improvement of approximately 0.08 to 0.16 when compared to a distributed baseline, and an improvement of 0.13 to 0.23 over a centralized baseline. These results indicate LSAI provides quantifiable gains in path planning performance within robotic systems, suggesting improved navigational capabilities and potentially reduced operational costs.
Robotic collaborative sensing performance is directly linked to the accuracy and timeliness of shared data; the LSAI framework demonstrably improves sensing accuracy as the number of participating robots increases, consistently achieving a value of approximately 1.00. This result indicates near-perfect data consensus among robots utilizing LSAI. Comparative analysis reveals that LSAI outperforms both distributed and centralized benchmark systems in collaborative sensing tasks, suggesting improved robustness and reliability in multi-robot data acquisition and processing.
LSAI utilizes edge computing to reduce system response time in multi-robot systems. Performance evaluations indicate an average response time of approximately 13 minutes, even with an increasing number of participating robots. This represents a demonstrable reduction in latency compared to both distributed and centralized baseline architectures, which exhibited significantly higher response times under similar conditions. The implementation of edge computing allows for data processing and decision-making to occur closer to the robots themselves, thereby minimizing communication delays and improving overall system efficiency.
Combining Deep Deterministic Policy Gradient (DDPG) algorithms with small AI models (SAMs) enables efficient reinforcement learning for robots operating in complex environments. DDPG, an off-policy actor-critic algorithm, handles continuous action spaces, while the edge-deployed SAMs provide a framework for multi-agent coordination and decentralized execution. This integration lets robots learn policies through trial and error, adapting to dynamic conditions and unforeseen obstacles without explicit programming for every scenario. The resulting system performs better on tasks requiring nuanced control and collaborative behavior, particularly where centralized control is impractical or inefficient.
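Two pieces sit at the core of DDPG: the bootstrapped critic target and the slow (Polyak) update of the target networks. The sketch below shows both in isolation; the hyperparameter values are typical defaults, not figures from the paper, and the network parameters are represented as plain arrays for brevity.

```python
import numpy as np

GAMMA, TAU = 0.99, 0.005  # discount factor and soft-update rate (common defaults)

def td_target(reward: float, done: float, q_next: float) -> float:
    """DDPG critic target: y = r + gamma * Q'(s', mu'(s')),
    where Q' and mu' are the target critic and target actor.
    `done` = 1.0 truncates bootstrapping at terminal states."""
    return reward + GAMMA * (1.0 - done) * q_next

def soft_update(target_params: list, online_params: list) -> list:
    """Polyak averaging: target networks slowly track the online ones,
    which stabilises the otherwise moving bootstrap target."""
    return [TAU * w + (1.0 - TAU) * t
            for w, t in zip(online_params, target_params)]
```

The small `TAU` is what keeps learning stable: the critic is regressed toward a target that changes only gradually.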
Beyond Automation: Envisioning the Future of Hybrid Intelligence
Within the LSAI framework, federated averaging presents a significant advancement in robotic learning. This technique allows multiple robots to collaboratively refine a shared AI model without directly exchanging their individual training datasets. Instead, each robot computes updates to the model based on its own experiences, and only these updates, not the raw data, are shared with a central server. The server then aggregates the updates into an improved global model that is redistributed to all robots. This decentralized approach preserves data privacy, which is crucial when dealing with sensitive or proprietary information, and also enhances robustness and scalability, enabling robots operating in diverse environments to learn collectively and adapt more effectively than through isolated training. Federated averaging within LSAI thus paves the way for truly collaborative robotic systems, where knowledge is shared and refined across a network, fostering a more intelligent and adaptable robotic future.
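The server-side aggregation step can be sketched as the standard FedAvg rule: a per-parameter average of the robots' models, weighted by how much local data each robot trained on. This is a generic FedAvg sketch under that assumption, not the paper's exact aggregation code.

```python
import numpy as np

def federated_average(client_weights: list, client_sizes: list) -> list:
    """FedAvg aggregation: average each robot's model parameters,
    weighted by its local dataset size. Only parameters are shared
    with the server; raw sensor data never leaves the robot."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg
```

After aggregation, the server redistributes `avg` to every robot as the new shared model, and the next local-training round begins.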
The architecture of small AI models (SAMs) increasingly relies on neural networks, offering a significant leap in robotic adaptability. Unlike traditional, rigidly programmed systems, neural-network-based SAMs learn and refine skills through experience, allowing robots to handle previously unseen variations in their environment or task requirements. This flexibility stems from the network's ability to generalize from training data, enabling it to perform well even with noisy or incomplete information. Different network architectures can also be tailored to specific robotic capabilities: convolutional networks for visual processing, recurrent networks for sequential tasks, and reinforcement learning algorithms for complex decision-making. This modularity means a single LSAI framework can support a diverse range of robotic applications, from delicate manipulation and precise navigation to robust locomotion and collaborative assembly, all while continuously improving performance through ongoing learning.
The synergy between Large AI Models (LAMs) and Small AI Models (SAMs) facilitated by the LSAI framework promises a broadening impact well beyond the realm of robotics. Traditionally, deploying sophisticated AI required substantial computational resources, limiting its accessibility. However, LSAI enables powerful LAMs – trained on massive datasets – to distill their knowledge into smaller, more efficient SAMs capable of running on resource-constrained devices. This opens doors to applications in fields like personalized medicine, where on-device diagnostics become feasible, precision agriculture, allowing for real-time crop monitoring and intervention, and even accessible assistive technologies that can adapt to individual user needs without relying on constant cloud connectivity. By overcoming the limitations of hardware and bandwidth, LSAI fundamentally expands the potential reach of artificial intelligence, bringing its benefits to a far wider range of contexts and users.
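One common way a LAM's knowledge is compressed into an edge-sized SAM is knowledge distillation: the small model is trained to match the large model's softened output distribution. The paper describes the LAM-to-SAM transfer only at a high level, so the Hinton-style distillation loss below is an illustrative assumption about how such a transfer could be implemented.

```python
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    e = np.exp((z - z.max()) / T)  # temperature-scaled, numerically stable
    return e / e.sum()

def distillation_loss(teacher_logits: np.ndarray,
                      student_logits: np.ndarray,
                      T: float = 2.0) -> float:
    """Cross-entropy of the student (SAM) against the teacher's (LAM)
    temperature-softened distribution. T > 1 exposes the teacher's
    relative confidences; the T*T factor keeps gradient scale stable."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return float(-(p_teacher * log_p_student).sum() * T * T)
```

The loss is minimized when the student reproduces the teacher's distribution, so a resource-constrained SAM can approximate LAM behavior on-device.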

The presented LSAI framework, designed to harmonize large and small AI models for robotic collaboration, echoes a fundamental principle of responsible innovation. This research prioritizes not simply what can be automated, but how: by distributing intelligence and enabling efficient local execution. As Marie Curie observed, "Nothing in life is to be feared, it is only to be understood." This sentiment applies directly to the complex interplay between global awareness and localized action within multi-robot systems. LSAI seeks to 'understand' the limitations of centralized processing and distributes the computational burden, mirroring Curie's pursuit of knowledge to overcome obstacles. The framework's focus on edge computing and model splitting is, therefore, a deliberate attempt to build systems that are not only powerful but also ethically sound, ensuring scalability doesn't come at the cost of robustness or trustworthiness.
Beyond the Horizon
The pursuit of agentic robotics, as exemplified by this work, inevitably confronts the challenge of scaling intelligence without exacerbating existing societal biases. Splitting models, distributing cognition between a global 'understanding' layer and local execution, is a pragmatic step, yet it does not inherently address the encoding of values within those models. The framework presented offers a technical solution to computational constraints, but true progress demands scrutiny of what is being optimized. Efficiency gains are meaningless if they serve to amplify inequities in access or perpetuate flawed decision-making.
Future work must move beyond merely optimizing for performance metrics. Investigating methods for verifiable fairness in model splitting, and techniques for incorporating diverse perspectives into the global understanding layer, are crucial. The emphasis should shift toward robust, auditable AI systems, in which the provenance of data and the rationale behind decisions are transparent and accountable. Technology without care for people is techno-centrism, and ensuring fairness is part of the engineering discipline.
Ultimately, the success of such frameworks will not be measured by computational speed, but by their ability to foster genuinely collaborative and equitable multi-robot systems. The question isn’t simply whether robots can work together, but for whom, and to what end. A relentless focus on technical advancement without parallel ethical considerations risks automating injustice at scale.
Original article: https://arxiv.org/pdf/2603.21726.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/