Author: Denis Avetisyan
A new approach to laboratory automation separates experimental goals from hardware control, unlocking more efficient and scalable AI-driven research.

This paper introduces Experiment-as-Code Labs, a declarative stack for building autonomous laboratories that prioritizes reproducibility and safety in scientific experimentation.
While artificial intelligence promises to accelerate scientific discovery, realizing its full potential requires bridging the gap between computation and physical experimentation. This paper introduces ‘Experiment-as-Code Labs: A Declarative Stack for AI-Driven Scientific Discovery’, a novel paradigm that encodes experiments as declarative configurations, enabling AI agents to design and execute laboratory procedures independently. By separating experimental intent from device control, this approach facilitates safer, more flexible, and scalable automation for scientific exploration. Could this synthesis of physical, systems, and intelligence layers unlock a new era of AI-driven breakthroughs in the laboratory?
The Inherent Limitations of Empirical Battery Discovery
The development of new battery technologies has historically been constrained by the pace of materials discovery, a process traditionally reliant on painstakingly slow, iterative experimentation. Researchers would synthesize and test materials one at a time, a cycle demanding significant resources and years of effort to achieve even incremental improvements. This methodical approach, while yielding foundational knowledge, struggles to keep pace with the rapidly evolving demands for energy storage in electric vehicles and renewable energy systems. The sheer number of potential material combinations, coupled with the complex interplay of chemical and physical properties crucial for battery performance, creates a vast search space that traditional methods are ill-equipped to navigate efficiently, ultimately hindering innovation and delaying the deployment of next-generation battery technologies.
While high-throughput screening methods represent a significant acceleration over traditional battery materials discovery, their efficacy is often hampered when investigating complex electrolyte formulations. These systems typically prioritize speed by testing a large number of pre-defined compositions, limiting the ability to finely tune variables or explore synergistic effects between multiple additives. The inherent complexity of electrolyte behavior, influenced by ionic conductivity, interfacial stability, and decomposition pathways, demands a level of control that many high-throughput platforms struggle to provide. Consequently, promising electrolyte combinations may be overlooked due to the inability to precisely manipulate and assess the impact of subtle compositional changes, creating a bottleneck in the pursuit of next-generation battery technologies.
The current pace of battery innovation is hampered by a significant bottleneck in materials discovery, necessitating a paradigm shift towards self-directed research. Existing high-throughput methods, while capable of rapidly assessing numerous combinations, frequently lack the precision needed to navigate the intricate interplay of electrolyte components and electrode materials. Consequently, a pressing need exists for automated systems capable of not only proposing novel battery chemistries but also independently conducting experiments, analyzing results, and iteratively refining designs. These ‘self-driving laboratories’ promise to dramatically shorten the development cycle, moving beyond the limitations of human-guided trial-and-error and unlocking a new era of battery technology with enhanced performance and efficiency.
![Clio is a compact, autonomous battery-lab platform demonstrating how Experiment-as-Code (EaC) control can standardize devices, coordinate complex procedures, and maintain shared lab state in a real-world deployment.](https://arxiv.org/html/2605.04375v1/x5.png)
Clio: A Platform for Embodied Experimentation
Clio is a fully automated laboratory platform constructed specifically for high-throughput battery research. The system integrates robotic liquid handling, electrochemical instrumentation, and environmental control within a closed system, enabling the execution of battery testing protocols without manual intervention. This automation extends to sample preparation, cycling, and post-mortem analysis, allowing for continuous, unattended operation. The physical design prioritizes modularity and scalability, facilitating adaptation to diverse battery chemistries, formats, and experiment types. Clio’s architecture is intended to significantly increase experimental throughput and reduce the potential for human error in battery characterization.
ElyteOS functions as the central control system for Clio, managing all aspects of experiment execution and data collection. This software platform utilizes a scheduling system to coordinate hardware components – including potentiostats, environmental chambers, and data logging equipment – according to predefined experimental protocols. Data acquisition is handled through direct integration with instrument APIs, enabling high-frequency, synchronized measurements. Crucially, ElyteOS logs all experimental parameters, software versions, and hardware configurations alongside the raw data, facilitating full reproducibility of results. The software’s modular design allows for straightforward integration of new instruments and control algorithms, while its centralized architecture ensures data integrity and minimizes the potential for human error in data handling.
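The provenance logging described above (experimental parameters, software versions, and configuration stored alongside the raw data) can be illustrated with a small, self-contained sketch. The function and field names below are hypothetical illustrations, not part of the actual ElyteOS API.

```python
import json
import platform
import datetime

def run_record(params: dict, data: dict, software_version: str) -> str:
    """Bundle experiment parameters, environment info, and raw data into one
    JSON record, illustrating the kind of provenance logging described.
    All field names here are illustrative assumptions."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "software_version": software_version,
        "python_version": platform.python_version(),
        "parameters": params,
        "data": data,
    }
    return json.dumps(record, sort_keys=True)

# A run's inputs and outputs travel together in one reproducible record.
rec = run_record({"voltage_V": 4.2}, {"capacity_mAh": 150.0}, "0.1.0")
```

Keeping parameters and results in a single immutable record is what makes a run re-executable later: nothing about the experiment lives only in an operator's memory.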
Clio utilizes a declarative experiment stack, abstracting hardware control from the experimental design process. Researchers define experimental objectives and desired outcomes through a high-level interface, specifying what measurements to take and how data should be analyzed, rather than directly programming hardware parameters such as voltage, current, or temperature setpoints. This approach decouples experimental intent from implementation, enabling automated workflow generation, improved reproducibility, and simplified experiment modification; the system translates these high-level specifications into low-level control commands for the automated hardware, managing data acquisition and logging without direct researcher intervention.
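To make the "declarative" idea concrete, here is a minimal sketch of what such an experiment specification might look like as plain data, together with a validator. All field names (`objective`, `formulation`, `measurements`, `constraints`) are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch of a declarative experiment spec: the researcher
# states intent (what to make, what to measure); the system compiles it
# into low-level device commands.
EXPERIMENT_SPEC = {
    "objective": "maximize_ionic_conductivity",
    "formulation": {
        "solvent": {"EC": 0.3, "EMC": 0.7},   # mass fractions
        "salt": {"LiPF6": 1.0},                # mol/L
        "additives": {"FEC": 0.02},            # mass fraction
    },
    "measurements": ["ionic_conductivity", "viscosity"],
    "constraints": {"max_temperature_C": 45},
}

def validate(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec is well-formed."""
    errors = []
    for key in ("objective", "formulation", "measurements"):
        if key not in spec:
            errors.append(f"missing required field: {key}")
    solvent = spec.get("formulation", {}).get("solvent", {})
    if solvent and abs(sum(solvent.values()) - 1.0) > 1e-6:
        errors.append("solvent fractions must sum to 1")
    return errors

print(validate(EXPERIMENT_SPEC))  # → []
```

Because the spec is data rather than procedure, it can be checked for safety and consistency before any hardware moves, and the same spec can be replayed on different hardware.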

AI-Driven Hypothesis Generation and Validation
The AI Agent functions as the central control system within the autonomous laboratory, operating by formulating scientific hypotheses and subsequently designing experiments to test those hypotheses. This agent utilizes available data – including prior experimental results and existing scientific literature – to propose experiments predicted to yield the most informative outcomes, thereby maximizing the rate of learning about the system under investigation. The agent’s experimental designs specify the necessary parameters, materials, and procedures for execution by the laboratory’s robotic hardware, creating a closed-loop system where data generated from experiments is fed back into the agent to refine future hypotheses and designs. This iterative process enables the autonomous exploration of complex experimental spaces without direct human intervention.
Bayesian Optimization is utilized for sequential experiment selection by constructing a probabilistic surrogate model, typically a Gaussian Process, to approximate the objective function (in this case, the relationship between electrolyte formulation and performance metrics). This model allows the system to quantify uncertainty and balance exploration (testing formulations where uncertainty is high) with exploitation (refining formulations predicted to yield optimal results). The algorithm employs an acquisition function, such as Expected Improvement or Upper Confidence Bound, to determine the next experiment based on predicted performance and associated uncertainty, iteratively refining the surrogate model with each experimental result and converging toward optimal formulations within the defined experimental space.
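The Expected Improvement acquisition mentioned above has a closed form under a Gaussian posterior and can be computed with the standard library alone. This is the textbook formula for maximization; the surrogate model that would supply the posterior mean and standard deviation is omitted.

```python
import math

def expected_improvement(mu: float, sigma: float, best: float) -> float:
    """Expected Improvement (maximization) at a candidate point whose
    posterior is Gaussian with mean `mu` and std `sigma`, given the best
    objective value observed so far."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (mu - best) * cdf + sigma * pdf

# Uncertainty alone creates value: with mu equal to the incumbent,
# EI is positive and grows with sigma.
print(expected_improvement(mu=1.0, sigma=0.5, best=1.0))  # ≈ 0.199
```

The two terms mirror the exploration/exploitation trade-off in the text: `(mu - best) * cdf` rewards formulations predicted to beat the incumbent, while `sigma * pdf` rewards formulations the model is uncertain about.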
The autonomous laboratory utilizes a closed-loop system to iteratively refine electrolyte formulations based on experimental results. Following experiment execution, data on performance metrics, including ionic conductivity, is fed back into the AI agent. The agent then analyzes the data and proposes new electrolyte compositions designed to optimize these metrics. This cycle of experimentation, analysis, and formulation adjustment enables autonomous improvement of electrolyte properties without manual intervention, progressively enhancing performance characteristics over time.
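The experiment-analyze-propose cycle can be sketched as a small loop. The `propose` and `run` functions below are toy stand-ins (random search over a one-dimensional "formulation" with a synthetic quadratic objective), not the paper's Bayesian agent or hardware interface.

```python
import random

def closed_loop(propose, run, rounds: int = 10, seed: int = 0):
    """Minimal closed-loop skeleton: propose a candidate, execute the
    experiment, record the result, repeat. Returns the best (candidate,
    result) pair seen. `propose` and `run` are placeholders for the AI
    agent and the automated lab."""
    history = []
    rng = random.Random(seed)
    for _ in range(rounds):
        candidate = propose(history, rng)  # agent: design next experiment
        result = run(candidate)            # lab: execute and measure
        history.append((candidate, result))  # feedback for the next round
    return max(history, key=lambda pair: pair[1])

# Toy stand-ins: a scalar "formulation" and a synthetic objective
# peaking at 0.6 (a stand-in for measured ionic conductivity).
def propose(history, rng):
    return rng.uniform(0.0, 1.0)

def run(x):
    return -(x - 0.6) ** 2

best_x, best_y = closed_loop(propose, run, rounds=50)
```

Swapping the random `propose` for a Bayesian-optimization policy (e.g. one maximizing Expected Improvement over `history`) turns this skeleton into the self-improving loop the text describes, without touching the loop itself.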
Optimization of the autonomous battery laboratory workflow reduced both experiment duration and material usage by eliminating non-essential procedural steps. Prior to optimization, experiments included redundant quality-control checks and intermediate data logging that did not contribute significantly to the primary performance metric, ionic conductivity. By streamlining the process to focus solely on critical data acquisition and analysis, experiment execution time was decreased by 15% and material consumption (specifically, electrolyte precursor compounds) was reduced by 10% per experimental iteration. This efficiency gain enables a higher throughput of experiments and lowers the overall cost of materials and laboratory resources.

The Promise of Scalable Scientific Infrastructure
The integration of cloud infrastructure with autonomous laboratories represents a significant leap in experimental science, moving beyond the limitations of single, localized facilities. This approach enables researchers to remotely design, execute, and analyze experiments, fostering a level of accessibility previously unattainable. By distributing experimental capacity across a network, multiple experiments can proceed concurrently – a form of parallel experimentation – dramatically increasing the throughput of scientific discovery. This scalability isn’t simply about running more tests; it facilitates a more comprehensive exploration of complex scientific spaces, allowing for the rapid evaluation of a wider range of hypotheses and the accelerated identification of promising new materials or processes. The resulting data streams, centrally managed and readily accessible, provide a richer, more nuanced understanding than traditionally possible, paving the way for more informed and efficient research.
A key advancement lies in the system’s capacity to dramatically enhance experimental throughput. By distributing experimentation across a network of automated labs, researchers can move beyond sequential testing to a paradigm of parallel discovery. This distributed architecture allows for the simultaneous evaluation of numerous electrolyte formulations – a process that would be prohibitively time-consuming with conventional methods. The ability to test a vastly expanded chemical space unlocks the potential for identifying novel electrolyte compositions with superior performance characteristics, accelerating the search for next-generation battery materials and reducing the overall time to market for improved energy storage technologies.
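A minimal sketch of the parallel-evaluation idea, using a thread pool as a stand-in for a fleet of cloud-connected labs. The `evaluate` function and its toy objective are illustrative assumptions; a real deployment would dispatch each spec to a remote instrument and await its result.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(formulation: float) -> tuple[float, float]:
    """Stand-in for one remote lab run: takes a candidate formulation,
    returns (formulation, measured objective). The quadratic objective
    peaking at 0.6 is purely synthetic."""
    conductivity = -(formulation - 0.6) ** 2
    return formulation, conductivity

# A batch of candidate formulations to screen in one shot.
candidates = [i / 10 for i in range(11)]

# Evaluate all candidates concurrently, mimicking several automated
# labs working in parallel rather than one sequential queue.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate, candidates))

best = max(results, key=lambda pair: pair[1])
print(best)  # → (0.6, -0.0)
```

The wall-clock time of the batch is set by the slowest single run, not the sum of all runs, which is exactly the throughput argument the paragraph makes.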
A significant benefit of transitioning laboratory automation to the cloud lies in its ability to break down traditional barriers to scientific collaboration. Researchers at geographically dispersed institutions can now seamlessly contribute to experiments, accessing instruments and data remotely. This distributed access fosters a more open and inclusive research environment, allowing for the pooling of expertise and resources. The cloud infrastructure enables shared experimental design, real-time data analysis, and collective interpretation of results, accelerating the pace of discovery by leveraging a broader scientific community. This collaborative potential is particularly valuable in fields like materials science, where diverse perspectives and interdisciplinary approaches are crucial for innovation.
The advent of scalable, cloud-based autonomous laboratories promises a paradigm shift in the pace of materials discovery, particularly for next-generation battery technologies. By removing limitations imposed by physical infrastructure and enabling massively parallel experimentation, this system allows researchers to explore vast chemical spaces with unprecedented speed and efficiency. This accelerated exploration isn’t simply about testing more combinations; it’s about dramatically increasing the probability of identifying novel electrolyte compositions and materials with superior performance characteristics. The resulting data, generated at an exponential rate, fuels machine learning algorithms capable of predicting promising candidates, further streamlining the discovery process and potentially reducing the time to market for advanced energy storage solutions. Ultimately, this infrastructure fosters a cycle of rapid innovation, moving beyond incremental improvements toward genuinely transformative breakthroughs in battery technology.

The pursuit of a declarative laboratory stack, as outlined in the paper, echoes a fundamental tenet of rigorous methodology. It champions a system where the ‘what’ of an experiment is distinctly separated from the ‘how’ of its execution. This approach aligns perfectly with Karl Popper’s assertion: “Science is not a collection of truths to be learned, but a method of inquiry.” The paper’s emphasis on separating intent from implementation isn’t merely about automation; it’s about creating a system amenable to falsification, enabling researchers to rigorously test hypotheses and refine their understanding – a cornerstone of Popperian epistemology. The pursuit of reproducibility, central to Experiment-as-Code, becomes intrinsically linked to the ability to critically examine and potentially disprove experimental claims.
What Remains to be Proven?
The decoupling of experimental intent from device control, as proposed, is a necessary, though not sufficient, condition for genuinely autonomous scientific discovery. The current instantiation, while demonstrably functional, rests on the assumption that experimental protocols can be exhaustively and unambiguously specified in a declarative form. This is a bold claim, and one which invites scrutiny. The complexity of real-world scientific inquiry often resides not in the execution of a protocol, but in its formulation – a process inherently iterative, and frequently reliant on tacit knowledge difficult to encode. Future work must therefore address the limits of declarative specification, and explore methods for incorporating inductive reasoning and error-driven refinement into the experimental loop.
A critical bottleneck lies in the management of state. While the stack addresses stateful systems, the asymptotic behavior of maintaining consistency across increasingly complex, long-running experiments remains an open question. Scalability is not merely a matter of increased computational resources; it demands a formalization of state invariants, and the development of algorithms capable of verifying their maintenance with provable guarantees. Without such guarantees, the pursuit of ‘reproducibility’ becomes a pragmatic exercise in statistical convergence, rather than a statement of logical equivalence.
Ultimately, the true measure of success will not be the automation of existing experiments, but the capacity to discover novel phenomena. This requires a shift in focus from control to exploration: from prescribing what to measure to defining the criteria for interestingness. The pursuit of such an AI-driven heuristic remains, at present, largely theoretical. It demands, not merely clever algorithms, but a formal understanding of the very nature of scientific insight.
Original article: https://arxiv.org/pdf/2605.04375.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/
2026-05-07 13:22