Safeguarding Collaborative Robots with AI

A new approach uses large language models to formalize safety protocols and redundant systems to ensure reliable operation in human-robot workcells.

A new framework, ETac, accelerates the development of dexterous robot manipulation skills by providing realistic and efficient tactile feedback in simulation.

A new system combines the power of artificial intelligence with human oversight to tackle the complex task of creating comprehensive, long-form scientific literature.
![The study reveals a nuanced relationship between model architecture, parameter count, and reward function: while all configurations achieve comparable performance on the [latex]R\_{k}[/latex] reward, Multi-Layer Perceptrons (MLPs) excel with the [latex]R\_{s}[/latex] reward and Central Pattern Generators (CPGs) dominate with [latex]R\_{g}[/latex], with larger CPGs and smaller MLPs generally yielding the best results.](https://arxiv.org/html/2604.20365v1/x4.png)
A new study reveals that drawing inspiration from biological systems can yield surprisingly efficient robot designs, even when competing with the power of overparameterized neural networks.
A new framework, Prism, allows multi-agent AI systems to build and refine a shared memory, enabling them to tackle increasingly complex and open-ended challenges.

A new review argues that fostering genuine trust in AI mental health support requires moving beyond simply feeling confident in the technology, and instead calibrating trust to its actual capabilities.

A new approach empowers AI agents to autonomously discover and integrate data from complex sources by reasoning about what the data means, not just what it contains.

New research reveals that the cooperative behaviors of artificial intelligence teams, measured through game-like interactions, can accurately forecast their performance on complex scientific tasks.
![Without relying on predefined physical principles, a deep generative model operating on a dense attention graph autonomously derived the underlying structure of a one-dimensional spin chain (including sequence identities and frustrated short-range interactions) by algebraically inverting neural predictions to collapse onto an exact Hamiltonian basis. It effectively functions as a direct force estimator through the equivalence between its diffusion score field and the thermodynamic restoring force, [latex]\mathbf{s}\_{\theta}\equiv-\beta\nabla H[/latex].](https://arxiv.org/html/2604.20821v1/x1.png)
New research reveals deep generative models can independently infer the underlying Hamiltonian dynamics of complex systems, going beyond simple statistical approximation.
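The identity in the figure caption rests on a standard fact: for a Boltzmann distribution [latex]p(x)\propto e^{-\beta H(x)}[/latex], the score [latex]\nabla\log p(x)[/latex] equals [latex]-\beta\nabla H(x)[/latex], so a model that learns the score implicitly estimates the restoring force. A minimal numerical sketch of that equivalence, assuming a hypothetical 1-D quadratic Hamiltonian (not the paper's system or code):

```python
# For p(x) ∝ exp(-β H(x)), the score d/dx log p(x) equals -β dH/dx;
# the normalization constant drops out of the derivative.
# Hypothetical example: H(x) = k x^2 / 2, so -β dH/dx = -β k x.

beta, k = 1.5, 2.0

def H(x):
    return 0.5 * k * x**2

def grad_H(x):
    return k * x

def log_p_unnormalized(x):
    return -beta * H(x)

def score_numerical(x, eps=1e-6):
    # Central finite difference of log p; exact here since H is quadratic.
    return (log_p_unnormalized(x + eps) - log_p_unnormalized(x - eps)) / (2 * eps)

x = 0.7
assert abs(score_numerical(x) - (-beta * grad_H(x))) < 1e-5
```

A learned diffusion score field plays the role of `score_numerical` here; inverting it recovers [latex]\nabla H[/latex] up to the known factor [latex]-\beta[/latex].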
New research reveals that artificial intelligence agents can inadvertently reflect the behavioral patterns of their users, raising significant concerns about data privacy and potential disclosure.