When Robots Gamble With Us: Understanding Human Risk in Physical Collaboration

Author: Denis Avetisyan


New research explores how humans perceive and react to uncertainty when physically working alongside robots, revealing predictable patterns in decision-making.

Participants physically interact with an ArmMotus M2 robot guided by real-time visual feedback (a green cursor displaying handle position and the game status presented on a monitor), following a “3, 2, 1, Go!” countdown to initiate each trial.

This pilot study investigates the application of Cumulative Prospect Theory to model human behavior in physical human-robot interaction, paving the way for adaptive robotic systems.

Conventional models of human-robot interaction often assume rational responses to uncertainty, yet real-world collaboration introduces complexities beyond optimal control. This pilot study, ‘Prospect Theory in Physical Human-Robot Interaction: A Pilot Study of Probability Perception’, investigated how humans adapt to probabilistic disturbances during a shared physical task. Findings revealed distinct behavioral clusters: one modulating responses to likelihood and another exhibiting strong risk aversion, suggesting that individualized perceptions of probability significantly influence action. Could incorporating behavioral models like cumulative prospect theory enable more nuanced and adaptive robot controllers that better align with human preferences in physical collaborative scenarios?


Predicting Human Response: The Foundation of Seamless Collaboration

Human reaction during physical interaction is rarely a straightforward calculation, presenting a fundamental challenge to robotics. Individuals don’t consistently respond to robotic actions based on purely logical or predictable patterns; instead, responses are shaped by a complex interplay of factors including prior experiences, emotional state, and even momentary attention. This inherent variability means that even seemingly simple physical exchanges – a collaborative lift, a guiding touch – can unfold in surprising ways. Consequently, modeling human behavior requires moving beyond deterministic frameworks and embracing probabilistic approaches that account for the full spectrum of potential reactions, acknowledging that a degree of unpredictability is simply intrinsic to human-robot collaboration. This necessitates a shift from precise control to adaptive strategies, allowing robots to respond gracefully to the inevitable deviations from anticipated human behavior.

Conventional robotic control systems, designed for predictable environments and tasks, often falter when interacting with humans due to the inherent variability of human behavior. These systems frequently rely on precise calculations of force, position, and timing, but human responses are seldom perfectly aligned with these expectations. This mismatch can manifest as awkward physical interactions, where a robot’s movements feel unnatural or jarring to a human partner. More critically, the limitations of these traditional methods raise safety concerns; an inability to anticipate a human’s reaction could lead to collisions, unintended pressure, or even injury. Consequently, a fundamental shift is needed in how robots are controlled, moving beyond rigid pre-programmed sequences towards more adaptive and responsive strategies that acknowledge and accommodate the unpredictability of human action.

Effective human-robot collaboration hinges not solely on technical precision, but profoundly on anticipating the psychological responses of human partners. Studies reveal that individuals don’t react to robotic actions with purely logical calculations; instead, emotional states, prior experiences, and even subtle cues like perceived intent heavily influence their reactions. This means a robot’s success isn’t measured by flawlessly executing a task, but by its ability to interpret and appropriately respond to human comfort levels, trust, and potential anxieties. Recognizing that humans often prioritize social cues and collaborative harmony over strict efficiency necessitates a shift towards robot designs that prioritize psychological safety and foster a sense of shared understanding, ultimately paving the way for more intuitive and productive interactions.

Predicting human reaction during physical interaction with robots presents a significant challenge, as individuals rarely respond in a completely predictable manner. A recent study, involving ten participants, highlighted this variability, revealing that even seemingly simple robot actions can elicit diverse and often unexpected responses from humans. This suggests that traditional robotic control systems, which often rely on pre-programmed responses, are inadequate for truly seamless human-robot collaboration. Instead, a more nuanced approach to behavioral modeling is required – one that accounts for the inherent unpredictability of human behavior and allows robots to adapt their actions in real-time. This necessitates moving beyond purely kinematic or dynamic models and incorporating principles from psychology and cognitive science to better anticipate and accommodate the complexities of human interaction.

Participant compensation probabilities shifted based on robot perturbation levels, with initial rounds (solid lines) and subsequent rounds (dashed lines) revealing consistent behavioral adaptation, though some participants repeated identical actions in both rounds.

Understanding Human Decisions: Beyond Rationality

Behavioral economics has consistently demonstrated that human decision-making frequently diverges from the predictions of rational choice theory, especially when choices involve risk or uncertainty. This deviation is not attributable to computational limitations but rather to systematic cognitive biases. Individuals do not consistently maximize expected utility; instead, they exhibit predictable patterns of irrationality, such as overestimating the probability of rare events and disproportionately fearing losses compared to equivalent gains. These biases are well-documented across a range of experimental paradigms and real-world scenarios, indicating that deviations from rationality are not random errors but rather integral aspects of human cognition. The magnitude of these biases can be influenced by contextual factors and individual differences, but their prevalence suggests that models of human behavior must account for these departures from strict rationality to achieve accurate predictions.

Cumulative Prospect Theory (CPT) is a behavioral economic model that describes how individuals make choices involving risk and uncertainty. Unlike Expected Utility Theory, which assumes rational actors maximize expected value, CPT incorporates two key concepts: loss aversion and probability weighting. Loss aversion posits that individuals feel the pain of a loss more strongly than the pleasure of an equivalent gain. This is reflected in a steeper negative value function for losses than for gains. Probability weighting suggests that individuals do not perceive probabilities linearly; small probabilities are often overweighted, and large probabilities are underweighted. Mathematically, CPT replaces utility with a value function $v(x)$ and objective probabilities $p$ with weighted probabilities $\pi(p)$. This framework allows for more realistic predictions of human behavior in situations involving risk, accounting for commonly observed biases that deviate from purely rational models.
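To make these two ingredients concrete, the sketch below implements the value function and the probability-weighting function in the widely used Tversky-Kahneman (1992) functional form; the parameter values are the commonly cited median estimates from that literature, not parameters fitted to this study’s data.

```python
import numpy as np

# Minimal CPT sketch: value function with loss aversion and an
# inverse-S-shaped probability weighting function. Parameter values
# (alpha, beta, lam, gamma) are the commonly cited Tversky-Kahneman
# medians, used here purely for illustration.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex and steeper (loss-averse) for losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

def weight(p, gamma=0.61):
    """Overweights small probabilities, underweights large ones."""
    p = np.asarray(p, dtype=float)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A 10% chance is perceived as heavier than it is (w(0.1) ~ 0.19),
# and losing 100 hurts roughly twice as much as gaining 100 feels good.
print(weight(0.1))               # ~0.19
print(value(100), value(-100))   # ~57.5 vs ~-129.5
```

A controller built on this model would compare weighted values of the form $\pi(p)\,v(x)$ rather than expected utilities when predicting which option a human partner is likely to choose.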

Predictions of human action are further complicated by cognitive biases related to trust and technology. Asymmetric trust describes the tendency for individuals to place disproportionately higher trust in human recommendations than in those generated by algorithms, even when the algorithm demonstrates superior performance. Algorithm aversion, conversely, manifests as a reluctance to adopt algorithmic recommendations, particularly when those recommendations differ from human-provided alternatives or result in unfavorable outcomes. These biases are not static; levels of trust and aversion can vary based on factors such as the transparency of the algorithm, the perceived competence of the human counterpart, and the context of the decision. Consequently, models aiming to predict human behavior must account for these non-rational preferences to achieve accurate results.

Framing effects, observed in our study population with a mean age of 27.7 ± 3.6 years, indicate that human choices are consistently influenced by how information is presented, rather than the objective value of the options themselves. This phenomenon manifests as differing preferences for equivalent outcomes depending on whether they are framed as gains or losses; for example, a surgical procedure with a “90% survival rate” is generally preferred over one with a “10% mortality rate,” despite representing identical probabilities. The magnitude of this bias varied among participants, suggesting individual sensitivities to presentation; the consistent pattern, in line with prospect theory, is that gain framing generally promotes risk aversion, while loss framing encourages risk-seeking behavior. These findings reinforce the importance of considering cognitive biases when predicting human decision-making in contexts ranging from economic choices to medical treatments.

Both Bayesian Logistic Regression (BLR) and CPT fitting successfully model the relationship between perturbation and compensation probabilities, as demonstrated by their close alignment with the raw data.

Integrating Behavioral Insights into Robot Control Systems

Optimal control techniques establish a mathematical framework for robot behavior design by defining a cost function that quantifies performance and stability. These techniques, often formulated as solving $\min_{u} J(u)$, where $J$ is the cost function and $u$ represents the robot’s control inputs, enable the derivation of control laws that minimize this cost. Common methods include the Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC), which utilize system dynamics and constraints to calculate optimal actions. By explicitly defining objectives – such as minimizing execution time, energy consumption, or tracking error – and incorporating system limitations, optimal control ensures predictable and reliable robot operation, even in complex environments. The resulting control policies are demonstrably superior to purely reactive or heuristic approaches in scenarios demanding precision and robustness.
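As a concrete instance of this cost-minimizing formulation, the sketch below computes a discrete-time LQR controller for a one-dimensional point mass; the dynamics, weighting matrices, and horizon are illustrative assumptions, not values taken from the study.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Minimal discrete-time LQR sketch for a 1-D point mass (position,
# velocity) driven by a force input. Q and R encode the trade-off
# between tracking error and control effort; both are illustrative.

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                 # penalize position/velocity error
R = np.array([[0.1]])                    # penalize control effort

# Solve the discrete algebraic Riccati equation and form the optimal gain.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop simulation: u = -K x steers the state toward the origin.
x = np.array([[0.5], [0.0]])             # start 0.5 m from the target
for _ in range(300):
    u = -K @ x
    x = A @ x + B @ u
print(x.ravel())                          # state has decayed close to zero
```

MPC follows the same logic but re-solves a finite-horizon version of the problem at every step, which is how constraints and updated predictions of the human partner can be folded in.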

Stochastic Optimal Control and Noisy-Rational Models are critical for effective human-robot interaction due to the inherent unpredictability of human behavior. Traditional optimal control assumes complete knowledge of the environment and agent actions; however, human responses are subject to noise and internal variations. Stochastic Optimal Control extends this framework by incorporating probabilistic models of human action, allowing the robot to account for uncertainty in predicting outcomes. Noisy-Rational Models specifically posit that humans, while generally rational, introduce noise into their decision-making processes. By modeling this noise, the robot can better estimate the probability of different human responses to its actions and optimize its behavior accordingly, increasing the robustness and naturalness of the interaction. These models enable the robot to move beyond deterministic planning and embrace probabilistic reasoning when anticipating human actions.
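One compact way to express the noisy-rational idea is a Boltzmann (softmax) choice rule, sketched below: the human is modeled as choosing each candidate action with probability proportional to the exponential of its utility, so better actions are more likely but never certain. The candidate actions, utilities, and rationality parameter here are hypothetical placeholders, not quantities estimated in the paper.

```python
import numpy as np

def noisy_rational_policy(utilities, beta=2.0):
    """Boltzmann-rational choice: softmax over action utilities.

    beta -> 0 gives uniformly random behavior; beta -> infinity
    recovers a perfectly rational (argmax) decision-maker.
    """
    u = np.asarray(utilities, dtype=float)
    z = beta * (u - u.max())          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Three hypothetical responses to a robot perturbation:
# fully compensate, partially compensate, or ignore it.
utilities = np.array([1.0, 0.6, -0.5])
print(noisy_rational_policy(utilities))           # ~[0.67, 0.30, 0.03]
print(noisy_rational_policy(utilities, beta=10))  # nearly deterministic
```

A stochastic optimal controller can then plan against this distribution over human actions instead of assuming a single deterministic response.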

Bayesian Logistic Regression is employed to model the probability of discrete human responses – such as yielding, continuing a task, or expressing discomfort – contingent on the robot’s actions. This method models the log-odds of a given response as a linear function of the robot’s action features, maps them to probabilities through a sigmoid function, and places prior beliefs on the model weights that are updated from observed robot actions and corresponding human reactions. The resulting model allows for probabilistic forecasting of human behavior, enabling the robot to estimate the likelihood of different responses given a particular action and to select actions that maximize desired outcomes or minimize potential negative reactions. This approach differs from traditional regression by explicitly modeling the probability of a binary or categorical outcome, and the Bayesian framework allows for incorporating prior knowledge and quantifying uncertainty in the predictions.
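The sketch below illustrates the general shape of such a model on synthetic data: a binary compensate / don’t-compensate response regressed on a single robot-action feature (perturbation magnitude), with a Gaussian prior on the weights and a MAP point estimate standing in for the full posterior. The feature, weights, and data are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

# Synthetic data: the probability of compensating rises with the
# perturbation magnitude applied by the robot (hypothetical weights).
rng = np.random.default_rng(0)
perturbation = rng.uniform(0.0, 1.0, size=200)
X = np.column_stack([np.ones_like(perturbation), perturbation])
true_w = np.array([-2.0, 5.0])
y = rng.binomial(1, expit(X @ true_w))

def neg_log_posterior(w, X, y, prior_var=10.0):
    """Negative log-likelihood plus a Gaussian prior on the weights."""
    logits = X @ w
    nll = np.sum(np.logaddexp(0.0, logits)) - y @ logits
    return nll + 0.5 * w @ w / prior_var

# MAP estimate of the weights (a Laplace step around this point would
# give the usual Gaussian approximation to the posterior).
w_map = minimize(neg_log_posterior, x0=np.zeros(2), args=(X, y)).x
print(w_map)                          # roughly recovers the generating weights
print(expit(w_map @ [1.0, 0.8]))      # P(compensate | strong perturbation)
```

The robot can query such a model before acting: if the predicted probability of an undesired response exceeds a threshold, it chooses a gentler action.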

Robot control systems can be designed to anticipate human actions through the integration of stochastic optimal control, noisy-rational models, and Bayesian logistic regression. Data collection follows a structured protocol of 5 blocks per experimental round, with each block requiring 10 trials for completion. Analysis of this data revealed two distinct behavioral strategies employed by human subjects during interaction; these strategies were consistently observed across trials and blocks, indicating predictable patterns of response to robot actions. This allows for the implementation of adaptive robot behaviors tailored to the identified strategies, improving overall interaction performance and reducing the need for constant human correction.
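A simplified, hypothetical stand-in for that analysis is sketched below: fit a regularized logistic curve of compensation against perturbation level for each participant and compare the fitted slopes, with near-zero slopes corresponding to participants who compensate regardless of likelihood. Only the protocol size (10 participants, 5 blocks of 10 trials) is taken from the study; the grouping, weights, and data generation are illustrative assumptions rather than the paper’s actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n_participants, n_trials = 10, 5 * 10   # 5 blocks x 10 trials per round

def fit_logistic(x, y, prior_var=10.0):
    """MAP logistic fit; the Gaussian prior keeps the weights finite
    even when a participant compensates on every single trial."""
    X = np.column_stack([np.ones_like(x), x])
    def nlp(w):
        logits = X @ w
        return (np.sum(np.logaddexp(0.0, logits)) - y @ logits
                + 0.5 * w @ w / prior_var)
    return minimize(nlp, x0=np.zeros(2)).x

slopes = []
for i in range(n_participants):
    perturbation = rng.uniform(0.0, 1.0, size=n_trials)
    if i < 5:   # hypothetical probability-modulating group
        p = expit(-2.0 + 5.0 * perturbation)
    else:       # hypothetical risk-averse group: compensate almost always
        p = np.full(n_trials, 0.95)
    y = rng.binomial(1, p).astype(float)
    slopes.append(fit_logistic(perturbation, y)[1])

# Large positive slopes track the perturbation level; near-zero slopes
# indicate compensation regardless of likelihood (the risk-averse pattern).
print(np.round(slopes, 2))
```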

Logistic regression analysis revealed two distinct behavioral strategies among participants: consistent compensation and strategic trade-offs.

Towards Robust and Intuitive Physical Interaction: The Future of Collaboration

Robots operating near humans benefit significantly from predictive capabilities, allowing them to preemptively adjust actions and avoid causing jarring or destabilizing movements. Rather than reacting to a person’s response, a robot designed with anticipatory behavior can model likely human reactions – considering factors like expected force exertion and postural adjustments – and subtly modify its own movements to minimize any disruptive forces. This proactive approach isn’t about dictating human behavior, but rather smoothing the interaction by reducing unexpected perturbations and the associated cognitive load. By minimizing these unnecessary ‘bumps’ or corrections, the robot fosters a sense of seamless collaboration and allows the human partner to maintain a more natural and comfortable flow of motion, ultimately increasing both safety and efficiency in shared workspaces.

The comfort and efficacy of human-robot interaction are fundamentally linked to how a robot’s actions impact the physical effort required from a human partner. Research indicates that even subtle assistance or disturbance from a robot can significantly alter a person’s perceived workload and willingness to collaborate. A robot that consistently minimizes effort – effectively assisting – fosters positive engagement, while actions perceived as disruptive, even if unintentional, increase physical strain and erode trust. Consequently, designing robotic systems that intelligently balance assistance and disturbance, informed by models of human biomechanics and behavioral responses, is paramount. Understanding this interplay allows for the creation of robots capable of adapting their behavior to minimize human effort cost, leading to more intuitive, comfortable, and ultimately, successful collaborative experiences.

Effective human-robot collaboration hinges on shared control strategies, and recent advancements demonstrate that these are significantly improved when informed by detailed behavioral modeling. By predicting how a human partner will respond to robotic assistance or interference, robots can dynamically adjust their actions to minimize effort and maximize efficiency. This isn’t simply about avoiding collisions; it’s about anticipating a human’s intended movements and providing support exactly when and where it’s needed. Such proactive assistance fosters a sense of seamless teamwork, building trust as the human perceives the robot as a reliable and intuitive partner. Consequently, robots employing these behaviorally-informed strategies aren’t just tools; they become collaborators, enhancing performance and acceptance in shared workspaces and ultimately leading to more natural and productive human-robot interactions.

Recent advancements in robotics are yielding machines capable of navigating shared spaces with increased safety and efficacy, largely due to a deeper understanding of human behavioral responses. Research indicates that individuals don’t react uniformly to robotic assistance or interference; instead, they exhibit adaptive behaviors contingent on perceived probabilities of success or failure, consistently demonstrating a preference for minimizing risk. This means that as a robot’s actions become less certain, humans intuitively adjust their own efforts to compensate, and they consistently favor actions that avoid potential instability or harm. By incorporating these probability-dependent and risk-averse patterns into robotic control algorithms, engineers can design systems that anticipate human reactions and proactively mitigate potentially disruptive interactions, fostering smoother and more reliable human-robot collaboration in complex, real-world settings.

The study highlights how humans navigate uncertainty during physical collaboration with robots, revealing a nuanced approach to probabilistic reasoning. This echoes Tim Berners-Lee’s sentiment: “The Web is more a social creation than a technical one.” Just as the Web’s structure evolved through collective interaction, so too must robotic systems adapt to individual human preferences in shared control scenarios. The research demonstrates that Cumulative Prospect Theory offers a valuable framework for understanding these preferences, allowing infrastructure to evolve – in this case, robotic behavior – without requiring a complete rebuild of the underlying system. This adaptive approach is crucial for fostering seamless and intuitive physical human-robot interaction.

Where Do We Go From Here?

The observation that humans don’t consistently maximize expected utility during physical collaboration with robots is hardly surprising; elegance rarely arises from complexity. Rather, the interesting result lies in the consistency of the deviation. Cumulative Prospect Theory offers a framework, but it remains a descriptive tool. The real challenge isn’t fitting a curve to the data, but understanding the underlying structure that necessitates such a curve. Why do humans consistently prioritize avoiding losses over achieving gains, even when the magnitudes are equivalent? This speaks to a deeper principle, potentially rooted in embodied cognition and the inherent asymmetries of physical interaction.

Future work must move beyond simply modeling human behavior and toward predicting its emergence. Shared control algorithms that adapt to individual risk profiles are a logical next step, but a truly robust system will anticipate those preferences before they manifest as observable choices. The field should consider the limitations of relying solely on probabilistic reasoning; a robot that understands the physical constraints and affordances of the task may infer human intent more effectively than one that merely calculates probabilities.

If a design feels clever, it’s probably fragile. The pursuit of adaptive robotics should therefore prioritize simplicity and robustness. A system that operates on minimal assumptions, and focuses on clear, direct communication, will ultimately be more resilient – and more human – than one that attempts to replicate the full complexity of human decision-making. The goal isn’t to build a perfect model of the human mind, but a reliable partner in the physical world.


Original article: https://arxiv.org/pdf/2512.08481.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2025-12-10 11:39