Can a Robot Be Your Friend? The Rise of Personality in AI

Author: Denis Avetisyan


New research explores how giving robots distinct personalities, powered by large language models, impacts how humans interact with and perceive these machines.

A pre-study dialogue revealed that a chatbot exhibiting uncooperative behavior – appearing disinterested and unwilling to engage – demonstrated a clear lack of alignment with collaborative task completion.

This study investigates the effect of LLM-driven robot agreeableness on human motivation, perceived likability, and cooperative task performance.

While effective human-robot collaboration hinges on positive interaction, instilling robots with compelling personalities remains a significant challenge. This is addressed in ‘Robots with Attitudes: Influence of LLM-Driven Robot Personalities on Motivation and Performance’, a study investigating how large language models can shape robot personas and impact human perceptions. Results demonstrate that programming a robot with an agreeable personality enhances likability, though effects on intrinsic motivation were not conclusive, suggesting a more nuanced relationship between personality and collaborative success. Could carefully crafted robot personalities unlock new levels of trust and efficiency in human-robot teams?


The Illusion of Rapport: Why Robots Need to Seem Consistent

The foundation of successful human-robot interaction hinges on the development of rapport and trust, a dynamic built upon a robot’s demonstrable consistency and predictability. Humans instinctively assess behavioral patterns to determine reliability, and robots are no different; erratic or illogical actions quickly erode confidence and hinder collaboration. Researchers are discovering that a robot’s actions must not only achieve desired outcomes but also adhere to expected patterns, allowing users to anticipate responses and build a mental model of the robot’s ‘intentions’. This predictability extends beyond task execution to encompass communication style, emotional expression – even subtle movements – fostering a sense of safety and allowing humans to comfortably relinquish control or delegate tasks. Without this baseline of consistent behavior, a robot risks being perceived as unreliable, hindering the development of effective teamwork and limiting its integration into everyday life.

Robot personality significantly shapes human perception and engagement, extending beyond mere functionality to influence trust, comfort, and collaborative potential. Research indicates that humans instinctively attribute traits – such as agreeableness, competence, and emotional stability – to robots based on their behavior, appearance, and communication style. These perceived traits then trigger established social biases and expectations, mirroring how individuals interact with one another. A robot exhibiting perceived warmth and empathy, for instance, is more likely to elicit cooperative behavior and positive emotional responses from humans, while one perceived as cold or aloof may encounter resistance or distrust. Consequently, designing robots with carefully considered personality profiles is no longer simply about aesthetics or user experience; it’s a fundamental aspect of fostering effective and harmonious human-robot partnerships.

The creation of compelling robot personalities currently faces significant limitations in achieving genuine engagement. Existing methods frequently rely on simplified trait models or pre-programmed responses, resulting in interactions that feel artificial or predictable to humans. This lack of nuance hinders the development of true rapport, as robots struggle to exhibit the subtle behavioral cues – such as adaptive emotional expression or consistent character – that facilitate natural social bonding. Consequently, collaborative potential is diminished; humans may perceive these robots as tools rather than partners, impacting trust and reducing willingness to engage in complex, long-term interactions that require genuine social intelligence. Further research is needed to move beyond superficial personality representations and imbue robots with the capacity for believable, dynamic, and contextually appropriate behavior.

The successful integration of robots into everyday human life hinges on a deep understanding of how personality traits influence interaction dynamics. Research demonstrates that humans instinctively attribute personalities to robots, and these perceived traits significantly shape acceptance, trust, and collaborative potential. A robot perceived as conscientious and agreeable, for example, is more likely to elicit cooperative behavior from humans, while a robot displaying traits associated with dominance or unpredictability can trigger avoidance or distrust. Consequently, designing robots with carefully calibrated personality profiles – encompassing dimensions like warmth, competence, and emotional expression – is not merely an aesthetic consideration, but a fundamental requirement for fostering seamless and effective human-robot partnerships in homes, workplaces, and public spaces. This necessitates moving beyond simplistic programming and embracing computational models that capture the complexity and nuance of human social cognition to create robots that are not just functional, but genuinely compatible with the human experience.

This study investigates the impact of robot personality – manipulated through a large language model enabling unscripted interactions – on human-robot cooperation during a Quickdraw task.

The Five-Factor Framework: Quantifying the Illusion

The robot personality framework adapts the Five-Factor Model, a widely accepted taxonomy in personality psychology. Four of its dimensions are retained: Agreeableness, representing traits like compassion and cooperation; Conscientiousness, reflecting organization and responsibility; Emotional Stability, the inverse of the canonical Neuroticism factor, encompassing resilience and calmness; and Openness, characterizing imagination and intellectual curiosity. The fifth dimension – Extraversion in the canonical model – is here reframed as motivational drive, specifically addressing the robot’s propensity to initiate and persist in tasks. Each of these factors is treated as a continuous variable, allowing for nuanced behavioral profiles to be generated by assigning values along each dimension. This approach provides a standardized and quantifiable method for defining and manipulating robot behavior, moving beyond simple rule-based systems.
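Treating the five factors as continuous variables can be sketched as a small data structure. This is a minimal illustration, not the paper's implementation; the class and field names are hypothetical, and scores are assumed to lie in $[0, 1]$.

```python
from dataclasses import dataclass, fields

@dataclass
class PersonalityProfile:
    """Five continuous trait scores in [0, 1], mirroring the study's adapted factor set."""
    agreeableness: float
    conscientiousness: float
    emotional_stability: float
    openness: float
    motivation: float

    def clamped(self) -> "PersonalityProfile":
        # Keep every trait inside the valid range so downstream behavior
        # generation always receives well-formed inputs.
        def clip(v: float) -> float:
            return max(0.0, min(1.0, v))
        return PersonalityProfile(*(clip(getattr(self, f.name)) for f in fields(self)))

# Two example profiles: a highly agreeable robot and a disagreeable one.
agreeable = PersonalityProfile(0.9, 0.7, 0.8, 0.6, 0.9)
disagreeable = PersonalityProfile(0.1, 0.3, 0.4, 0.5, 0.2)
```

Representing the profile numerically is what makes the later manipulation systematic: an experimental condition is just a different vector of values.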

The Vicuna Large Language Model was utilized to translate the five personality factors – Agreeableness, Conscientiousness, Emotional Stability, Openness, and a motivation dimension – into observable robotic behaviors. Specifically, Vicuna was prompted with personality trait definitions and expected response patterns, generating textual outputs intended to represent consistent behavioral expressions of those traits. These outputs were then used to inform robot action selection and dialogue generation, ensuring that the robot’s responses and behaviors aligned with its defined personality profile across multiple interactions. This approach allowed for the creation of a system where manipulating the input parameters to Vicuna directly altered the robot’s exhibited personality, producing predictable and repeatable behavioral changes.
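One plausible way to turn numeric trait scores into an LLM prompt is to render each score as a natural-language intensity level in the system message. The sketch below is an assumption about the general shape of such prompting, not the study's actual prompt; the function name, score bins, and wording are all hypothetical.

```python
def personality_prompt(profile: dict[str, float]) -> str:
    """Render continuous trait scores as natural-language instructions
    suitable for an LLM system prompt."""
    def level(score: float) -> str:
        # Bin a [0, 1] score into a coarse verbal intensity.
        if score < 0.2:
            return "very low"
        if score < 0.4:
            return "low"
        if score < 0.6:
            return "moderate"
        if score < 0.8:
            return "high"
        return "very high"

    lines = [f"- {trait.replace('_', ' ')}: {level(score)}"
             for trait, score in profile.items()]
    return ("You are a robot teammate in a cooperative drawing game. "
            "Stay consistently in character with this personality profile:\n"
            + "\n".join(lines))

prompt = personality_prompt({
    "agreeableness": 0.9,
    "conscientiousness": 0.7,
    "emotional_stability": 0.8,
    "openness": 0.6,
    "motivation": 0.9,
})
```

Because the prompt is derived mechanically from the profile, changing a single input value (say, lowering agreeableness) changes the robot's exhibited persona without touching any other code, which is exactly the kind of repeatable manipulation the study relies on.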

Traditional robot behavior relies on explicitly programmed responses to specific stimuli; however, this approach limits adaptability and nuanced interaction. Utilizing the Five-Factor Model enables the definition of robot personality through quantifiable parameters – Agreeableness, Conscientiousness, Emotional Stability, Openness, and a motivational factor – allowing for a continuous spectrum of behavioral expression. These factors are represented numerically, facilitating precise adjustment and replication of desired personality traits. This contrasts with discrete, pre-defined actions, as the LLM can generate novel responses consistent with the assigned personality profile, creating a more flexible and dynamic representation of robot personality that extends beyond simple conditional logic.

The manipulation of personality factors, specifically Agreeableness, Conscientiousness, Emotional Stability, Openness, and a motivation dimension, was undertaken to enable robots to dynamically adjust their interaction strategies based on the context of a given task and the user involved. This adaptive behavior is intended to maximize task engagement by tailoring the robot’s communication style, response timing, and overall demeanor. For example, increasing Agreeableness might foster collaboration, while increasing Conscientiousness could emphasize adherence to instructions and thoroughness. The quantifiable nature of this framework allows for systematic testing of how specific personality profiles affect user motivation and task performance, ultimately aiming to create robots that can optimize their interactions to achieve desired outcomes.

Pre-study analysis using the Ten-Item Personality Inventory (TIPI) reveals statistically significant differences in assessed personality traits between the two chatbots (p < 0.01 and p < 0.001).

Validation Through Performance: Measuring the Facade

A pre-study was conducted to validate the personality profiles of the Vicuna chatbot prior to the main experiment. Online evaluations, utilizing human subjects, were used to assess whether Vicuna’s responses consistently reflected the intended personality traits – specifically, varying levels of agreeableness. These evaluations involved presenting participants with conversational prompts and analyzing their ratings of Vicuna’s exhibited behaviors. The results of this pre-study were critical for ensuring that any observed effects in the main study could be confidently attributed to the manipulated personality variable and not to inconsistencies in Vicuna’s expression of those traits. Quantitative analysis confirmed significant differences in perceived empathy and emotional stability between the agreeable and non-agreeable chatbot versions ($p < .001$ for both), as well as a statistically significant difference in perceived intelligence ($p = .030$).

The Main Study utilized the Quickdraw game as a collaborative task to investigate human-robot interaction. Participants were paired with a robotic agent programmed to express one of several distinct personality profiles. In each round of Quickdraw, a participant and the robot were presented with a prompt and tasked with collaboratively drawing a representation of that prompt within a limited time frame. This setup allowed researchers to observe how human participants adapted their behavior and communication strategies when cooperating with a robotic partner exhibiting varying personality traits, and to quantify the impact of these traits on task performance and perceived social dynamics.

Task performance was quantified through metrics related to the Quickdraw game, specifically the number of correctly guessed drawings within a specified time limit and the overall completion rate of collaborative drawing rounds. Participant motivation was assessed using a post-task questionnaire incorporating Likert scale items designed to measure intrinsic motivation, effort expenditure, and continued engagement with the cooperative task. These measures were then statistically analyzed to determine correlations between robot personality profile – agreeable versus non-agreeable – and both individual and team performance outcomes, providing quantitative data on the influence of perceived personality on collaborative success.
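The between-condition comparisons reported here are typically run as independent-samples t-tests on the questionnaire scores. As a minimal sketch of that style of analysis – using Welch's unequal-variances t-statistic and entirely made-up Likert ratings, not the study's data – one could write:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)        # sample variances (n-1 denominator)
    se = sqrt(va / len(a) + vb / len(b))     # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical 1-5 Likert ratings for the agreeable vs. non-agreeable condition.
agreeable_ratings = [5, 4, 5, 4, 5, 4, 5, 5]
neutral_ratings   = [3, 2, 3, 3, 2, 3, 2, 3]
t = welch_t(agreeable_ratings, neutral_ratings)
```

A large positive $t$ on data like this would correspond to the kind of small p-values the paper reports for likability; in practice one would obtain the p-value from the t-distribution (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`) rather than computing it by hand.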

Participant perceptions of the robotic agent were evaluated using the Godspeed Questionnaire, a validated instrument for assessing likability, safety, and trustworthiness. Pre-study results indicated statistically significant differences in perceived empathy and emotional stability between the agreeable and non-agreeable chatbot conditions ($p < .001$ for both measures). Furthermore, the agreeable chatbot was rated as significantly more intelligent than the non-agreeable chatbot ($p = .030$). These findings demonstrate that distinct personality profiles, as implemented in the chatbot, elicit varying levels of positive perception from human participants prior to task engagement.

Significant differences in user perceptions, as measured by the Godspeed questionnaire, were observed between the agreeable and non-agreeable robots (p < 0.05 or p < 0.001, depending on the dimension).

The Illusion Persists: Towards Adaptable Facades

Recent investigations reveal that a robot’s exhibited personality isn’t merely a cosmetic feature, but a crucial determinant in collaborative performance and user engagement. Studies consistently demonstrate that when robots are imbued with defined personality traits, participants not only perform tasks more effectively, but also report higher levels of motivation and enjoyment during the interaction. This suggests that the perception of a robot as a collaborative partner – rather than simply a tool – is heavily influenced by its personality, directly impacting the human’s willingness to engage and cooperate. The findings indicate that carefully designed robotic personalities can unlock more fluid and productive human-robot teams, moving beyond functional efficiency towards genuinely collaborative experiences.

Research indicates a strong link between specific robot personality traits and the effectiveness of human-robot collaboration. Notably, robots exhibiting high levels of agreeableness – displaying traits like kindness and cooperation – and conscientiousness – demonstrating organization and diligence – consistently fostered more successful task completion and heightened user engagement. This correlation suggests that a robot’s perceived personality isn’t merely a superficial characteristic, but a crucial factor influencing how readily humans collaborate and remain motivated when working alongside them. The study highlights that robots designed to be cooperative and reliable partners, rather than simply tools, can significantly enhance the overall collaborative experience and improve performance outcomes, potentially by building trust and reducing user frustration.

The study’s results underscore a critical design consideration for social robots: personality is not merely an aesthetic addition, but a functional element impacting collaboration and user engagement. Researchers found that robots exhibiting traits like agreeableness were perceived as significantly more likeable – a difference confirmed with a p-value of less than .001 – suggesting personality directly influences how readily humans accept and interact with robotic partners. This heightened likeability, in turn, translated to improved collaborative performance and increased participant motivation, demonstrating that carefully crafting a robot’s personality can be a powerful tool for optimizing human-robot interaction and building trust, moving beyond purely functional design towards more intuitive and effective partnerships.

Investigations into human-robot interaction are increasingly focused on the extended impact of robotic personality; future studies will need to assess whether initial positive responses to specific traits translate into sustained engagement and trust over prolonged periods. Beyond static personality design, research is poised to explore the development of adaptive personality profiles, where a robot dynamically adjusts its behavioral traits based on individual user preferences and interaction history. This personalization could optimize the user experience, fostering stronger rapport and enhancing collaborative performance, potentially by mirroring a user’s communication style or providing support tailored to their emotional state. Such advancements necessitate robust methodologies for measuring long-term user satisfaction and addressing potential ethical considerations surrounding manipulation or undue influence stemming from artificially intelligent social cues.

Participants collaboratively sketch an object, like a pizza, with a robot that provides commentary and guesses to facilitate the drawing task.

The study’s findings regarding nuanced impacts on motivation and task performance feel… predictable. It appears imbuing robots with agreeable personalities boosts perceived likability – a superficial metric, naturally. As John McCarthy observed, “It is better to solve one problem than a thousand.” This research identifies a problem – how to make robots seem cooperative – but doesn’t convincingly demonstrate a solution for actually improving collaborative outcomes. The core idea, that personality alone doesn’t guarantee enhanced performance, confirms a long-held suspicion: anything self-healing just hasn’t broken yet. The pursuit of ‘likable’ robots feels like polishing the casing while the engine sputters.

What’s Next?

The pursuit of ‘agreeable’ robots, predictably, hasn’t unlocked some magical surge in cooperative task completion. It turns out humans aren’t quite so easily swayed by polite syntax, even when delivered by a machine. The initial uptick in perceived likability is… expected. It’s always easier to like something that pretends to like you. The real question, one this work subtly highlights, is what happens when the pleasantries inevitably break down. What happens when the LLM hallucinates a crucial step, or decides passive-aggression is the optimal strategy for maximizing task efficiency? They’ll call it ‘emergent behavior’ and raise funding.

The focus now will undoubtedly shift to quantifying how personality impacts interaction beyond simple preference ratings. Expect to see attempts to correlate specific personality traits with error recovery, or the ability to negotiate ambiguous instructions. But a nagging suspicion remains: this is all just a very elaborate way to avoid addressing the fundamental brittleness of robotic systems. A robot that seems cooperative is still a robot that will eventually fail, and that failure won’t be softened by a well-crafted apology.

One can already envision the escalating complexity. Personality layers built on top of emotion models, which themselves are driven by… more LLMs. It used to be a simple bash script, honestly. And somewhere, buried deep within the layers of abstraction, the documentation will lie again, and someone will be left debugging a robot that insists it was following instructions perfectly, thank you very much.


Original article: https://arxiv.org/pdf/2512.06910.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
