Author: Denis Avetisyan
This review outlines a new framework for embedding human values into artificial intelligence, moving beyond abstract principles to concrete, verifiable standards.

A structured process for operationalizing social, legal, ethical, empathetic, and cultural norms into technical requirements for AI agents.
Despite growing international frameworks for ethical AI, translating abstract principles into concrete engineering requirements remains a significant challenge. This paper, ‘Social, Legal, Ethical, Empathetic and Cultural Norm Operationalisation for AI Agents’, addresses this gap by proposing a systematic process for determining, validating, and verifying normative requirements – effectively operationalizing SLEEC norms for AI agents. The core contribution is a framework and research agenda designed to move beyond aspirational principles toward demonstrably aligned AI systems. How can we best scale these operationalization techniques to address the complex interplay of cultural values and legal constraints across diverse deployment contexts?
Foundations for Responsible AI: Beyond Aspirational Ethics
The rapid integration of artificial intelligence into daily life, from automated decision-making systems to increasingly autonomous agents, demands more than a collection of aspirational principles. Simple ethical guidelines often prove insufficient when confronted with the nuanced complexities of real-world scenarios, particularly those involving conflicting values or unforeseen consequences. As AI agents take on roles previously held by humans – impacting areas like healthcare, finance, and criminal justice – the need for a robust and operationalizable ethical framework becomes paramount. This isn’t simply about preventing malicious use; it’s about proactively shaping AI behavior to align with societal values and ensuring accountability when harm occurs, necessitating a move beyond broad statements of intent towards concrete, actionable standards.
Current ethical frameworks for artificial intelligence frequently struggle when confronted with the nuances of practical application. Broad principles, while well-intentioned, often lack the specificity required to navigate complex, real-world dilemmas – situations demanding careful consideration of context, competing values, and unforeseen consequences. This limitation stems from a tendency to prioritize abstract ideals over actionable guidelines, leaving developers and deployers without concrete direction when facing difficult choices. Consequently, AI systems may inadvertently perpetuate biases, violate privacy, or produce outcomes that, while technically compliant, are ethically questionable, highlighting the urgent need for more granular and adaptable ethical reasoning tools.
The development of truly responsible artificial intelligence requires moving beyond broad ethical principles to a more nuanced and comprehensive framework, and the SLEEC norms – encompassing Social, Legal, Ethical, Empathetic, and Cultural considerations – offer just such a foundation. This approach acknowledges that AI operates within a complex web of human values and societal structures, demanding that its behavior be evaluated not simply on what is permissible, but on what is considerate and culturally appropriate. By integrating these five dimensions, the SLEEC framework encourages a holistic assessment of AI systems, prompting developers to consider the broader impacts on individuals, communities, and global society. Ultimately, this multi-faceted perspective aims to ensure that AI technologies are not only technically sound, but also aligned with human well-being and societal flourishing, fostering trust and promoting beneficial outcomes for all.
The establishment of truly ethical artificial intelligence relies not simply on broad principles, but on their rigorous conversion into practical application. This work addresses the critical gap between aspirational ethical guidelines and concrete AI behavior by detailing a five-stage operationalisation process for SLEEC norms – Social, Legal, Ethical, Empathetic, and Cultural considerations. These norms are fundamentally rooted in established Normative Principles, yet their effective implementation demands careful translation into actionable rules that AI systems can interpret and adhere to. The proposed process systematically breaks down these complex considerations, enabling developers to move beyond theoretical frameworks and build AI agents demonstrably aligned with human values and societal expectations, fostering trust and responsible innovation in the field.
From Principle to Practice: Operationalizing Ethical Guidelines
The SLEEC Norm Operationalization Process is a five-stage methodology designed to convert high-level ethical principles into actionable, testable requirements for AI systems. This process begins with identifying relevant ethical considerations, followed by their formalization into specific, measurable norms. These norms are then translated into detailed functional requirements defining permissible and prohibited agent behaviors. The subsequent stage involves technical specification, linking requirements to specific AI agent capabilities and constraints. Finally, the process culminates in verification and validation procedures to ensure the implemented system demonstrably adheres to the defined SLEEC Requirements, providing an audit trail from ethical principle to system behavior.
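To make the pipeline concrete, here is a minimal Python sketch of a requirement object that carries its audit trail through the five stages. The class name `SleecRequirement` and the stage labels are illustrative paraphrases of the stages described above, not identifiers from the paper:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    IDENTIFY = auto()    # identify relevant ethical considerations
    FORMALISE = auto()   # formalise them into specific, measurable norms
    TRANSLATE = auto()   # translate norms into functional requirements
    SPECIFY = auto()     # link requirements to agent capabilities and constraints
    VERIFY = auto()      # verify and validate the implemented behaviour

@dataclass
class SleecRequirement:
    principle: str                   # the high-level normative principle it derives from
    rule: str                        # the concrete, testable rule
    stage: Stage = Stage.IDENTIFY
    audit_trail: list = field(default_factory=list)

    def advance(self, stage: Stage, note: str) -> None:
        # Record every transition, preserving traceability from principle to behaviour.
        self.stage = stage
        self.audit_trail.append((stage.name, note))

req = SleecRequirement(
    principle="Respect for user privacy",
    rule="Obtain explicit consent before collecting personally identifiable information",
)
req.advance(Stage.FORMALISE, "expressed as a measurable consent norm")
```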
The SLEEC Norm Operationalization Process incorporates societal values and legal obligations into AI agent design through a systematic approach to requirement definition. This ensures alignment between agent behavior and externally defined ethical and legal standards, mitigating risks associated with unintended consequences or non-compliance. By explicitly linking agent capabilities to these norms, the process facilitates responsible innovation by proactively addressing potential conflicts between technical feasibility and desired ethical outcomes. This structured methodology allows developers to demonstrably account for societal expectations, fostering trust and accountability in AI systems.
SLEEC Requirements articulate precise behavioral expectations for AI agents, detailing actions the agent should undertake when presented with defined situational contexts. These requirements move beyond broad ethical statements by specifying concrete rules, such as mandated data handling procedures, permissible response types, or obligatory safety checks before executing a task. Each requirement is formulated to be directly testable, allowing for verification that the agent is functioning in accordance with established norms; for example, a requirement might state that an agent “must obtain explicit user consent prior to collecting personally identifiable information.” The scope of these requirements covers all foreseeable operational scenarios, addressing potential conflicts between ethical principles and technical capabilities to ensure responsible AI deployment.
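Because each requirement is meant to be directly testable, it can be rendered as an executable predicate. A minimal sketch of the consent rule quoted above, assuming a hypothetical `ActionContext` type rather than any interface from the paper:

```python
from dataclasses import dataclass

@dataclass
class ActionContext:
    # Minimal situational context checked before the agent acts (illustrative).
    collects_pii: bool
    user_consented: bool

def consent_requirement(ctx: ActionContext) -> bool:
    # SLEEC rule: PII may only be collected after explicit user consent.
    return (not ctx.collects_pii) or ctx.user_consented

# Directly testable: each case asserts the expected verdict.
assert consent_requirement(ActionContext(collects_pii=False, user_consented=False))
assert consent_requirement(ActionContext(collects_pii=True, user_consented=True))
assert not consent_requirement(ActionContext(collects_pii=True, user_consented=False))
```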
SLEEC Requirements are directly informed by, and constrained by, the documented AI Agent Capabilities of the system being developed. This linkage is a critical component of the operationalization process, ensuring that ethical and legal obligations are translated into specifications that are technically achievable. Specifically, each requirement is assessed against the agent’s existing or planned capabilities – including perception, reasoning, action, and communication – to verify feasibility. If a requirement exceeds the agent’s capabilities, it is either refined to align with limitations or the agent’s capabilities are expanded to meet the requirement, with associated resource and timeline implications clearly documented. This iterative process prevents the formulation of unrealistic expectations and facilitates a pragmatic approach to responsible AI development.
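The feasibility check itself can be as simple as set containment between the capabilities a requirement presupposes and those the agent possesses. A hedged sketch with invented capability names:

```python
# Capabilities the agent actually has (or is planned to have).
AGENT_CAPABILITIES = {"speech_output", "fall_detection", "consent_dialog"}

# Each requirement lists the capabilities it presupposes.
REQUIREMENTS = {
    "remind_user_to_take_medication": {"speech_output"},
    "summon_help_after_a_fall": {"fall_detection", "emergency_call"},
}

for name, needed in REQUIREMENTS.items():
    missing = needed - AGENT_CAPABILITIES
    if missing:
        # Either refine the requirement or extend the agent, documenting
        # the resource and timeline implications of the chosen option.
        print(f"{name}: infeasible as stated, missing {sorted(missing)}")
    else:
        print(f"{name}: feasible with current capabilities")
```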
Demonstrating Reliability: Verification and Adaptive Systems
Formal verification employs rigorous mathematical techniques to demonstrate that an AI agent’s behavior provably satisfies its SLEEC Requirements – the social, legal, ethical, empathetic, and cultural norms specified for the system. This process differs from traditional testing by providing a guarantee, rather than simply increasing confidence, that the agent will not violate these requirements under any circumstance covered by the formal model. Techniques used include model checking, theorem proving, and abstract interpretation, all of which analyze the agent’s design and code against a formal specification of the SLEEC constraints. Successfully completing formal verification significantly reduces the risk of unintended consequences by establishing a mathematically sound basis for the agent’s reliability and safety.
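A toy illustration of the explicit-state flavor of model checking: exhaustively enumerating every reachable state of a small agent model and checking a safety property in each. The model and property below are invented for illustration; real verification tools operate on far richer formalisms:

```python
from collections import deque

# Tiny invented agent model: a state is (mode, consent_in_force).
def successors(state):
    mode, consent = state
    if mode == "idle":
        yield ("asking_consent", consent)
    elif mode == "asking_consent":
        yield ("idle", consent)        # user declines
        yield ("collecting", True)     # user grants consent, collection begins
    elif mode == "collecting":
        yield ("idle", False)          # session ends, consent expires

def safe(state):
    # Safety property: data is only ever collected while consent is in force.
    mode, consent = state
    return mode != "collecting" or consent

def check(initial):
    # Exhaustive reachability search: the essence of explicit-state model checking.
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return False, state        # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None                  # property holds in every reachable state

assert check(("idle", False)) == (True, None)
```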
RoboChart and Tock-CSP are formal methods tools utilized in the verification of AI agent behavior. RoboChart employs statechart modeling to visually represent and analyze agent logic, enabling the detection of design flaws and inconsistencies. Tock-CSP, based on Communicating Sequential Processes (CSP), allows agent interactions and properties to be specified in a formal language and then automatically verified against defined requirements. Both tools support the creation of test cases and the generation of proofs demonstrating compliance with specified SLEEC requirements, providing a robust and reliable approach to ensuring predictable and safe AI system operation.
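As a loose analogue of what a CSP refinement check does (this is plain Python, not the Tock-CSP or RoboChart toolchain), one can bound-check that every event trace of an implementation is also a trace of its specification:

```python
def traces(process, depth):
    # Enumerate all event traces of a process up to `depth` events.
    # A process is (transitions, initial_state); transitions maps a state
    # to a list of (event, next_state) pairs.
    transitions, state = process
    result = {()}
    frontier = [((), state)]
    for _ in range(depth):
        step = []
        for trace, s in frontier:
            for event, nxt in transitions.get(s, []):
                extended = trace + (event,)
                result.add(extended)
                step.append((extended, nxt))
        frontier = step
    return result

# Specification: consent must always precede collection.
SPEC = ({"s0": [("ask_consent", "s1")], "s1": [("collect", "s0")]}, "s0")
# Implementation under test, expressed in the same toy formalism.
IMPL = ({"i0": [("ask_consent", "i1")], "i1": [("collect", "i0")]}, "i0")

# Bounded traces refinement: every implementation trace is also a spec trace.
assert traces(IMPL, depth=4) <= traces(SPEC, depth=4)
```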
Digital Twins provide a virtual replica of the AI agent and its operating environment, allowing for extensive pre-deployment testing and validation. This simulation capability enables developers to subject the agent to a wide range of scenarios, including edge cases and unforeseen circumstances, without risk to real-world systems or individuals. By creating a closed-loop system where the agent interacts with the simulated environment and its actions are monitored and analyzed, potential flaws in the agent’s logic, sensor integration, or decision-making processes can be identified and corrected. The data generated during Digital Twin simulations offers quantifiable metrics regarding the agent’s performance, robustness, and adherence to specified requirements, increasing confidence in its reliability and safety prior to real-world implementation.
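In miniature, a digital-twin campaign is a closed simulation loop plus a monitor that aggregates adherence metrics. The dynamics, policy, and probabilities below are purely illustrative:

```python
import random

def twin_step(env, action):
    # Advance the simulated environment one tick (highly simplified dynamics).
    if action == "remind" and env["medication_due"]:
        return {**env, "medication_due": False}
    return dict(env)

def agent_policy(env):
    # Agent under test: remind whenever medication is due.
    return "remind" if env["medication_due"] else "wait"

def run_episode(seed, ticks=10):
    # Closed loop: agent acts on the twin; a monitor records rule adherence.
    rng = random.Random(seed)
    env = {"medication_due": rng.random() < 0.5}
    violations = 0
    for _ in range(ticks):
        action = agent_policy(env)
        if env["medication_due"] and action != "remind":
            violations += 1              # a required reminder was missed
        env = twin_step(env, action)
        if rng.random() < 0.1:
            env["medication_due"] = True  # a new dose becomes due
    return violations

# A quantifiable metric over many simulated scenarios, before any real deployment.
print(sum(run_episode(seed) for seed in range(1000)), "violations in 1000 episodes")
```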
Runtime Adaptation in AI agents addresses the need for continuous ethical compliance by enabling dynamic behavioral adjustments. This functionality allows the agent to respond to alterations in normative requirements – such as updated regulations or evolving societal values – and to unforeseen circumstances encountered during operation. Implementation typically involves a monitoring system that detects deviations from established ethical guidelines or unexpected environmental conditions, triggering a reconfiguration of the agent’s decision-making processes. This adaptation can range from minor parameter adjustments to more substantial changes in behavioral algorithms, all executed without requiring manual intervention or system downtime, thereby maintaining reliable and ethically sound operation throughout the agent’s lifecycle.
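The core mechanism can be sketched as a norm table that the monitoring system may rewrite while the agent keeps running. The retention rule and values here are invented for illustration:

```python
class AdaptiveAgent:
    # Sketch of runtime adaptation: norms can change while the agent runs.

    def __init__(self):
        self.norms = {"max_data_retention_days": 30}

    def on_norm_update(self, key, value):
        # Triggered by the monitoring system, e.g. when a regulation changes.
        self.norms[key] = value          # reconfigure without downtime

    def may_retain(self, age_days):
        return age_days <= self.norms["max_data_retention_days"]

agent = AdaptiveAgent()
assert agent.may_retain(20)
agent.on_norm_update("max_data_retention_days", 7)   # stricter rule takes effect
assert not agent.may_retain(20)                      # behaviour adapts immediately
```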
Real-World Impact and Future Directions in Responsible AI
The ALMI Project showcases how SLEEC norms – spanning social, legal, ethical, empathetic, and cultural considerations – can move assistive AI beyond theoretical potential and into tangible support for individuals facing cognitive and physical challenges. Through the development and deployment of AI agents designed to aid with daily living tasks, the project demonstrates practical applications ranging from medication reminders and fall detection to proactive environmental adjustments. This isn’t simply automation; it’s the creation of intelligent companions capable of adapting to user needs while upholding stringent ethical guidelines. By integrating SLEEC principles, the ALMI Project shows that AI can enhance independence and quality of life for vulnerable populations, establishing a blueprint for responsible innovation in the burgeoning field of assistive technology.
The effective deployment of AI-driven assistive technologies, such as those developed within the ALMI Project, fundamentally relies on sustained human oversight. This isn’t merely a matter of error correction; it’s about safeguarding user autonomy and ensuring the AI operates within ethically defined boundaries. Without continuous monitoring and the capacity for human intervention, even well-intentioned AI agents risk misinterpreting nuanced situations or enacting actions that conflict with a user’s preferences or values. Human oversight provides a critical feedback loop, allowing for real-time adjustments and preventing the AI from exceeding its designated scope, ultimately fostering trust and responsible innovation in the realm of assistive living. It’s through this collaborative approach – AI assistance coupled with human judgment – that these technologies can truly empower individuals while upholding their dignity and self-determination.
The design of ethically-aligned AI agents operating in complex environments inevitably encounters situations where core values conflict. For example, an assistive AI might prioritize a user’s desire for independence while simultaneously needing to ensure their safety, creating a tension between autonomy and beneficence. Resolving such dilemmas demands more than simply programming a hierarchy of values; instead, developers must proactively anticipate potential conflicts and build in mechanisms for transparent trade-offs. This requires explicitly defining the values at play, quantifying their relative importance within specific contexts, and, crucially, making these considerations accessible to users and caregivers. By openly acknowledging the ethical calculus behind an AI’s decisions, and allowing for human oversight where appropriate, systems can move beyond simply avoiding value clashes to managing them in a responsible and accountable manner, fostering trust and ensuring alignment with human preferences.
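One hedged way to make such trade-offs explicit is a context-weighted score whose inputs are surfaced alongside the decision. The values, weights, and actions below are illustrative, not a prescription:

```python
def choose(options, weights):
    # Pick the action with the best weighted value score, and explain the choice.
    # `weights` encodes the context-specific importance of each value.
    def score(opt):
        return sum(weights[v] * opt["values"][v] for v in weights)
    best = max(options, key=score)
    # Transparent trade-off: surface the calculus rather than hiding it.
    explanation = {v: (weights[v], best["values"][v]) for v in weights}
    return best["action"], explanation

options = [
    {"action": "let_user_walk_alone",   "values": {"autonomy": 1.0, "safety": 0.4}},
    {"action": "insist_on_supervision", "values": {"autonomy": 0.2, "safety": 0.9}},
]
# In a low-risk context, autonomy outweighs safety; a caregiver can inspect why.
action, why = choose(options, weights={"autonomy": 0.7, "safety": 0.3})
print(action, why)
```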
The robust design of ethically-aligned AI systems demands proactive consideration of unexpected scenarios. Rather than rigidly adhering to pre-programmed responses, advanced architectures now incorporate “defeaters”: predetermined conditions that, when met, trigger a shift in operational priorities to uphold core ethical principles. These defeaters aren’t failures, but rather carefully constructed overrides that allow the AI to gracefully navigate circumstances outside its typical operating parameters. For example, an assistive robot designed to remind a patient to take medication might include a defeater triggered by a detected medical emergency, immediately prioritizing communication with emergency services over medication adherence. This approach ensures that, even when confronted with unforeseen events, the AI’s actions remain consistent with fundamental ethical guidelines, preventing potentially harmful outcomes and maintaining user trust.
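The medication-versus-emergency example above reduces to a small priority override. A minimal sketch, with invented action names:

```python
def next_action(due_medication: bool, emergency_detected: bool) -> str:
    # Defeater: a detected emergency overrides the normal reminder behaviour.
    if emergency_detected:                 # defeater condition met
        return "contact_emergency_services"
    if due_medication:
        return "remind_medication"
    return "idle"

assert next_action(due_medication=True, emergency_detected=False) == "remind_medication"
# The defeater fires even when a reminder would otherwise be due:
assert next_action(due_medication=True, emergency_detected=True) == "contact_emergency_services"
```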
The pursuit of operationalizing SLEEC norms reveals a familiar pattern: engineers building elaborate systems where a simpler approach would suffice. They called it a framework to hide the panic, naturally. This paper’s structured process for translating abstract ethical considerations into verifiable requirements is a commendable attempt to inject clarity into a field rapidly becoming burdened by complexity. As Claude Shannon observed, “The most important thing is to get the message across.” This article, at its heart, seeks to do just that – to ensure the ‘message’ of ethical behavior is not lost in translation when communicated to an AI agent. The focus on rigorous verification is a vital step, acknowledging that good intentions, without measurable outcomes, are merely wishful thinking.
What’s Next?
The pursuit of embedding societal values into artificial intelligence inevitably reveals the poverty of the values themselves. This work, by attempting to formalize SLEEC norms into verifiable requirements, does not solve ethical AI; it merely exposes the brittleness of the concepts being formalized. The true challenge lies not in translating norms, but in acknowledging their inherent ambiguity and contextual dependence. A requirement, stripped of nuance, is a directive for a machine, not a reflection of a society.
Future efforts should resist the temptation to seek definitive formulations. Instead, research might focus on mechanisms for negotiating ethical trade-offs, building agents capable of articulating the consequences of conflicting norms, rather than blindly enforcing them. The ideal is not an AI that embodies ethics, but one that exposes them, forcing a continual re-evaluation of the principles upon which it operates.
Ultimately, the field must confront the uncomfortable possibility that some values are simply not amenable to algorithmic representation. To insist otherwise is to mistake the map for the territory, and to believe that a sufficiently complex system of rules can capture the messy, irrational heart of being human. The disappearance of the author, in this case, should not be a goal, but a warning.
Original article: https://arxiv.org/pdf/2603.11864.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/