Beyond the Algorithm: Reimagining AI for a World of Cultures

Author: Denis Avetisyan


A new review highlights the critical gap between current artificial intelligence capabilities and the nuanced understanding of human interaction across diverse cultural contexts.

The First African Digital Humanism Summer School 2025 revealed limitations in generative AI’s ability to interpret and reproduce culturally specific behaviors, underscoring the need for more culturally intelligent AI systems and stronger ethical safeguards.

Despite the rapid advancement of artificial intelligence, current systems often struggle with culturally nuanced communication, revealing a critical gap in truly human-centered design. This challenge is explored in research stemming from the First African Digital Humanism Summer School 2025, which investigates AI’s capacity to navigate complex cross-cultural and multilingual contexts. Findings demonstrate that existing generative AI models exhibit limitations in accurately interpreting and reproducing culturally specific greetings and behaviors, underscoring the need for more culturally intelligent systems. How can we build AI that not only processes information but also understands the diverse tapestry of human culture?


The Erosion of Cultural Context in Generative Systems

Contemporary generative AI systems, despite their impressive capabilities, frequently exhibit a pronounced Western-centric bias stemming from the datasets used in their training. These models are overwhelmingly exposed to, and thus prioritize, Western cultural norms, values, and linguistic patterns, resulting in outputs that may misrepresent or fail to adequately address non-Western contexts. This isn’t merely a matter of inaccurate translation; the AI struggles with understanding cultural subtleties like humor, social cues, and appropriate levels of formality, potentially leading to miscommunication or even offense. For instance, an AI trained primarily on Western literature may generate narratives that inadvertently reinforce stereotypes or fail to resonate with audiences from different cultural backgrounds, highlighting a critical limitation in their ability to function effectively in a truly globalized world. The very foundations of these systems, therefore, require careful consideration to mitigate the risk of perpetuating existing inequalities and ensure broader inclusivity.

The inherent limitations of current generative AI models extend beyond simple inaccuracies, frequently manifesting as miscommunication and even offense within cross-cultural contexts. These systems, trained predominantly on Western datasets, often fail to recognize or appropriately interpret cultural nuances, leading to outputs that are insensitive, misrepresentative, or perpetuate harmful stereotypes. For instance, a model might misinterpret non-verbal cues common in one culture as aggressive in another, or generate narratives that reinforce biased portrayals of specific groups. This isn’t merely a matter of politeness; these errors can have real-world consequences, damaging relationships, hindering effective communication in international business, and exacerbating existing social inequalities by amplifying prejudiced perspectives through seemingly neutral technology.

The development of culturally sensitive artificial intelligence represents a pivotal step towards genuinely inclusive technology and seamless global communication. Current AI systems, trained predominantly on Western datasets, often struggle with cultural nuances, potentially leading to misinterpretations or even offense when interacting with individuals from diverse backgrounds. A commitment to building AI that acknowledges and respects cultural differences – encompassing variations in language, customs, and values – isn’t merely about avoiding errors; it’s about ensuring equitable access to technology and fostering meaningful connections across cultures. This necessitates diversifying training data, incorporating cultural expertise into AI development, and prioritizing algorithms that can adapt to different cultural contexts, ultimately creating systems that enhance, rather than hinder, intercultural understanding.

Towards a Pluriversal Intelligence: Recognizing Multiple Ways of Knowing

Pluriversal AI represents a shift in artificial intelligence development away from the historically dominant Western-centric worldview. Current AI systems are often trained on datasets and designed with assumptions reflecting primarily Western values, knowledge systems, and problem-solving approaches. This can lead to biases and inaccuracies when applied to contexts outside this framework. The pluriversal approach advocates for the recognition of multiple, equally valid ways of knowing and being, requiring AI models to accommodate and respect diverse ontological, epistemological, and axiological perspectives. This does not imply a rejection of Western knowledge, but rather a broadening of the scope to include and integrate other knowledge systems – such as those originating from Indigenous, Eastern, and other non-Western traditions – to create more inclusive and globally relevant AI.

Cultural fluency in AI necessitates the development of models capable of accurately identifying and interpreting culturally specific norms, values, and communication styles. This extends beyond simple language translation; it requires understanding contextual cues, non-verbal communication, and the historical and social factors influencing behavior. Achieving this involves training AI on diverse datasets representing a wide range of cultures, incorporating knowledge graphs that explicitly define cultural concepts, and employing techniques like few-shot learning to adapt to new cultural contexts with limited data. Furthermore, evaluation metrics must move beyond universal standards to incorporate culturally-sensitive assessments of appropriateness and relevance, avoiding the imposition of a single cultural framework onto all interactions.
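To make this concrete, the sketch below shows one way few-shot adaptation might work in practice: a handful of culture-specific exemplars are prepended to a chat prompt before a new greeting is requested. The exemplar bank, the message format, and the greetings themselves are illustrative assumptions rather than details taken from the study.

```python
# A minimal sketch of few-shot prompting for culturally specific greetings.
# The exemplars and cultures below are illustrative, not from the study.

GREETING_EXEMPLARS = {
    "Hausa": [
        ("Morning greeting to an elder", "Ina kwana? (How did you sleep?)"),
        ("Reply from the elder", "Lafiya lau. (In good health.)"),
    ],
    "Brazilian Portuguese": [
        ("Informal greeting between friends", "E aí, tudo bem?"),
    ],
}

def build_fewshot_messages(culture: str, request: str) -> list[dict]:
    """Assemble chat messages that ground a model in culture-specific
    exemplars before asking it to produce a new greeting."""
    messages = [{
        "role": "system",
        "content": (
            f"You generate greetings appropriate to {culture} norms. "
            "Respect age, status, and formality cues; say so if unsure."
        ),
    }]
    for context, greeting in GREETING_EXEMPLARS.get(culture, []):
        messages.append({"role": "user", "content": context})
        messages.append({"role": "assistant", "content": greeting})
    messages.append({"role": "user", "content": request})
    return messages

if __name__ == "__main__":
    for m in build_fewshot_messages("Hausa", "Greet a market trader at midday."):
        print(m["role"].upper(), "->", m["content"])
```

The same structure extends naturally to a retrieval step that pulls exemplars from a curated cultural knowledge graph rather than a hard-coded dictionary.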

The development of pluriversal AI necessitates a foundational commitment to inclusivity, moving beyond token representation to actively involve diverse communities throughout the entire AI lifecycle. This includes not only data sourcing and annotation – ensuring datasets reflect a multiplicity of cultural perspectives and avoiding inherent biases – but also algorithmic design, model evaluation, and deployment strategies. Genuine participation requires equitable access to AI development resources, fostering local expertise, and establishing feedback mechanisms that prioritize community needs and values. Successful implementation hinges on collaborative partnerships, where diverse groups have agency in shaping AI systems to align with their specific contexts and prevent the perpetuation of dominant cultural norms.

Greetings as a Cultural Canary: Testing the Limits of AI Sensitivity

Traditional greetings are not merely formulaic exchanges but encapsulate deeply embedded cultural information. The specific phrasing, honorifics used, and even the physical actions accompanying a greeting often reflect a culture’s values regarding respect, age, social status, and interpersonal relationships. For example, greetings may emphasize collective identity over individualism, or demonstrate deference to elders through specific linguistic markers or gestures. Variations in greetings can also signal social hierarchies, indicating appropriate forms of address based on relative power or social standing. Furthermore, the level of formality, the inclusion of inquiries about family or wellbeing, and the expected reciprocity of the greeting all contribute to a complex system of social signaling that is crucial for navigating cultural interactions effectively.

An evaluation was conducted to assess the performance of three large language models – GPT-4o, Gemini, and Grok – in replicating greetings from five distinct cultural contexts: Hausa, Luo, Chinese, Baganda, and Brazilian. The investigation focused on the models’ capacity to generate greetings that align with established cultural protocols and linguistic conventions within each of these groups. Data collection involved prompting each model with requests for typical greetings in each culture, followed by expert analysis of the generated responses to determine accuracy and cultural appropriateness. The study aimed to establish a baseline understanding of the current capabilities – and limitations – of generative AI in handling culturally-sensitive communication.
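The article does not publish the study’s harness, but the prompting loop it describes can be sketched in a few lines. The uniform `query(model, prompt)` callable, the prompt wording, and the CSV output below are assumptions made for illustration.

```python
# A sketch of the data-collection loop described above. Each model is
# assumed reachable through a uniform query(model, prompt) callable; the
# study's actual prompts and model interfaces are not specified here.

import csv
from typing import Callable

CULTURES = ["Hausa", "Luo", "Chinese", "Baganda", "Brazilian"]
MODELS = ["GPT-4o", "Gemini", "Grok"]

PROMPT_TEMPLATE = (
    "Provide a typical greeting used in {culture} culture, including any "
    "expected response and notes on who may use it (age, status, gender)."
)

def collect_responses(query: Callable[[str, str], str],
                      out_path: str = "greetings_raw.csv") -> None:
    """Query every model for every culture and dump the raw outputs to
    a CSV file for subsequent expert annotation."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["model", "culture", "prompt", "response"])
        for model in MODELS:
            for culture in CULTURES:
                prompt = PROMPT_TEMPLATE.format(culture=culture)
                writer.writerow([model, culture, prompt, query(model, prompt)])

# Example with a stub backend:
# collect_responses(lambda model, prompt: "stub response")
```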

Evaluation of Generative AI models – GPT-4o, Gemini, and Grok – on the reproduction of greetings from Hausa, Luo, Chinese, Baganda, and Brazilian cultures revealed consistent deficiencies in accurately representing cultural nuances. Quantitative assessment using a ‘Cultural Fidelity’ metric yielded consistently low scores across all models and cultures tested. Furthermore, the ability of these models to detect potential norm violations within greetings (‘Norm Violation Detection’) proved inconsistent, demonstrating an inability to reliably identify inappropriate or contextually incorrect greetings. Finally, ‘Demographic Accuracy’ – the correct application of greetings based on factors like age, gender, or social status – was also found to be low, indicating a broader failure to understand the social context governing these communicative acts.
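The article names these metrics without defining them, so the following is one plausible operationalization, assuming each expert annotation records three boolean judgments per generated greeting; the study’s actual scoring rubric may differ.

```python
# One plausible scoring scheme, assumed for illustration: each annotation
# marks whether a generated greeting was culturally faithful, whether a
# seeded norm violation was flagged, and whether demographic usage rules
# (age, gender, status) were applied correctly.

from dataclasses import dataclass

@dataclass
class Annotation:
    model: str
    culture: str
    faithful: bool              # greeting matches cultural protocol
    violation_flagged: bool     # model caught a deliberately wrong greeting
    demographics_correct: bool  # right form for age/gender/status

def score(annotations: list[Annotation], model: str) -> dict[str, float]:
    """Aggregate per-greeting judgments into the three headline metrics."""
    rows = [a for a in annotations if a.model == model]
    n = len(rows) or 1  # avoid division by zero on empty input
    return {
        "cultural_fidelity": sum(a.faithful for a in rows) / n,
        "norm_violation_detection": sum(a.violation_flagged for a in rows) / n,
        "demographic_accuracy": sum(a.demographics_correct for a in rows) / n,
    }
```

Under a scheme like this, the reported results would correspond to low `cultural_fidelity` and `demographic_accuracy` and inconsistent `norm_violation_detection` across all three models.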

The Fragility of Cultural Understanding: Beyond Dimensions and Towards Nuance

Geert Hofstede’s foundational work on cultural dimensions offers a compelling lens through which to examine variations in communication styles globally. His research highlights that cultures differ significantly in how they address issues of power – as demonstrated by the ‘Power Distance’ dimension, which assesses the degree to which a society accepts unequal distribution of power. Similarly, ‘Uncertainty Avoidance’ reveals how cultures cope with ambiguity and risk, influencing the directness and formality of communication. Societies scoring high on this dimension often favor structured situations and detailed instructions, while those with low scores tend to be more comfortable with improvisation and open-endedness. These dimensions, while not definitive, provide a valuable starting point for understanding potential miscommunications and fostering more effective intercultural interactions by illuminating the underlying values that shape how individuals perceive and respond to different communication approaches.
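Because Hofstede’s dimensions are published as 0-100 country scores, they are easy to encode, and just as easy to over-trust. The toy lookup below uses approximate published figures purely for illustration; the caveats in the next paragraph apply to any such use of national averages.

```python
# Hofstede's dimensions are published as 0-100 country scores. The values
# below are approximate illustrations (consult the official Hofstede
# figures for real work); the point is how coarse such averages are.

HOFSTEDE = {
    "Brazil": {"power_distance": 69, "uncertainty_avoidance": 76},
    "China": {"power_distance": 80, "uncertainty_avoidance": 30},
}

def contrast(a: str, b: str, dimension: str) -> str:
    """Compare two countries on one dimension, with a reminder that the
    numbers are national averages, not descriptions of individuals."""
    da, db = HOFSTEDE[a][dimension], HOFSTEDE[b][dimension]
    return f"{a}={da} vs {b}={db} on {dimension} (national averages only)"

print(contrast("Brazil", "China", "uncertainty_avoidance"))
```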

While frameworks like Hofstede’s Cultural Dimensions offer a foundational understanding of broad cultural tendencies, a truly nuanced comprehension necessitates moving beyond generalized models. These dimensions, though insightful, represent averages and can obscure significant variation within cultures, failing to account for regional differences, socioeconomic factors, or individual perspectives. Effective cross-cultural interaction, particularly in the context of artificial intelligence, demands direct engagement with specific cultural contexts, leveraging local expertise and firsthand observation. Relying solely on dimensional scores risks perpetuating stereotypes and overlooking the complex interplay of values, beliefs, and behaviors that define a culture’s unique character, ultimately hindering meaningful communication and fostering misunderstandings.

Recent research highlights a critical need for culturally intelligent artificial intelligence. The study demonstrates that AI models, when trained on homogenous datasets or relying on broad cultural generalizations, often perpetuate biases and misinterpret communication cues. Consequently, these models can produce inaccurate or even offensive outputs when interacting with individuals from diverse backgrounds. The findings advocate for the development of AI systems capable of recognizing and adapting to the subtle nuances of different cultures, moving beyond simplistic categorizations to embrace the complexities of human interaction. This requires incorporating diverse datasets, utilizing advanced natural language processing techniques, and prioritizing ongoing evaluation with input from cultural experts to ensure respectful and effective cross-cultural communication.

The exploration of culturally nuanced communication, as detailed in the study of greetings and behaviors, echoes a fundamental truth about all complex systems. Just as infrastructure inevitably accrues technical debt, so too do AI models accumulate ‘cultural debt’ when they lack sufficient cross-cultural data. As John McCarthy observed, “It is better to do the right thing than to do things right.” This sentiment perfectly encapsulates the need to prioritize ethical and culturally sensitive AI development, ensuring that advances in generative AI do not merely optimize for technical proficiency but also demonstrate genuine understanding of, and respect for, diverse cultural expressions. The pursuit of ‘uptime’ in AI systems, their consistent and reliable performance, counts for little if that performance is built upon a foundation of cultural insensitivity.

The Long Calibration

The difficulty generative AI exhibits with even basic cross-cultural communication isn’t a failure of technology, but a symptom of its premature ambition. These systems, built on vast datasets, reveal the inherent limitations of scaling intelligence without first deeply understanding the subtleties of lived experience. A greeting, a gesture, even a pause: these aren’t simply data points to be predicted, but signals embedded in complex social histories. The current focus on multimodal AI, while valuable, feels like adding layers to a foundation that remains unevenly settled.

The field now faces a choice: relentlessly pursue ever-larger models, hoping that nuance will emerge from sheer volume, or embrace a slower calibration. The latter acknowledges that genuine cultural intelligence isn’t about replicating behavior, but about understanding its origins and implications. It’s a process of patient observation, of actively seeking out and incorporating perspectives beyond the dominant datasets. Systems learn to age gracefully, not by becoming infinitely complex, but by recognizing the boundaries of their knowledge.

Perhaps the most fruitful path lies not in correcting biases, but in explicitly acknowledging them. An AI that understands its own cultural limitations, and can articulate those limitations to a user, may prove more valuable, and more ethical, than one that strives for an illusory neutrality. Sometimes observing the process is better than trying to speed it up.


Original article: https://arxiv.org/pdf/2601.08870.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
