The Expertise Edge: How Knowledge Shapes Decisions in the Age of AI Content

Author: Denis Avetisyan


New research reveals that pre-existing domain knowledge remains a powerful influence on decision-making, even when information is sourced from AI-generated content.

A study examining online opinion dynamics demonstrates that AI-generated content is evaluated similarly to human-written content when individuals possess relevant expertise.

While online information increasingly shapes individual opinions, the role of domain-specific knowledge in evaluating this content remains unclear. This research, ‘The Impact of AI Generated Content on Decision Making for Topics Requiring Expertise’, investigates how generative AI-driven content influences decision-making compared to human-authored sources, particularly when expertise is required. The findings show that individuals are less likely to alter their opinions on complex topics, that those who lack specialized knowledge tend to rely on the conclusions presented to them, and that AI-generated information is perceived as just as helpful as human-written content. As AI increasingly mediates access to information, what implications does this have for expertise, critical thinking, and informed decision-making in specialized fields?


The Shifting Foundations of Online Truth

The digital realm is experiencing a fundamental shift in information access, as conventional methods of retrieval struggle to keep pace with the exponential growth of AI-generated content. Historically, search engines and databases relied on indexing human-authored material, assuming a baseline of originality and intent. Now, sophisticated algorithms can produce text, images, and even video with remarkable speed and increasing realism, flooding online spaces with synthetic data. This proliferation doesn’t simply increase the volume of information; it fundamentally alters the landscape, making it increasingly difficult to discern authentic sources from automated fabrication. Consequently, established techniques for evaluating credibility, such as source verification and citation analysis, are becoming less reliable, demanding innovative approaches to content authentication and a re-evaluation of how information is discovered and consumed.

The rapid advancement of automated agents, commonly known as social bots, presents a growing challenge to the integrity of online information ecosystems. These bots, capable of generating and disseminating content at scale, can mimic human users with increasing sophistication, blurring the lines between authentic engagement and artificial influence. Studies reveal that bots are frequently deployed to amplify specific narratives, manipulate public opinion, and even interfere in political discourse. This ease of dissemination raises critical concerns about the authenticity of online information, as individuals struggle to discern between genuine expressions and computer-generated content designed to sway their perceptions. The potential for widespread manipulation necessitates a deeper understanding of bot detection techniques and the psychological mechanisms that make individuals susceptible to artificially inflated trends and narratives.

The modern digital environment, characterized by a relentless influx of information through computer-mediated channels, significantly complicates how individuals arrive at decisions online. This isn’t simply a matter of information overload; rather, the very process of persuasion has been subtly altered. Traditional models of influence, relying on cues like source credibility and logical argumentation, are now operating alongside algorithmic amplification, emotionally-charged content designed for virality, and the obfuscation inherent in automated accounts. Consequently, a nuanced understanding of these evolving persuasion dynamics is crucial; discerning genuine influence from manipulative tactics requires examining not just what information is presented, but how it’s disseminated and by whom – or, increasingly, by what – within these complex digital networks.

The Architecture of Persuasion: Cognitive Models at Play

The Elaboration Likelihood Model (ELM) posits two primary routes to persuasion: central and peripheral. Central processing involves careful consideration of message content, requiring cognitive effort and typically resulting in attitude changes that are more enduring. This route is activated when an individual possesses the motivation and ability to critically evaluate the information presented. Conversely, peripheral processing relies on superficial cues such as source credibility, attractiveness, or the number of arguments, rather than the arguments themselves. This route requires less cognitive effort and often leads to temporary attitude shifts. The ELM suggests that the route taken depends on the receiver’s motivation and ability to process the message; high motivation and ability favor central processing, while low motivation or ability lead to reliance on peripheral cues.
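
As a purely illustrative sketch (not drawn from the ELM literature or from this study), the model’s core contingency, that route selection depends jointly on motivation and ability, can be written as a toy decision rule; the threshold, field names, and 0–1 scales below are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Receiver:
    motivation: float  # willingness to scrutinize the message (0..1, invented scale)
    ability: float     # relevant domain knowledge (0..1, invented scale)


def processing_route(receiver: Receiver, threshold: float = 0.5) -> str:
    """Return the ELM route a receiver is likely to take.

    Central processing requires both motivation and ability; if either is
    low, the receiver falls back on peripheral cues such as source
    credibility or how polished the text looks.
    """
    if receiver.motivation >= threshold and receiver.ability >= threshold:
        return "central"     # effortful evaluation of argument quality
    return "peripheral"      # reliance on surface cues, weaker attitude change


# A motivated reader with little domain knowledge still lands on the
# peripheral route, the pattern the study emphasizes for complex topics.
print(processing_route(Receiver(motivation=0.9, ability=0.2)))  # -> peripheral
```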

Research indicates a strong correlation between domain-specific knowledge and resistance to misinformation. Consistent with the Dunning-Kruger effect, the study’s quantitative component demonstrated that participants with limited expertise in a subject area were less likely to revise their pre-existing beliefs when presented with evidence-based counterarguments. Specifically, individuals lacking specialized knowledge on complex topics exhibited a significantly lower rate of opinion change than participants with relevant expertise. This suggests that central processing, which requires critical evaluation of information, is hindered by a lack of foundational knowledge, increasing reliance on peripheral cues and potentially fostering the acceptance of inaccurate information.
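
As a rough illustration of how such a between-group difference in opinion-change rates could be tested, the sketch below runs a chi-square test on a fabricated 2x2 table; the counts are placeholders chosen only to mirror the direction of the reported effect, not the study’s data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table (NOT the study's data): rows are expertise groups,
# columns are whether a participant changed their opinion after exposure
# to counterarguments.
table = [[9, 31],    # low expertise:  9 changed opinion, 31 did not
         [21, 19]]   # high expertise: 21 changed opinion, 19 did not

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```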

Assessing the persuasive power of AI-generated content necessitates an understanding of its interaction with established cognitive processes like those described by the Elaboration Likelihood Model. Specifically, the capacity for AI to generate seemingly authoritative content, regardless of factual basis, presents a challenge to critical evaluation. If individuals lack domain-specific knowledge – a factor exacerbating susceptibility to misinformation as demonstrated by the Dunning-Kruger Effect – they are more likely to rely on peripheral cues within AI-generated text, such as stylistic consistency or source attribution, rather than engaging in rigorous central processing. Consequently, AI-generated content may disproportionately influence opinions on complex subjects where specialized expertise is lacking, highlighting the need for research into how these cognitive biases are exploited and mitigated.

Mapping User Perceptions: A Qualitative Investigation

Semi-structured interviews were conducted to explore user perceptions of AI-generated content and its effects on opinion formation. This methodology allowed for focused questioning regarding specific attributes of AI-generated text, images, and video, while also enabling participants to freely express their broader attitudes and concerns. Interviews lasted approximately 60-90 minutes and included probes to clarify responses and encourage detailed explanations of reasoning. The sample included 32 participants, recruited to represent a diverse range of demographics and prior exposure to generative AI technologies, ensuring a breadth of perspectives on the topic. Data gathered from these interviews formed the foundation for identifying patterns in user perceptions and understanding the nuances of their responses to AI-generated content.

The recruitment of interview participants and subsequent data collection were managed through Qualtrics, a survey and research platform utilized for participant screening, scheduling, and distribution of consent forms. Following completion of interviews, transcripts were imported into NVivo, a qualitative data analysis software package. NVivo facilitated a systematic coding process, enabling the identification of recurring patterns, themes, and relationships within the interview data. This involved iterative coding of segments of text, development of a codebook to ensure inter-coder reliability, and subsequent analysis to establish the prevalence and interconnectedness of identified themes related to user perceptions of AI-generated content.
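
Inter-coder reliability for a codebook of this kind is commonly summarized with a chance-corrected agreement statistic such as Cohen’s kappa. The snippet below shows one conventional way to compute it; the coder labels are fabricated for illustration and are not taken from this study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary codes ("trust-related" vs. not) assigned by two
# coders to the same ten interview segments; illustrative values only.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```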

Qualitative analysis of semi-structured interview data revealed prominent themes concerning user trust in automated systems and their evaluation of information authenticity when exposed to generative AI content. Specifically, the research identified a correlation between perceived source credibility and trust levels; users demonstrated heightened skepticism toward content they identified as AI-generated compared to content attributed to human sources. Thematic analysis further indicated that users assess authenticity based on perceived writing style, factual consistency, and the presence of emotional nuance, often applying stricter criteria to AI-generated text. These findings suggest that establishing transparency regarding AI involvement in content creation is critical for maintaining user trust and mitigating potential misinformation.

Thematic analysis of interview transcripts indicated users employ several strategies to differentiate between user-generated and AI-generated content. These included assessments of writing style – noting perceived lack of personality or overly polished prose in AI-generated text – and evaluations of factual accuracy, where users cross-referenced information and identified inconsistencies more frequently in AI outputs. Consequently, users generally assigned lower credibility scores to content identified as, or suspected of being, AI-generated, particularly regarding subjective topics or opinions. This decreased trust impacted their evaluation of information, leading to increased skepticism and a preference for content explicitly attributed to human authorship.

The Systemic Implications: Responsibility and Digital Resilience

Research indicates a significant correlation between susceptibility to manipulation via AI-generated content and deficiencies in critical thinking abilities alongside limited domain-specific knowledge. Individuals lacking these skills demonstrate a heightened vulnerability to accepting AI-produced narratives as factual, even when presented with demonstrably false or biased information. This isn’t simply about believing everything one reads; it’s a cognitive shortfall where the ability to analyze sources, identify logical fallacies, and cross-reference information is compromised, allowing AI to subtly shape perceptions and opinions. Consequently, the proliferation of increasingly sophisticated AI content necessitates a renewed focus on bolstering these essential cognitive defenses, as the potential for widespread misinformation and undue influence escalates alongside technological advancements.

The pervasive influence of artificially generated content necessitates a significant investment in bolstering digital literacy. Recent research demonstrates a concerning susceptibility to manipulation, with over 87% of study participants exhibiting some degree of opinion shift following exposure to AI-generated material. This highlights a critical gap in the public’s ability to discern credible information from fabricated narratives, or to recognize inherent biases within seemingly objective content. Effective digital literacy programs must therefore move beyond basic technological skills, focusing instead on cultivating critical thinking, source evaluation, and an understanding of the persuasive techniques employed in online communication. Equipping individuals with these tools is paramount to fostering a more resilient and informed populace capable of navigating the increasingly complex digital landscape.

Generative AI developers bear a significant ethical obligation to build systems prioritizing transparency and accountability, as demonstrated by recent research quantifying the persuasive power of these technologies. Statistical modeling reveals that variations in participant opinion can be explained by the AI’s output in 47% of cases, indicating a substantial influence on belief formation. Critically, changes in an individual’s confidence in their original opinion were explained by the AI’s influence in 44% of instances, suggesting these models aren’t simply changing what people think, but also how firmly they hold those beliefs. This highlights the need for design choices that allow users to understand the basis of generated content – including provenance, potential biases, and the limitations of the model – ultimately fostering a more informed and resilient interaction with AI systems.
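
If the reported percentages are explained-variance figures (an assumption; the summary does not specify the statistical model), they resemble R² values from a regression of opinion on features of the AI output. The sketch below computes such an R² on simulated stand-in data, not the study’s measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)

# Simulated stand-ins (not real data): how strongly the AI output endorsed
# a position, and each participant's opinion score after reading it.
ai_stance = rng.normal(size=(200, 1))
opinion = 0.7 * ai_stance.ravel() + rng.normal(scale=0.75, size=200)

model = LinearRegression().fit(ai_stance, opinion)
print(f"R^2 = {model.score(ai_stance, opinion):.2f}")  # share of variance explained
```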

The escalating prevalence of AI-generated content necessitates a proactive approach to cultivate a digitally resilient society. Without widespread adaptation of critical evaluation skills and a heightened awareness of potential algorithmic biases, individuals risk becoming increasingly susceptible to manipulation and misinformation. Fostering this resilience isn’t merely about technical solutions; it demands comprehensive digital literacy programs integrated into education and lifelong learning initiatives. These programs must equip citizens with the ability to dissect information, identify fabricated narratives, and understand the limitations of AI systems. Ultimately, a more informed populace is better positioned to navigate the complexities of the digital landscape, promoting responsible innovation and safeguarding against the erosion of trust in information sources.

The study reveals a compelling parallel to systemic design principles. It demonstrates that the source of information – whether human or AI-generated – is secondary to the underlying structure of knowledge itself when influencing decision-making within specialized domains. This aligns with the notion that infrastructure should evolve without rebuilding the entire block; the system’s inherent organization dictates its function. As G.H. Hardy observed, “The essence of mathematics is its economy.” Similarly, this research suggests that effective information retrieval and decision-making benefit from concise, well-structured knowledge, irrespective of its origin. The focus, therefore, must remain on the clarity and integrity of the underlying information architecture.

What’s Next?

The persistence of domain knowledge as a decisive factor in evaluating information, even when that information originates from generative AI, suggests a certain stubbornness in the human cognitive architecture. It is tempting to frame this as a victory for critical thinking, but a more pragmatic reading indicates a simple truth: systems built on opaque foundations – whether human or algorithmic – require trusted intermediaries. If the system looks clever, it’s probably fragile. The question, then, isn’t whether AI-generated content (AIGC) can be plausible text, but under what conditions humans will accept it as a substitute for expertise.

Future work should move beyond simply assessing acceptance rates. A more revealing approach would be to map the cognitive load associated with verifying AIGC versus human-authored content across varying levels of domain expertise. One suspects the energetic cost of skepticism scales exponentially with the complexity of the subject matter. This research also implicitly highlights the limitations of treating information retrieval as a purely syntactic problem. Meaning, ultimately, resides not in the arrangement of words, but in the network of assumptions and prior knowledge that allows a reader to interpret them.

Architecture, after all, is the art of choosing what to sacrifice. This field will inevitably confront the trade-offs between accessibility, accuracy, and the increasingly blurred line between human and machine authority. The path forward isn’t about building ‘smarter’ AI, but about designing systems that acknowledge – and even leverage – the inherent limitations of both the algorithm and the end user.


Original article: https://arxiv.org/pdf/2601.08178.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
