Can AI Make Us Think Worse?

Author: Denis Avetisyan


New research explores how reliance on artificial intelligence tools impacts our critical thinking abilities and reasoning skills.

An exploratory model reveals that perceived efficiency and confidence following AI use are the strongest predictors of critical thinking scores, though diminished patience and growing reliance on AI also exert a meaningful influence on cognitive assessment. This suggests that the experience of interacting with AI, rather than the technology itself, shapes an individual’s capacity for reasoned judgment.

A survey-based study with machine learning analysis reveals that the effect of AI on cognitive performance is linked to habits of continued cognitive effort.

While artificial intelligence promises enhanced efficiency and learning support, its potential impact on fundamental human reasoning skills remains a complex question. This is explored in ‘Critical Thinking in the Age of Artificial Intelligence: A Survey-Based Study with Machine Learning Insights’, which investigates the relationship between AI usage and critical thinking performance through a mixed-methods approach. Findings reveal that the effect of AI is not uniformly positive or negative, but rather contingent on how it is used, with tendencies toward cognitive offloading correlating with reduced reasoning ability. Ultimately, this raises the question of how to foster effective human-AI collaboration that prioritizes sustained cognitive effort and verification, rather than simply automating thought.


The Erosion of Autonomy: Navigating an Abundant Information Age

Human cognition has long relied on critical thinking – the ability to analyze information objectively and form reasoned judgments – as a fundamental skill. However, the current information landscape, characterized by unprecedented access and volume, presents a unique challenge to this cornerstone of cognitive ability. The sheer abundance of readily available information, often unverified or biased, can overwhelm individuals and impede their capacity for careful evaluation. Rather than actively engaging in analysis, people may increasingly rely on easily digestible summaries or accept information at face value, potentially eroding the skills necessary for independent thought and sound decision-making. This shift necessitates a renewed focus on cultivating critical thinking skills to navigate the complexities of the modern world and maintain intellectual autonomy.

The proliferation of generative AI tools, such as ChatGPT, introduces a complex duality to the process of independent thought. While offering unprecedented access to information and potential for creative collaboration, these technologies also pose risks to the development and exercise of critical thinking skills. The ease with which AI can generate text, solve problems, and even formulate arguments may inadvertently discourage individuals from engaging in the rigorous mental effort required for original thought and analysis. This isn’t simply about accessing answers, but about how those answers are reached – the internal process of questioning, evaluating evidence, and forming reasoned judgments. Consequently, a reliance on AI-generated content could lead to a diminished capacity for independent reasoning, potentially fostering a passive acceptance of information rather than active intellectual engagement, and reshaping cognitive habits in ways that are only beginning to be understood.

Recent investigations into the intersection of artificial intelligence and human reasoning reveal a complex relationship, particularly concerning critical thinking abilities. Studies demonstrate that, on average, participants exhibit a Critical Thinking Score (CTS) of 68.25% – a figure that, while indicating a moderate baseline, necessitates careful consideration in light of increasing AI integration. This suggests a substantial portion of the population may benefit from strategies to bolster analytical skills as they increasingly encounter and utilize AI-generated content. Understanding how AI use influences this score – whether it augments or erodes independent thought – is therefore paramount. Navigating this new cognitive landscape demands a proactive approach to education and skill development, equipping individuals to effectively evaluate information and maintain robust critical thinking capabilities in an age defined by readily available, AI-driven content.

The distribution of critical thinking scores reveals substantial variation in unaided reasoning performance among participants, as indicated by the spread around the sample mean.

Mapping the Cognitive Footprints of AI Interaction

Data regarding individual AI-Use Behavior was collected through a custom-designed survey instrument. This survey incorporated questions assessing the frequency, duration, and specific applications of AI tools utilized by participants across diverse task categories, including writing, data analysis, and problem-solving. The survey design prioritized capturing granular details on how AI was integrated into users’ workflows, rather than simply whether it was used. Collected data points included the types of AI tools employed – large language models, image generators, coding assistants – and the extent to which users edited or verified AI-generated outputs. This detailed approach allowed for the creation of a comprehensive dataset characterizing AI utilization patterns for subsequent statistical analysis.

K-Means Clustering and Principal Component Analysis were utilized to categorize participants based on their reported AI usage patterns. Specifically, data from the survey regarding frequency and types of AI tools used for various tasks were subjected to Principal Component Analysis to reduce dimensionality and identify key behavioral components. These components then served as input for K-Means Clustering, an unsupervised machine learning algorithm, to group individuals with similar AI-use behaviors into distinct profiles. The optimal number of clusters was determined through evaluation of within-cluster variance metrics, resulting in the identification of several statistically significant user profiles characterized by differing levels and approaches to AI integration.
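The pipeline described above, dimensionality reduction followed by unsupervised clustering, can be sketched with scikit-learn. The survey matrix below is a synthetic stand-in (the real feature set and sample size are not published in this summary), and k=3 is chosen to match the three profiles reported later in the article.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey data: rows are participants,
# columns are usage features (frequency, task types, verification habits).
X = rng.normal(size=(150, 8))

# Standardize, then project onto the first two principal components.
X_2d = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Cluster in the reduced space; k=3 mirrors the three reported profiles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)
print(np.bincount(labels))  # participants assigned to each behavioral profile
```

In practice the optimal k would be chosen by inspecting within-cluster variance (the elbow method) rather than fixed in advance.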

Analysis of user profiles derived from AI-use data indicates a quantifiable relationship between the degree of reliance on AI tools and performance on standardized reasoning tasks. Specifically, individuals exhibiting higher levels of AI dependence demonstrated a statistically significant negative correlation with scores on these tasks, suggesting that increased reliance may be associated with diminished independent reasoning ability. Conversely, users who employed AI as a supplemental tool, rather than a primary solution, tended to achieve higher reasoning scores. This correlation was evaluated using Pearson’s r correlation coefficient, yielding values ranging from -0.3 to -0.6, depending on the specific reasoning task and the AI tools utilized. Further analysis is underway to determine the causal mechanisms underlying this observed relationship.
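The reported negative correlation can be reproduced in miniature with SciPy's `pearsonr`. The two score vectors here are simulated with a built-in negative relationship purely for illustration; the actual survey variables and effect sizes are those cited in the text.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Hypothetical scores: AI-reliance index vs. reasoning-task performance,
# simulated with a negative underlying relationship.
reliance = rng.normal(size=200)
reasoning = -0.5 * reliance + rng.normal(scale=0.9, size=200)

# Pearson's r and its two-sided p-value.
r, p = pearsonr(reliance, reasoning)
print(round(r, 2), p < 0.05)
```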

Principal Component Analysis of K-Means behavioral profiles reveals three distinct user groups (Over-Reliant, Mixed-Strategy, and Balanced Support-Seekers), characterized by differing patterns of AI dependency, patience, and independent reasoning, as indicated by cluster centroids (marked X) in the two-dimensional PCA space.

The Delicate Balance: Cognitive Offloading and the Erosion of Reflection

Cognitive offloading, the practice of utilizing external tools like AI to reduce mental effort, presents a complex relationship with the development of critical thinking skills. While AI can demonstrably improve efficiency in task completion, research suggests this convenience may concurrently impede the refinement of cognitive reflection – the capacity for deliberate, analytical thought. This is not a simple trade-off; the benefit of reduced cognitive load must be weighed against the potential for diminished practice in problem-solving and independent analysis, skills crucial for robust critical thinking ability. The effect is not simply about whether AI is used, but how it is integrated into cognitive processes; over-reliance on AI for tasks that previously required active mental engagement may result in a measurable decrease in an individual’s capacity for reflective thought.

Random Forest modeling identified statistically significant predictors of Critical Thinking Score (CTS) related to AI usage patterns. Analysis revealed a moderate negative correlation (r = -0.36) between reported reductions in patience attributable to AI use and an individual’s CTS. This suggests that reliance on AI tools, particularly when fostering decreased patience with complex problem-solving, may be associated with lower scores on assessments of critical thinking ability. These findings emphasize the need for mindful engagement with AI, advocating for strategies that maintain cognitive effort and patience during information processing and decision-making.
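A minimal sketch of this kind of analysis: fit a Random Forest regressor on behavioral predictors and read off feature importances. The data below are simulated, with the "reduced patience" variable deliberately given the only real effect on CTS, so the model should rank it first; the study's actual predictors and importances are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 300
# Hypothetical predictors: patience reduction drives CTS down; the
# remaining three columns are pure noise.
patience_drop = rng.normal(size=n)
other = rng.normal(size=(n, 3))
cts = 68 - 4 * patience_drop + rng.normal(scale=2, size=n)

X = np.column_stack([patience_drop, other])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, cts)
# Feature importances should rank the patience variable (column 0) first.
print(model.feature_importances_.round(2))
```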

Analysis indicates that an individual’s capacity for Independent Reasoning functions as a moderating variable in the relationship between AI assistance and critical thinking ability. This suggests that the degree to which AI use impacts critical thinking skills is contingent upon the user’s pre-existing ability to formulate and evaluate arguments without external prompting. Individuals with strong Independent Reasoning skills appear to be better equipped to leverage AI tools without experiencing a corresponding decline in their critical thinking capacity, while those with weaker skills may be more susceptible to the potentially negative effects of relying on AI-generated solutions.
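Moderation of this kind is conventionally tested with an interaction term in a regression. The sketch below uses simulated data in which AI use lowers CTS mainly when reasoning skill is low, so the interaction coefficient comes out positive; the variable names and effect sizes are illustrative assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
ai_use = rng.normal(size=n)
reasoning_skill = rng.normal(size=n)
# Hypothetical moderation: the harm from AI use shrinks as independent
# reasoning skill grows, encoded as a positive interaction effect.
cts = (68 - 3 * ai_use + 2 * reasoning_skill
       + 2.5 * ai_use * reasoning_skill + rng.normal(scale=2, size=n))

# Ordinary least squares with an explicit interaction column.
X = np.column_stack([np.ones(n), ai_use, reasoning_skill,
                     ai_use * reasoning_skill])
coef, *_ = np.linalg.lstsq(X, cts, rcond=None)
print(coef.round(1))  # intercept, AI use, skill, interaction
```

A significantly non-zero interaction coefficient is what would license the claim that reasoning skill moderates the effect of AI use.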

A correlation heatmap reveals that behavioral features exhibit both positive and negative associations with critical thinking scores, indicating complex relationships between behavior and reasoning ability.

Toward a Symbiotic Future: Human-Centered AI Integration

Recent investigations highlight the critical role of Human-in-the-Loop strategies in maximizing the benefits of artificial intelligence. This approach doesn’t simply automate tasks, but instead actively involves human cognition throughout the process, fostering deeper understanding and skill development. By designing systems that require ongoing human input, evaluation, and refinement, researchers observe a significant increase in cognitive engagement – essentially, the level of mental effort applied to a task. This active learning not only improves performance on the immediate task but also cultivates critical thinking abilities and enhances the user’s capacity to adapt to new challenges, ensuring that AI serves as a tool for empowerment rather than a replacement for human intellect.

The effective integration of artificial intelligence hinges on cultivating mindful use, a practice that allows individuals to leverage AI’s capabilities while simultaneously preserving and strengthening their own cognitive skills. Research indicates that passively accepting AI-generated outputs can erode critical thinking, but actively engaging with the technology – questioning its suggestions, verifying its data, and applying independent judgment – transforms it into a powerful cognitive amplifier. This approach doesn’t view AI as a replacement for human intellect, but rather as a collaborative partner, enabling more nuanced problem-solving and fostering a deeper understanding of complex issues. Consequently, promoting mindful AI interaction is crucial for ensuring that these advanced tools augment, rather than diminish, essential human cognitive abilities.

The evolving relationship between humans and artificial intelligence necessitates a re-evaluation of established learning and working methodologies. Current research suggests that integrating AI effectively demands a shift in educational practices, moving beyond rote memorization toward cultivating critical thinking skills that complement AI capabilities. Similarly, workplace training programs must prioritize adaptation and collaboration with AI systems, fostering environments where human expertise and artificial intelligence synergize to enhance productivity and innovation. This understanding extends to the design of future AI interfaces, which should prioritize user agency, transparency, and intuitive interactions, ultimately creating tools that augment human abilities rather than replace them – a principle crucial for maximizing the benefits of AI across all sectors.

The study’s findings suggest a curious paradox: reliance on artificial intelligence doesn’t necessarily diminish critical thinking, but rather reshapes it. This echoes a sentiment articulated by Donald Knuth: “Premature optimization is the root of all evil.” The eagerness to offload cognitive effort onto AI, while seemingly efficient, risks atrophying the very faculties it intends to supplement. A system that never breaks is dead; similarly, a mind that never struggles to reason is a mind losing its capacity for true thought. The research highlights that habits fostering continued cognitive effort are linked to better reasoning, implying that intellectual ‘exercise’ – even in the face of readily available AI assistance – is vital for maintaining robust cognitive function.

What’s Next?

The study reveals not a displacement of reason by algorithm, but a subtle shift in its ecology. It isn’t that artificial intelligence diminishes critical thought, but that it alters the conditions in which it flourishes – or withers. The observed association between sustained cognitive effort and reasoning performance suggests a principle of ‘cognitive gardening’: a mind, like a field, yields only to continued cultivation. To believe one can simply install critical thinking is to mistake a process for a product.

Future work must move beyond measuring outputs and examine the very texture of this human-AI symbiosis. What are the qualitative differences between a solution reached with algorithmic assistance, and one born of solitary struggle? Resilience lies not in isolating human reason from external influence, but in forgiveness between components – the capacity to absorb error and adapt. The study implies that the most fruitful lines of inquiry will not be about preventing cognitive offloading, but about understanding which forms of offloading are generative, and which are merely parasitic.

Perhaps the core limitation of this work, and the field at large, is the persistent assumption that ‘critical thinking’ is a fixed entity, amenable to quantification. A system isn’t a machine, it’s a garden – neglect it, and you’ll grow technical debt, but attempt to rigidly define its boundaries, and it will inevitably surprise you. The true challenge lies not in optimizing for a specific outcome, but in fostering an ecosystem capable of continuous learning and adaptation.


Original article: https://arxiv.org/pdf/2604.18590.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/

2026-04-22 21:17