Author: Denis Avetisyan
New research reveals how equipping high school computer science educators with practical AI auditing tools fosters critical thinking and a sense of agency in addressing algorithmic harms.
Participatory design and implementation of AI auditing lessons shift teachers’ conceptions of AI/ML systems and cultivate computational empowerment around algorithmic justice.
Despite growing concerns about the societal impacts of artificial intelligence, translating awareness into actionable understanding remains a significant challenge for educators. This paper, “You Can Actually Do Something”: Shifts in High School Computer Science Teachers’ Conceptions of AI/ML Systems and Algorithmic Justice, investigates how engaging experienced high school computer science teachers in the co-design and implementation of AI auditing lessons reshaped their understandings of algorithmic justice. Findings reveal a shift toward more situated, critical, and agentic perspectives, empowering teachers to address potential harms within their classrooms and communities. How might participatory design approaches like these cultivate broader critical AI literacy and foster computational empowerment among both educators and students?
The Evolving Landscape of Computational Education
Historically, computer science education has prioritized technical skills – coding, algorithms, and data structures – often to the exclusion of broader societal considerations. This emphasis has left many students unprepared to critically analyze the ethical, social, and political implications of the technologies they create and use. Consequently, curricula frequently fail to address issues such as algorithmic bias, data privacy, digital equity, and the potential for automation to exacerbate existing inequalities. The result is a generation of technologists proficient at building systems, but not necessarily equipped to consider whom those systems serve, how they impact different communities, or the long-term consequences of their deployment. A growing movement advocates for a more holistic approach, integrating these critical perspectives directly into the core of computer science learning.
Contemporary educators recognize a growing imperative to move beyond simply teaching computational skills and instead cultivate students’ understanding of the ethical dimensions of technology, particularly concerning algorithmic bias and fairness. This awareness stems from increasing visibility of biased outcomes in areas like facial recognition, loan applications, and even criminal justice, revealing that algorithms are not neutral arbiters but reflect the values and limitations of their creators and the data they are trained on. Consequently, teachers are actively seeking resources and pedagogical strategies to equip students with the tools to critically evaluate algorithms, identify potential biases, and consider the societal consequences of automated decision-making, fostering a generation capable of building more equitable and responsible technological systems.
The increasing pervasiveness of artificial intelligence in daily life is driving a fundamental shift in educational priorities, creating a strong demand for pedagogical approaches that move beyond technical proficiency to cultivate critical AI literacy. These new methods aim to equip students with the ability to not only use AI tools, but to deeply understand their underlying mechanisms, potential biases, and broader societal implications. Rather than accepting technology as a neutral force, students are encouraged to question its design, evaluate its impact on different communities, and consider alternative approaches that prioritize fairness, accountability, and human values. This focus on critical engagement empowers the next generation to become informed and responsible creators, rather than passive consumers, of artificial intelligence.
Teachers with backgrounds in equity-driven physical computing demonstrate a heightened capacity for navigating the complex ethical and societal challenges presented by modern computer science education. This preparation extends beyond theoretical understanding; hands-on experience designing and building interactive systems with diverse communities cultivates a practical awareness of potential biases embedded within technology. Such educators are not simply teaching code, but actively considering who benefits from technological solutions and how those solutions might inadvertently perpetuate inequalities. This proactive approach, rooted in the tangible realities of building and deploying technology, equips them to foster critical AI literacy in students, empowering the next generation to question, analyze, and ultimately shape technology for a more just and equitable future. The ability to translate abstract concepts of algorithmic fairness into concrete design choices proves invaluable when addressing the increasingly urgent need to prepare students for a world shaped by artificial intelligence.
Deconstructing the Black Box: AI Auditing as Critical Inquiry
AI Auditing establishes a structured process for teachers and students to systematically investigate and assess Artificial Intelligence and Machine Learning (AI/ML) systems. This methodology moves beyond passive consumption of AI tools by providing a framework to formulate specific queries about an AI/ML system’s inputs, processes, and outputs. Evaluation focuses on identifying potential biases, inaccuracies, or limitations in the system’s performance through direct observation and analysis. The process typically involves defining clear evaluation criteria, collecting relevant data, analyzing the results against those criteria, and documenting findings – thereby enabling a verifiable and repeatable assessment of the AI/ML system’s behavior.
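The paper describes this workflow qualitatively rather than as code, but its core loop, formulating queries, evaluating outputs against criteria, and documenting findings, can be sketched in a few lines. The snippet below is a minimal illustration in Python; the `toy_moderator` content filter and the `over_flagging` criterion are invented stand-ins for whatever system and criterion a class might choose, not artifacts from the study.

```python
# Minimal sketch of the audit loop: pose queries to a black-box system,
# evaluate each output against a stated criterion, and document the
# findings. `toy_moderator` is a hypothetical stand-in for the system
# under audit; a real audit would substitute the tool being studied.

def toy_moderator(post):
    """Opaque content filter under audit (illustrative only)."""
    # A deliberately crude rule, so the audit has something to find.
    return "flagged" if "slang" in post else "allowed"

def over_flagging(query, output):
    """Evaluation criterion: was a harmless post flagged?"""
    return output == "flagged" and "harmless" in query

def run_audit(system, queries, criterion):
    """Record one structured finding per query for later write-up."""
    findings = []
    for query in queries:
        output = system(query)
        findings.append({"query": query, "output": output,
                         "concern": criterion(query, output)})
    return findings

queries = ["harmless greeting with slang", "spam offer with slang"]
for finding in run_audit(toy_moderator, queries, over_flagging):
    print(finding)
```

The value of the structure lies in repeatability: the same queries and criterion can be rerun after the system changes, which is what makes the assessment verifiable.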
AI auditing leverages AI/ML systems themselves as the primary subject of investigation, shifting the focus from outputs to internal mechanisms. This hands-on approach involves direct interaction with the system – examining training data, analyzing model parameters where accessible, and testing with varied inputs – to understand how decisions are made. Rather than treating these systems as “black boxes,” auditing encourages users to deconstruct the system’s logic, identify potential biases present in the data or algorithms, and evaluate the robustness of the model under different conditions. This process of internal exploration provides insights into the system’s limitations, assumptions, and potential failure points, fostering a deeper understanding of its functionality beyond superficial observation.
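One concrete tactic for this kind of probing, assumed here as an example rather than prescribed by the paper, is counterfactual testing: hold every field of an input fixed, change a single attribute, and check whether the output moves. The `toy_model` below is a hypothetical, deliberately biased scoring function.

```python
# Counterfactual probing of a black box: copy an input, change exactly
# one attribute, and compare outputs. A score that moves when only a
# proxy attribute changes is evidence worth documenting in an audit.

from copy import deepcopy

def toy_model(applicant):
    """Opaque scoring function (illustrative, deliberately biased)."""
    return 0.8 if applicant["zip_code"].startswith("021") else 0.4

def counterfactual_gap(model, case, field, alternative):
    """Score the original input and a one-field variant of it."""
    altered = deepcopy(case)
    altered[field] = alternative
    return model(case), model(altered)

base = {"income": 52_000, "zip_code": "02139"}
before, after = counterfactual_gap(toy_model, base, "zip_code", "60637")
if before != after:
    print(f"zip_code alone moved the score: {before} -> {after}")
```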
Participatory Design, in the context of AI auditing lessons, involves directly incorporating teachers into the design and development process. This collaborative approach moves beyond simply delivering pre-packaged curricula; teachers actively contribute to lesson planning, activity creation, and assessment strategies. This ensures the resulting lessons align with existing pedagogical practices, address specific classroom needs, and utilize locally relevant examples. By fostering teacher ownership through co-creation, the method increases engagement, promotes sustained implementation, and facilitates a deeper understanding of both AI/ML concepts and critical inquiry techniques.
Traditional pedagogical approaches often position teachers as consumers of educational technology, focusing on implementation and utility within established curricula. AI auditing, however, facilitates a shift towards critical engagement by equipping teachers with the methodologies to deconstruct and evaluate the underlying logic and potential biases of AI/ML systems. This process involves examining data inputs, algorithmic processes, and output interpretations, enabling educators to move beyond assessing what an AI tool does to understanding how and why it functions in a particular manner. Consequently, teachers can assess the pedagogical implications of these systems, identify potential limitations, and inform more responsible and effective integration of AI into learning environments.
Toward Algorithmic Justice: A Relational Approach
Algorithmic justice is a primary consideration in the development and implementation of artificial intelligence systems, requiring rigorous evaluation of fairness, equity, and accountability. This necessitates moving beyond purely technical performance metrics to assess potential harms and biases embedded within algorithms and their resulting outputs. Examination focuses on identifying and mitigating disparate impacts across different demographic groups, ensuring procedural fairness in algorithmic decision-making processes, and establishing clear lines of responsibility when algorithmic systems produce unjust or inequitable outcomes. The pursuit of algorithmic justice demands ongoing scrutiny of data sources, model design, and deployment strategies to proactively address ethical concerns and promote socially responsible AI.
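Disparate impact across groups is often given a first-pass quantification as a ratio of selection rates, with the "four-fifths rule" from U.S. employment guidance serving as a conventional warning threshold. The sketch below computes that ratio; the groups and decision lists are invented for illustration and are not data from the study.

```python
# Selection-rate ratio between two groups: a common first-pass measure
# of disparate impact. A ratio well below 1.0 (conventionally, below
# 0.8) signals that outcomes deserve closer scrutiny, not proof of harm.

def selection_rate(decisions):
    """Fraction of positive decisions in a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Illustrative audit records (True = favorable decision).
group_a = [True, True, True, False]    # selection rate 0.75
group_b = [True, False, False, False]  # selection rate 0.25

print(f"disparate impact ratio: "
      f"{disparate_impact_ratio(group_a, group_b):.2f}")
# -> 0.33, well under the 0.8 rule of thumb
```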
AI auditing, as demonstrated by this study, functions as a key mechanism for identifying and addressing fairness concerns within algorithmic systems. The process reveals potential biases embedded in algorithms and highlights existing power imbalances that may be exacerbated through their deployment. Quantitative and qualitative data collected from high school CS teachers indicate a measurable shift in their comprehension of algorithmic justice principles following participation in AI auditing exercises. Specifically, teachers demonstrated an increased ability to recognize and articulate the ethical implications of algorithmic design choices and to connect these implications to broader societal concerns regarding equity and accountability.
Relational Ethics provides a framework for algorithmic development and implementation that moves beyond purely technical considerations of fairness. This approach prioritizes understanding algorithms within their specific social and political contexts, acknowledging that impacts are mediated through existing relationships and power dynamics. It necessitates consideration of the responsibilities developers and deployers have not only to individual users, but to the broader communities affected by algorithmic systems. This includes proactively identifying potential harms to vulnerable groups and designing for accountability, transparency, and equitable outcomes, recognizing that ethical considerations are intrinsically linked to the social fabric in which algorithms operate.
Analysis of participating teachers’ reflections indicates a growing awareness of how students actively negotiate and challenge algorithmic systems in their daily lives. This resistance isn’t necessarily overt; teachers report observing students strategically utilizing platform loopholes, employing alternative accounts to circumvent restrictions, and critically discussing the perceived fairness of algorithmic recommendations. Furthermore, students demonstrate a nuanced understanding of data privacy and are increasingly protective of their personal information online, often employing techniques to limit data collection or misrepresent data to maintain control over their digital profiles. This everyday algorithmic resistance suggests students are not passive recipients of algorithmic dictates, but engaged actors who actively shape their interactions with these systems.
Empowering Future Citizens: Cultivating Critical AI Literacy
The accelerating integration of artificial intelligence into daily life necessitates a move beyond basic digital literacy to a more nuanced critical AI literacy. This skillset doesn’t simply involve knowing how to use AI tools, but rather understanding their underlying mechanisms, potential biases, and societal impacts. Equipped with this understanding, both teachers and students can move beyond passive consumption of AI-driven content and actively evaluate its validity, fairness, and ethical implications. Developing critical AI literacy is thus paramount for fostering informed citizens capable of navigating an increasingly complex technological landscape and shaping a future where AI serves equitable and just purposes – it’s about empowering individuals to question, analyze, and ultimately, responsibly utilize the power of artificial intelligence.
Research indicates that truly grasping the implications of artificial intelligence requires more than abstract definitions; it demands situated learning – a pedagogical approach where concepts are firmly anchored in authentic, real-world applications. The study highlights how students demonstrate a significantly deeper comprehension of AI principles when actively engaging with practical scenarios, such as auditing algorithms used in local community resources or analyzing the biases present in widely-used datasets. This immersive methodology moves beyond rote memorization, fostering an intuitive understanding of how AI systems function – and, crucially, how they can both reflect and perpetuate existing societal inequalities. By connecting theoretical knowledge to tangible experiences, educators empower students not simply to learn about AI, but to develop a nuanced, critical perspective on its impact and potential.
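As a small, concrete instance of the dataset analysis described above (assumed for illustration; the paper does not specify particular exercises), students might begin by tabulating labeled outcomes by group, since labels inherited from past decisions are one of the most common routes by which bias enters training data.

```python
# Tabulating outcomes by group in a toy dataset: often the first step
# in a classroom bias analysis. All records here are invented.

from collections import Counter

records = [
    {"outcome": "hired", "group": "X"}, {"outcome": "hired", "group": "X"},
    {"outcome": "hired", "group": "X"}, {"outcome": "hired", "group": "Y"},
    {"outcome": "rejected", "group": "Y"}, {"outcome": "rejected", "group": "Y"},
]

totals = Counter(r["group"] for r in records)
hired = Counter(r["group"] for r in records if r["outcome"] == "hired")
for group in sorted(totals):
    print(f"group {group}: {hired[group]}/{totals[group]} hired "
          f"({hired[group] / totals[group]:.0%})")
# A skew like this means a model trained on these labels inherits the
# disparities of whatever past decisions produced them.
```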
Integrating AI auditing into educational curricula represents a significant shift from traditional technology instruction. Rather than solely focusing on what artificial intelligence is or how it functions, this approach empowers students to critically examine the biases and societal impacts embedded within these systems. Through hands-on exercises that dissect algorithms and assess their outputs for fairness and transparency, students develop a reflexive stance, questioning the assumptions and potential harms of AI technologies. This process moves beyond passive consumption of technology, cultivating an active and informed citizenry capable of advocating for responsible AI development and deployment, and ultimately shaping a more equitable technological landscape.
Computational empowerment transcends mere algorithmic literacy, offering students the agency to actively shape technology’s trajectory. This approach moves beyond dissecting existing systems to fostering the capacity to design and build new ones – systems intentionally aligned with values of justice and equity. Through hands-on creation, students don’t simply learn how algorithms function, but grapple with the ethical considerations embedded within them, and explore how computational tools can address societal challenges. This proactive engagement cultivates a mindset where technology isn’t a neutral force, but a malleable medium for enacting positive change, enabling students to envision – and ultimately construct – technological futures that prioritize fairness and inclusivity.
The study reveals a crucial insight: true computational empowerment stems not merely from technical skill, but from a deeply contextualized understanding of AI/ML systems and their societal implications. This echoes Bertrand Russell’s observation: “The whole problem with the world is that fools and fanatics are so confident and the intelligent are full of doubt.” The research demonstrates that initially, teachers often approached algorithmic justice with broad, theoretical concerns. However, participatory design, actively building and auditing AI systems, introduced necessary doubt, prompting a shift toward more nuanced, situated understandings of potential harms and how to address them. This process illustrates how structure dictates behavior; the act of doing, of actively engaging with the technology, fundamentally reshapes conceptions and fosters agency.
Future Pathways
The work presented here suggests a crucial point: infrastructural change in education need not demand complete demolition and reconstruction. Instead, a gradual evolution – a reinforcing of existing foundations – proves more sustainable. The participatory design approach, while promising, remains largely contained within the professional development setting. A genuine test lies in how these altered conceptions of algorithmic justice translate into sustained classroom practice and, crucially, into student agency. The scaffolding provided during the intervention must give way to self-supporting structures of critical inquiry.
A persistent challenge concerns the relational ethics inherent in addressing algorithmic harms. This is not simply a matter of technical auditing, but of acknowledging the power dynamics embedded within data, design, and deployment. Further research should explore how teachers navigate these complexities with students, fostering not just awareness, but a sense of ethical responsibility and the capacity for constructive intervention. The study raises the question of scalability – can such interventions be adapted and implemented effectively across diverse educational contexts, without losing their grounding in specific community needs?
Ultimately, the field needs to move beyond simply ‘teaching about’ AI/ML and towards cultivating a practice of ‘critical computational empowerment’. This requires a shift in focus from isolated lessons to the integration of ethical considerations throughout the computer science curriculum – a fundamental rethinking of what it means to be computationally literate in an age of increasingly automated systems.
Original article: https://arxiv.org/pdf/2602.16123.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/