Who Teaches the Machines?

Author: Denis Avetisyan


As AI systems increasingly shape what we know, it’s crucial to consider how their knowledge is governed and who holds the authority over it.

This review proposes governing educational AI as public infrastructure built on open cognitive graphs and a community-driven governance model.

Despite growing reliance on artificial intelligence in formative assessment and self-directed learning, current governance mechanisms fail to adequately address the epistemic authority these systems wield. This paper, ‘How should AI knowledge be governed? Epistemic authority, structural transparency, and the case for open cognitive graphs’, reconceptualizes educational AI as public cognitive infrastructure and proposes a solution centered on Open Cognitive Graphs (OCGs) and a ‘trunk-branch’ governance model. By externalizing pedagogical structure and distributing expertise, this framework aims to ensure transparency, accountability, and equitable access to knowledge. But can such a structural approach truly align AI with democratic values and public responsibility?


Deconstructing the Black Box: Why AI Must Reveal Its Reasoning

Contemporary educational AI, despite demonstrating impressive capabilities, frequently operates as an inscrutable ‘black box’ for those attempting to learn from it. This lack of transparency stems from complex algorithms and neural networks whose internal workings are often hidden from view, even for the systems’ creators. Consequently, students are presented with answers or solutions without a clear understanding of how those conclusions were reached. This poses a significant challenge to genuine learning, as it inhibits critical thinking and the ability to evaluate the validity of information – skills essential for navigating an increasingly complex world. The issue isn’t simply that the AI arrives at an incorrect answer, but that the learner is denied the opportunity to dissect the reasoning process itself, hindering their ability to identify flaws or build upon the AI’s insights.

When learners cannot scrutinize an AI’s reasoning, the damage extends beyond impeded comprehension: it actively undermines the development of critical thinking skills. Unable to assess the validity of the AI’s process, to identify potential biases, flawed logic, or missing information, students are encouraged to accept outputs passively, which erodes their capacity to form independent judgments and build robust, lasting knowledge. Deep understanding is sacrificed in favor of surface-level acceptance, creating a dependency that ultimately limits intellectual growth.

To cultivate genuine understanding and trust in artificial intelligence, educational systems must move beyond simply delivering answers and instead prioritize the exposure of underlying reasoning. This necessitates a fundamental shift in design, focusing on systems capable of externalizing their internal logic – effectively making their ‘thought processes’ visible to learners. Such systems wouldn’t just present a solution, but also detail the steps, assumptions, and data used to arrive at it, allowing students to critically evaluate the validity of the conclusions. Crucially, this inspectability should extend to revisability; learners should be empowered to challenge the AI’s reasoning, modify its parameters, and observe the resulting changes, fostering a deeper, more nuanced comprehension of both the AI’s capabilities and its limitations, and ultimately building a more robust and trustworthy learning experience.

The Open Cognitive Graph: Exposing the AI’s Internal Logic

The Open Cognitive Graph (OCG) implements a standardized API – currently utilizing JSON-LD and GraphQL – that allows developers to query and modify the internal knowledge representation of an Educational AI system. This interface externalizes the system’s pedagogical structure, revealing the concepts it utilizes, the relationships between them, and the rules governing its reasoning process. Access to this structure enables external auditing of the AI’s logic, identification of potential biases, and direct modification of the knowledge graph to correct errors or adapt the system to new pedagogical approaches. The OCG facilitates a separation between the AI’s reasoning engine and its knowledge base, promoting modularity and enabling collaborative refinement of the underlying educational content.
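The paper names JSON-LD and GraphQL as the interface formats; as a minimal sketch, here is what a single OCG concept node might look like when serialized as JSON-LD. The vocabulary URL, field names (`requires`, `assessedBy`), and identifiers are illustrative assumptions, not taken from the paper.

```python
import json

# Hypothetical JSON-LD representation of one OCG concept node.
# Context URL, property names, and IDs are invented for illustration.
concept = {
    "@context": "https://example.org/ocg/v1",
    "@type": "Concept",
    "@id": "ocg:fraction-addition",
    "label": "Adding fractions with unlike denominators",
    "requires": ["ocg:common-denominator", "ocg:fraction-equivalence"],
    "assessedBy": ["ocg:item-4821"],
}

# Externalizing the structure means any auditor can fetch, inspect,
# and diff nodes like this one against the running system.
print(json.dumps(concept, indent=2))
```

A GraphQL layer over such nodes would then let external auditors query, for example, every concept reachable from a given prerequisite, without access to the reasoning engine itself.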

The Open Cognitive Graph (OCG) establishes a formal representation of concept dependencies, defining which concepts must be understood before others can be effectively learned. This is achieved through directed edges linking concepts, explicitly stating prerequisite knowledge. The OCG utilizes these relationships to validate the logical flow of educational material, flagging inconsistencies where a concept is introduced without its prerequisites being established. This mechanism allows for the identification of potential student misconceptions arising from missing foundational knowledge, enabling targeted interventions and adaptive learning paths. The formal representation also facilitates automated reasoning about knowledge gaps and the construction of personalized learning sequences.
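The prerequisite mechanism described above can be sketched as a directed acyclic graph checked with Python’s standard library. The concept names and the `validate_sequence` helper are hypothetical illustrations of the idea, not the paper’s implementation.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Toy OCG fragment: each concept maps to its prerequisite concepts.
prereqs = {
    "fractions": {"division"},
    "division": {"multiplication"},
    "multiplication": {"addition"},
    "addition": set(),
}

def validate_sequence(sequence, prereqs):
    """Flag concepts introduced before their prerequisites are established."""
    seen, gaps = set(), []
    for concept in sequence:
        missing = prereqs.get(concept, set()) - seen
        if missing:
            gaps.append((concept, missing))
        seen.add(concept)
    return gaps

# A lesson that introduces 'division' before 'multiplication' is flagged:
print(validate_sequence(["addition", "division", "multiplication"], prereqs))
# A globally valid learning sequence falls out of a topological sort:
print(list(TopologicalSorter(prereqs).static_order()))
```

The same structure supports the automated gap analysis the paragraph mentions: any concept whose prerequisites are absent from a student’s model marks a candidate intervention point.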

The Open Cognitive Graph (OCG) facilitates educational scaffolding by representing concepts not simply as endpoints, but as nodes within a network of prerequisite knowledge. This allows the system to dynamically insert intermediate concepts between a student’s existing understanding and a target concept, addressing gaps in prior knowledge. The OCG determines these intermediate concepts by analyzing the relationships between nodes – identifying concepts that establish a logical pathway from the student’s current knowledge base to the new material. This process enables personalized learning paths and supports deeper comprehension by building upon existing cognitive structures rather than assuming complete prerequisite knowledge.
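The scaffolding step described above amounts to a graph traversal: collect every prerequisite of the target concept that the student does not yet know, ordered so each intermediate concept appears after its own prerequisites. A minimal sketch, with hypothetical concept names:

```python
def scaffold(target, known, prereqs):
    """Return the missing intermediate concepts between `known` and
    `target`, ordered so each appears after its own prerequisites."""
    needed, order = set(), []

    def visit(concept):
        if concept in known or concept in needed:
            return
        needed.add(concept)
        for p in prereqs.get(concept, set()):
            visit(p)          # prerequisites are emitted first
        order.append(concept)

    visit(target)
    return order  # a personalized learning path ending at the target

prereqs = {
    "fractions": {"division"},
    "division": {"multiplication"},
    "multiplication": {"addition"},
    "addition": set(),
}

print(scaffold("fractions", known={"addition"}, prereqs=prereqs))
# → ['multiplication', 'division', 'fractions']
```

A student who already knows addition is routed through multiplication and division before meeting fractions, rather than being assumed to hold that prerequisite knowledge.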

Tracing the Algorithm: Auditing AI Reasoning for Trust and Validity

Educational AI systems utilizing the Open Cognitive Graph (OCG) are capable of producing detailed, auditable reasoning traces. These traces function as a step-by-step record of the AI’s inference process, documenting each logical operation and data transformation undertaken to arrive at a specific conclusion. The OCG facilitates this by structuring the AI’s reasoning as a directed graph, where nodes represent intermediate results and edges denote the applied inference rules. This granular level of documentation allows for external review and validation of the AI’s reasoning, enabling educators and developers to pinpoint the source of errors or biases and ensure the system’s conclusions are logically sound and pedagogically appropriate. The generated traces are not simply outputs, but rather a complete account of how the AI arrived at those outputs.
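One way to realize such a trace is a small tree of nodes, each recording an intermediate statement and the inference rule that produced it from its premises. This is an illustrative data-structure sketch under assumed names (the rule labels and statements are invented), not the paper’s implementation.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class TraceNode:
    """One step in an auditable reasoning trace."""
    statement: str
    rule: str | None = None                      # inference rule applied
    premises: list[TraceNode] = field(default_factory=list)

def render(node, depth=0):
    """Flatten a trace into indented, human-auditable lines."""
    via = f"  [via {node.rule}]" if node.rule else ""
    lines = ["  " * depth + node.statement + via]
    for premise in node.premises:
        lines += render(premise, depth + 1)
    return lines

# A tiny worked trace for adding fractions:
a = TraceNode("2/4 = 1/2")
b = TraceNode("1/2 + 1/4 = 2/4 + 1/4", rule="substitute-equivalent", premises=[a])
c = TraceNode("2/4 + 1/4 = 3/4", rule="add-like-denominators", premises=[b])
print("\n".join(render(c)))
```

Because every conclusion carries its rule and premises, a reviewer can walk the tree backwards from any output to the assumptions that produced it.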

The integration of Neural Language Models (NLMs) with the Open Cognitive Graph (OCG) provides a mechanism for improved reasoning transparency and accuracy in AI systems. While NLMs excel at natural language processing, they are prone to logical inconsistencies and the amplification of errors. By constraining NLM outputs to adhere to the rules and structure defined by the OCG, the system ensures that generated reasoning traces are logically sound and consistent. This constraint process involves validating NLM-produced statements against the OCG’s defined relationships and rules, effectively preventing the propagation of inaccuracies and bolstering the overall reliability of the AI’s inferences.
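The constraint process can be sketched as a post-hoc grounding check: a statement generated by the language model is accepted only if the relation it asserts actually exists in the OCG. The triple format and the relation name below are assumptions for illustration.

```python
# Hypothetical grounding check for NLM-generated explanation steps.
# The OCG is reduced here to a set of allowed (subject, relation, object)
# triples; a real system would query the graph API instead.
allowed_edges = {
    ("multiplication", "prerequisite_of", "division"),
    ("division", "prerequisite_of", "fractions"),
}

def is_grounded(subject, relation, obj):
    """Accept a generated statement only if the OCG licenses it."""
    return (subject, relation, obj) in allowed_edges

# The model claims fractions must precede division: rejected.
print(is_grounded("fractions", "prerequisite_of", "division"))   # False
# The reverse claim matches the graph: accepted.
print(is_grounded("division", "prerequisite_of", "fractions"))   # True
```

Rejected statements can be regenerated or flagged, so hallucinated relationships never propagate into a student-facing explanation.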

The Open Cognitive Graph (OCG) functions as a core validation layer for AI-driven educational systems by providing a structured knowledge representation against which AI-generated reasoning can be assessed. This graph defines the permissible relationships between pedagogical concepts, effectively establishing boundaries for logical inference. Consequently, any insight produced by the AI must adhere to the pre-defined ontological structure of the OCG to be considered valid. This ensures alignment with established pedagogical principles, as the OCG itself is constructed based on accepted educational theories and best practices. Deviation from the OCG’s defined relationships flags potential inaccuracies or illogical leaps in the AI’s reasoning process, enabling targeted correction and ensuring the reliability of educational content.

Beyond Proprietary Systems: Towards a Public Cognitive Infrastructure

Educational AI, when constructed upon the principles of the Open Cognitive Graph (OCG), represents more than just a technological advancement; it embodies a paradigm shift toward a Public Educational Cognitive Infrastructure – a universally accessible and collaboratively maintained resource for all learners. This infrastructure aims to democratize access to knowledge and personalized learning experiences, functioning as a shared intellectual commons rather than a collection of isolated, proprietary systems. By framing Educational AI as public infrastructure, the focus moves from individual ownership and commercial gain toward collective benefit and sustained improvement through broad participation. This conceptualization necessitates a commitment to open standards, transparent development processes, and a governance model that ensures equitable access and ongoing refinement, ultimately fostering a more inclusive and effective learning ecosystem for generations to come.

The future of Educational AI hinges on a deliberate move away from closed, proprietary development models towards openly governed, community-driven systems. Currently, much of AI research and deployment occurs within corporate structures, limiting access, transparency, and the potential for widespread benefit. Shifting to Community Governance actively invites educators, learners, researchers, and developers to collectively shape the evolution of these tools. This broadened participation isn’t merely about increased input; it’s about establishing a shared ownership and responsibility for ensuring the system aligns with diverse educational needs and values. Such a structure fosters innovation by harnessing a wider range of expertise, enhances accountability through transparent processes, and ultimately creates a more robust and equitable learning resource accessible to all.

The proposed Trunk-Branch Governance Model offers a structured approach to establishing epistemic authority within complex systems like educational AI, moving beyond traditional hierarchical structures. This model organizes decision-making across layered levels of consensus, acknowledging that expertise and perspectives are diverse and evolving. The ‘trunk’ represents foundational, widely accepted principles, such as core educational goals and ethical guidelines, while ‘branches’ denote areas open to ongoing debate, innovation, and adaptation. This allows for both stability and flexibility, ensuring accountability through transparent processes and inclusivity by enabling broad participation in shaping the system’s evolution. By distributing authority and fostering pluralism, the model aims to prevent single points of failure or bias, ultimately promoting a more robust and equitable public educational infrastructure built on shared knowledge and continuous improvement.

The pursuit of governing AI knowledge, as detailed in the paper, necessitates a willingness to dismantle conventional approaches to authority and transparency. It echoes G.H. Hardy’s sentiment: “The essence of mathematics lies in its freedom.” This freedom, when applied to knowledge representation, compels a move beyond closed systems and towards the open cognitive graphs advocated within. The ‘trunk-branch’ governance model isn’t merely about control, but about establishing a framework where the very foundations of knowledge can be rigorously examined and challenged – a deliberate act of intellectual ‘reverse-engineering’ as the paper suggests, mirroring the spirit of mathematical inquiry. The paper’s emphasis on community governance is not simply about wider participation, but about fostering a system resilient to dogma and open to continuous refinement, aligning with Hardy’s view that true understanding arises from unconstrained exploration.

What’s Next?

The proposition that AI systems are rapidly accruing de facto epistemic authority isn’t the startling claim; it’s the inevitable consequence of offloading cognitive labor. The more pressing question becomes not if these systems shape understanding, but how to reliably audit the provenance of that shaping. Open Cognitive Graphs offer a potential pathway, but represent only a scaffolding. The true difficulty lies in building a community robust enough to maintain it, one that doesn’t simply replicate existing power structures within a ‘decentralized’ framework.

The ‘trunk-branch’ model, while intuitively appealing, raises the question of who defines the ‘trunk’ – the core, supposedly neutral, knowledge base. History suggests neutrality is a myth. Expect future work to grapple with the inherent biases embedded in any foundational knowledge representation, and the perpetual tension between standardization and creative divergence. The best hack is understanding why it worked, and every patch is a philosophical confession of imperfection.

Ultimately, the success of this approach won’t be measured by technical elegance, but by its ability to resist capture. The field should focus less on perfecting the graph itself and more on developing the sociological and economic incentives needed to ensure genuinely equitable access and governance. A truly open system isn’t simply accessible; it’s actively defended against enclosure.


Original article: https://arxiv.org/pdf/2602.16949.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/

2026-02-22 09:47