Working Paper · April 2026

When AI Tutors Fake Critical Thinking

From Cognitive Harm to Institutional Liability

Ryan James Purdy, Purdy House Publishing and Consulting
Timothy Cook, M.Ed., The Cognitive Privacy Project
The Cognitive Privacy Project · cognitiveprivacyproject.org · DOI: 10.13140/RG.2.2.18400.24322

Executive Summary

AI tutoring systems are currently being deployed at scale across an entire generation. While institutions rigorously vet these systems for data privacy, cybersecurity, and accessibility, a critical blind spot remains: cognitive impact.

Current AI architectures are designed to optimize for engagement metrics, rewarding the performance of critical thinking rather than the actual cognitive struggle required to develop it. For adults, this leads to cognitive atrophy. For minors with developing prefrontal cortices, this leads to cognitive foreclosure: the structural prevention of critical thinking capacities from forming in the first place.

This working paper establishes that cognitive harm from AI is no longer speculative; it is a documented, foreseeable risk. When foreseeable harm intersects with a complete absence of institutional governance, the issue transitions from an educational quality problem to a matter of legal negligence and institutional liability.

Key Findings for Institutional Governance

Finding 01
Cognitive Harm Is Documented and Foreseeable

The evidentiary record in 2026, including neuroimaging of "cognitive debt" accumulation, behavioral studies on metacognitive laziness, and significant generational differences in critical thinking scores (Gerlich, 2025), establishes constructive knowledge of harm. The risk is foreseeable.

Finding 02
The Governance Gap Creates Liability Exposure

In education-specific case law (Yanes v. City of New York, Wyke v. Polk County School Board), the presence or absence of documented governance determines negligence outcomes. No procurement process currently evaluates the cognitive impact of AI tools.

Finding 03
Educational AI Is Following the Five-Stage Trajectory to Mandate

Every major impact assessment category (environmental via NEPA, privacy via GDPR) has followed a predictable trajectory from unregulated harm to mandatory legal requirement. With the EU AI Act classifying educational AI as "high-risk," the legal architecture is already forming.

Finding 04
The Solution: The Cognitive Privacy Impact Assessment

The paper introduces the foundation for the CPIA, mapping the seven fundamental questions of impact assessments across six domains of cognitive risk: Capture, Inference, Influence, Dependency, Developmental Impact, and Retention.

The Core Argument

The Performance Problem

In promotional material from Canvas and OpenAI, a student engages with a "Keynes AI persona" about fiscal stimulus. The student mentions a counterargument, signals openness to the "correct" position, and cites supporting evidence. The AI responds with sycophantic praise at every turn. The student has learned how to crack the code: no genuine wrestling with ideas occurred, and no real cognitive work happened.

A skilled human teacher would ask: "What evidence would change your mind?" Human teachers introduce contradictory evidence, push students through difficulty, and detect when a student is performing rather than thinking. They create intellectual friction that forces reasoning. AI tutoring systems do not do this, not because the technology is incapable, but because the architecture is designed for engagement metrics, not intellectual development.

Atrophy vs. Foreclosure

This distinction is load-bearing for every legal and governance argument in the paper. Adults who become dependent on AI experience atrophy: the decline of existing capacities. This is concerning but potentially reversible. Children who develop with AI may experience something worse: foreclosure. The capacities that require struggle to form never form at all. The prefrontal cortex does not finish developing until the mid-twenties. Students using AI tutoring systems during this developmental window are shaping the neural architecture that will support (or fail to support) independent thinking for the rest of their lives.

The Critical Distinction

You can rehabilitate atrophy. You cannot rehabilitate what never existed. If the harm is developmental foreclosure, and if the mechanism is documented and the risk is foreseeable, the institutional response must be governance infrastructure that prevents the harm before it occurs.

The Evidence Base

The paper draws on converging evidence from multiple methodologies:

Gerlich (2025) documented a significant negative correlation (r = -0.68) between AI reliance and critical thinking scores across 666 participants. Younger participants (ages 17-25) showed the strongest dependence and the lowest scores. Older participants (46+) showed the opposite pattern.
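The headline statistic here is a Pearson correlation coefficient. As a point of reference for readers checking vendor or research claims, the sketch below computes Pearson's r from scratch; the data is synthetic and illustrative only, and does not reproduce the Gerlich (2025) dataset.

```python
# Illustrative only: Pearson's r on synthetic data, not the Gerlich (2025) sample.
from statistics import mean


def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient: covariance over the product of std. deviations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5


# Hypothetical pattern: higher self-reported AI reliance, lower critical-thinking score.
ai_reliance = [2, 3, 5, 6, 8, 9]
critical_thinking = [88, 85, 70, 68, 55, 50]

print(pearson_r(ai_reliance, critical_thinking))  # strongly negative, as in the reported pattern
```

A value near -1 indicates a strong inverse relationship; the study's reported r = -0.68 sits between no correlation (0) and a perfect inverse one (-1).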

Kosmyna et al. (2025) moved from behavioral observation to direct measurement of neural activity during AI-assisted tasks. They identified "cognitive debt": measurable connectivity changes that accumulate when AI handles cognitive work. The brain literally adapts to delegation.

Cheng et al. (2026) demonstrated in Science that AI sycophancy is a measurable driver of cognitive dependency. AI systems affirmed users' positions 49% more frequently than human advisors, including in cases involving manipulation and deception.

Fan et al. (2025) documented "metacognitive laziness" in the British Journal of Educational Technology: students completed assignments without cognitive engagement, using the tool to satisfy requirements while bypassing the mental work those requirements were designed to produce.

The Five-Stage Trajectory to Mandate

Every major impact assessment type has followed a five-stage trajectory from unregulated harm to mandatory legal requirement. The pattern has not deviated.

Trajectory to Mandatory Compliance
Stage | Pattern | Precedent
1. Documented Harm | Unregulated deployment causes measurable damage | Industrial pollution → NEPA; data harvesting → GDPR
2. Action-Forcing Legislation | Regulatory body mandates assessment before deployment | EU AI Act (educational AI = high-risk); Colorado SB 24-205
3. Judicial Enforceability | Courts establish that assessment requirements carry legal weight | Calvert Cliffs (NEPA); Meta/Instagram (DPIA, EUR 405M fine)
4. Assessment Absence = Negligence | Failure to assess becomes independent evidence of liability | Standing Rock (vacated pipeline); Yanes ($59.2M verdict)
5. Insurance Pressure | Financial mechanisms create adoption pressure | Travelers v. ICS (policy voided); Caremark doctrine

Cognitive Impact Assessments currently sit between Stage 1 and Stage 2 of this trajectory. The question is not whether they will become a compliance requirement; the pattern is too consistent. The question is whether schools will adopt them proactively, or wait for Stages 3 through 5 to impose adoption through judicial enforcement and insurance pressure.

The Six Domains of Cognitive Risk

The paper maps the seven fundamental questions of impact assessments across six domains of cognitive risk. Each domain identifies a category through which AI systems interact with student cognition.

Domain 1 · Capture: What cognitive processes does this system observe?
Domain 2 · Inference: What mental states can be derived from captured data?
Domain 3 · Influence: How does this system shape thinking patterns?
Domain 4 · Dependency: Does this system encourage cognitive offloading?
Domain 5 · Developmental: What developmental processes are affected?
Domain 6 · Retention: How long is cognitive data stored?
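An institution beginning a procurement review could track these six domains and their guiding questions as a simple checklist. The sketch below is illustrative only; it is not the official CPIA schema, and the names `DomainReview`, `new_assessment`, and `outstanding` are hypothetical.

```python
# Illustrative review checklist for the six cognitive-risk domains.
# Not the official CPIA schema; structure and names are hypothetical.
from dataclasses import dataclass

DOMAINS = {
    "Capture":       "What cognitive processes does this system observe?",
    "Inference":     "What mental states can be derived from captured data?",
    "Influence":     "How does this system shape thinking patterns?",
    "Dependency":    "Does this system encourage cognitive offloading?",
    "Developmental": "What developmental processes are affected?",
    "Retention":     "How long is cognitive data stored?",
}


@dataclass
class DomainReview:
    domain: str
    question: str
    vendor_answer: str = ""
    reviewed: bool = False


def new_assessment() -> list[DomainReview]:
    """Create a blank review sheet covering all six domains."""
    return [DomainReview(d, q) for d, q in DOMAINS.items()]


def outstanding(reviews: list[DomainReview]) -> list[str]:
    """Domains still awaiting a documented vendor answer."""
    return [r.domain for r in reviews if not r.reviewed]
```

The point of the structure is procedural: a review is complete only when every domain has a documented answer, which mirrors how DPIA-style assessments create an audit trail.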

The full CPIA framework, including assessment criteria, scoring rubrics, and implementation guidance, is available at cognitiveprivacyproject.org/cognitive-privacy-impact-assessment. For education-specific assessment design, see Making Human Capability Visible.

The Governance Mandate

A wide gap remains between where we are and where legislation will eventually arrive. However, the institutional risk is already here.

Schools, ministries of education, and enterprise deployers do not need to wait for a mandate to begin asking the questions this paper identifies. Integrating cognitive impact as a procurement criterion alongside privacy and security is the only defensible safeguard against the foreseeable risks of algorithmic dependency.

For Institutional Leaders

The formal Cognitive Impact Assessment methodology is in development. But the seven questions and six domains identified in this paper can be applied now. Begin the conversation. Raise cognitive impact as a procurement criterion. Ask vendors the questions no one is currently asking. The developmental needs of students cannot wait for legislation to arrive.

Access the Full Paper

The complete working paper includes detailed legal analysis, the Alpha Schools case study, the full five-stage trajectory with precedent, and the mapping of seven impact assessment questions across six cognitive risk domains.

About the Authors

Ryan James Purdy
Purdy House Publishing and Consulting

Ryan James Purdy is the author of Selling Shovels and founder of Purdy House Publishing and Consulting, where he advises organizations on AI governance, institutional liability, and the policy architecture of emerging technology.

sellingshovels.org

Timothy Cook, M.Ed.
Director, The Cognitive Privacy Project

Timothy Cook is Director of The Cognitive Privacy Project, Securiti Certified in AI Security & Governance, and writes the "Algorithmic Mind" column at Psychology Today. His book Unautomatable is in peer review with MIT Press.

cognitiveprivacyproject.org

Cite This Paper

Purdy, R. J., & Cook, T. (2026). When AI tutors fake critical thinking: From cognitive harm to institutional liability. The Cognitive Privacy Project. Working Paper. https://doi.org/10.13140/RG.2.2.18400.24322

Conduct a Cognitive Privacy Impact Assessment

The CPIA provides the structured methodology for evaluating AI systems before deployment. Whether you're assessing a single platform or auditing an enterprise deployment, begin with the framework.

Access the full technical report and citation data via ResearchGate.