The Augmented Educator


The AI as a Sparring Partner

The Detection Deception, Chapter 10

Michael G Wagner
Nov 22, 2025

Fellow Augmented Educators,

Welcome to week ten of ‘The Detection Deception’ book serialization. New chapters appear here for paid subscribers each Saturday.

This week’s installment begins the book’s fourth part by directly confronting the role of AI in our new pedagogical model. It reframes AI from an academic threat into an essential educational tool, introducing the core concept of the “cognitive sparring partner.” The chapter demonstrates how AI, when embedded in the right pedagogical structure, can prepare students for authentic intellectual performance rather than help them bypass it.

Last week’s chapter demonstrated how dialogic, performance-based assessments work across the full range of academic disciplines, from STEM to the humanities. This chapter extends that foundation by answering the most pressing question: If students are no longer assessed through traditional essays, how should they use AI? It moves beyond assessment to preparation, providing a “Taxonomy of Cognitive Partnership” and a practical three-part framework for structuring AI use to build the very cognitive skills that the authentic, dialogic performances we’ve just explored require.

Thank you for reading along! See you in the comments.

Michael G Wagner (The Augmented Educator)


Chapter 10: The AI as a Sparring Partner

The emergence of generative AI in educational settings has created an unprecedented challenge for educators worldwide. Students now have access to tools that can instantly produce essays, solve complex problems, and complete assignments with minimal effort. This technological disruption has sparked an intense debate about the future of learning itself. Some view AI as an existential threat to education, a force that will inevitably erode critical thinking and render traditional assessment obsolete. Others see potential for transformation. Yet this binary framing misses a crucial insight. The impact of AI on learning is not predetermined by the technology’s capabilities but shaped by the pedagogical choices we make in response to it. Understanding how to transform AI from a threat into an educational asset requires us to reconsider not just our teaching methods, but our core assumptions about what learning means in an age of artificial intelligence.

From Substitution to Augmentation

David Jonassen’s framework of “cognitive tools” provides a crucial theoretical foundation for understanding this development. Jonassen argued that no technology possesses an inherent educational function. Instead, the pedagogical context determines whether a tool amplifies or diminishes human cognitive capacity. Consider the humble calculator. In one classroom, it becomes a crutch that prevents students from developing number sense and mental arithmetic skills. In another, the same calculator enables exploration of complex mathematical relationships that would be computationally prohibitive without technological assistance. The calculator hasn’t changed; the pedagogy has.

This principle applies with particular force to generative AI. The same language model that can substitute for student thinking in one context can augment and extend it in another. The determining factor is not the sophistication of the algorithm but the structure of the educational experience within which it operates. Given the cognitive debt detailed in Chapter 2, where students who rely on AI exhibit diminished neural activity and reduced recall of their own work, revising our pedagogical strategies is essential. The challenge is not to ban AI but to restructure how we deploy it within educational contexts.

When we shift to a dialogic model of education—one that centers on process, performance, and visible thinking—AI’s function undergoes a categorical transformation. In this paradigm, assessment no longer fixates on the final product but evaluates the journey of understanding. Students know they will need to defend their ideas in seminar discussions, explain their reasoning in oral examinations, or show their problem-solving process in real time. The endpoint of learning is not a submitted document but a performed understanding. Within this framework, AI transforms from a substitute for thinking into a preparatory tool for intellectual engagement. The technology becomes what we might call a “cognitive sparring partner,” an entity against which students can test their ideas and refine their arguments.

Consider a philosophy student preparing for a seminar on utilitarianism. Rather than prompting an AI system to generate a paper, they engage it as an intellectual adversary: “I believe utilitarianism fails because it cannot adequately account for individual rights. Present the strongest counterarguments to this position.” The AI’s response doesn’t replace the student’s thinking; it sharpens it. The student must evaluate the counterarguments, identify weaknesses, and develop responses. They are not outsourcing their cognition but exercising it against a responsive opponent.

This shift from substitution to augmentation represents more than a tactical adjustment; it embodies a fundamental reconceptualization of AI’s educational role. The technology moves from being an endpoint to becoming a waypoint, from a destination to a springboard. Students use AI not to avoid thinking but to prepare for when they must think publicly, defensively, and creatively in the presence of their peers and instructors.

The distinction becomes clearer when we examine the incentive structures each model creates. In a product-focused assessment approach, the incentive is to minimize effort while maximizing output quality. AI perfectly serves this goal by providing high-quality output with minimal cognitive investment. In a process-focused model, these incentives reverse completely. Knowing they will need to explain their reasoning and defend their positions in real time, students have no incentive to use AI as a substitute. A student cannot bring ChatGPT to an oral examination. They cannot deploy an algorithm to participate in a seminar discussion on their behalf.

This reconceptualization aligns with what cognitive scientists have identified as the crucial distinction between “answer-getting” and “sense-making” in learning. Answer-getting focuses on producing correct responses without understanding why those responses are correct. Sense-making emphasizes the construction of meaningful mental models that can be applied flexibly to novel situations. When assessment shifts to evaluate sense-making rather than answer-getting, AI’s limitations become pedagogically productive rather than problematic.

The transformation also addresses crucial concerns about equity in the age of AI. When AI functions as a substitution tool, it creates a new form of digital divide. Students with access to more sophisticated AI tools gain an unfair advantage. But when AI functions as an augmentation tool within a process-focused assessment framework, these inequities diminish. Success depends not on the sophistication of technological resources but on the ability to internalize, synthesize, and articulate understanding.

Furthermore, this shift aligns with the broader educational goal of preparing students for a world where AI will be ubiquitous. In their professional lives, students will need to work alongside AI productively. They will need to know when to rely on algorithmic assistance and when human judgment remains irreplaceable. By positioning AI as a sparring partner rather than a substitute, we prepare students for this future. They learn to engage with AI critically rather than passively, discovering its capabilities and limitations through direct experience.

A Taxonomy of Cognitive Partnership

The reconceptualization of AI from substitute to sparring partner requires more than theoretical understanding. It demands practical strategies that transform this vision into classroom reality. The following taxonomy provides educators with specific approaches for using AI to push students toward deeper metacognitive reflection.

In this context, the metaphor of the sparring partner deserves careful examination. In boxing, a sparring partner provides resistance without attempting to destroy their opponent. They expose weaknesses while allowing space for improvement. They create conditions for growth through controlled challenges. Most importantly, sparring prepares the fighter for the actual bout, and no one confuses sparring with the real match. This analogy illuminates how AI should function in education when properly deployed.

The Skeptical Reviewer: In this approach, students use AI as a critical interlocutor for their developing ideas. A student working on an essay about climate change policy presents their thesis to the system: “My thesis is that carbon pricing alone cannot address climate change because it fails to account for international cooperation challenges and social equity concerns. Act as a skeptical peer reviewer and provide the three strongest counterarguments to my position.”

The AI’s response challenges rather than replaces student thinking. The system might argue that carbon pricing mechanisms can be designed with international coordination features, that equity concerns can be addressed through revenue recycling, or that imperfect solutions should not be abandoned for even more imperfect alternatives. The student’s task is to engage with these counterarguments, distinguishing those that expose legitimate weaknesses in the thesis from those that can be rebutted.

The Socratic Tutor: In this approach, students prompt AI to guide them toward understanding through questions rather than answers. A student struggling with path dependency in economics might prompt: “I am trying to understand the concept of path dependency. Ask me a series of questions that will help me discover the key principles for myself, but do not give me the answers directly.”

The resulting dialogue forces active cognitive engagement. The AI might ask: “Can you think of a technology that became dominant not because it was the best option but because it gained an early advantage?” Through guided inquiry, the student constructs their own understanding of how historical accidents become locked in through network effects and switching costs. By using this approach, we retain what cognitive scientists term “desirable difficulty,” the beneficial effort that reinforces learning.

The Brainstorming Partner: In this approach, the goal is not to have AI generate ideas that students adopt wholesale but to use it as a catalyst for creative thinking. A literature student analyzing metaphor in poetry might prompt: “Generate five unconventional analogies to explain how metaphor functions in poetry. For each one, explain its strengths and weaknesses.”

The AI might compare metaphor to a chemical catalyst, a jazz improvisation, a mathematical transformation, dream logic, or cultural translation. The student must evaluate each analogy critically, identifying where it illuminates and where it obscures. This process develops analytical capabilities while potentially sparking genuinely original insights that neither the student nor the AI would have generated independently.

The Gap Finder: Here, students use AI to identify what they don’t know they don’t know—the unknown unknowns that often remain invisible in self-directed learning. A student presents their understanding: “Here is my summary of the causes of the 2008 financial crisis: [summary]. Based on a university-level curriculum, what are the key concepts, debates, or perspectives I seem to be missing?”

The AI might identify overlooked global dimensions or ideological assumptions. This feedback doesn’t provide answers but maps the territory of what remains to be learned. The student discovers the boundaries of their knowledge without having those gaps immediately filled.

The Role-Player: This approach leverages AI’s ability to simulate perspectives for educational purposes. A history student studying the Constitutional Convention might prompt: “Act as James Madison during the Constitutional Convention. I will ask you questions about federalism, and you should respond based on Madison’s known writings and beliefs from that period.”

This creates an interactive way to explore historical thinking. The student learns not by receiving a summary of Madison’s views but by engaging in a simulated dialogue that requires them to formulate questions and evaluate responses for historical accuracy and consistency.

Collaborative Strategies: The taxonomy also extends to collaborative AI use that maintains individual accountability. In the debate preparation model, pairs of students use AI to help prepare opposing sides of a classroom debate. Each uses AI to identify potential weaknesses in their own position and strengths in their opponent’s. But during the actual debate, they must perform without AI assistance.

Similarly, in peer teaching preparation, students use AI to help prepare explanations of complex concepts they will teach to classmates. They might prompt AI to identify common misconceptions or suggest helpful analogies. But when they actually teach their peers, they must demonstrate internalized understanding rather than real-time information retrieval.

Critical Evaluation Strategies: The fact-checker approach involves students using AI to generate claims about a topic, then independently verifying those claims through primary sources. This develops crucial skills for the age of AI-generated misinformation, as students learn that AI can produce plausible-sounding but entirely fabricated information.

The boundary-tester strategy involves deliberately pushing AI to the edge of its capabilities. Students might ask AI to analyze a poem written in their local dialect or solve a problem requiring specific institutional knowledge. Through these experiments, students discover firsthand what AI cannot do, developing intuition about when human intelligence remains irreplaceable.

These strategies share crucial characteristics that distinguish them from using AI as a substitution tool. First, they all require the student to generate initial content, whether a thesis, a question, or a summary, rather than begin with a blank page. Second, they position AI’s responses as material for critical evaluation rather than as authoritative answers. Third, they create iterative cycles of engagement where student thinking develops through multiple rounds of challenge and response.

The implementation requires careful pedagogical framing. Students must understand that their interaction with AI is preparatory, not conclusive. Just as athletes understand that performance in practice differs from performance in competition, students must recognize that their ability to engage with AI does not substitute for their ability to perform understanding independently.

Avoiding the “Cognitive Opiate”

The metaphor of AI as a cognitive opiate captures a profound danger that educators must confront directly. Just as opiates provide immediate relief from physical pain while creating long-term dependency, AI can offer immediate relief from cognitive struggle while fostering intellectual dependency and diminished capacity for independent thought.
