Asynchronous and Embodied Models
The Detection Deception, Chapter 8
Fellow Augmented Educators,
Welcome to week eight of ‘The Detection Deception’ book serialization. This week’s chapter moves from the philosophical why to the practical how. After last week’s chapter established the theoretical foundations for a pedagogy of process, this chapter provides a set of tangible, AI-resistant assessment frameworks built upon that very foundation. It argues that if we are to escape the detection arms race, we must fundamentally shift what we grade. The chapter details three concrete models for achieving this:
Process-based assessment (“grading the journey”),
Embodied assessment (“the performance of understanding”), and
Context-specific assessment (“the power of specificity”).
The chapter ultimately offers a pragmatic path forward. It argues that instead of trying to build a better “mousetrap” to detect AI, we should simply change the “cheese.” By designing assessments that value the very things AI cannot replicate, we make generative AI irrelevant and, in the process, create a more authentic, equitable, and meaningful way to measure human learning.
Thank you for reading along! See you in the comments.
Michael G Wagner (The Augmented Educator)
Contents
Chapter 1: The Castle Built on Sand
Chapter 2: A History of Academic Dishonesty
Chapter 3: The Surveillance Impasse
Chapter 4: Making Thinking Visible
Chapter 5: The Banking Model and Its Automated End
Chapter 6: Knowledge as a Social Symphony
Chapter 7: A Unified Dialogic Pedagogy
Chapter 8: Asynchronous and Embodied Models
Chapter 9: Dialogue Across the Disciplines
Chapter 10: The AI as a Sparring Partner
Chapter 11: Algorithmic Literacy
Chapter 12: From the Classroom to the Institution
Chapter 8: Asynchronous and Embodied Models
The human desire to measure learning has always been at odds with learning’s essential nature as a process of transformation. We create endpoints where none naturally exist, draw boundaries around knowledge that refuses containment, and demand proof of understanding in forms that understanding itself resists. For decades, this tension remained manageable, a philosophical problem papered over by practical necessity. Students wrote essays, teachers graded them, and everyone pretended that these textual artifacts adequately represented the messy, recursive, deeply human process of coming to know something. Then, artificial intelligence arrived with the force of a revelation.
The crisis is also an opportunity. If machines can instantly generate the products we’ve been grading, perhaps it’s time to stop grading products and start recognizing process. If algorithms can mimic the surface features of understanding, perhaps we need to dig deeper, finding evidence of learning in places machines cannot reach. The following exploration examines three approaches to making the learning process itself visible and assessable. It considers how we might evaluate the journey rather than just the destination, how embodied performance reveals dimensions of understanding that text cannot capture, and how the specific, irreplaceable context of each classroom can become our greatest asset in preserving authentic assessment. These are not complete solutions but experiments in recognition, attempts to see and value the actual work of learning rather than its fossilized remains.
Grading the Journey, Not the Destination
The solution to our assessment crisis lies not in building better detection systems or engaging in an endless technological arms race, but in fundamentally re-conceptualizing what we choose to assess. The shift from product to process represents more than a tactical adjustment to the AI challenge. It reflects a deeper philosophical commitment to making learning visible in all its messy, iterative complexity. When we grade the journey rather than merely the destination, we create assessments that are not only resistant to AI substitution but also more authentic representations of genuine intellectual engagement.
Consider how human writing actually unfolds. A student confronting a complex assignment does not simply sit down and produce a perfect essay from beginning to end. The authentic writing process involves false starts, abandoned paragraphs, and conceptual breakthroughs that emerge through the act of writing itself. There are moments of confusion where ideas refuse to cohere, followed by sudden clarity when connections become apparent. Arguments grow, sometimes dramatically, as the writer discovers what they actually think through the process of trying to articulate it. This recursive, often chaotic journey of intellectual discovery cannot be replicated by an AI, which generates text through statistical pattern matching rather than genuine cognitive struggle.
The process-oriented approach transforms this previously hidden work into assessable evidence of learning. Multiple drafts become windows into cognitive development. A first draft might reveal a student grappling with basic comprehension, organizing scattered observations into preliminary themes. The second draft shows refinement as stronger arguments emerge and weaker ones fall away. The final version shows not just the endpoint of thinking but the evolution of understanding. Each iteration carries what might be called a “cognitive fingerprint,” unique patterns of development that reflect an individual mind wrestling with ideas.
This approach requires educators to expand their conception of what constitutes assessable work. Process-based assessment recognizes the preparatory stages as valuable data about student learning. Research notes reveal how students evaluate and synthesize sources, and outline revisions show the development of organizational logic. Even abandoned attempts provide evidence of intellectual risk-taking and the willingness to pursue ideas that might not ultimately succeed.
The power of this approach becomes clear when we consider the role of peer review in the writing process. When peer review becomes a formal, graded component of the assignment, it transforms into something far more substantial. Students must show their understanding of course concepts not just through their own writing but through their ability to identify strengths and weaknesses in others’ work. The feedback a student provides becomes an assessable artifact in its own right.
Imagine a sociology course where students are writing about social inequality. Maria submits her first draft for peer review, and James provides detailed feedback. His comments reveal whether he understands the theoretical frameworks being applied. When he suggests that Maria’s analysis of intersectionality could be strengthened by considering economic factors alongside race and gender, he shows comprehension of how these systems interact. When he identifies a logical gap in her argument about structural barriers, he shows his ability to think critically about causation and evidence. The quality, specificity, and theoretical grounding of his feedback become powerful indicators of his own learning, entirely separate from his ability to produce a polished essay.
This exemplifies the Socratic principle in practice—understanding becomes visible through the act of questioning and examining ideas. The peer review process creates multiple benefits beyond AI resistance. Students receive feedback from multiple perspectives, broadening their understanding of how their ideas land with different readers. They develop metacognitive awareness by articulating what makes writing effective or ineffective. The social dimension of peer review also introduces accountability mechanisms that purely individualized assignments lack.
The revision plan represents perhaps the most powerful tool in the process-based assessment toolkit. After receiving feedback from peers or instructors, students must articulate how they plan to address the critiques and improve their work. This document forces students to engage in explicit metacognition, moving beyond simply making changes to understanding why those changes matter. A strong revision plan demonstrates several sophisticated cognitive moves. The student must accurately interpret feedback, distinguishing between central criticisms that require structural changes and minor suggestions for polish. They must also evaluate which feedback to incorporate and which to respectfully decline, exercising intellectual judgment.
Consider a hypothetical revision plan from a history student writing about the French Revolution. After receiving feedback that her essay lacks a clear argument about causation, she writes:
“The primary criticism of my first draft is that I present multiple factors contributing to the Revolution without establishing a hierarchy of causation or showing how these factors interacted. In my revision, I will restructure the essay around the thesis that economic crisis was the necessary condition that transformed long-standing social tensions into revolutionary action. This requires significant reorganization. I’ll move my discussion of Enlightenment ideas from the beginning to show how economic hardship made these ideas suddenly relevant to broader populations. The section on the nobility’s resistance to tax reform, currently buried in paragraph four, needs to become central to my argument about the fiscal crisis as the triggering mechanism.”
This revision plan reveals genuine intellectual work that cannot be outsourced to an AI. The student demonstrates an understanding of the difference between correlation and causation, the ability to synthesize multiple historical factors into a coherent narrative, and the metacognitive awareness to recognize and address weaknesses in her own thinking.
The reflective essay offers another powerful window into authentic learning processes. Rather than focusing on what students know, reflective writing reveals how they came to know it. These documents ask students to examine their own learning journey with questions that probe beyond surface comprehension.
A student in a philosophy course might write about struggling with the concept of moral relativism: “The most challenging part of this essay was moving beyond my initial instinct to simply reject relativism as obviously wrong. I found myself getting angry at the readings, which made me realize I was bringing strong assumptions I hadn’t examined. The breakthrough came when I tried to steel-man the relativist position, arguing it as strongly as possible before critiquing it. This forced me to understand why thoughtful people might hold this view, even if I ultimately disagree.”
This reflection shows several forms of understanding that resist AI replication. Following Freire’s problem-posing approach, the student demonstrates how genuine learning involves confronting one’s own assumptions and engaging in critical self-examination. They show emotional honesty about their learning process, including frustration and breakthroughs. Most importantly, they show intellectual growth through strategic engagement with challenging ideas.
The focus on process addresses one of the most persistent challenges in writing instruction: the tendency for students to view revision as mere proofreading rather than substantive rethinking. When the entire journey becomes gradable, students cannot simply fix comma splices and call it revision. They must show a genuine evolution in their thinking. This pedagogical shift has benefits that extend far beyond preventing AI substitution. Students develop stronger writing skills when they understand revision as intellectual work rather than mechanical correction.
Implementing process-based assessment does require significant changes in course design and instructor workload. However, this increased investment yields multiple returns. Instructors gain much richer data about student learning, allowing for more targeted intervention when students struggle. The distributed nature of process assessment also reduces the stakes of any single deadline, potentially decreasing student anxiety and the temptation to seek shortcuts.
This approach fundamentally reframes the relationship between students and their own work. Rather than viewing writing as a performance to be optimized for evaluation, students begin to understand it as a tool for discovering and developing their own thinking. Building on Bakhtin’s dialogic principle, the process becomes intrinsically valuable as students engage in dialogue with their own developing ideas. When students value the journey of learning for its own sake, the option to skip that journey by using AI becomes less appealing, not because it is forbidden but because it would mean missing the point entirely.
The Performance of Understanding
The written word has dominated academic assessment for so long that we often forget it represents only one mode through which humans can demonstrate understanding. In our current moment, when machines can generate flawless prose instantaneously, the limitations of text-based assessment become particularly acute. The solution lies not in abandoning writing but in expanding our conception of what makes up legitimate academic evidence. Video assessment offers a powerful method for capturing dimensions of understanding that text alone cannot convey.
When a student records themselves explaining a concept, something fundamental changes in the assessment’s nature. The performance becomes embodied and situated in time and space, marked by the irreducible specificity of a human voice and presence. These recordings capture not just what students know but how they know it, revealing the cognitive processes that unfold in real time as understanding takes shape through articulation.
Consider the difference between reading a written explanation of photosynthesis and watching a biology student explain the process while sketching diagrams on a whiteboard. In the written version, we see only the polished endpoint of thought. The video reveals the journey of explanation itself. The student might begin confidently, describing how plants convert sunlight into chemical energy. Then comes a pause, a moment of visible thinking as they search for the right way to explain the role of chlorophyll. They might gesture with their hands, mimicking the movement of electrons through photosystems. When they realize they’ve confused the order of the light and dark reactions, we see the self-correction happen in real time: “Actually, wait, let me back up. The Calvin cycle doesn’t require light directly, but it does depend on the ATP and NADPH produced in the light reactions.”
These moments of hesitation, correction, and reformulation provide windows into authentic understanding that no written document can capture. They reveal what cognitive scientists call “knowledge in action” rather than “knowledge in storage.” This exemplifies the Socratic principle in practice—the student’s understanding becomes visible through the process of articulation and self-correction.
The video abstract represents one of the most straightforward applications of this principle. Rather than submitting only a written essay, students record a brief, unscripted explanation of their core argument. The constraints are deliberately minimal: two to three minutes, no editing, and no reading from notes. These limitations force students to internalize their ideas deeply enough to articulate them spontaneously.
A student in a literature course, having written an essay about symbolism in Toni Morrison’s “Beloved,” sits before their laptop camera to record their video abstract. They begin: “My essay argues that Morrison uses water imagery not just as a symbol of memory but as a way to show how the past literally flows into the present for the characters.” As they continue, we observe subtle indicators of genuine engagement. Their eyes occasionally drift upward, a sign of accessing memory rather than reciting memorized text. They use their hands to illustrate the flowing motion they’re describing. When they reach a complex point about the relationship between water and trauma, they slow down, choosing words carefully: “It’s not that water represents trauma exactly, but more like... water shows how trauma moves, how it can’t be contained or controlled.”
This performance reveals understanding in ways that transcend the content of what is said. The rhythm of speech, with its natural variations in pace and emphasis, shows which ideas the student finds most compelling or challenging. The moments of searching for words indicate active thinking rather than passive recitation.
The concept explanation video takes this principle in a slightly different direction. Rather than summarizing their own work, students must teach a difficult concept from course material as if explaining it to a peer who missed class. This shifts the cognitive demand from reproduction to translation.
Consider a hypothetical economics student explaining the concept of moral hazard. They begin with a definition but quickly realize this isn’t sufficient. “Okay, so imagine you have car insurance that covers everything, no matter what. You might start driving a bit more recklessly, right? Not on purpose necessarily, but because you know you’re protected. That’s moral hazard—when being protected from risk actually changes your behavior to become riskier.” They continue building the explanation through examples, occasionally checking their imagined audience’s understanding: “Does that make sense? Let me give another example from banking...”
Following Freire’s problem-posing approach, the student transforms abstract economic theory into accessible, concrete examples that connect to lived experience. The authenticity markers in such performances are many and subtle. Natural speech disfluencies—the “ums” and “uhs” that are sprinkled into human conversation—actually serve as indicators of cognitive processing. Self-correction provides another powerful authenticity marker. When humans explain complex ideas, they monitor their own speech for accuracy and clarity, sometimes catching errors mid-sentence.
The paralinguistic dimensions of video assessment offer particularly rich data. Tone of voice conveys confidence or uncertainty in ways that text cannot capture. Facial expressions provide additional information. The furrowed brow of concentration, the slight smile of recognition when making a connection, the momentary look of confusion followed by clarity—these micro-expressions map the internal landscape of learning.
Gesture adds yet another layer of meaning. Humans naturally use their hands when explaining spatial, temporal, or relational concepts. A student explaining the structure of DNA might use their hands to show the double helix twisting. Someone discussing historical causation might use gestures to show connections between events. These embodied representations reveal how students mentally model abstract concepts.
The situated demonstration takes video assessment into the realm of applied knowledge. Here, students must not only explain but also show their understanding through action. A chemistry student demonstrates proper titration technique while explaining the underlying principles. An education student shows how they would facilitate a particular classroom discussion technique. A computer science student walks through their code, explaining their decision-making process as they debug a program in real time.
These demonstrations make visible the integration of theoretical knowledge with practical skill. A nursing student demonstrating patient assessment doesn’t just recite the steps but shows how they adapt their approach based on patient responses. They explain their reasoning: “I’m starting with open-ended questions to build rapport before moving to more specific health questions. Notice how I’m maintaining eye contact while taking notes—this helps the patient feel heard while I document important information.”
The power of video assessment extends beyond its resistance to AI substitution. It addresses a long-standing equity concern in education. Students who struggle with written expression because of dyslexia or other learning differences might excel at verbal explanation. International students who face challenges with academic writing conventions in English might show sophisticated understanding through speech. The multimodal nature of video—combining verbal, visual, and kinesthetic elements—accommodates diverse learning styles and forms of intelligence.
Assessment criteria for video performances must be carefully articulated to maintain reliability and fairness. Rubrics might evaluate content accuracy, explanation clarity, appropriate use of examples, and evidence of understanding connections between concepts. Importantly, these criteria should focus on academic substance rather than presentation polish.
The cognitive science supporting embodied assessment is robust. Research on embodied cognition shows that understanding is not purely abstract but grounded in sensorimotor experience. When students use gestures while explaining concepts, they’re not just communicating but actually supporting their own cognitive processing. This embodied dimension of understanding cannot be replicated by AI, as it lacks a body and the sensorimotor experiences that ground human cognition.
The temporal dimension of video assessment also matters. Unlike written text, which exists outside of time, video performances unfold in real time. This temporality creates what might be called “cognitive commitment”—once words are spoken, they cannot be unsaid. Students must think ahead while speaking, maintaining coherence across time. This online processing demand reveals the robustness and flexibility of understanding in ways that carefully edited text cannot.
The Power of Specificity
The most elegant solution to the AI challenge in education may also be the simplest: make assignments so specific to the lived experience of a particular classroom that generic machine-generated responses become useless. This approach recognizes a fundamental limitation of large language models. These systems are trained on vast corpora of text from the public internet, giving them broad but shallow knowledge about nearly everything. What they definitively lack is access to the unique and deeply human interactions that occur within the four walls of a specific classroom on a particular Tuesday afternoon.
Every class develops its own microculture over the course of a semester. Inside jokes emerge from spontaneous moments. Particular phrases become shorthand for complex ideas discussed at length. Individual students become known for specific perspectives or questions they consistently raise. A guest speaker shares an anecdote that resonates throughout subsequent discussions. These accumulated experiences create what we might think of as a “local knowledge ecosystem”—a body of shared references and understandings that exists nowhere else in the universe.
The principle operates on multiple levels. At the most basic level, assignments can require direct engagement with specific classroom events. Rather than asking students to analyze the concept of market failure in abstract terms, an economics professor might prompt: “In Thursday’s class, Jennifer argued that healthcare markets fail because patients cannot assess quality before purchase. Using David’s counterargument about reputation effects and the specific example of cosmetic surgery that emerged in our discussion, evaluate whose position better explains the American healthcare system’s current challenges.”
This prompt creates multiple barriers to AI substitution. An LLM has no knowledge of Jennifer or David, cannot access their specific arguments, and lacks the context of how cosmetic surgery became relevant to the discussion. Building on Bakhtin’s dialogic principle, the assignment requires engagement with ideas as they actually unfolded in class discussion, with all the nuance and complexity that emerges from live intellectual exchange.
Integrating personal experience adds another layer of specificity that resists algorithmic replication. Assignments that require students to apply course concepts to their own lives create a form of evidence that is simultaneously academic and autobiographical. A psychology student might be asked to analyze their own decision-making patterns through the lens of cognitive biases discussed in class, providing specific examples from their recent experiences.
Consider a hypothetical assignment from an environmental science course: “Reflect on your personal consumption patterns during the week of October 15-22, which you tracked using the journal method we practiced in class. Apply Hardin’s ‘Tragedy of the Commons’ model (as discussed in Tuesday’s lecture, not the textbook version) to analyze three specific consumption decisions. For at least one decision, incorporate the critique that Marcus raised about Hardin’s assumption of purely rational, self-interested actors (and how it overlooks, for example, Ostrom’s work on community governance), and explain whether his objection applies to your example.”
This prompt weaves together multiple strands of specificity. It references a particular tracking method taught in class, a specific version of a theoretical framework that differs from published sources, and a student’s critique that shaped class discussion. Following Freire’s problem-posing approach, the assignment transforms abstract theory into lived experience, requiring students to examine their own actions through critical frameworks.
The power of guest speakers and unique course materials provides another avenue for creating AI-resistant assignments. When a professional shares their experience with the class, they create intellectual resources that exist nowhere else. A journalism course might host a local investigative reporter who describes their process for uncovering municipal corruption. The subsequent assignment asks students not just to summarize the presentation but to apply the speaker’s specific method to analyze a different case study.
Unique course materials can include primary sources, unpublished documents, or materials created specifically for the class. A history professor might share diary entries from their own archival research, documents not available online. These materials create an information asymmetry that no AI can overcome.
The synthesis across modalities represents a sophisticated form of contextual specificity. When courses incorporate multiple forms of media—films watched together, podcasts discussed in class, images analyzed collectively—assignments can require students to make connections across these different modes of engagement. A media studies assignment might ask: “Compare how the documentary we watched on Tuesday represents urban poverty with the narrative strategies in the three podcasts we analyzed. Pay particular attention to the moment in the film when the camera lingers on the empty playground (timestamp 34:15) and relate this to Sarah’s observation about ‘sonic absence’ in the second podcast.”
This type of prompt requires not just knowledge of the materials but memory of specific moments and class discussions about them. The cognitive work of synthesizing across different media forms, especially when grounded in specific moments and peer observations, creates a task that would be nearly impossible to complete without genuine participation in the class experience.
The temporal dimension of classroom learning offers unique opportunities for contextual grounding. Assignments can build on the cumulative nature of semester-long discussions, requiring students to trace how their understanding has strengthened. A philosophy student might be asked to revisit their initial response to the trolley problem from the first week of class, explaining how specific readings and discussions have complicated or confirmed their intuitions.
This temporal threading creates what we might call a “cognitive trail” that is unique to each student’s journey through the course. This exemplifies the Socratic principle in practice—the student must examine their own evolving understanding, making visible the process of intellectual growth over time.
The strategic use of collaborative knowledge adds yet another dimension. When students work together on projects, presentations, or discussions, they create shared intellectual products that exist only within their group. After a group project on urbanization, individual students might be asked to critically evaluate their group’s approach, referencing specific decisions made during their meetings and explaining how alternative choices might have led to different conclusions.
A student might write: “During our second group meeting, Emma suggested we focus on transportation infrastructure as our primary lens for understanding urbanization. While this yielded interesting insights about spatial inequality, I now think James’s alternative suggestion to examine cultural institutions would have better revealed the social dynamics we were trying to understand. The moment we discovered the correlation between subway access and income segregation was genuinely surprising, but it led us to overlook the role of museums and theaters in creating cultural districts that drive gentrification through different mechanisms.”
This response shows an understanding that could only come from participation in the specific collaborative process. Building on Bakhtin’s dialogic principle, the reflection shows how understanding develops through social interaction, grounded in the specifics of their group’s unique intellectual journey.
The implementation of contextually specific assignments does require thoughtful planning from instructors. They must document classroom discussions, note particularly insightful student comments, and track the emergence of productive tangents or debates. The approach also demands clear communication about expectations. Students need to understand that engagement in classroom discussions is not optional but essential for completing assignments successfully.
The contextual specificity approach changes the economics of cheating. When assignments require extensive knowledge of classroom discussions, personal experiences, and unique course materials, the effort required to reconstruct this context for an AI becomes prohibitive. The goal is achieved not through making cheating impossible but through making it more difficult than authentic engagement.
This pedagogical strategy also shifts the value proposition of education itself. When the classroom becomes a site of unique knowledge production, when the discussions and interactions themselves become irreplaceable intellectual resources, the rationale for presence and participation becomes clear. The classroom transforms from a site of information transfer to a laboratory for collaborative thinking, where the specific chemistry of particular people engaging with particular ideas at a particular moment creates unrepeatable and valuable learning experiences.
Thank you for following Chapter 8 to its conclusion. If these asynchronous and embodied models offer a practical defense against AI, I hope you’ll continue with me as we explore the most powerful offensive strategy: synchronous, human-to-human dialogue.
Next Saturday we continue Part 3 with Chapter 9: ‘Dialogue Across the Disciplines.’ Having explored how to make the process of learning visible, we now examine how to assess the performance of understanding in real time. The chapter argues that across all academic disciplines—from STEM to the social sciences and professional schools—authentic expertise reveals itself through the irreducibly human act of thinking aloud with others. We will explore time-tested traditions like the ‘whiteboard defense,’ the ‘mock trial,’ and the ‘studio critique’ to show how they all serve the same function: making judgment, reasoning, and adaptation visible through responsive, situated dialogue.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.