The Augmented Educator

The Mother of All AI-Resistant Assessments: The Design Critique

Deep Dives Into Assessment Methods for the AI Age, Part 1

Michael G Wagner
Dec 12, 2025

This post follows my standard early access schedule: paid subscribers today, free for everyone on December 23.

I recently published an overview of 14 assessment methods that can resist the integration of AI tools into our current pedagogical environment. The article was intentionally general, simply exploring options instead of providing a detailed how-to guide. But in talking with fellow educators, I’ve found that many are interested in the practical specifics of AI-resistant assessment. They want to learn about the concrete steps required to apply these methods in actual classroom settings.

To respond to this need, I will be posting in-depth examinations of these assessment methods every other week. Each installment will take a single approach and unpack it thoroughly from its theoretical foundations to its practical implementation. This series begins, appropriately enough, with what I’ve come to think of as the mother of all AI-resistant assessment methods: the design critique, or simply, the “crit.”

This choice is deliberate. The design critique represents the fullest realization of an argument I made in The Detection Deception, which I recently serialized on this Substack. In that series, I argued we need to move toward what I called the dialogic institution, which is structured around assessment methods that place human interaction between teacher and student at the center of pedagogical interest. The goal is to make learning visible, not through the submission of polished artifacts, but through the process of thinking made audible and observable. The design critique does exactly this.

If you’re interested in following this series and staying updated on developments in AI-enhanced education, I encourage you to subscribe. These are complex topics that require our sustained attention as educators, and I’m committed to providing scholarly depth along with guidance on questions about practical implementation.

From the Atelier to Your Classroom: A Brief History

To understand why the design critique works as an assessment method, we need to understand where it comes from. The critique has roots stretching back to 19th-century Paris and the École Nationale Supérieure des Beaux-Arts, one of the premier institutions for art and design training in the Western world.

The École established a pedagogical model centered on the atelier (studio), where students worked under the guidance of a patron (master). The assessment system was ritualistic and often severe. Students were sequestered in small, unlit cubicles for up to twelve hours to produce an esquisse—a preliminary sketch made without access to reference materials or assistance. This sketch was filed away as proof of the student’s unassisted baseline capability. Then, students returned to the studio for weeks or months to complete the design, culminating in a high-stakes jury where masters judged the work behind closed doors, without the students present.

This system had a critical feature: it verified authorship. The esquisse established intent and baseline competence. The final submission had to adhere to that initial sketch. If it deviated significantly, the student was disqualified. The sheer physical labor and monitored conditions meant that authorship was rarely in doubt, even if the final judgment was subjective.

A second lineage of the design critique, one more closely related to today’s practice, originates in the Bauhaus movement, founded by Walter Gropius in Weimar, Germany, in 1919. Where the École was about preservation of style and judgment by masters, the Bauhaus was about the process of discovery. The critique shifted from a private judgment of the “beautiful” to a public analysis of the “functional.” Students were expected to explain their rationale, for example, by justifying why they chose metal over wood, or by explaining why a form was geometric rather than organic. The assessment became social. It became a place where the collective intelligence of the entire studio was activated to find a solution to a specific problem.

The design critique method we use today originates from these traditions: students present and justify their work by engaging in human dialogue. The thinking behind the object matters as much as the object itself.

Why the Critique Resists Artificial Intelligence

The resilience of the critique in the face of AI stems from its performative and synchronous nature. Let me be precise about this: an AI can often generate a solution to a problem in seconds. But it cannot defend that solution against nuanced, context-specific inquiry in real time. It cannot simulate what Donald Schön, the central theorist of design education, called “reflection-in-action.”

Schön’s work in The Reflective Practitioner (1983) challenged the dominant assumption that professional practice is simply the application of scientific theory to instrumental problems. He argued that competent practitioners engage in an improvisational loop of “seeing-moving-seeing.” They make a move, observe the result, and then reframe the problem based on that result. This is fundamentally different from knowledge application. It’s a conversation with the materials, the context, and the constraints.

The design critique assesses exactly this capacity. When a student stands before peers and instructors to discuss their work, they’re demonstrating several things simultaneously: their understanding of the problem space, their ability to articulate decision-making processes, their capacity to respond to critique in real-time, and their integration into the disciplinary community. Consider what happens when an instructor asks: “You mentioned you pivoted from your initial approach three weeks ago—walk us through what prompted that change and how you evaluated alternatives.” An AI cannot answer this. It has no memory of three weeks ago. It didn’t experience the pivot as a moment of cognitive struggle.

There’s also a social dimension that matters. The critique is a communal event. Students learn what constitutes “good” work not through a rubric alone, but by observing the critique of their peers, a process of socialization into the discipline. An AI cannot be socialized into the specific, localized culture of a classroom in the same way a human student can. It doesn’t pick up on the implicit values, the unstated expectations, the subtle shifts in what the community rewards. This “hidden curriculum” becomes visible in the critique, and it becomes part of what’s being assessed.

Finally, there’s the matter of improvisation. A good critique involves unexpected questions, challenges to assumptions, and requests for clarification. The student must think on their feet, defend choices, acknowledge weaknesses, and propose alternatives. This is fundamentally different from the static “submission-grade loop” that AI has disrupted. It’s assessment as performance, and performance requires presence.

The Three Forms of Critique

The research literature identifies three distinct types of design critiques, each serving different pedagogical purposes. Understanding these variations allows you to choose the approach that best fits your learning objectives.

The Desk Critique is the most informal and frequent. This is a one-on-one conversation between instructor and student, typically conducted while work is still in progress. The student’s materials are spread on a desk or displayed on a screen, and the instructor asks questions, makes observations, and guides the student’s thinking. These critiques are formative. They happen during the work, not after completion. They’re low-stakes but high-frequency, creating multiple touchpoints where the instructor can verify student understanding and trace the evolution of thought. In a typical studio course, a student might have five to ten desk critiques over the course of a project.

If you’ve ever watched the TV series “Project Runway,” you’ve seen desk critiques in action. Tim Gunn’s studio visits are textbook examples of the desk critique: he walks from designer to designer, asking questions about their work in progress, probing their decision-making, and helping them see problems they’ve overlooked, before concluding with his iconic catchphrase, “make it work.”

The Pin-Up Critique involves a small group, usually four to eight students, presenting work simultaneously. Everyone’s work is “pinned up” (whether literally on a wall or displayed on screens), and the group moves from piece to piece. Students present briefly, then receive feedback from peers and instructors. This format emphasizes comparison and pattern recognition. Students see multiple approaches to the same problem side by side, which accelerates their understanding of design possibilities. The social dynamics differ from the desk critique. There’s peer accountability, but the stakes remain moderate because the audience is small and familiar.

The Final Jury is the high-stakes, formal presentation. This is closest to the traditional Beaux-Arts model. Students present to a panel that might include external critics, practitioners, or faculty from other departments. The presentation is structured, typically with 10-15 minutes of presentation followed by 20-30 minutes of questioning. The work is complete or nearly complete. This is summative assessment, and it’s where students must demonstrate not just what they made, but why they made it, how they made it, and what they learned in the process.

Each form serves a purpose. The desk critique allows for iterative guidance and verification of the process. The pin-up develops comparative judgment and peer learning. And the jury demands synthesis and public defense. A well-designed course uses all three in sequence, building students’ capacity to articulate their thinking as the semester progresses.

Integrating the Critique into Your Syllabus

The critique works best when it’s not an isolated event but woven into the fabric of the course. This requires front-loading expectations and structuring the semester around critique cycles. Here’s what that looks like in practice.
