The Mother of All AI-Resistant Assessments: The Design Critique
Deep Dives Into Assessment Methods for the AI Age, Part 1
I recently published an overview of 14 assessment methods that can resist the integration of AI tools into our current pedagogical environment. The article was intentionally general, simply exploring options instead of providing a detailed how-to guide. But in talking with fellow educators, I’ve found that many are interested in the practical specifics of AI-resistant assessment. They want to learn about the concrete steps required to apply these methods in actual classroom settings.
To respond to this need, I will be posting in-depth examinations of these assessment methods every other week. Each installment will take a single approach and unpack it thoroughly from its theoretical foundations to its practical implementation. This series begins, appropriately enough, with what I’ve come to think of as the mother of all AI-resistant assessment methods: the design critique, or simply, the “crit.”
This choice is deliberate. The design critique represents the fullest realization of an argument I made in The Detection Deception, which I recently serialized on this Substack. In that series, I argued we need to move toward what I called the dialogic institution, which is structured around assessment methods that place human interaction between teacher and student at the center of pedagogical interest. The goal is to make learning visible, not through the submission of polished artifacts, but through the process of thinking made audible and observable. The design critique does exactly this.
If you’re interested in following this series and staying updated on developments in AI-enhanced education, I encourage you to subscribe. These are complex topics that require our sustained attention as educators, and I’m committed to providing scholarly depth along with practical implementation guidance.
From the Atelier to Your Classroom: A Brief History
To understand why the design critique works as an assessment method, we need to understand where it comes from. The critique has roots stretching back to 19th-century Paris and the École Nationale Supérieure des Beaux-Arts, one of the premier institutions for art and design training in the Western world.
The École established a pedagogical model centered on the atelier (studio), where students worked under the guidance of a patron (master). The assessment system was ritualistic and often severe. Students were sequestered in small, unlit cubicles for up to twelve hours to produce an esquisse—a preliminary sketch made without access to reference materials or assistance. This sketch was filed away as proof of the student’s unassisted baseline capability. Then, students returned to the studio for weeks or months to complete the design, culminating in a high-stakes jury where masters judged the work behind closed doors, without the students present.
This system had a critical feature: it verified authorship. The esquisse established intent and baseline competence. The final submission had to adhere to that initial sketch. If it deviated significantly, the student was disqualified. The sheer physical labor and monitored conditions meant that authorship was rarely in doubt, even if the final judgment was subjective.
A second lineage of the design critique, more closely related to the modern crit, originates in the Bauhaus movement, founded by Walter Gropius in Weimar, Germany, in 1919. Where the École was about preservation of style and judgment by masters, the Bauhaus was about the process of discovery. The critique shifted from a private judgment of the “beautiful” to a public analysis of the “functional.” Students were expected to explain their rationale, for example, by justifying why they chose metal over wood, or by explaining why a form was geometric rather than organic. The assessment became social. It became a place where the collective intelligence of the entire studio was activated to find a solution to a specific problem.
The design critique method we use today originates from these traditions: students present and justify their work by engaging in human dialogue. The thinking behind the object matters as much as the object itself.
Why the Critique Resists Artificial Intelligence
The resilience of the critique in the face of AI stems from its performative and synchronous nature. Let me be precise about this: an AI can often generate a solution to a problem in seconds. But it cannot defend that solution against nuanced, context-specific inquiry in real time. It cannot simulate what Donald Schön, the central theorist of design education, called “reflection-in-action.”
Schön’s work in The Reflective Practitioner (1983) challenged the dominant assumption that professional practice is simply the application of scientific theory to instrumental problems. He argued that competent practitioners engage in an improvisational loop of “seeing-moving-seeing.” They make a move, observe the result, and then reframe the problem based on that result. This is fundamentally different from knowledge application. It’s a conversation with the materials, the context, and the constraints.
The design critique assesses exactly this capacity. When a student stands before peers and instructors to discuss their work, they’re demonstrating several things simultaneously: their understanding of the problem space, their ability to articulate decision-making processes, their capacity to respond to critique in real time, and their integration into the disciplinary community. Consider what happens when an instructor asks: “You mentioned you pivoted from your initial approach three weeks ago—walk us through what prompted that change and how you evaluated alternatives.” An AI cannot answer this. It has no memory of three weeks ago. It didn’t experience the pivot as a moment of cognitive struggle.
There’s also a social dimension that matters. The critique is a communal event. Students learn what constitutes “good” work not through a rubric alone, but by observing the critique of their peers, a process of socialization into the discipline. An AI cannot be socialized into the specific, localized culture of a classroom in the same way a human student can. It doesn’t pick up on the implicit values, the unstated expectations, the subtle shifts in what the community rewards. This “hidden curriculum” becomes visible in the critique, and it becomes part of what’s being assessed.
Finally, there’s the matter of improvisation. A good critique involves unexpected questions, challenges to assumptions, and requests for clarification. The student must think on their feet, defend choices, acknowledge weaknesses, and propose alternatives. This is fundamentally different from the static “submission-grade loop” that AI has disrupted. It’s assessment as performance, and performance requires presence.
The Three Forms of Critique
The research literature identifies three distinct types of design critiques, each serving different pedagogical purposes. Understanding these variations allows you to choose the approach that best fits your learning objectives.
The Desk Critique is the most informal and frequent. This is a one-on-one conversation between instructor and student, typically conducted while work is still in progress. The student’s materials are spread on a desk or displayed on a screen, and the instructor asks questions, makes observations, and guides the student’s thinking. These critiques are formative. They happen during the work, not after completion. They’re low-stakes but high-frequency, creating multiple touchpoints where the instructor can verify student understanding and trace the evolution of thought. In a typical studio course, a student might have five to ten desk critiques over the course of a project.
If you’ve ever watched the TV series “Project Runway,” you’ve seen desk critiques in action. Tim Gunn’s studio visits are textbook examples: he walks from designer to designer, asking questions about their work-in-progress, probing their decision-making, and helping them see problems they’ve overlooked, before concluding with his iconic catchphrase, “make it work.”
The Pin-Up Critique involves a small group, usually four to eight students, presenting work simultaneously. Everyone’s work is “pinned up” (whether literally on a wall or displayed on screens), and the group moves from piece to piece. Students present briefly, then receive feedback from peers and instructors. This format emphasizes comparison and pattern recognition. Students see multiple approaches to the same problem side by side, which accelerates their understanding of design possibilities. The social dynamics differ from the desk critique. There’s peer accountability, but the stakes remain moderate because the audience is small and familiar.
The Final Jury is the high-stakes, formal presentation. This is closest to the traditional Beaux-Arts model. Students present to a panel that might include external critics, practitioners, or faculty from other departments. The presentation is structured, typically with 10-15 minutes of presentation followed by 20-30 minutes of questioning. The work is complete or nearly complete. This is summative assessment, and it’s where students must demonstrate not just what they made, but why they made it, how they made it, and what they learned in the process.
Each form serves a purpose. The desk critique allows for iterative guidance and verification of the process. The pin-up develops comparative judgment and peer learning. And the jury demands synthesis and public defense. A well-designed course uses all three in sequence, building students’ capacity to articulate their thinking as the semester progresses.
Integrating the Critique into Your Syllabus
The critique works best when it’s not an isolated event but woven into the fabric of the course. This requires front-loading expectations and structuring the semester around critique cycles. Here’s what that looks like in practice.
First, introduce the critique method on day one. Students, especially those outside design disciplines, often have no experience with public presentation and critique of work-in-progress. They need to understand the purpose (making thinking visible), the format (presentation followed by dialogue), and the norms (constructive feedback, separation of work from self-worth). Consider running a practice critique in the first or second week using sample work or a low-stakes exercise.
Second, structure your assignments around critique deadlines. If you’re teaching a history course and want students to develop historical arguments over time, you might structure it this way: Week 3, desk critiques on thesis statements and source selection; Week 6, pin-up critiques on argument structures and evidence; Week 10, final jury on completed arguments. Each critique builds toward the next, and each provides documented evidence of student thinking.
Third, make the criteria explicit. What are you assessing during a critique? I recommend a rubric that balances product and process. You might assess: depth of research (30%), clarity of argument (20%), ability to articulate decision-making process (20%), response to questions and critique (20%), and contribution to peer learning (10%). The specific percentages matter less than the signal they send: process and articulation matter as much as the final product.
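To make the weighting tangible, here is a minimal sketch in Python of how such a rubric might translate per-category marks into a single score. The category names and sample marks are my own illustrative assumptions, not a prescribed scheme:

```python
# A minimal sketch of the weighted rubric above. Category names and the
# sample marks are illustrative assumptions, not a prescribed scheme.

RUBRIC = {
    "depth_of_research": 0.30,
    "clarity_of_argument": 0.20,
    "articulation_of_process": 0.20,
    "response_to_critique": 0.20,
    "contribution_to_peer_learning": 0.10,
}

def weighted_score(marks: dict[str, float]) -> float:
    """Combine per-category marks (0-100) into a single weighted score."""
    assert abs(sum(RUBRIC.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(weight * marks[category] for category, weight in RUBRIC.items())

# Example: strong process articulation can offset a rougher final product.
sample_marks = {
    "depth_of_research": 70,
    "clarity_of_argument": 65,
    "articulation_of_process": 90,
    "response_to_critique": 85,
    "contribution_to_peer_learning": 80,
}
print(f"Weighted score: {weighted_score(sample_marks):.1f}")  # -> 77.0
```

The point of the exercise is the weighting itself: a student whose product is still rough but whose process articulation and responsiveness are strong lands a respectable score, which is exactly the signal the rubric is meant to send.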
Let me offer a concrete example. Imagine you’re teaching an introductory computer science course and want to move beyond automated testing as your sole assessment method. You assign a medium-sized programming project—say, building a simple web application. You structure it this way:
Week 1: Desk critiques of system design and architecture decisions
Week 2: Pin-up critiques on initial implementation (working prototype with core features)
Week 3: Final jury on completed applications
At the Week 1 desk critique, you’re verifying that students understand the problem, have thought through design decisions, and can explain their architectural choices. You’re not grading the code yet—you’re grading their thinking. At the pin-up, students see each other’s different approaches to the same problem, and you’re assessing whether they can explain their implementation choices. At the final jury, you’re assessing both the functioning code and their ability to defend it: Why this data structure? How did you handle this edge case? What would you do differently?
The key is documentation. Record critiques (with student permission), or require students to submit written reflections after each critique session. These artifacts become part of your assessment evidence, demonstrating student growth over time.
How to Conduct a Critique Successfully
Running an effective critique requires attention to both structure and dynamics. The structure creates safety; the dynamics create learning.
Start with clear time allocations. For a desk critique, 15-20 minutes per student is typical. For a pin-up with six students, plan 90 minutes (10 minutes presentation, 5 minutes feedback per student). And for a final jury, 45-60 minutes per student (15 minutes presentation, 30-45 minutes dialogue). Respect these boundaries. Critiques that run over exhaust everyone and dilute the learning.
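If you want to sanity-check these budgets against your own roster, a small sketch like the following turns per-student allocations into total session time. The minute values are mid-range assumptions drawn from the allocations above; adjust them to your context:

```python
# A sketch that turns the per-student allocations above into total session
# time. The minute values are mid-range assumptions; adjust them to taste.

CRITIQUE_FORMATS = {
    # format: (presentation minutes, dialogue/feedback minutes) per student
    "desk":   (10, 8),   # ~18 min total, within the 15-20 minute range above
    "pin_up": (10, 5),   # 15 min total, matching the six-student, 90-minute plan
    "jury":   (15, 38),  # ~53 min total, within the 45-60 minute range above
}

def session_minutes(fmt: str, n_students: int) -> int:
    """Total session length for one critique format and class size."""
    presentation, dialogue = CRITIQUE_FORMATS[fmt]
    return (presentation + dialogue) * n_students

for fmt in CRITIQUE_FORMATS:
    total = session_minutes(fmt, 6)
    print(f"{fmt:>6}, 6 students: {total} min ({total / 60:.1f} h)")

# Scale check for a class of 30 (see the limitations section below):
print(f"desk critiques, 30 students: {session_minutes('desk', 30) / 60:.1f} h")
```

Running the scale check makes the time cost discussed later in this article concrete: thirty desk critiques at roughly 18 minutes each is a nine-hour commitment per cycle.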
Establish ground rules explicitly, especially for formal juries. The presenter speaks first, without interruption. Questions come after the presentation. Feedback should be specific, not vague (“The visual hierarchy is unclear in the top right section” rather than “This doesn’t work for me”). Comments should focus on the work, not the person. And crucially, the goal is to help the student think more clearly, not to redesign the work for them.
During the critique, your role as instructor shifts. You’re facilitating a dialogue, not delivering a lecture. Ask open-ended questions: “Walk me through your decision to pursue this approach.” “What alternatives did you consider?” “What was the hardest problem you encountered, and how did you address it?” These questions make the student’s thinking audible. They also create opportunities for the student to demonstrate learning even if the final product has flaws.
Pay attention to power dynamics. The critique can easily become a performance of instructor expertise rather than a genuine inquiry into student thinking. Combat this by directing questions to the student, not answering them yourself. Invite peer feedback before offering your own assessment. And be willing to admit uncertainty—modeling intellectual humility is itself pedagogically valuable.
Create psychological safety. Students are vulnerable when presenting incomplete work. Acknowledge this explicitly. Liz Lerman’s Critical Response Process offers a useful framework: begin by having observers state what was meaningful, interesting, or successful about the work. Then move to questions (not statements masked as questions, but genuine inquiries). Only then move to critique. This sequencing reduces defensiveness and keeps students engaged.
Document everything. Take notes during critiques. These notes become assessment artifacts, but they also help you track patterns across students. Are multiple students struggling with the same concept? That’s valuable feedback on your teaching, not just their learning.
Finally, teach students to critique each other. Peer feedback is often more specific and more actionable than instructor feedback because peers are working through the same problems. Early in the semester, model good critique. Later, step back and let students lead the dialogue. By the end of the course, students should be able to run critiques with minimal instructor intervention.
Limitations, Pitfalls, and Honest Challenges
The design critique is powerful, but it’s not a magic bullet. Let me be direct about the limitations.
First, scale. The critique is time intensive. A desk critique requires 15-20 minutes per student. If you’re teaching a class of 30, that’s 7.5-10 hours of critique time per cycle, and that doesn’t include your preparation or the students’ preparation. This is workable in a studio course where critique is the primary pedagogical mode and class sizes are small (12-20 students). It’s much harder in a large lecture course. You can mitigate this through peer critique, teaching assistants, or rotating subsets of students through formal critiques, but you can’t eliminate the time cost entirely.
Second, student readiness. Not all students arrive with the capacity for productive self-reflection or public speaking. Some find the public nature of critique deeply anxiety-inducing. Others struggle to articulate their thinking verbally, even when their work shows understanding. This isn’t a reason to abandon the critique, but it is a reason to scaffold carefully. Start with low-stakes critiques, provide sentence stems for students who struggle with verbal articulation (“I chose this approach because...” “The challenge I encountered was...”), and consider offering written reflection options as supplements.
Third, the risk of performative assessment. Students can learn to “perform” a critique—saying the right things, using the right vocabulary—without genuine understanding. An articulate student can sometimes obscure weak work through skilled presentation. This is why the critique must be combined with other forms of assessment. The final artifact still matters. The critique reveals process and thinking, but it shouldn’t be the only evidence you collect.
Fourth, bias and inequity. The critique privileges certain forms of cultural capital—verbal fluency, confidence in public speaking, or familiarity with academic discourse. Students from marginalized backgrounds may face additional barriers. International students working in a second language may struggle to articulate complex ideas under time pressure. Students with social anxiety or neurodivergence may find the public performance aspects genuinely traumatic, not just uncomfortable. You must actively work to make critiques equitable spaces, which might mean offering alternative formats, providing preparation support, or adjusting your assessment criteria to account for different communication styles.
Fifth, the double-edged nature of socialization. Earlier, I noted that the critique socializes students into disciplinary values. They learn what the community considers “good” work by observing how peers’ work is evaluated. This is powerful for building shared standards, but it can also reinforce established assumptions uncritically. Students may absorb what the community values without learning to question whether those values are sound. As an instructor, you need to make implicit criteria explicit and create space for students to challenge these assumptions, not just absorb them. The critique should develop critical thinkers, not conformists.
Finally, there’s the practical reality that some disciplines have no tradition of critique. If you’re the first instructor in your department to use this method, you’ll face skepticism from colleagues and confusion from students. You’ll need to justify it, explain it, defend it. This is manageable, but it’s work.
Your Critique Implementation Toolkit
What follows is a practical recipe for implementing design critiques in your course, regardless of discipline. Treat this as a starting framework you can adapt to your context.
Step 1: Design Your Critique Sequence (During course planning)
Map your major assignment to a three-stage critique cycle: formative desk critique at 30% completion, developmental pin-up at 70% completion, summative jury at 100% completion. Block calendar time for each critique cycle: assume 20 minutes per student for desk critiques, 15 minutes per student for pin-ups, and 45 minutes per student for juries. For classes larger than 20 students, plan rotating subgroups or peer-facilitated critique sessions.
Step 2: Establish Norms and Expectations (Day 1 and Week 2)
On the first day of class, explain why you’re using critiques and what they assess, which is not just the work, but the thinking behind it. In week two, run a practice critique using sample work (your own past student work with permission, or work you create yourself). Establish explicit ground rules: presenters speak first, uninterrupted; feedback must be specific and work-focused; questions should be genuine inquiries, not disguised criticism; and the goal is collective learning.
Step 3: Create Your Assessment Rubric (Before first critique)
Develop a rubric that weights process alongside product. A balanced rubric might include: Research depth and source quality (25%), Conceptual clarity and argumentation (25%), Articulation of process and decision-making (25%), Response to questions and integration of feedback (15%), and Contribution to peer learning (10%). Share this rubric before the first critique and reference it explicitly during assessment.
Step 4: Prepare Students for Their First Critique (One week before first critique)
Require students to submit a one-page “critique brief” 48 hours before their scheduled time. This brief should outline: the central problem they’re addressing, their current approach and its rationale, specific challenges they’re facing, and questions they have for the critique session. This preparation document serves two purposes: it focuses student thinking, and it allows you to prepare more targeted questions.
Step 5: Structure the Critique Session (During critique)
Follow this sequence religiously: (1) Student presents for allocated time without interruption (5-10 minutes for desk critiques, 10-15 for juries), (2) Observers (peers or panel) note what’s interesting or successful (2-3 minutes), (3) Observers ask clarifying questions (5-10 minutes), (4) Observers and instructor offer critique and suggestions (10-20 minutes), (5) Student has final word to respond or reflect (2-3 minutes). Take notes throughout on both content and process observations.
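If it helps to keep each slot on schedule, here is a hedged sketch that expands the five phases above into a printable running order. The phase durations are mid-range picks from the spans given and are assumptions to tune:

```python
# A sketch expanding the five-phase sequence above into a timed agenda.
# Durations are mid-range picks from the spans in the text; tune as needed.

from datetime import datetime, timedelta

PHASES = [  # (phase, minutes)
    ("Student presents, uninterrupted", 10),
    ("Observers note what is interesting or successful", 3),
    ("Observers ask clarifying questions", 8),
    ("Critique and suggestions", 15),
    ("Student has the final word", 3),
]

def print_agenda(start: datetime) -> None:
    """Print a running order with clock times for one critique slot."""
    clock = start
    for phase, minutes in PHASES:
        print(f"{clock:%H:%M}  {phase} ({minutes} min)")
        clock += timedelta(minutes=minutes)
    total = int((clock - start).total_seconds() // 60)
    print(f"{clock:%H:%M}  End of slot (total {total} min)")

print_agenda(datetime(2025, 1, 15, 9, 0))
```

A printed agenda like this also reinforces the ground rules: when the clock visibly moves from questions to critique, students learn the phases are real, not suggestions.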
Step 6: Require Post-Critique Reflection (Within 48 hours of critique)
Assign a one-page written reflection due 48 hours after the critique. Prompt students to identify: the most valuable insight from the critique session, one specific change they plan to make based on feedback, one aspect of their process they now see differently, and one question that emerged from the discussion. These reflections become assessment artifacts and help students consolidate learning.
Step 7: Document and Iterate (Throughout the term)
Keep detailed notes from each critique session, not just on individual students, but on patterns across the cohort. Are multiple students struggling with the same concept? That’s feedback on your teaching. Are certain types of questions consistently productive? Add them to your question bank. After each critique cycle, spend 15 minutes reflecting on what worked and what needs adjustment. The critique method improves through iteration.
Step 8: Build Toward Independence (Final third of the term)
As the term progresses, shift more responsibility to students. Early critiques might be instructor-led with heavy guidance. Mid-semester critiques should involve substantial peer feedback with instructor synthesis. Late-semester critiques should be peer-facilitated with instructor observation and final assessment. The goal is for students to internalize the critical dialogue so that it becomes part of their independent practice.
Why the Effort Matters
The design critique represents a fundamental shift in how we think about assessment—from verifying outputs to making thinking visible, from individual submission to communal dialogue, and from static artifacts to dynamic performance. It’s demanding work, for both instructor and students. It requires time, skill, and a willingness to relinquish some of the efficiency of automated assessment.
But here’s what it offers in return: genuine evidence of student learning, resistance to AI-generated work, development of crucial professional skills, and the restoration of human dialogue to the center of education. In an age when the artifact has become unreliable as evidence of learning, the critique offers a path forward that’s pedagogically sound and practically feasible.
The images in this article were generated with Nano Banana Pro.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.