From the Classroom to the Institution
The Detection Deception, Chapter 12
Fellow Augmented Educators,
Welcome to the final installment of ‘The Detection Deception’ book serialization. For the past eleven weeks, we have journeyed together through the crisis facing higher education in the age of artificial intelligence—from the collapse of traditional assessment to the surveillance state that replaced it, from the banking model that makes us vulnerable to the dialogic alternative that might save us. This final chapter zooms out from individual classroom practice to ask the harder question: can our institutions themselves change? Can universities move from panic and enforcement to coherence and cultivation?
Chapter 12 follows Maria through her schizophrenic day—encouraged to use AI in one class, surveilled for using it in another, lost in moral ambiguity in a third. It argues that even the most innovative instructors cannot succeed in isolation. Institutional transformation requires clear principles, honest budgeting, and the courage to invest in human relationship over technological surveillance. The choice before us is stark: continue the arms race until our institutions are hollow shells, or rebuild our foundations on the bedrock of trust.
This concludes the serialization of ‘The Detection Deception’. Thank you for reading, questioning, and wrestling with these ideas alongside me.
Michael G Wagner (The Augmented Educator)
Contents
Chapter 1: The Castle Built on Sand
Chapter 2: A History of Academic Dishonesty
Chapter 3: The Surveillance Impasse
Chapter 4: Making Thinking Visible
Chapter 5: The Banking Model and Its Automated End
Chapter 6: Knowledge as a Social Symphony
Chapter 7: A Unified Dialogic Pedagogy
Chapter 8: Asynchronous and Embodied Models
Chapter 9: Dialogue Across the Disciplines
Chapter 10: The AI as a Sparring Partner
Chapter 11: Algorithmic Literacy
Chapter 12: From the Classroom to the Institution
Chapter 12: From the Classroom to the Institution
Maria sits in the back row of her 9:00 AM lecture, “Introduction to Environmental Science,” her laptop open, her cursor blinking on a blank document. The professor, a distinguished ecologist with thirty years of field experience, is projecting a slide with a QR code in the corner. “For today’s data analysis,” he announces, “I want you all to use the custom GPT I’ve linked here. Upload the raw climate data from the Excel sheet, ask it to identify the three primary anomalies in the precipitation patterns, and then—this is the important part—I want you to argue with it. Tell it why its analysis of the ‘98 El Niño event is simplistic.”
Maria feels a surge of engagement. This is what she came to university for: using cutting-edge tools to wrestle with complex, real-world problems. She uploads the data, watches the analysis scroll by, and begins crafting a prompt to challenge the AI’s conclusions. The room hums with the sound of two hundred students thinking, typing, and engaging. She watches the AI generate its initial interpretation, notes where it oversimplifies the relationship between sea surface temperatures and continental precipitation, and begins formulating her counterargument. The professor circulates through the lecture hall, occasionally stopping to look at a student’s screen, asking probing questions: “Why do you think the model weighted that variable so heavily? What assumptions might be baked into its training data?” This is education that feels alive, contemporary, oriented toward the problems that will define her generation’s future.
Two hours later, Maria walks across the quad to her 11:00 AM course, “Western Civilization: 1648 to Present.” She sits in a similar seat, opens the same laptop, and prepares to take notes. The syllabus for this course, distributed on the first day in hard copy, contains a block of text in bold, red font: “Zero Tolerance Policy: The use of ChatGPT, Claude, Grammarly, or any other AI-assisted software for any stage of writing, brainstorming, or editing is strictly prohibited. Any student found using these tools will receive an automatic zero and be referred to the Office of Academic Integrity.”
Maria feels a knot tighten in her stomach. She closes the tab with the climate analysis tool, paranoid that merely having it open in the background might somehow flag her on the university network. She listens as the professor lectures on the Enlightenment, scribbling notes by hand because she is afraid her typing speed might look “suspiciously fast” to the TAs patrolling the aisles. The irony is not lost on her: she is learning about the Age of Reason in an atmosphere of suspicion, studying the philosophers who championed rational inquiry while experiencing the panoptic gaze of a surveillance state. When she raises her hand to ask a question about Voltaire’s critique of religious authority, she wonders if the professor notices her anxiety, mistakes her nervousness for lack of preparation. The same laptop that was a gateway to intellectual engagement two hours ago has become a potential instrument of her academic destruction.
By 2:00 PM, she is in her creative writing workshop. The instructor here has no written policy at all. When a student raises a hand to ask if they can use AI to generate character names, the instructor sighs, looks out the window, and says, “I don’t know. Just... use your best judgment. Try to be authentic.” The vagueness feels almost worse than the prohibition. Maria has spent the past semester listening to this instructor talk about the importance of “finding your voice,” but when it comes to the technological reality that now permeates every aspect of composition, the guidance evaporates into platitudes. What does “authentic” mean when the tools we use to think are changing the nature of thought itself? Maria leaves the workshop with a story draft she’s afraid to polish, uncertain whether revising with grammar-checking software would constitute a betrayal of some undefined principle.
The Schizophrenic Campus
Maria walks back to her dorm room that evening, exhausted not by the intellectual rigor of her coursework, but by the cognitive load of navigating three completely different legal systems in the span of six hours. She is living in a schizophrenic institution. In the morning, she is a collaborator with machine intelligence, encouraged to push against the boundaries of what these tools can do, to question their outputs and improve her analytical thinking through the friction of disagreement. At lunch, she is a suspect in a surveillance state, monitored and presumed guilty until proven innocent, her every keystroke potentially subject to algorithmic interrogation. By dinner, she is wandering in a moral vacuum, given responsibility for ethical decisions without clear frameworks or institutional support. The whiplash between these modes is not merely inconvenient. It represents a fundamental failure of the institution to articulate what it values, what it believes education means, and how its practices align with its stated mission.
This fragmentation is the defining characteristic of the modern university’s response to artificial intelligence. It is chaos born from the collision between a transformative technology and bureaucratic systems designed for a slower, more stable era. As we have explored in previous chapters, the transformation of education requires a shift from product to process, from banking to dialogue, and from surveillance to trust. But these shifts cannot happen in isolated classrooms, no matter how innovative individual instructors might be. Even the most creative teachers, those capable of implementing the Socratic method or whiteboard defenses described earlier, cannot solve this systemic crisis on their own. Their innovations remain vulnerable, fragile experiments that can be undermined or contradicted by the larger institutional context.
An individual teacher may successfully create a “sparring partner” dynamic in their seminar, establishing a sanctuary of genuine inquiry where students learn to use AI as a tool for intellectual development rather than intellectual replacement. But if the institution surrounding them remains committed to the logic of industrial credential production and the bureaucratic policing of behavior, that innovation will remain a fragile island in a rising sea of compliance. The disconnect Maria feels—the bewildering landscape where rules change from door to door—is not just a nuisance or an administrative oversight. It is a structural failure that undermines the very possibility of coherent education. When an institution cannot articulate a consistent vision of what learning means or how students should develop intellectually, it cannot credibly claim to be educating anyone.
To reclaim the university in the age of AI, we must shift our gaze from the micro-level of the syllabus to the macro-level of the institution. We must examine the structures that hold the “Castle Built on Sand” together even as the tide comes in. This chapter outlines a path from institutional incoherence to a new kind of integrity. It demands that we move from the chaotic enforcement of rules to the coherent cultivation of principles. It asks us to look at the budget not as a spreadsheet, but as a moral document that reveals our actual priorities—whether we invest in the illusion of technological security or the reality of human mentorship. And it challenges us to view the university not as a fortress protecting credentials, but as an open space cultivating wisdom.
The Architecture of Panic
When ChatGPT burst onto the scene in late 2022, the institutional response was visceral. University presidents, provosts, and deans looked at the technology and saw an existential threat. If a machine could produce the currency of the realm (the essay, the problem set, the code) at zero cost and infinite speed, what was the value of the degree they were selling? The question was not entirely new. Versions of it had emerged with every previous technological disruption: the calculator, the spell-checker, or the internet itself. But the scale and sophistication of large language models made the challenge qualitatively different. These tools could mimic not just the surface features of student work but something that appeared to approximate genuine reasoning.
The initial response was a mixture of panic and paralysis. Like a biological organism reacting to a foreign pathogen, the institutional immune system kicked in. We saw the immediate implementation of “blanket bans,” where universities attempted to block access to AI. This was a move as effective as trying to ban calculators by locking the math department doors. Students accessed the tools on their phones, through VPNs, or simply off-campus. The bans created an illusion of control while actually driving AI use underground, making it impossible for institutions to have honest conversations about appropriate use. When the futility of prohibition became clear, the panic mutated into a desperate search for a technological shield.
This led to the “arms race” described in Chapter 3, where millions of dollars flowed into detection software that promised to distinguish human from machine. Vendors made bold claims about accuracy rates, about proprietary algorithms that could identify the “statistical fingerprints” of AI-generated text. Universities desperate for any solution that didn’t require rethinking their entire assessment model bought these promises eagerly. They purchased a placebo sold as a universal cure. The result was the surveillance infrastructure that now shadows students like Maria: every essay scanned, every paragraph scrutinized, every submission generating a “probability score” that purports to measure authenticity. As we have seen, these systems are fundamentally unreliable, producing false positives that destroy student trust while missing actual cases of misconduct. But they serve an important institutional function: they allow administrators to claim they are “doing something” about the problem without confronting the deeper questions about what education means in this new context.
Perhaps the most damaging reaction, though, was what might be called “syllabus devolution.” Realizing they could not enforce a campus-wide ban and uncertain about which policies would be effective or even legal, many administrations swung to the opposite extreme. They declared that AI policy was a matter of “academic freedom” and devolved all responsibility to individual faculty members. This move was often framed in the language of professorial autonomy, respecting the diverse pedagogical needs of different disciplines and instructional styles. In principle, the argument had merit: a computer science professor teaching machine learning might legitimately want students to use AI tools in ways that a philosophy professor teaching ethics would not.
On the surface, this sounded respectful of faculty autonomy. In practice, it was a surrender of leadership. It transformed every adjunct instructor, every graduate teaching assistant, and every tenure-track professor into an isolated policy expert responsible for navigating questions that legal scholars, ethicists, and education researchers are still struggling to answer. A professor of medieval history was suddenly expected to become an expert on large language models, data privacy, and the ethics of algorithmic bias. They were forced to craft legalistic policy statements for their syllabi, adjudicate complex cases of potential misconduct, and navigate the emotional fallout of accusing students—all without clear institutional backing, legal guidance, or meaningful support.
The practical consequences of this devolution have been devastating. Faculty report spending hours drafting AI policies for their syllabi, trying to anticipate every possible misuse while leaving room for legitimate applications. They describe the anxiety of trying to determine whether a student’s essay was AI-assisted, knowing that an accusation could destroy a student’s academic career but that failing to act might undermine academic standards. Graduate teaching assistants, often only a few years older than their students and managing their own precarious positions within academic hierarchies, find themselves making high-stakes decisions about academic integrity without training or institutional protection. The burden is particularly acute for contingent faculty—adjuncts and lecturers—who often teach the majority of undergraduate courses but have the least institutional power and the most precarious employment status. When these instructors raise concerns about possible AI misuse, they risk being seen as difficult or out of touch; when they don’t, they risk being blamed for declining standards.
This devolution is what created Maria’s schizophrenic day. It effectively privatized the crisis, leaving students to navigate a minefield where the definition of “cheating” shifts arbitrarily based on which room they occupy. The consequences extend beyond mere inconvenience. This inconsistency erodes the moral authority of the institution at a fundamental level. When a student is told that using an AI to outline an essay is “smart workflow” in one class and “academic dishonesty” in another, they cease to view integrity as a moral value grounded in principles and begin to view it as a game of compliance based on arbitrary rules. They stop asking, “Is this right?” and start asking, “What can I get away with in this specific room?” The shift is subtle but corrosive. It transforms ethical reasoning into strategic calculation, replacing the internalized values that should guide intellectual life with the external enforcement mechanisms that characterize bureaucratic control.
Moreover, the inconsistency creates a kind of learned helplessness among students. When the rules change from course to course, semester to semester, with no underlying logic or coherent rationale, students stop trying to understand the principles and simply wait to be told what to do. This is precisely the opposite of what education should cultivate. We want students to develop independent judgment, to internalize the values of scholarly integrity, to understand why certain practices matter rather than simply following orders. But in an environment of radical policy inconsistency, that kind of moral development becomes nearly impossible.
Moving from this chaos to coherence requires a fundamental shift in governance. We must stop trying to regulate the tools, which change every week as new models are released and new capabilities emerge, and start regulating the principles that should endure for generations. We need a governance model that is less like a penal code, with its exhaustive enumeration of prohibited behaviors, and more like a constitution that articulates foundational values and allows for a reasonable interpretation in specific contexts.
The Three Principles of the Dialogic Institution
To heal the fractured campus, institutions must establish a set of guiding pedagogical principles that provide a “north star” for decision-making across all disciplines and departments. These principles should be flexible enough to accommodate disciplinary differences—allowing the computer scientist and the poet to teach differently, recognizing that the appropriate use of AI in a machine learning course differs from its appropriate use in a literary analysis seminar. Yet they must be rigorous enough to maintain a shared standard of intellectual integrity that transcends individual preferences or departmental silos.
Principle 1: Cognitive Effort as the Metric of Value
The first principle must be an assertion of value: “The work students submit for assessment must represent their own cognitive effort and intellectual growth.” This formulation is deliberately simple, but it cuts through much of the confusion that AI proponents often raise about “efficiency” and “productivity.” In the professional world, efficiency is often the primary goal. If an AI can write a memo faster and more effectively than a human, the organization benefits from using the AI for that task. The measure of success is the quality of the output, not the developmental process that created it.
But the university is not a consultancy, and education is not industrial production. The goal of a history essay is not to have a document about the French Revolution sitting in a filing cabinet or posted to a learning management system. The goal is for the student to struggle with the causality of the French Revolution, to grapple with conflicting historical interpretations, to practice the synthesis of complex evidence, to develop the ability to construct and defend an argument. The essay is simply evidence that this cognitive work occurred; it is a byproduct of learning, not the learning itself.
By enshrining cognitive effort as the metric of value, the institution clarifies that using AI to bypass the struggle of learning is counterproductive not because it violates an arbitrary rule, but because it subverts the fundamental purpose of education. This distinction matters enormously. When students understand that restrictions on AI use are not about institutional control or technophobia but about protecting their own intellectual development, the entire conversation changes. Resistance becomes less likely because the rationale is transparent and student-centered.
This principle empowers faculty to reject the banking model of education at an institutional level. It allows a professor to say to Maria, “I am asking you to write this without AI assistance not because I hate technology or want to make your life harder, but because the neural pathways formed by struggling with this syntax, by wrestling with these concepts, by organizing these arguments—that cognitive development is what you are paying for. That is the actual product we are offering.” It reframes the restriction of AI from a policing action into a pedagogical necessity, from an arbitrary prohibition into a logical consequence of what education means.
Principle 2: Process Transparency
The second principle addresses the changed evidentiary landscape: “Students must be able to demonstrate the process through which their work was created.” This principle acknowledges a fundamental shift in academic culture. In the past, we operated on a trust-based assumption: if a name was on the paper, that person did the work. That assumption no longer holds. The ease with which AI can generate plausible academic work means we can no longer infer authorship from output alone.
However, this does not mean we must default to a surveillance model where every student is presumed guilty until proven innocent. Instead, we can adopt the scientific principle of replicability and extend it to all disciplines. In the sciences, we have long required researchers to document their methods so that others can verify and reproduce their results. The expectation is not punitive; it is simply how knowledge is validated. We can apply this same principle to undergraduate education across all fields.
Process transparency means students should be able to show how they arrived at their conclusions, making visible the intellectual journey that produced their final work. This might take different forms in different contexts, such as drafts, research notes, problem-solving logs, or reflective statements, but the underlying expectation remains consistent: scholarly work should be demonstrable, not just declarative.
Principle 3: Tool Disclosure and Contextual Appropriateness
The third principle recognizes that blanket bans are both impractical and pedagogically limiting: “Students must disclose what tools they have used and demonstrate an understanding of how those tools function and what limitations they have.” This principle allows for the possibility that AI might be appropriately used in certain contexts while maintaining standards of intellectual integrity.
The requirement for disclosure serves multiple purposes. Most immediately, it prevents deception. If a student has used AI to help with research, to generate initial drafts of code, to check grammar, or to explore ideas, that fact should be stated openly. The disclosure itself does not determine whether the use was appropriate, because that depends on the specific assignment and disciplinary context, but it ensures that the work is being evaluated honestly.
More importantly, the requirement to demonstrate understanding addresses the black-box problem we identified in earlier chapters. Students can use tools they do not fully understand, generating outputs they cannot explain or defend. By requiring students to articulate how their tools function and what limitations those tools have, we ensure tool use supports learning rather than replacing it. A student who has used an AI to help debug code should be able to explain what the AI did, why its suggestion worked, and what alternatives might have been considered. A student who has used an AI to generate research questions should be able to evaluate those questions critically, identifying which ones are productive and which ones reflect limitations in the model’s training data.
This principle also builds critical digital literacy into every discipline. Students develop the habit of asking about the tools they use: What data was this trained on? What biases might be embedded in its outputs? When is this tool helpful, and when might it lead me astray? These questions are increasingly essential in a world where algorithmic systems mediate access to information, shape public discourse, and influence decision-making in domains from healthcare to criminal justice.
Together, these three principles create a framework that is both flexible and principled. They do not prescribe exactly how every professor must teach or exactly which tools students may use. Instead, they articulate the values that should guide those decisions: the primacy of cognitive development, the importance of a transparent process, and the necessity of understanding the tools we employ. A computer science professor might allow extensive AI use as long as students can explain their code and demonstrate that the learning objectives of the course have been met. A literature professor might prohibit AI use in close reading exercises while allowing it for preliminary research. The specific applications vary, but the underlying principles remain constant.
These institutional principles provide the governance foundation for the classroom practices introduced in Chapter 10. Principle 1 (Cognitive Effort as the Metric of Value) validates the “Main Event” approach, giving faculty the institutional backing to design assignments that prioritize cognitive struggle over polished products. Principle 2 (Process Transparency) mandates and normalizes the “Sparring Logs” and process documentation that allow students to demonstrate their thinking. Principle 3 (Tool Disclosure and Contextual Appropriateness) creates the framework within which the “Reflection” component can help students develop metacognitive awareness about their tool use. The institution thus scales what the individual instructor practices, transforming isolated innovations into a coherent institutional culture.
The Resource Question: Budgets as Moral Documents
Institutional principles, however eloquently stated, mean nothing without the resources to enact them. This is where the conversation about AI in education often becomes most uncomfortable, forcing institutions to confront questions they would prefer to avoid: What do we actually value? Where are we willing to invest? What trade-offs are we prepared to make?
Dr. Beatrice Thompson, associate provost for academic affairs, sits in her office on a Tuesday afternoon staring at the budget spreadsheet on her screen. The university has just renewed its contract with a detection software vendor: $100,000 per year for a system that, as she has learned from countless faculty complaints, produces false positives at an alarming rate while missing actual misconduct. She thinks about the memo she’s been drafting, the one she’s been too cautious to send. She picks up her pen.
She knows that sending this memo is a political risk. The board loves the certainty of software contracts, the measurable metrics, the appearance of technological sophistication. They are skeptical of what they call “soft” investments like teaching assistants and writing tutors—human resources that don’t come with neat dashboards or quarterly performance reports. But she drafts it anyway, her pen moving across the yellow legal pad:
Subject: Reallocating Integrity Resources from Detection to Cultivation
The current expenditure on AI detection software creates an illusion of security while failing to address the underlying pedagogical challenges. I propose we redirect these funds toward hiring three additional teaching assistants for our highest-enrollment humanities courses, enabling smaller discussion sections and implementing oral examinations and process-based assessment.
The memo goes on to outline the staffing ratios that would make authentic assessment feasible and the reduction in class sizes necessary for faculty to actually know their students. It’s not a complete solution, since $100,000 doesn’t transform an entire institution, but it’s a start, a pilot program that could demonstrate what becomes possible when we invest in human relationships rather than technological surveillance.
Beatrice hesitates before hitting send, knowing this challenges the institutional assumptions that have guided resource allocation for the past decade. But she sends it.
Consider the choice Beatrice is articulating. A university spends $100,000 on detection software, licensing access for all faculty and integrating it into the learning management system. This expenditure is easy to justify to a board of trustees or to legislators: it is a concrete action, a measurable intervention, a technological solution to a technological problem. It creates the appearance of institutional competence and control. The fact that the software does not reliably work, that it produces false positives that destroy student trust while missing actual misconduct, is less visible and therefore less politically costly.
Now consider an alternative: take that same $100,000 and hire three additional teaching assistants or adjunct instructors for the departments with the highest student-to-faculty ratios. This would allow for smaller discussion sections, more frequent writing assignments with substantive feedback, and the possibility of oral examinations or one-on-one conferences. It would create the conditions under which authentic assessment becomes feasible, where faculty can actually know their students well enough to recognize their authentic voice and intellectual capabilities.
This second option is pedagogically superior by almost any measure. It addresses not just the AI problem but a host of related issues: grade inflation driven by overworked faculty, the erosion of writing skills because students receive insufficient feedback, and the alienation students feel in massive courses where they are anonymous. Yet this option is much harder to sell institutionally. It is ongoing rather than one-time, labor-intensive rather than automated, and it acknowledges that the core problem is not technological but relational: we have lost the human connections that make education possible.
The choice between surveillance technology and human mentorship is not merely practical; it is profoundly moral. It reveals what an institution actually believes about education. When a university invests in detection software, it implicitly endorses a model of education premised on distrust. It signals to students that they are seen primarily as potential cheaters who must be monitored and caught. It tells faculty that their professional judgment is insufficient, that they need algorithmic assistance to determine whether student work is authentic. This creates a self-fulfilling prophecy: students, treated as untrustworthy, behave untrustworthily; faculty, told they cannot distinguish student writing from machine writing, lose confidence in their own expertise.
When a university instead invests in reducing class sizes, increasing contact hours, and creating space for meaningful interaction, it endorses a different model entirely. It signals trust in students as developing scholars capable of intellectual growth. It respects faculty expertise and creates the conditions under which that expertise can be exercised. It acknowledges that education is fundamentally a human relationship, not an industrial process or an information transfer protocol. Most importantly, it makes authentic assessment possible without resorting to surveillance. When a professor has worked with a student throughout a semester, reading multiple drafts, discussing ideas in office hours, observing participation in seminars, they develop a rich understanding of that student’s capabilities, interests, and intellectual voice. They can recognize authentic work because they know the student.
The resource question extends beyond simple hiring decisions. It encompasses how we structure faculty work, what we reward, and what we marginalize. Currently, most universities operate with a research-teaching hierarchy where scholarly publication is the primary criterion for tenure and promotion, while teaching excellence is acknowledged but rarely determinative. This creates perverse incentives. Faculty who invest heavily in developing innovative pedagogies, who spend time creating authentic assessments, and who meet extensively with students are implicitly punished for these activities because they detract from research productivity.
If institutions are serious about transforming education in response to AI, they must realign these incentives. Teaching innovation must become a legitimate path to professional advancement. Faculty who develop and share effective strategies for authentic assessment should be recognized and rewarded. Those who work to build departmental cultures of pedagogical excellence should see that work count toward tenure. Without this institutional realignment, even the most committed individual instructors will face pressure to revert to efficient but ineffective assessment methods.
The resource question also implicates our admissions practices and enrollment management. Many institutions have systematically increased enrollment without proportionally increasing faculty or support staff, reasoning that the marginal cost of an additional student is low. This logic treats education as a scalable commodity: lectures can be recorded and streamed, discussion sections can grow from fifteen to thirty students, papers can be graded by overworked teaching assistants. But education is not scalable in this way without fundamentally changing its nature. As classes grow larger, assessment becomes more superficial, feedback becomes more generic, and the possibility of a genuine intellectual relationship disappears. In this environment, the temptation to use AI becomes overwhelming—for students seeking to meet impossible demands and for faculty seeking to manage unmanageable workloads.
Addressing the AI challenge authentically might require institutions to cap enrollments, to refuse to grow beyond the scale at which genuine education is possible. This is an enormously difficult choice, particularly for public universities facing pressure from state legislatures to increase access and for private universities competing in rankings partially determined by selectivity metrics. But it may be necessary if we are serious about preserving what makes education valuable.
From Enforcement to Cultivation: Changing the Institutional Culture
Even with clear principles and adequate resources, institutional transformation requires a fundamental shift in culture. It requires changing how we talk about students, how we respond to challenges, and how we involve the community in governance. The current culture, particularly around academic integrity, is overwhelmingly punitive. It emphasizes detection, punishment, and enforcement. We see this in the language used: students “cheat,” faculty “catch” them, and institutions “discipline” them. The entire apparatus is oriented toward identifying and punishing violations rather than cultivating integrity.
This enforcement culture creates many problems. It positions students and faculty as adversaries rather than collaborators in the educational enterprise. It treats integrity as mere compliance with rules rather than as a positive intellectual virtue to be developed. It focuses attention on edge cases and violations rather than on the majority of students who are trying to learn honestly. Perhaps most perversely, it encourages students to view academic integrity instrumentally as a code to be followed to avoid punishment, rather than intrinsically as a set of practices that serve their own intellectual development.
A cultivation culture would operate differently. It would begin with the assumption that most students want to learn and want to maintain their integrity, but they need support, guidance, and clear expectations to do so successfully. It would view faculty not as police but as mentors responsible for inducting students into scholarly communities. It would emphasize education over punishment, restoration over expulsion, and development over enforcement.
This shift has implications for every aspect of institutional practice. Consider how institutions typically communicate about academic integrity. Most students encounter it first during orientation, usually in the form of a stern lecture from a dean about the consequences of plagiarism. The message is unambiguous: we do not trust you, and we are watching you. Students are required to sign honor codes or integrity pledges, performative rituals that research suggests have minimal impact on actual behavior. The entire experience is framed negatively: here are the things you must not do, here are the terrible consequences if you do them, here is how we will catch you.
Under a cultivation model, orientation would look entirely different. Rather than threatening students, institutions would invite them into scholarly communities and explain what membership in those communities entails. Students would learn about intellectual ownership not as a legal constraint but as a practice that enables genuine dialogue and cumulative knowledge building. They would be introduced to process documentation not as a surveillance mechanism but as a tool for metacognition and intellectual development. They would practice distinguishing their own voice from their sources, their own thinking from algorithmic outputs. The frame would shift from prohibition to possibility, from what you cannot do to what you are learning to do.
This cultivation approach extends to how institutions respond when students do misuse AI—and they will, because they are learning and because learning involves mistakes. The current model is overwhelmingly punitive. Many institutions have implemented “zero tolerance” policies where any use of AI, even for preliminary brainstorming, results in failing the assignment or the course. First offenses often lead to suspension or expulsion, academic capital punishment for what might be a misunderstanding of expectations or a moment of desperation under pressure.
These harsh responses are counterproductive on multiple levels. They treat all violations as equivalent, making no distinction between a student who uses AI to generate an entire essay while claiming it as their own work and a student who uses grammar-checking software without realizing it incorporates AI. They leave no room for educational response, for helping students understand what they did wrong and how to do better. They create an atmosphere of terror rather than trust, where students are afraid to ask questions or seek clarification about what is permitted. And they often fall most heavily on the most vulnerable students: those from backgrounds where academic conventions are less familiar, those struggling with language barriers, or those managing overwhelming personal circumstances.
A dialogic institution would employ restorative rather than purely punitive approaches to academic integrity violations. When a student misuses AI, the first response should not be automatic expulsion but investigation into the context and motivation. Why did the student turn to the tool? Was it a genuine misunderstanding of the policy? Was it panic driven by an overwhelming workload? Was it a lack of confidence in their own abilities? Was it ignorance about how to approach the assignment? Each of these situations calls for a different response.
For many violations, particularly first offenses or cases involving genuine confusion, the appropriate response might be educational rather than punitive. The student might redo the assignment with extensive process documentation, demonstrating their learning while proving their capability. They might write a reflective essay exploring what was lost by outsourcing the cognitive work, helping them understand why the prohibition exists. They might attend workshops on academic integrity, research methods, or effective AI use. They might work with a writing tutor to develop the skills they lacked. These interventions turn the error into a learning opportunity. They signal that the institution cares more about the student’s development than about the purity of its statistical reporting to accrediting agencies.
This is not to suggest that all violations should be treated lightly or that serious misconduct should go unpunished. Students who deliberately attempt to defraud the system, particularly in repeated instances after clear warnings, may indeed warrant serious sanctions. But the default response should be proportional and educational, reserving harsh punishment for cases where it is genuinely warranted.
The cultivation approach also requires rethinking how institutions involve students in policy development. Currently, most AI policies are developed through a top-down process. Administrators, perhaps consulting with faculty committees, draft policies that are then announced to students. This is the banking model applied to governance: students are empty vessels waiting to receive the new rules, expected to comply without meaningful input into their creation.
A dialogic institution would approach policy development differently. It would create forums for genuine community deliberation, what might be called Dialogic Councils or Community Inquiry Groups. These bodies would include not just administrators and faculty but students who actually use these tools daily, staff members who support student learning, librarians who understand information literacy, and writing center tutors who witness student struggles firsthand. The group would include both skeptics and enthusiasts, creating space for genuine disagreement and dialogue.
Instead of issuing edicts, such councils would pose questions to the broader community: What does it mean to be an author in an age of AI assistance? What skills do we want to preserve as central to education? What forms of drudgery are we comfortable automating? What capabilities must students develop through direct practice rather than through tool use? How do we balance efficiency with developmental necessity? These are complex questions without obvious answers, and engaging with them requires the kind of critical thinking that universities exist to cultivate.
This participatory approach serves multiple purposes. Practically, it produces better policies because it draws on diverse perspectives and expertise rather than the limited viewpoint of any single constituency. Politically, it creates buy-in and legitimacy; people are more likely to accept and follow policies they helped create. And educationally, it models the intellectual practices universities should embody: careful reasoning, respectful disagreement, evidence-based argument, and revision in light of new information. It shows students that the institution itself is a learner, capable of acknowledging uncertainty and changing course when necessary.
The shift from enforcement to cultivation must permeate the entire student lifecycle, from orientation through graduation. Currently, academic integrity is treated as a discrete topic, usually addressed in a single session during orientation and then only revisited when violations occur. Under a cultivation model, it would be woven throughout the curriculum. First-year seminars would include units on scholarly practices, intellectual ownership, and effective tool use. Discipline-specific courses would address the particular integrity challenges of their fields: what constitutes collaboration versus collusion in computer science, how to attribute ideas in philosophy, or what counts as original analysis in history. Capstone experiences would include explicit attention to professional ethics and the responsibilities that come with expertise.
This curricular integration sends a clear message: integrity is not a bureaucratic requirement but a core intellectual value, essential to every field of study and every form of knowledge creation. It helps students understand that the principles they learn apply beyond the artificial context of academic assignments, that habits of careful attribution and transparent process will matter throughout their professional lives.
Operationalizing the Compact: The Mechanics of Visibility
All of these institutional changes—clear principles, adequate resources, cultivation over enforcement—converge in the practical challenge of implementation. Once an institution commits to process transparency, the question becomes: what does this actually look like in practice across different disciplines? How do we make the invisible work of thinking visible without creating an overwhelming bureaucratic burden?
The answer lies in designing process documentation that is discipline-appropriate, pedagogically meaningful, and proportional to the stakes of the assignment. This is not a one-size-fits-all mandate but a flexible framework that can be adapted to different intellectual contexts while maintaining the core principle: students should be able to trace and explain their intellectual processes.
History and the Humanities: In Maria’s history course on the French Revolution, process documentation might include an annotated bibliography showing engagement with primary and secondary sources, with brief notes about how each source shaped her thinking. She might submit a preliminary thesis statement with a paragraph explaining how her initial research led her to that argument, followed by a revised thesis showing how deeper reading complicated her understanding. Her final paper would be accompanied by a brief process statement explaining her methodological choices, the challenges she encountered, and how she resolved them. This documentation serves multiple purposes: it demonstrates her intellectual labor, provides material for the oral defense, and creates a record of her development as a historical thinker.
Mathematics and STEM Fields: In a calculus course, process visibility takes a different form. Students solving problem sets might be asked to photograph their whiteboard work or handwritten solutions, preserving the evidence of false starts, corrections, and the messy reality of mathematical reasoning. For more complex proofs, they might maintain solution journals where they explain their reasoning at each step, articulating why they chose particular approaches and how they knew to abandon unproductive paths. When programming is involved, version control systems like Git provide natural process documentation, showing the incremental development of code. The key is making explicit what is usually implicit: the thinking behind the solution, not just the solution itself.
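To make the version-control idea concrete, here is a minimal sketch, offered as an illustration rather than a prescribed workflow. It assumes only that students commit their work incrementally to a Git repository; the function name and output format are invented for this example. It prints one line per commit, oldest first, so a student or instructor can read the history as a simple process record alongside the final submission.

```python
# Minimal sketch: read an incremental Git history as a process record.
# Assumes Git is installed and the assignment folder is a Git repository.
import subprocess

def process_record(repo_path: str = ".") -> list[str]:
    """Return one line per commit, oldest first: date, then commit message."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "--reverse",
         "--date=short", "--pretty=format:%ad  %s"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for entry in process_record():
        print(entry)  # e.g. "2025-03-04  first attempt at the induction step"
```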
Creative Writing and the Arts: In Maria’s creative writing workshop, process documentation might include multiple drafts with tracked changes showing revision decisions, marginal notes explaining why particular phrases were cut or rearranged, and reflective writing about craft choices. Students might maintain writer’s notebooks where they collect observations, experiment with techniques, and explore ideas that may or may not appear in finished work. For a short story, the portfolio might include character sketches, plot outlines, and a reflection on how feedback from peers influenced revision. This makes visible the iterative nature of creative work, countering the myth of spontaneous artistic production.
Computer Science and Engineering: For a software development project, students might maintain development logs documenting their design decisions, the problems they encountered, and how they debugged their code. They would include comments explaining the logic behind their implementations, not just what the code does but why they structured it that way. For collaborative projects, they might document their contributions and describe how they integrated their work with teammates. When AI coding assistants are permitted, students would log their prompts, explain why they used the tool for particular tasks, and demonstrate their understanding by explaining how the AI-generated code works and what modifications they made.
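As one hypothetical illustration of what such disclosure might look like in practice (the function, the quoted prompt, and the log line below are invented for this sketch, not a required format), a student might annotate the assisted portion directly in the submitted code and keep a one-line journal entry alongside it:

```python
# Hypothetical in-code AI disclosure; names, prompt, and log format are invented.
from datetime import date

def moving_average(values: list[float], window: int) -> list[float]:
    """Return the simple moving average of `values` over `window` points.

    AI disclosure: the sliding-window list comprehension was adapted from a
    coding assistant's answer to the prompt "compute a moving average in
    plain Python". I verified it by hand on a three-point example and wrote
    the input validation myself.
    """
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# One line the student might append to a plain-text development log:
LOG_ENTRY = f"{date.today().isoformat()}: used an AI assistant for the window slice; verified it and added error handling."
```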
Social Sciences: In a sociology or psychology course involving empirical research, process documentation includes research design memos explaining methodological choices, data collection logs, analytical notes showing how patterns emerged from the evidence, and reflections on limitations and alternative interpretations. Students learn to distinguish between their observations and their interpretations, to acknowledge the constructed nature of their categories, and to explain why they chose particular analytical frameworks.
The common thread across all these examples is that process documentation serves learning, not just verification. When a mathematics student explains their reasoning, they strengthen their metacognitive skills. When a history student traces the evolution of their argument, they develop a more sophisticated understanding of how historical interpretation works. When a computer science student documents their debugging process, they become more systematic problem-solvers. The documentation is not administrative overhead; it is integral to the educational experience.
This approach also changes the conversation about workload. Students sometimes object that process documentation adds extra work on top of already demanding assignments. But this frames the issue incorrectly. The documentation does not add work; it makes existing cognitive work visible. A student who is genuinely doing their own thinking is already going through the processes we are asking them to document. Taking notes while researching, maintaining drafts while writing, sketching solutions while problem-solving—these are natural parts of intellectual work. We are simply asking students to preserve and reflect on what they are already doing rather than discarding it once the final product is complete.
For students who have been relying on AI to bypass cognitive work, the requirement will indeed feel like an additional burden. But that is precisely the point. Process transparency makes it prohibitively difficult to outsource thinking while preserving the possibility of legitimate tool use. A student who uses AI to generate research questions, then critically evaluates those questions and pursues the most promising ones, can document that process transparently. The AI use is visible, its role is clear, and the intellectual work done by the student is evident. The student who simply asks AI to write the essay has no process to document, and that absence will be immediately apparent.
Implementation requires institutional support. Students need clear guidelines about what constitutes adequate process documentation in different contexts. Faculty need training on how to design assignments that incorporate process components and how to evaluate process work fairly. Technology infrastructure should facilitate rather than complicate documentation, with learning management systems that make it easy to submit multiple drafts, store research notes, and attach reflective statements. And writing centers and academic support services should help students develop documentation practices as part of their overall academic skills.
The transition period will be challenging. Both students and faculty are accustomed to assessment models focused exclusively on final products. Shifting to process-inclusive assessment requires everyone to develop new habits and expectations. But universities have successfully implemented comparable shifts before, such as the widespread adoption of learning management systems, the integration of information literacy into the curriculum, and the normalization of peer review in writing courses. With clear leadership, adequate resources, and sustained commitment, process transparency can become the new normal rather than an exceptional burden.
The Human Sanctuary
Let us return to Maria. Imagine now a semester that has unfolded differently, where the institutional transformation described in this chapter has actually begun to occur.
Beatrice Thompson’s memo was approved, though not without resistance. The board questioned the metrics, worried about accountability, and suggested a compromise that would keep some detection software in place. But Beatrice held firm, and eventually the provost backed her proposal as a pilot program. The $100,000 that would have been spent on unreliable detection software has instead funded the hiring of three additional teaching assistants for high-enrollment humanities courses, creating the possibility for a new approach to assessment.
The university has adopted the three core principles: cognitive effort as the metric of value, process transparency as standard practice, and tool disclosure with contextual appropriateness. The provost’s office, after extensive community consultation through a Dialogic Council that included Maria and students like her, has issued clear guidelines that apply across all courses while allowing for disciplinary variation.
Maria’s “Western Civilization: 1648 to Present” course looks different now. The zero-tolerance policy has been replaced with a clear statement of principles explaining why the professor values unassisted writing for particular assignments and how AI might be appropriately used for others. The syllabus no longer radiates suspicion; instead, it invites students into scholarly practices, explaining that historians document their research processes and that learning to do so is part of their education.
For her final paper on the French Revolution, Maria is not asked to submit her essay to a black-box detection algorithm that will generate a pseudo-scientific probability score of questionable validity. Instead, she knows from the first day of the semester that she will participate in a fifteen-minute oral defense—a pilot program funded by the reallocation of the surveillance budget—in conversation with one of the course’s new teaching assistants about her research process and her arguments. This knowledge shapes her entire approach to the assignment.
She prepares differently. Rather than seeking the “right” answer that will satisfy an algorithm, she engages genuinely with the material. She meets with the AI “sparring partner” her environmental science professor introduced her to, using it to test her arguments about economic causation in revolutionary France. She keeps a research journal documenting her engagement with primary sources, noting where she found evidence that challenged her initial assumptions. She maintains her drafts, allowing her to trace the evolution of her thinking from preliminary outline to final argument. She writes the essay, struggling with the synthesis, working through the difficulty of integrating conflicting interpretations, feeling the cognitive sweat we have discussed throughout this book.
When she walks into the oral defense, she is nervous. But it is the productive nervousness of intellectual challenge, not the paranoid anxiety of suspected criminality. The TA, herself a graduate student passionate about the period, asks probing questions: “You argue that the economic crisis was the primary driver of revolution, but how do you account for the role of the philosophical salons in shaping revolutionary ideology? How do we weigh material causes against intellectual currents?”
Maria pauses. She thinks. She remembers the debate she had with the AI about this very question, how the AI initially underweighted material factors before she pushed back. She recalls the historiographical argument she read between social historians and intellectual historians. She looks the TA in the eye and begins to speak. “Well, I think the salons provided the language and concepts that made revolution thinkable, but the hunger and the fiscal crisis provided the urgent necessity that transformed thought into action. Without both elements...”
In that moment, there is no algorithm mediating the truth of her knowledge. There is no probability score purporting to measure the authenticity of her thinking. There is only what education has always been at its best: two human beings engaging in the ancient, messy, beautiful act of thinking together. The TA follows up, challenges an assumption, and asks for clarification. Maria defends her position, concedes a point, and refines her argument. By the end of the conversation, both participants have learned something. This is education.
Maria leaves the defense with a deeper understanding not just of the French Revolution but of herself as a thinker. She has discovered that she can construct and defend an argument, that her ideas can withstand scrutiny, that intellectual challenge is energizing rather than threatening. She has learned that the process of thinking—the struggle with sources, the refinement of claims, the integration of criticism—is itself valuable, perhaps more valuable than the polished product that emerges at the end. These are lessons that will serve her throughout her life, in whatever domain she ultimately pursues.
This is what becomes possible when institutions have the courage to rebuild their foundations. The castle built on sand—the elaborate apparatus of standardized testing, algorithmic detection, and credential protection—has washed away. But in its place, something stronger has emerged: a genuine community of learning organized around human relationships rather than technological surveillance, around developmental process rather than credentialed product, around trust rather than suspicion.
This is the promise of the dialogic institution. It does not reject technology outright or pretend that AI does not exist. Instead, it contextualizes technology as one tool among many, useful for certain purposes, inappropriate for others, always subordinate to the fundamental goal of human intellectual development. It does not fear the machine because it has invested in the human, creating the conditions under which authentic teaching and learning can occur. It recognizes that in a world increasingly mediated by artificial intelligence, the most valuable asset a university can offer is not a credential but a community capable of cultivating wisdom, judgment, and intellectual integrity.
The choice before us is stark. We can continue down the path of technological escalation, investing in ever-more-sophisticated detection systems, expanding surveillance apparatus, tightening controls, and eroding trust until our institutions are hollow shells that confer credentials without providing education. This path is easier in many ways. It allows us to avoid difficult conversations about what we value. It provides the illusion of control through technological intervention. It requires no fundamental rethinking of our practices or priorities.
Or we can choose the harder path: acknowledging that our current model is broken, that authentic education has always depended on human relationships, and that preserving what is valuable in education will require substantial reinvestment in the human elements that technology cannot replace. This path demands more of us. It requires more time from faculty, more resources from institutions, more transparency from students, and more faith from everyone involved. It means accepting that there are no simple solutions, no technological fixes that will resolve the tensions created by AI without requiring us to change.
But this harder path is the only one that leads to a future worth inhabiting. It is the only path that preserves the university as something more than a credentialing factory, that maintains education as something more than information transfer, that treats students as developing humans rather than as potential cheaters to be monitored. It is the only path that honors the complexity of human learning and the irreplaceable value of human mentorship.
The infrastructure is crumbling not because AI is too powerful but because our foundations were always inadequate, built on assumptions about verification that could not withstand technological disruption. We can rebuild on firmer ground, creating institutions organized around principles rather than paranoia, cultivation rather than enforcement, and relationship rather than surveillance. We can construct a human sanctuary where thinking remains valuable precisely because it is difficult, where learning matters precisely because it cannot be automated, and where education thrives because it acknowledges and honors what makes us human.
The choice is ours. The moment is now. What we decide will determine not just the future of our universities but the kind of society those universities will help create.
Thank you for following Chapter 12 and joining me for this complete journey through ‘The Detection Deception’. If this vision of institutional transformation, from surveillance to cultivation and from panic to principle, resonates with you, the conversation does not end here.
This concludes the book serialization, but the work of reimagining education in the age of AI continues. I publish weekly essays on this Substack exploring the evolving challenges and possibilities at the intersection of artificial intelligence and education. Whether you are a faculty member wrestling with syllabus policy, an administrator navigating budget decisions, or a student trying to understand what authentic learning means in this moment, there is more to explore. The castle built on sand has washed away. Now we must decide together what we will build in its place.
Thank you for reading, questioning, and imagining alongside me. I will see you in the weekly essays.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.


