Fellow Augmented Educators,
Are you caught between flawed AI detection software and the pressure to ensure academic integrity? You're not alone. The promise of AI in education is being overshadowed by an unwinnable technological arms race that pushes us toward a crisis of trust.
Today, I'm beginning a 12-week serialization of my new book, 'The Detection Deception: Reclaiming Education Through Human Connection in the Age of AI.' It offers a way out. Instead of trying to forbid AI use, it provides a framework for making AI irrelevant to authentic assessment.
Each Saturday, I will publish a new chapter exclusively for paid subscribers. I invite you to join me on this journey.
Whether you're an educator wrestling with these challenges daily, an administrator questioning your investment in detection tools, or someone committed to the future of learning, this book offers a practical path for reclaiming our classrooms from surveillance and preserving what makes education essentially human.
Michael G Wagner (The Augmented Educator)
P.S. I believe transparency builds the trust that AI detection systems cannot. That's why I've recently published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.
Chapter 1: The Castle Built on Sand
Caroline had devoted three weeks to her essay on economic inequality. The junior economics major at a state university had interviewed local food bank directors, analyzed census data from her county, and revised her argument four times based on feedback from the writing center. She had documentation for every step: rough notes from interviews, spreadsheets of data analysis, tracked changes showing the evolution of her thesis. When her professor's email arrived on a Thursday afternoon, informing her that the university’s AI detection software had flagged her work as "89% likely AI-generated," she felt the floor drop away beneath her.
"I don't understand," she told her professor during their office hours meeting. She spread her evidence across the desk like a defense attorney presenting exhibits. Here were the handwritten notes from her interview with the food bank director. Here was the email exchange with the county data office. Here were the printed drafts with marginalia from the writing center tutor. Her professor, a decent man who had taught for fifteen years, looked at the evidence with visible discomfort. He wanted to believe her. But the institution had spent considerable money on this detection software. The administration had made it clear that scores above 80% should be treated as presumptive evidence of academic dishonesty.
"I believe you did the work," he said finally. "But I need something more definitive to override the software's assessment. Can you reproduce parts of your analysis in front of me? Can you explain your reasoning process in detail?"
Caroline could, and did, spending another hour walking through her methodology and defending her conclusions. The professor eventually accepted her work, but the damage was done. The trust between them had been poisoned. Caroline would later tell friends that she spent the rest of the semester second-guessing every sentence she wrote, wondering what algorithmic trigger she might accidentally trip.
This story, while fictionalized in its specifics, is a composite drawn from my own professional experience and from numerous documented cases that have emerged since educational institutions began deploying AI detection tools at scale. One illustrative example: according to local media reports, a student at the University at Buffalo allegedly faced formal academic dishonesty proceedings based solely on a software score. No specific textual evidence was provided. No comparison to known AI outputs was offered. The burden of proof had silently shifted: rather than the institution needing to prove dishonesty, the student needed to prove honesty.
But Caroline's ordeal reveals something more profound than a technological failure. It exposes the spectacular collapse of a castle built on sand: an educational assessment system that was always more fragile than we cared to admit.
The Undetectable Crisis
The panic that gripped educational institutions following ChatGPT's release in November 2022 obscured a crucial truth: the crisis was not created by AI but merely revealed by it. The vulnerability that generative AI exposed had existed for decades, perhaps centuries, built into the very structure of how we assess learning. The take-home essay, that stalwart of academic assessment, emerged not through careful pedagogical design but through historical accident and institutional convenience. In the pre-digital era, it served multiple functions efficiently: testing research skills, critical thinking, synthesis, argumentation, and writing ability in a single scalable assignment. But this efficiency came with a hidden vulnerability that we chose to ignore.
Even in the nineteenth century, students with means could hire others to complete their assignments, a reminder that academic dishonesty is as old as academic assessment itself. What changed over time was not the existence of cheating but its accessibility and scale. By the 1970s, term paper mills advertised openly in college newspapers. The internet transformed this cottage industry into a global marketplace. By 2019, the essay mill industry was estimated to generate hundreds of millions of dollars annually, with research suggesting that between 15% and 30% of university students had purchased academic work at least once.
Generative AI represents not the creation of a new problem but a massive democratization and acceleration of an existing one. What once cost hundreds of dollars now costs nothing. What once took days now takes seconds. What once required trusting a stranger now requires only a conversation with a machine. The barrier to entry for sophisticated academic dishonesty has been effectively eliminated.
But focusing solely on cheating misses the deeper revelation. If a language model with no genuine understanding, no consciousness, no intentionality, can produce work indistinguishable from that of a thoughtful student, what exactly are we measuring? The uncomfortable answer may be that we have been assessing surface features—grammatical correctness, structural coherence, apparent familiarity with sources—rather than genuine understanding all along.
For educators, the crisis cuts equally deep. Consider the professor evaluating Caroline's work, knowing the student personally, having watched her develop ideas in class discussions and struggle with concepts during office hours. Every human instinct tells him the student did the work, but the software, which the institution has endorsed and paid for, says otherwise. To override it requires an act of professional courage that not all possess, especially adjuncts and pre-tenure faculty, who may feel they have no choice but to defer to the machine.
This deference to algorithmic judgment represents more than a practical failure. It constitutes an ontological crisis in education. The very notion of authorship becomes incoherent when we cannot distinguish between human and machine expression. When students collaborate with AI systems—editing machine-generated text, incorporating AI suggestions, using AI to restructure arguments—where does the student end and the machine begin? The binary distinction between author and tool, fundamental to academic assessment, dissolves.
These are not theoretical edge cases but daily realities. When a student uses Grammarly's AI features to restructure a sentence, have they cheated? When they prompt ChatGPT to explain a concept they then paraphrase, is this different from paraphrasing a textbook? When they use AI to generate potential thesis statements and craft their own by combining elements, who is the author? The rules are unclear and constantly shifting.
The adoption of AI detection software represents an institutional response born of panic rather than pedagogy. Within months of ChatGPT's release, millions of dollars flowed from educational institutions to technology vendors promising to restore order. But this response misunderstood the nature of the problem. The challenge posed by generative AI is not primarily technological but philosophical and pedagogical. It forces us to confront questions that educational systems have avoided: What are we actually assessing when we grade an essay? Is it the student's ability to produce polished prose, or their capacity to think? Their skill at information synthesis, or their development of original ideas?
Detection software cannot answer these questions because they are not technological questions. They are questions about the purpose and meaning of education itself. By focusing on detection rather than these deeper issues, institutions have chosen to treat the symptom rather than the disease, introducing a technological "solution" that actively harms the educational mission it purports to protect.
The harm manifests in what can only be called a pedagogy of distrust. Where education once operated on a presumption of good faith, it now operates on a presumption of suspicion. Every submitted assignment passes through algorithmic scrutiny. The relationship between teacher and student, which should be one of mentorship and intellectual development, becomes adversarial. Students grow reluctant to take intellectual risks when an unconventional turn of phrase might trip an algorithmic alarm, and they struggle to develop authentic voices while performing compliance with machine-determined norms.
This transformation has occurred with remarkable speed and little public debate. Institutions have invested millions in detection software without rigorous evidence of its effectiveness. They have altered fundamental academic processes based on probabilistic assessments from black-box algorithms. The result is a system that manages to be both draconian and ineffective, creating massive collateral damage while failing to address the underlying challenges.
The most troubling aspect is how this crisis inverts educational priorities. Instead of asking how we can design better assessments that authentically measure learning, we ask how we can better detect cheating. Instead of considering how AI might enhance education, we focus solely on preventing its use. We have allowed the existence of AI to define our educational practices in purely defensive terms.
This defensive crouch is unsustainable. The technology will continue to improve. The detection tools will continue to fail. Students will suffer false accusations. Teachers will exhaust themselves playing algorithmic cat and mouse. Trust will continue to erode. Unless we fundamentally reconceptualize our approach, we risk destroying the very thing we seek to protect: the transformative power of education to develop human minds and human potential.
Our future depends on having the courage to acknowledge that the old model is broken beyond repair. The take-home essay, the traditional research paper, the standard written assignment—these assumed conditions that no longer exist: that students work in isolation, that producing text requires understanding content, that polished prose indicates intellectual engagement. These assumptions were always somewhat fictional, but AI has made the fiction impossible to maintain.
We have built our entire educational assessment infrastructure on a foundation that cannot bear the weight we have placed upon it. For generations, we constructed an elaborate edifice of grades, transcripts, degrees, and credentials, all ultimately resting on assessment methods we knew were vulnerable but chose not to examine too closely. We decorated the castle with the trappings of academic legitimacy while ignoring the fundamental instability of its foundation. AI is not the earthquake that destroys the castle; it is merely the tide that reveals the foundation was always sand.
But this collapse creates opportunity as much as crisis. It forces us to clarify what we value in education. It demands that we articulate what distinguishes human learning from machine processing. It requires us to design assessments that capture genuine understanding rather than surface performance. The crisis of the undetectable is really a crisis of purpose, challenging us to rediscover what makes education essentially human in an age of artificial intelligence.
The Fork in the Road: Surveillance or Pedagogy?
We stand at a crossroads. The path educational institutions choose in response to generative AI will determine not only how we assess learning but what kind of learning we value, what relationships we foster between teachers and students, and ultimately what we believe education is for. Two paths stretch before us, each representing a fundamentally different vision of education's future. The choice we make will reverberate for generations.
The first path, already well-worn by many institutions, leads toward an educational surveillance state. This path promises to solve the AI problem through technological escalation: more sophisticated detection algorithms, more comprehensive monitoring systems, more stringent policies, more severe consequences. Walking this path, we see vendors promising ever-more-advanced AI detection capabilities. We see institutions investing millions in software licenses. We see teachers spending hours analyzing detection reports. We see students living under constant algorithmic scrutiny, their every sentence potentially flagged as fraudulent.
This surveillance path has a seductive logic. If AI threatens academic integrity, then we must detect and punish its use. If current detection methods are imperfect, then we must develop better ones. If students find ways to evade detection, then we must close those loopholes. The solution to technology, this logic suggests, is more technology. The answer to algorithmic text generation is algorithmic text analysis. We will win through superior firepower in the technological arms race.
But follow this path to its logical conclusion, and the destination becomes clear. We arrive at an educational system that resembles a high-security prison more than a place of learning. Students submit work through locked browsers that monitor their keystrokes. Cameras watch them write. Screen recording software documents their entire composition process. AI detection algorithms analyze not just their final submissions but every draft, every revision, every hesitation. Machine learning models build profiles of each student's writing style, flagging any deviation as potential fraud. The classroom becomes a panopticon where students perform their learning under constant observation. Even if such comprehensive surveillance were technologically feasible—and current evidence suggests it is not—the cost would be catastrophic.
Education fundamentally depends on trust. The relationship between teacher and student requires a presumption of good faith, a belief that both parties are engaged in a genuine exchange of knowledge and ideas. When that relationship becomes adversarial, when teachers become enforcers and students become suspects, learning itself becomes impossible. Students cannot take intellectual risks when every unconventional thought might trigger an algorithmic alarm. They cannot develop authentic voices when they must constantly perform compliance with machine-determined norms. They cannot engage in genuine inquiry when the primary goal becomes avoiding false accusations.
The surveillance path also faces insurmountable technical challenges. Every advance in detection technology prompts corresponding advances in evasion techniques. For every new algorithm designed to identify AI-generated text, there emerges a new tool designed to humanize it. The companies selling detection software have a vested interest in perpetuating this arms race. Their business model depends on the threat continuing and escalating. Meanwhile, the underlying technology of language models advances at a pace that detection methods cannot match. The asymmetry is fundamental: generating convincing text is computationally easier than definitively identifying its origin. The defenders are always playing catch-up in a game where the rules constantly change.
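To make that asymmetry concrete, consider a deliberately crude sketch of the kind of surface statistic that public discussions of AI detection often invoke: "burstiness," the variation in sentence length. To be clear, this toy illustrates only the category of signal; it is not any vendor's actual algorithm, which remains a proprietary black box, and the interpretation attached to the scores here is invented for demonstration.

```python
# Toy illustration only: a crude "burstiness" heuristic of the kind often
# described in public discussions of AI detection. This is NOT any vendor's
# actual algorithm; commercial detectors are proprietary black boxes.
import statistics

def burstiness(text: str) -> float:
    """Variance of sentence lengths (in words). Human prose tends to mix
    short and long sentences; uniform lengths are sometimes read as a
    machine-authorship signal."""
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

varied = ("I waited. The data office never wrote back. "
          "So I drove there myself, twice, and asked in person.")
uniform = ("The results were analyzed carefully. The findings were "
           "presented clearly. The methods were documented fully.")

print(burstiness(varied))   # high variance: reads as "human" to this proxy
print(burstiness(uniform))  # zero variance: reads as "machine" to this proxy
```

Notice how little it takes to move the score: splitting one sentence or merging two changes the verdict entirely. A text generator, or a student told to "vary your sentences," defeats such a proxy in seconds, while the detector has no way to recover certainty from surface statistics alone. That, in miniature, is the asymmetry.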
More troubling still, the surveillance approach fundamentally misdiagnoses the problem. It treats AI use as primarily an issue of academic dishonesty, a behavioral problem requiring disciplinary solutions. But the challenge AI poses to education runs much deeper. It reveals that our assessment methods measure the wrong things, that our assignments can be completed without genuine learning, that the skills we claim to develop can be mimicked by statistical pattern matching. Surveillance addresses none of these underlying issues. It merely attempts to preserve a broken system through force.
The second path leads in an entirely different direction. Instead of doubling down on technological enforcement, this path involves fundamental pedagogical transformation. It requires us to abandon assessment methods that AI has rendered obsolete and to recover educational traditions that center on uniquely human capabilities. This is the path of dialogue, of process, of authentic engagement—the path that makes AI irrelevant rather than trying to make it illegal.
This pedagogical path draws on ancient wisdom that predates our current technological predicament by millennia. Socrates knew nothing of artificial intelligence, but he understood that genuine learning happens through dialogue, through the dynamic process of question and response that makes thinking visible. His method of systematic questioning does not produce artifacts that can be faked but reveals the actual structure and depth of understanding in real time. A student cannot outsource Socratic dialogue to an AI because the value lies not in producing correct answers but in demonstrating the reasoning process itself.
Paulo Freire, writing in a different context about different challenges, nonetheless provides crucial insights for our current moment. His critique of the "banking model" of education—where teachers deposit information into passive student receptacles—perfectly describes the vulnerability that AI exploits. When education becomes mere information transfer, any sufficiently sophisticated information processor can mimic the appearance of learning. But Freire's alternative vision of problem-posing education, where students and teachers jointly investigate reality, creates learning experiences that no AI can replicate. When students engage with real problems in their own communities, when they connect abstract concepts to lived experience, when they work collaboratively to imagine and create change, they engage in irreducibly human acts of meaning-making.
Mikhail Bakhtin's philosophy of dialogism offers another crucial perspective. Knowledge, Bakhtin argued, does not exist in isolation but emerges through the interaction of multiple voices and perspectives. Understanding is not a possession but a process, not a product but a performance. A classroom that embraces this dialogic principle becomes a unique universe of meaning, shaped by the specific interactions of particular people at a particular moment. The discussions that unfold, the ideas that emerge, the understanding that develops—these cannot be replicated by any AI because they arise from the unrepeatable convergence of human consciousnesses in dialogue.
These philosophical foundations point toward practical pedagogical strategies that are already being implemented in classrooms around the world. Instead of take-home essays, students engage in oral defenses of their work. Instead of isolated writing assignments, they participate in collaborative projects that require real-time contribution and negotiation. Instead of standardized assessments, they complete authentic tasks that connect to their local contexts and personal experiences. Instead of producing polished final products, they document and reflect on their learning processes. These approaches do not attempt to detect AI use because they make it irrelevant. The assessment captures something AI cannot provide: the messy, contingent, deeply human process of learning itself.
This pedagogical transformation requires significant changes in how we structure education. Classes may need to be smaller to allow for meaningful dialogue. Assessment may need to be more varied and labor-intensive. Teachers may need professional development to implement new methods. Institutions may need to reconsider their metrics for success. These changes require resources, commitment, and courage. They cannot be purchased from a vendor or implemented through software. They demand that educators reclaim their professional expertise and that institutions prioritize learning over efficiency.
The contrast between these two paths could not be starker. The surveillance path promises certainty through technology but delivers only an escalating arms race that erodes trust and undermines learning. The pedagogical path acknowledges uncertainty and complexity but offers the possibility of educational experiences that develop genuine human capabilities. The surveillance path treats students as potential cheaters who must be monitored and controlled. The pedagogical path treats students as developing thinkers who must be challenged and supported. The surveillance path preserves the form of traditional education while hollowing out its substance. The pedagogical path may change educational forms but preserves and enhances education's essential purpose.
This book argues that the choice between these paths is not really a choice at all. The surveillance path is ultimately a dead end, a futile attempt to preserve through force what has already been lost. The technology will continue to advance. The detection methods will continue to fail. The arms race will continue to escalate. Trust will continue to erode. Eventually, the entire enterprise will collapse under the weight of its own contradictions. The only viable future for education lies in embracing pedagogical transformation, in designing learning experiences that develop capabilities AI cannot replicate, in creating assessments that capture what makes human intelligence irreplaceable.
This transformation will not be easy. It requires us to abandon familiar methods and comfortable assumptions. It demands that we articulate what we truly value in education beyond the production of text artifacts. It challenges us to design learning experiences that are meaningful, engaging, and resistant to technological shortcuts. It asks us to trust in human judgment over algorithmic verdict, in professional expertise over technological solutions, in the messy complexity of actual learning over the neat simplicity of surveillance.
But this transformation also offers unprecedented opportunities. It forces us to clarify our educational purposes and align our practices with our values. It pushes us to develop pedagogies that prepare students for a world where AI is ubiquitous but human judgment remains essential. It enables us to create learning experiences that are more engaging, more meaningful, and more transformative than what the traditional model ever allowed. The AI crisis, viewed from this perspective, is not a threat to be defeated but a catalyst for long-overdue educational renewal.
The argument ahead will explore both paths in detail, examining the technical failures and ethical problems of the surveillance approach while developing the theoretical foundations and practical strategies of pedagogical transformation. We will see how detection software fails not occasionally but systematically, how it harms not just those falsely accused but the entire educational enterprise. We will discover how dialogic pedagogies make thinking visible, how authentic assessments capture genuine understanding, and how AI itself can be transformed from a threat into a tool when properly integrated into sound pedagogical frameworks.
The stakes of this choice extend far beyond academic integrity. They touch on fundamental questions about human agency, creativity, and purpose in an age of artificial intelligence. If we choose surveillance, we implicitly accept that human and machine intelligence are interchangeable, distinguishable only through increasingly sophisticated detection methods. If we choose pedagogical transformation, we assert that human intelligence possesses unique qualities worth cultivating and celebrating. The path we choose will shape not just how we assess student work but how we understand human potential in relation to machine capability.
The fork in the road is before us. The choice is ours to make. The future of education—and perhaps the future of human intellectual development—hangs in the balance.
Facing the Detection Deception
This book unfolds in four interconnected parts, each building upon the last to construct a comprehensive response to the crisis generative AI has precipitated in education. The journey ahead moves from deconstruction to reconstruction, from critique to creation, from problem to possibility. Understanding the structure of this argument will help readers navigate the complex terrain we must traverse together.
Part I, "The End of an Era," performs necessary demolition work. We cannot build something new until we fully understand why the old structure has failed. This section examines both the historical vulnerabilities of our assessment practices and the spectacular failure of technological solutions that institutions have desperately embraced. We begin by tracing the ghost in the machine—not the AI itself, but the long history of academic dishonesty that AI has simply democratized and accelerated. The take-home essay, we will discover, was never the fortress of academic integrity we imagined it to be. We examine how the invention of the essay as our primary assessment tool created a bundled bet that is now spectacularly failing, and how the pre-digital cheating economy presaged our current crisis.
We then turn to the surveillance impasse, examining in detail why AI detection software represents not a solution but a compounding of the problem. Through technical analysis, we will see how these tools fundamentally misunderstand the nature of language and writing, systematically discriminate against vulnerable populations, and create a pedagogy of distrust that poisons the educational environment. The empirical evidence is damning: detection tools achieve accuracy rates barely better than random chance while exhibiting systematic bias against non-native English speakers, neurodivergent students, and those who use legitimate assistive technologies. Part I concludes by demonstrating that the technological arms race is not merely failing but is structurally doomed to fail, creating an ever-escalating conflict that enriches software companies while impoverishing education itself.
Part II, "The Philosophical Turn," shifts from critique to construction. Having shown that technological solutions are futile, we turn to philosophical traditions that offer genuine alternatives. We begin with Socrates, whose method of systematic questioning makes thinking visible in ways no essay ever could. The Socratic method does not produce artifacts that can be faked but reveals understanding through dynamic dialogue. We explore how productive destabilization through questioning creates the cognitive struggle essential for learning, and how this ancient practice provides a powerful framework for assessment in the age of AI.
We then examine Paulo Freire's distinction between the banking model of education and problem-posing pedagogy. The banking model, which treats students as passive receptacles for information deposits, creates precisely the vulnerability that AI exploits. Freire's alternative—education as collaborative investigation of reality grounded in real-world problems—creates learning experiences that are inherently AI-resistant. When students engage with concrete challenges in their communities, when they connect abstract concepts to lived experience, when they work together to imagine and create change, they engage in irreducibly human acts of meaning-making that no AI can replicate.
Mikhail Bakhtin's philosophy of dialogism provides our third philosophical foundation. Knowledge, in Bakhtin's view, does not exist in individual minds but emerges through the interaction of voices in dialogue. Every classroom becomes a unique universe of meaning, shaped by the specific interactions of particular people at a particular moment. The understanding that develops through these interactions cannot be prefabricated or imported; it must be performed and created anew. Part II concludes by synthesizing these philosophical traditions into a unified framework for dialogic pedagogy that makes machine substitution not illegal but irrelevant.
Part III, "The Dialogic University in Practice," translates philosophical principles into practical strategies. This section provides educators with a comprehensive toolkit of assessment methods that capture genuine learning while remaining resistant to AI substitution. We examine how these unified principles, while facing real challenges of scale, individual assessment, and power dynamics, can be implemented pragmatically across educational contexts.
We explore asynchronous and embodied models that make the learning process visible: requiring multiple drafts that show intellectual development, incorporating peer review that assesses students' ability to evaluate others' work, demanding reflection and metacognition that reveal the journey of understanding. Video responses capture embodied performances of understanding that no text generator can fake, while assignments embedded in local classroom context become impossible for AI to complete.
The practical section then demonstrates how dialogic approaches manifest across disciplines. In STEM fields, we explore the whiteboard defense and collaborative problem-solving. In social sciences, we examine mock trials, policy presentations, and ethnographic defenses. In arts and professional programs, we investigate the studio critique and case study method—time-tested practices that make thinking visible through dialogue and demonstration. These are not merely alternatives to traditional assessment but potentially superior methods for developing and evaluating the complex capabilities students need in the contemporary world.
Part IV, "The New Partnership," performs a crucial reversal. Having established pedagogical frameworks that make AI substitution impossible, we can now safely explore how AI might enhance rather than threaten learning. When students know they must ultimately perform their understanding without technological assistance, AI transforms from a crutch into a training tool. We explore how AI can serve as an intellectual sparring partner, providing counterarguments that strengthen student thinking, generating alternatives that expand creative possibilities, and identifying knowledge gaps that guide further learning.
We then address the crucial need for algorithmic literacy as a new core competency. Students must understand not just how to use AI but how it works, what it can and cannot do, and how it encodes biases and assumptions. This critical engagement with AI technology becomes essential not just for academic success but for navigating a world increasingly mediated by algorithms.
The book concludes by examining institutional transformation. Individual teachers cannot solve this crisis alone; it requires coordinated institutional response. We examine what it means to lead change from the classroom to the administration, how to develop coherent policies that support pedagogical innovation rather than surveillance, and how to reallocate resources from detection software to human-centered teaching. We explore the shift from compliance-based to cultivation-based approaches to academic integrity, making the case that money spent on the technological arms race would be better invested in smaller classes, professional development, and support for pedagogical innovation.
Throughout this journey, we will encounter resistance and skepticism. Some will argue that dialogic methods cannot scale, that oral assessments are subjective, that abandoning traditional essays means lowering standards. We will address these concerns directly, showing how thoughtful implementation can preserve rigor while enhancing authenticity. Others will insist that better detection technology is just around the corner, that the arms race can be won with sufficient investment. We will demonstrate why this hope is not merely false but dangerous, distracting from the real work of pedagogical transformation.
The argument ahead is both critical and constructive, both theoretical and practical, both urgent and timeless. It speaks to our immediate crisis while addressing perennial questions about the nature and purpose of education. It offers no simple solutions because the challenge we face is not simple. But it does offer a clear path forward, one that recovers ancient wisdom while embracing contemporary possibility, that acknowledges technological reality while asserting human value, that transforms crisis into opportunity for educational renewal.
The destination toward which we travel is not a return to some imagined golden age before AI but the creation of something genuinely new: an educational framework that develops uniquely human capabilities in full awareness of machine capabilities, that uses technology wisely while maintaining human agency, that prepares students not to compete with AI but to remain fully human alongside it. This is not a modest goal, but the crisis we face demands nothing less than fundamental transformation. The blueprint for that transformation lies in the pages ahead.
If this chapter resonated with you, I hope you'll continue this journey with me. Next Saturday brings Chapter 2, ‘A History of Academic Dishonesty,’ tracing how we've been building toward this crisis for decades.
The full twelve-week serialization is only available to paid subscribers. Reading these chapters as they unfold offers something different from a finished book: time to reflect between installments, space to wrestle with each idea, and the opportunity to share your own experiences in the comments.
Whether you subscribe or follow my regular weekly essays on 'The Augmented Educator', I'd value your thoughts. What's your experience with AI detection at your institution? Have you found approaches that actually work?
Real change won't come from better surveillance or smarter algorithms. It will emerge from honest conversations about what we're really trying to achieve when we teach and assess learning. That work starts here, with all of us willing to question what we thought we knew.