You Can’t Pause a Disruption
Why AI Development Won’t Slow and What Education Must Do Instead
I see educators calling for slowing down AI development with increasing frequency. The sentiment appears in faculty meetings, conference panels, and online discussions—a plea for more time to prepare, to understand, to regulate before the technology transforms education beyond recognition.
The most prominent expression of this impulse came in March 2023, when AI researchers and technology leaders signed an open letter calling for a six-month pause on training systems more powerful than GPT-4. The response was polarized. Some praised the ethical courage. Others dismissed it as performative. Meanwhile, in classrooms across the country, faculty instituted their own versions of the pause, prohibiting ChatGPT and doubling down on traditional assessment.
We can understand the impulse. AI capabilities have expanded faster than our ability to predict their consequences or govern them. The desire to purchase time seems reasonable. We want to align systems with human values, restructure institutions, and build meaningful safeguards.
But we face a structural challenge: we cannot slow down AI development because the forces driving it are systemic rather than discretionary. The velocity of AI follows the logic of disruptive innovation, a pattern that operates according to institutional incentives rather than individual or governmental choice. For educators, this understanding shapes whether we spend energy building temporary barriers or redesigning learning for a world where AI is already here.
The Innovator’s Dilemma, Rewritten by AI
Clayton Christensen’s The Innovator’s Dilemma describes how established organizations fail not because they’re poorly managed, but because they’re managed well according to the logic of their existing value networks.
Organizations pursue incremental improvements along established success metrics because that’s where their profits and legitimacy live. Disruptive technologies initially underperform on these metrics but offer different advantages: lower cost, greater accessibility, or new functionality. Christensen’s canonical example is steel mini-mills, which started with cheap rebar (the lowest-quality steel product) that integrated mills happily ignored. The technology improved. Mini-mills eventually moved upmarket, bankrupting the giants who had ceded the low end.
AI follows this pattern. Large language models entered as unreliable chatbots, unsuitable for high-stakes applications. But they offered near-zero marginal cost, almost infinite scalability, and broad accessibility. Now they’re moving upmarket. Systems that started writing limericks now assist with legal analysis, medical diagnostics, and scientific research. Capabilities like long-horizon planning will likely emerge before there’s a safe, regulated market for them.
Why Pausing Rewards the Wrong Actors
The problem with calls for an AI pause becomes clear when we examine who benefits from slowing down. The organizations and nation-states with the most to lose face different calculations than those with everything to gain.
For established players like tech companies, universities, and governments, a pause means sacrificing potential productivity gains to protect existing structures. Jobs, accreditation systems, copyright frameworks, and social norms are tangible and present. The benefit of caution is theoretical and future-weighted. Value networks rarely sacrifice the present for an uncertain future unless forced by existential competition.
For disruptors—startups, open-source collectives, and rival nation-states—the calculation is simpler. They have no legacy revenue to protect, no established reputations to guard, and no customer bases demanding adherence to old standards. When dominant players pause, fringe actors willing to accept higher risks gain ground.
This creates a race where the most cautious actors constrain themselves while the least cautious advance. If OpenAI and Anthropic pause development, open-source projects will continue. If the United States imposes strict limits, other nations will see a strategic opening. We’re already seeing this pattern. When major AI labs implement safety protocols, open-weight models from labs like DeepSeek and Mistral fill the void, ensuring that voluntary restraint by leaders cannot contain the technology.
Even acknowledging these coordination problems, some argue that international agreements could succeed where individual restraint fails. But the strategic incentives work against cooperation at every level. The verification problem is insurmountable. Unlike nuclear weapons, which require visible infrastructure and radioactive materials, AI development can occur in relatively small facilities using commercial hardware. No inspection regime could reliably verify compliance with compute limits or capability restrictions.
More fundamentally, AI is increasingly viewed as a critical national security capability. No rational state will accept permanent strategic inferiority based on promises that rivals will exercise similar restraint. For educational institutions, this geopolitical reality has direct implications. Universities and schools operate within national regulatory frameworks, but their students compete in global markets. If one nation’s educational system restricts AI integration while others embrace it, students from the restrictive system enter the workforce at a disadvantage.
Supply-Side Technology and the Limitations of Market Governors
Some argue that AI will naturally slow down because of demand-side limitations: enterprises struggling to integrate systems, disappointing returns on investment, or low consumer trust.
Christensen’s framework challenges this assumption. Disruption doesn’t wait for an invitation; it pushes capabilities forward whether or not the market is ready. AI exhibits the same pattern. Exponential compute scaling drives capability improvements whether or not enterprises have figured out how to capture value. The computational power used to train frontier models has doubled roughly every six months since 2010, creating orders-of-magnitude improvements on timelines that compress traditional governance cycles.
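To make “orders of magnitude” concrete, here is a minimal back-of-the-envelope sketch. The six-month doubling time comes from the estimate above; the 2010–2024 window and the resulting figure are purely illustrative, not a measurement of any particular model.

```python
# Back-of-the-envelope compounding from a ~6-month doubling time.
# Assumption: the doubling rate cited above holds roughly steady over 2010-2024.
import math

doubling_time_years = 0.5                  # ~6 months per doubling
window_years = 2024 - 2010                 # illustrative window
doublings = window_years / doubling_time_years
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> ~10^{math.log10(growth):.0f}x more training compute")
# prints: 28 doublings -> ~10^8x more training compute
```

The exact figures matter less than the shape of the curve: any steady doubling outruns governance processes measured in years.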
Consider a university spending two years studying whether to allow AI tools, developing policies, and training faculty. By the time of implementation, the technology has already leaped forward. The tools being regulated no longer represent the state of the art. Meanwhile, students using these systems continue to do so, creating a widening gap between policy and reality.
The Limits of Regulation in Unstable Value Networks
Regulation works effectively in mature, stable value networks. Aviation safety regulation functions because all participants share common definitions of acceptable performance. The technology improves along a known trajectory within established parameters.
AI exists in a fundamentally different state. We’re still discovering what these systems can do, what risks they pose, and what applications will prove most valuable. When technology is young enough to control, we don’t understand it well enough to regulate effectively. By the time we understand it, the technology is too deeply embedded for meaningful control.
Educational institutions are experiencing this tension acutely. Policies banning AI writing tools attempt to preserve traditional notions of authorship and assessment. But these policies treat AI as a tool for academic dishonesty when it represents a more fundamental shift in capabilities.
The speed of model iteration compounds the regulatory lag. Policies developed for GPT-3 became obsolete when GPT-4 launched. This cycle will accelerate. Moreover, regulation is bounded by national borders, while AI development is occurring simultaneously across multiple nations, each with different regulatory philosophies. Even if one country implements strict controls, development can simply migrate elsewhere. Capital and talent are mobile; restrictions are not.
What History Teaches About Technological Resistance
The historical record offers little encouragement to those hoping regulation can slow disruptive technology.
The Ottoman Empire banned printing in Arabic script in 1485 to protect calligraphers and scribes. The ban lasted until 1727. During those 242 years, European nations experienced the Renaissance, Reformation, Scientific Revolution, and early Enlightenment—transformations enabled by printed knowledge. When the Ottomans lifted the ban, they faced a technological and intellectual gap that proved impossible to close.
We should pause here. The parallel between that 242-year printing ban and today’s ChatGPT bans in classrooms is uncomfortably direct. In both cases, authorities banned a technology that threatened established gatekeepers of knowledge. In both cases, the ban aimed to protect traditional roles—calligraphers then, educators now. In both cases, the technology continued advancing elsewhere while the prohibiting institutions fell behind.
Universities possess historic authority over knowledge transmission and credentials. AI threatens both. The question we must ask ourselves is this: Are we, like the Ottoman scribes, defending roles that technology has already altered? Students who master AI-augmented work will develop advantages that graduates of AI-resistant programs cannot match. This time, the gap may not take 242 years to become insurmountable. Given the speed of AI development, it may take less than five.
The Unprecedented Speed of AI Adoption
The data on AI adoption shows measurable differences from historical technologies. ChatGPT reached 100 million users in two months. Instagram took two and a half years; the iPhone took six years.
More significant than adoption speed is compute scaling, and most concerning is the potential for self-improving systems. AI increasingly designs better architectures, optimizes training, and conducts research that leads to new capabilities. This feedback loop accelerates progress beyond what human innovation alone could deliver. In education, curriculum development operates on three-to-five-year cycles, while the technology improves every six months.
Education as the Established Player
Educational institutions occupy the role of the established player in Christensen’s framework. Universities have historically controlled knowledge transmission and credential issuance. AI represents a fundamental challenge to both functions.
Knowledge transmission was our core value proposition. This model assumed knowledge was scarce and access to experts was limited. AI inverts both assumptions. Knowledge becomes abundant—any topic explained at any level on demand. The credential monopoly faces similar pressure. If AI can tutor anyone to competency in most domains, the signaling value of degrees diminishes.
Our existing value network pulls us toward incremental improvements: better course delivery, updated content, and improved assessment. All worthwhile, but all within the old paradigm. The disruptive threat comes from outside: AI tutors that never sleep, personalized learning that adapts in real-time, and capability demonstration that bypasses credentials.
The failure mode is predictable. We’ll cede basic skills training and introductory courses to AI-augmented alternatives. We’ll focus on higher-order learning and critical thinking. And we may discover too late that AI has moved upmarket, eventually offering sophisticated intellectual development that rivals what we provide.
The Collapse of the Skill Half-Life
The traditional educational model front-loads learning: spend the first two decades gaining knowledge, then spend the next four decades applying it. This worked when skills lasted across careers.
AI collapses this timeline. The half-life of skills—the time it takes for half of what you learned to become obsolete—is shrinking. Recent estimates suggest it’s now measured in years for many technical domains, possibly months for fields at the AI frontier.
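A minimal sketch shows what a shrinking half-life implies, using the standard exponential-decay reading of the term. The two-year half-life is an illustrative assumption, not an empirical estimate.

```python
# "Skill half-life" read as exponential decay: after t years, roughly
# 0.5 ** (t / half_life) of what was learned is still current.
# Assumption: a two-year half-life, chosen only for illustration.
half_life_years = 2.0

for t in (1, 2, 4, 8):
    still_current = 0.5 ** (t / half_life_years)
    print(f"after {t:>2} years: ~{still_current:.0%} still current")
# after  4 years: ~25% still current -- much of a four-year degree's
# early content can age out before graduation.
```

Under that reading, knowledge banked at graduation loses most of its value well before mid-career.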
This demands a fundamental shift from education as preparation to education as continuous adaptation. The relevant skills are no longer primarily domain knowledge, which AI can supply, but meta-skills. These become the new learning objectives: learning how to learn and adapting to new tools and domains rapidly; evaluating AI outputs and recognizing when generated content is accurate, biased, or fabricated; recognizing when human judgment is essential and understanding the limits of algorithmic decision-making; and integrating AI capabilities with human insight by combining computational power with contextual understanding.
This doesn’t represent an abdication of educational responsibility, but it acknowledges that our responsibility has changed. We’re no longer primarily in the knowledge-transfer business. We’re teaching judgment, sensemaking, and reflective thinking. The educators who thrive will be those who help students navigate an environment of abundant information and capabilities, not those who hoard scarce knowledge.
The Augmented Educator’s Mandate
If AI cannot be slowed, resistance becomes a failing strategy. This doesn’t mean uncritical acceptance. It means asking different questions: not “How do we keep AI out?” but “How do we prepare students for a world where AI is ubiquitous?”
This requires teaching different things. Rote knowledge loses value when AI can supply facts instantly. Basic skill execution matters less when AI can handle routine tasks. What remains irreducibly human centers on judgment: recognizing which problems matter, evaluating when AI outputs are trustworthy, deciding where values conflict, and creating genuinely novel insights.
What This Looks Like in Practice
The shift from evaluating products to evaluating thinking requires reconceiving assessment entirely. When AI can generate polished essays in seconds, traditional assignments reveal nothing about student understanding. The alternative centers on making thought processes visible and defensible.
Some institutions are moving toward oral examinations where students must explain their reasoning in real time, responding to questions that probe whether they understand the work they’ve submitted or merely managed its production. Others require students to document their collaboration with AI—submitting chat logs alongside final work, then analyzing critically where the AI succeeded, where it failed, and how they decided about integrating or rejecting its suggestions. This approach treats AI as a collaborator whose contributions must be evaluated rather than as a tool whose use must be hidden.
The challenge extends beyond individual assignments to credentialing itself. If AI can tutor anyone to competency in most domains, the signaling value of traditional degrees weakens. Educational institutions face pressure to show what their credentials represent that cannot be replaced through AI-augmented self-study. Some responses emphasize portfolios that show development over time—collections of work demonstrating how thinking grows across contexts. Others focus on competency demonstrations in authentic settings where the question is not whether students used AI, but whether they can perform effectively in environments where AI is available.
The deepest institutional challenge is self-disruption. This means questioning whether introductory survey courses still serve students when AI can provide customized explanations of any topic on demand. It requires designing programs around capabilities that remain distinctly human in AI-augmented contexts. Most fundamentally, it demands recognizing that teaching students to be sophisticated AI users—understanding when to trust outputs, how to evaluate quality, which tools suit which tasks—has become as essential as traditional literacies.
Stop Asking If We Can Slow AI
The question “Should we slow down AI?” reflects legitimate concerns about job displacement, societal disruption, and collective unpreparedness. But it assumes a control we don’t possess.
The mechanisms driving AI advancement are structural. Voluntary restraint benefits the least cautious actors. Supply-side dynamics mean the technology improves independently of institutional readiness. Regulatory attempts face coordination problems that border on the impossible. History shows that technological suppression succeeds only in the rarest circumstances.
The relevant question is not whether AI will slow down. It won’t. The question is how thoughtfully we can redesign education for a world where human and artificial intelligence work together. This presents an opportunity. Educators who thrive will help students develop capabilities that AI cannot replicate: nuanced judgment, ethical reasoning, creative synthesis, and the self-awareness to know when to trust AI outputs and when to override them.
The Innovator’s Dilemma has become humanity’s dilemma. But dilemmas present choices. We can resist changes we cannot stop, or channel that energy toward reimagining what education should become. The institutions that will matter are those willing to experiment, to preserve what’s genuinely valuable about human learning while letting go of practices optimized for scarcity.
For educators, the work is concrete: watch how AI capabilities evolve. Design educational experiences for the world students will actually inhabit. Advocate for the resources and institutional flexibility to do this work well. And remember that AI augmentation differs from replacement.
This transformation will be difficult. It requires questioning the assumptions we’ve held for generations. But difficulty differs from impossibility. The educators reading this are already adapting. The question is whether institutions will support that work or lag behind reality. We cannot slow the technology, but we can shape how it’s integrated into learning. That agency is the leverage point we actually possess.
The images in this article were generated with Nano Banana Pro.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.