Beyond Resistance: Why “Motivated Illiteracy” Won’t Save Us From AI
On the Crucial Distinction Between Refusing to Use AI and Refusing to Teach About It
Across some academic communities, a particular stance toward AI has become increasingly visible: the categorical dismissal. Not the measured critique or the principled concern, but the absolute rejection. The declaration that AI is a “catastrophic failure,” that engaging with it legitimizes harm, and that the only ethical response is refusal. This position often comes from educators who rarely use these tools themselves and who seek out failure stories while actively avoiding direct experience with the technology.
In a recent discussion on one of my LinkedIn posts about AI in education, someone introduced a term that captures this phenomenon precisely: “motivated illiteracy.” The phrase appears to be new rather than drawn from existing scholarship, yet it names the dynamic at play exactly. The term combines agency with deficit, suggesting not an inability to understand but an active expenditure of energy to remain unknowing. These critics, the argument ran, aren’t making informed judgments. They’re threatened by AI, perhaps justifiably so, and hoping it fades like 3D TV. So they selectively consume information that confirms their hope of its obsolescence while refusing the engagement that might complicate their position.
For educators and other professionals whose identity centers on cultivating literacy, such a position represents a fundamental paradox:
You cannot teach literacy if you are not literate yourself.
Educators who disengage from AI may believe they are protecting students from a harmful technology. But if that technology has become integral to the information landscape students will navigate, such disengagement becomes counterproductive rather than protective. The question becomes whether refusing to engage with AI represents principled resistance or a more troubling form of professional surrender.
The Architecture of Intentional Non-Knowing
To understand what makes illiteracy “motivated,” we need to distinguish it from simple ignorance. Simple ignorance is a void, an absence of knowledge that education can fill. Motivated illiteracy is something else: knowledge that gets actively repelled because its acquisition would prove costly, painful, or disruptive to one’s identity. It represents active shielding of the self from uncomfortable truths.
The philosophical roots of this concept run deep. Michele Moody-Adams, in her foundational work on moral philosophy, explored what she called “affected ignorance”—the phenomenon of choosing not to know what one can and should know. She questioned the belief that cultural context could be used as a defense to excuse harmful behavior. Instead, she argued that what appears as cultural blindness often masks a refusal to inquire. It involves maintaining a rigorous lack of curiosity precisely because asking certain questions would lead to uncomfortable answers.
This distinction proves crucial for understanding educator resistance to AI. When a professor claims, “I just don’t understand this new technology” or “I am a humanist, not a computer scientist,” they may be engaging in this form of affected ignorance. Information about AI, including its capabilities, its limitations, and its mechanics, is not esoteric knowledge locked in technical vaults. It fills major publications, accessible courses, and public discussions. The ignorance, then, cannot be attributed solely to lack of access. It may instead function as a strategy to maintain a comfortable professional status quo.
If an educator refuses to understand how large language models work and what they can and cannot do, they protect themselves from several uncomfortable recognitions: that their current pedagogical methods may need fundamental redesign, that their assessment strategies have become obsolete, and that skills they have spent a career teaching may hold diminished market value. The “illiteracy” regarding AI is not a passive lack of training. It becomes a mechanism of professional self-preservation, a way to avoid the arduous work of reconstructing one’s identity as an educator.
Daniel Williams has formalized this dynamic in his work on rational motivated ignorance. He argues that ignorance often proves instrumentally rational. When gaining new knowledge incurs high psychological or social costs but offers low individual returns, agents rationally choose to remain ignorant. For overworked educators, the costs of AI literacy appear immense: learning prompt engineering, understanding non-deterministic computer systems, grappling with data ethics, and redesigning every assignment to account for generative tools. The benefits may seem negligible or even negative: engagement would only legitimize a tool they find threatening and facilitate the automation that endangers their profession. And within their professional communities, expressing AI skepticism often functions as a signal of in-group loyalty, which carries its own rewards.
The “motivated illiteracy” thus functions as an economic calculation in an attention-scarce economy. Educators are not failing to learn; they are succeeding at conserving energy for what they value most: preserving pedagogical traditions they believe in. From this perspective, the motivation becomes comprehensible, even if its consequences remain problematic.
The Anatomy of Refusal: Three Distinct Stances
The above analysis, however useful, risks flattening a more complex landscape. Not all refusals to engage with AI take the same form. The research literature suggests at least three distinct stances, though careful examination reveals that only the first two represent genuine illiteracy.
The Deficit of Privilege represents what we might call the “ostrich stance.” Here, educators ignore AI because they remain protected by tenure, institutional inertia, or specialized roles that seem immune to disruption. They believe their methods are immortal and their expertise is permanently valuable. This form of motivated illiteracy stems from overconfidence in the stability of higher education’s traditional structures. It represents a genuine refusal to learn and therefore a failure of professional duty rooted in complacency.
The Deficit of Fear manifests as the “overwhelmed stance.” These educators ignore AI not out of vanity but because they are genuinely overwhelmed by the cognitive load of relearning literacy at this stage of their careers. This represents a psychological defense mechanism, understandable but ultimately maladaptive. The refusal to engage functions like avoiding a medical diagnosis: the information might help, but confronting it feels unbearable. This too represents actual illiteracy, though it deserves more compassion than contempt.
The Strategy of Resistance operates differently and complicates our entire framework. While the first two types represent genuine deficits in knowledge, this third stance emerges from informed opposition. Here, educators study AI enough to develop a critical understanding, then strategically refuse to adopt these tools to create and defend “human-only” zones. This represents not illiteracy but what we might call “critical refusal.” The distinction matters enormously, yet from the outside—particularly from the student perspective—this stance can appear indistinguishable from the first two types.
Communication theorist Ethan Plaut has developed this third category in his work on “strategic illiteracies.” Plaut challenges the normative assumption that literacy always proves beneficial. He defines strategic illiteracy as “purposeful, committed refusals to learn expected communication and technology skills.” But in many of his historical examples, the refusal is less a refusal to learn than a refusal to adopt. Socrates understood writing well enough to critique it; he simply refused to practice it. Colonized populations who rejected imperial languages often understood those languages well; they strategically performed incompetence as resistance.
The distinction between refusing to know and refusing to use becomes crucial for understanding contemporary AI resistance. The question is whether any educator’s stance represents genuine motivated illiteracy (the first two types) or informed critical refusal (the third type) performed in ways that appear as illiteracy. To answer that question, we need to examine what informed refusal actually looks like when practiced by those with the deepest literacy.
When Deep Literacy Produces Refusal
Consider the positions of computational linguist Emily Bender, computer scientist Timnit Gebru, and the colleagues with whom they coined the influential “stochastic parrots” critique of large language models. These researchers understand transformer architecture, training methodologies, and statistical pattern matching at levels few educators will ever reach. When they argue against deploying LLMs for certain tasks, their position emerges from deep technical knowledge about how these systems actually work. They refuse to use AI for information retrieval precisely because they understand how frequently such systems confabulate. Their refusal represents the opposite of illiteracy. It is literacy so complete that it reveals the technology’s fundamental limitations.
Similarly, in library science, Kay Slater and colleagues have articulated what they term “critical refusal”—a principled stance against adopting AI in libraries. This position is not illiteracy. Slater’s argument rests on a detailed understanding of how AI systems contradict core library values. Libraries are committed to intellectual freedom, but AI models trained on internet data automate and amplify bias. Libraries protect patron privacy zealously, but AI vendors operate through massive data harvesting. And libraries prioritize sustainability, whereas training large language models consumes extraordinary environmental resources.
Slater’s refusal emerges from a careful analysis of the technology’s material conditions and ethical implications. She understands perfectly well how to use these tools. She has studied their capabilities, examined their training processes, and evaluated their outputs. Her decision not to adopt them represents informed judgment, not defensive ignorance.
This kind of critical refusal deserves recognition as a legitimate intellectual position. When researchers or practitioners with deep technical knowledge conclude that a technology’s harms outweigh its benefits for their specific purposes, that judgment carries weight. Their literacy enables precise critique by identifying not that AI “doesn’t work” but that it works in ways that prove incompatible with certain values or goals.
The problem arises when this sophisticated stance gets adopted or performed by educators who lack the underlying literacy. The distinction between informed refusal and motivated illiteracy collapses when someone refuses to engage with AI not because they understand it too well but because they have deliberately chosen not to understand it at all. More critically, this collapse becomes damaging when educators who may possess informed refusal cannot recognize that their students need the literacy they themselves have gained to make similar judgments. Here is where even the most principled resistance can transform into something detrimental to education.
When Critical Refusal Becomes Educational Surrender
The informed critical refusal practiced by researchers like Bender, Gebru, and Slater represents legitimate intellectual work. These experts have earned the right to refuse adoption through deep engagement with the technology. Their literacy enables precise critique. The problem emerges when educators mistake this sophisticated position for a license to remain illiterate themselves, or worse, when informed educators cannot recognize the difference between their own refusal to adopt and their students’ need for literacy.
The crucial distinction lies between refusing to use AI and refusing to teach about it. An educator can legitimately decide not to employ AI tools in their own research or writing. They might conclude that the technology contradicts their values or proves inappropriate for their work. This represents informed judgment. But when that same educator extends their personal refusal into a ban on student use without providing education about the technology, informed refusal transforms into educational harm.
This dynamic reveals a category error that often goes unexamined: educators conflate their right to refuse personal adoption with their duty to provide literacy education.
When faculty prohibit AI use without teaching students how these systems work, students don’t stop using the tools. They simply use them covertly, without guidance on limitations or ethical considerations. The prohibition reproduces motivated illiteracy in a new generation: students learn to perform ignorance while deploying tools they don’t understand and spending energy to conceal their use rather than developing critical understanding. What begins as principled resistance ultimately transforms into professional surrender to the technology.
Failing to maintain this distinction harms students in three compounding ways. First, it creates educational inequality: students from well-resourced communities gain AI literacy through family connections and expensive programs, while students from under-resourced communities depend on their formal education for new literacies.
Second, it leaves students vulnerable to manipulation in an era where propaganda is generated by AI at scale. Without critical frameworks for evaluation, they become defenseless against systems they cannot recognize or understand.
Third, the educator’s stance becomes a form of gatekeeping, defending a literacy club that excludes not only under-resourced students but also the millions of adults for whom AI tools represent accessibility lifelines rather than threats to creativity. When educators refuse to teach AI literacy, they inadvertently harm most those students who can least afford that gap.
Critical Engagement Over Strategic Withdrawal
If motivated illiteracy proves counterproductive, and if even informed refusal can morph into educational surrender, what alternatives exist for educators genuinely concerned about AI’s harms? The answer lies in replacing both motivated illiteracy and refusal to teach with what we might call “critical literacy,” a deep engagement with these systems precisely to understand and help students navigate their problems.
Critical AI literacy requires understanding both capabilities and limitations. This means learning how large language models actually work, not worshipping their sophistication but recognizing their fundamental constraints. It means grasping the difference between statistical pattern matching and reasoning, between plausible-sounding text and accurate information, between processing language and understanding meaning, and, perhaps most importantly, between deterministic and non-deterministic computing. This technical literacy enables more effective critique than refusal ever could.
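To make that last distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not how any real language model is implemented, and the toy tokens and probabilities are invented for the example; it only shows why a greedy rule that always picks the most likely next token behaves deterministically, while sampling from the same distribution, as production models typically do, can return different outputs for identical inputs.

```python
import random

# Toy next-token distribution for the prompt "The capital of France is".
# The tokens and probabilities are invented for illustration only.
next_token_probs = {"Paris": 0.80, "Lyon": 0.12, "beautiful": 0.08}

def greedy_pick(probs):
    # Deterministic: the same input always yields the highest-probability token.
    return max(probs, key=probs.get)

def sampled_pick(probs, temperature=1.0):
    # Non-deterministic: tokens are drawn in proportion to their
    # (temperature-adjusted) probabilities, so repeated calls can differ.
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy_pick(next_token_probs))                       # identical every run
print([sampled_pick(next_token_probs) for _ in range(5)])  # can vary between runs
```

The point of the sketch is pedagogical rather than architectural: a student who has seen why identical prompts can produce different answers is better equipped to judge when that variability is acceptable and when it is not.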
Critical engagement also requires studying the material conditions of AI development. Who built these systems? Whose labor made the training possible? What environmental costs do these systems impose? What economic structures do they reinforce or disrupt? Understanding these dimensions transforms AI from an abstract technological threat into a concrete set of social and economic relations that can be analyzed, contested, and potentially transformed.
For educators, critical engagement means teaching with AI rather than teaching around it or against it. This involves designing assignments that require students to use these tools transparently, then reflect critically on the results. It means creating assessments where AI assistance is permitted but insufficient and where students must show the thinking that leads to and extends beyond AI-generated content. It means developing curricula that help students understand when these tools provide value and when they prove inappropriate or harmful.
Critical engagement requires educators to model the behavior we expect from students: using powerful tools while maintaining human judgment, delegating appropriate tasks while preserving intellectual agency, leveraging efficiency while protecting depth of thought. This modeling cannot happen if educators remain either genuinely illiterate or performatively so. It requires becoming the most literate members of the educational community regarding both possibilities and perils. And this holds true even for educators who, through their own literacy, have concluded these tools demand cautious, critical, limited use or even no use at all.
The decision we face isn’t between defiance and surrender, but between meaningful critique built on understanding and unproductive opposition rooted in an unwillingness to educate. Motivated illiteracy may feel like self-protection; informed refusal may feel like a principled stance. But when either prevents educators from providing the literacy students need to navigate an AI-saturated world, both become professional surrender.
Real resistance requires becoming literate enough to teach effectively, even about tools we choose not to use ourselves. That’s harder than staying comfortably unknowing. It’s also the only approach that serves our students.
The images in this article were generated with Nano Banana Pro.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.