When Machines Fill In the Blanks
Why “Hallucination” Is the Wrong Word for the Right Phenomenon
This post follows my standard early access schedule: paid subscribers today, free for everyone on March 3.
I have never quite understood why the term “hallucination” causes so much confusion when applied to large language models. If you set aside the word itself and look at what the phenomenon actually involves, it turns out to be remarkably human in nature. Educators, in particular, should find it familiar.
Consider a hypothetical scenario. A teacher poses a question to a student. The student, not knowing the answer, does not respond with “I don’t know.” Instead, they reach for the most plausible answer available: the one that seems most likely to satisfy the teacher. The student fills the gap with what feels correct, shaped by the context of the lesson and the tone of the question. This instinct to produce a probable response rather than admit uncertainty is so deeply embedded in classroom culture that we rarely stop to think about it.
This is, in functional terms, exactly what a large language model does when it “hallucinates.” It generates the most statistically likely continuation of a prompt, shaped by the patterns in its vast training data and by the implicit expectation that it should produce an answer.
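To make that concrete, here is a deliberately simplified sketch in Python. The prompt, the candidate words, and the probabilities are all invented for illustration; real models score enormous vocabularies token by token over many steps. The only point is the selection rule: the system picks the most probable continuation, and “I don’t know” is rarely the most probable continuation of anything.

```python
# Toy illustration only: these probabilities are invented, and a real model
# scores tens of thousands of tokens across many steps, not four phrases.
prompt = "The capital of Australia is"

# Hypothetical scores a model might assign to possible continuations.
next_word_probs = {
    "Sydney": 0.46,        # fluent and plausible-sounding, but wrong
    "Canberra": 0.41,      # correct
    "Melbourne": 0.09,
    "I don't know": 0.04,  # admitting uncertainty is rarely the likeliest continuation
}

# Greedy decoding: always take the single most probable continuation.
best_word = max(next_word_probs, key=next_word_probs.get)
print(f"{prompt} {best_word}.")  # prints: The capital of Australia is Sydney.
```

Nothing in that selection step consults the world to check whether the answer is true; it only asks which continuation is most likely, which is exactly the student’s instinct described above.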
For a long time, this parallel seemed so intuitive to me that I assumed everyone interpreted the term the same way. That changed recently during a comment exchange on a LinkedIn post, where I found myself in a discussion with a fellow educator about AI reliability. As we went back and forth, I realized that this colleague understood “hallucination” as something fundamentally different from what I meant by it. For them, a hallucination signaled an abnormality, a malfunction. If a brain hallucinates, the reasoning went, it is broken. Something has gone wrong at the level of the system itself. And if an AI hallucinates, by extension, the technology must be flawed in some deep, perhaps irreparable, way.
That conversation stayed with me. It made me realize that the problem with “hallucination” is not primarily technical. It is rhetorical. The word carries connotations that actively mislead educators about what these systems are doing and why.
The Term Has a Longer History Than Most People Think
The popular assumption is that “hallucination” was coined recently, perhaps by marketing departments looking to humanize chatbots and soften the perception of their errors. But the historical record tells a different story. The term has been in use within computer science for roughly three decades, and its origins have nothing to do with large language models.



