The confabulation argument is the real contribution here, and it reframes the user relationship more honestly than "hallucination" does. Filling in the blanks from incomplete patterns is precise, and it carries the right implication: these outputs require verification, not blanket distrust.
But the essay stops just before the uncomfortable room.
"Knowing to ask if that's actually true" assumes you have something to check against. The student's probable answer can be verified against the textbook. But most people use the model precisely because they don't have the textbook — because they don't already know. The verification loop requires prior knowledge the user may not have. Which brings it quietly back to Dunning-Kruger: if you knew enough to catch the confabulation you probably didn't need to ask in the first place.
The deeper problem is that the model gives no surface to read. With humans, uncertainty has tells. The slight pause before committing. The answer that restates your question back at you. The confidence pitched just a fraction too high for the complexity involved. You read those signals constantly without consciously registering them — decades of social calibration working in the background.
The model has none of that. Smooth is just the baseline. It's equally fluent whether it knows or is filling in the blanks, and the fluency itself is what makes the confabulation hard to catch. No hesitation, no micro-tells, no wobble in the delivery.
So the user ends up carrying all the epistemic weight. The tool that was supposed to reduce cognitive load redistributed it — from finding information to evaluating information. Which is the harder job. And the one that requires exactly the knowledge you went to the model to get.
Thanks for the comment and restack.
I think that touches on a bigger question related to the hyperreality we find ourselves in. We all carry all the epistemic weight now, constantly, everywhere. AI is just a symptom, not a cause. I wrote about that a while back.
Just brilliant! Unfortunately I have to agree that "hallucination," with all its drug-related baggage, is too good a term to give up. However, confabulation is quite similar to a well-loved (by GenAI) term, "conflation." So it may make it into common parlance by way of AI.
Thanks! 🙏