The AI Mirror: What Do We See When We Look at Our Own Intelligence?
The debate over artificial intelligence reveals less about the limits of machines and more about the profound mysteries of the human mind.
In response to a Time article, a well-respected creative professional recently posted the following thoughts on LinkedIn about AI’s presumed emergent ability to deceive in pursuit of its programmed objectives:
“This inflammatory nonsense about AI strategically lying as an act of self-preservation is getting too much press coverage. AI — ALL of AI — has no ‘sense of self’. It is not aware of its own mortality because it has no mortality. It has no ‘intentional consciousness’. It does not have a ‘selfish gene’ because it does not have a gene. AI is not even ‘intelligent’ in the human sense. And the sky is not falling.”
This perspective is intuitive, comforting, and widely shared. It places human cognition on a pedestal, safely beyond the reach of the statistical mimicry of Large Language Models (LLMs). The argument feels right because it aligns with our deeply personal, subjective experience of being human. We feel like we have intentional consciousness. We feel like we have a sense of self.
But there’s a hidden assumption in this line of thinking, and it’s a big one. It presumes that we have a clear, stable, and scientific understanding of what “human intelligence” actually is.
The problem is we don’t.
In a previous guest post on the AI EduPathways Substack, I explored how much of our lives is governed by processes outside our direct awareness, challenging the idea that our conscious self is always in control. The current debate about AI pushes this challenge even further. Before we can confidently declare what AI is not, we have to honestly ask ourselves:
What do we really know about the nature of our own intelligence?
The Century-Long Quest to Define “Smarts”
For over a hundred years, psychologists have tried to pin down and measure human intelligence, and the result has been a landscape of competing theories and unresolved debates. The journey began with Charles Spearman’s proposal of a single “general intelligence factor,” or ‘g’—a core mental capacity that influences performance on all cognitive tasks. This idea, that some people are just generally “smarter” than others, still forms the basis of most IQ tests.
But this one-size-fits-all model quickly seemed too simple. Theorists like Raymond Cattell argued that ‘g’ was really made up of two major components: fluid intelligence (the ability to reason and solve new problems) and crystallized intelligence (the accumulation of knowledge and skills over a lifetime).
This was just the beginning of the great intellectual fracturing. As educators, we are all familiar with Howard Gardner’s hugely influential Theory of Multiple Intelligences, which proposed at least eight distinct, autonomous intelligences, including musical, bodily-kinesthetic, and interpersonal smarts. Gardner’s theory was a breath of fresh air in education because it validated the diverse talents we see in our students every day.
However, within the scientific community, Gardner’s theory has been heavily criticized for lacking empirical evidence, with many psychologists arguing that his “intelligences” are better described as talents or abilities. Some have even labeled it a “neuromyth,” pointing out that modern neuroscience doesn’t support the idea of eight independent brain systems corresponding to each intelligence.
The point isn’t to re-litigate these debates, but to highlight the fundamental uncertainty. From a single ‘g’ factor to multiple intelligences to theories of emotional and practical intelligence, the only real consensus is that there is no consensus. Our scientific understanding of our own minds is far more limited and contested than we like to admit.
Is Your Brain Just a Prediction Machine?
While psychometricians debated the structure of intelligence, neuroscientists have been trying to understand its mechanics. One of the most powerful paradigms to emerge from this work is the idea of the predictive brain.
This framework proposes that the brain is not a passive organ that simply reacts to information from the senses. Instead, it is a proactive, prediction-generating machine. Your brain is constantly building a model of the world and using it to guess what’s going to happen next. What we experience as perception is the result of the brain comparing its predictions to the actual sensory input it receives. When there’s a mismatch—a “prediction error”—the brain updates its model.
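To make that loop concrete, here is a minimal sketch in Python of the predict-compare-update cycle. It is a toy illustration rather than a neuroscience model; the hidden signal, the noise, and the learning rate are all invented for the example.

```python
import random

# A toy predict-compare-update loop. The "brain" keeps an internal estimate
# of a hidden quantity, predicts each incoming sensory signal, and nudges
# its model in proportion to the prediction error. All values here are
# illustrative, not drawn from any actual neuroscience model.

true_signal = 10.0    # the actual state of the world
estimate = 0.0        # the brain's current model of that state
learning_rate = 0.2   # how strongly each prediction error updates the model

for step in range(20):
    sensory_input = true_signal + random.gauss(0, 1)  # noisy observation
    prediction_error = sensory_input - estimate       # the mismatch signal
    estimate += learning_rate * prediction_error      # update the model
    print(f"step {step:2d}: estimate={estimate:5.2f}  error={prediction_error:+5.2f}")
```

Run it and the estimate converges toward the hidden signal while the errors shrink, which is the sense in which prediction error drives learning in this framework.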
As philosopher Andy Clark describes it, perception is a form of “controlled hallucination.” Your brain is essentially hallucinating your reality, and this hallucination is constantly being reined in by the senses. This is why we can read messy handwriting or understand a conversation in a noisy room; our brain is filling in the gaps based on its predictions.
If this sounds familiar, it should: it is strikingly similar to how an LLM works.
The Digital Mirror
At its core, an LLM is a next-token predictor. It is trained on a colossal amount of text, and its entire function is to estimate, from the sequence so far, a probability distribution over what comes next. This is often the basis for dismissing it as “not real intelligence.”
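To see what next-token prediction amounts to at its simplest, here is a toy sketch in Python that shrinks the idea down to bigram counting. Real LLMs use deep neural networks trained on vast corpora; the tiny corpus and the counting scheme here are purely illustrative.

```python
from collections import Counter, defaultdict

# A toy next-token predictor: estimate the probability of each word given
# only the word before it, by counting bigrams in a tiny made-up corpus.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def next_word_distribution(word):
    """Return each candidate next word with its estimated probability."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Scale that counting idea up from bigrams over eleven words to a deep network over trillions of tokens and you have, in caricature, the predictive core of an LLM.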
But neuroscience shows our brains do something remarkably similar. When we listen to someone speak, our brain is constantly anticipating the upcoming words. This is visible in our brainwaves; an unexpected word in a sentence will produce a distinct neural signal (known as the N400) that reflects a prediction error. In fact, studies have shown a direct alignment between the word probabilities generated by LLMs and the neural signals in a human brain listening to speech.
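Those alignment studies typically work with “surprisal,” the negative log probability a model assigns to each word as it arrives: the less expected the word, the higher the surprisal, and the larger the prediction-error response such as the N400. The sketch below shows the calculation on made-up probabilities; in an actual study they would come from a real language model.

```python
import math

# Surprisal in bits for each candidate continuation of a sentence.
# These probabilities are invented for illustration only.

context = "I take my coffee with cream and"
candidate_probabilities = {
    "sugar": 0.60,   # highly predictable continuation, low surprisal
    "honey": 0.05,
    "socks": 0.001,  # anomalous word: large surprisal, large N400
}

for word, p in candidate_probabilities.items():
    surprisal = -math.log2(p)  # bits of surprise at seeing this word
    print(f"{context} {word!r}: p={p:.3f}, surprisal={surprisal:.2f} bits")
```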
So, if both the brain and the LLM are prediction machines, what’s the difference? This is where the debate gets really interesting, and it’s best captured by two pioneers of AI, Geoffrey Hinton and Yann LeCun.
The Hinton View: Hinton argues that the process of learning the statistical relationships between words on such a massive scale is a form of understanding. For him, LLMs are “very like us.”
The LeCun View: LeCun, in contrast, believes current LLMs are a “dead end” on the path to true artificial general intelligence. He argues they lack a crucial component: a world model. Humans build their predictive models not just from text, but from embodied interaction with the physical world. We have bodies. We trip and fall. We learn physics by watching a ball roll off a table, not by reading about gravity. LLMs are, in his view, “brains in a vat,” disconnected from the reality that language describes. They can’t truly reason or plan because they don’t understand the world they’re talking about.
LeCun’s famous challenge is this: we can build an AI that passes the bar exam, but we can’t build one that’s as smart as a cat in navigating the physical world. This highlights the core difference: the human brain predicts to guide adaptive action in the world. An LLM predicts to complete a linguistic pattern.
The Humility of Not Knowing
So where does this leave us? We return to the opening quote, which dismisses AI for lacking a “sense of self” and “intentional consciousness.”
It’s true that LLMs almost certainly don’t have these things. But the predictive processing framework suggests that our own “self” is not a mysterious ghost in the machine either, but rather another set of predictions the brain makes about its own bodily and mental states. And our sense of free will might be the story our brain tells itself after unconscious neural processes have already set actions in motion.
The rise of AI is a mirror. It reflects not only our technological progress but also the vast, uncharted territory of our own minds. The uncomfortable truth is that confident dismissal of AI’s intelligence often just reveals a deeper misunderstanding of our own.
As educators, this is a profound moment. It challenges us to move beyond simplistic definitions and embrace the complexity of cognition, both human and artificial. Instead of closing the book with definitive statements about what AI can’t do, perhaps we should open it to the humbling and exciting possibility that we have only just begun to understand what it means to be intelligent at all.