The Murder of Reality: When the Real Becomes Unbelievable
How Baudrillard’s Hyperreal Demands a New Approach to Truth in Education
I’ve watched the same pattern repeat itself dozens of times over the past few months. Someone shares a video of humanoid robots performing martial arts, recovering from violent kicks, or dancing with fluid precision. Within a few minutes, the comments pour in: “Obviously CGI.” “Totally fake.” “AI slop!” “Nice try, but the physics are off.” The skeptics point to what they perceive as telltale signs of computer rendering or AI generation. They only trust their own judgment, dismissing those who disagree.
But the videos are real. The robots exist. The company behind them, Unitree Robotics in Hangzhou, China, manufactures physical humanoid robots that can be purchased, tested, and verified by independent observers. Whether any demonstration video shows cherry-picked successful runs or includes staging is a reasonable question. But the persistent claim of the skeptics is much more fundamental: that these are not robots at all, but just computer or AI-generated imagery.
This phenomenon represents more than a failure of media literacy. It signals something far more troubling about the epistemic environment we now inhabit. We have entered what the French theorist Jean Baudrillard called “the hyperreal”—a condition where the distinction between reality and simulation has not merely blurred but collapsed entirely. If Baudrillard is right, then we are witnessing not just confusion about what is real, but the murder of reality itself.
When Reality Looks Rendered
The Unitree robots present an unusual challenge to human perception. These machines move with mathematical precision that biological systems cannot match. Their motors update thousands of times per second, allowing them to stop without the oscillation or settling that characterizes human movement. When pushed or kicked, they recover so instantaneously that the corrective motion appears scripted. The robots maintain perfectly level head positions even while their legs adjust rapidly to maintain balance—a remarkable achievement of control engineering that looks uncanny to eyes trained on biological norms.
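To make the control engineering concrete, here is a toy simulation, a sketch only: the gains, the unit-inertia model, and the 20 Hz “human reflex” comparison are all invented for illustration, not taken from Unitree’s actual controller. It shows why correction applied a thousand times per second leaves no visible wobble for the eye to latch onto.

```python
# A toy comparison (invented gains, unit inertia, no gravity): the same
# proportional-derivative (PD) balance controller run at 1 kHz versus a
# human-reflex-like 20 Hz. A sketch, not Unitree's actual control stack.

def settle_time(loop_hz, kp=400.0, kd=40.0, shove=0.2, tol=0.005, dt=0.0005):
    """Seconds until a shoved segment settles within `tol` rad of upright."""
    theta, omega = shove, 0.0          # tilt (rad) and tilt rate (rad/s)
    torque, next_update = 0.0, 0.0
    for step in range(int(5.0 / dt)):  # simulate up to five seconds
        t = step * dt
        if t >= next_update:           # the controller acts only at loop_hz
            torque = -kp * theta - kd * omega
            next_update += 1.0 / loop_hz
        omega += torque * dt           # integrate the unit-inertia dynamics
        theta += omega * dt
        if abs(theta) < tol and abs(omega) < tol:
            return t
    return float("inf")                # never settled: visible oscillation

print(f"1000 Hz loop settles in ~{settle_time(1000):.2f} s, no visible wobble")
print(f"  20 Hz loop settles in ~{settle_time(20):.2f} s (inf means it keeps oscillating)")
```

The specific numbers are beside the point. The qualitative gap is what matters: the fast loop damps the shove smoothly, the slow loop visibly oscillates, and it is precisely the absence of that oscillation that viewers read as “scripted.”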
The videos themselves compound the problem. Shot at high frame rates to showcase the robots’ agility, they lack the motion blur that cinema has conditioned us to associate with physical reality. This hyper-smooth aesthetic, sometimes called the “soap opera effect,” paradoxically signals artificiality to viewers accustomed to traditional film. The lack of blur makes fast movements appear crisp in ways that resemble video game rendering rather than objects captured through lenses interacting with light.
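The arithmetic behind this effect is straightforward. As a rough sketch (every number below is assumed for illustration, not measured from any specific video), blur length is simply the object’s speed multiplied by how long the shutter stays open each frame:

```python
# Rough motion-blur arithmetic (all numbers assumed for illustration).
# A "180-degree" shutter stays open for half of each frame interval.

def blur_px(speed_px_per_s, fps, shutter_fraction=0.5):
    exposure_s = shutter_fraction / fps   # how long the shutter is open
    return speed_px_per_s * exposure_s    # streak length in pixels

limb_speed = 2400  # a fast limb crossing the frame at 2400 px/s (assumed)
for fps in (24, 60, 120):
    print(f"{fps:>3} fps: ~{blur_px(limb_speed, fps):.0f} px of smear per frame")
# 24 fps -> ~50 px of smear (reads as "filmic"); 120 fps -> ~10 px
# (reads as crisp, game-engine-like motion, though both capture real light).
```

Five times less smear at 120 fps than at 24 fps, from the same real photons hitting the same real sensor.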
The result is a crisis of recognition. The robots are so technically advanced that they violate our embodied understanding of how physical objects should behave. Faced with this violation, the brain opts for the explanation that requires the least conceptual revision: the video must be fake. This is a rational response to an environment saturated with synthetic media. The problem is that it insulates us from recognizing genuine technological progress when it appears before us.
Baudrillard’s Prophecy and the Collapse of the Real
Jean Baudrillard, primarily known for his groundbreaking 1981 work “Simulacra and Simulation,” described a progression through which media representations—images, text, video—relate to reality. These representations begin by faithfully reflecting the world, like a map that corresponds to actual territory. They then move to masking or distorting reality, as altered photographs or edited footage misrepresent events. In a third phase, media mask the absence of reality itself, creating appearances where no original exists. Baudrillard then identified a fourth and final phase where the representation bears no relation to any reality whatsoever. It becomes its own pure simulacrum, generating what he termed the “hyperreal.”
In the hyperreal, the distinction between real and simulated becomes meaningless because simulation has become more real than reality itself. The map precedes the territory. The model becomes the thing modeled. We no longer ask, “Is this real or fake?” because those categories have ceased to function. Instead, we ask only, “Does this conform to my expectations of what real or fake should look like?”
This is what the Unitree skeptics are showing us. They are not evaluating evidence; they are comparing the video against an internalized model of what advanced robotics “should” look like. When the actual robot exceeds this model, it fails the test of reality. The simulation—the internalized mental model of what robots can do—has become more authoritative than the physical object itself.
Baudrillard argues, most explicitly in his 1995 book “The Perfect Crime,” that this collapse amounts to a kind of murder. Reality is not simply hidden or distorted; it is rendered irrelevant. The phrase “the murder of reality” captures this violence: something foundational to human knowing has been destroyed, and we are left navigating a world of signs that refer only to other signs, never to any external truth.
The Evidence: We Cannot Tell Anymore
The Unitree phenomenon is not isolated. Research on the human ability to detect AI-generated content reveals a consistent and troubling pattern: people perform barely better than chance, and often worse.
A 2025 study by iProov found that only 0.1% of participants could reliably distinguish AI-generated deepfakes from authentic media. Even among those who claimed confidence in their judgments, accuracy rates hovered near random guessing. Another study by Robin S.S. Kramer and colleagues showed that people frequently misidentify authentic images as AI-generated when those images contain elements that seem “too perfect” or “too strange.” This is the same perceptual trap we see with the Unitree videos.
Research published in MDPI’s Journal of Imaging examined human interpretation of AI-generated versus human-made images across diverse contexts. The findings confirmed that participants could not distinguish between the two with statistical reliability. More troubling, participants often expressed high confidence in incorrect judgments, suggesting that subjective certainty provides no protection against error.
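To see what “statistical reliability” means here, consider a standard binomial test with illustrative numbers (these are not data from the studies above): a participant who sorts 100 items and gets 55 right sounds better than chance, but the arithmetic disagrees.

```python
from scipy.stats import binomtest

# Illustrative numbers, not data from the cited studies: a participant
# labels 100 items as real or AI-generated and gets 55 of them right.
result = binomtest(k=55, n=100, p=0.5, alternative="greater")
print(f"55/100 correct: p = {result.pvalue:.3f}")  # ~0.184, consistent with guessing

# Even 58/100 only reaches p ~ 0.067. Felt confidence contributes nothing
# to this calculation; only the count of correct calls does.
print(f"58/100 correct: p = {binomtest(58, 100, 0.5, 'greater').pvalue:.3f}")
```

This is part of why confident participants fare no better than hesitant ones: confidence never enters the test.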
This pattern extends beyond images to text, where AI detection tools struggle to maintain accuracy above 60%, and human readers perform even worse. The fundamental problem is that people rely on heuristics that are rapidly becoming obsolete as generative systems improve. We look for telltale artifacts, unnatural textures, impossible shadows, or stilted language. But as these systems advance, the artifacts disappear, and our detection strategies fail.
The implications are profound. We cannot trust our eyes, our ears, or our intuitions about authenticity. The sensory apparatus that evolution designed to navigate physical reality provides no reliable guidance in an environment where simulations achieve fidelity indistinguishable from originals. This is not a temporary problem that better education will solve; it is a structural condition of the hyperreal.
The Liar’s Dividend and the Weaponization of Doubt
The collapse of reliable detection creates what legal scholars call the “Liar’s Dividend”—the benefit that liars gain when the public loses faith in evidence itself. If authentic videos can be dismissed as deepfakes, then politicians can deny uncomfortable footage of their words or actions. If people believe that seeing is no longer believing, then all visual evidence becomes suspect, and anyone can claim “that’s not real” about content they find inconvenient.
We have already seen this strategy deployed. When audio recordings emerge of problematic statements, public figures now routinely claim the recordings are AI-generated, regardless of authentication. The mere possibility of synthetic media provides plausible deniability for actual reality. The consequence is not only that we believe lies, but that we lose the capacity to adjudicate between truth and falsehood at all.
Baudrillard anticipated this collapse. In the hyperreal, the question is not “Did this happen?” but “Does this match the narrative model we have constructed?” Reality becomes whatever story has the most compelling presentation, the widest circulation, and the strongest emotional resonance. Evidence loses its anchoring function. We drift in a sea of competing simulations, none more “real” than any other.
Critical AI Literacy as Epistemic Self-Defense
The educational response to this crisis cannot be simple detection training. Teaching students to spot AI-generated content by looking for specific artifacts is a losing strategy, because those artifacts change with each new model release. What worked to identify AI output six months ago fails today. What works today will fail tomorrow.
Instead, education must develop what we might call critical AI literacy—a deeper understanding of how generative systems operate, what they can and cannot do, and most importantly, how to verify claims through multiple independent sources rather than relying on a single piece of evidence.
Critical AI literacy involves several interconnected capabilities. First, students need to understand the probabilistic nature of generative AI. These systems produce outputs that seem plausible because they have learned statistical patterns from training data. They do not “know” truth; they generate convincing text or images by predicting what should come next based on what came before. This understanding helps students recognize that plausibility is not evidence of truth.
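A toy sketch makes this concrete. The probability distribution below is invented for illustration (a real model learns its distributions from vast training corpora), but the mechanism is the instructive part: generation is sampling from learned probabilities, and no step consults the world.

```python
import random

# Toy next-token distribution, invented for illustration. A real model
# learns such probabilities from training data; crucially, nothing below
# consults the world, so "likely" and "true" are never distinguished.
next_token_probs = {
    "Paris": 0.86,      # frequent after this prompt, and also true
    "Lyon": 0.08,       # plausible-sounding but false
    "Marseille": 0.05,  # plausible-sounding but false
    "Atlantis": 0.01,   # implausible and false
}

def sample_next(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of France is", sample_next(next_token_probs))
# Most runs print "Paris"; occasionally one prints "Lyon" -- fluent,
# confident in tone, and wrong, with no internal flag marking the difference.
```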
Second, students must learn to trace provenance. Where did this image originate? Who first posted it? What other sources corroborate or contradict it? In the hyperreal, single pieces of evidence—no matter how convincing they appear—cannot establish truth. Only patterns across multiple independent sources provide reasonable confidence.
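In practice, tracing provenance starts with mundane, fallible signals. The sketch below (using the Pillow imaging library; the file name is hypothetical) inspects one such signal, embedded camera metadata, and the comments flag why it can only ever be one clue among several:

```python
from PIL import Image, ExifTags

# One weak provenance signal among many: embedded camera metadata.
# Metadata can be stripped or forged, so this is a clue to weigh, never
# a verdict; it only means something alongside reverse image search,
# the original posting context, and independent corroboration.
def summarize_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        return "No EXIF data (typical of screenshots, re-uploads, and synthetic images)."
    named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
    camera = {k: named[k] for k in ("Make", "Model", "DateTime", "Software") if k in named}
    return camera or "EXIF present, but no camera fields."

print(summarize_exif("downloaded_image.jpg"))  # hypothetical file name
```

The habit the code models matters more than the code: treat any single signal as input to a judgment, never as the judgment itself.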
Third, and perhaps most fundamentally, students need to develop what I would call epistemic humility alongside epistemic agency. Epistemic humility means accepting that we cannot always know with certainty what is real. The perceptual confidence we feel when viewing a video or image is not a reliable guide. We must hold our judgments lightly, remaining open to correction. Epistemic agency, its necessary complement, means refusing to surrender entirely to uncertainty. It means actively seeking evidence, comparing sources, and constructing provisional understandings that we revise as new information emerges.
This combination of being humble about certainty but active in inquiry represents a fundamental reorientation of how we approach knowledge. It acknowledges the collapse Baudrillard described while refusing to accept total epistemic nihilism. We cannot return to naïve realism, where seeing equals believing. But we need not embrace complete relativism either. We can develop practices of verification that provide reasonable grounds for belief, even when absolute certainty remains unattainable.
Classrooms as Reality Anchors
If reality has indeed collapsed in the way Baudrillard described, then educational institutions face a responsibility they may not have fully recognized: they must serve as anchors to reality in an environment where such anchors are increasingly rare.
This does not mean clinging to outdated pedagogies or refusing to engage with AI tools. Rather, it means deliberately creating spaces where students encounter unmediated reality through direct experience. Laboratory experiments let students observe phenomena firsthand; fieldwork places them in contact with physical environments; primary source analysis requires wrestling with original documents rather than summaries; and collaborative projects demand navigating real human interactions in all their complexity and unpredictability.
The value of these experiences lies not in the subjects they cover, but in their ability to develop students’ capacity to recognize and navigate reality itself. When a student conducts an experiment and observes results that contradict their expectations, they encounter something that no simulation can replace: the stubborn objectivity of physical law. And when they interview a community member for an oral history project, they uncover the irreducible particularity of individual human experience that generative AI can only approximate.
Classrooms can also serve as spaces for collective verification. When students encounter confusing or contested content, teachers can guide them through collaborative investigation. What sources can we find? Do they agree or disagree? What might explain the discrepancies? This process models the epistemic practices that adults must employ to navigate the hyperreal: refusing to accept single sources, comparing accounts, reasoning about reliability.
Perhaps most importantly, educational institutions must resist the temptation to treat all digital content as equally valid or invalid. Not everything is a simulation. Reality has not vanished completely; it has become harder to identify and verify. This distinction matters enormously. If educators adopt a position of total skepticism, they abandon students to epistemic chaos. If they adopt naïve trust, they leave students vulnerable to manipulation. The difficult middle path requires teaching students to make careful judgments about reliability while acknowledging that those judgments remain provisional.
Developing Epistemic Agency in an Age of Uncertainty
In this context, epistemic agency deserves particular attention. Agency implies the capacity to act rather than merely react. It means students must learn not just to consume information critically, but to actively construct knowledge through inquiry.
This differs from traditional information literacy, which often focuses on evaluating the credibility of sources. That remains important, but insufficient in the hyperreal. When authoritative sources can be perfectly simulated, when credentials can be fabricated, when even video evidence becomes unreliable, students need something more than checklists for assessing trustworthiness.
Epistemic agency requires developing the confidence and capability to pursue truth through systematic investigation. It means formulating questions, identifying what evidence would answer those questions, seeking that evidence across multiple channels, and synthesizing findings into coherent understandings. And it means recognizing when available evidence is insufficient and having the intellectual courage to say, “I don’t know yet” rather than accepting convenient answers.
Cultivating epistemic agency requires a significant shift in how we structure learning experiences. Too often, education presents knowledge as settled. But in the hyperreal, knowledge is necessarily provisional. Students need practice with genuine uncertainty, with questions that have no definitive answers, and with situations where they must weigh competing evidence and make reasoned judgments despite incomplete information.
The Educator’s Responsibility in the Hyperreal
For those of us who work in education, the collapse of reality presents both a crisis and an opportunity. Our students will navigate a world where distinguishing truth from fabrication requires constant vigilance and sophisticated judgment. They will encounter political deepfakes, synthetic academic content, and fabricated histories. But the hyperreal also forces us to confront questions that education has often avoided: What does it mean to know something? How do we distinguish reliable from unreliable information? What obligations do we have to the truth in a world where truth has become slippery?
If we take Baudrillard seriously, education cannot continue with business as usual. We cannot rely on artifact detection or traditional source evaluation. Instead, we must help students develop a new relationship with truth: more skeptical than previous generations could afford to be, more active in verification, more comfortable with uncertainty while remaining committed to the pursuit of understanding. They must learn to build knowledge collaboratively, to check and cross-check, to hold conclusions provisionally while taking them seriously enough to act upon.
This requires intellectual humility from educators, who must model uncertainty even as they guide students. It takes time in curricula already strained by competing demands. It pushes against assessment systems designed to measure mastery rather than navigation of ambiguity. But the alternative is abandoning students to the hyperreal without the epistemic agency they need to pursue truth actively rather than merely consume information passively.
The murder of reality is not complete. Despite Baudrillard’s apocalyptic rhetoric, reality persists, stubbornly asserting itself even when we misidentify it as simulation. What has collapsed is not reality itself but our unreflective confidence in our ability to perceive it directly.
Perhaps that collapse creates space for a more mature relationship with truth—one that recognizes both the difficulty of knowing and the necessity of trying. Our classrooms can become laboratories for that maturity, spaces where students learn to navigate uncertainty without surrendering to nihilism, to question confidently without becoming paralyzed by doubt. This may be the most important work education can do in the age of the hyperreal.
The images in this article were generated with Nano Banana Pro.
P.S. I believe transparency builds the trust that AI detection systems fail to deliver. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.