When Algorithms Become Oracles: Faith, Human Nature and the Future of Learning
Why our oldest human impulses might be the key to navigating our newest technology
The following post represents a slight departure from my usual focus on the purely pedagogical implications of AI. I recently encountered an idea so compelling that it has pulled me past the fringes of my academic comfort zone and into the realms of sociology, theology, and religious studies. This idea fundamentally reframes our entire conversation about the relationship between humans and technology, and offers crucial insights for how we should approach AI in our classrooms.
Though I'm venturing into unexpected theological territory, this isn't entirely foreign ground for me. In Game Studies, one of our foundational texts is Johan Huizinga's “Homo Ludens,” a classic that, as few people know, also serves as fundamental reading in certain Protestant theologies. While that has been the extent of my formal contact with religious studies, it sparked a curiosity that was recently reignited by a fascinating video I stumbled upon.
The video, by the YouTube creator "Genetically Modified Skeptic," presented a thesis on the rise of "AI religion." When most people hear that term, they likely picture the more extreme examples: the "tech bro" culture that treats code as gospel and CEOs as prophets, or the social media influencers who elevate AI chatbots to the status of digital oracles. But the creator's argument was far more nuanced and, frankly, much more profound and relevant for educators.
He argued that the emergence of religious-like thinking around AI isn't merely about glorifying a new tool. Instead, it represents a deeply human, almost predictable response to the disorientation that accompanies any transformative technology. It functions as a coping mechanism. When a new force enters our world—one that is powerful, poorly understood, and capable of fundamentally changing our lives—we instinctively reach for frameworks of meaning that resemble religion. We do this because we need to make sense of the confusing, cope with our fears, feel a sense of purpose, and believe in the possibility of salvation from our problems.
A Search for Meaning, Not Miracles
This idea struck a deep chord with me. As someone who falls somewhere between agnostic and atheist, I've nevertheless always held a keen interest in religious thought. I've had the privilege of knowing many deeply religious people, and our conversations have consistently ranked among the most intellectually and emotionally rewarding experiences of my life. Their faith has never been a barrier to our connection; rather, it has served as a window into the diverse and powerful ways humans construct meaning.
It's precisely this idea—religion as a fundamental process of constructing meaning—that makes the Genetically Modified Skeptic's thesis so powerful. He suggests that this meaning-making impulse operates as a core component of our human operating system, activated not by supernatural claims, but by functional human needs. This perspective allows us to see the quasi-religious fervor around AI not as a strange new cult, but as the latest chapter in a very old human story.
The implications become even more significant when we consider how this pattern manifests in educational settings. Students encountering AI for the first time often approach it with a mixture of awe and uncertainty that mirrors the consultation of an oracle. They pose questions and receive answers that seem to emerge from an intelligence beyond their comprehension. The speed, breadth, and apparent authority of AI responses can create an almost mystical experience—one that bypasses critical evaluation in favor of faithful acceptance.
A Practical Guide to Demystifying AI in the Classroom
When we bring this lens into the world of education, the implications become immediate and urgent. The perceived power of AI, with its superhuman scale and disembodied nature, naturally tempts users to treat it as a modern-day oracle—an infallible source of knowledge and truth. This represents perhaps the single greatest pedagogical danger of AI: that it will be used not as an instrument for inquiry, but as a dispenser of final answers, short-circuiting the learning process itself.
An uncritical, faith-based adoption of AI in our schools is not merely a missed opportunity; it actively undermines our core mission. It deskills critical thinking by removing the "desirable difficulties" that build genuine competence. When students receive instant, polished answers to complex questions, they miss the cognitive struggle that transforms information into understanding. The messy process of research, synthesis, and revision—where real learning occurs—evaporates in the face of algorithmic immediacy.
This dynamic also risks amplifying the societal biases embedded in AI's training data, perpetuating inequity under a veneer of technological objectivity. When students accept AI outputs as gospel truth, they internalize not just information but the prejudices and limitations baked into these systems. The appearance of neutrality makes these biases even more insidious, as they arrive wrapped in the authority of advanced technology.
Furthermore, over-reliance on AI can erode the human-centered, social, and embodied experiences that cognitive science tells us are essential for deep learning. The collaborative discussions, the physical manipulation of materials, the social negotiations of meaning—all these crucial elements of education risk being replaced by isolated interactions with a digital oracle.
Our role as augmented educators, then, is to demystify the oracle. The goal is to transform the student's relationship with AI from that of a supplicant to that of a philosopher engaging a "Socratic partner." This requires a fundamental pedagogical shift that values the quality of a student's questions over the quality of the AI's answers.
To make this concrete, here are four practical strategies for making that shift a reality in your classroom.
1. Model Critical Engagement and Transparency
Students are more likely to adopt a critical stance toward AI if they see us demonstrate it first. We cannot be experts in this rapidly changing technology, and being open about our own learning process is crucial. This vulnerability, far from undermining our authority, actually strengthens our credibility as guides through uncertain terrain.
Thinking Aloud: When using an AI tool in front of the class, verbalize your entire process. Say things like, "That's an interesting starting point, but it missed the nuance of X," or "I'm going to rephrase my prompt to be more specific because the first answer was too generic." Model the iterative nature of AI interaction—show students how you refine prompts, cross-reference responses, and maintain skepticism even when answers seem plausible. This running commentary transforms AI use from magic to method, revealing the human judgment required at every step.
Co-Create Policies: Instead of handing down a rigid set of rules about AI use, facilitate a dialogue with students about both benefits and limitations. Work together to develop class guidelines that address questions like: When is AI use appropriate? How should AI contributions be cited? What constitutes academic integrity in an age of AI? Giving students agency in this process fosters a sense of shared responsibility and investment in ethical use. They become partners in navigating this new landscape rather than subjects of imposed restrictions.
2. Make the AI the Object of Study
The most effective way to reveal that AI is a tool, not an oracle, is to design assignments where the AI itself becomes the subject of critique. These exercises transform AI from an unexamined authority into a specimen under the microscope.
A Tale of Two Critiques: Ask students to analyze a primary source—perhaps a historical document or literary text. Then, provide them with two critiques of that source: one generated by an AI and one written by a human expert. Their task is not just to evaluate the source, but to compare the two critiques, contrasting the AI's formulaic patterns, lack of deep insight, and potential biases with the human expert's nuanced analysis. Students learn to recognize the telltale signs of AI generation: the confident tone that masks uncertainty, the surface-level observations that miss deeper meanings, the tendency toward safe, middle-ground interpretations.
Fact-Check Fridays: Institute a regular classroom routine where you pose a question to an AI and task students with verifying its response using reliable, vetted sources. Start with straightforward factual queries, then progress to more complex analytical questions. This simple, repeatable activity builds healthy skepticism and reinforces essential research skills. Students discover that AI can be confidently wrong, mixing accurate information with plausible-sounding fabrications. They learn to recognize "hallucinations" and understand that AI's fluency doesn't equal accuracy.
Illustrate a Hoax: Challenge students to use AI image generators to create "evidence" for a fictional historical event or a scientific hoax. Have them generate convincing images of events that never happened—perhaps "photographic evidence" of medieval smartphones or Victorian-era space travel. This hands-on activity powerfully demonstrates how easily these tools can be used to create compelling misinformation. Follow with a vital discussion on digital literacy and source verification. Students viscerally understand that in an age of AI, "seeing" can no longer be synonymous with "believing."
3. Shift Focus from Product to Process
If we only assess the final product (e.g., an essay), we create an incentive for students to use AI to bypass the learning process. By assessing the process of inquiry itself, we reward the human skills that AI cannot replicate.
Teach the AI: This powerful metacognitive exercise flips the traditional dynamic. Assign students to explain a complex concept from your course to an AI chatbot. They must break down difficult ideas, provide examples, and correct the AI's misunderstandings. Students then submit the chat transcript along with a reflection answering questions like: "Where did the AI seem to understand? Where did it get confused? What did you have to re-explain, and how did that clarification deepen your own understanding?" This reframes the AI as a non-judgmental partner for elaborative interrogation while revealing that true understanding requires more than information retrieval—it requires the ability to teach, clarify, and adapt explanations.
Assess the Prompts: Require students to submit not just their final work, but also the complete series of prompts they used to generate ideas or drafts. Ask them to write a brief reflection on their prompting strategy: Why did they start with a particular prompt? How did they refine it based on the AI's responses? What dead ends did they encounter? This makes their thinking visible and values the skill of effective inquiry. Students learn that working with AI is itself a skill requiring critical thinking, strategic planning, and iterative refinement.
4. Integrate Ethics into Everything
AI ethics should not be relegated to a single lesson or a computer science elective. It is a fundamental literacy that must be woven into the fabric of our curriculum, appearing naturally wherever AI touches our subject matter.
Use Real-World Scenarios: Discuss current events involving AI as they arise. Should AI be used in hiring decisions? What are the privacy implications of facial recognition in schools? How should we respond when AI systems perpetuate racial bias? These aren't abstract philosophical questions—they're urgent practical challenges that students will face in their personal and professional lives. By engaging with these scenarios across disciplines, students develop the ethical reasoning skills necessary for navigating an AI-infused world.
Explore the Hidden Costs: Go beyond the immediate user experience to examine the broader implications of AI systems. Use a project-based learning unit to investigate the environmental impact of the massive data centers that power AI, or the labor practices involved in training these models. Students might research the carbon footprint of training a large language model, interview gig workers who label data, or investigate the extraction of rare earth minerals for computing hardware. This teaches students to see technology not as magic, but as a product with a complex and often hidden supply chain, with real consequences for real people and the planet.
This work of demystification—of transforming the oracle back into a tool—is more than just a new item on the educator's checklist. It is a vital act of cultural and intellectual stewardship. It represents our response to a powerful and predictable pattern in human history, one that reveals a great deal about who we are and how we make sense of the world. Resisting the temptation to worship the machine is not a new challenge; it is a recurring one. And understanding this pattern is the key to navigating the profound changes that lie ahead.
A Future of Faith and Machines
We are living through a moment of radical technological change, and it's natural to feel a sense of whiplash. But as we navigate this new terrain, it's helpful to remember that this isn't the first time humanity has been here. The impulse to imbue new technology with spiritual significance is a recurring pattern in our history.
Consider the arrival of the printing press in the 15th century. The Catholic Church initially hailed it as a "divine art," a miraculous tool for mass-producing Bibles and extending the glory of God. But this enchantment quickly took on a revolutionary form. Martin Luther brilliantly harnessed the press to distribute his Ninety-five Theses, making his ideas the first bestsellers of the modern age and fueling the Protestant Reformation. A technology that began as a tool for the established faith became the engine for its schism, demonstrating how quickly we project our deepest spiritual conflicts onto our newest inventions.
Centuries later, the invisible force of electricity sparked a similar spiritual resurgence. In the 19th century, it was seen as a "sacred" force that fueled the rise of Spiritualism, a movement centered on communicating with the dead. The electric telegraph, transmitting messages invisibly across vast distances, became both a powerful metaphor and a perceived medium for this contact. The movement captivated even inventors like Thomas Edison, who announced plans for a “ghost machine” to speak with spirits. The very architects of the electrical age couldn't resist interpreting their creation through a spiritual lens.
From the printing press to the telegraph, and later to the cyber-utopian dreams of a digital Eden that accompanied the early internet, the pattern remains consistent. A transformative technology emerges that is powerful and poorly understood. It evokes a sense of awe and disorientation. In response, we project our deepest hopes and fears onto it, creating new narratives of salvation, damnation, and transcendence.
This historical pattern also suggests that technological advancement doesn't necessarily work against religious thought; in many ways, it can catalyze it. The more powerful and mysterious our tools become, the more we may feel the need for frameworks of meaning to help us understand our place in the world. Each new technology that exceeds our immediate comprehension creates a vacuum of understanding that we fill with transcendent narratives.
“So Say We All”
This dynamic reminds me of the universe depicted in Battlestar Galactica, where a society capable of interstellar travel and of creating sentient artificial life was also deeply religious. The characters navigated advanced technology while grappling with fundamental questions of meaning, purpose, and faith—their ritualistic phrase "So Say We All" serving as their equivalent of "Amen," affirming shared beliefs in the face of existential uncertainty. That fictional world might be more prophetic than we think. As we move further into an age defined by artificial intelligence, we may not be heading toward a sterile, secular future. Instead, we might be entering an era where our most advanced creations trigger our most ancient human responses, leading to a world where faith and technology are not at odds, but inextricably intertwined.
This potential future is not something to be feared, but something to be navigated with intention. The human impulses to create meaning, to seek purpose, and to construct narratives that help us understand our place in the cosmos are not weaknesses to be overcome but fundamental aspects of our humanity. The question is not whether we will construct transcendent narratives around AI, but whether those narratives will enhance or diminish our capacity for critical thought and human agency.
How are you navigating the tension between AI's promise and its potential to short-circuit critical thinking? Whether you're an educator, student, or simply someone thinking about AI's role in learning and meaning-making, I'd love to hear your perspective. What patterns are you seeing? What concerns you? What gives you hope?
Share your thoughts in the comments below. This conversation about demystifying AI is one we need to have together.