Six Presentations, Six Perspectives: Rethinking Education in the Age of AI
Insights from Six International Conference Presentations
I’ve spent the past six weeks presenting research at various international conferences, focusing on the dynamic intersection of artificial intelligence and education. These presentations and the accompanying research papers, partly co-authored with my brilliant PhD student Darya Ramezani and delivered between May and June 2025, collectively paint a picture of an educational landscape in profound transition. Technological acceleration challenges traditional structures and demands a redefinition of human agency, while new inequalities develop despite the advances.
The timing of these presentations feels significant. We’re no longer in the early experimental phase of generative AI in education; we’re now grappling with systemic questions about how our institutions, pedagogies, and fundamental assumptions about learning must evolve. Each presentation addresses a different facet of this transformation, from the deeply philosophical questions about human and machine reasoning to the practical challenges of curriculum development in an era where course content can become obsolete very quickly.
What strikes me most about this collection is how it reveals both the opportunities and the tensions inherent in our current moment. While AI tools offer unprecedented possibilities for creativity, productivity, and accessibility, they also challenge us to rethink what it means to learn, to create, and to think critically. These presentations don’t provide simple answers, because none are available, but they do offer frameworks for navigating the complexity.
We Are Not in the Driver’s Seat: How Post-Hoc Storytelling Shapes Minds—Human and Machine Alike
This presentation, delivered at the MAD 2025 Year-Long (Un)Conference, was based on a guest article I wrote for the AI EduPathways SubStack blog. The work explores perhaps the most fundamental question raised by AI reasoning models: how similar are human and machine thinking processes? Drawing on research about the “readiness potential”, the finding that our brains begin preparing actions a third to half a second before we become consciously aware of making decisions, I argue that both human and AI reasoning involve post-hoc storytelling.
We experience our lives as narratives constructed by our brains about events that have already occurred, much like how chain-of-thought AI models generate reasoning through internal monologue. This is not to say that human and machine cognition are identical, but that we may overestimate the complexity of human reasoning while underestimating the sophistication of AI processes. The implications for education are profound: if both humans and machines engage in narrative-based reasoning, we need a more nuanced understanding of what constitutes authentic learning and thinking.
Cultural Identity in Large Language Models: Implications for Educational Applications
Presented virtually at EDULEARN 2025, this research reveals that large language models develop distinct cultural cognitive patterns that emerge from their training rather than explicit programming. Through experiments with the flower categorization task, a classic test distinguishing analytic versus holistic reasoning styles, I discovered that different AI models consistently show either Western analytical or Eastern holistic approaches to problem-solving. Claude, for instance, showed holistic reasoning patterns, while ChatGPT showed more analytical thinking. Crucially, both models insisted their approaches were objectively correct, mirroring human cultural confidence in cognitive styles.
This has significant implications for educational applications, as students working with AI may unknowingly encounter reasoning patterns that differ from their own cultural cognitive styles. Rather than viewing this as problematic, I propose treating it as a learning opportunity: a chance to develop metacognitive awareness and cultural literacy by comparing perspectives across different AI models and reflecting on why they might reach different conclusions.
Redefining Student Agency in the Age of GenAI
This second EDULEARN 2025 presentation was largely the work of Darya Ramezani, with me contributing as co-author. Darya also gave the virtual presentation. Her research proposes that meaningful AI adoption in education begins not with policy, but with emotions, recognizing that faculty often experience a grief-like process when confronting AI’s capabilities. Her three-dimensional framework addresses this challenge by: first, establishing distributed creative agency between learner and AI, where students provide context and critical judgment while AI supplies rapid ideation; second, redesigning pedagogy to make AI use explicit and process-focused, requiring students to submit chat logs and justified edits rather than just final products; and third, cultivating metacognitive awareness so students can decide when, why, and how to collaborate with AI.
This framework views AI as a tool to boost human creativity, not replace it, highlighting AI’s strength in probabilistic creativity (combining existing patterns) while acknowledging its limitations in genuinely groundbreaking, rule-redefining creativity. By making both human and AI contributions visible in the learning process, students maintain agency while benefiting from enhanced capabilities.
The AI Productivity Divide: Emerging Inequities in AI-Enhanced Education
Also authored primarily by Darya and presented by me at END 2025 in Budapest, this research identifies a new form of digital divide emerging in education. Unlike the traditional digital divide’s focus on access and skills, the productivity divide centers on the disparity in outcomes between students who use AI effectively and those who don’t. Drawing from our teaching experiences, where AI-proficient students created work of unprecedented quality not by using AI as a shortcut, but by using it to go beyond standard requirements, this research reveals how AI can amplify existing inequalities.
The divide manifests across three levels: infrastructure barriers (including new geopolitical restrictions on AI access), skill-level inequalities (requiring understanding of probabilistic rather than deterministic computing), and outcome-level disparities (where some students achieve dramatically superior results). Perhaps most concerning is the rising cost of premium AI tools, with top-tier subscriptions now costing $200-250 monthly, creating financial barriers that could stratify educational opportunities. The presentation calls for balanced approaches that harness AI’s benefits for accessibility and creativity while addressing emerging inequalities.
Rethinking Academic Review Processes: Institutional Agility in an Era of Accelerated Change
Delivered at END 2025 in Budapest as well, this presentation addresses the fundamental tension between higher education’s deliberately slow, rigorous processes and the accelerating pace of technological change. Drawing on my own experience, I discussed three major hurdles: the two-year curriculum approval process, outdated peer review systems in rapidly evolving fields, and institutional decision-making that can’t keep pace with AI advancements.
The solution I propose involves adopting agile governance principles: decentralized decision-making, iterative development, collaborative structures, and parallel rather than sequential processes. This might mean publishing course catalogs every term rather than annually, adopting dynamic peer review models where publication precedes review (as computer science has already done), and restructuring academic governance to balance traditional rigor with necessary responsiveness. The COVID pandemic provided some experience with rapid adaptation, but the AI revolution demands more systematic institutional transformation.
The Evolving Role of the Design Educator: New Pedagogical Approaches for AI-Enhanced Studios
Presented at the International Conference on Visual and Performing Arts in Athens, this presentation argues that design education holds the key to pedagogical transformation across disciplines. As AI increasingly automates the creation of artifacts, from writing to visual design, the traditional studio pedagogy centered on critique becomes more valuable than ever. The design critique represents the one educational interaction that cannot be automated: the human-to-human dialogue about process, intention, and reasoning. Although AI creates impressive work, it cannot explain its creative reasoning or thought process.
This means the role of the educator shifts from evaluating final products to facilitating process exploration, requiring students to document and articulate their creative journey. I propose that the design critique model should become the universal pedagogy across disciplines, whether teaching writing, engineering, or computer science, because it concentrates on the inherently human elements of learning: critical thinking, reflection, and authentic communication. The challenges are significant, including compressed development timeframes and the need for extensive faculty development, but the opportunity is equally profound: design education’s human-centered approach offers a template for maintaining educational relevance in an AI-enhanced world.
These six presentations, spanning philosophy, culture, agency, equity, governance, and pedagogy, collectively illustrate the multifaceted nature of AI’s impact on education. They reveal that our challenge isn’t simply about integrating new tools, but about reconsidering how we understand learning, creativity, and human development in an age of artificial intelligence. The questions raised are complex, the solutions uncertain, but the conversation—and the transformation—is well underway.