The Einstein AI Panic
Why is no one asking how educators missed the capabilities of agentic AI?
I need to vent a little.
I find the recent discussion about the autonomous homework completion bot “Einstein AI” deeply problematic. Not because of what the system claims to do. The advertised capabilities are neither new nor unexpected. When I first heard about Einstein, I did not even give it much attention because it described nothing that was not already achievable with existing tools. What I find deeply concerning is the sheer number of educators who were caught off guard by its existence.
I want to be precise about who I mean. I am not talking about the classroom teacher who spends every working hour with students and barely has time to eat lunch, let alone follow developments in AI research. AI is developing at a pace that makes staying informed a significant commitment. That teacher’s surprise is understandable and forgivable.
I am talking about educators who publish about AI use in education, who present at conferences on this topic, and who position themselves as informed voices in this space. How is it possible that even these supposedly AI-knowledgeable experts were blindsided? The answer, I think, is troubling. What Einstein exposed is not primarily an education crisis. It is a foundational AI literacy crisis among educators.
What Einstein AI actually claimed to do
For those who missed the initial wave of coverage, here is what happened. In late February 2026, a product called Einstein AI, developed by companion.ai under the leadership of Advait Paliwal, went viral in education circles. Unlike the AI writing assistants and study aids that educators have spent years debating, Einstein was marketed as something qualitatively different: a total digital proxy for the student. According to the companion.ai website and reporting from numerous outlets, the system claimed it could autonomously log into Canvas, the learning management system used by roughly half of college students in North America, and complete all academic work from end to end.
The scope of the claimed automation appeared striking. Once provided with a student’s login credentials, Einstein would reportedly monitor course pages daily, watch recorded lectures, analyze assigned readings, participate in discussion boards with context-appropriate replies, write essays with citations, and submit assignments before deadlines. The marketing copy encouraged students to “set him up and forget about it.” Subscription tiers were priced at forty, one hundred, and two hundred dollars per month, commodifying academic dishonesty as a subscription service.
Between February 23 and 27, the companion.ai/einstein page began returning 404 errors. The takedown resulted from a cease-and-desist letter citing trademark infringement. The site vanished as quickly as it had appeared.
Why the shock should concern us more than the technology
The educational community’s response was swift and visceral. Educators flooded Reddit, Bluesky, and professional forums with expressions of disbelief, urgent demands that IT departments find a way to “block” the agent, and lamentations about the death of academic integrity. The reaction was overwhelmingly adversarial and, in many cases, panicked.
This reaction concerns me far more than Einstein itself, because it exposes a fundamental knowledge gap. The technology Einstein claimed to use has been openly discussed in software developer communities for months. The architectural components are documented, open-source, and freely available. Educators who follow AI development even casually should have seen something like this coming. That so many did not points to a serious AI literacy problem in our profession.
The architecture that makes Einstein unremarkable
This is where we need to shift the conversation from alarm to technological understanding. Einstein AI is not a noteworthy innovation. It is a straightforward application of existing agentic AI frameworks to the education sector. To understand why this makes a difference, educators first need to grasp the distinction between the generative AI tools they have been debating for the past two years and the agentic AI systems that are quietly overtaking them.
Traditional generative AI operates on a simple request-and-response model. A user types a prompt; the system generates text. The human remains in control at every step, deciding what to ask, evaluating the output, and manually transferring content to wherever it needs to go.
Agentic AI operates on a fundamentally different principle. These systems use what developers call a ReAct agent architecture: reason, act, observe. The agent receives a high-level goal, visually or structurally parses a digital environment, formulates a multi-step plan, executes actions such as clicking buttons or entering text, observes the results, and adjusts its approach iteratively until the goal is achieved. The human sets the objective and walks away; the machine handles everything else.
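For readers who want to see how simple the core idea is, the reason-act-observe loop can be sketched in a few lines of Python. This is a schematic illustration, not any real framework's code; the method names (`plan_next_action`, `execute`) are hypothetical stand-ins:

```python
def run_agent(goal, environment, llm, max_steps=20):
    """Minimal ReAct-style loop: reason, act, observe, repeat."""
    history = []
    for _ in range(max_steps):
        # Reason: ask the model for the next action, given the goal
        # and everything observed so far.
        action = llm.plan_next_action(goal, history)
        if action.name == "done":
            return action.result
        # Act: execute the chosen action (click, type, navigate, ...).
        observation = environment.execute(action)
        # Observe: record the outcome so the next step can adapt.
        history.append((action, observation))
    raise TimeoutError("goal not reached within step budget")
```

Everything that looks sophisticated about an agent lives inside those two stand-in calls; the control flow itself is trivial, which is exactly why wrapping it around a new domain such as an LMS is so easy.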
The most prominent open-source framework powering these agents in early 2026 is OpenClaw, formerly known under the developmental names Moltbot and Clawdbot. Within weeks of its release, OpenClaw became one of the fastest-growing repositories in GitHub’s history. Paliwal himself described Einstein AI as essentially “OpenClaw as a student,” explicitly linking his commercial product to this open-source foundation.
OpenClaw’s architecture distinguishes between “tools” and “skills.” Tools are the agent’s basic capabilities: reading files, executing system commands, and interacting with web pages. Skills are community-built instruction sets that teach the agent how to combine those tools for specific tasks. The OpenClaw ecosystem already includes thousands of skills hosted on registries like ClawHub, and crucially for educators, this includes dedicated Canvas skills that grant an agent detailed control over the LMS interface, including the ability to navigate to specific URLs, evaluate JavaScript within the browser context, and capture screen snapshots for visual analysis.
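The tools-versus-skills split is easier to grasp with a schematic example. The sketch below is a generic agent-framework pattern, not OpenClaw's actual file format or API; every name in it is hypothetical:

```python
# Tools: primitive capabilities the agent can invoke directly.
TOOLS = {
    "navigate": lambda url: f"opened {url}",
    "read_page": lambda: "page text ...",
    "type_text": lambda text: f"typed {text!r}",
}

# A skill: a declarative recipe telling the agent how to combine
# tools for one task. Registries share these as plain files, which
# is why a community can accumulate thousands of them so quickly.
discussion_skill = {
    "name": "post-discussion-reply",
    "steps": [
        ("navigate", ["https://lms.example.edu/discussions"]),
        ("read_page", []),
        ("type_text", ["a context-appropriate reply"]),
    ],
}

def run_skill(skill, tools):
    """Execute each step of a skill by dispatching to the named tool."""
    return [tools[name](*args) for name, args in skill["steps"]]
```

The important design point is that a skill contains no new capability at all: it is just instructions layered over tools the agent already has, which is why blocking one skill does nothing to the underlying system.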
Because OpenClaw can run persistently on a cloud server costing less than six dollars per month, it functions as an always-on digital proxy. A scheduling system wakes the agent at configurable intervals, allowing it to check for new assignments and execute academic workflows with no human input whatsoever.
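The "always-on" quality is nothing exotic either; the wake-on-interval pattern is ordinary scheduling code. A minimal stdlib sketch, with `check_fn` standing in for a hypothetical assignment-checking routine (in practice this role is often played by a cron entry or systemd timer):

```python
import time

def run_scheduler(check_fn, interval_seconds, max_cycles=None):
    """Wake at fixed intervals and run one check cycle each time.

    max_cycles=None runs forever (the always-on proxy); a finite
    value exists only to make the loop testable.
    """
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        check_fn()  # e.g. poll course pages for new work (hypothetical)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_seconds)
    return cycles
```

A loop this small, running on a six-dollar server, is the entire "set him up and forget about it" promise.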
Why shutting down Einstein changes nothing
Here is the point I want educators to sit with: the disappearance of the Einstein AI website is functionally irrelevant. The commercial product may be gone, but the underlying technology is open-source, well-documented, and within the reach of any reasonably skilled student willing to spend an afternoon setting it up.
A student does not need companion.ai’s subscription service. They need a cloud server, an agentic browser framework, and a Canvas skill set from an open-source registry. The total cost would be a few dollars per month. The technical barrier is modest. Step-by-step tutorials are already on YouTube.
This matters because the institutional response to Einstein has largely focused on shutting down the specific commercial product. But that approach misses the structural problem entirely. You cannot shut down an open-source framework. You cannot send a cease-and-desist letter to a GitHub repository that anyone can fork and redeploy. The capability exists in the wild, permanently, and it will only become easier to use over time.
The safeguards that no longer safeguard
I have written extensively in previous essays on this Substack — in my serialized book The Detection Deception, my essay on agentic browsers, and my series on AI-resistant assessment — about why traditional digital safeguards are failing and what pedagogical alternatives exist. The Einstein case brings those arguments into sharp focus. Let’s have a look at these safeguards and why they are ineffective in the age of agentic AI.
Lockdown browsers are perhaps the most widely trusted and least effective safeguard against agentic AI. Tools like Respondus LockDown Browser and Digiexam work by enforcing a restrictive environment on the student’s local device: preventing other applications from opening, disabling developer tools, and restricting virtual machines. The critical assumption is that the cheating originates from the student’s physical computer.
Agentic systems bypass this assumption entirely. They run on remote cloud-based virtual machines with their own browser instances. The student’s personal device is not involved. In principle, it does not even need to be powered on. The agent establishes a direct, cloud-to-cloud connection with the Canvas servers. From the LMS’s perspective, it sees standard HTTP requests, valid authentication cookies, and normal interaction patterns. Instructure’s own engineering teams have acknowledged this vulnerability in community forums, noting that because the agent interacts with the user interface through cloud-based browser automation rather than the API, server-side detection is “nearly impossible.”

Multi-factor authentication fares only marginally better. While MFA effectively prevents traditional credential-stuffing attacks, it requires only a single human action to be defeated: the student approves the initial login request on their phone. Once the agent’s virtual browser session is authenticated, it inherits the session cookies and can maintain persistent access without re-authentication.
API access restrictions represent Instructure’s most direct policy response. In late 2025, Canvas implemented stricter controls on user-generated access tokens, including mandatory expiration limits and administrative oversight. These measures effectively block crude, API-dependent third-party tools. But they do nothing against agentic browsers, which never touch the API. The agent navigates Canvas exactly as a human would, clicking through the graphical interface and parsing visible text.
The deeper issue is that agentic browsers are specifically engineered to evade detection. These systems deploy dynamic IP proxy rotation through residential addresses, spoof hardware configurations to pass advanced fingerprint scans, and include native CAPTCHA-resolving mechanisms. From the perspective of institutional IT infrastructure, the traffic generated by an AI agent is indistinguishable from a student logging in from a campus dormitory.
What this demands of educators
If the technological arms race is unwinnable, the response must be pedagogical rather than technical. The Einstein case adds urgency to arguments I have been making on this Substack for months, but it does not change their fundamental logic.
When an autonomous agent running on a remote server can bypass a lockdown browser, read a prompt, generate a formatted essay, and submit it before the deadline, the problem is not the agent. The problem is an assessment design that made the agent’s task trivially easy. If a machine can complete an assignment without ever having attended a class, participated in a discussion, or experienced the learning process, what exactly was that assignment measuring?
The shift that educators need to make is from evaluating polished artifacts to evaluating the cognitive process that produces them. Oral defenses and Socratic questioning, where students must verbally explain and defend their written work in real time, remain beyond the reach of autonomous agents. Process-based evaluation, which grades iterative drafts, peer reviews, reflections, and version histories rather than final products alone, anchors assessment in the lived experience of learning. Video logs, which I have written about as an AI-resistant assessment method and which I currently use in my classes, preserve the embodied reality of thinking in ways that text submissions cannot. And in-class demonstrations, debates, and collaborative problem-solving sessions demand the spontaneous, situated cognition that agentic systems simply cannot fake.
The literacy imperative
Anthropic’s 2026 Education Report on the AI Fluency Index draws an important distinction between AI adoption and AI fluency. Adoption means using the tools. Fluency means understanding them well enough to collaborate with them critically, recognizing their limitations, questioning their outputs, and maintaining intellectual agency throughout the interaction. The report’s data suggest that when AI produces complete artifacts, users are measurably less likely to question the reasoning behind them or identify missing context. Systems designed to remove the human entirely from the loop, as Einstein was, push users toward the most dangerous form of AI interaction: total delegation without critical oversight.
This is why AI literacy for educators is not optional and not merely a professional development checkbox. It is a structural prerequisite for the continued functioning of education in its current form. And the data suggest we are nowhere close to meeting that prerequisite.
According to Coursera’s 2026 report, only 25 percent of educators feel confident in their ability to use AI effectively, and just 28 percent report that AI literacy has been formally incorporated into their curriculum. A 2025 Literacy Trust study found that nearly 67 percent of teachers say they need more training and resources for generative AI. Data indicates that 81 percent of educators lack the time and 75 percent lack the knowledge to develop AI training curricula. Perhaps most tellingly, 56 percent of both students and educators believe higher education is unprepared to manage AI.
The Einstein panic confirmed what the surveys had already told us. Educators who do not understand how agentic systems work cannot design assessments that resist them. And if they have never engaged with the open-source ecosystem powering these tools, they cannot accurately assess the threat landscape, advise their students on the risks, or advocate for meaningful institutional policy.
The U.S. Department of Labor’s Employment and Training Administration published a comprehensive AI literacy framework in early 2026, recognizing that AI literacy is no longer a niche technical skill but a foundational requirement for professional participation across sectors. Education is no exception. Fortunately, states are beginning to enact legislation requiring school districts to formulate clear AI policies. These are necessary steps, but they will remain insufficient without sustained investment in the educators who must implement them.
What comes next
Einstein AI will not be the last system of its kind. The underlying technology is maturing rapidly, the open-source ecosystem is expanding, and the economic incentives for building student-facing automation tools are substantial. The next iteration may not announce itself with a viral marketing campaign. It may simply appear as a GitHub repository with a helpful README, indistinguishable from thousands of other open-source projects, quietly adopted by students who understand the technology better than their instructors do.
The question facing the educational community is whether we will meet that moment with the same shocked disbelief that greeted Einstein, or whether we will have done the hard work of building genuine AI literacy among the professionals responsible for guiding students through an increasingly automated world. Technology will not slow down for us. The tools are not going to become less capable. And the students are not going to stop finding them.
What we can control is how well we understand these systems, how thoughtfully we design our assessments in response, and how seriously we take the obligation to prepare ourselves for what is already here. The survival of meaningful education in the agentic era depends not on finding better ways to block machines, but on becoming the educators who render such blocking unnecessary.
The images in this article were generated with Nano Banana 2.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.