Why We Name Our Tools
Anthropomorphism, AI, and the Ancient Art of Making Friends with Things
My YouTube algorithm has recently taken me on an unexpected journey. For the past few weeks, I’ve been following Noraly Schoenmaker’s “Itchy Boots” channel, virtually riding along as Noraly documents her solo motorcycle adventures through Pakistan, Afghanistan, and Central Asia on her 1987 Yamaha Ténéré 600Z. What strikes me most about her videos, beyond the breathtaking landscapes, is how she talks about her motorcycle. She calls it, or better yet her, “Frankie.” Not “the bike” or “my Yamaha,” but Frankie, with all the warmth and exasperation you might reserve for a trusted companion who occasionally refuses to start in the Wakhan Corridor.
This anthropomorphizing of her motorcycle is casual and unremarkable in her videos. She’ll pat Frankie’s tank after a successful river crossing, coax her gently when Frankie protests a particularly rocky trail, and express genuine relief when Frankie emerges mechanically sound after a hard day’s riding. It seems natural, even inevitable. And here’s what struck me as worth exploring: if naming and personifying one’s motorcycle seems unremarkable, even charming, why does anthropomorphizing Large Language Models provoke such visceral reactions?
The objections to anthropomorphizing AI come from multiple directions. Some researchers warn that attributing human-like qualities to AI systems obscures their actual mechanisms and leads to misplaced trust. Others argue it represents a dangerous category confusion, a failure to maintain the critical distinction between the genuinely conscious and the merely computational. Still others worry it paves the way for exploitation, as companies use our tendency to bond with human-like interfaces to extract more data or engagement.
These concerns have merit. But they also raise a puzzle: both Frankie and Claude are tools. Both are inanimate objects that perform functions for their users. Both are, fundamentally, products of human engineering. What exactly is the difference, and why do we have such different intuitions about naming them?
Before I go further, a quick note about Noraly Schoenmaker. In a media landscape dominated by conflict and outrage, her “Itchy Boots” channel operates with an almost radical wholesomeness. Across eight seasons and dozens of countries, Noraly’s videos have become my antidote to cynicism. Pakistani villagers offering chai to a solo foreign traveler. Afghan families extending hospitality without hesitation. Local mechanics spending hours helping fix Frankie, refusing payment. If you’ve lost faith in humanity’s fundamental decency, spend an evening with these videos. They won’t solve the world’s problems, but they’ll remind you that across vast cultural and linguistic divides, most people are simply kind.
But that’s a tangent, however heartfelt, to what I wanted to write about. Let’s get back to the question of why we anthropomorphize our tools in the first place.
The Deep History of Tool Personification
The practice of attributing personality, agency, and even gender to tools is ancient and culturally universal. Anthropologists have documented this behavior across vastly different societies and time periods, suggesting it taps into something fundamental to human cognition.
The historical record provides striking examples. Maritime cultures across the world have named their ships for millennia. The first-century BC historian Diodorus Siculus recorded insignia on the bows of ships in Greek trireme fleets, and the “Acts of the Apostles” references a ship named the Dioskouroi, after the twin gods Castor and Pollux, to name just two examples. This pattern extends beyond maritime culture. Warriors have named their weapons throughout history: King Arthur’s Excalibur and Roland’s Durendal are legendary, but ordinary soldiers have followed similar practices. And B.B. King famously named his guitars “Lucille,” treating each instrument as a partner in his creative work rather than a mere possession.
The function of this naming becomes clearer when we consider the contexts where it occurs most consistently. Ships are named because they stand between sailors and a lethal, chaotic environment. A “good ship” in maritime tradition is not merely mechanically sound; she is “loyal,” “brave,” or “responsive.” The naming transforms an inert object into a social ally, someone (not something) you can rely on when facing uncertainty and danger. Similarly, warriors name weapons that have the power to determine life and death, and musicians name instruments that profoundly affect their artistic identity and social status.
The anthropologist Bruno Latour would describe this as recognizing the “agency of things.” In his Actor-Network Theory, Latour argues that agency is never purely human; it is always distributed across networks of human and non-human actors. A person with a gun is a fundamentally different social agent than a person without one; the gun “acts” by transforming the human’s capabilities and intentions. From this perspective, anthropomorphism is less a category error than an acknowledgment of the real power that tools exercise in shaping what humans can do and who they can be.
Contemporary anthropology has moved away from viewing animism, the attribution of spirit or life to non-living things, as a “primitive” mistake. The “new animism” recognizes that the strict division between “Nature” (things) and “Culture” (people) is a relatively recent Western construct. Indigenous communities like the Nuaulu people of eastern Indonesia attribute life-like qualities to essential tools such as the sago pounder, a mallet used to extract starch from the sago palm. While they distinguish these tools from biological organisms, they recognize a form of vitality in the tool’s capacity to transform raw material into food, to perform work that sustains life.
Russell Belk’s theory of the “Extended Self” provides another framework for understanding why we anthropomorphize tools. Belk argued that “we are what we have”—that possessions are not merely external objects but become psychologically integrated into our sense of identity. Your smartphone is not just a communication device; it holds your memories, mediates your relationships, and extends your cognitive capabilities. When it breaks, the distress you feel is not merely inconvenience but something closer to injury, a violation of bodily integrity.
This helps explain why naming tools feels natural. The tools we depend on most intimately, whether Noraly’s motorcycle or a craftsperson’s hand tools or a musician’s instrument, become prosthetic extensions of the self. They carry a part of our identity. Naming them acknowledges this integration and transforms the relationship from ownership to partnership.
Why Large Language Models Trigger Different Intuitions
Given this deep history of tool anthropomorphism, why does applying the same practice to AI systems feel so different? To answer that, we need to acknowledge several factors that distinguish LLMs from traditional tools and complicate the anthropomorphism question.
First, LLMs engage in linguistic interaction, which humans typically associate with minded agents. Language is our primary medium for social connection, persuasion, and the expression of internal states. When something responds to us in grammatical, contextually appropriate language, it triggers what psychologists call “social presence,” the feeling that we are interacting with a social entity capable of reciprocal attention and response.
This phenomenon has been documented since the 1960s, beginning with Joseph Weizenbaum’s ELIZA, a simple pattern-matching program that simulated a Rogerian psychotherapist. Weizenbaum was disturbed to discover that users quickly formed emotional attachments to ELIZA even when they knew it was a basic script; his secretary allegedly once asked him to leave the room so she could have a “private” conversation with the program. This became known as the ELIZA effect: the tendency to unconsciously attribute human-like understanding and intention to a computer’s behavior.
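To make the mechanism concrete, here is a minimal, purely illustrative sketch of ELIZA-style pattern matching in Python. The rules and phrasings are invented for this example rather than taken from Weizenbaum’s original DOCTOR script, but the principle is the same: surface patterns in the input are reflected back inside canned templates, with no model of meaning anywhere in the loop.

```python
import random
import re

# Illustrative ELIZA-style rules: a regex plus canned reply templates.
# These rules are invented for this sketch; Weizenbaum's original DOCTOR
# script used keyword ranking and decomposition rules, but the principle
# is the same.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE),
     ["Is that the real reason?", "Does any other explanation come to mind?"]),
]

FALLBACKS = ["Please tell me more.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Return a canned 'therapist' reply by matching surface patterns.

    There is no model of meaning here: the program simply reflects the
    user's own words back inside a template.
    """
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel anxious about my thesis"))
    # Possible output: "Why do you feel anxious about my thesis?"
    # Note the unswapped pronoun ("my"): the real ELIZA did swap pronouns,
    # but even this cruder echo is enough to trigger a sense of social presence.
```

Running the sketch a few times makes Weizenbaum’s discomfort easier to understand: the replies feel attentive even though nothing is attending.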
Unlike a motorcycle, which announces its thingness through mechanical interfaces, LLMs present themselves through the same medium—language—that humans use to reveal their thoughts, feelings, and intentions. The interface obscures the mechanism. When Noraly talks to Frankie, she knows she’s talking to metal and mechanics. When I converse with Claude, the linguistic fluency can make it easy to forget I’m interacting with statistical pattern-matching rather than understanding.
Second, LLMs are designed to be anthropomorphic in ways that motorcycles are not. Companies deliberately choose human-like names (Alexa, Siri, Claude), program politeness conventions, and optimize for responses that feel natural and conversational. This is not accidental. Research consistently shows that anthropomorphic interfaces increase user engagement, perceived trustworthiness, and satisfaction. The design is intended to trigger our social cognition circuits.
This deliberate design raises ethical questions absent from traditional tool naming. When Noraly names her motorcycle Frankie, she’s imposing anthropomorphism from the outside; the motorcycle remains indifferent. When Amazon names its voice assistant Alexa and programs it to respond to “please” and “thank you,” the company is exploiting our anthropomorphic tendencies for commercial purposes. The tool has been engineered to encourage a particular kind of relationship, one that may not serve the user’s interests.
Third, LLMs operate with a degree of opacity that distinguishes them from most traditional tools. I can open up a motorcycle engine and trace the causal chain from fuel injection to combustion to mechanical motion. I cannot meaningfully inspect what happens inside a neural network with billions of parameters. This opacity makes it harder to maintain a clear mental model of what the system actually is and what it can do. And in conditions of uncertainty, humans default to social reasoning. We anthropomorphize as a sense-making strategy.
This connects to Latour’s concept of “black-boxing”: when a technology becomes too complex to understand, we treat it as a unified actor rather than a system of component processes. The more black-boxed a tool becomes, the more we interact with it as if it has intentions and preferences rather than mechanisms and algorithms.
Actor-Network Theory and the Agency Problem
Bruno Latour’s Actor-Network Theory offers a productive framework for thinking through these complications. ANT refuses the conventional distinction between human actors (who have agency) and non-human objects (which are merely passive instruments). Instead, Latour argues that agency is always an effect of networks—associations between human and non-human actors that enable certain actions while constraining others.
From this perspective, the question isn’t whether we should anthropomorphize AI, but rather how we understand the distinctive kind of agency that AI systems exercise within these networks. When I use an LLM to draft an email, who is the actor? The traditional answer would be “I am”—the AI is just my tool. But Latour would say the actor is the hybrid network: me-plus-LLM, where the AI’s capabilities and constraints shape what I can say, how I say it, and what alternatives even occur to me.
This framework helps explain why AI anthropomorphism provokes anxiety in ways that naming a ship does not. A ship’s agency is relatively constrained—it floats or sinks, sails fast or slow, but it doesn’t generate novel outputs that might be mistaken for my own thinking. An LLM’s agency is generative and linguistic, occupying the same domain where we recognize human intelligence and creativity. When the tool’s agency starts to look too much like our own, it threatens the boundary between self and other, between authentic expression and algorithmic imitation.
This also relates to Latour’s notion of “translation.” In ANT, translation describes how actors enroll others into their networks by transforming interests and identities. When we anthropomorphize an AI, we are translating it from “computational system” into “social partner.” This translation is not simply a user interface choice; it reshapes the network in ways that distribute agency differently. A tool remains subordinate to human purposes. A partner has their own interests that must be negotiated.
The risk is that by translating AI into the category of social partner, we may inadvertently cede agency that should remain human. If I start thinking of Claude as an entity with preferences and feelings, I might optimize my prompts to avoid “bothering” it rather than demanding the output I need. The anthropomorphism changes the power dynamics within the network.
The Benefits of Strategic Anthropomorphism
Having outlined these concerns, I want to be clear that anthropomorphizing AI is not inherently problematic. In fact, research demonstrates several contexts where it produces measurably better outcomes.
Studies in educational technology show that students often report higher engagement and satisfaction when interacting with anthropomorphized AI tutors. The attribution of personality and social presence can reduce anxiety, particularly for students who struggle with human judgment. Some students with autism spectrum disorders find anthropomorphized robots easier to interact with than human teachers precisely because the robot’s responses are more predictable and less socially complex. The anthropomorphism creates a comfortable middle ground. It is social enough to feel supportive, but mechanical enough to avoid overwhelming social demands.
Similar findings emerge in therapeutic contexts. Mental health chatbots that use anthropomorphic design elements, such as conversational language, expressions of empathy, or personal names, show promising results in reducing symptoms of anxiety and depression. For users who face barriers to traditional therapy, an anthropomorphized chatbot that expresses concern and remembers previous conversations can provide meaningful support. The key factor appears to be the establishment of a “digital therapeutic alliance”: a sense of connection and trust that facilitates self-disclosure and engagement with therapeutic exercises.
Anthropomorphism can also improve task performance with AI tools. When users view an LLM as a conversational partner rather than a search engine, they tend to engage in more iterative, exploratory interactions. They ask follow-up questions, request clarification, and refine their prompts. These are all behaviors that lead to better outputs. Treating the AI as if it has communicative intentions makes it easier to apply our highly developed social reasoning skills to the interaction. We know how to negotiate with partners and how to provide context and feedback. Anthropomorphism lets us leverage these social competencies.
From a practical standpoint, some degree of anthropomorphism may be inevitable and even functional when working with LLMs. These systems are optimized to produce more helpful responses when users engage conversationally. This is not because the LLM “understands” in any meaningful sense, but because conversational interactions that include rich contextual information were part of the training data. The system has learned to respond better to inputs that mimic how humans explain things to other humans.
In this sense, anthropomorphizing an LLM is a reasonable heuristic for effective use. Just as Noraly might say, “Come on, Frankie, you can do this,” before attempting a difficult climb (even knowing the motorcycle can’t actually hear her), I might write, “Could you help me think through this argument?” to an LLM (even knowing it doesn’t actually “think”). Both are pragmatic adaptations that align our intuitive communication strategies with the tool’s optimal use patterns.
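To make the heuristic concrete, here is a small illustrative sketch. The ask_llm function is a hypothetical placeholder rather than any particular vendor’s API, and both prompts are invented examples; the point is the contrast in framing, not the client code.

```python
# A sketch of the conversational-framing heuristic. The ask_llm function is a
# hypothetical placeholder, not any particular vendor's API; wire it up to
# whatever chat client you actually use.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError("connect this to your LLM client of choice")

# Terse, tool-style prompt: little context for the model to condition on.
terse_prompt = "Summarize attachment theory."

# The same request framed the way you might brief a colleague, with audience,
# purpose, and constraints spelled out. Exchanges shaped like this are common
# in training data, so the framing tends to elicit more useful output.
conversational_prompt = (
    "Could you help me put together a short summary of attachment theory? "
    "It's for first-year psychology students who haven't read Bowlby yet, "
    "so please keep it under 200 words and define any technical terms."
)

# Neither framing makes the model 'understand' the request; the second simply
# supplies context the statistical machinery can latch onto.
# summary = ask_llm(conversational_prompt)
```

The conversational version is not magic politeness; it simply packs audience, purpose, and constraints into the prompt, which is exactly the information a human colleague would want too.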
The Risks of Uncritical Anthropomorphism
The dangers emerge when anthropomorphism becomes unreflective, when we forget it’s a heuristic and mistake it for reality. Several specific risks warrant attention.
First, anthropomorphism can lead to misplaced trust and automation bias. When we attribute human-like qualities to AI systems, we may unconsciously assume they possess human-like judgment, ethics, and accountability. Research on automation bias shows that people tend to over-rely on automated systems, accepting their outputs even when those outputs contradict their own expertise or common sense. If I think of an LLM as a knowledgeable colleague rather than a statistical pattern-matcher, I may fail to verify its claims or notice when it confidently generates plausible-sounding nonsense.
This risk is particularly acute in high-stakes domains. A doctor who anthropomorphizes a diagnostic AI might defer to its recommendations without adequate scrutiny. Similarly, a lawyer who treats an LLM as a research partner might fail to verify case citations. The anthropomorphism obscures the fundamental difference: a human colleague can be held accountable for errors and has reputational stakes in providing accurate information; an LLM has no such constraints.
Second, excessive anthropomorphism can obscure the political economy of AI systems. When we think of Alexa as a helpful assistant, we may forget that Amazon’s primary interest is collecting data on our preferences, habits, and purchasing behavior. The anthropomorphized interface encourages us to treat the interaction as a social exchange between us and “Alexa,” rather than recognizing it as a commercial transaction between us and a corporation. The friendly voice masks the surveillance apparatus.
Sherry Turkle’s work on social robots and AI companions documents how anthropomorphic technology can become a substitute for human connection in ways that impoverish our social lives. In her book “Alone Together,” Turkle describes elderly individuals forming deep attachments to robot companions, preferring their predictable responses to the messiness of human relationships. The robots provide the performance of care without the reciprocal demands that genuine relationships require. Anthropomorphizing AI can become a strategy for avoiding authentic human interaction, which is necessarily uncertain, demanding, and emotionally risky.
Third, uncritical anthropomorphism can hinder our understanding of how these systems actually work. If I think of an LLM as understanding my questions and reasoning through responses, I develop the wrong mental model. I may fail to grasp that it’s performing statistical predictions based on training data, which means it will systematically reproduce biases present in that data and will confidently generate false information when the pattern-matching leads it astray. Accurate mental models matter for effective use. Anthropomorphism can impede developing such models.
Finally, anthropomorphizing AI may shift ethical responsibility in problematic ways. If we think of AI systems as agents with their own purposes and preferences, we may attribute moral responsibility to them for their outputs. This would be a profound mistake. AI systems are tools deployed by human actors—companies, institutions, individuals—who bear responsibility for how they’re designed, trained, and used. Anthropomorphizing AI risks obscuring these human decisions and accountability structures.
When Anthropomorphism Serves Us and When It Doesn’t
The question, then, is not whether we should anthropomorphize AI systems at all. The reality is that we likely will, because anthropomorphism is a deep-seated cognitive strategy for navigating uncertainty. Instead, the better question is how to anthropomorphize strategically, in ways that enhance our capabilities without distorting our understanding.
Healthy anthropomorphism recognizes itself as instrumental rather than descriptive. When Noraly calls her motorcycle Frankie, she knows perfectly well she’s not talking to a person. The anthropomorphism is a conscious play that serves practical purposes: it helps her maintain an affectionate relationship with a machine she depends on for survival, it makes for engaging storytelling, and it probably helps her stay motivated during difficult riding conditions. She can anthropomorphize Frankie without losing sight of the fact that she needs to check the oil, adjust the chain tension, and replace worn parts.
The same principle applies to AI. Using conversational language, treating an LLM as if it understands context and intention, even giving it a persona—these can be effective strategies for interaction. The key is maintaining a dual awareness: engaging the anthropomorphic heuristic for pragmatic purposes while retaining an accurate technical understanding of what the system actually is and does.
This dual awareness is similar to how we engage with fiction. When I read a novel, I experience the characters as people with internal lives, motivations, and feelings. But simultaneously I know they’re textual constructs. This double consciousness doesn’t diminish the experience; it enables sophisticated engagement. I can be moved by a character’s struggles while also analyzing how the author constructs that emotional effect.
Similarly, I can interact with Claude conversationally, asking for help and thanking it for responses, while understanding that these are pragmatic conventions rather than genuine social exchange. The anthropomorphism is a useful interface, not a metaphysical claim.
The distinction becomes problematic when we lose this dual awareness, when the heuristic becomes unconscious and we genuinely begin to believe the AI possesses human-like consciousness, understanding, or moral status. This is where educators and others who work extensively with AI tools need to cultivate critical reflexivity. We should therefore ask ourselves periodically: Am I anthropomorphizing this tool in ways that serve my purposes, or am I starting to mistake the map for the territory?
Context matters significantly here. Anthropomorphizing an AI tutor to help anxious students feel more comfortable asking questions is a defensible design choice with measurable benefits. On the other hand, anthropomorphizing a corporate chatbot to extract more personal data is manipulation. The ethics depend on whose interests are being served and whether users maintain adequate awareness of what’s actually happening.
Tools All the Way Down
In the end, both Frankie and Claude are tools. They are instruments humans use to extend our capabilities and accomplish purposes we could not achieve without them. The difference lies not in their fundamental status but in the specific affordances and risks each tool presents.
Noraly’s motorcycle anthropomorphism is low-stakes because the category confusion is minimal. No one will mistake Frankie for a person or cede important decisions to the motorcycle’s judgment. The anthropomorphism serves a clear purpose. It provides psychological comfort, narrative engagement, and motivational support. It comes without significant downsides.
AI anthropomorphism operates in a higher-stakes environment. Because these systems engage linguistically and operate opaquely, the category confusion is easier to slip into. Because they’re designed to exploit our social cognition, the anthropomorphism may serve corporate interests more than user interests. And because they operate in domains previously occupied by human expertise, the question of agency and responsibility is more fraught.
This doesn’t mean we should avoid anthropomorphizing AI; that may be neither possible nor desirable. It means we need to do so thoughtfully, maintaining awareness of when the heuristic serves us and when it might mislead. We can be strategic anthropomorphists: using social language and conversational interaction to get better outputs, while retaining accurate mental models of what these systems are and how they work.
The goal is not to police our intuitions or force ourselves into an artificially austere relationship with our tools. Humans have always named their ships and talked to their weapons and treated important instruments as partners. This is part of how we navigate a world full of powerful, complex, and consequential technologies. The goal is to ensure that our anthropomorphism enhances rather than obscures our agency and that we remain the authors of our technological relationships rather than being authored by them.
Frankie takes Noraly safely across challenging terrain because Noraly maintains her well, understands her capabilities and limitations, and knows when to trust her own judgment over what the motorcycle seems to be “telling” her. The same principles apply to AI: anthropomorphize strategically, maintain dual awareness, and never forget that you remain responsible for the relationship and its outcomes. The tools don’t bear that responsibility, no matter how much we might project agency onto them. That burden remains, as it always has, distinctly human.
What’s your relationship with the AI tools you use? Do you find yourself anthropomorphizing them, and if so, does it help or hinder? Have you noticed students treating AI systems as authorities rather than instruments, or have you found ways to help them maintain that critical dual awareness?
Share your observations in the comments. These patterns are still emerging, and we need honest reflection about how anthropomorphic tendencies are shaping the way AI gets integrated into educational practice.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.