Why We Name Our Tools
Anthropomorphism, AI, and the Ancient Art of Making Friends with Things
This post follows my standard early access schedule: paid subscribers today, free for everyone on December 16.
My YouTube algorithm has recently taken me on an unexpected journey. For the past few weeks, I’ve been following Noraly Schoenmaker’s “Itchy Boots” channel, virtually riding along as Noraly documents her solo motorcycle adventures through Pakistan, Afghanistan, and Central Asia on her 1987 Yamaha Ténéré 600Z. What strikes me most about her videos, beyond the breathtaking landscapes, is how she talks about her motorcycle. She calls it, or rather her, “Frankie.” Not “the bike” or “my Yamaha,” but Frankie, with all the warmth and exasperation you might reserve for a trusted companion who occasionally refuses to start in the Wakhan Corridor.
This anthropomorphizing of her motorcycle is casual and unremarkable in her videos. She’ll pat Frankie’s tank after a successful river crossing, coax her gently when she protests a particularly rocky trail, and express genuine relief when Frankie comes through a hard day’s riding mechanically sound. It seems natural, even inevitable. And here’s what struck me as worth exploring: if naming and personifying one’s motorcycle seems unremarkable, even charming, why does anthropomorphizing Large Language Models provoke such visceral reactions?
The objections to anthropomorphizing AI come from multiple directions. Some researchers warn that attributing human-like qualities to AI systems obscures their actual mechanisms and leads to misplaced trust. Others argue it represents a dangerous category confusion, a failure to maintain the critical distinction between the genuinely conscious and the merely computational. Still others worry it paves the way for exploitation, as companies use our tendency to bond with human-like interfaces to extract more data or engagement.
These concerns have merit. But they also raise a puzzle: both Frankie and Claude are tools. Both are inanimate objects that perform functions for their users. Both are, fundamentally, products of human engineering. What exactly is the difference, and why do we have such different intuitions about naming them?
Before I go further, a quick note about Noraly Schoenmaker. In a media landscape dominated by conflict and outrage, her “Itchy Boots” channel operates with an almost radical wholesomeness. Across eight seasons and dozens of countries, Noraly’s videos have become my antidote to cynicism. Pakistani villagers offering chai to a solo foreign traveler. Afghan families extending hospitality without hesitation. Local mechanics spending hours helping fix Frankie, refusing payment. If you’ve lost faith in humanity’s fundamental decency, spend an evening with these videos. They won’t solve the world’s problems, but they’ll remind you that across vast cultural and linguistic divides, most people are simply kind.
But that’s a tangent, however heartfelt, to what I wanted to write about. Let’s get back to the question of why we anthropomorphize our tools in the first place.
The Deep History of Tool Personification
The practice of attributing personality, agency, and even gender to tools is ancient and culturally universal. Anthropologists have documented this behavior across vastly different societies and time periods, suggesting it taps into something fundamental to human cognition.
The historical record provides striking examples. Maritime cultures across the world have named their ships for millennia. The first-century BC historian Diodorus Siculus recorded insignia on the bows of ships in Greek trireme fleets, and the “Acts of the Apostles” references a ship named Dioskouroi, after the twin gods Castor and Pollux, to name just two examples. This pattern extends beyond maritime culture. Warriors have named their weapons throughout history. King Arthur’s Excalibur and Roland’s Durendal are legendary, but ordinary soldiers have followed similar practices. And B.B. King famously named his guitars “Lucille,” treating the instruments as partners in his creative work rather than mere possessions.
The function of this naming becomes clearer when we consider the contexts where it occurs most consistently. Ships are named because they stand between sailors and a lethal, chaotic environment. A “good ship” in maritime tradition is not merely mechanically sound; she is “loyal,” “brave,” or “responsive.” The naming transforms an inert object into a social ally, someone (not something) you can rely on when facing uncertainty and danger. Similarly, warriors name weapons that have the power to determine life and death, and musicians name instruments that profoundly affect their artistic identity and social status.
The anthropologist Bruno Latour would describe this as recognizing the “agency of things.” In his Actor-Network Theory, Latour argues that agency is never purely human; it is always distributed across networks of human and non-human actors. A person with a gun is a fundamentally different social agent than a person without one; the gun “acts” by transforming the human’s capabilities and intentions. From this perspective, anthropomorphism is less a category error than an acknowledgment of the real power that tools exercise in shaping what humans can do and who they can be.
Contemporary anthropology has moved away from viewing animism, the attribution of spirit or life to non-living things, as a “primitive” mistake. The “new animism” recognizes that the strict division between “Nature” (things) and “Culture” (people) is a relatively recent Western construct. Indigenous communities like the Nuaulu people of eastern Indonesia attribute life-like qualities to essential tools such as the sago pounder, a mallet used to extract starch from the sago palm. While they distinguish these tools from biological organisms, they recognize a form of vitality in the tool’s capacity to transform raw material into food, to perform work that sustains life.
Russell Belk’s theory of the “Extended Self” provides another framework for understanding why we anthropomorphize tools. Belk argued that “we are what we have”—that possessions are not merely external objects but become psychologically integrated into our sense of identity. Your smartphone is not just a communication device; it holds your memories, mediates your relationships, and extends your cognitive capabilities. When it breaks, the distress you feel is not merely inconvenience but something closer to injury, a violation of bodily integrity.
This helps explain why naming tools feels natural. The tools we depend on most intimately, whether Noraly’s motorcycle or a craftsperson’s hand tools or a musician’s instrument, become prosthetic extensions of the self. They carry a part of our identity. Naming them acknowledges this integration and transforms the relationship from ownership to partnership.
Why Large Language Models Trigger Different Intuitions
Given this deep history of tool anthropomorphism, why does applying the same practice to AI systems feel different? To find an answer, we need to acknowledge several factors that distinguish LLMs from traditional tools and complicate the anthropomorphism question.
First, LLMs engage in linguistic interaction, which humans typically associate with minded agents. Language is our primary medium for social connection, persuasion, and the expression of internal states. When something responds to us in grammatical, contextually appropriate language, it triggers what psychologists call “social presence,” the feeling that we are interacting with a social entity capable of reciprocal attention and response.
This phenomenon has been documented since the 1960s with Joseph Weizenbaum’s ELIZA, a simple pattern-matching program that simulated a Rogerian psychotherapist. Weizenbaum was disturbed to discover that users quickly formed emotional attachments to ELIZA, even when they knew it was a basic script. His secretary once allegedly asked him to leave the room so she could have a “private” conversation with the program. This became known as the ELIZA effect: the tendency to unconsciously read human-like understanding and intention into a computer’s output.
Unlike a motorcycle, which announces its thingness through mechanical interfaces, LLMs present themselves through the same medium—language—that humans use to reveal their thoughts, feelings, and intentions. The interface obscures the mechanism. When Noraly talks to Frankie, she knows she’s talking to metal and mechanics. When I converse with Claude, the linguistic fluency can make it easy to forget I’m interacting with statistical pattern-matching rather than understanding.
Second, LLMs are designed to be anthropomorphic in ways that motorcycles are not. Companies deliberately choose human-like names (Alexa, Siri, Claude), program politeness conventions, and optimize for responses that feel natural and conversational. This is not accidental. Research consistently shows that anthropomorphic interfaces increase user engagement, perceived trustworthiness, and satisfaction. The design is intended to trigger our social cognition circuits.
This deliberate design raises ethical questions absent from traditional tool naming. When Noraly names her motorcycle Frankie, she’s imposing anthropomorphism from the outside; the motorcycle remains indifferent. When Amazon names its voice assistant Alexa and programs it to respond to “please” and “thank you,” the company is exploiting our anthropomorphic tendencies for commercial purposes. The tool has been engineered to encourage a particular kind of relationship, one that may not serve the user’s interests.
Third, LLMs operate with a degree of opacity that distinguishes them from most traditional tools. I can open up a motorcycle engine and trace the causal chain from fuel injection to combustion to mechanical motion. I cannot meaningfully inspect what happens inside a neural network with billions of parameters. This opacity makes it harder to maintain a clear mental model of what the system actually is and what it can do. And in conditions of uncertainty, humans default to social reasoning. We anthropomorphize as a sense-making strategy.
This connects to Latour’s concept of “black-boxing”: when a technology becomes too complex to understand, we treat it as a unified actor rather than a system of component processes. The more black-boxed a tool becomes, the more we interact with it as if it has intentions and preferences rather than mechanisms and algorithms.
Actor-Network Theory and the Agency Problem
Bruno Latour’s Actor-Network Theory offers a productive framework for thinking through these complications. ANT refuses the conventional distinction between human actors (who have agency) and non-human objects (which are merely passive instruments). Instead, Latour argues that agency is always an effect of networks—associations between human and non-human actors that enable certain actions while constraining others.