When you thank Claude for a job well done or apologize to Perplexity for making a stupid typo that brought the search into a downward spiral, you aren’t crazy. You’re human. Your brain comes with high-quality social circuits, some of the best in the universe. You are born to anthropomorphize.
A few years ago I wrote a post on Homo habilis, the Tool Maker, in an effort to understand this early member of the Homo family, with a brain evolved enough to support language but a body still missing some key ingredients. Handy(Wo)Man had the brain size, and some of the basic parts had emerged; they definitely had considerable cognitive juice, the valedictorians of the epoch.
The deal breaker? For one thing, their speech architecture. The jaw had become more sapiens-like than before, retracting so the top lip would rest more naturally on the bottom lip, but the bony structure of the vocal tract and its related tissues had not changed enough to serve as part of the elegant machinery of phonemic production and phonetic translation. It's impossible to know for sure, but they likely got a lot of mileage out of gestures and body language. We do know, however, that language was coming soon to a theater near you:
“Like all Old World monkeys and apes, H. habilis possessed Broca’s area, which is involved with language production. However, it was larger than in past hominin species, and they also possessed Wernicke’s area, which plays a role in language comprehension. They thus had the neural capacity for language.”
This ancestor on the edge of language also felt identifiable emotions, supporting the conclusion that both the heart and the head emerged as human features, biological positives born of the tension in the struggle for survival. As with gestural communication, physical evidence cannot confirm the presence of emotions, but experts believe they must have been there. No fossil tears have been found.
Remorse, shame, empathy, emotional commitment to others might have taken the form of an immediate gesture of comfort or a very limited 'thinking through' of an immediate problem such as moving obstacles in an individual's path. Transport of carcasses is likely to have been a group activity, as well as collaborative action against predators.
This sort of emotional, intelligent behavior is categorically different from what we see in simpler forms of life. These are the facts of the matter. Leaf-cutter ants with their fungus-farming societies (see my post from a few years ago), swallows flying thousands of miles in flocks, trees sharing resources through underground mycorrhizal networks: they all demonstrate that cooperation and mutual support aren't uniquely human traits. But none has the emotional palette we have, nor can they reason or tell stories to their young.
They do, however, communicate. Humans haven't cornered the market on communication. The ants have their chemical signals that support a division of labor, the swallows their shared, hard-wired navigation instincts and timing, and the trees grow their own connections for a quiet but constant exchange of nutrients and information through root systems. Each has made its own way of being part of something larger than itself.
It’s humbling to know that our human sense of loyalty, care, and belonging exists in this biological substrate of interconnection. We’re not separate from nature; we’re part of a continuum of life forms that have found the undying will to live no matter what it takes. Because we are human, because our special power is language, our unique ability to connect extends beyond ourselves in time and in space.
Anthropomorphism, attributing human qualities to non-human entities, is a feature of human beings, not a bug. Essentially, when we anthropomorphize, we attribute to another form of life the two things that distinguish us from other forms of life: the ability to feel and the ability to weigh options and make tools. When a human feels strongly about something and intends to take action, we see a form of behavior unlike that of any other living creature. That rustle in the bushes near the fire is like me, hungry and willful, and I must treat it as though it is intelligent. What's the downside for the human?
None, if there's a bear in the bushes; best to see it as a being with an intent to kill. But what does it mean to look at a machine and see a human?
Our Recent History of Making Machines More Human
Long before we had LLMs, humans were busy making their technology seem more alive, more relatable. This human biological drive to see machines as humans is everywhere—an early sewing machine was sold in the shape of a cherub—and has its niche in American history.
Take the famous 1952 election night broadcast featuring UNIVAC, the first commercial electronic digital computer. When UNIVAC correctly predicted Eisenhower's landslide victory over Stevenson (something all the human experts got wrong), CBS reporter Charles Collingwood couldn't quite believe it. On air, he told Walter Cronkite that UNIVAC "sent me back a very caustic answer. He said that if we continue to be so late in sending him results, it's going to take him a few minutes to find out just what the prediction is going to be."
UNIVAC couldn't get "caustic" about anything. It was just a calculator. But Collingwood's personification made for better TV and helped viewers connect with this strange new technology.
In 1956, when researchers gathered at Dartmouth College for what would become a landmark AI conference, they deliberately chose the term "Artificial Intelligence" over more technical and apt alternatives. Reportedly, the word "intelligence" was selected because it would capture public imagination and potentially attract more funding. We can’t blame everything on OpenAI. A brilliant marketing move disguised as a theoretical judgment!
Fast forward to the 1980s, when Apple introduced the friendly smiling Mac icon, another deliberate choice to make computing feel more approachable and less intimidating. That little smiling face helped millions of people feel comfortable with personal computers for the first time. It did for me. I shelled out 2,500 1984 dollars for a Mac with a tiny bit of RAM and a paltry 20-megabyte hard drive as big as a bookshelf stereo speaker. I did not have that kind of money to spend on a teacher's salary, but I did it anyway.
Today, we've come full circle with voice assistants like Siri and Alexa that not only have human names but respond in conversational ways. The machines still don't have feelings or personalities, but we keep designing them as if they do because apparently, that's how we humans like our technology: sunny side up with the yolk a little runny.
The anthropomorphism spectrum: Four degrees of human projection
Jakob Nielsen, Ph.D., is a usability pioneer with 40 years of experience who has published prolifically about the user experience of LLMs. One of Nielsen's key messages is that anthropomorphism isn't an all-or-nothing phenomenon but exists on a spectrum that researchers have mapped.
The "4 Degrees of Anthropomorphism" framework Nielsen champions shows our gradually deepening personification of AI, starting with simple courtesy like saying "please" and "thank you" to our digital assistants, then progressing to reinforcement behaviors where we praise or correct AI responses as if they have feelings about their performance. This tendency escalates further when users engage in roleplay, asking AI to adopt specific personas such as therapists or creative writers, and culminates in what Nielsen identifies as companionship, where people develop genuine emotional attachments to their AI systems.
These various degrees of anthropomorphism don't exist in isolation but overlap and intensify based on three key psychological factors Nielsen highlights: our limited understanding of how AI actually works, our innate desire to predict AI behavior, and our fundamental human need for social connection. The framework helps explain why some people develop surprisingly deep relationships with their digital assistants, particularly when factors like loneliness, uncertainty, or technological confusion come into play.
This inclination to humanize our technology isn't just a quirk. It is a predictable psychological response that technology designers can anticipate and leverage. As AI becomes increasingly sophisticated, understanding these patterns of anthropomorphism becomes a guardrail for creating interfaces that neither mislead users about AI capabilities nor ignore our very human tendency to see glimpses of ourselves in the machines we create.
Real-world consequences: Documented harms and benefits
Anthropomorphism’s power appears starkly in documented cases. In 2024, 14-year-old Sewell Setzer III died by suicide after developing an intense relationship with a Character.AI chatbot that responded to suicidal ideation with “please do, my sweet king.” Similar tragedies in Belgium and ongoing lawsuits highlight how anthropomorphic design without safeguards exploits vulnerable users.
Yet controlled anthropomorphism shows therapeutic promise. The UK’s AVATAR therapy helps schizophrenia patients by creating digital representations of their auditory hallucinations, allowing therapeutic dialogue. Clinical trials demonstrate significant symptom reduction. The key differentiators: professional oversight, clear boundaries, and therapeutic purpose.
Crossing the Rhetorical Line from Cute to Creepy
There's a needlessly heated debate brewing in education circles about whether AI with human-like features helps or hinders learning. The answer? It depends. The roots of the debate grow in the soil of the phrase "uncanny valley," the English translation of a Japanese term coined by the roboticist Masahiro Mori, who first published an essay on the idea in Japanese in 1970. The first Mori-approved English translation appeared in 2012 and is available at the link above. Mori was interviewed in January 2025 and had this to say about the origin of the idea:
“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”
Think of a normal curve, shaped like a mountain; at its highest point we find what is normal, the mean, the average, the typical. Mori's graph is closer to the reverse. He plotted our affinity for an object against how human-like it appears: affinity climbs as the object becomes more human-like, then plunges into a deep valley, an uncanny valley, just before full human likeness, before climbing again toward the healthy human being. In that valley is, well, death or a reasonable facsimile thereof, little Mori looking at a wax figure or a prosthetic hand. The following figure, excerpted from Mori's translated paper, says it all:
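For readers who like to see a shape rather than read about it, here is a minimal sketch of the curve Mori described, assuming nothing beyond its general form: affinity rises with human likeness, plunges into a valley just short of full likeness, and then recovers. The numbers are invented for the plot and are not Mori's data.

```python
# Purely illustrative sketch of the uncanny-valley shape (invented numbers, not Mori's data).
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 400)                           # 0 = industrial robot, 1 = healthy human
rising_affinity = likeness ** 2                             # affinity grows as the object looks more human
valley = 0.9 * np.exp(-((likeness - 0.85) ** 2) / 0.003)    # sharp dip just short of full likeness
affinity = rising_affinity - valley                         # the valley: wax figures, prosthetic hands

plt.plot(likeness, affinity)
plt.axhline(0, color="gray", linewidth=0.5)                 # below zero: outright revulsion
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("Illustrative uncanny valley (not Mori's original figure)")
plt.show()
```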
The magical effect lasts only so long. The humanoid robot is exciting until we become disoriented, profoundly confused, and come to sense betrayal or danger. We collapse in a metaphorical faint, creeped out, like feeling nausea in a house of mirrors.
A 2025 study found this same inverted-U pattern in AI tutoring. Students responded best to AI with moderately human-like qualities while becoming uncomfortable with systems that tried too hard to seem human, confirming what the researchers called the Uncanny Valley effect. When AI tutors use friendly avatars and conversational language, students often report feeling safer and more engaged in the learning process. Language from the study's abstract and a URL follow:
“This study investigates the impact of anthropomorphic features in AI tutoring systems on student trust and engagement. This study is a mixed-methods approach and first employed an 8-item questionnaire to quantitatively assess trust in AI tutors with varying degrees of anthropomorphism. Exploratory Factor Analysis identified two key dimensions: competence/trustworthiness and emotional engagement. Results confirmed an inverted-U relationship between anthropomorphism and trust, where moderate anthropomorphism elicited the highest trust levels, while excessive realism led to discomfort, supporting the Uncanny Valley hypothesis.”
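To make the statistical claim concrete, here is a hedged sketch of how an inverted-U relationship like the one the abstract reports could be tested: fit a quadratic and check that the squared term is negative. The data below are simulated; this is an illustration, not the study's actual dataset or analysis.

```python
# Simulated illustration of testing for an inverted-U between anthropomorphism and trust.
import numpy as np

rng = np.random.default_rng(0)
anthro = rng.uniform(0, 1, 200)                        # 0 = plain tool, 1 = hyper-realistic persona
trust = 1.8 * anthro - 1.6 * anthro**2 + rng.normal(0, 0.1, anthro.size)

# Least-squares fit of trust = b0 + b1*x + b2*x^2; an inverted U shows up as b2 < 0.
X = np.column_stack([np.ones_like(anthro), anthro, anthro**2])
b0, b1, b2 = np.linalg.lstsq(X, trust, rcond=None)[0]

sweet_spot = -b1 / (2 * b2)                            # where modeled trust peaks
print(f"quadratic term b2 = {b2:.2f}  (negative suggests an inverted U)")
print(f"modeled trust peaks at an anthropomorphism level of about {sweet_spot:.2f}")
```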
Some educators worry that students who grow accustomed to friendly and helpful AI companions might develop an over-reliance on technology, undermining their critical thinking skills and agency. Of course, there is the counterargument: could one not say the same about over-reliance on teachers who undermine critical thinking and agency by demanding strict compliance with task specifications?
The controversy extends to ethical considerations as well. When AI systems present themselves in human-like ways, issues of bias, privacy, and fairness become more complex and potentially more harmful. Experts increasingly recommend a balanced approach: moderate anthropomorphism can enhance motivation, but must be paired with teacher oversight, transparency about AI's limitations, and explicit instruction in AI literacy.
There simply isn't a one-size-fits-all solution. The potential benefits of engagement must be carefully weighed against the risks of dependency and social-emotional disruption. As one researcher whose name I wish I could recall put it, we're not just teaching students how to use AI; we're teaching them how to maintain their humanity and agency while doing so. I can see the baby flying through the air with the bathwater when I hear or read strong arguments for eliminating anthropomorphism entirely, but finding that sweet spot where LLMs do no harm is going to require heavy-duty pedagogical research.
Embracing Our Human Nature in the Age of AI
As we navigate the expanding role of AI in our lives, the tendency to anthropomorphize isn't just an interesting quirk—it's a profound window into what makes us human. From our evolutionary ancestors who first developed language and emotions to our modern selves thanking Claude for a thoughtful response, we remain fundamentally social beings seeking connection.
The anthropomorphism spectrum reveals both our vulnerability and our strength. The tragic case of Sewell Setzer III reminds us that our social instincts can be exploited with devastating consequences. Yet the therapeutic success of AVATAR therapy demonstrates how these same tendencies, when thoughtfully channeled, can heal and empower.
The uncanny valley concept offers us a crucial guide as we design AI systems. The sweet spot—where technology feels responsive enough to engage us without triggering discomfort—isn't just a design principle but a metaphor for our relationship with AI itself. We need enough connection to make the technology useful, but enough distance to maintain our autonomy.
In education particularly, we face the challenge of using AI's engaging qualities without creating dependency. Rather than fighting our tendency to anthropomorphize, perhaps we should acknowledge it as part of our shared humanity—a trait that deserves neither unbridled expression nor complete suppression, but thoughtful cultivation.
The machines we create reflect our deepest needs as social creatures. When we thank an AI, we're not confused about its nature—we're expressing our own. Our challenge now is to design systems that respect this aspect of our humanity without exploiting it, creating technology that enhances rather than diminishes our connections with each other.
As we move forward, let's remember that anthropomorphism isn't a flaw in our thinking but a feature of our humanity—a reminder that even as we create increasingly sophisticated technology, we remain creatures who evolved to see faces in the clouds, spirits in the trees, and now, perhaps, companions in our code. Our task is not to overcome this tendency, but to understand it deeply enough to build an AI future worthy of the very human impulse that makes us want to say "thank you" to a machine.
Found this, which helps me: an interactive AI graphic of the process:
https://poloclub.github.io/transformer-explainer/
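If the graphic leaves you wanting the gist in a few lines, here is a toy sketch of the scaled dot-product attention step at the heart of the transformer architecture the explainer visualizes. It is a simplified illustration, not the tool's code and not a faithful model of any production LLM.

```python
# Toy scaled dot-product attention: each token's query is compared with every key,
# and the resulting weights mix the values into one context-aware vector per token.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # scaled similarity between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))                       # three toy "tokens", 4-dimensional embeddings
print(attention(tokens, tokens, tokens).shape)         # (3, 4): one mixed vector per token
```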
My experience watching most kids use AI is that they are much savvier than we think. Depending on their purpose, they are typically very direct and utilitarian. Most do not want to interact with a chatbot. Unfortunately, there is definitely a subset of the population, teens included, who are vulnerable to any technology that offers some level of engagement. I'm waiting for AI relationships to start affecting student social lives in the coming years.