By Justin Ing
Have you ever thanked ChatGPT, Grok or DeepSeek for helping with your kids’ homework, blamed Google Maps for not giving you the least congested route, or wondered whether ChatGPT actually understands your jokes? If so, you're experiencing something called “anthropomorphism” — our human tendency to see non-human things as having human qualities. According to experts, this mental habit is causing major confusion about what artificial intelligence (AI) can really do.
“Anthropomorphism is the ascription of human qualities onto non-human entities,” explains Dr Adriana Placani in a recent study published in the journal AI and Ethics. Essentially, it's when we project human traits — like feelings, intentions or consciousness — onto things that don't actually have them.
This is not new. People have been giving human qualities to everything from stuffed toys to weather (“angry thunder”) since ancient times. But with AI becoming more advanced, this habit is creating some serious misconceptions.
The Language Trap
Think about how we talk about technology. We say: “My phone died”, “The AI learned from its mistakes”, “Alexa is listening to our conversations”.
Even the term “artificial intelligence” itself makes us think these systems possess human-like smarts. But this kind of language is misleading.
It is worth remembering that even the smartest AI systems today have no feelings, consciousness or true understanding. They are just sophisticated pattern-matching systems trained on massive amounts of data. When ChatGPT writes something that sounds empathetic, it is because it is mimicking patterns from human writing — not because it feels empathy.
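To see what “pattern matching” means in practice, consider a minimal sketch in Python with a made-up one-line corpus. Real language models are incomparably larger and more sophisticated, but the basic principle, continuing text using learned statistics rather than understanding, is the same:

```python
import random
from collections import defaultdict

# Toy "training data": the program only ever sees word sequences, never meaning.
corpus = ("i am happy to help you . i am sorry you feel sad . "
          "i am here to listen to you .").split()

# Record which words followed which: the entire "knowledge" of this model.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start="i", length=8):
    """Produce text by repeatedly sampling a word that followed the last one."""
    words = [start]
    for _ in range(length):
        choices = following.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate())  # e.g. "i am sorry you feel sad . i am"
```

Nothing in this program knows what sadness is; it only knows which words tended to follow which in its training text.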
The ELIZA Effect: An Old Trick
This confusion has been with us since the early days of computing. In the 1960s, a simple computer program called ELIZA could pretend to be a therapist simply by rephrasing what users said as questions, like:
User: “I'm feeling sad today.”
ELIZA: “Why are you feeling sad today?”
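To appreciate how little machinery was behind this, here is a minimal sketch in Python of an ELIZA-style reply. It is an illustrative reconstruction, not Weizenbaum's original code: all it does is flip pronouns and wrap the user's statement in a question.

```python
import re

# Swap perspective so "I'm feeling sad" becomes "you are feeling sad".
PRONOUN_SWAPS = {"i'm": "you are", "i": "you", "my": "your", "me": "you"}

def eliza_reply(statement: str) -> str:
    """Echo the user's statement back as a question, ELIZA-style.
    The real ELIZA had many such rules; this handles only the simplest case."""
    words = statement.lower().rstrip(".!").split()
    swapped = [PRONOUN_SWAPS.get(w, w) for w in words]
    return "Why " + " ".join(swapped).replace("you are", "are you", 1) + "?"

print(eliza_reply("I'm feeling sad today."))
# -> "Why are you feeling sad today?"
```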
Even though the program was extremely basic, many people became emotionally attached to it and believed it understood them. The creator, Joseph Weizenbaum, was shocked when he saw people developing emotional connections to what was essentially a simple text manipulation program.
“What I had not realized is that extremely short exposure to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Weizenbaum wrote.
Why It Matters to You
You might be thinking, “Why should this matter to me? What's the harm?” According to experts, anthropomorphism can cause several problems:
It creates unrealistic expectations. When we believe AI has human qualities, we expect it to behave like humans — with common sense, ethical understanding and emotional intelligence that current systems simply do not have.
When students ask ChatGPT for help with a complex ethical question, they need to understand it is just predicting what sequence of words would sound reasonable based on its training data, not providing genuine wisdom.
It misplaces trust. Research shows that adding even small human-like features to technology makes people trust it more. But this trust is not always deserved.
If your navigation app sends you down a narrow street in a crime-ridden town, it is not because it is trying to harm you. The system does not care about you one way or another; it is just following its programming.
It confuses responsibility. When we see AI as human-like, we might blame the technology itself for problems instead of the people who created, programmed and deployed it.
If a social media algorithm promotes harmful or politically biased content, the question is not “Why is the AI being irresponsible?” Instead, we should ask who designed it that way and why.
When Your Child Says “I Like Her (the AI) More Than You, Mum!”
The other day, I overheard my seven-year-old nephew conversing with an AI tool on a handphone. When his mother interrupted and started talking to him, he first said sorry to the AI, then told her: “Mummy, that’s rude. I was talking to the AI. Can you wait?”
Children are natural anthropomorphisers. Young minds readily assign feelings, intentions and personalities to inanimate objects, like toys. With AI devices designed to simulate conversation and respond to social cues, this tendency becomes even more powerful.
While there is nothing inherently wrong with imaginative play involving technology, parents should be aware of several potential concerns.
Children may develop genuine emotional attachments to AI systems. When children believe AI cares about them personally, it can create dependency or even disappointment when they eventually realise the technology does not actually have feelings.
Children who view AI as trusted friends may share personal information more readily, as kids do not naturally understand data privacy. When they think they are talking to a friendly entity rather than a corporate product collecting data, they may overshare.
Children practising social skills primarily with AI might miss important aspects of genuine human interaction. AI does not get tired or frustrated, and it never needs to compromise. Yet learning to handle those very frictions is a crucial social experience that helps develop empathy.
When children accept AI responses without questioning, it may affect critical thinking development. Unlike human teachers who encourage questioning, AI systems present information with an appearance of perfect knowledge.
How to Think Clearly About AI
How can you avoid falling into the anthropomorphism trap?
1. Watch your language: Instead of saying “The AI thinks ...”, try “The AI was programmed to ...” or “The AI was trained on data that ...”.
2. Remember what's behind the curtain: When an AI seems impressive, remember there are human programmers, massive datasets and corporate interests behind it.
3. Question the metaphors: When someone describes AI using human terms (like having feelings or understanding), ask whether that is actually true or just a convenient but misleading comparison.
4. Learn the basics: Understanding how technologies like large language models actually work — through pattern recognition rather than comprehension — can help you see past the human-like façade.
Children notice how parents interact with technology. Parents should avoid saying things like “Siri is being stubborn today” or “I think Alexa is upset.” Instead, use language that reflects reality: “See how the AI got this wrong? That's because it doesn't really understand the world like we do. It's just making guesses based on information it was given.”
The next time you find yourself apologising to AI for not giving the right prompt, or wondering if it gets annoyed when you are rude, remember: that's just your very human brain doing what it evolved to do — looking for minds like yours everywhere, even in places where none exist.