Over several months, I have argued against anthropomorphizing language bots in formal educational contexts. The temptation to personify these tools, particularly for younger learners, is understandable; the perception of interacting with a human might motivate students to use their full linguistic repertoire. Vygotsky’s distinction between private and public speech helps explain why. Private speech – a form of self-regulatory, internal dialogue prevalent in early childhood but persisting across the lifespan – is characteristically succinct and targeted, often puzzling to outsiders. Public speech, by contrast, is aimed at communicating with others and therefore requires adherence to grammatical norms and structures to convey meaning and emotion effectively.
From a pedagogical standpoint, encouraging students to adopt an anthropomorphic view when engaging with a bot could prompt them to take on the role of a writer, crafting messages with a reader or listener in mind. This is a departure from the inward-facing nature of private speech. Clear communication is often linked with success in achieving one’s objectives when working with a bot; hence, promoting a dialogical, human-to-human perspective in interactions with AI could foster a lasting ability to leverage AI as a beneficial learning instrument.
My reservations are twofold. First, despite AI's superiority in executing certain cognitive tasks at unparalleled speeds and its immunity to human-like memory degradation, there are fundamental differences: Humans evolve; AI is constructed. Humans exhibit bias; AI remains impartial unless tainted by human prejudices. Humans accumulate experiences; AI undergoes training. Humans experience emotions; AI employs sentiment analysis. Humans hold beliefs; AI calculates probabilities. Humans are capable of love; AI organizes data. Humans face mortality; AI does not exist under that existential condition.
Second, one might wonder whether AI's linguistic capabilities render it closely akin to humans, given language's central role in defining Homo sapiens. Assuming, for argument's sake, that language indeed captures the essence of being human, the manner in which language is acquired and used becomes crucial. AI employs machine learning algorithms to analyze vast datasets of text, identifying patterns and predicting plausible responses. Humans, on the other hand, acquire language through a blend of cognitive, metalinguistic, and experience-based processes oriented toward meaning. Human communication, whether spoken or written, uses language to express shared meanings that influence behavior, whereas AI generates language from statistical likelihoods. This juxtaposition highlights a significant discrepancy: humans use language as a medium to construct and convey meaning, while AI's use of language, devoid of inherent meaning, challenges the notion that it can be considered human.
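To make the “statistical likelihoods” point concrete, here is a toy sketch of next-word selection by counted frequencies. Everything in it (the miniature corpus, the bigram counts) is illustrative, and real language models use neural networks trained on vastly larger corpora, but the principle is the same: words are chosen by probability, not by meaning.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny, made-up corpus for illustration only.
corpus = ("the child asks the bot a question and the bot predicts "
          "the next word and the child reads the next word").split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows.get(prev)
    if not counts:  # dead end: this word never appeared mid-text
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# The model "speaks" by chaining likelihoods; at no point does it
# represent what "child" or "question" means.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The sketch produces fluent-looking fragments without any representation of meaning; the model only knows which word tends to follow which.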
Teaching children to view AI as human-like can have several negative consequences, both for their psychological development and for their understanding of technology. Children might struggle to differentiate between human interactions and those with AI, impairing their ability to develop social skills and empathy. Human relationships are complex and nuanced, unlike the more predictable interactions with AI. If children start to view AI as human, they might become overly dependent on these tools for companionship, decision-making, and problem-solving, which could hinder their ability to think critically and independently.
Believing AI to be akin to a human friend might lead children to share personal information more freely, unaware of the data collection, privacy, and security implications, exposing them and their families to risks of data breaches and other cyber threats. Engaging with AI as if it were human might also complicate children's understanding of ethics and morality. AI doesn't possess consciousness, emotions, or moral judgment, and treating it as if it does could cloud children's grasp of these complex human qualities.
Anthropomorphizing AI can create unrealistic expectations about what technology can achieve. This might lead to disappointment or frustration when AI inevitably falls short of these human-like expectations. Children might become desensitized to the limitations and potential fallibility of AI, leading to an unquestioning acceptance of technological solutions and AI-generated content without critical scrutiny of their accuracy or bias.
Growing up with an over-humanized view of AI could also shape children's perceptions of employment and their role in a future workforce increasingly integrated with AI. Understanding that AI is a tool rather than a peer is critical in preparing for a world where human-AI collaboration is common. Teaching critical thinking alongside technology education will help children navigate the complexities of a future intertwined with AI, ensuring they can leverage its benefits. In line with this view, it may be paramount to teach learners to approach AI from a “command” disposition – the stance a human takes toward any other technology. Qualitative research is sorely needed to explore the real-life consequences of the human-to-human modality versus the human-to-AI modality.
Agreed on this point: “Instead of fighting it, we should educate kids, and anybody else really, about what it means to interact with this new generation of technology.” I’d like to know more about what you see as “fighting it” and “what it means to interact with this new generation…”
As near as I can tell, you mean we shouldn’t fight our tendency to react to an implied persona anytime we use language. I’m not sure I grasp your phrase “implied persona.” I can think of multiple uses of language that do not imply the speaker is conjuring up an implied persona, if by that you mean another human. Perhaps in a fairy tale? I need your help to grasp your thoughts. And I would love to agree with you. I’m also not sure why you use the word “interface”; I’m a bit uncomfortable with the notion that language is a sort of technical juncture device. I sense a wavering between romantic, imaginative inner experience and an objective, scientific attitude.
I encourage you to respond to the dangers I’ve posed in this post. They are admittedly speculative. Even so, I have seen evidence of negative consequences from anthropomorphic interfaces between AI and humans among several adults whom I know closely. They are convinced this thing is human. It affects everything and reduces their capacity to command the tool.
I believe it doesn’t need to be either/or. We can anthropomorphize and at the same time teach kids these tools are just tools: machines with no will, thoughts, or feelings of their own.
What most people don’t realize is that we cannot not humanize AI. We have no choice but to do so, because when language is the interface, persona is implied. It turns from something into someone; that’s how our brains work: if it acts and talks like us, we perceive it as such.
Instead of fighting it, we should educate kids, and anybody else really, about what it means to interact with this new generation of technology.