Should we be anthropomorphizing AI?
Written by Daley Wilhelm
Art direction by Manoel do Amaral
Humans anthropomorphize everything. We assign human traits and emotions to animals, inanimate objects, and even software.
“Gmail is acting finicky today.”
“I swear my cat threw up on the rug just to spite me.”
“Siri can be so dumb sometimes.”
The reality is that animal behavior doesn’t always correlate to human behavior. Software and AI don’t “behave” at all, but rather function in accordance with their code. Humans, as social animals, find it easy to interpret certain outputs as “behaviors.” Humanizing the tech we use makes it a little bit more understandable.
But anthropomorphizing things can go wrong. Rather than making complex systems like AI more understandable, anthropomorphizing tech can actually contribute to further mystification and misunderstanding.
ChatGPT helpfully (and ironically) defines anthropomorphism
During a qualitative usability study of ChatGPT, the Nielsen Norman Group observed four patterns of user behavior that assigned human traits to the AI.
1. Courtesy
2. Reinforcement
3. Roleplay
4. Companionship
Courtesy:
Most people are “guilty” of treating AI with basic courtesy. “Please” and “thank you” aren’t required, but out of habit and social conditioning, users will often phrase prompts politely. Voice assistants like Siri are designed to be conversational, and conversations typically involve social niceties like saying “thank you” after a reply to a query. Siri won’t be upset if we don’t thank “her” for her help, but we extend the courtesy anyway.
Reinforcement:
The Nielsen Norman Group describes reinforcement as praising or scolding a chatbot when it gives a correct or incorrect answer. In humans, we know that positive reinforcement matters: it reaffirms the behavior or results we want to see. We praise good grades and give awards for excellent work. We scold misbehavior and try to correct mistakes. But that’s human behavior. What’s the point of giving an AI bot positive or negative reinforcement? In the study, participants described two motivations for praising ChatGPT with a “good work!”: the belief that positive reinforcement would help the AI replicate similar results in the future, letting it know it was producing “good” work, and the belief that because AI mirrors human attitudes and behaviors, a positive user would get a positive, friendly interface in return. This is a step above common courtesy, but it can still arguably be attributed to habits formed by socialization in human society.
Roleplay:
Roleplay happens when users of a product like ChatGPT ask the bot to assume a role. For example, users can ask ChatGPT to “assume the role of an upbeat social media manager and write a newsletter for the release of the new game called…” According to Nielsen Norman, “Assigning roles to the chatbot is a frequently recommended prompt-engineering strategy.” Roleplay prompts ask AI to take on human traits like job titles (social media manager) and attitudes (upbeat). It’s a literal anthropomorphization of a product, but that is sometimes what the task demands. To meet a user’s needs, AI might be required to act more like a coworker than a tool.
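To make the technique concrete, here is a minimal sketch of a role-assignment prompt using the OpenAI Python SDK. The model name and persona wording are illustrative assumptions, not something drawn from the Nielsen Norman study.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A "roleplay" prompt: the system message asks the model to adopt a persona
# (an upbeat social media manager) before it handles the actual task.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model choice; any chat model would work
    messages=[
        {
            "role": "system",
            "content": "Assume the role of an upbeat social media manager.",
        },
        {
            "role": "user",
            "content": "Write a short newsletter announcing the release of our new game.",
        },
    ],
)

print(response.choices[0].message.content)
```

Note that the persona lives entirely in the prompt. The underlying model hasn’t changed, which is part of the point: the “social media manager” is a framing users apply, not a property of the technology.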
Companionship:
Companionship is the point at which users begin to treat AI like a coworker and fellow human. Users befriend the AI, speaking to it with courtesy and even affection. This doesn’t mean that the user necessarily believes the AI inherently has human traits like empathy and kindness, but chatbots like ChatGPT often mirror the input style of their users, so treating a bot with kindness often means receiving kind replies in turn. Chatting with AI in a companionable way can help alleviate loneliness and be comforting in much the same way that readers find comfort in fictional characters. Even if they aren’t “real,” the feelings elicited by positive interactions with AI are.
Image: generated with Visual Electric
Why do we speak to AI as if it is human? Do we want AI to act in a certain way? Do we want it to be more human? Or do we just assign that trait anyway?
As mentioned before, humans assign human traits to nonhuman animals and objects so as to understand them through a human lens. AI is especially mysterious, so we do what we can to demystify the technology.
A basic misunderstanding of how AI works might actually be the motivation behind anthropomorphizing the technology. The Nielsen Norman study indicated that participants weren’t sure how to interact with platforms like ChatGPT and so acted on what they had heard about AI from other sources: “Thus, rumors spread about what makes AI work best, many of which include a degree of anthropomorphism.”
Now that we know why people approach AI the way that they do, namely with anthropomorphism on the mind, should we lean into humanizing AI? Should we encourage users to speak with AI like ChatGPT the way that they might speak with a coworker or friend?
No. In the same way that referring to AI as “magical” can prove problematic, acting as if it is human will lead to frustration and misunderstanding.
Examples of how simple verbiage can contribute to the anthropomorphization of AI (Source)
Referring to a cat’s actions as having “ulterior motives” can create the understanding that cats are capable of scheming or malice. (It seems that way sometimes, but it’s not true!) Thanks to studies of animal behavior, we know that this is not the case. Again, animals behave based on instinct. Humans behave based on both instinct and social expectations. Digital products like ChatGPT function based on their code.
Therefore, thinking of AI in anthropomorphic terms is misleading at best and a source of total misunderstanding at worst. By assigning human traits to AI, people can form an incorrect idea of how it works. If demystification is the goal (and it should be), then anthropomorphizing AI works against it.
Raspberry Pi gives a few examples of how to refer to AI without using anthropomorphic language.
“It listens/it learns” → “AI is designed to…/AI developers build apps that…”
This shifts the focus from AI as an independent entity to the fact that it is a piece of technology designed by humans for specific uses.
“see/look/create/recognize/make” → “detect/input/pattern match/generate/produce”
The initial list of verbs can be applied to people, inherently implying a human quality to AI. More accurate language, like “detect” in place of “see,” helps to establish AI as a technology rather than an entity.
Avoid using Artificial Intelligence/Machine Learning as a countable noun, e.g. “new artificial intelligences emerged in 2022” → Refer to ‘Artificial Intelligence/Machine Learning’ as a scientific discipline, similarly to how you use the term “biology.”
This roots AI/ML in the fact that it is something developed by humans, rather than a force that emerged on its own or that has motivations of its own.
Billie can’t tell you the weather, or the local news, but “she” can… hype you up, girlfriend? (Source)
But wait, what about the companies that are clearly leaning into anthropomorphizing their AI chatbots? Meta has gone beyond referring to its chatbots in human terms to creating 28 chatbots with unique, and very human, personalities. Some are even based on real celebrities like Snoop Dogg and Kendall Jenner.
Each chatbot is meant to fill a certain role and have a specific area of expertise, which makes it easier to find the bot that will help fulfill a user’s goals. For example, “Billie” (Kendall Jenner) is described as a big sister, so users looking for sisterly advice on life and love would turn to her. Other chatbots are based on athletes like Dwyane Wade and have more to offer on exercise and sports than “Billie” would.
This is an obvious example of roleplaying with AI, creeping into companionship. While sequestering information behind friendly faces with welcoming personalities might make sense from a design perspective, it unfortunately contributes to confusion about AI. Does Snoop Dogg endorse everything the chatbot might say? If a user receives an inaccurate, or even offensive, reply, they may attribute it to the celebrity personality rather than to the shortcomings of the technology.
Image: generated with Visual Electric
We don’t think of electricity as magic. Nor do we assume that it has moods, feelings, or preferences. So why should we make those assumptions about artificial intelligence? Doing so creates a fundamental misunderstanding of how the technology works and can set unrealistic expectations of AI’s capabilities.
As humans, we are wont to anthropomorphize whatever we’re working with (animals, vehicles, tech, and so on), so the anthropomorphization of AI seems unavoidable. This is visible in the four patterns the Nielsen Norman Group observed in its ChatGPT study: courtesy, reinforcement, roleplay, and companionship.
But leaning too much into the anthropomorphization of AI can create misunderstandings around how the technology works, and indeed obscure the fact that artificial intelligence is just a technology rather than a sentient being with its own motivations.
There needs to be more robust education around artificial intelligence and machine learning if these technologies are meant to be such a big part of humanity’s future. Otherwise, users will keep operating on a basic misunderstanding of how AI works. That is a recipe for frustration, something that designers are meant to alleviate or eliminate.