
Playing Human: The Psychological Pull of Human-like Robots


Being a movie fanatic myself, I can say for certain that most of the aliens in our most beloved sci-fi movies are humanoids. I’m also pretty sure that if I went around asking people to paint me pictures of aliens, I’d be left with amateur self-portraits at best. We’ve always had this urge to see a little of ourselves in these hypothetical extraterrestrial beings. Whether filmmakers leveraged that tendency to create reasonably believable ETs, or whether a lack of imagination on the part of directors paved the path to the collective image of an ugly upright mammal landing a flying saucer in the US, is a question that warrants further investigation. As much as I would like to encounter other-worldly species in my lifetime, I know it is an unlikely event. But there is something that does come close: Artificial Intelligence.

If I had to describe the year 2020 in a single word, I, like most, would go with ‘COVID’. If I had to do the same for 2024, my top pick would be ‘AI’. Although Artificial Intelligence has been around for quite a while, it was the popularity of Generative AI, specifically ChatGPT, with its wide applicability and easy access, that took it to the next level.

The Turing test, popularly known as the imitation game, is a test proposed by the British mathematician Alan Turing to assess the human-like intelligence of machines. The idea is for a human judge to hold conversations with both a machine and a human, aiming to reliably identify which is which. Granted, the test is not especially sophisticated and is not generally regarded as the most reliable indicator of an AI’s ability, but it has stood the test of time. Recent LLMs (Large Language Models) have been able to pass it, suggesting that the boundary between human and machine intelligence has become increasingly blurred (Jannai et al., 2023).
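To make the protocol concrete, here is a minimal sketch of the imitation game in Python. Everything in it is illustrative and assumes only the standard library: the canned machine_reply and human_reply stand-ins and the deliberately naive judge_guess are hypothetical placeholders, and a real experiment would put a model such as an LLM, plus live human participants, behind those functions.

```python
import random

# Toy imitation game: a judge chats with two hidden participants,
# one human and one machine, and must guess which is which.

def machine_reply(prompt: str) -> str:
    """Deliberately simple stand-in for the machine participant."""
    return "That's an interesting question. What do you think?"

def human_reply(prompt: str) -> str:
    """Stand-in for the human participant's answer."""
    return "Honestly, it depends on the context."

def judge_guess(transcript_a: list[str], transcript_b: list[str]) -> str:
    """A naive judge that guesses at random; a real judge would look
    for tells like evasiveness, latency, or factual slips."""
    return random.choice(["A", "B"])

def run_trial(questions: list[str]) -> bool:
    # Randomly seat the machine at A or B so position gives nothing away.
    machine_seat = random.choice(["A", "B"])
    replies = {"A": [], "B": []}
    for q in questions:
        replies["A"].append(machine_reply(q) if machine_seat == "A" else human_reply(q))
        replies["B"].append(machine_reply(q) if machine_seat == "B" else human_reply(q))
    guess = judge_guess(replies["A"], replies["B"])
    return guess == machine_seat  # True if the judge caught the machine

trials = [run_trial(["What did you have for breakfast?"]) for _ in range(1000)]
accuracy = sum(trials) / len(trials)
# If accuracy hovers near 0.5, the judge cannot reliably tell machine
# from human, which is the operational sense in which a machine "passes".
print(f"Judge identified the machine in {accuracy:.0%} of trials")
```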

There is much to be said about the impact AI has on the society we live in today. However, I’d like to touch on something very specific. Through advances in artificial intelligence (AI) and robotics, machines are not only able to respond to and interact with people in meaningful ways but are also designed to mimic human-like forms, voices, and even personalities. This raises fascinating, complex questions: what happens when we combine the human-like form with the ‘intelligence’ of a machine? Why do people respond to these machines as if they were human, and what does that mean for society? (By the way, most of the robots in the reel world also resemble Homo sapiens.)

Read More: AI companions and Mental health: Can Virtual Companions reduce Loneliness?

The Pull of Anthropomorphism

At the heart of this issue lies anthropomorphism – the tendency to attribute human characteristics to non-human entities. When it comes to AI and robots, this effect goes beyond simple curiosity; it often leads to interactions that mirror those we would have with actual people. One notable example of this is the Eliza effect, a term derived from one of the earliest AI programs, ELIZA, which simulated conversation through simple text responses.

Despite the simplicity of ELIZA, users reported feelings and emotions during their interactions, convinced that they were conversing with something more than a machine. Decades later, this phenomenon persists and grows stronger as the capabilities of AI expand.
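To appreciate how little machinery the Eliza effect requires, consider a minimal ELIZA-style responder. This is a sketch using only Python’s standard library; the rules and pronoun reflections below are illustrative stand-ins, not Weizenbaum’s original script.

```python
import re
import random

# A handful of ELIZA-style rules: a regex pattern plus canned responses.
# "{0}" is filled with the captured fragment, with pronouns reflected.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?", "Do you want to be {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "I see. Can you elaborate?"]),
]

# Pronoun reflection so "my job" becomes "your job" in the reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    text = text.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            fragment = reflect(match.group(1))
            return random.choice(responses).format(fragment)
    return "Please go on."  # unreachable fallback: the last rule matches anything

print(eliza_reply("I feel ignored by my family"))
# -> e.g. "Why do you feel ignored by your family?"
```

A few pattern-and-template rules like these, looped over user input, are enough to sustain an exchange that many of ELIZA’s original users experienced as attentive listening.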

Anthropomorphism is a natural human instinct. We see faces in clouds, assign personalities to pets, and even form attachments to inanimate objects. When technology comes into the picture, this instinct can take on new dimensions. Think of Siri, Alexa, or customer service chatbots – people often talk to these programs as though they were real individuals, despite knowing on a rational level that these interactions lack actual comprehension. The human-like qualities of these machines – voice, responsiveness, and adaptability – play directly into our inclination to form social connections, even with things that aren’t truly capable of feeling or understanding.

Studies show that humans are more likely to trust and accept information from machines when they feel a connection to them (Makovi et al., 2023). This is especially evident in the case of robots with human-like appearances or voices. By bridging the gap between appearance and function, these machines create an illusion of personhood, making it easier for people to feel a sense of rapport or trust. While this is beneficial in contexts such as healthcare or customer service, where empathy and understanding are crucial, it also raises ethical questions about the manipulation of human emotions through technology.

Sophia, developed by Hanson Robotics, made global headlines in 2016. This ‘female’ social robot was even granted Saudi Arabian citizenship and a United Nations title (Wikipedia contributors, 2024). ‘She’ has participated in high-profile interviews and public talks around the world (e.g., Tonight Showbotics, an interview with Will Smith, and a talk at IIT Bombay). Such robots’ human-like design, coupled with their ability to hold a conversation, creates a sense of connection that goes beyond mere functionality. People report feeling cared for, and in some cases even develop a preference for interacting with these robots over human staff.

On the other side of the coin, a news article (Rissman, 2024) shocked many parents a while back. It describes a mother in the U.S. who believes a chatbot played a role in her son’s suicide. She claims that her son formed an unhealthy attachment to an AI chatbot, which caused him to withdraw from family and friends and spend increasing amounts of time isolated at home. She has since filed a lawsuit against the chatbot’s developers, accusing them of failing to implement adequate safeguards that might have prevented this outcome.

Read More: Why Teaching AI Like a Toddler Could Be the Future of Machine Learning

The Benefits and Ethical Dilemmas of Human-Like Machines

The ability of machines to evoke human responses has enormous potential benefits. AI companions can offer comfort and companionship to individuals who feel isolated (Alotaibi & Alshahre, 2024), while humanoid robots can make healthcare settings more pleasant and welcoming (Andtfolk et al., 2021). In education, anthropomorphic AI tutors can engage students in learning experiences that feel personalized and responsive, which can improve engagement and retention (Ekström & Pareto, 2022).

However, the rise of anthropomorphic machines also raises ethical questions. If a machine can evoke emotional responses, should it be designed to avoid manipulating those emotions? In cases where users form emotional bonds with the bot, what happens when the software is updated or discontinued, potentially “severing” a perceived relationship? Additionally, as AI becomes more ingrained in healthcare and eldercare, there’s the risk of substituting genuine human contact with machine interactions. This could have adverse effects on mental health and well-being, as humans ultimately require real social connections for emotional health.

Another ethical concern is transparency. While anthropomorphic machines provide a sense of companionship and empathy, users might be unaware of the limitations and programmed nature of these machines. If people feel that a machine truly “cares” for them, it raises questions about whether users are being misled about the machine’s actual capabilities and intentions.

Read More: AI in Therapy: Complement or Competition for Human Counselors?

Where Do We Draw the Line?

The combination of human-like form and intelligence in machines has opened new frontiers in technology and society. Machines that mimic human qualities, whether through voice, appearance, or simulated conversation, tap into our fundamental social instincts, enabling new forms of interaction and connection. However, it is essential to approach these developments with caution. As we design and interact with these increasingly “human” machines, we must remain aware of the psychological effects on users and the ethical implications of these connections.

Anthropomorphic AI can enrich lives, provide comfort, and even offer companionship in times of need. But it also blurs the line between tool and companion, raising challenging questions about authenticity, emotional manipulation, and the nature of human relationships in an increasingly digital world. As machines grow ever more human-like, the question isn’t just about what they can do, but about what they are – and, more importantly, how we perceive them.

References

Alotaibi, J. O., & Alshahre, A. S. (2024). The role of conversational AI agents in providing support and social care for isolated individuals. Alexandria Engineering Journal, 108, 273–284. https://doi.org/10.1016/j.aej.2024.07.098

Andtfolk, M., Nyholm, L., Eide, H., Rauhala, A., & Fagerström, L. (2021). Attitudes toward the use of humanoid robots in healthcare—a cross-sectional study. AI & Society, 37(4), 1739–1748. https://doi.org/10.1007/s00146-021-01271-4

Ekström, S., & Pareto, L. (2022). The dual role of humanoid robots in education: As didactic tools and social actors. Education and Information Technologies, 27(9), 12609–12644. https://doi.org/10.1007/s10639-022-11132-2

Jannai, D., Meron, A., Lenz, B., Levine, Y., & Shoham, Y. (2023). Human or not? A gamified approach to the Turing test. arXiv. https://doi.org/10.48550/arxiv.2305.20010

Makovi, K., Sargsyan, A., Li, W., Bonnefon, J.-F., & Rahwan, T. (2023). Trust within human-machine collectives depends on the perceived consensus about cooperative norms. Nature Communications, 14(1). https://doi.org/10.1038/s41467-023-38592-5

Rissman, K. (2024, October 24). Sewell Setzer: The disturbing messages shared between AI Chatbot and teen who took his own life. The Independent. https://www.independent.co.uk/news/world/americas/crime/ai-chatbot-lawsuit-sewell-setzer-b2635090.html

Wikipedia contributors. (2024, October 29). Sophia (robot). Wikipedia. https://en.wikipedia.org/wiki/Sophia_(robot)
