The Schwartz Reisman Institute for Technology and Society’s 2024 Absolutely Interdisciplinary conference transported audience members into a near future where artificial intelligence (AI) and humans coexist in workplaces, learning environments, and homes. 

During the session on May 7 titled “Designing Human-Machine Coexistence,” Huili Chen, a research fellow at the Berkman Klein Center for Internet & Society at Harvard University, spoke to the audience about the socioemotional aspects of human-robot interaction. She explored overlooked design elements in social AI that could help make AI agents positive influences on humans.

What makes a successful social AI agent? 

AI agents are artificial intelligence systems capable of performing complex tasks without human intervention. When designed for social settings, these agents can hold entire conversations with humans, serve as learning tools, or even help us understand our own social behaviours.

Chen stressed the importance of considering the socioemotional experience of humans when designing AI agents and introduced three key approaches for achieving positive interactions: creating an appealing physical embodiment, enabling the replication and expression of emotions, and modelling desired human behaviour. These approaches have the potential to help humans thrive in their social and learning environments.

AI embodiment makes it easier to form bonds

When introducing the concept of embodied sociability — giving a physical form to social AI — Chen posed a question she’s heard often: “Is it even necessary for a system to have a physical embodiment? What is the difference between a virtual agent and an embodied agent?”

Studies have shown that humans find it much easier to form bonds with embodied systems, especially those that exhibit human mannerisms. This is largely due to the nature of human communication. Chen noted that a large part of inferred meaning in conversation comes from nonverbal cues, like gestures, volume of speech, and body language. Interacting with a well-designed AI embodiment that exhibits appropriate mannerisms can make one feel seen, heard, and understood — key elements of a satisfying and meaningful conversation.

Creating a body capable of meaningfully interacting with humans is complex. The “uncanny valley,” a term coined by Masahiro Mori in 1970, describes the phenomenon in which robots that look almost, but not quite, human become unsettling rather than endearing. The same effect is triggered by objects like porcelain dolls, early CGI humans, and mannequins. Avoiding the uncanny valley is important, as the effect can make people hesitant to interact with embodied AI agents.

Beyond embodiment: Emotional appeal

Designing fundamentally emotionless machines with emotion in mind may seem like a fruitless endeavour, but giving robots human-like portrayals of emotion can make them more appealing to humans. One example of this phenomenon is Kip, the ‘empathy object.’ Designed by the Media Innovation Lab at the Interdisciplinary Center Herzliya, Kip aims to reflect emotion and help speakers become aware of their tone.

Kip is shaped like a desk lamp and is designed to observe human conversation. Though Kip never speaks, it’s highly expressive and somewhat adorable: it cranes its ‘neck’ in interest when someone is speaking and shakes in fear when a conversation becomes hostile. Seeing the impact of their behaviour reflected in Kip, speakers become more conscious of how they speak.

AI as a role model

Human mimicry is the tendency for people to subconsciously ‘mirror’ the behaviour of a conversational partner. Examples of this psychological phenomenon include using similar hand movements, vocabulary, posture, and tone of voice. The urge is so strong that humans can even begin to mimic robotic conversational partners, developing trusting bonds with robots that may, in turn, mimic us.

This phenomenon is especially relevant when considering the teaching potential of AI. Chen cited the example of AI agents that played learning games with children. These agents were able to teach perseverance simply by displaying a growth mindset, which the children would eventually adopt. Children who interacted with the ‘growth mindset’ AI persisted longer in challenging learning activities than children who played without AI peers.

In short, yes, AI agents can and likely will make us better humans. There is a place for AI agents in our future, but it is important to critically consider the roles they will take on. We will always see reflections of our human needs and desires in their design.