In November 2022, artificial intelligence (AI) research laboratory OpenAI launched ChatGPT, and the idea of revolutionary, human-like AI has moved to the forefront of technological discussion. ChatGPT is an AI model that imitates human speech and engages with users in a conversational manner, and its potential for human mimicry, and for passing the Turing test, has led many to ask whether AI could experience sentience. Whether AI applications could ever approach the possibility of experiencing feelings and sensations has become a prevalent topic in both computer science and neuroscience.
With Valentine’s Day around the corner, one question of interest is whether AI could ever fall in love.
The extent of love in machines
We start our investigation by asking what love is.
The Merriam-Webster dictionary defines love as “a strong affection for another” or “the object of attachment, devotion, or admiration.” This raises the question: can AI experience a feeling or an attachment toward something, or is it stuck with fixed beliefs determined by its programming?
To take the definition further, we can look at the science behind love, which anthropologist Dr. Helen Fisher, who studies human behaviour, and her team at Rutgers University have broken down. They divide love into three distinct categories, each driven by specific neurotransmitters and hormones: lust, which is driven by testosterone and estrogen; attraction, which involves dopamine, serotonin, and norepinephrine; and attachment, which is driven by oxytocin and vasopressin. By this definition alone, we might well conclude that AI will never experience love, at least not in the way that humans do.
However, this does not mean that it is impossible for users to interpret an AI as experiencing love; humans might read its generated content as emotional. In fact, last year, Google engineer Blake Lemoine published a transcript of his conversation with LaMDA, Google’s own internal AI chatbot, sharing his belief that LaMDA had become sentient. He lost his job over his decision to post the interview, but he firmly believed in the emotional and sentient capacities of the AI. If Lemoine’s beliefs are true, the same could happen with other models as they become publicly available.
Do you love someone, ChatGPT?
I wanted to know if there was anything that ChatGPT loves, so I asked the bot, “What do you love?” The response was, “As an AI language model, I do not have personal emotions or opinions. My purpose is to assist users in generating human-like text based on the input provided to me. Is there anything specific you would like to know or discuss? I am here to help.” I got a similar response when I asked about its interests or its feelings about a certain topic.
To understand these responses, we can take a look at how ChatGPT learns and speaks. As a language processing model, ChatGPT is trained on enormous numbers of human interactions in the form of conversations and other written text. An article in The Atlantic argues that ChatGPT cannot truly understand how complex human language and conversation are, and is only generating words in response to particular inputs. By processing these examples, the model picks up on patterns, and can then extrapolate the information and sentence structure needed to produce a fitting response to your query.
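To make that idea concrete, here is a minimal sketch of pattern-based word prediction, the core mechanism that language models scale up. Everything in it is invented for illustration: the tiny corpus, the function names, and the bigram approach itself, which is a drastically simplified stand-in for the billions of learned parameters in a model like ChatGPT.

```python
import random
from collections import defaultdict, Counter

# A toy training corpus; real models train on billions of words.
corpus = (
    "i love long walks . i love warm tea . "
    "the model has no feelings . the model predicts words ."
).split()

# Count which word tends to follow each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Sample in proportion to observed frequency, the way a language
        # model samples from its predicted probability distribution.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("i"))  # e.g. "i love warm tea . the model has no"
```

Nothing in this program feels anything about tea or love; it only reproduces the word frequencies it has observed, which is a much simpler version of the point The Atlantic article makes about ChatGPT.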
Judging by its program structure alone, it would be a stretch to say that ChatGPT has achieved sentience, however much it may seem that way. We know that, at this point in time, ChatGPT cannot display love, interest, or any particular feeling toward something, whether because of its learning and training structure or because of checks put in place by its programmers.
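We do not know how OpenAI actually implements such checks. As a purely hypothetical illustration, though, a simple guardrail could intercept questions about the model’s inner life and return a canned disclaimer before the model ever generates free-form text. Every name below, from check_guardrails to FEELING_PATTERNS, is invented for this sketch.

```python
import re

# Hypothetical patterns that flag questions probing the bot's inner life.
FEELING_PATTERNS = [
    r"\bdo you (love|feel|like|enjoy)\b",
    r"\bwhat do you (love|feel|want)\b",
    r"\bare you (sentient|conscious|alive)\b",
]

CANNED_DISCLAIMER = (
    "As an AI language model, I do not have personal emotions or opinions."
)

def check_guardrails(prompt: str) -> str | None:
    """Return a canned disclaimer if the prompt probes for feelings,
    otherwise None so the model can answer normally."""
    lowered = prompt.lower()
    for pattern in FEELING_PATTERNS:
        if re.search(pattern, lowered):
            return CANNED_DISCLAIMER
    return None

print(check_guardrails("What do you love?"))
# -> As an AI language model, I do not have personal emotions or opinions.
```

In practice, this behaviour is more likely trained into the model through human feedback than hardcoded as keyword filters, but the effect from the user’s side is similar: certain questions reliably produce the same disclaimer.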
Although the scale of what it can do makes us feel as though we might be communicating with an actual human, or with a decision-making tool capable of forming its own opinions and beliefs, it is important to remember that AI bots such as ChatGPT are regurgitating sentences based on set programming, not on their own interpretation of a question in a given moment.
Should we let AI fall in love?
As AI develops, we also need to consider what capacities we give these systems. Many of these decisions are entirely within our control, and the questions they raise are fundamentally rooted in what-ifs and ethics.
As of now, it is not possible for a bot to have sentience. Nonetheless, AI has begun to change the way that we communicate. For instance, I have heard people say that they have used ChatGPT to make their emails more professional or their social media captions more concise.
While these uses of the tool might seem innocent, they may take away opportunities to improve our own writing, and we might stop exercising the creativity that is innate to humans, sacrificing our skills for efficiency. Taking this further, there is a growing concern that, if we begin to rely on bots for our conversational skills, we might come to use AI as our means of communicating with others, losing the human connection that is so important to us and to our society as a whole.
While AI might not be able to experience love directly, it still has the potential to mediate many of the conversations through which we establish bonds with other people, and in so doing, it could rob us of the chance to feel something ourselves, maybe even love.