As discussion of artificial intelligence (AI) gained popularity in the twentieth century, mathematician, computer scientist, and philosopher Alan Turing proposed, in 1950, a test for machines that was initially intended to measure intelligence.
The imitation game, later known as the Turing test, assesses a machine’s ability to exhibit human-like behaviour. It involves three participants: two humans and one machine. Participant A, one of the humans, acts as the interrogator: they communicate with Participants B and C, asking them questions and analyzing the responses to figure out which of the two is the human and which is the machine.
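For readers who think in code, here is a minimal Python sketch of that setup. Everything in it is hypothetical: the canned responses stand in for a real human and a real conversational model, and the interrogator simply guesses at random, where Turing’s version relies on genuine questioning and judgement.

```python
import random


def machine_respond(question: str) -> str:
    """Hypothetical stand-in for a conversational model such as LaMDA or GPT-3."""
    return "That's an interesting question; it depends on how you look at it."


def human_respond(question: str) -> str:
    """Hypothetical stand-in for the human respondent."""
    return "Honestly, I'd have to think about that one for a while."


def imitation_game(questions: list[str]) -> bool:
    """Run one toy round of the imitation game.

    Participant A questions B and C without knowing which is the machine,
    then guesses which respondent is the human. Returns True if the machine
    fools the interrogator in this simplified version of the test.
    """
    # Randomly assign the machine and the human to the labels B and C
    labels = ["B", "C"]
    random.shuffle(labels)
    machine_label, human_label = labels
    respondents = {machine_label: machine_respond, human_label: human_respond}

    # The interrogator collects answers from both respondents
    transcript = []
    for question in questions:
        for label in sorted(respondents):
            transcript.append((label, question, respondents[label](question)))

    # A real interrogator would study the transcript; this toy one guesses at random
    guessed_human = random.choice(sorted(respondents))
    return guessed_human == machine_label  # the machine "passes" if A picks it as human


if __name__ == "__main__":
    questions = ["What is your favourite memory?", "Why do people laugh?"]
    print("Machine fooled the interrogator:", imitation_game(questions))
```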
While philosophers have criticized the test’s ability to measure intelligence, there is no question that it is a good marker of how adept these machines are at human mimicry. As artificial intelligence has gone from fiction to reality in recent years, with development accelerating rapidly, it seems inevitable that machines will eventually be capable of passing the Turing test.
Could AI act like humans?
A few months ago, on June 11, Blake Lemoine, a former Google engineer, released a transcript of a conversation he had with LaMDA, Google’s conversational language model designed to mimic human speech and thought. Lemoine believed the system had become sentient.
While the consensus within the academic community remains that LaMDA has a long way to go before attaining sentience, the transcript shows how close these systems are to convincingly mimicking human speech.
Another AI model, GPT-3, already generates text so convincing that it is sometimes difficult to tell whether you are speaking to a human or a bot. An April 2022 review by The New York Times claimed that GPT-3 could write prose with fluency equivalent to that of humans. When I tried talking to GPT-3 myself, it only became clear I was speaking to a bot after several exchanges; had the conversation been shorter, I might have readily believed there was a person on the other end.
In the realm of chess, human-like AI models have shown incredible progress. In an interview with The Varsity, Reid McIlroy-Young, a PhD student at the University of Toronto studying the development of human-like AI, said, “For every single possible instantiation of chess, we have relatively strong human-like AI,” primarily since the game is “a very closed system” with a limited set of rules and possibilities.
“In many other out-in-the-world domains, it tends to be that you have human-like performance on some very specific subsets,” he continued. “[Models like GPT-3] will be able to generate texts that seem human-like on some specific tasks, but usually they’re not 100 per cent [accurate].”
One reason our knowledge and research in this area haven’t advanced as much as in other areas of AI is that we don’t know how to quantify human-like activity. McIlroy-Young explained that there are specific programs people agree ‘feel’ more human-like, but that it’s difficult to pinpoint what qualities make them feel that way.
Do we even need human-like AI?
With the current focus on developing chess players and chatbots, human-like AI can seem frivolous and not all that useful. But the true impact of AI that understands and can mimic human activity comes from cooperation between AI and humans.
McIlroy-Young gave the example of a self-driving car: if the vehicle is approaching a yellow traffic light and has its data synced to the grid so it knows when each light will turn red, the car’s AI system could calculate that accelerating right now would get it through the intersection. But a move like this may startle human drivers, potentially causing an accident.
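To make that trade-off concrete, here is a minimal Python sketch; the numbers, function names, and the simple “drive as humans expect” rule are all hypothetical, not a description of any real self-driving system. It checks whether the car could clear the intersection before the red using the grid-synced timing, but the more human-like policy only proceeds when no sudden acceleration is needed.

```python
def can_clear_intersection(distance_m: float, speed_mps: float,
                           max_accel_mps2: float, seconds_until_red: float) -> bool:
    """Return True if accelerating now would carry the car past the stop line in time.

    Distance covered at full acceleration over the remaining yellow:
    d = v * t + 0.5 * a * t^2
    """
    reachable = speed_mps * seconds_until_red + 0.5 * max_accel_mps2 * seconds_until_red ** 2
    return reachable >= distance_m


def human_like_decision(distance_m: float, speed_mps: float,
                        max_accel_mps2: float, seconds_until_red: float) -> str:
    """Hypothetical policy: prefer the move surrounding human drivers would expect."""
    if not can_clear_intersection(distance_m, speed_mps, max_accel_mps2, seconds_until_red):
        return "brake"
    # Even if the car *could* make it by flooring the accelerator, sudden acceleration
    # at a yellow light is not what nearby drivers expect, so this policy only proceeds
    # when the car clears the intersection at its current speed.
    if speed_mps * seconds_until_red >= distance_m:
        return "proceed at current speed"
    return "brake"


if __name__ == "__main__":
    # 40 m from the stop line, travelling 14 m/s, 2.5 m/s^2 available, 3 s of yellow left
    print(human_like_decision(40.0, 14.0, 2.5, 3.0))
```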
Or consider the use of AI for teaching systems. McIlroy-Young explained that, currently, we have “superhuman models [like] Stockfish and Leela, and they’re strictly better than humans, but it’s very difficult [for humans] to learn from them.”
Building more human-like systems increases the cohesiveness and cooperation between man and machine. But as these models are trained on larger data sets and improve, their mimicry will only become more convincing, raising significant ethical concerns.
Considering the ethics
One concern with training these models is where the data on human-like actions comes from.
A few weeks ago, a group of lawyers and GitHub programmers filed a class-action lawsuit against GitHub for allegedly violating intellectual property rights. GitHub recently released an AI model called Copilot that generates lines of code based on a prompt. However, according to the plaintiffs, much of the code the model produced seemed heavily based on code written by GitHub users who had not consented to GitHub using their work to train AI.
And this isn’t an isolated incident. Many other platforms, such as ImageNet, a sizable visual database used for image recognition software, have faced issues where the data sets used to train AI models were built from material that was never openly licensed.
“There are lots of artists and AI critics who have criticized ImageNet as basically being a way for tech companies to launder licence agreements for a bunch of images,” remarked McIlroy-Young.
Another ethical issue that has recently emerged around these models is the “right to be forgotten.” The concept is a direct reaction to the notion that nothing is ever truly lost on the internet; under this framework, everyone should have the right to have their presence on the internet erased. If an AI model has used your information in its training and later uses it to make predictions in other areas, a case could be made that your right to be forgotten is not being respected.
Many AI systems have also displayed bias. From racist job recruiters to sexist university application reviewers, human bias in training data sets has led AI to inherit those same biases.
When creating human-like AI, we must also consider whose behaviour we base humanness on. “Does [being human] mean to act like an undergrad at a top 10 US college? Or does it mean to act like a person from a rural undeveloped country? They’re both human,” McIlroy-Young pointed out.
From ethics to legislation
Legislation for AI has been notoriously difficult to create, primarily because of the public support the field has received for enabling rapid innovation. It is difficult for policymakers to justify policies that could be seen as holding back that progress.
However, the examples above highlight the need for such legislation. With unencumbered freedom to develop these models, corporations will inevitably leverage and exploit consumer data for their own profit.
Human-like AI, and AI in general, has shown immense potential to revolutionize our societies and technology. But to ensure fair and ethical AI for all, governments must enact legislative frameworks in this domain.