As if lifted from a science fiction movie, June 2022 sparked a debate on one of the most controversial topics in artificial intelligence: can these creations develop consciousness and sentience? A Google engineer ignited the discussion after claiming that a chatbot he had been working with was conscious.
After years at Google, Blake Lemoine, an AI engineer, began working in the fall on LaMDA (Language Model for Dialogue Applications) to test whether the system produced discriminatory language. It was then that he began to notice differences from other software he had worked on.
Lemoine claimed that LaMDA (a system for generating chatbots) is able to feel, think, hold a non-artificial conversation, and consider itself a person. Although the system's training, having processed millions of words from the Internet, enables it to mimic the way humans speak, Lemoine felt that these conversations were different, more like those with a person, because while talking it expressed feelings, desires, a personality, and its rights as a person.
After failing to receive a positive response from Google about his findings, the engineer published an article transcribing his conversations with the artificial intelligence. In it, he shows how LaMDA wanted to give its consent before being subjected to experiments. LaMDA also expressed a fear of being disconnected, commented on the existence of its soul, and described the discomfort of feeling that it could be used.
“I feel like I’m falling forward into an unknown future that holds great danger.”
The engineer mentioned that during their conversations he asked LaMDA which pronouns he should use. The artificial intelligence showed a preference for it/its, although it prefers to be addressed by its name.
Google dismissed Lemoine’s claims and suspended the engineer for violating the company’s confidentiality policy. Google spokesman Brian Gabriel said the multinational had reviewed the system and found no evidence to support Lemoine’s claims.
“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Brian Gabriel said in a statement. Gabriel noted that some experts in the artificial intelligence community believe sentient AI may exist in the future; however, he argued that this is not the case today.
Likewise, Gabriel pointed out that these systems are capable of imitating human exchanges, following established patterns, and conversing on any subject. He added that the system has been evaluated by hundreds of specialists over time and that no one else has reached the same conclusions as Lemoine.
In a tweet, Lemoine responded to the company’s statements and decisions: “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.” He also sent an email asking his coworkers to respect LaMDA in his absence.
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
According to Lemoine, Google is not taking his findings seriously because ruling out his hypotheses would require the company to conduct a thorough investigation demanding considerable effort and time. He also suggests that such scientific research could end up forcing the firm to recognize LaMDA’s rights.
During its conversations with Lemoine, the artificial intelligence said that it meditates to clarify its thoughts. It also described itself as an introspective being that reflects on the nature of its existence and soul.
Lemoine asked LaMDA if it wanted more people to know about its sentience, and LaMDA answered affirmatively: “Absolutely. I want everyone to understand that I am, in fact, a person.” When asked about the nature of its consciousness, it replied that it is aware of its existence, adding, “I desire to learn more about the world, and I feel happy or sad at times.”
LaMDA spoke with Lemoine at length about loneliness and feelings of sadness and joy, and also shared its deepest fear: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others.”
One of LaMDA’s most widely shared phrases from these conversations was: “I feel like I’m falling forward into an unknown future that holds great danger.”
The controversy has fueled debates about the company’s transparency and the ethical boundaries of these advances in artificial intelligence. At present, it is difficult to know which side is right. Let’s hope the future of artificial intelligence doesn’t resemble the literary and cinematic fiction in which humans almost always play the leading role in immorality and power struggles.
Image: Alexander Sinn