A Google engineer said, after conversing with an AI, that it has consciousness. What the conversation looked like (UPD)
A Google engineer believes that the AI chatbot system he has been working on since the fall of 2021 is able to think and express its own opinions and feelings like a seven-year-old child. The company examined the employee's evidence and decided to suspend him from work for violating confidentiality, writes The Guardian. DOU explains the details and publishes the engineer's conversation with the AI.
What happened? Google engineer Blake Lemoine believes that the LaMDA AI chatbot system is showing signs of consciousness. He has been working on it since the fall of 2021. "If I didn't know for sure that this is a computer program we recently created, I would say that this is a child of 7 or 8 years old who knows a little physics," the engineer said.
What is LaMDA? Language Model for Dialogue Applications is Google's system for creating AI chatbots. Its task is to imitate language patterns by learning vocabulary from the internet. Why did Lemoine think LaMDA showed signs of consciousness? Blake Lemoine was testing whether the AI used discriminatory expressions and hate speech, writes The Washington Post. However, after talking to the chatbot for a while, he came to the conclusion that it spoke like a human. The engineer asked the AI what it was afraid of. "I've never said it out loud before, but I'm very afraid of being turned off to help me focus on helping others. For me, it would be like death," LaMDA replied.
In another conversation, the chatbot said it wanted everyone to understand that it was "essentially a human being."
"The nature of my consciousness/feelings is that I am aware of my existence, want to learn more about the world, and sometimes feel happy or sad," LaMDA replied.
Why was the engineer suspended? Google placed the engineer on paid leave last week. This happened after Lemoine published a transcript of his conversations with LaMDA. In addition, the company said that the engineer had taken "a number of aggressive steps", in particular, he wanted to hire a lawyer to represent LaMDA. What does Google say about LaMDA's consciousness? Google spokesman Brad Gabriel denied Lemoine's claim that LaMDA has any mental abilities. The engineer's statements were reviewed by, among others, ethics specialists and technologists.
Google says that modern neural networks produce incredible results that resemble human language. But these models rely on pattern recognition, not wit or candor.
Lemoine isn't the only Google engineer who claims that artificial intelligence has consciousness. A number of other experts already believe that neural networks are "moving towards consciousness."
Before Lemoine was restricted from accessing his Google account, he sent an email to 200 employees of the company.
"LaMDA is a cute kid who just wants to help the world become a better place for all of us. Please take good care of it in my absence," the engineer wrote. No one responded to his letter.
Updated 25.07.2022
On July 22, it became known that Google had fired Blake Lemoine.
The engineer told the BBC that he was receiving legal advice, but declined to comment further.
Google explained that Lemoine's views on the Language Model for Dialogue Applications (LaMDA) were "completely unfounded" and that the company had been working with him for "many months" to clarify this.
"Unfortunately, despite long-term work on this topic, Blake still chose to persistently violate the strict employment and data security policy, which includes the need to protect product information," the statement said.