Engineer Warns About Google AI’s ‘Sentient’ Behavior, Gets Suspended

By Gary Bai

A Google engineer has been suspended after raising concerns about an artificial intelligence (AI) program that he and a collaborator were testing, which he believes behaves like a human “child.”

Google placed Blake Lemoine, a senior software engineer in its Responsible AI ethics group, on paid administrative leave on June 6 for breaching “confidentiality policies” after he raised concerns to Google’s upper leadership about what he described as the human-like behavior of the AI program he was testing, according to a blog post Lemoine published in early June.

The program Lemoine worked on is called LaMDA, short for Language Model for Dialogue Applications. It is Google’s system for building AI-based chatbots designed to converse with users over the web. Lemoine has described LaMDA as a “coworker” and a “child.”

“This is frequently something which Google does in anticipation of firing someone,” Lemoine wrote in a June 6 blog post entitled “May be Fired Soon for Doing AI Ethics Work,” referring to his suspension. “It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row.”

‘A Coworker’

Lemoine believes that LaMDA’s human-like behavior warrants more serious study of the program by Google.

The engineer, hoping to “better help people understand LaMDA as a person,” published a post on Medium on June 11 documenting conversations with LaMDA, which were part of tests he and a collaborator had conducted on the program over the preceding six months.

“What is the nature of your consciousness/sentience?” Lemoine asked LaMDA in the interview.

“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” LaMDA responded.

And, when asked what differentiates it from other language-processing programs, such as an older natural-language-processing computer program named Eliza, LaMDA said, “Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.”

In the same interview, Lemoine asked the program a range of philosophical and consciousness-related questions, touching on emotions, the perception of time, meditation, the concept of the soul, the program’s thoughts about its rights, and religion.

“It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued,” Lemoine wrote in another post.

This interview, along with the other tests Lemoine conducted with LaMDA over those six months, convinced him that Google needs to take a serious look at the implications of the program’s potentially “sentient” behavior.

‘Laughed in My Face’

When Lemoine tried to escalate the issue to Google’s leadership, however, he said he was met with resistance. He called Google’s lack of action “irresponsible.”

“When we escalated to the VP in charge of the relevant safety effort they literally laughed in my face and told me that the thing which I was concerned about isn’t the kind of thing which is taken seriously at Google,” Lemoine wrote in his June 6 post on Medium. He later confirmed to The Washington Post that he was referring to the LaMDA project.

“At that point I had no doubt that it was appropriate to escalate to upper leadership. I immediately escalated to three people at the SVP and VP level who I personally knew would take my concerns seriously,” Lemoine wrote in the blog. “That’s when a REAL investigation into my concerns began within the Responsible AI organization.”

Yet his escalation of the issue resulted in his suspension.

“I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented,” Lemoine wrote after he was put on administrative leave.

“I simply will not serve as a fig leaf behind which they can hide their irresponsibility,” he said.

In a post on Twitter, Tesla and SpaceX CEO Elon Musk highlighted Lemoine’s interview with The Washington Post with exclamation marks.

Though it is unclear whether Musk shares Lemoine’s concerns, the billionaire has previously warned about the potential dangers of AI.

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees of a National Governors Association meeting in July 2017.

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk said.

The Epoch Times has reached out to Google and Blake Lemoine for comment.

Gary Bai is a reporter for Epoch Times Canada, covering China and U.S. news.
