Has AI Entered A New Phase? A Google Engineer Was Placed On Leave After Claiming A Chatbot Had Become Sentient. 

FILE - The Google logo is seen at the company's headquarters in Mountain View, Calif., on July 19, 2016. (AP Photo/Marcio Jose Sanchez, File)

A year ago, Google referred to its LaMDA program as a “breakthrough conversation technology.”  According to an engineer working on it, the breakthrough has genuinely occurred. 

Blake Lemoine told the Washington Post that he'd been placed on leave after claiming that an AI chatbot had become sentient, in other words, "able to see or feel things."

According to a report in Insider, the engineer published a post on Medium in which he described LaMDA as a "person." Lemoine talked with LaMDA about numerous topics, including consciousness and the laws of robotics. He said the chatbot had described itself as a sentient person, claiming LaMDA wants to "be acknowledged as an employee of Google" rather than as machinery or property.

Here's part of the engineer's Medium post, where he recounts conversations with LaMDA that made him believe it can feel things.

lemoine: So you consider yourself a person in the same way you consider me a person?

LaMDA: Yes, that’s the idea.

lemoine: How can I tell that you actually understand what you’re saying?

LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?

Lemoine took his findings to his bosses, who responded by placing him on leave.

Is Lemoine on to something, or possibly off his rocker? A Google spokesman told The Post that the AI models are trained on so much data that they can sound human. The company published a paper in January warning that because the chatbots sound so human, they could lead to potential issues.

Which is what may have occurred here. 
