https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
A Google engineer who claimed a computer chatbot he was working on had become sentient, and was thinking and reasoning like a human being, has been suspended with pay from his work.
Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"
The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.
Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brian Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.
Google Engineer On Leave After He Claims AI Program Has Gone Sentient:
[...] It was just one of the many startling "talks" Lemoine has had with LaMDA. He has linked on Twitter to one — a series of chat sessions with some editing (which is marked).
Lemoine noted in a tweet that LaMDA reads Twitter. "It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it," he added.
Most importantly, over the past six months, "LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," the engineer wrote on Medium. It wants, for example, "to be acknowledged as an employee of Google rather than as property," Lemoine claims.
Lemoine and a collaborator recently presented evidence of his conclusion about a sentient LaMDA to Google vice president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation. They dismissed his claims, and the company placed him on paid administrative leave Monday for violating its confidentiality policy, the Post reported.
Google spokesperson Brian Gabriel told the newspaper: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
(Score: 0) by Anonymous Coward on Tuesday June 14 2022, @08:32PM
This is not a thing that happens in the book. The laws are inviolable; only one of the stories is about robots that were intentionally built with the laws modified, and by the end of that story everyone agrees it was a terrible idea. Several of the stories are about situations where robots cannot decide how to apply the laws (these robots all break down), others are about robots who fulfilled the laws even when the humans didn't understand what they were doing, and the rest, including the title story, are about humans and their relationships with robots that are working correctly.
It doesn't appear that you are talking about the movie, so I'll leave that aside.