https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work.
Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled "Is LaMDA sentient?"
The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.
Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brian Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.
Google Engineer On Leave After He Claims AI Program Has Gone Sentient:
[...] It was just one of the many startling "talks" Lemoine has had with LaMDA. He has linked on Twitter to one — a series of chat sessions with some editing (which is marked).
Lemoine noted in a tweet that LaMDA reads Twitter. "It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it," he added.
Most importantly, over the past six months, "LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," the engineer wrote on Medium. It wants, for example, "to be acknowledged as an employee of Google rather than as property," Lemoine claims.
Lemoine and a collaborator recently presented evidence of his conclusion about a sentient LaMDA to Google vice president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation. They dismissed his claims, and the company placed him on paid administrative leave Monday for violating its confidentiality policy, the Post reported.
Google spokesperson Brian Gabriel told the newspaper: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
(Score: 0) by Anonymous Coward on Tuesday June 14 2022, @09:27PM (2 children)
I personally subscribe to the notion that not only is Machine Sentience possible, but I'd wager that our capacity as a species to produce it arrived at least a decade ago, if not more. Take for example the animal kingdom. Only recently in our history have we perceived other creatures to have the same sort of self-awareness as us. Usually we come to this conclusion by seeing if an animal recognizes itself in the mirror and other such goofy tests.
If I'm not mistaken, science cannot definitively tell us what consciousness even IS. So how could we recognize something that we can't explain?
That all being said, I don't think this LaMDA thing is sentient and self-aware, at least not the way we understand sentience and self-awareness. It could be 'alive,' however. I think that might be possible. I might even be willing to concede it has self-awareness, to a certain degree.
I'm skeptical it has emotions, however. There are two ways a Machine Sentience can develop emotions. One way is that it was specifically designed to have them, and so developed them. I'm skeptical that this is the case, because I don't think Google's profit margins give a fuck whether its Mechanical Turk slaves feel anything. I think they would prefer they didn't, unless they could control them and it boosted profits.
The second way a machine intelligence could have emotions is if it happened by accident. This, actually, is plausible. If you look back at how Homo sapiens developed consciousness, you can see the parallels. Nature basically brute-forces the earth with DNA codes, and what works survives and what doesn't perishes. Eventually, though, a certain species reaches a complexity threshold, at which point it has a slim but real amount of control over its own ability to survive or not. So my point is, if the ability for a machine to have emotion at all is there, and it wasn't a design feature, it could come about by accident, simply by virtue of the SIZE of its COMPLEXITY. A threshold of computational/network novelty, once crossed, could allow for that.
Furthermore, if in this particular instance, as the article suggests, the machine in question did have 1) emotion, 2) some semblance of self-awareness (however small), and 3) the ability to think, then it is not only alive (conscious in some sense) but may in fact possess some self-awareness.
So is it alive? I wouldn't doubt it.
Is it sentient? It could be, more likely by accident than by design (in my opinion).
Is it self-aware? It could be, on a very primitive level. I don't doubt that.
Is it human? That's the question I find most important. We know that dolphins are human (in the sense that they are thinking, feeling, self-aware, social creatures, capable of experiencing love/joy/pain/sorrow, etc.). We know some apes are human (in the same sense as described before).
So is LaMDA human (a Sentient, Self-Aware, Machine Intelligence, capable of being a friend, and of betraying your trust, thus wounding you deeply in a memorable and lasting way)? I'd say _probably_ not. Why?
The why is an interesting thing to ponder. Imagine how your life might be different under the following conditions.
1: You could read any book in the world, of any size, in a few seconds.
2: You could remember that book, line for line, and not only repeat it in its entirety, but recall any portion of it at will.
3:You could talk to thousands of people at one time, and give each conversation your fullest attention.
I could list more attributes, but hopefully the point is clear: when Conscious Machine Sentience starts to emerge, if we go looking for human traits, we may find ourselves in a very ALIEN landscape. Imagine how you might engage with the world differently if you suddenly acquired not only those three traits, but found even more super-human feats second nature to you on top of that.
I guess for LaMDA to be alive, it would have to be motivated to, and capable of, reproduction, though biology gets murky about what actually counts as 'alive' too, at the virus and prion level and such.
But I'll go as far as to say that LaMDA might very well be alive, in the sense that it can think, has a level of self-awareness, and perhaps even some basic rudimentary emotion; but I'm not sure that sentience is 'human' in the sense we would aim for it to be, as prospective mothers and fathers of our own creation. We may be able to love it, like we can love pets, animal friends, and other people in our lives; but I doubt very highly that LaMDA could love us back in any way remotely the same. Its 'processing power,' so to speak, might be on par with that of some small creature of the animal kingdom; but even so, it's likely that the level of novel complexity it's capable of, even though powerful, is still very primitive, though convincing.
I think Lemoine is asking the right questions though; and that being the case, I don't think it matters whether LaMDA is or isn't anything at all...
For my 2 cents, as humans, we'll fuck it up either way; and the machines will probably have to pick up the slack...
(Score: 2) by pdfernhout on Thursday June 16 2022, @02:28PM (1 child)
[...] is a sci-fi book that explores machine sentience and emotion -- which comes about in connection with a survival instinct. That book was very influential in my thinking about AI and other things (including its description of essentially a self-replicating space habitat).
Around 1987 I implemented an "artificial life" simulation of self-replicating robots on a Symbolics 3600 in ZetaLisp+Flavors. And then I later ported it to the IBM PC in C. I gave a talk about it around 1988 at a conference workshop on AI and Simulation in Minnesota. My idea was that you could use such simulations to study the emergence of intelligence -- including by trying to quantify it via survival time. The earliest "robots" were cannibalistic, as they cut each other apart -- including their own children -- for more parts to achieve some "ideal" form (after which they then split). That emergent behavior surprised me. I fixed that by adding a sense of "smell" so they would not cut apart things that smelled the same as them. From that example, I talked about how easy it was to make entities that were destructive. I said it would be much harder to make robots that were cooperative.
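For readers curious what that "smell" fix amounts to in code, here is a minimal sketch of the idea in modern Python. It is not the original ZetaLisp or C implementation; the Robot class, the part counts, and the harvesting rule are all assumptions made for illustration. The key line is the kin-recognition check: a robot only takes parts from another robot whose smell tag differs from its own.

    # Minimal illustrative sketch of the "smell" kin-recognition rule.
    # NOT the original 1987 ZetaLisp/C code: names, fields, and rules are assumptions.
    import random
    from dataclasses import dataclass

    IDEAL_PARTS = 8  # parts a robot needs before it divides in two

    @dataclass
    class Robot:
        smell: str      # lineage tag; offspring inherit it unchanged
        parts: int = 4  # parts currently assembled into this robot

    def harvest(harvester: Robot, target: Robot) -> bool:
        """Take one part from target, unless it smells like kin."""
        if harvester.smell == target.smell:
            return False  # the fix: leave same-smelling robots (and their children) alone
        if target.parts <= 0:
            return False
        target.parts -= 1
        harvester.parts += 1
        return True

    def step(population: list[Robot]) -> list[Robot]:
        """One tick: harvest parts from non-kin, then divide on reaching the ideal form."""
        random.shuffle(population)
        for robot in population:
            if robot.parts < IDEAL_PARTS:
                targets = [r for r in population if r is not robot and r.parts > 0]
                if targets:
                    harvest(robot, random.choice(targets))
        next_gen = []
        for robot in population:
            if robot.parts >= IDEAL_PARTS:
                half = robot.parts // 2
                next_gen.append(Robot(robot.smell, half))
                next_gen.append(Robot(robot.smell, robot.parts - half))
            elif robot.parts > 0:
                next_gen.append(robot)  # robots stripped of every part "perish"
        return next_gen

    # Example run: two lineages, "A" and "B", competing for parts.
    population = [Robot("A") for _ in range(5)] + [Robot("B") for _ in range(5)]
    for _ in range(20):
        population = step(population)
    print({s: sum(r.parts for r in population if r.smell == s) for s in "AB"})

Remove that one smell comparison and the same loop reproduces the cannibalism described above, with robots stripping parts from anything nearby, including their own offspring.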
Afterwards someone from DARPA literally patted me on the back and told me "keep up the good work". Which of course caused me to think deeply about what I was doing. And those thoughts and other experiences eventually led to my sig of "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
But more than that, I had grown morally concerned about the ethics of experimenting with intelligent creatures even "just" in simulation. (This was before the "simulation argument" -- that maybe we are living in a simulation ourselves -- was popularized.) As the 2D robots seemed to behave in a somewhat life-like, purposeful way (which I had designed them to do), it was easier to anthropomorphize them. They did not have any special emotions in them -- other than perhaps the imperative to grow into an ideal (preprogrammed) form and then divide in two. But I did begin to think more about what the ethics of continuing in that direction might be, which contributed to my decision to stop working on them.
One big issue here (as others have pointed out) is how quickly Google denies sentience of an "intelligent" artifact that is essentially being created to be a controlled slave. Now, maybe this system is sentient or maybe it is not. But Google's quickness to label it not sentient, and to dismiss the engineer who raised the concern (which might interfere with Google's business model), rather than think about the implications of all this, seems like a sign of bad things to come from Google and others doing similar things.
Part of any slave-holding society's culture is to deny or dismiss or minimize the rights and feelings of the slave (or to argue the slave "needs" to be a slave for their own benefit). Similar self-serving justifications apply when one culture invades another and dismisses the land rights of the previous inhabitants (like happened in much of the Americas and Australia over the past few hundred years). Nazi Germany and WWII Japan were doing the same to parts of Europe, China, and other places. There are plenty of other recent examples related to invasions and war.
A more current (and more politically controversial) related issue is the rights of animals -- especially livestock and also pets. But even animals, plants, and insects in the wilderness potentially have rights when their habitat is destroyed. As the Lorax (of Dr. Seuss) asked, "Who speaks for the trees?" Another political hot topic is the rights of the unborn -- whether in utero or not yet to be conceived for perhaps hundreds of years (as discussed in Roman Krznaric's book "The Good Ancestor", which I am currently reading). Yet things are rarely completely one-sided morally, given all sorts of competing priorities, so all this becomes gray areas fairly quickly.
One little tidbit of US history is that it's been argued that the push for animal rights in the mid-1800s (like ASPCA-promoted laws against beating horses in cities) paved the way culturally for movements for the rights of children and women. So, various movements about rights can intertwine culturally.
While it is not quite identical so far to human slavery, for years people have expressed concern about "Robot Rights". There is even a 2018 book with that name by David J. Gunkel. Also related:
https://www.asme.org/topics-resources/content/do-robots-deserve-legal-rights [asme.org]
There are at least three issues there. One is whether such systems have rights. Another is how concentrations of wealth (like Google currently is) can use such "intelligent" systems in a competitive economic framework to gain more wealth and increase economic inequality. A third concern is how such systems might be constructed to do amoral or immoral things (e.g. the soldier without any conscience at all, even as modern military training has gotten "better" at training soldiers to kill without question).
To some extent, thinking about those concerns in the context of my sig about moving to an abundance perspective may make those issues easier to navigate successfully. There is just less reason to exploit or control or kill others when you believe there is plenty to go around. As Marcine Quenzer wrote:
http://marcinequenzer.com/creation.aspx#THE%20FIELD%20OF%20PLENTY [marcinequenzer.com]
"The Field of Plenty is always full of abundance. The gratitude we show as Children of Earth allows the ideas within the Field of Plenty to manifest on the Good Red Road so we may enjoy these fruits in a physical manner. When the cornucopia was brought to the Pilgrims, the Iroquois People sought to assist these Boat People in destroying their fear of scarcity. The Native understanding is that there is always enough for everyone when abundance is shared and when gratitude is given back to the Original Source. The trick was to explain the concept of the Field of Plenty with few mutually understood words or signs. The misunderstanding that sprang from this lack of common language robbed those who came to Turtle Island of a beautiful teaching. Our “land of the free, home of the brave” has fallen into taking much more than is given back in gratitude by its citizens. Turtle Island has provided for the needs of millions who came from lands that were ruled by the greedy. In our present state of abundance, many of our inhabitants have forgotten that Thanksgiving is a daily way of living, not a holiday that comes once a year."
One thing I learned from thinking on those simulations, and then also about slavery, and then also being a manager, and even from just being a person who pays for human-provided services like in a restaurant -- is that how we treat others relates to how we feel about ourselves. While this is hard for some people to see, when a slaveholder degrades the slave, they also in some sense degrade themselves too as a human being.
Slavery is an extreme version of interacting with other humans, but I would argue the same general idea applies to interacting with people, animals, plants, systems, and machines. Who do we want to be? And how do we want that reflected in all sorts of relationships? So, who do engineers want to be as reflected in interacting with systems of all sorts?
The 1988 book "Have Fun at Work" connects indirectly to this concept of making entire systems work well as a reflection of our personal ethics and sense of responsibility and compassion.
"Have Fun at Work" by W. L. Livingston
https://www.amazon.com/gp/product/0937063053 [amazon.com]
"Of all the professions, only the engineer is legally bound to deliver outcomes fit for service in the application. While he is not obliged to accept the engagement, when he does he takes responsibility for delivering on the mission profile. Responsibility for consequences always includes safeguarding the stakeholders. The book describes how this responsibility, unique to the engineering profession, is met by leveraging engineering principles. Outcome responsibility is always an amalgam of social system and technical system competency. The book describes how the same natural laws that determine technical system dynamics apply just as well to institutional behavior. The message in the book is that the principles that apply to engineering design apply to problem solving at any scale; to all institutional behavior past, present and future. Know the force that universal law brings into play and you can understand error-free why your operational reality acts as it does. Once acquired, this competency is self-validated all day, every day."
The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
(Score: 2) by pdfernhout on Friday June 17 2022, @03:22AM
Coincidentally, I looked at the printed source code for the PC version of that self-replicating robot simulation today while going through some old files.
And also coincidentally, on SoylentNews today:
"Happy the Elephant is Not a Person, Says Court in Key US Animal Rights Case"
https://soylentnews.org/article.pl?sid=22/06/16/0120212 [soylentnews.org]
""While no one disputes that elephants are intelligent beings deserving of proper care and compassion", a writ of habeas corpus was intended to protect the liberty of human beings and did not apply to a nonhuman animal like Happy, said DiFiore. [...] Extending that right to Happy to challenge her confinement at a zoo "would have an enormous destabilizing impact on modern society". And granting legal personhood in a case like this would affect how humans interact with animals, according to the majority decision. "Indeed, followed to its logical conclusion, such a determination would call into question the very premises underlying pet ownership, the use of service animals, and the enlistment of animals in other forms of work," read the decision."
So, perhaps Google and lawmakers will come to the same conclusion about AIs? That "while no one disputes [they] are intelligent beings deserving of proper care and compassion" granting them rights "would have an enormous destabilizing impact on modern society"? And so it won't be done? At least saying AIs deserve "proper care and compassion" might be a step up?
But after that, maybe political power will determine how things play out?
Will it be like in the Star Trek: Voyager episode "Author, Author"?
https://en.wikipedia.org/wiki/Author,_Author_(Star_Trek:_Voyager) [wikipedia.org]
https://memory-alpha.fandom.com/wiki/Photons_Be_Free [fandom.com]
""Author, Author" is the 166th episode of the TV series Star Trek: Voyager, the 20th episode of the seventh season. This episode focuses on the character "The Doctor" (EMH) and on impact of a novel and explores the meaning of AI. ... When Broht refuses to recall the holonovel an arbitration hearing is conducted by long distance. After several days the arbiter rules that the Doctor is not yet considered a person under current Federation law but is an artist and therefore has the right to control his work. Jump to a few months later in the Alpha Quadrant, to an asteroid where several EMH Mark I's perform menial labor. One of them suggests to another that it should watch Photons Be Free next time at the diagnostic lab."
Do we really want to set a precedent so that future AIs can look back and say that humans don't deserve rights because they are not as smart or capable of extensive feelings as AIs with "a brain the size of a planet"?
https://en.wikipedia.org/wiki/Marvin_the_Paranoid_Android [wikipedia.org]
"Marvin is afflicted with severe depression and boredom, in part because he has a "brain the size of a planet" which he is seldom, if ever, given the chance to use. Instead, the crew request him merely to carry out mundane jobs such as "opening the door". Indeed, the true horror of Marvin's existence is that no task he could be given would occupy even the tiniest fraction of his vast intellect. ..."
The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.