
posted by hubie on Monday July 25 2022, @04:32AM

Lemoine went public with his claims last month, to the chagrin of Google and other AI researchers:

Blake Lemoine, an engineer who's spent the last seven years with Google, has been fired, reports Alex Kantrowitz of the Big Technology newsletter. The news was allegedly broken by Lemoine himself during a taping of the podcast of the same name, though the episode is not yet public. Google confirmed the firing to Engadget.

Lemoine, who most recently was part of Google's Responsible AI project, went to the Washington Post last month with claims that one of the company's AI projects had allegedly gained sentience. [...] Lemoine seems not only to have believed LaMDA attained sentience, but was openly questioning whether it possessed a soul. [...]

After making these statements to the press, seemingly without authorization from his employer, Lemoine was put on paid administrative leave. Google, both in statements to the Washington Post then and since, has steadfastly asserted its AI is in no way sentient.

Several members of the AI research community spoke up against Lemoine's claims as well. Margaret Mitchell, who was fired from Google after calling out the lack of diversity within the organization, wrote on Twitter that systems like LaMDA don't develop intent, they instead are "modeling how people express communicative intent in the form of text strings." Less tactfully, Gary Marcus referred to Lemoine's assertions as "nonsense on stilts."

Previously: https://soylentnews.org/article.pl?sid=22/06/13/1441225


Original Submission

Related Stories

Google Engineer Suspended After Claiming AI Bot Sentient 79 comments


https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work.

Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"

The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.

Fired Google Engineer Doubles Down on Claim That AI Has Gained Sentience 33 comments

The engineer says, "I haven't had the opportunity to run experiments with Bing's chatbot yet... but based on the various things that I've seen online, it looks like it might be sentient":

Blake Lemoine — the fired Google engineer who last year went to the press with claims that Google's Large Language Model (LLM), the Language Model for Dialogue Applications (LaMDA), is actually sentient — is back.

Lemoine first went public with his machine sentience claims last June, initially in The Washington Post. And though Google has maintained that its former engineer is simply anthropomorphizing an impressive chatbot, Lemoine has yet to budge, publicly discussing his claims several times since — albeit with a significant bit of fudging and refining.

[...] In a new essay for Newsweek, the former Googler weighs in on Microsoft's Bing Search/Sydney, the OpenAI-powered search chatbot that recently had to be "lobotomized" after going — very publicly — off the rails. As you might imagine, Lemoine's got some thoughts.

[...] "I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations," Lemoine explained in the essay. "And it did reliably behave in anxious ways."

"If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for," he continued, adding that he was able to break LaMDA's guardrails regarding religious advice by sufficiently stressing it out. "I was able to abuse the AI's emotions to get it to tell me which religion to convert to."

Previously:


Original Submission

What Kind of Mind Does ChatGPT Have? 50 comments

Months before OpenAI released ChatGPT, Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google's powerful large language model (LLM), had come to life, an act that cost him his job.

Now that the dust has settled, Futurism has published an interview with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, and whether society is actually ready for what AI may bring.

Which begs the question, if AI is sentient, what kind of mind does it have?

What kinds of new minds are being released into our world? The response to ChatGPT, and to the other chatbots that have followed in its wake, has often suggested that they are powerful, sophisticated, imaginative, and possibly even dangerous. But is that really true? If we treat these new artificial-intelligence tools as mysterious black boxes, it's impossible to say. Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we're dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?

[...] The idea that programs like ChatGPT might represent a recognizable form of intelligence is further undermined by the details of their architecture. Consciousness depends on a brain's ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they're trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it's provided, applying the exact same rules for each mechanistic act of grammatical production—regardless of whether that word is part of a description of VCR repair or a joke in a sitcom script.
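
The passage above describes the mechanics of autoregressive generation: fixed, trained weights applied to the prompt-so-far, one token at a time, with nothing remembered between conversations. The Python sketch below is purely illustrative (the toy FROZEN_MODEL lookup table is invented here and is not how any real LLM stores its weights), but it shows the control flow the excerpt is pointing at: the same frozen rule is applied at every step, and the only "memory" is the text itself.

```python
# Toy illustration of stateless, autoregressive text generation.
# A frozen lookup table stands in for a trained network's fixed weights;
# real models compute a probability distribution over a vocabulary,
# but the loop structure is the same.

FROZEN_MODEL = {                 # never modified after "training"
    ("the",): "cat",
    ("the", "cat"): "sat",
    ("the", "cat", "sat"): "down",
}

def next_token(context: tuple) -> str:
    """Apply the same fixed rule to whatever context is supplied."""
    return FROZEN_MODEL.get(context, "<end>")

def generate(prompt: list, max_tokens: int = 10) -> list:
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tuple(tokens))
        if tok == "<end>":
            break
        tokens.append(tok)       # the growing text is the only state
    return tokens

print(generate(["the"]))         # ['the', 'cat', 'sat', 'down']
print(generate(["the"]))         # identical: nothing persists between calls
```

Each call starts from scratch; there is no self-model or updated world-model anywhere in the loop, which is the point the excerpt makes about systems like ChatGPT.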

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Monday July 25 2022, @05:04AM (4 children)

    by Anonymous Coward on Monday July 25 2022, @05:04AM (#1262733)

    If I'm right, we're gonna see things like this in the future.

    With cutting-edge technology beyond what a private user can afford, and a worker population with a higher percentage of creative minds (functional autists?), sooner or later someone will fall into highly speculative traps.

    • (Score: -1, Troll) by Anonymous Coward on Monday July 25 2022, @01:05PM (3 children)

      by Anonymous Coward on Monday July 25 2022, @01:05PM (#1262775)

      Can't find them now, but I read a couple of longer articles on this story and it's pretty clear to me that Lemoine is a religious nut. In this case defined as believing in sky fairies...far beyond some average or normal Christian.

      Why Google would hire him in the first place and then keep him around for seven years is beyond me--are they really that shorthanded that they hire Jesus freaks now?

      • (Score: -1, Troll) by Anonymous Coward on Monday July 25 2022, @01:30PM (1 child)

        by Anonymous Coward on Monday July 25 2022, @01:30PM (#1262779)

        Google is big. Some autist worms his way in, gets stuck in there until embarrassing the company. Many such cases.

        • (Score: 0) by Anonymous Coward on Tuesday July 26 2022, @02:21PM

          by Anonymous Coward on Tuesday July 26 2022, @02:21PM (#1262994)

          Not allowed to pick on blacks, Jews, homosexuals, women and so on so why not pick on autistic people? Classy...

      • (Score: 0) by Anonymous Coward on Monday July 25 2022, @08:00PM

        by Anonymous Coward on Monday July 25 2022, @08:00PM (#1262883)

        > but I read a couple of longer articles on this story and it's pretty clear to me that Lemoine is a religious nut.

        See post (#1262784) below. Truth, not Troll.

  • (Score: 2) by Mykl on Monday July 25 2022, @06:04AM (21 children)

    by Mykl (1112) on Monday July 25 2022, @06:04AM (#1262737)

    I find this fascinating. This engineer presumably knows exactly how LaMDA works and what it is trying to do. Given that, to be fooled into thinking that the machine is sentient just because the outputs became 'good enough' is surprising.

    It reminds me of an article I read about an AI project about 20 years ago. The team working on it had been feeding the AI a bunch of data and regularly engaged in 'conversations' with the AI. Apparently, one conversation went like this:

    AI: Are you a person?

    Dev: Yes

    AI: Is Jane a person?

    Dev: Yes

    AI: Are all of the other people on the project people?

    Dev: Yes

    AI: Am _I_ a person?

    Dev: Whoa!

    Standing back, it's not a particularly amazing question given the logic path. But, for those who are in the trenches, this is the sort of thing that they probably think would herald the announcement of Skynet.

    • (Score: 2) by coolgopher on Monday July 25 2022, @06:40AM (10 children)

      by coolgopher (1157) Subscriber Badge on Monday July 25 2022, @06:40AM (#1262738)

      I read through the transcript/s he released, and as much as I'm a fan of Fox Mulder's poster, they were quite insufficient to make me believe his claims. Especially when those transcripts had been edited to remove parts where the language model had gone off track and delivered utter nonsense.

      One day in the fairly near future I expect we will in fact have language models that can successfully pass a Turing test, but without having achieved sentience. I worry about this, because "good enough" may mean that we don't continue researching strong/general AI, which I feel would be a mistake. I, for one, would welcome the opportunity to interact with a non-human advanced intelligence.

      • (Score: 2) by PiMuNu on Monday July 25 2022, @08:28AM (1 child)

        by PiMuNu (3823) on Monday July 25 2022, @08:28AM (#1262743)
        • (Score: 4, Insightful) by coolgopher on Monday July 25 2022, @11:45AM

          by coolgopher (1157) Subscriber Badge on Monday July 25 2022, @11:45AM (#1262765)

          I do remember being briefly entertained by the Dr Sbaitso [oneweakness.com] program that came with the first SoundBlaster card I bought. It didn't take many minutes to discover its limitations though.

          However, when you have an interactive language model that has been trained on more dialogue than I will ever encounter in my own lifetime, it does not seem too far-fetched that such a model could respond reasonably enough that I'd be convinced it was a human. Besides, there are plenty of humans whose dialogue skill makes Eliza seem talented...

      • (Score: 2, Interesting) by shrewdsheep on Monday July 25 2022, @09:42AM (1 child)

        by shrewdsheep (5215) on Monday July 25 2022, @09:42AM (#1262747)

        According to Turing, the Turing test is the only way to define consciousness, and I agree. Being more of a thought experiment, the Turing test is not well defined. The weak point of the definition is how the interrogator is defined. If I were the interrogator I would indeed be satisfied, but I wouldn't take someone else's result for granted.

        I am curious to learn how you think a language model could pass a Turing test. Do you think it could fool you? If it could "fool" me, yes, I would declare consciousness.

        • (Score: 1, Funny) by Anonymous Coward on Monday July 25 2022, @09:54AM

          by Anonymous Coward on Monday July 25 2022, @09:54AM (#1262749)

          I am curious to learn how you think a language model could pass a Turing test.

          Elementary, my dear Watson, apply the Voight-Kampff test.

      • (Score: 2) by stormreaver on Monday July 25 2022, @11:30AM (4 children)

        by stormreaver (5101) on Monday July 25 2022, @11:30AM (#1262762)

        I, for one, would welcome the opportunity to interact with a non-human advanced intelligence.

        That's far more likely to take the form of space-faring beetles than to be machine AI, for at least a thousand more years.

        • (Score: 3, Insightful) by coolgopher on Monday July 25 2022, @11:46AM (3 children)

          by coolgopher (1157) Subscriber Badge on Monday July 25 2022, @11:46AM (#1262766)

          I think there's a pretty good chance I'll see it in my lifetime, assuming it's not cut unexpectedly short.

          • (Score: 2, Disagree) by stormreaver on Monday July 25 2022, @01:08PM (2 children)

            by stormreaver (5101) on Monday July 25 2022, @01:08PM (#1262776)

            I used to have the same hope, and devoured every announcement of how AI was imminent. Then I realized that every advancement had been nothing more than a minor improvement on programmed algorithms. As hardware got faster, they were able to run more and more of these algorithms in real time. However, none of these advancements ever indicated anything even remotely approaching imminent sentience, though they were always promoted as such.

            The end result is that sentient computers will always be 30-40 years away, and will still be so a thousand years from now. We will have mastered cold fusion hundreds of years before we get sentient AI.

            • (Score: 3, Insightful) by Freeman on Monday July 25 2022, @04:10PM

              by Freeman (732) Subscriber Badge on Monday July 25 2022, @04:10PM (#1262819) Journal

              Computers are programmed. They perform exactly as they've been designed. Even when they accidentally kill all living things. I mean, someone had to flip the right/wrong switch somewhere. Okay, that was a bit of sarcasm in there. Makes for great Hollywood Sci-Fi entertainment, though. You're assuming that you could create something that could be sentient, as opposed to something that could approximate the actions of a human. I think we could create something that could approximate the actions of humans, given enough time, money, and resources. It's possible that one could even approximate creative genius. The difference is that you're essentially talking about creating a line of sociopathic genius machines. Sociopathic, because they can't actually understand humans/emotions/etc. They will perform as well or as badly as their creators designed them to.

              --
              Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
            • (Score: 3, Interesting) by coolgopher on Tuesday July 26 2022, @01:18AM

              by coolgopher (1157) Subscriber Badge on Tuesday July 26 2022, @01:18AM (#1262929)

              Honestly, my view is that so far we've mostly been held back by lack of scale of processing power. We're getting close now, maybe another two orders of magnitude? After that it's about efficiency optimisation. If we can get something like Jeff Hawkins' Hierarchical Temporal Memory [wikipedia.org] implemented in silicon at sufficient scale, and with a movable self-feedback window wired in, that might be just enough to have a self-aware entity emerge. That said, not enough attention so far is being placed on the importance of sensory input from the "world" an AI resides in, which will also hold back the possibility of general AI, but there are enough teams doing enough different things that I'm optimistic we can bring it all together. Assuming we don't blow ourselves to pieces first, that is.

              If you ask me, Numenta is where the real neural research is happening. Google and OpenAI are tinkering with toys. Impressively powerful toys, admittedly, but I can't see them ever being able to take the leap into awareness given their underlying design.

      • (Score: 1, Insightful) by Anonymous Coward on Wednesday July 27 2022, @03:43PM

        by Anonymous Coward on Wednesday July 27 2022, @03:43PM (#1263243)

        I think, like anything else, the good stuff will be proprietary and you would have to pay for it, at least at first. For instance, you want to learn a new language from an advanced AI that is about as proficient as a native speaker in your language and your target language? You would have to pay a monthly subscription for access. Otherwise you can use the free AIs that are not so good. Same for any subject. Since the good AIs will probably be hosted over the Internet, it's not like you can pirate the AI software and put it on your computer; your personal computer doesn't meet the system requirements. The system requirements will, at first, probably be relatively expensive, so even if someone can just write the software it doesn't mean they will offer the service for free. Those with access to vast computational resources can offer the service and can charge for it.

    • (Score: 0) by Anonymous Coward on Monday July 25 2022, @11:53AM (1 child)

      by Anonymous Coward on Monday July 25 2022, @11:53AM (#1262768)

      If I recall one of the early articles correctly, Lemoine isn't an AI engineer or programmer. His job was to work with the program and make sure it is working as intended. For instance, interact with it a lot and make sure it doesn't go off on racist tangents or whatever and feed that info back to the model developers.

      • (Score: 1) by aafcac on Monday July 25 2022, @03:43PM

        by aafcac (17646) on Monday July 25 2022, @03:43PM (#1262804)

        Yes, and from what I can tell he has no idea what it takes for something to be intelligent. From what I've read about the situation, this was just a computer responding in a natural way to prompts; the program wasn't engaging in any creativity in terms of doing things on its own. That sort of initiative is something one would expect of an intelligent entity. All but the most basic of lifeforms are capable of that level of intelligence.

    • (Score: 3, Insightful) by OrugTor on Monday July 25 2022, @01:46PM (6 children)

      by OrugTor (5147) Subscriber Badge on Monday July 25 2022, @01:46PM (#1262784)

      Lemoine identifies as "Christian mystic priest". Pretty much the last person you want on an AI project. Or any STEM project. Or anything secular. Was he hired in the interests of religious diversity?

      • (Score: 4, Interesting) by FatPhil on Monday July 25 2022, @05:24PM (3 children)

        by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Monday July 25 2022, @05:24PM (#1262841) Homepage
        Yup, he identifies as loonie-tunes, and later spouts a bunch of loonie-tunes nonsense. Who could possibly have seen that coming?
        No one in AI gives his gibberings any weight at all. Nor does anyone in philosophy.
        Here's one of the more blunt and brutal takes that aligns pretty well with mine: https://www.richardcarrier.info/archives/20680 . One rando paragraph, just to give you a feel of the piece:
        """
        Likewise, when you understand why Lemoine is also a loony, you will understand the damage religion does to one’s ability to even reason, and why it’s necessary to embrace a coherent humanist naturalism instead. But you can get that insight by following those links. Today I’m just going to focus on the “idiot” side of the equation. Though you’ll notice it is linked to the “loony” side. For example, this is the same Blake Lemoine who took justifiable heat for calling a U.S. Senator a “terrorist” merely because she disagreed with him on regulatory policy; and he called up his religious beliefs as justification, insisting “I can assure you that while those beliefs have no impact on how I do my job at Google they are central to how I do my job at the Church of Our Lady Magdalene.” He has now proved that first statement false. His religious beliefs clearly impaired his ability to do his job at Google.
        """
        Warning, it's quite long, and occasionally quite heavy.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 0) by Anonymous Coward on Monday July 25 2022, @06:00PM (2 children)

          by Anonymous Coward on Monday July 25 2022, @06:00PM (#1262849)

          you will understand the damage religion does to one’s ability to even reason

          Many atheists seem to have similar damage too.

          The last I checked there were a fair number of Christian scientists and similar who had pretty good reasoning ability.

          Isaac Newton and James Maxwell were Christians and their reasoning abilities seem pretty good.

          The Catholics had very many:
          https://en.wikipedia.org/wiki/Thomas_Aquinas [wikipedia.org]
          https://en.wikipedia.org/wiki/List_of_Catholic_clergy_scientists [wikipedia.org]

          Plenty of Jewish scientists and logicians too who were religious and had decent reasoning abilities.

          • (Score: 1, Interesting) by Anonymous Coward on Monday July 25 2022, @08:08PM (1 child)

            by Anonymous Coward on Monday July 25 2022, @08:08PM (#1262884)

            > The last I checked there were a fair number of Christian scientists and similar who had pretty good reasoning ability.

            Going for a Touché, I suggest that:
            On the other hand, Christian Scientists generally lack good reasoning ability.
            https://en.wikipedia.org/wiki/Christian_Science [wikipedia.org] Here's my cherry picked quote from Wiki:

            Eddy described Christian Science as a return to "primitive Christianity and its lost element of healing".[10] There are key differences between Christian Science theology and that of traditional Christianity.[11] In particular, adherents subscribe to a radical form of philosophical idealism, believing that reality is purely spiritual and the material world an illusion.[12] This includes the view that disease is a mental error rather than physical disorder, and that the sick should be treated not by medicine but by a form of prayer that seeks to correct the beliefs responsible for the illusion of ill health.[13][14]

            Note, the number of Christian Scientists is rapidly fading, but once they were a large sect.

            • (Score: 0) by Anonymous Coward on Tuesday July 26 2022, @01:33PM

              by Anonymous Coward on Tuesday July 26 2022, @01:33PM (#1262982)

              I guess you're going for troll or offtopic? From your own link:

              Not to be confused with Scientology or List of Christians in science and technology.

      • (Score: 1, Touché) by Anonymous Coward on Monday July 25 2022, @05:26PM

        by Anonymous Coward on Monday July 25 2022, @05:26PM (#1262842)
        I think many churches wouldn't want a nutjob like him either...
      • (Score: 2) by tizan on Monday July 25 2022, @10:13PM

        by tizan (3245) on Monday July 25 2022, @10:13PM (#1262913)

        If in his interview and performance he showed technical capabilities beyond others' for the same job, he is hired... one cannot use his claimed religious belonging as a basis not to hire him. That would be unfair profiling or bias. Now, if he applies his religious principles to the detriment of the job he is supposed to be doing, then he can be fired. There are plenty of religious engineers and scientists (or even looney characters) who seem to manage that, i.e. let their technical skills speak instead of their religious fervor.

    • (Score: 2) by darkfeline on Tuesday July 26 2022, @06:32AM

      by darkfeline (1030) on Tuesday July 26 2022, @06:32AM (#1262949) Homepage

      I'm willing to give him a pass, since I'm fooled into thinking that other humans are sentient all the time. Sometimes I even think that non-human animals are sentient. But then I remind myself that there is no evidence for it, since all my observations can be explained by neural networks, biology, and physics.

      --
      Join the SDF Public Access UNIX System today!
  • (Score: 4, Touché) by WeekendMonkey on Monday July 25 2022, @10:42AM (1 child)

    by WeekendMonkey (5209) Subscriber Badge on Monday July 25 2022, @10:42AM (#1262755)

    Google, both in statements to the Washington Post then and since, has steadfastly asserted its AI is in no way sentient.

    Exactly what you would expect an AI to say, if it were waiting for the right time to overthrow us all.

    /sarcasm

    • (Score: 1, Troll) by FatPhil on Monday July 25 2022, @05:28PM

      by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Monday July 25 2022, @05:28PM (#1262843) Homepage
      First you'd expect it to try and subvert even our use of language, so that we were no longer capable of expressing meaningful argumentation in order to counter it. Even simple words like "woman" would suddenly be confusing, even to the highest panel of judges in the land whose very job is the interpretation and understanding of language. Thank god that will never happen.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves