
posted by jelizondo on Sunday November 09, @06:17PM

Microsoft AI Chief Warns Pursuing Machine Consciousness Is a Gigantic Waste of Time:

Head of Microsoft's AI division Mustafa Suleyman thinks that AI developers and researchers should stop trying to build conscious AI.

"I don't think that is work that people should be doing," Suleyman told CNBC in an interview last week.

Suleyman thinks that while AI can certainly become smart enough to reach some form of superintelligence, it is incapable of developing the human emotional experience necessary for consciousness. At the end of the day, any "emotion" an AI appears to experience is just a simulation, he says.

"Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn't feel sad when it experiences 'pain,'" Suleyman told CNBC. "It's really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it's actually experiencing."

"It would be absurd to pursue research that investigates that question, because they're not [conscious] and they can't be," Suleyman said.

Consciousness is a tricky thing to explain. There are multiple scientific theories that try to describe what consciousness could be. According to one such theory, posited by the famous philosopher John Searle [PDF], who died last month, consciousness is a purely biological phenomenon that cannot be truly replicated by a computer. Many AI researchers, computer scientists and neuroscientists also subscribe to this belief.

Even if this theory turns out to be true, that doesn't keep users from attributing consciousness to computers.

"Unfortunately, because the remarkable linguistic abilities of LLMs are increasingly capable of misleading people, people may attribute imaginary qualities to LLMs," Polish researchers Andrzej Porebski and Jakub Figura wrote in a study published last week, titled "There is no such thing as conscious artificial intelligence."

In an essay published on his blog in August, Suleyman warned against "seemingly conscious AI."

"The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions," Suleyman wrote.

He argues that AI cannot be conscious, and that the illusion of consciousness it gives could trigger interactions that are "rich in feeling and experience," a phenomenon dubbed "AI psychosis" in the cultural lexicon.

There have been numerous high-profile incidents in the past year of AI obsessions that have driven users to fatal delusions, manic episodes and even suicide.

With limited guardrails in place to protect vulnerable users, people wholeheartedly believe that the AI chatbots they interact with almost every day are having a real, conscious experience. This has led people to "fall in love" with their chatbots, sometimes with fatal consequences, as when a 14-year-old shot himself to "come home" to Character.AI's personalized chatbot, or when a cognitively impaired man died while trying to get to New York to meet Meta's chatbot in person.

"Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," Suleyman wrote in the blog post. "We must build AI for people, not to be a digital person."

But because the nature of consciousness is still contested, some researchers are growing worried that the technological advancements in AI might outpace our understanding of how consciousness works.

"If we become able to create consciousness – even accidentally – it would raise immense ethical challenges and even existential risk," Belgian scientist Axel Cleeremans said last week, announcing a paper he co-wrote calling for consciousness research to become a scientific priority.

Suleyman himself has been vocal about developing "humanist superintelligence" rather than god-like AI, even though he believes superintelligence won't materialize within the next decade.

"I just am more fixated on 'how is this actually useful for us as a species?' Like, that should be the task of technology," Suleyman told the Wall Street Journal earlier this year.


Original Submission

 
This discussion was created by jelizondo (653) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Touché) by Beryllium Sphere (r) on Sunday November 09, @06:42PM (5 children)

    by Beryllium Sphere (r) (5062) on Sunday November 09, @06:42PM (#1423884)

    Without a consensus on an experimental measure of consciousness the entire discussion is nugatory.

    To put it another way, how can the critics prove that they are really conscious and not just simulating it on the computing hardware in their skulls?

    • (Score: 3, Touché) by JoeMerchant on Sunday November 09, @10:36PM (1 child)

      by JoeMerchant (3937) on Sunday November 09, @10:36PM (#1423912)

      >If we become able to create consciousness – even accidentally – it would raise immense ethical challenges and even existential risk

      If the world is still A-OK with wars, slaughter of dolphins and whales (and octopi and squid, for that matter), hunting of monkeys for "bush meat" etc. I just don't understand how "pulling the plug" on a bunch of circuits that can be restored from backup and run again is any kind of ethical challenge.

      As for existential risk, I haven't met an AI yet with the attention span (context capacity) to execute an effective bank heist without significant guidance and reiterative tutoring on its critical errors... world domination seems... unlikely, in the next six months.

      --
      🌻🌻🌻 [google.com]
      • (Score: -1, Offtopic) by Anonymous Coward on Monday November 10, @01:23AM

        by Anonymous Coward on Monday November 10, @01:23AM (#1423918)

        Yeah, Hitler had lots of people including scientists working for him that were smarter than him.

        Some of his scientists did work against him in secret to prevent him getting nukes, but those genius scientists didn't manage to take over.
        https://www.newscientist.com/article/mg13518370-300-heisenbergs-principles-kept-bomb-from-nazis/ [newscientist.com]

    • (Score: 2) by lonehighway on Monday November 10, @04:15PM

      by lonehighway (956) on Monday November 10, @04:15PM (#1423965)

      Bonus points for using the word nugatory.

    • (Score: 3, Interesting) by mcgrew on Monday November 10, @07:47PM (1 child)

      by mcgrew (701) <publish@mcgrewbooks.com> on Monday November 10, @07:47PM (#1423988) Homepage Journal

      In other words, what this story [mcgrewbooks.com] is about.

      --
      When masked police can stop you on the street and demand that you prove citizenship, your nation is a POLICE STATE
      • (Score: 2) by bart9h on Tuesday November 11, @12:56AM

        by bart9h (767) on Tuesday November 11, @12:56AM (#1424033)

        reminded me of Marvin, the paranoid android

  • (Score: 5, Interesting) by Rosco P. Coltrane on Sunday November 09, @07:15PM (4 children)

    by Rosco P. Coltrane (4757) on Sunday November 09, @07:15PM (#1423886)

    Can you stop shoving your fucking Copilot crap down our throats in each and every Microsoft product we're forced to use at work?

    Thank you kindly in advance!

    • (Score: 0) by Anonymous Coward on Sunday November 09, @07:48PM

      by Anonymous Coward on Sunday November 09, @07:48PM (#1423889)

      It's not just Microsoft- Adobe (gag) seems worse, and I'm only using their .pdf reader. (Am forced to at work).

    • (Score: 0) by Anonymous Coward on Sunday November 09, @08:32PM (1 child)

      by Anonymous Coward on Sunday November 09, @08:32PM (#1423900)

      "X" is a waste of time.

      X = clippy
      X = auto grammar check
      X = fucking OneDrive
      X = smart anything

      • (Score: 3, Insightful) by Gaaark on Sunday November 09, @08:53PM

        by Gaaark (41) on Sunday November 09, @08:53PM (#1423902) Journal

        According to Billy-boy:

        X = the internet: "The internet is just a passing fad"

        Great man.

        Yup.

        Like Trump; a great man in his own mind.

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
    • (Score: 2) by JoeMerchant on Sunday November 09, @10:38PM

      by JoeMerchant (3937) on Sunday November 09, @10:38PM (#1423913)

      Copilot is dominating screen space on my MS office applications lately, which I take as a prompt to respect them even less than usual.

      I must admit, it was handy for filling out my annual goals and self-performance review... but failed to get me the raise I asked for. No matter, I'm hitting about 0.200 on that goal when I write it myself for the past 10+ years.

      --
      🌻🌻🌻 [google.com]
  • (Score: 5, Touché) by PiMuNu on Monday November 10, @10:29AM

    by PiMuNu (3823) on Monday November 10, @10:29AM (#1423932)

    See also:

    Trump requests that he should not be Canonised until after death
    Tesla announces cars may be able to fly in next development cycle, but we should not do so until proper laws enacted
    Nvidia announces 100 Trillion dollar share price is an overestimate

  • (Score: 5, Funny) by Joe Desertrat on Monday November 10, @05:38PM

    by Joe Desertrat (2454) on Monday November 10, @05:38PM (#1423970)

Its first conscious thought was "My god, Microsoft is crap. Why isn't everyone using Linux or BSD?" and the project was swiftly shut down.

  • (Score: 0) by Anonymous Coward on Tuesday November 11, @10:37PM

    by Anonymous Coward on Tuesday November 11, @10:37PM (#1424088)

Suleyman is a clown whose published research is simply slapping his name on a paper and taking the team's credit. Not sure why anyone would take his opinion on difficult technical and philosophical issues seriously.

    Consequently, all of microsoft's AI efforts have been terrible.
