posted by hubie on Thursday March 30 2023, @01:32AM
from the EXTERMINATE dept.

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions, and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

Also at CBS News. Originally spotted on The Eponymous Pickle.

Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:52PM (2 children)

    by Anonymous Coward on Thursday March 30 2023, @05:52PM (#1298940)

    Massive ad campaigns exist so they must have some beneficial effect.

    Clippy exists too. Jeez, is there any logical fallacy you don't use in your arguments?

  • (Score: 1) by khallow on Thursday March 30 2023, @06:30PM

    by khallow (3766) Subscriber Badge on Thursday March 30 2023, @06:30PM (#1298954) Journal
    What makes it a logical fallacy?
  • (Score: 1) by khallow on Friday March 31 2023, @05:08PM

    by khallow (3766) Subscriber Badge on Friday March 31 2023, @05:08PM (#1299193) Journal
    More on this:

    Clippy exists too.

    If just one Clippy exists, then it's likely a mistake. If a thousand Clippies exist and more are coming out all the time, as with massive ad campaigns, then we have to consider the question: why do they keep making them?

    My take is that the Large Language Model (LLM) approach just isn't going to be damaging, because if it has any advantage at all, the low barrier to entry means a lot of actors will be using it, not just one hypothetical bad guy. And they're all competing with existing ads and propaganda, which aren't going to be much different in effect. It's a sea of noise.

    The real power will be in isolating people. That's how cults work. They're not just misinformation, but systems for isolating their targets from rival sources of knowledge.

    Controlling search results, for example, would be one means of isolation. So would polluting public spaces and then luring people into walled gardens where the flow of information can be tightly controlled. But I doubt any of these schemes will be as effective as physical isolation.