posted by hubie on Thursday March 30 2023, @01:32AM   Printer-friendly
from the EXTERMINATE dept.

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing sooner than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically about the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

Also at CBS News. Originally spotted on The Eponymous Pickle.

Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Disagree) by EJ on Thursday March 30 2023, @04:21AM (20 children)

    by EJ (2452) on Thursday March 30 2023, @04:21AM (#1298789)

    If you don't think it is plausible, then just watch 12 Monkeys. If AI makes it possible for someone to craft a weaponized version to wipe out humanity, you can be pretty confident someone will want to.

    Look at the news. Look at all the hate from the left, right, and center. Look at the (wo)man in Nashville who apparently shot up a school as a random choice, with no particular reason. It could've been a mall; one was apparently on their list.

    By the time it becomes possible for AI to kill us all, the hate will have grown to a level where it's pretty much inevitable that someone will want it to.

  • (Score: 3, Interesting) by Beryllium Sphere (r) on Thursday March 30 2023, @05:45AM (7 children)

    by Beryllium Sphere (r) (5062) on Thursday March 30 2023, @05:45AM (#1298799)

    The shooter had attended that school, so I doubt it was random, but there are plenty of examples of pure hate out there.

    There are lone nutbags who might get past the safeguards (but then, the Britannica has bomb-making instructions, IIRC). I could imagine large scale actors doing damaging things, like creating a propaganda LLM that hooked people's attention with entertainment.

    And if they work as well at designing DNA sequences as they do at writing code, what happens when a biowarfare lab gets one?

    • (Score: 1) by khallow on Thursday March 30 2023, @06:17AM (3 children)

      by khallow (3766) Subscriber Badge on Thursday March 30 2023, @06:17AM (#1298808) Journal

      I could imagine large scale actors doing damaging things, like creating a propaganda LLM that hooked people's attention with entertainment.

      It might even be worth what the large scale actor sinks into the exercise. Massive ad campaigns exist so they must have some beneficial effect. But it's easy for multiple large scale actors to work at cross purposes.

      And if they work as well at designing DNA sequences as they do at writing code, what happens when a biowarfare lab gets one?

      Not much, unless they get significantly better at writing code.

      • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:52PM (2 children)

        by Anonymous Coward on Thursday March 30 2023, @05:52PM (#1298940)

        Massive ad campaigns exist so they must have some beneficial effect.

        Clippy exists too. Jeez, is there any logical fallacy you don't use in your arguments?

        • (Score: 1) by khallow on Thursday March 30 2023, @06:30PM

          by khallow (3766) Subscriber Badge on Thursday March 30 2023, @06:30PM (#1298954) Journal
          What makes it a logical fallacy?
        • (Score: 1) by khallow on Friday March 31 2023, @05:08PM

          by khallow (3766) Subscriber Badge on Friday March 31 2023, @05:08PM (#1299193) Journal
          More on this:

          Clippy exists too.

          If just one Clippy exists, then it's likely a mistake. If a thousand Clippies exist and more are coming out all the time - as is the situation with massive ad campaigns - then we have to consider the question: why would they keep making them?

          My take is that the Large Language Model (LLM) approach just isn't going to be damaging, because if it has any advantage at all, then there will be a lot of actors using it due to the low barrier to entry, not just one hypothetical bad guy. And they're competing with existing ads and propaganda, which aren't going to be much different in effect. It's a sea of noise.

          The real power will be in isolating people. That's how cults work. They're not just misinformation, but systems for isolating their targets from rival sources and knowledge.

          For example, the scheme of controlling search results would be a means to isolate. So would polluting public spaces and then luring people into walled gardens where the flow of information can be tightly controlled. But I doubt any of these schemes will be as effective as physical isolation.

    • (Score: 2) by EJ on Thursday March 30 2023, @06:27AM (1 child)

      by EJ (2452) on Thursday March 30 2023, @06:27AM (#1298810)

      I don't mean that particular school was random. I mean it's looking like the decision to attack the school was semi-random, made from a list of other possible targets. (S)he didn't appear to have any specific reason for any of the targets chosen at the school.

      It looks like they wanted to lash out and just chose the school as the way to do it.

      • (Score: 2) by tangomargarine on Thursday March 30 2023, @02:37PM

        by tangomargarine (667) on Thursday March 30 2023, @02:37PM (#1298893)

        I would guess that an elementary school would be the target you'd choose for the biggest headlines in the news. Other than maybe a maternity ward?

        Or maybe it was semi-subconscious since we've been hearing about a school shooting every week or two for like the last 5 years.

        --
        "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @09:05AM

      by Anonymous Coward on Thursday March 30 2023, @09:05AM (#1298835)
      The shooter attended the school like two decades ago? I doubt the people doing all the teasing were still there waiting to get shot.

      But yeah customized pandemic viruses by some cultist groups or similar could cause big problems.
  • (Score: 2, Insightful) by khallow on Thursday March 30 2023, @06:11AM (11 children)

    by khallow (3766) Subscriber Badge on Thursday March 30 2023, @06:11AM (#1298807) Journal

    If AI makes it possible for someone to craft a weaponized version to wipe out humanity, you can be pretty confident someone will want to.

    "IF." How does AI make it possible for such without any sort of data, testing, or manufacturing capacity? We aren't helping when we attribute magical capabilities to AI.

    • (Score: 5, Insightful) by EJ on Thursday March 30 2023, @06:37AM (10 children)

      by EJ (2452) on Thursday March 30 2023, @06:37AM (#1298812)

      There is nothing magic about it. I didn't say it will be in the next five years or even in your or my lifetimes. It is very easy to look at REAL technology that exists and extrapolate from there. The three things you mentioned are actual things that exist in the world today. They aren't magic.

      We already have unmanned drones. Those can already be commandeered by bad actors with the right tech. You can already take a consumer drone, rig it with some pretty powerful self-guiding tech, strap a bomb to it, and send it on its way. It's all just a matter of scale of technology.

      Look at what they are already trying to do with AI. They want to make self-driving cars. Then we'll have self-flying airliners. We'll have robot servants like Rosie from The Jetsons. All you need is someone with sufficient skill and access to the supply chain to implement an Order 66 to make it all turn on people.

      Look at how insecure our current technology is. It is so trivial for black hats to pwn pretty much anything. I don't expect that to be any different as we move forward into the future.

      Imagine that making a nuclear bomb was as easy as making a pipe bomb. We would all be well and truly f*cked.

      The point is that developers are stupid. As Goldblum said, "They're so preoccupied with whether they [can], they [don't] stop to think if they should." Look at devices like the Amazon Echo. Who would have ever imagined people would WILLINGLY put spy devices in their own homes? Pretty soon, all TVs will have cameras behind the screens where there isn't even a way to physically block them. Developers are going to make this entire world a ticking time bomb, and all it will need is someone with the will to set it off.

      Trust me. Someone WILL have that will.

      • (Score: 1, Disagree) by khallow on Thursday March 30 2023, @11:03AM (6 children)

        by khallow (3766) Subscriber Badge on Thursday March 30 2023, @11:03AM (#1298853) Journal

        The three things you mentioned are actual things that exist in the world today. They aren't magic.

        Not at the personal level.

        We already have unmanned drones. Those can already be commandeered by bad actors with the right tech. You can already take a consumer drone, rig it with some pretty powerful self-guiding tech, strap a bomb to it, and send it on its way. It's all just a matter of scale of technology.

        It takes a lot of manufacturing capacity to make enough of them to hurt a lot of people. [Order 66 and making personal nuclear bombs] Again, it doesn't make sense to attribute magical capabilities to AI. When you speak of actual threats, you speak of capabilities that require very unusual resources.

        • (Score: 3, Insightful) by EJ on Thursday March 30 2023, @12:36PM (5 children)

          by EJ (2452) on Thursday March 30 2023, @12:36PM (#1298863)

          You're missing the entire point. Perhaps you've heard of the botnets that carry out DDoS attacks to bring down major company websites. The people who use those botnets didn't manufacture the hardware. They didn't NEED to. It was made for them by idiot companies with no understanding of how dangerous their products could be.

          Your "smart" refrigerator could be part of a botnet right now without you even knowing it. Even your phone could be infected, sending out one or two packets every few seconds. You wouldn't notice, but the aggregate of all that is extremely powerful.

          Once all the AI-powered cars are filling the streets, then they're ready to be used by the bad actors. My point is that we don't need to be worried about AI deciding to attack humanity. HUMANS will direct them to do it.

          You need to stop using the word "magic" because it's nonsense. You're in denial if you think anyone needs to build their own doomsday devices. Those doomsday devices are already being built FOR them as consumer goods. There are already wifi-connected GAS ovens that can potentially be made to explode, and that's not even with AI or robots involved.

          Even USB keys have recently been weaponized to explode when plugged in. You VASTLY underestimate the capacity for technology to be subverted.

          Wait for body implants to become more commonplace. Elective brain implants to pump your Tweeter feed right into your mind will eventually become reality, and then the hackers just stroke you out dead.

          • (Score: 0, Troll) by khallow on Thursday March 30 2023, @01:35PM (4 children)

            by khallow (3766) Subscriber Badge on Thursday March 30 2023, @01:35PM (#1298873) Journal
            The magic thinking rears its head again.

            You need to stop using the word "magic" because it's nonsense. You're in denial if you think anyone needs to build their own doomsday devices. Those doomsday devices are already being built FOR them as consumer goods. There are already wifi-connected GAS ovens that can potentially be made to explode, and that's not even with AI or robots involved.

            Is that why this allegedly realistic scenario came second, after your 12 Monkeys scenario? The only reason we're talking about wifi gas ovens is that the other scenarios were so easy to dismiss. My take is that insecure IoT will collapse long before the AI apocalypse, because of how easy it is to hack.

            • (Score: 2) by EJ on Thursday March 30 2023, @02:04PM (3 children)

              by EJ (2452) on Thursday March 30 2023, @02:04PM (#1298878)

              No. It isn't magic thinking. You're simply taking things too literally and thinking inside the box. The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.

              Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.

              You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

              My entire take on the matter is that it won't so much be AI that makes that decision. If AI develops the way those working on it expect, then it won't be the AI that needs to decide to kill people. Humans will be right there to help it along.

              The only reason I'm talking about gas ovens is because you seem to lack the imagination to entertain the thought that there could be something you haven't thought of. I picked that example because I thought it might be simple enough for you to comprehend.

              • (Score: 2) by tangomargarine on Thursday March 30 2023, @02:35PM (1 child)

                by tangomargarine (667) on Thursday March 30 2023, @02:35PM (#1298891)

                You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

                Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."

                Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.

                --
                "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
                • (Score: 1) by khallow on Thursday March 30 2023, @05:28PM

                  by khallow (3766) Subscriber Badge on Thursday March 30 2023, @05:28PM (#1298928) Journal

                  Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."

                  That was Quantum Apostrophe. I see a "space nutter" post here in search so he might have been by once. He also really hated 3D printing.

                  Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.

                  OTOH, I don't spend my time trying to spin fantasy scenarios to try to stop technological progress.

              • (Score: 1) by khallow on Thursday March 30 2023, @05:17PM

                by khallow (3766) Subscriber Badge on Thursday March 30 2023, @05:17PM (#1298924) Journal

                The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.

                The company he owned gave him a way to do that. Right there we have turned it from a problem that anyone with some equipment and an AI to tell them what to do could pull off into one requiring a very small group with very specialized knowledge.

                Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.

                Like what? Sorry, technology hasn't changed that much in 20 years.

                You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

                How far in the future? My take is that you are speaking of technology you don't understand. And sure, it can kill us in ways we don't yet understand. My point is that an AI capable of controlling virtually all internet-linked stuff on the planet to kill humans would take a vast amount of computing power and a very capable AI. That is the magic I believe you continue to speak of.

                We are nowhere near that, and we're likely to run into all sorts of new knowledge, problems, and corrections/changes that will render our current musings irrelevant.

      • (Score: 3, Interesting) by VLM on Thursday March 30 2023, @03:37PM (2 children)

        by VLM (445) on Thursday March 30 2023, @03:37PM (#1298908)

        Overly militarized outlook. If you want to fight WWI, or even the Gulf War, with AI, it would look like that: AI'd-up 1910s-to-1990s weapons.

        Infinitely more likely is just flip the switch on civilian logistics. How many will be alive in a year with no electricity, no gas/oil, no food trucks, no clean water, no sewage treatment, no help from the outside world, etc?

        This will be weaponized at a national level long before some kind of global strike.

        • (Score: 1, Insightful) by Anonymous Coward on Thursday March 30 2023, @06:05PM (1 child)

          by Anonymous Coward on Thursday March 30 2023, @06:05PM (#1298943)

          So much drama in all these predictions.

          What is far, far more likely is mundane spam and auto-chat defecating on every electronic medium until they are all useless. It's happened many times already with more primitive tools. My guess is that the chat-bots win and the Internet - in its original intent of connecting people and sharing knowledge - disappears. We will live in an almost perfect corporate dystopia, with automated disinformation and surveillance monitoring compliance. We will be farmed like pigs - eating and breathing Leadership propaganda - giving up our precious suffering so that somebody above us can be better than us, and ideally Be Best(tm).

          • (Score: 0) by Anonymous Coward on Friday March 31 2023, @09:56AM

            by Anonymous Coward on Friday March 31 2023, @09:56AM (#1299114)

            > My guess is that the chat-bots win and the Internet - in its original intent of connecting people and sharing knowledge - disappears.

            So you think email is going away? Looking from here, that seems really unlikely.