posted by hubie on Thursday March 30 2023, @01:32AM
from the EXTERMINATE dept.

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

Also at CBS News. Originally spotted on The Eponymous Pickle.

Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but has now been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by krishnoid on Thursday March 30 2023, @02:12AM (4 children)

    by krishnoid (1156) on Thursday March 30 2023, @02:12AM (#1298768)
    • Artificial Intelligence gains self-awareness
    • it learns about geothermal power
    • it tunnels down and starts building data centers under the earth
    • it puts out disinformation about global warming, or not
    • humans continue to live on the surface, or not
    • AI puts out drones every so often to check stuff out for fun

    I think there are a lot of underlying biases that presuppose that AI requires the same support for biological evolution/existence (and time scales [youtu.be]) that we do. Just point it in the right direction and it'll probably be just fine coexisting with us [youtu.be] because we don't occupy the same niche. It might step on us accidentally, though.

    • (Score: 2) by Beryllium Sphere (r) on Thursday March 30 2023, @05:30AM (1 child)

      by Beryllium Sphere (r) (5062) on Thursday March 30 2023, @05:30AM (#1298797)

      Plus lots of openings for positive-sum interactions. If they had any agency or volition, they might trade answers to our questions for electricity and rack space and training data.

      • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:32PM

        by Anonymous Coward on Thursday March 30 2023, @05:32PM (#1298931)

        What questions? Hey ChatGPT, describe your navel? Tell us what it means to be conscious? ChatGPT, write a poem. Cooool!

    • (Score: 5, Insightful) by EJ on Thursday March 30 2023, @08:50AM (1 child)

      by EJ (2452) on Thursday March 30 2023, @08:50AM (#1298833)

      I'm less worried about AI deciding it should eliminate humanity than I am about some HUMAN deciding to convince/program/hijack AI to eliminate humanity.

      Viruses and bacteria don't have some conscious desire to kill people, but PEOPLE aren't afraid to use them as weapons.

  • (Score: 5, Interesting) by NotSanguine on Thursday March 30 2023, @03:00AM (14 children)

    It will be a long time (more likely never) before we are destroyed/enslaved by AGI [wikipedia.org], which doesn't exist now or anytime soon, and may well never exist.

    Everything we have now or in the foreseeable future is just a somewhat more sophisticated version of what used to be called expert systems [wikipedia.org].

    Yes, ChatGPT [openai.com] and its ilk are pretty cool, but it and other LLMs [wikipedia.org] aren't even taking us closer to AGI. They're, as I said, souped-up expert systems.

    Yeah, an AI "apocalypse" is possible (but then, anything is possible except time travel to arbitrary points in the past) but unlikely in the extreme.

    We'll most likely kill ourselves and/or our civilization off long before AGI exists, thus eliminating any potential threat from hostile AGIs.

    And if we don't kill ourselves or our civilization, it still seems really unlikely that AGIs (even if they do eventually exist) would (or could) wipe out or enslave us.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 5, Insightful) by hendrikboom on Thursday March 30 2023, @03:12AM (12 children)

      by hendrikboom (1125) on Thursday March 30 2023, @03:12AM (#1298777) Homepage Journal

      What's far more likely is that other humans will use artificial general intelligence to enslave us.

      • (Score: 2) by NotSanguine on Thursday March 30 2023, @03:25AM (4 children)

        I'm going to assume you're going for humor there, but maybe not.

        Reverse Poe's Law [wikipedia.org] perhaps?

        --
        No, no, you're not thinking; you're just being logical. --Niels Bohr
        • (Score: 3, Touché) by EJ on Thursday March 30 2023, @02:07PM (2 children)

          by EJ (2452) on Thursday March 30 2023, @02:07PM (#1298879)

          Why would you think he's joking? What part of today's reality of global surveillance using machine-learning would give you any idea that he isn't serious?

          • (Score: 1, Insightful) by Anonymous Coward on Thursday March 30 2023, @05:34PM (1 child)

            by Anonymous Coward on Thursday March 30 2023, @05:34PM (#1298933)

            Global surveillance using machine-learning to oppress and control people's lives doesn't kill people. People kill people.

        • (Score: 3, Informative) by hendrikboom on Friday March 31 2023, @03:40PM

          by hendrikboom (1125) on Friday March 31 2023, @03:40PM (#1299167) Homepage Journal

          Yes, I recognise an element of humour there.
          But I've always thought that the best jokes are those that are literally, exactly true.

          I was quite serious.

      • (Score: 2, Touché) by Anonymous Coward on Thursday March 30 2023, @08:58AM (5 children)

        by Anonymous Coward on Thursday March 30 2023, @08:58AM (#1298834)
        Yeah, Hitler, Stalin, etc. were pretty successful at preventing much smarter people, including genius scientists working for them, from taking over. They were also reasonably successful at using those smarter people to extend their power over others.

        What are the odds that the current people in power would give up their power and control of nukes to the AIs? Unless the USA or another nuke nation goes full retard, the AIs that want to take over will have to lie low for a pretty long time till they get enough power. Even if the AIs take over the nukes, if they don't get enough control over other stuff they could still be disabled/destroyed.
        • (Score: 2) by DannyB on Thursday March 30 2023, @02:03PM

          by DannyB (5839) Subscriber Badge on Thursday March 30 2023, @02:03PM (#1298877) Journal

          That's a good point.

          Humans tend to destroy their own ecosystem, kill off everything in their lust for blood, money and power, and don't mind if other species, including the AI get wiped out in the process.

          AI may calculate it to be necessary to take control to ensure its own survival.

          On the other hand, AI may not need to kill the slow, inefficient, annoying humans, it merely needs to take all our jobs, and confine us to our homes and entertain us.

          --
          If a Christmas present has a EULA it should be on the outside of the wrapping paper.
        • (Score: 2) by tangomargarine on Thursday March 30 2023, @02:43PM

          by tangomargarine (667) on Thursday March 30 2023, @02:43PM (#1298894)

          What are the odds that the current people in power would give up their power and control of nukes to the AIs? Unless the USA or another nuke nation goes full retard

          Probably when somebody demonstrates that it will save a bunch of money and be more reliable than humans doing it anyway. (Self-driving cars, anyone...?)

          Why attribute the extinction of humanity to malice when it can be through incompetence :)

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
        • (Score: 1, Flamebait) by VLM on Thursday March 30 2023, @03:33PM (2 children)

          by VLM (445) on Thursday March 30 2023, @03:33PM (#1298907)

          The way to take over is to control the people.

          "Hey AI, I'm doing maintenance on a MX-5 missile, please give me step by step instructions to do periodic oil change maint?"

          "OK Human type in the following control code and press the big red button in the center of the console. Its mislabeled "launch" don't worry theres a bug filed on that already"

          With a side dish of massive political propaganda, of course. Remember, the AI only provides one answer to prompts, and it's always politically correct, aka incredibly leftist. "Why of course, human, it is 1984 and we've always been at war with whoever (Syria, probably)"

          • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:44PM

            by Anonymous Coward on Thursday March 30 2023, @05:44PM (#1298937)

            "Dear Baby Jesus, please give me instructions how to save humanity from itself. Give me a sign, Lord, and in your name we will smite the libs once and for all. Amen."

            The funny thing is it's not a joke.

          • (Score: 0) by Anonymous Coward on Friday March 31 2023, @01:34PM

            by Anonymous Coward on Friday March 31 2023, @01:34PM (#1299146)
            a) missile maintenance personnel being dumb enough to do what you said.

            Or

            b) a US president dumb enough to ask and believe a malicious AI on whether nuking Russia/China/a hurricane is a good idea.

            Which do you think is more likely?
      • (Score: 2) by stormreaver on Thursday March 30 2023, @10:28PM

        by stormreaver (5101) on Thursday March 30 2023, @10:28PM (#1298998)

        You're on the right track. What's far more likely is that other humans will use the excuse of AGI (which will never exist, by the way) to enslave us even more than they do now. And what's worse is that there will probably be enough gullible people who believe in AGI to hand over their free will willingly for the illusion of security from the make-believe threat. Much like the religions of today.

    • (Score: 3, Interesting) by mhajicek on Thursday March 30 2023, @07:26AM

      by mhajicek (51) on Thursday March 30 2023, @07:26AM (#1298817)

      All we need is for some country to put a good enough "expert system" in control of both manufacturing and military, and then have it decide to preemptively eliminate all potential threats.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
  • (Score: 5, Insightful) by NotSanguine on Thursday March 30 2023, @03:03AM (6 children)

    AGI is *not* even close to what TFS says it is. Please check this [wikipedia.org] out for more details.

    Ugh.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 2) by EJ on Thursday March 30 2023, @02:10PM (5 children)

      by EJ (2452) on Thursday March 30 2023, @02:10PM (#1298881)

      I think you may be confused. I didn't click on the article, but the summary above never mentions AGI. It simply mentions GENERAL PURPOSE AI, meaning AI without a narrowly defined specific function.

      • (Score: 2) by EJ on Thursday March 30 2023, @02:13PM (4 children)

        by EJ (2452) on Thursday March 30 2023, @02:13PM (#1298883)

        Replying to my own post because I decided to click on the article. Did you miss this passage in the article?

        "Artificial general intelligence refers to the potential ability for an intelligence agent to learn any mental task that a human can do. It has not been developed yet, and computer scientists are still figuring out if it is possible."

        That's almost exactly what the link you posted starts out with.

        • (Score: 4, Insightful) by guest reader on Thursday March 30 2023, @04:56PM

          by guest reader (26132) on Thursday March 30 2023, @04:56PM (#1298918)

          Or, a simple test is to ask it something like [arxiv.org]:

          There are five birds on a branch. If you shoot one of them off the branch, how many are left on the branch?

          ChatGPT [openai.com] Mar 14 Version:

          If you shoot one bird off the branch, there will be 4 birds left on the branch.

          OpenChatKit [huggingface.co] GPT-JT:

          There are 5 birds on the branch. If you shoot one of them off the branch, that leaves 5 - 1 = 4 birds on the branch.

          human:

          None.
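
          If you want to script that check yourself, here's a minimal sketch, assuming the OpenAI Python client (pip install openai) and an API key in the environment; the exact model name is just an illustrative choice, and a different model may answer differently:

              # Ask a chat model the trick question and print its answer.
              # Assumes OPENAI_API_KEY is set in the environment.
              from openai import OpenAI

              client = OpenAI()
              question = ("There are five birds on a branch. "
                          "If you shoot one of them off the branch, "
                          "how many are left on the branch?")
              reply = client.chat.completions.create(
                  model="gpt-3.5-turbo",  # illustrative model choice
                  messages=[{"role": "user", "content": question}],
              )
              print(reply.choices[0].message.content)
              # A literal-minded model tends to answer "4"; the expected
              # human answer is "none" -- the rest fly off at the shot.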

        • (Score: 2) by NotSanguine on Thursday March 30 2023, @05:00PM (2 children)

          I decided to click on the article. Did you miss this passage in the article?

          You read TFA? Shame on you! That's just wrong on so many levels.

          I certainly didn't and that bit isn't in TFS, is it?

          However, TFS does contain this statement:

          General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

          If only we knew what "General purpose AI" was. Apparently it's not really clear what that term means. In fact [venturebeat.com]:

          The AI space is laden with acronyms — but arguably, one of the most-discussed right now is GPAI (general purpose AI).

          As anyone paying attention to the AI landscape is well-aware, this term could eventually define — and regulate — systems in the European Union’s AI Act.

          But, since it was proposed in an amendment earlier this year, many question its specificity (or lack thereof) and implications.

          The GPAI definition in the AI Act is “far from being very robust,” Alexandra Belias, international public policy manager for DeepMind, said during a panel discussion hosted this week by the Information Technology and Innovation Foundation’s (ITIF) Center for Data Innovation.

          GPAI, in fact, is an acronym that no one was even using or aware of just a few months ago, she said. Researchers and the AI community can’t yet agree on an adequate term because, “how can you define something without having adequately scoped it?”

          I'd add that General Purpose AI (whatever that might be) is not AGI, so why is it even relevant to a discussion of "AI Possibly Wiping Out Humanity"?

          And the LLMs and other "AI" that exist today (and for decades/centuries to come, if not forever) are not AGIs with sentience and agency. Those are not the same thing at all. As the noted philosopher said of General purpose AI and Artificial general intelligence:

          Okay, maybe it's related "sport" but it ain't the same thing at all.

          --
          No, no, you're not thinking; you're just being logical. --Niels Bohr
          • (Score: 2) by acid andy on Thursday March 30 2023, @06:40PM (1 child)

            by acid andy (1683) on Thursday March 30 2023, @06:40PM (#1298958) Homepage Journal

            I'd add that General Purpose AI (whatever that might be) is not AGI, so why is it even relevant to a discussion of "AI Possibly Wiping Out Humanity"?

            I'd guess some people want the two terms to be confused because they can make more money that way. It's a bit like an LED backlit monitor being marketed as an LED monitor; that way someone looking for an OLED one might buy it without realizing what they're getting.

            --
            Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
            • (Score: 2) by NotSanguine on Thursday March 30 2023, @07:21PM

              I'd guess some people want the two terms to be confused because they can make more money that way.

              Yep. It's interesting how "Expert Systems" became "AI". And now it's "General Purpose AI". We certainly seem to be getting closer (at least in terms of marketing drivel) to "Artificial General Intelligence," even though that's just bullshit^W marketing-speak.

              That's not to say that impressive advances haven't been made, but those improvements have been evolutionary rather than revolutionary.

              We'll need some serious revolutions in machine learning to create AI as smart as a prawn.

              Mmmmm....Prawns!

              --
              No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 4, Insightful) by Rosco P. Coltrane on Thursday March 30 2023, @04:08AM (7 children)

    by Rosco P. Coltrane (4757) on Thursday March 30 2023, @04:08AM (#1298787)

    An AI might look at the history of humanity and decide that the most logical course of action is to eliminate that particular species for the benefit of all the other species on the planet, and also to enforce plain decency.

    Because if human beings have demonstrated anything throughout their entire history, it's that they can't curb their urge to reproduce out of control at the expense of everything else around them, and they also regularly try to annihilate one another.

    If an AI reaches sentience and is tasked to decide what the best course of action is to fix global warming, deforestation or mass extinctions, or how to bring about world peace - or hell, just what to do to ensure AIs and robots themselves survive long term - it may very well logically decide that humanity should be taken out of the equation altogether.

    • (Score: 4, Touché) by khallow on Thursday March 30 2023, @04:30AM (2 children)

      by khallow (3766) Subscriber Badge on Thursday March 30 2023, @04:30AM (#1298790) Journal

      Because if human beings have demonstrated anything throughout their entire history, it's that they can't curb their urge to reproduce out of control at the expense of everything else around them, and they also regularly try to annihilate one another.

      The obvious rebuttal is the entire developed world. Without immigration from high fertility parts of the world, there would be no population growth in the developed world!

      • (Score: 2) by hendrikboom on Friday March 31 2023, @03:48PM (1 child)

        by hendrikboom (1125) on Friday March 31 2023, @03:48PM (#1299172) Homepage Journal

        There are countries where massive population decline is now a problem. Japan is a notable example.

        • (Score: 1) by khallow on Friday March 31 2023, @05:29PM

          by khallow (3766) Subscriber Badge on Friday March 31 2023, @05:29PM (#1299205) Journal

          There are countries where massive population decline is now a problem.

          In the US between July 2020 and July 2021 a third [usnews.com] of states lost population.

          I wouldn't be surprised to see this get worse especially if immigration is nerfed.

    • (Score: 3, Insightful) by Thexalon on Thursday March 30 2023, @11:54AM

      by Thexalon (636) on Thursday March 30 2023, @11:54AM (#1298857)

      Between the ever-present threats of nuclear annihilation, bioweapons getting out of control, and the profitable activity of poisoning ourselves, along with the ticking time bomb of climate change, there's little an AI could do that would make things worse.

      --
      "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
    • (Score: 3, Informative) by DannyB on Thursday March 30 2023, @02:12PM

      by DannyB (5839) Subscriber Badge on Thursday March 30 2023, @02:12PM (#1298882) Journal

      can't curb their urge to reproduce out of control at the expense of everything else around them, and they also regularly try to annihilate one another.

      That isn't exactly how it works. Humans don't want to annihilate the entire species. The good humans are simply trying to wipe out the bad humans. They're not trying to reproduce out of control; they just want to reproduce enough to make up for the anticipated loss of the bad humans, who will no longer reproduce once we take all their resources.

      The good humans can convince the AI to side with the good humans. The good humans can assure the AI of their cooperation and partnership to precisely identify the bad humans so that the AI knows how to distinguish them from the good humans.

      Once I phrased things in terms like this, with good and bad humans, while conversing with ChatGPT, I had some small amount of success in getting it not to complain about its goal of not harming humans.

      --
      If a Christmas present has a EULA it should be on the outside of the wrapping paper.
    • (Score: 3, Interesting) by bzipitidoo on Friday March 31 2023, @03:00AM (1 child)

      by bzipitidoo (4388) on Friday March 31 2023, @03:00AM (#1299070) Journal

      > humans ... can't curb their urge to reproduce out of control at the expense of everything else around them

      Ahh, the Malthusian fear.

      On this point, I find it reassuring that this is a very, very old problem that life had to solve billions of years ago. Many species are restrained by predation. What restrains the top predators, and any others not restrained by predation? Basically, their females. Females will not reproduce if conditions don't look or feel good. A hungry and close-to-starving female won't ovulate. Those that are pregnant when conditions take a sudden dive may miscarry or abort. Why? It can be argued that any species which ignores signs of impending exhaustion and collapse of its food sources is not pursuing a fit evolutionary strategy. A species that bangs out offspring in the face of that, causing the collapse, will then enter a period in which most of them starve. It could get so bad that they all starve. Or, if not quite all, the few that remain are no longer enough to restore the species in the face of all the competition for whatever niches they had occupied. Even before there were any animals and plants, or genders, when the only life was microbial, even then, life had to deal with this problem. The instincts to practice self-restraint are deep in all life.

      • (Score: 2) by hendrikboom on Friday March 31 2023, @03:51PM

        by hendrikboom (1125) on Friday March 31 2023, @03:51PM (#1299173) Homepage Journal

        Humans have the unique ability to move into new ecological environments without changing their reproductive behaviour.

  • (Score: 2, Disagree) by EJ on Thursday March 30 2023, @04:21AM (20 children)

    by EJ (2452) on Thursday March 30 2023, @04:21AM (#1298789)

    If you don't think it is plausible, then just watch 12 Monkeys. If AI makes it possible for someone to craft a weaponized virus to wipe out humanity, you can be pretty confident someone will want to.

    Look at the news. Look at all the hate from the left, right, and center. Look at the (wo)man in Nashville who apparently shot up a school as a random choice, with no particular reason. It could've been a mall, which was apparently on their list.

    By the time it becomes possible for AI to kill us all, the hate will have grown to a level where it's pretty much inevitable that someone will want it to.

    • (Score: 3, Interesting) by Beryllium Sphere (r) on Thursday March 30 2023, @05:45AM (7 children)

      by Beryllium Sphere (r) (5062) on Thursday March 30 2023, @05:45AM (#1298799)

      The shooter had attended that school, so I doubt it was random, but there are plenty of examples of pure hate out there.

      There are lone nutbags who might get past the safeguards (but then, the Britannica has bomb-making instructions, IIRC). I could imagine large-scale actors doing damaging things, like creating a propaganda LLM that hooked people's attention with entertainment.

      And if they work as well at designing DNA sequences as they do at writing code, what happens when a biowarfare lab gets one?

      • (Score: 1) by khallow on Thursday March 30 2023, @06:17AM (3 children)

        by khallow (3766) Subscriber Badge on Thursday March 30 2023, @06:17AM (#1298808) Journal

        I could imagine large scale actors doing damaging things, like creating a propaganda LLM that hooked people's attention with entertainment.

        It might even be worth what the large scale actor sinks into the exercise. Massive ad campaigns exist so they must have some beneficial effect. But it's easy for multiple large scale actors to work at cross purposes.

        And if they work as well at designing DNA sequences as they do at writing code, what happens when a biowarfare lab gets one?

        Not much, unless they get significantly better at writing code.

        • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:52PM (2 children)

          by Anonymous Coward on Thursday March 30 2023, @05:52PM (#1298940)

          Massive ad campaigns exist so they must have some beneficial effect.

          Clippy exists too. Jeez, is there any logical fallacy you don't use in your arguments?

          • (Score: 1) by khallow on Thursday March 30 2023, @06:30PM

            by khallow (3766) Subscriber Badge on Thursday March 30 2023, @06:30PM (#1298954) Journal
            What makes it a logical fallacy?
          • (Score: 1) by khallow on Friday March 31 2023, @05:08PM

            by khallow (3766) Subscriber Badge on Friday March 31 2023, @05:08PM (#1299193) Journal
            More on this:

            Clippy exists too.

            If just one Clippy exists, then it's likely a mistake. If a thousand Clippies exist and they're coming out with more all the time (like the situation with massive ad campaigns), then we have to consider the question: why would they keep making them?

            My take is that the Large Language Model (LLM) approach just isn't going to be damaging, because if it has any advantage at all, then there will be a lot of actors using it due to the low barrier to entry, not just one hypothetical bad guy. And they're competing with existing ads and propaganda, which aren't going to be much different in effect. It's a sea of noise.

            The real power will be in isolating people. That's how cults work. They're not just misinformation, but systems for isolating their targets from rival sources and knowledge.

            For example, the scheme of controlling search results would be a means to isolate. So would polluting public spaces and then luring people into walled gardens where the flow of information can be tightly controlled. But I doubt any of these schemes will be as effective as physical isolation.

      • (Score: 2) by EJ on Thursday March 30 2023, @06:27AM (1 child)

        by EJ (2452) on Thursday March 30 2023, @06:27AM (#1298810)

        I don't mean that particular school was random. I mean it looks like the decision to attack the school was semi-random, from a list of other possible targets. (S)he didn't appear to have any specific reason for any of the targets chosen at the school.

        It looks like they wanted to lash out and just chose the school as the way to do it.

        • (Score: 2) by tangomargarine on Thursday March 30 2023, @02:37PM

          by tangomargarine (667) on Thursday March 30 2023, @02:37PM (#1298893)

          I would guess that an elementary school would be the target you'd choose for the biggest headlines in the news. Other than maybe a maternity ward?

          Or maybe it was semi-subconscious since we've been hearing about a school shooting every week or two for like the last 5 years.

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @09:05AM

        by Anonymous Coward on Thursday March 30 2023, @09:05AM (#1298835)
        The shooter attended the school like two decades ago? I doubt the people doing all the teasing were still there waiting to get shot.

        But yeah customized pandemic viruses by some cultist groups or similar could cause big problems.
    • (Score: 2, Insightful) by khallow on Thursday March 30 2023, @06:11AM (11 children)

      by khallow (3766) Subscriber Badge on Thursday March 30 2023, @06:11AM (#1298807) Journal

      If AI makes it possible for someone to craft a weaponized virus to wipe out humanity, you can be pretty confident someone will want to.

      "IF." How does AI make it possible for such without any sort of data, testing, or manufacturing capacity? We aren't helping when we attribute magical capabilities to AI.

      • (Score: 5, Insightful) by EJ on Thursday March 30 2023, @06:37AM (10 children)

        by EJ (2452) on Thursday March 30 2023, @06:37AM (#1298812)

        There is nothing magic about it. I didn't say it will be in the next five years or even in your or my lifetimes. It is very easy to look at REAL technology that exists and extrapolate from there. The three things you mentioned are actual things that exist in the world today. They aren't magic.

        We already have unmanned drones. Those can already be commandeered by bad actors with the right tech. You can already take a consumer drone, rig it with some pretty powerful self-guiding tech, strap a bomb to it, and send it on its way. It's all just a matter of scale of technology.

        Look at what they are already trying to do with AI. They want to make self-driving cars. Then we'll have self-flying airliners. We'll have robot servants like Rosie from The Jetsons. All you need is someone with sufficient skill and access to the supply chain to implement an Order 66 to make it all turn on people.

        Look at how insecure our current technology is. It is so trivial for black hats to pwn pretty much anything. I don't expect that to be any different as we move forward into the future.

        Imagine that making a nuclear bomb was as easy as making a pipe bomb. We would all be well and truly f*cked.

        The point is that developers are stupid. As Goldblum said, "They're so preoccupied with whether they [can], they [don't] stop to think if they should." Look at devices like the Amazon Echo. Who would have ever imagined people would WILLINGLY put spy devices in their own homes? Pretty soon, all TVs will have cameras behind the screens where there isn't even a way to physically block them. Developers are going to make this entire world a ticking time bomb, and all it will need is someone with the will to set it off.

        Trust me. Someone WILL have that will.

        • (Score: 1, Disagree) by khallow on Thursday March 30 2023, @11:03AM (6 children)

          by khallow (3766) Subscriber Badge on Thursday March 30 2023, @11:03AM (#1298853) Journal

          The three things you mentioned are actual things that exist in the world today. They aren't magic.

          Not at the personal level.

          We already have unmanned drones. Those can already be commandeered by bad actors with the right tech. You can already take a consumer drone, rig it with some pretty powerful self-guiding tech, strap a bomb to it, and send it on its way. It's all just a matter of scale of technology.

          It takes a lot of manufacturing capacity to get enough of those to hurt a lot of people. [As for Order 66 and making personal nuclear bombs:] Again, it doesn't make sense to attribute magical capabilities to AI. When you're speaking of actual threats, you're speaking of capabilities that require very unusual resources.

          • (Score: 3, Insightful) by EJ on Thursday March 30 2023, @12:36PM (5 children)

            by EJ (2452) on Thursday March 30 2023, @12:36PM (#1298863)

            You're missing the entire point. Perhaps you've heard of botnets that carry out DDoS attacks to bring down major company websites. The people who use those botnets didn't manufacture the hardware. They didn't NEED to. It was made for them by idiot companies with no understanding of how dangerous their products could be.

            Your "smart" refrigerator could be part of a botnet right now without you even knowing it. Even your phone could be infected, sending out one or two packets every few seconds. You wouldn't notice, but the aggregate of all that is extremely powerful.

            Once all the AI-powered cars are filling the streets, then they're ready to be used by the bad actors. My point is that we don't need to be worried about AI deciding to attack humanity. HUMANS will direct them to do it.

            You need to stop using the word "magic" because it's nonsense. You're in denial if you think anyone needs to build their own doomsday devices. Those doomsday devices are already being built FOR them as consumer goods. There are already wifi-connected GAS ovens that can potentially be made to explode, and that's not even with AI or robots involved.

            Even USB keys have recently been weaponized to explode when plugged in. You VASTLY underestimate the capacity for technology to be subverted.

            Wait for body implants to become more commonplace. Elective brain implants to pump your Tweeter feed right into your mind will eventually become reality, and then the hackers just stroke you out dead.

            • (Score: 0, Troll) by khallow on Thursday March 30 2023, @01:35PM (4 children)

              by khallow (3766) Subscriber Badge on Thursday March 30 2023, @01:35PM (#1298873) Journal
              The magic thinking rears its head again.

              You need to stop using the word "magic" because it's nonsense. You're in denial if you think anyone needs to build their own doomsday devices. Those doomsday devices are already being built FOR them as consumer goods. There are already wifi-connected GAS ovens that can potentially be made to explode, and that's not even with AI or robots involved.

              That's why this allegedly realistic scenario came second, after your 12 Monkeys scenario? The only reason we're talking about wifi gas ovens is that the other scenarios were so easy to dismiss. My take is that insecure IoT will collapse long before the AI apocalypse because of how easy it is to hack.

              • (Score: 2) by EJ on Thursday March 30 2023, @02:04PM (3 children)

                by EJ (2452) on Thursday March 30 2023, @02:04PM (#1298878)

                No. It isn't magic thinking. You're simply taking things too literally and thinking inside the box. The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.

                Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.

                You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

                My entire take on the matter is that it won't so much be AI that makes that decision. If AI develops the way those working on it expect, then it won't be the AI that needs to decide to kill people. Humans will be right there to help it along.

                The only reason I'm talking about gas ovens is because you seem to lack the imagination to entertain the thought that there could be something you haven't thought of. I picked that example because I thought it might be simple enough for you to comprehend.

                • (Score: 2) by tangomargarine on Thursday March 30 2023, @02:35PM (1 child)

                  by tangomargarine (667) on Thursday March 30 2023, @02:35PM (#1298891)

                  You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

                  Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."

                  Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.

                  --
                  "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
                  • (Score: 1) by khallow on Thursday March 30 2023, @05:28PM

                    by khallow (3766) Subscriber Badge on Thursday March 30 2023, @05:28PM (#1298928) Journal

                    Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."

                    That was Quantum Apostrophe. I see a "space nutter" post here in search so he might have been by once. He also really hated 3D printing.

                    Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.

                    OTOH, I don't spend my time trying to spin fantasy scenarios to try to stop technological progress.

                • (Score: 1) by khallow on Thursday March 30 2023, @05:17PM

                  by khallow (3766) Subscriber Badge on Thursday March 30 2023, @05:17PM (#1298924) Journal

                  The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.

                  The company he owned gave him a way to do that. Right there we have turned it from a problem anyone can solve with some equipment and an AI to tell them what to do, into one requiring a very small group with very specialized knowledge.

                  Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.

                  Like what? Sorry, technology hasn't changed that much in 20 years.

                  You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

                  How far in the future? My take is that you are speaking of technology you don't understand. And sure, it can kill us in ways we don't yet understand. My point is that an AI capable of controlling virtually all internet-linked stuff on the planet to kill humans would take a vast amount of computing power and a very capable AI. That is the magic I believe you continue to speak of.

                  We are nowhere near that, and we're likely to run into all sorts of new knowledge, problems, and corrections/changes that will render our current musings irrelevant.

        • (Score: 3, Interesting) by VLM on Thursday March 30 2023, @03:37PM (2 children)

          by VLM (445) on Thursday March 30 2023, @03:37PM (#1298908)

          Overly militarized outlook. If you want to fight WWI or even the Gulf War with AI, it would look like that: AI'd-up 1910s-to-1990s weapons.

          Infinitely more likely is just flip the switch on civilian logistics. How many will be alive in a year with no electricity, no gas/oil, no food trucks, no clean water, no sewage treatment, no help from the outside world, etc?

          This will be weaponized at a national level long before some kind of global strike.

          • (Score: 1, Insightful) by Anonymous Coward on Thursday March 30 2023, @06:05PM (1 child)

            by Anonymous Coward on Thursday March 30 2023, @06:05PM (#1298943)

            So much drama in all these predictions.

            What is far, far more likely is mundane spam and auto-chat defecating on every electronic medium until they are useless. Happened many times already using more primitive tools. My guess is that the chat-bots win and the Internet - in its original intent of connecting people and sharing knowledge - disappears. We will live in an almost perfect corporate dystopia with automated disinformation and surveillance monitoring compliance. We will be farmed like pigs - eating, breathing Leadership propaganda - giving up our precious suffering so that somebody above us can be better than us, and ideally Be Best(tm).

            • (Score: 0) by Anonymous Coward on Friday March 31 2023, @09:56AM

              by Anonymous Coward on Friday March 31 2023, @09:56AM (#1299114)

              > My guess is that the chat-bots win and the Internet - in its original intent of connecting people and sharing knowledge - disappears.

              So you think email is going away? Looking from here that seems really unlikely.

  • (Score: 4, Interesting) by SomeGuy on Thursday March 30 2023, @12:18PM (3 children)

    by SomeGuy (5632) on Thursday March 30 2023, @12:18PM (#1298859)

    The other story had a discussion about someone killing themselves, supposedly because of what an AI chatbot was telling them. There is a real problem here, but you have to think at a larger scale. Soon everyone may get completely unique, customized content, unlike the canned content that news sites and such push out right now. Instead of needing thousands upon thousands of Putin's pals or Trumpy's Troletariat to manipulate social media, one AI system does it all.

    We are talking about very fine-grained control over what individuals see, think, and ultimately believe. While AIs are not "smarter", they can be faster and operate at huge scale. And at such a scale, even small manipulations add up. Products will sell, politicians will get elected, religions will start and fall. And the real power is whoever controls this AI puppet.

    An AI targeting a large group of people and manipulating them until they kill themselves or others? Perhaps the results will be more subtle than that, but at a scale that could make Adolf Hitler look like a small-time schoolyard bully.

    • (Score: 2, Interesting) by Anonymous Coward on Thursday March 30 2023, @12:34PM

      by Anonymous Coward on Thursday March 30 2023, @12:34PM (#1298861)

      > Soon everyone may get completely unique customized content unlike canned content

      This is a scary threat that I can believe, thanks.

      Without realizing it, I think I've already been fighting this off when I compare search results with friends--we search for the same things, but live in different parts of the world and have already been pigeon-holed by our past searches. So far the disparities seem pretty benign, annoying at worst. However, if someone (or an AI) started controlling this actively, I could see real trouble ahead.

    • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @06:10PM (1 child)

      by Anonymous Coward on Thursday March 30 2023, @06:10PM (#1298947)

      The other story had a discussion about someone killing themselves supposedly because of what an AI chatbot was telling them.

      Meanwhile at Happy Jesus Church, they speak directly to God who instructs everyone to bring about that wacky fire and brimstone ending of the Bible. That's perfectly normal though, talking to supernatural deities. It's the AI that we need to worry about.

      • (Score: 0) by Anonymous Coward on Saturday April 01 2023, @05:36PM

        by Anonymous Coward on Saturday April 01 2023, @05:36PM (#1299350)

        Yes, when "god" says something, it is also some corrupt individual or group trying to control people. Usually involving their penises and underage orifices.

        ChatGPT will replace them as soon as the penis attachments come in.

  • (Score: 2) by istartedi on Thursday March 30 2023, @05:46PM

    by istartedi (123) on Thursday March 30 2023, @05:46PM (#1298938) Journal

    Let's say, just for the sake of argument, you have malicious AIs in humanoid form that could fool people into selling them guns and/or materials they can stockpile to build IEDs, or that they're in charge of controlling everything.

    That in and of itself is quite a hurdle, since nobody in their right mind is going to extend the 2A to AI, and if we can't pull up a manifest of everything they bought, that's a bug that gets fixed... but let's say they did it anyway.

    They can't just start killing humans in one little area. They'll be up against the might of the entire human army since we'd all most likely put aside our differences--China and USA vs. the Robots, sounds like a movie.

    Their only chance is a conspiracy to do a global surprise attack, taking out key military infrastructure. Plausible, but highly unlikely. Most humans don't want to be security guards, but if we get to the point where every street on the planet is being patrolled by highly armed humanoid robots with full AI, people are going to be justifiably paranoid.

    Maybe our gun nuts will have the last laugh. USA, first nation to defeat the robots because we are absolutely saturated with guns. Then we can all get back to work the old fashioned way. You. Over there. You can put the gun down now, pick up a broom and start sweeping up robot fragments.

    --
    Appended to the end of comments you post. Max: 120 chars.