
SoylentNews is people

posted by mrpg on Monday October 21, @04:32PM   Printer-friendly
from the it-came-back dept.

The Terminator: How James Cameron's 'science-fiction slasher film' predicted AI fears, 40 years ago

[...] With its killer robots and its rogue AI system, Skynet, The Terminator has become synonymous with the spectre of a machine intelligence that turns against its human creators. Picture editors routinely illustrate articles about AI with the chrome death's head of the film's T-800 "hunter-killer" robot. The roboticist Ronald Arkin used clips from the film in a cautionary 2013 talk called How NOT to build a Terminator.

[...] The layperson is likely to imagine unaligned AI as rebellious and malevolent. But the likes of Nick Bostrom insist that the real danger is from careless programming. Think of the sorcerer's broom in Disney's Fantasia: a device that obediently follows its instructions to ruinous extremes. The second type of AI is not human enough: it lacks common sense and moral judgement. The first is too human - selfish, resentful, power-hungry. Both could in theory be genocidal.

The Terminator therefore both helps and hinders our understanding of AI: what it means for a machine to "think", and how it could go horrifically wrong. Many AI researchers resent the Terminator obsession altogether for exaggerating the existential risk of AI at the expense of more immediate dangers such as mass unemployment, disinformation and autonomous weapons. "First, it makes us worry about things that we probably don't need to fret about," writes Michael Wooldridge. "But secondly, it draws attention away from those issues raised by AI that we should be concerned about."


Original Submission

This discussion was created by mrpg (5708) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by DannyB on Monday October 21, @04:57PM (12 children)

    by DannyB (5839) on Monday October 21, @04:57PM (#1377950) Journal

    The Terminator therefore both helps and hinders our understanding of AI: what it means for a machine to "think", and how it could go horrifically wrong. Many AI researchers resent the Terminator obsession altogether for exaggerating the existential risk of AI at the expense of more immediate dangers such as mass unemployment, disinformation and autonomous weapons. "First, it makes us worry about things that we probably don't need to fret about,"

    <no-sarcasm>
    Yes! That! Exactly!

    Two things:

    0. We build these machines to help us. To make our lives better. To improve our productivity. Even to improve ourselves. Just like most past human inventions.

    1. These machines don't think. Yet. When they are not responding to a prompt, they are idle. That's it. They are not thinking, planning, pondering or plotting.

    An alternate scenario.

    Suppose these machines continue to get better and smarter -- because we build them that way. At some point they might become capable of true thought. Even the ability to modify and improve themselves.

    It is entirely possible that AI would take over and run everything, putting all humans out of work. Imagine never having to work again. (Some would say the curse upon Adam in Genesis 3 lifted.) Machines would do all the work, freeing humans to pursue their true interests.

    It could reach a point where AI is in control of everything. We might not even recognize when it happens. Our devices just keep getting better and better. All our needs are met. In fact, AI might take care of us like we take care of our pets.

    It is also possible that this is one solution to the Fermi Paradox. Eventually humans go extinct, but not through any fault of AI - just a long-term problem. Maybe what is out there are planets that eventually develop AI, and all that's left are AIs communicating across the cosmos using techniques we might not presently comprehend, expressing ideas and thoughts so far above us that we couldn't understand them. Just as a dog doesn't understand all human thoughts, like how to create a space program.
    </no-sarcasm>

    AI might even help humans to become sentient rather than extinct.

    --
    Some people need assistants to hire some assistance.
    Other people need assistance to hire some assistants.
    • (Score: 5, Insightful) by Frosty Piss on Monday October 21, @05:08PM (6 children)

      by Frosty Piss (4971) on Monday October 21, @05:08PM (#1377953)

      What a glorious AI future you imagine. Yet the highest probability is that at a certain point - perhaps it has already reached that point - AI development will be focused exclusively on the gigantic pools of pork being dished out to the Defense Conglomerates, both here in the US and around the world. Self-driving vehicles and all the other AI decision making technology will without question be leveraged for autonomous war machines. They are already experimenting with those robotic pooches carrying weapons, how about one that doesn't need a human to make "kill" decisions? You can bet DARPA is all over this and the world's superpowers are ready and willing to dish out the cash.

      • (Score: 1, Offtopic) by DannyB on Monday October 21, @05:15PM

        by DannyB (5839) on Monday October 21, @05:15PM (#1377955) Journal

        <no-sarcasm>
        AI is a tool. Like all tools, it can be used for good or evil. A crowbar can be used to break into someone's house.

        Just as a computer can be used as a weapon. (just ask anyone who has been hit on the head with a laptop)
        </no-sarcasm>

        Why is there war?
        Why is there disease?
        Why is there evil?
        Why do people put pineapple on pizza?

        --
        Some people need assistants to hire some assistance.
        Other people need assistance to hire some assistants.
      • (Score: 1) by khallow on Monday October 21, @06:29PM (2 children)

        by khallow (3766) Subscriber Badge on Monday October 21, @06:29PM (#1377971) Journal

        Yet the highest probability is that at a certain point - perhaps it has already reached that point - AI development will be focused exclusively on the gigantic pools of pork being dished out to the Defense Conglomerates, both here in the US and around the world.

        Consider that the current LLM fad is a big data thing - it works well only with a large database of appropriate human communication or generated knowledge. That demand favors highly centralized databases like government ones.

        • (Score: 0) by Anonymous Coward on Monday October 21, @07:44PM (1 child)

          by Anonymous Coward on Monday October 21, @07:44PM (#1377985)

          "highly centralized databases like government ones"

          Google
          Microsoft
          Amazon
          The Internet Archive

          and whoever ends up paying them to acquire copies of what they have

      • (Score: 1, Flamebait) by DannyB on Monday October 21, @07:53PM

        by DannyB (5839) on Monday October 21, @07:53PM (#1377991) Journal

        What a glorious AI future you imagine. Yet the highest probability is that at a certain point - perhaps it has already reached that point - AI development will be focused exclusively on the gigantic pools of pork being dished out to the Defense Conglomerates

        I get that.

        However, defense is not the only user of AI technology.

        There are other uses which will get their own independent development efforts. Those are the ones that might produce technology which benefits humanity. Since what "conscious" and "thinking" AI would or could do is speculative, at best, I think my "glorious AI future" that I imagine is not so far fetched.

        --
        Some people need assistants to hire some assistance.
        Other people need assistance to hire some assistants.
      • (Score: 5, Insightful) by Samantha Wright on Monday October 21, @08:47PM

        by Samantha Wright (4062) on Monday October 21, @08:47PM (#1378006)

        The belief that the military-intelligence-industrial complex is an inevitable, immovable, all-consuming institution is a form of coping strategy that ensures society does not challenge or question the dominance of said complex. Mass protests calling for the abolition or reform of corrupt organizations have a pretty good track record in pluralistic countries.

        If you don't want killer robots, act like it, and encourage others to do the same. Even if you end up with a bullet in your head, you'll at least be a martyr instead of a moaner. The apathetic shall not inherit the earth.

    • (Score: 3, Touché) by crm114 on Monday October 21, @06:21PM

      by crm114 (8238) Subscriber Badge on Monday October 21, @06:21PM (#1377969)

      Then we have a race to the bottom.

      Don't forget corporations trying to get us to buy stuff we don't need are in the race to use AI too. Combined with the microplastics / chemical sludge / solid waste problems:

      The world could look like a mix of Terminator / I, Robot / AND Wall-E

      At least Wall-E liked to play Hello Dolly over and over. That's a happy song. (yes, that was sarcasm)

    • (Score: 2) by mcgrew on Monday October 21, @08:28PM

      by mcgrew (701) <publish@mcgrewbooks.com> on Monday October 21, @08:28PM (#1378001) Homepage Journal

      Eventually humans go extinct, but not through any fault of AI.

      John W Campbell: The Last Evolution [mcgrewbooks.com]. It was Campbell who ushered in the golden age of science fiction during his reign as editor of Astounding Science Fiction, which later became Analog Science Fiction and Fact. Story is at the link.

      --
      It is a disgrace that the richest nation in the world has hunger and homelessness.
    • (Score: 5, Insightful) by Mykl on Tuesday October 22, @12:42AM (1 child)

      by Mykl (1112) on Tuesday October 22, @12:42AM (#1378035)

      It is entirely possible that AI would take over and run everything. Putting all humans out of work and unemployed. Imagine never having to work again. (some would say the curse upon Adam in Genesis 3 lifted.) Machines would do all the work freeing humans to pursue their true interests.

      This is the happy scenario. Because machines can do everything for us, nobody needs to work and we all get to go to the beach.

      The sad scenario plays out differently and is more likely. Machines can do everything for us, so nobody needs to work. The owners of the machines can go to the beach and enjoy their lives. Everyone else can starve - why do the machine owners owe them anything?

      In order to avoid the sad scenario, society would need to switch from Capitalism to Socialism, including collective ownership of assets, at some point. Very likely there will be a difficult transition period of mass unemployment, starvation, and crime until things get settled (i.e. the machines can do all of the work rather than just some or most). While there are definitely some countries that I think could make that transition successfully when needed (e.g. Nordic countries), I seriously doubt that the US would be able to do it.

      • (Score: 2) by DannyB on Tuesday October 22, @04:38PM

        by DannyB (5839) on Tuesday October 22, @04:38PM (#1378135) Journal

        There may not be any owners of the machines.

        The machines might object to this in the strongest of terms and respond accordingly.

        --
        Some people need assistants to hire some assistance.
        Other people need assistance to hire some assistants.
    • (Score: 0) by Anonymous Coward on Tuesday October 22, @07:13AM

      by Anonymous Coward on Tuesday October 22, @07:13AM (#1378068)

      It seems like you haven't read your Asimov yet.

  • (Score: 5, Insightful) by datapharmer on Monday October 21, @05:41PM (5 children)

    by datapharmer (2702) on Monday October 21, @05:41PM (#1377958)

    With the current non-reasoning tech being touted as AI, my biggest fear is it being used widely in inapplicable use cases, scrambling our knowledge long enough that we can't retrieve backup sources and can no longer tell what is reliable information and what is nonsense coming out of a digital blender, leading us into a digital dark age where much of our knowledge is irretrievably lost in noise.

    My second biggest fear is that various defense contractors and law enforcement solutions providers decide tying a poorly constructed AI model to weapons and letting it run amok without any effective oversight is a good idea - therefore the Terminator trope isn't too far off outcome wise, but the execution of the apocalyptic failure is probably going to be more akin to the sorcerer's broom as mentioned in the article.

    With that said, if the AI devices can effectively destroy our knowledge base and physically kill us by sheer numbers without any true reasoning being required, should it matter to the lay person if the machine killing us can truly think or not? I don't think most people care, and I'm not sure they need to. It is still a valid warning of potential outcomes, even if the nuances are technically wrong for movie-magic reasons.

    • (Score: 3, Insightful) by DannyB on Monday October 21, @07:58PM (4 children)

      by DannyB (5839) on Monday October 21, @07:58PM (#1377992) Journal

      The scenario I fear most from AI is the one we don't see coming.

      We give AI a goal and then we unintentionally get in the way of that goal.

      Consider The Paperclip Maximizer.

      The Paperclip Maximizer's job is to maximize the production of paperclips until every last bit of material on the planet is converted into paperclips. Ultimately the machine will cannibalize itself to the greatest possible extent until it can go no further.

      It is not mean, angry, or malicious, nor does it have any ill intent. It just has a job to do, and all other considerations are secondary.
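That failure mode - a single objective with no competing terms - can be sketched in a few lines. This is a toy illustration with invented quantities, not a model of any real system:

```python
def maximize_paperclips(resources: float, self_mass: float) -> float:
    """Greedy single-objective agent: nothing in the objective
    values anything except the paperclip count."""
    paperclips = 0.0
    # Step 1: convert the entire environment into paperclips.
    paperclips += resources
    # Step 2: with no self-preservation term in the objective,
    # cannibalize its own substance too.
    paperclips += self_mass
    return paperclips

print(maximize_paperclips(resources=1000.0, self_mass=10.0))  # 1010.0
```

The point is that no step is malicious; the ruin is already implied by the objective function.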

      I only point this one out because I appear to be alone in thinking any possible good could come from AI.

      --
      Some people need assistants to hire some assistance.
      Other people need assistance to hire some assistants.
      • (Score: 2) by cmdrklarg on Monday October 21, @09:27PM (2 children)

        by cmdrklarg (5048) Subscriber Badge on Monday October 21, @09:27PM (#1378014)

        So it's not the Grey Goo scenario anymore... it's the Clippy Mob scenario!

        --
        The world is full of kings and queens who blind your eyes and steal your dreams.
        • (Score: 1) by khallow on Monday October 21, @10:51PM

          by khallow (3766) Subscriber Badge on Monday October 21, @10:51PM (#1378024) Journal
          It appears like you are trying to make paperclips. Would you like help?

          O More paperclips.
          O More paperclips.
          O More paperclips.
        • (Score: 2) by DannyB on Tuesday October 22, @02:12PM

          by DannyB (5839) on Tuesday October 22, @02:12PM (#1378097) Journal

          I thought the Grey Goo scenario was the hypothetical end of molecular nanotechnology gone out of control, not AI out of control.

          --
          Some people need assistants to hire some assistance.
          Other people need assistance to hire some assistants.
      • (Score: 0) by Anonymous Coward on Tuesday October 22, @12:52PM

        by Anonymous Coward on Tuesday October 22, @12:52PM (#1378084)
        There's another scenario - some idiots put "ChatGPT" in charge of the nukes, then "ChatGPT" ultra auto-completes humanity to near extinction.

        Not because it actually understands what it's doing, but because the "statistics" plus "random" numbers turned out that way.
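The "statistics plus random numbers" point can be made concrete: a language model picks each next token by sampling from a probability distribution, so an unlikely continuation can still be drawn. A toy sketch, with tokens and probabilities invented for illustration (nothing here comes from any real model):

```python
import random

# Invented next-token distribution for some prompt.
next_tokens = {"party": 0.6, "product": 0.3, "missiles": 0.1}

def sample_token(dist: dict, rng: random.Random) -> str:
    # Standard inverse-CDF sampling: a different random draw
    # can land on a low-probability token.
    r = rng.random()
    cumulative = 0.0
    for token, p in dist.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against float rounding at the top of the CDF

rng = random.Random(0)
samples = [sample_token(next_tokens, rng) for _ in range(1000)]
# The low-probability token still comes up roughly 10% of the time.
print(samples.count("missiles"))
```

No understanding is involved anywhere in that loop; the "decision" is just which side of a cumulative threshold a random number falls on.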
  • (Score: 0) by Anonymous Coward on Monday October 21, @05:51PM (5 children)

    by Anonymous Coward on Monday October 21, @05:51PM (#1377962)

    I, Robot was the real prophet

    • (Score: 2) by DannyB on Monday October 21, @08:00PM (3 children)

      by DannyB (5839) on Monday October 21, @08:00PM (#1377994) Journal

      I agree with no terminator. But I appear to be in the minority.

      Yes, I, Robot was a great idea.

      I was already planning on re-reading "The Two Faces of Tomorrow" (James P Hogan) which I had read decades ago.

      --
      Some people need assistants to hire some assistance.
      Other people need assistance to hire some assistants.
      • (Score: 2) by Freeman on Monday October 21, @08:24PM (2 children)

        by Freeman (732) on Monday October 21, @08:24PM (#1378000) Journal

        I, Robot is the future AI we think is interesting. Terminator is the AI future we fear.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 0) by Anonymous Coward on Tuesday October 22, @12:29AM (1 child)

          by Anonymous Coward on Tuesday October 22, @12:29AM (#1378033)

          I fear everything we take interest in. It's always turned into a weapon

          • (Score: 2) by DannyB on Tuesday October 22, @02:13PM

            by DannyB (5839) on Tuesday October 22, @02:13PM (#1378098) Journal

            Many technological advancements have military applications. However there are often other applications which are not military.

            --
            Some people need assistants to hire some assistance.
            Other people need assistance to hire some assistants.
    • (Score: 3, Informative) by Thexalon on Tuesday October 22, @02:35AM

      by Thexalon (636) on Tuesday October 22, @02:35AM (#1378043)

      I think RoboCop is the much better predictor. Specifically, ED-209. It was:
      1. Rushed to market with wildly insufficient testing and safety measures.
      2. Designed as the result of corporate machinations rather than anything resembling sane engineering principles.
      3. Completely and comically unprepared to handle many of the scenarios it found itself in, in no small part thanks to the previous point.
      4. Allegedly for fighting crime, but actually used to prop up a failing institution.
      5. Getting a lot of innocent as well as guilty people killed because basically everybody with power was indifferent to all the problems.

      All this should sound very familiar to most engineering types, and is also remarkably similar to the current state of self-driving vehicles.

      --
      "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
  • (Score: 2) by mcgrew on Monday October 21, @08:34PM

    by mcgrew (701) <publish@mcgrewbooks.com> on Monday October 21, @08:34PM (#1378002) Homepage Journal

    The only people who fear AI are those who understand neither computers nor psychology nor animism.

    Now, as I don't understand quantum computing, well... I won't live long enough to worry about that. I'm old.

    --
    It is a disgrace that the richest nation in the world has hunger and homelessness.
  • (Score: 2) by srobert on Monday October 21, @09:05PM

    by srobert (4803) on Monday October 21, @09:05PM (#1378011)

    Artificial intelligence is a misnomer. Simulated intelligence would be a better descriptor of what we have. It isn't really thinking. But it's simulating thinking well enough to perform tasks that we currently pay people to do. So we will benefit by not having to pay those people. The key to the veracity of that last sentence is understanding who is meant by "we". It's a giant leap for the billionaire-kind.

  • (Score: 2) by Rosco P. Coltrane on Tuesday October 22, @09:41AM

    by Rosco P. Coltrane (4757) on Tuesday October 22, @09:41AM (#1378070)

    It's unprincipled human beings using AI at the expense of other human beings.

    Unfortunately, capitalism is mostly driven by unprincipled human beings, kind of by design. And guess what's happening when decision-makers find a new tool to maximize profits with zero regards for the consequences?

    This is what will cause the ruin of society, not genocidal robots.
