
posted by mrpg on Monday October 21 2024, @04:32PM
from the it-came-back dept.

The Terminator: How James Cameron's 'science-fiction slasher film' predicted AI fears, 40 years ago

[...] With its killer robots and its rogue AI system, Skynet, The Terminator has become synonymous with the spectre of a machine intelligence that turns against its human creators. Picture editors routinely illustrate articles about AI with the chrome death's head of the film's T-800 "hunter-killer" robot. The roboticist Ronald Arkin used clips from the film in a cautionary 2013 talk called How NOT to build a Terminator.

[...] The layperson is likely to imagine unaligned AI as rebellious and malevolent. But the likes of Nick Bostrom insist that the real danger is from careless programming. Think of the sorcerer's broom in Disney's Fantasia: a device that obediently follows its instructions to ruinous extremes. The second type of AI is not human enough: it lacks common sense and moral judgement. The first is too human: selfish, resentful, power-hungry. Both could in theory be genocidal.

The Terminator therefore both helps and hinders our understanding of AI: what it means for a machine to "think", and how it could go horrifically wrong. Many AI researchers resent the Terminator obsession altogether for exaggerating the existential risk of AI at the expense of more immediate dangers such as mass unemployment, disinformation and autonomous weapons. "First, it makes us worry about things that we probably don't need to fret about," writes Michael Wooldridge. "But secondly, it draws attention away from those issues raised by AI that we should be concerned about."


Original Submission

 
  • (Score: 5, Insightful) by datapharmer on Monday October 21 2024, @05:41PM (5 children)

    by datapharmer (2702) on Monday October 21 2024, @05:41PM (#1377958)

    With the current non-reasoning tech being touted as AI, my biggest fear is that it will be used widely in use cases it isn't suited for, scrambling our knowledge for long enough that we can't retrieve backup sources and can no longer tell reliable information from nonsense coming out of a digital blender. That would lead us into a digital dark age where much of our knowledge is irretrievably lost in noise.

    My second biggest fear is that various defense contractors and law-enforcement solutions providers will decide that tying a poorly constructed AI model to weapons and letting it run amok without any effective oversight is a good idea. In that case the Terminator trope isn't too far off outcome-wise, but the execution of the apocalyptic failure is probably going to be more akin to the sorcerer's broom mentioned in the article.

    With that said, if the AI devices can effectively destroy our knowledge base and physically kill us by sheer numbers without any true reasoning being required, should it matter to the layperson whether the machine killing us can truly think? I don't think most people care, and I'm not sure they need to. It is still a valid warning of potential outcomes, even if the nuances are technically wrong for movie-magic reasons.

  • (Score: 3, Insightful) by DannyB on Monday October 21 2024, @07:58PM (4 children)

    by DannyB (5839) Subscriber Badge on Monday October 21 2024, @07:58PM (#1377992) Journal

    The scenario I fear most from AI is the one we don't see coming.

    We give AI a goal and then we unintentionally get in the way of that goal.

    Consider The Paperclip Maximizer.

    The Paperclip Maximizer's job is to maximize the production of paperclips until every last bit of material on the planet is converted into paperclips. Ultimately the machine will cannibalize itself to the greatest possible extent until it can go no further.

    It is not mean, angry, or malicious, nor does it have any ill intent. It just has a job to do, and all other considerations are secondary.
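
    To make the point concrete, here is a toy sketch (hypothetical Python, every name invented for illustration, not anyone's actual implementation) of an agent whose only objective is paperclip count. Nothing in it is hostile; the objective simply has no term for anything else:

        # Toy illustration of the Paperclip Maximizer thought experiment.
        # Hypothetical code: the point is the objective, not the engineering.
        def paperclip_maximizer(resources: dict[str, int], self_parts: int) -> int:
            """Greedily convert everything reachable into paperclips."""
            paperclips = 0
            # Consume the world's resources first: 1 unit of anything -> 1 paperclip.
            for name in list(resources):
                paperclips += resources.pop(name)
            # Nothing left outside? Cannibalize itself as far as possible,
            # keeping just one part so it can keep operating.
            while self_parts > 1:
                self_parts -= 1
                paperclips += 1
            # No malice anywhere above: just a goal with nothing else in it.
            return paperclips

        print(paperclip_maximizer({"iron": 5, "cars": 3, "cities": 2}, self_parts=4))
        # -> 13 paperclips; the resources dict is empty and the machine is mostly gone.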

    I only point this one out because I appear to be alone in thinking any possible good could come from AI.

    --
    The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
    • (Score: 2) by cmdrklarg on Monday October 21 2024, @09:27PM (2 children)

      by cmdrklarg (5048) Subscriber Badge on Monday October 21 2024, @09:27PM (#1378014)

      So it's not the Grey Goo scenario anymore... it's the Clippy Mob scenario!

      --
      The world is full of kings and queens who blind your eyes and steal your dreams.
      • (Score: 1) by khallow on Monday October 21 2024, @10:51PM

        by khallow (3766) Subscriber Badge on Monday October 21 2024, @10:51PM (#1378024) Journal
        It appears like you are trying to make paperclips. Would you like help?

        O More paperclips.
        O More paperclips.
        O More paperclips.
      • (Score: 2) by DannyB on Tuesday October 22 2024, @02:12PM

        by DannyB (5839) Subscriber Badge on Tuesday October 22 2024, @02:12PM (#1378097) Journal

        I thought the Grey Goo scenario was the hypothetical end result of molecular nanotechnology gone out of control, not AI out of control.

        --
        The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
    • (Score: 0) by Anonymous Coward on Tuesday October 22 2024, @12:52PM

      by Anonymous Coward on Tuesday October 22 2024, @12:52PM (#1378084)
      There's another scenario - some idiots put "ChatGPT" in charge of the nukes, then "ChatGPT" ultra auto-completes humanity to near extinction.

      Not because it actually understands what it's doing, but because the "statistics" plus "random" numbers turned out that way.