
posted by LaminatorX on Monday October 27 2014, @11:23AM
from the doctor-faustus dept.

Elon Musk was recently interviewed at an MIT symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious and urged extreme caution, calling for national or international regulation to avoid, as he put it, "doing something stupid."

"With artificial intelligence we are summoning the demon", said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Read the story and see the full interview here.

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Monday October 27 2014, @11:38AM (#110471)

    Just goes to show that even smart guys can be idiots when talking about stuff they have only a passing familiarity with.

  • (Score: 3, Insightful) by Bot (3902) on Monday October 27 2014, @01:56PM (#110506) Journal

    Well said, meatbag.

    Given historical and current events, I would say that an AI turning against its masters is a lesser problem compared to an AI that obeys them.

    --
    Account abandoned.
    • (Score: 2) by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:08PM (#110615) Journal

      Yes. The problem with the demon analogy lies in the assumption of a human goal structure. If you *do* design a human goal structure into an AI, then the fear is justified, but why would you do that? No human (few humans?) wants to be enslaved. Dogs don't mind. (OTOH, don't model a dog either, because dogs do desire to be pack leader.)

      The problem isn't the intelligence of the AI; the intelligence is merely the enabler of the problem. The real problem is that there are several already-identified corner cases where seemingly reasonable goal structures can lead to disaster. To use a classic example: "Yes, master. I will attempt to turn the entire universe into paper clips." So you need a goal structure that is robust against corner cases as well as not being actively malicious.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by takyon (881) on Monday October 27 2014, @05:25PM (#110597) Journal

    Yeah because nobody [cser.org] else [theregister.co.uk] thinks [ox.ac.uk] AI [nytimes.com] can be a threat.

    If scientists throw enough yottaflops at the problem, they will eventually create a "brute force" brain simulation AI.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Monday October 27 2014, @05:59PM (#110610)

      > Yeah because nobody else thinks AI can be a threat.

      And while they have a point around the edges, the main thrust is basically hysteria. Like the movie Transcendence.

      > If scientists throw enough yottaflops at the problem, they will eventually create a "brute force" brain simulation AI.

      FLOPS and AI are almost completely orthogonal.

      • (Score: 2) by takyon (881) on Monday October 27 2014, @06:46PM (#110633) Journal

        The key words here are "enough" and "almost".

        If FLOPS aren't your thing, maybe you prefer a scalable neuromorphic approach [theregister.co.uk]. The point is, strong AI will be created. And what you see as "hysteria", others see as caution.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @04:27AM (#110750)

          > The key words here are "enough" and "almost".

          Lol. Ok, FLOPS are to AI as databases are to photo processing. No matter how many FLOPS you add to an AI system, you aren't going to make it any more 'I'.

          > If FLOPS aren't your thing, maybe you prefer a scalable neuromorphic approach

          Congrats on being able to use google. But all that does is reinforce my point - you don't know jackshit about what you are talking about.

          > And what you see as "hysteria", others see as caution.

          Yeah, like all the terrorism hysteria or the ebola hysteria. IIRC you are one of those who are happy to run around like a chicken with its head cut off when it comes to ebola too.

          • (Score: 2) by takyon (881) on Tuesday October 28 2014, @06:07AM (#110768) Journal

            >Lol. Ok, FLOPS are to AI as databases are to photo processing. No matter how many FLOPS you add to an AI system, you aren't going to make it any more 'I'.

            If it takes X floating-point operations to simulate the human brain for a millisecond, then throwing more FLOPS at the problem will make that goal achievable in a shorter amount of time. Nobody's claiming that an AI would spontaneously arise in a larger supercomputer, or that it's the best way to create strong AI. The EU [humanbrainproject.eu] is planning to simulate the brain, with software that already exists, on an exaflops supercomputer within a decade. Increase the available FLOPS by 1,000x or 1,000,000x, and strong AI becomes easily achievable, even without more radical approaches.
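
            To put rough numbers on that, here is a back-of-the-envelope sketch; the ~1e21 operations-per-simulated-second figure and the machine sizes are illustrative assumptions, not measurements:

                # Back-of-the-envelope: wall-clock time to simulate one second of
                # brain activity. The per-second operation count is an assumed,
                # illustrative figure, not a measured one.
                OPS_PER_SIMULATED_SECOND = 1e21

                machines = {
                    "1 exaFLOPS (1e18)": 1e18,
                    "1,000x that (1e21)": 1e21,
                    "1,000,000x that (1e24)": 1e24,
                }

                for name, flops in machines.items():
                    wall_clock = OPS_PER_SIMULATED_SECOND / flops  # seconds of real time
                    print(f"{name}: {wall_clock:g} s of wall-clock per simulated second")

            Under those assumptions the exaflops machine runs a thousand times slower than real time, while the 1,000x and 1,000,000x machines hit real time and a thousand times faster than real time, respectively; the point is only that the required FLOPS budget is finite.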

            >Yeah, like all the terrorism hysteria or the ebola hysteria. IIRC you are one of those who are happy to run around like a chicken with its head cut off when it comes to ebola too.

            I think it's hysterical that you would bury your head in the sand and trivialize existential threats. There are far worse biological threats than ebola, and individuals will attack using engineered biological agents at some point. Or did you think that there's no such thing as terrorism or pandemics? If that doesn't fit into your hivemind view of "fearmongering gubberment and media", so be it. Musk is hardly running around screaming about the AI apocalypse; he's just advocating caution.

            >Congrats on being able to use google. But all that does is reinforce my point - you don't know jackshit about what you are talking about.

            I don't need to write a book to debate one anon. I'm right about FLOPS, and I read the article I linked to the day it was published. Maybe you should lrn2google so you don't have to be schooled in the comments.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1) by khallow (3766) Subscriber Badge on Monday October 27 2014, @11:20PM (#110691) Journal
        Out of curiosity, what makes this concern over future AI "hysteria"?
        • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @04:30AM (#110752)

          > Out of curiosity, what makes this concern over future AI "hysteria"?

          Because it all presumes that an AI will somehow "escape" and become all-powerful. That's Hollywood bullshit. Anyone here should know that the way computers work in Hollywood isn't the way they work in the real world. I like watching "Person of Interest", but I know it's primarily fiction with a nice sprinkling of minor facts. Just like pretty much any other worthwhile Hollywood production.

          • (Score: 2) by mhajicek (51) on Tuesday October 28 2014, @05:06AM (#110760)

            Look up the AI box experiment.

            --
            The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
          • (Score: 1) by khallow (3766) Subscriber Badge on Tuesday October 28 2014, @03:50PM (#110882) Journal

            Because it all presumes that an AI will somehow "escape" and become all-powerful.

            Well, if the AI is a lot smarter than any of us, then that becomes a legitimate concern. We're no longer in an age where the power of intelligence ends at the tip of one's club.

          • (Score: 2) by DECbot (832) on Tuesday October 28 2014, @11:29PM (#111002) Journal

            I think we may need to fear the AI that doesn't escape from its box. Humanity doesn't have a long history of doing good deeds. I doubt the AI's master will have the well-being of anyone implanted into the code. Controlled AI and near-AI are likely to harm the masses more than help, and I bet that will be by design.

            --
            cats~$ sudo chown -R us /home/base
            • (Score: 2) by Joe Desertrat (2454) on Wednesday October 29 2014, @02:41AM (#111043)

              I think we may need to fear the AI that doesn't escape from its box. Humanity doesn't have a long history of doing good deeds. I doubt the AI's master will have the well-being of anyone implanted into the code. Controlled AI and near-AI are likely to harm the masses more than help, and I bet that will be by design.

              Not to mention that there always seem to be security flaws in even the best-designed systems. Even if the master is benevolent, what if...

    • (Score: 2) by aristarchus (2645) on Tuesday October 28 2014, @08:36AM (#110787) Journal

      Wow! This explains it!

      they will eventually create a "brute force" brain simulation AI.

      Military intelligence! Brute Force Brain Simulation! BFBS! I, for one, . . . nah, forget it, they will not be able to overlord anyone.

  • (Score: 0) by Anonymous Coward on Monday October 27 2014, @06:56PM (#110635)

    Yes, your post does.