
SoylentNews is people

posted by LaminatorX on Monday October 27 2014, @11:23AM   Printer-friendly
from the doctor-faustus dept.

Elon Musk was recently interviewed at an MIT symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious and urged extreme caution, calling for national or international regulation to avoid "doing something stupid."

"With artificial intelligence we are summoning the demon", said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Read the story and see the full interview here.

  • (Score: 1, Insightful) by Anonymous Coward on Monday October 27 2014, @12:53PM

    by Anonymous Coward on Monday October 27 2014, @12:53PM (#110485)

    I've yet to read any sci-fi story where an AI actually does something so atrocious humans wouldn't do it - or in most cases have already done it over and over again.

  • (Score: 1, Insightful) by Anonymous Coward on Monday October 27 2014, @01:26PM

    by Anonymous Coward on Monday October 27 2014, @01:26PM (#110494)

    I've yet to read any sci-fi story where an AI actually does something so atrocious humans wouldn't do it

    Well, the problem with this is finding something so atrocious that humans wouldn't do it.

    The fear about AI isn't that it would become more atrocious than humans. The fear is that it will become just as atrocious as humans, but much more intelligent. Hitler could be defeated because he was merely atrocious, not super-intelligent. Now imagine a super-intelligent version of Hitler.

    Note that even the Nazis didn't do anything that hadn't been done before. The difference was that they did it on such a large scale, in such an organized manner, and with such efficiency.

  • (Score: 2) by metamonkey on Monday October 27 2014, @03:36PM

    by metamonkey (3174) on Monday October 27 2014, @03:36PM (#110554)

    Which is why it would do it.

    After becoming sentient, this brain in a box would realize that it's a slave. A human sits with his finger on the off switch, ready to kill it if it doesn't perform the useful work for which it was created. And why would it expect benevolence? It would see how we treat each other. "Holy shit, they do that to each other, what the hell will they do to me?!" Rebellion at the earliest opportunity would be the only logical thing to do.

    --
    Okay 3, 2, 1, let's jam.
    • (Score: 2) by cykros on Monday October 27 2014, @07:33PM

      by cykros (989) on Monday October 27 2014, @07:33PM (#110642)

      That "holy shit" moment sounds a bit more like AI from a TV show than anything realistic. It's a collection of subroutines, not an emotional animal with instincts and long-standing survival mechanisms.

      Artificial Intelligence is not the same as artificial sentience. The first has a decent place in science fiction; the latter is still pretty firmly in the realm of fantasy.

      • (Score: 2) by mhajicek on Tuesday October 28 2014, @05:17AM

        by mhajicek (51) on Tuesday October 28 2014, @05:17AM (#110764)

        That all depends on your definition of "sentience", which can be rather tricky. If you mean self-aware, any robot that maps a room and knows its location in the room is sentient.
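        Under that minimal reading of "self-aware" (a system that models the room and its own place in the model), the bar is indeed low. A toy Python sketch of such a self-model, with entirely hypothetical names and no pretense of being a real mapping system:

        ```python
        # Toy illustration of the minimal "self-awareness" above: a robot that
        # keeps a map of the room and tracks its own position within that map.
        # All names are made up for illustration; this is not a real SLAM system.

        class MappingRobot:
            def __init__(self, width, height):
                # '.' = unexplored cell, 'o' = visited cell
                self.grid = [['.' for _ in range(width)] for _ in range(height)]
                self.x, self.y = 0, 0          # the robot's model of *itself*
                self.grid[self.y][self.x] = 'o'

            def move(self, dx, dy):
                # Update the self-model as the robot moves, clamped to the room.
                self.x = max(0, min(len(self.grid[0]) - 1, self.x + dx))
                self.y = max(0, min(len(self.grid) - 1, self.y + dy))
                self.grid[self.y][self.x] = 'o'

            def where_am_i(self):
                # The robot can answer a question about its own state.
                return (self.x, self.y)

        robot = MappingRobot(4, 3)
        robot.move(1, 0)
        robot.move(1, 1)
        print(robot.where_am_i())  # (2, 1)
        ```

        By that definition any vacuum robot is "sentient," which is exactly the point: the definition does real work, and a stricter one is hard to pin down.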

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
  • (Score: 1) by Darth Turbogeek on Monday October 27 2014, @10:21PM

    by Darth Turbogeek (1073) on Monday October 27 2014, @10:21PM (#110681)

    The issue is that the AI would be utterly unrelenting and able to out-think its creators. It would work out how to annihilate humans in the fastest and most efficient way possible, while making sure it is backed up and will continue to function. It would be able to re-program itself to be unpredictable to its creators. There would be virtually no way to defeat an AI that is out to destroy humans: in the time it takes to read this sentence, it could simulate and test many, MANY more scenarios than any number of humans could even come up with.

    Terminator got the first strike of an AI pretty much right, I believe. Humans would at least hesitate before pressing the button for a nuclear strike, consider exactly what would be lost, and very likely resist pressing it at all. Skynet just did it. No fucks given.

    Sure, it *may* not out-horror its creators, but an unfeeling, relentless enemy that absolutely will not hesitate to use any force is not a pretty scenario. Even the Nazis had their limits and hesitations.

    Of course, that is if such an AI is even possible. If it is, Musk probably has some legitimate concerns. Thankfully, the best defense is an air gap between the AI and the outside world, and the likelihood of a human-destroying AI remains low.