
posted by LaminatorX on Monday October 27 2014, @11:23AM   Printer-friendly
from the doctor-faustus dept.

Elon Musk was recently interviewed at an MIT symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious and urged extreme caution and national or international regulation to avoid, as he put it, "doing something stupid".

"With artificial intelligence we are summoning the demon", said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Read the story and see the full interview here.

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Monday October 27 2014, @01:48PM

    by Anonymous Coward on Monday October 27 2014, @01:48PM (#110504)

    Self-preservation is probably the biggest problem.

    An AI that doesn't have the drive of self-preservation will likely not understand the value of life, and therefore have no problem sacrificing a life for a "higher" goal. On the other hand, an AI that does have the drive of self-preservation will sooner or later come into a conflict between self-preservation and whatever is good for us.

    The solution is to never give the AI any control. Imagine if HAL from 2001 had not been able to actually control anything on the spaceship, but had only been able to give the humans information about the available options and their consequences. No human would have acted on, say, the suggestion "kill all those not currently awake". And HAL knowing that it was about to be switched off would have been harmless, because there is not much it could have done about it.
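
    A minimal sketch of what such an advisory-only setup could look like (hypothetical names and classes, purely for illustration; nothing here is from the interview or the discussion above):

        # Hypothetical sketch: an "advisory-only" AI that can describe options and
        # predicted consequences, but has no access to any actuators. Only the
        # human operator can act. All names here are made up for illustration.
        from dataclasses import dataclass

        @dataclass
        class Recommendation:
            action: str       # what the AI suggests doing
            consequence: str  # the AI's predicted outcome of that action

        class AdvisoryAI:
            """Analyses a situation and returns suggestions; it controls nothing."""
            def recommend(self, situation: str) -> list[Recommendation]:
                # Placeholder reasoning; a real system would analyse `situation`.
                return [Recommendation("vent module B", "cabin pressure returns to normal")]

        class HumanOperator:
            """Only the human holds the interface to the ship's controls."""
            def review(self, options: list[Recommendation]) -> None:
                for opt in options:
                    print(f"AI suggests: {opt.action} -> expected: {opt.consequence}")
                # The human decides what, if anything, to execute;
                # the AI never touches the controls directly.

        if __name__ == "__main__":
            HumanOperator().review(AdvisoryAI().recommend("cabin pressure alarm"))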

  • (Score: 2) by VLM on Monday October 27 2014, @02:00PM

    by VLM (445) on Monday October 27 2014, @02:00PM (#110508)

    and the consequences

    What if the AI lied?

    "HAL I don't remember the cryogenic sleeping chamber periodic defrost cycle procedure (you don't want freezer burned scientists)" "Oh no problemo Dave, just stick an ice pick in the refrigerant line, seriously, would I lie you you?"

    Probably HAL would be smart enough to know that won't work, but some elaborately scheduled, ridiculous maintenance procedure would. Have one crew member shut off the monitoring alert system to dust it, and have others shut down the primary and the first and second backups in different ways, to do maintenance on the electrical switchgear, locate an insulation fault in the power cabling, or test the operation of a circuit breaker.

    "Whoa Dave, looks like we need a huge 50 m/s course correction burn, no it wouldn't send us into the heart of the planet Jupiter weeks later, how silly of you, I've done the math and its correct."

  • (Score: 2) by cafebabe on Monday October 27 2014, @02:05PM

    by cafebabe (894) on Monday October 27 2014, @02:05PM (#110510) Journal

    *Spoiler Warning*

    I was thinking of HAL as an example. I believe the root cause of HAL's failure was conflicting axioms supplied by compartmentalized departments of government. Even without the creeping secrecy, this scenario should be very alarming to anyone who understands incompleteness theory [wikipedia.org]. Even the axioms of geometry remain unresolved after 2,300 years [wikipedia.org].

    --
    1702845791×2
  • (Score: 2) by q.kontinuum on Monday October 27 2014, @03:06PM

    by q.kontinuum (532) on Monday October 27 2014, @03:06PM (#110544) Journal

    I think the point here is to distinguish between being intelligent and having emotions. As long as the machine has no emotions, it won't have any personality as such and will follow the goals defined by its operator, including "human-preservation" even without "self-preservation". In that case there is no conflict between self-preservation and human-preservation to resolve.

    However, I don't think we want that either. Along the lines of "I, Robot" (the movie, not the book), human-preservation might require taking liberty away from humans to prevent them from destroying themselves.

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 2) by HiThere on Monday October 27 2014, @06:40PM

      by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:40PM (#110630) Journal

      That was from Jack Williamson's "The Humanoids" or, in serial form, "With Folded Hands".

      "To serve and protect and guard men from harm" was their basic instruction.

      (The sequel, "... And Searching Mind" was insisted upon by Campbell, but I never found it convincing, and neither did Jack Williamson. I don't recall whether or not it was incorporated into the novel version.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by HiThere on Monday October 27 2014, @06:43PM

    by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:43PM (#110632) Journal

    Do be aware that people have already crossed this barrier. Think about the roving robot security patrols that are equipped with weapons that they can use when they decide to. I believe that the US Army has vetoed their use, but I have heard that some companies in Japan use them. And if they were in use in a secret installation, one wouldn't expect to hear about it.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.