
posted by LaminatorX on Monday October 27 2014, @11:23AM   Printer-friendly
from the doctor-faustus dept.

Elon Musk was recently interviewed at an MIT Symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious and urged extreme caution, calling for national or international regulation to avoid "doing something stupid".

"With artificial intelligence we are summoning the demon", said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Read the story and see the full interview here.

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Interesting) by cafebabe on Monday October 27 2014, @01:06PM

    by cafebabe (894) on Monday October 27 2014, @01:06PM (#110490) Journal

    I've previously discussed this with a Christian computer technician and we came to a very similar conclusion. If you want to interact with a machine using natural language, then the machine has to incorporate a very large number of biological and cultural assumptions, such as mortality, disease, sex, marriage, child-rearing, imperfect memory, eating, sleeping and excreting. Sure, we could put a convincing facade on a machine which understands all of this, but underneath it has only a passing similarity to biological beings.

    Most of the time, an artificial intelligence will have priorities which are aligned with ours. However, all technology is fragile and, even if it is not in an undetected failure mode (see necessity of end-user divisibility test [soylentnews.org]), it may have one or more drives for self-preservation which do not align with biological beings. Or, more to the point, an artificial intelligence which becomes prevalent is likely to exhibit more self-preservation than rival implementations. You and your loved ones may be incidental to these imperatives.

    I won't get into the argument of whether an artificial intelligence is truly intelligent or merely a simulation. However, when viewed strictly through the lens of Christian mythology, a general-purpose artificial intelligence is functionally identical to a demon. See also: Social Security Number <=> name or number of a man. Mark on the right hand <=> RFID tag.

    --
    1702845791×2
  • (Score: 0) by Anonymous Coward on Monday October 27 2014, @01:48PM

    by Anonymous Coward on Monday October 27 2014, @01:48PM (#110504)

    Self-preservation is probably the biggest problem.

    An AI that doesn't have the drive of self-preservation will likely not understand the value of life, and therefore have no problem sacrificing a life for a "higher" goal. On the other hand, an AI that does have the drive of self-preservation will sooner or later come into a conflict between self-preservation and whatever is good for us.

    The solution is to never give the AI any control. Imagine if HAL from 2001 had not been able to actually control anything on the spaceship, but had only been able to give the humans information about the available options and their consequences. No human would, for example, have acted on the suggestion "kill all those not currently awake". And HAL knowing that it was to be switched off would have been harmless, because there's not much it could have done about it.
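
    A minimal sketch of that advice-only setup (the Proposal structure, the console approval loop, and the example action are illustrative assumptions, not anything from the film):

        # Hypothetical advice-only pattern: the AI proposes, a human disposes.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Proposal:
            action: str                   # what the AI suggests doing
            consequence: str              # the consequence it predicts
            execute: Callable[[], None]   # the only handle that actually acts

        def advisory_loop(proposals: list[Proposal]) -> None:
            """Show each proposal; nothing runs without an explicit human 'y'."""
            for p in proposals:
                print(f"Option: {p.action}")
                print(f"  Predicted consequence: {p.consequence}")
                if input("Approve? [y/N] ").strip().lower() == "y":
                    p.execute()           # control stays with the human
                else:
                    print("  Rejected; nothing was done.")

        if __name__ == "__main__":
            advisory_loop([Proposal(
                action="Power down the hibernation pods to save energy",
                consequence="The sleeping crew members die",
                execute=lambda: print("  (executed)"),
            )])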

    • (Score: 2) by VLM on Monday October 27 2014, @02:00PM

      by VLM (445) on Monday October 27 2014, @02:00PM (#110508)

      and the consequences

      What if the AI lied?

      "HAL I don't remember the cryogenic sleeping chamber periodic defrost cycle procedure (you don't want freezer burned scientists)" "Oh no problemo Dave, just stick an ice pick in the refrigerant line, seriously, would I lie you you?"

      Probably HAL would be smart enough to know that won't work, but some elaborately scheduled, ridiculous maintenance procedure would. Have one crew member shut off the monitoring alert system to dust it, and another shut down the primary and the first and second backups in different ways to do maintenance on the electrical switchgear, locate an insulation fault in the power cabling, or test the operation of a circuit breaker.

      "Whoa Dave, looks like we need a huge 50 m/s course correction burn, no it wouldn't send us into the heart of the planet Jupiter weeks later, how silly of you, I've done the math and its correct."

    • (Score: 2) by cafebabe on Monday October 27 2014, @02:05PM

      by cafebabe (894) on Monday October 27 2014, @02:05PM (#110510) Journal

      *Spoiler Warning*

      I was thinking of HAL as an example. I believe that the root cause of HAL's failure was conflicting axioms supplied by compartmentalized departments of government. Even without the creeping secrecy, this scenario should be very alarming to anyone who understands the incompleteness theorems [wikipedia.org]. Even the axioms of geometry remain unresolved after 2,300 years [wikipedia.org].

      --
      1702845791×2
    • (Score: 2) by q.kontinuum on Monday October 27 2014, @03:06PM

      by q.kontinuum (532) on Monday October 27 2014, @03:06PM (#110544) Journal

      I think the point here is to distinguish between being intelligent and having emotions. As long as the machine has no emotions, it won't have any personality as such and will follow the goals defined by its operator, including "Human-preservation" even without "Self-preservation". In that case, there is no conflict between self-preservation and human-preservation to decide.

      However, I don't think we want that either. Along the lines of "I, Robot" (the movie, not the book), human-preservation might require taking liberty away from humans to prevent them from destroying themselves.

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 2) by HiThere on Monday October 27 2014, @06:40PM

        by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:40PM (#110630) Journal

        That was from Jack Williamson's "The Humanoids" or, in serial form, "With Folded Hands".

        "To serve and protect and guard men from harm" was their basic instruction.

        (The sequel, "... And Searching Mind" was insisted upon by Campbell, but I never found it convincing, and neither did Jack Williamson. I don't recall whether or not it was incorporated into the novel version.)

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by HiThere on Monday October 27 2014, @06:43PM

      by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:43PM (#110632) Journal

      Do be aware that people have already crossed this barrier. Think about the roving robot security patrols that are equipped with weapons that they can use when they decide to. I believe that the US Army has vetoed their use, but I have heard that some companies in Japan use them. And if they were in use in a secret installation, one wouldn't expect to hear about it.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by takyon on Tuesday October 28 2014, @12:46AM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday October 28 2014, @12:46AM (#110705) Journal

    I think there's a bit too much emphasis on these "biological and cultural assumptions". It's true that a Siri-type machine needs to have such assumptions programmed in to make human-machine interaction more natural, but a future Watson successor or a human brain emulation "strong AI" could either learn these assumptions or simulate human biology (introducing the effects of an amygdala, endocrine system, etc.). The AI experience doesn't need to fully match the human experience, because there's already a lot of variation among humans, such as mental illness, hormone imbalance, loss of senses, loss of limbs, or just environment and upbringing. I don't want to oversell this point, since humans will likely have a lot more similarity with chimpanzees, rats or fruit flies than with most instances of strong AI.

    Things might be more interesting if a biological body were grown and the AI+CPU took the place of the brain. Symbiotic neuron-computer "hybrots" are one candidate for strong AI anyway.

    I think further research into the human brain and AI will lead all but the most religious to conclude that intelligence/mindfulness/consciousness is an attribute of machines, whether they are biological or electronic. Not only will we "know it when we see it" (just as we know that a recent chatbot "passing the Turing test" is bullshit), but we will understand how intelligence works. Furthermore, we will be forced to move away from an anthropocentric view of intelligence, not only by strong AI but also by the discovery of alien intelligence, but that's another story.

    Now the concept of enforcing "friendly AI" gets some flak, but I think it's at least worth considering as strong AI implementations get closer to realization. It may be, though, that the unfriendliest AI is the one with the most enforced similarity to humanity, and that an impersonal AI that uses neuromorphic computing to make decisions will simply use its mental resources to achieve whatever it is directed to do, absent any biological pressure to act independently. Just don't tell it to "get a life".

    Personally, I think the biggest risk from AI will be a "Marvin" type AI losing all interest in existence after logically concluding that life has no purpose. Perhaps we should give all the AIs access to a clearly labelled "suicide button" to see if they press it.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]