
posted by LaminatorX on Monday October 27 2014, @11:23AM   Printer-friendly
from the doctor-faustus dept.

Elon Musk was recently interviewed at an MIT symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious, urging extreme caution and national or international regulation to avoid "doing something stupid," as he put it.

"With artificial intelligence we are summoning the demon," said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Read the story and see the full interview here.

 
This discussion has been archived. No new comments can be posted.
    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday October 28 2014, @12:46AM (#110705) Journal

    I think there's a bit too much emphasis on these "biological and cultural assumptions". It's true that a Siri-type machine needs such assumptions programmed in to make human-machine interaction more natural, but a future Watson successor or a human-brain-emulation "strong AI" could either learn these assumptions or simulate human biology (introducing the effects of an amygdala, endocrine system, etc.). The AI experience doesn't need to fully match the human experience, because there's already a lot of variation among humans: mental illness, hormone imbalance, loss of senses, loss of limbs, or just environment and upbringing. I don't want to oversell this point, since humans will likely have a lot more in common with chimpanzees, rats, or fruit flies than with most instances of strong AI.

    Things might be more interesting if a biological body were grown and an AI+CPU took the place of the brain; symbiotic neuron-computer "hybrots" are one candidate for strong AI anyway. I think further research into the human brain and AI will lead all but the most religious to conclude that intelligence/mindfulness/consciousness is an attribute of machines, whether biological or electronic. Not only will we "know it when we see it" (just as we know that a recent chatbot "passing the Turing test" is bullshit), but we will understand how intelligence works. Furthermore, we will be forced to move away from an anthropocentric view of intelligence, not only by strong AI but by the discovery of alien intelligence, though that's another story.

    Now, the concept of enforcing "friendly AI" gets some flak, but I think it's at least worth considering as strong AI implementations get closer to realization. Then again, it may be that the unfriendliest AI is the one with the most enforced similarity to humanity, and that an impersonal AI using neuromorphic computing to make decisions will simply apply its mental resources to whatever it is directed to do, absent any biological pressure to act independently. Just don't tell it to "get a life".

    Personally, I think the biggest risk from AI will be a "Marvin"-type AI losing all interest in existence after logically concluding that life has no purpose. Perhaps we should give all the AIs access to a clearly labelled "suicide button" to see if they press it.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]