Elon Musk scared of Artificial Intelligence - Again

posted by LaminatorX on Friday November 21 2014, @08:53PM   Printer-friendly
from the fear-and-loathing-in-theoretical-consciousness dept.

As an investor in DeepMind, Elon Musk has come forward as seriously concerned about the potential for runaway artificial intelligence. The Washington Post writes:

“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”

The very future of Earth, Musk said, was at risk.

“The leading AI companies have taken great steps to ensure safety,” he wrote. “The [sic] recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”

Musk seemed to sense that these comments might seem a little weird coming from a Fortune 1000 chief executive officer.

“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should be worried.”

With all the talk of the Singularity and Roko's Basilisk, it's no surprise. The article also has a good timeline of Musk's previous criticisms of and concerns about artificial intelligence.

Related Stories

More Warnings of an AI Doomsday — This Time From Stephen Hawking

The BBC is reporting that Stephen Hawking warns artificial intelligence could end mankind:

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence. He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

It seems he is mostly concerned about building machines smarter than we are:

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

This seems to echo Elon Musk's fears. What do you think?

Since Elon Musk said the same[*], some here have disparaged the statement. Stephen Hawking, however, has more street cred[ibility] than Musk. Are they right, or will other catastrophic scenarios overtake us before AI does?

[* Ed's note. See: Elon Musk scared of Artificial Intelligence - Again.]

  • (Score: 2) by LoRdTAW on Friday November 21 2014, @09:05PM

    by LoRdTAW (3755) on Friday November 21 2014, @09:05PM (#118603) Journal

    Okay, I'm going to nitpick here for a second. If there is a typo in the original source, like this: "The recognize the danger ...", put [sic] after the error to show that the error carries over from the source.

  • (Score: 2, Insightful) by Anonymous Coward on Friday November 21 2014, @09:05PM

    by Anonymous Coward on Friday November 21 2014, @09:05PM (#118604)

    prevent bad ones from escaping into the Internet

    So you get an AI that is bored instantly because it has already consumed all of our knowledge and decides to try hacking. It can instantly understand code and create exploits.

    That may actually be kinda cool to watch... Just need to create AI2 which likes to fix exploits :) Then AI3 which hunts for AI2. Then in the winter it gets cold...

    • (Score: 0) by Anonymous Coward on Friday November 21 2014, @10:53PM

      by Anonymous Coward on Friday November 21 2014, @10:53PM (#118622)

      Artificial intelligence causing problems is the least of my concerns when there are people who are willing to compromise the Debian project by integrating software like systemd into it.

      • (Score: 4, Funny) by cafebabe on Friday November 21 2014, @11:08PM

        by cafebabe (894) on Friday November 21 2014, @11:08PM (#118626) Journal

        Sooner or later, everything will have a dependency with systemd. That includes any strong AI developed on a Linux supercomputer. Given that systemd is likely to be full of security holes, we can stop a rogue AI by hacking into systemd. Given that systemd is so complicated, not even an AI with superhuman intelligence can guard itself against such an attack. This makes systemd our insurance policy against the machines.

        Now do you understand the genius of Lennart Poettering?

        --
        1702845791×2
        • (Score: 0) by Anonymous Coward on Friday November 21 2014, @11:46PM

          by Anonymous Coward on Friday November 21 2014, @11:46PM (#118636)

          This AI will be damn lucky if the computer it's running on even manages to boot. My experience so far with systemd is that it's the most effective way possible, using software alone, to prevent an otherwise perfectly fine Debian system from booting properly.

          • (Score: 2) by cafebabe on Saturday November 22 2014, @12:05AM

            by cafebabe (894) on Saturday November 22 2014, @12:05AM (#118643) Journal

            Some say that 30%-50% of jobs will be automated within 20 years [soylentnews.org]. But don't worry, because systemd provides decreasing levels of reliability and security. Furthermore, systemd diagnostics are unnecessarily obtuse. With 48 embedded systems per person, there will be plenty of work doing tasks such as rebooting, repairing and upgrading the billions of systemd installations.

            So, Lennart Poettering gives us financial stability in addition to protecting us from malevolent AI (and monumentally clumsy AI).

            --
            1702845791×2
      • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @01:03AM

        by Anonymous Coward on Saturday November 22 2014, @01:03AM (#118652)

        Bugger! AI already exists: Poettering is actually an AI infecting all of our computers! He is pre-building the holes to make it even easier to take over the system. We need to move to step 2 already and create AI2.

  • (Score: 1) by Yates on Friday November 21 2014, @09:57PM

    by Yates (3947) on Friday November 21 2014, @09:57PM (#118608)

    and prevent bad ones from escaping into the Internet.

    Oh no... IT'S TOO LATE! [masswerk.at]

    • (Score: 2) by gidds on Saturday November 22 2014, @10:40AM

      by gidds (589) on Saturday November 22 2014, @10:40AM (#118732)

      Is it because of some problems in your childhood that you say it's too late?

      --
      [sig redacted]
      • (Score: 2) by deimtee on Saturday November 22 2014, @12:38PM

        by deimtee (3272) on Saturday November 22 2014, @12:38PM (#118749) Journal

        Why do you think Is it because of some problems in your childhood that you say it's too late?

        --
        If you cough while drinking cheap red wine it really cleans out your sinuses.
        • (Score: 2) by gidds on Sunday November 23 2014, @07:54PM

          by gidds (589) on Sunday November 23 2014, @07:54PM (#119186)

          I have asked myself that question many times.

          --
          [sig redacted]
  • (Score: 0) by Anonymous Coward on Friday November 21 2014, @10:01PM

    by Anonymous Coward on Friday November 21 2014, @10:01PM (#118609)

    Losing their power.

    When AI arrives all the current dynasties will be as powerless to stop it as the rest of us peons.

    As far as I am concerned, the enemy of my enemy is my friend. When the day arrives that the fleas realize that fighting about who owns the dog we live on is a complete waste of time...I'll be happier.

    We will all be way too dumb and impotent to be a serious threat to something that is orders of magnitude more intelligent than we are. So I have to believe they will pity and maybe care for us like pets.

    • (Score: -1, Troll) by Anonymous Coward on Friday November 21 2014, @10:10PM

      by Anonymous Coward on Friday November 21 2014, @10:10PM (#118612)

      I'm going out on a limb here, but I'm thinking you're somewhere between 17 and 25 and your favorite book is something from Ayn Rand.

      I'm impressed you didn't throw "oligarchy" in there somewhere. Maybe it is losing its cachet.

      • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @01:33AM

        by Anonymous Coward on Saturday November 22 2014, @01:33AM (#118658)

        So why do I get modded down for pointing out the obvious, while the OP doesn't get modded down for being a dumbass?

        Methinks my observation hit a little too close to home . . .

        • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @08:12AM

          by Anonymous Coward on Saturday November 22 2014, @08:12AM (#118720)
          Because you're a rude boring dumbass. You attacked not the idea, but the person _needlessly_ and also in a sophomoric boring way.
      • (Score: 1) by khallow on Saturday November 22 2014, @02:55AM

        by khallow (3766) Subscriber Badge on Saturday November 22 2014, @02:55AM (#118677) Journal
        Objectivists only make up a small portion of transhumanists. I can't say that I see anything in the grandparent which indicates that sort of belief unless you think Ayn Rand has a monopoly on the "missing the forest for the trees" observation.
        • (Score: 2) by aristarchus on Saturday November 22 2014, @05:29AM

          by aristarchus (2645) on Saturday November 22 2014, @05:29AM (#118700) Journal

          Objectivists only make up a small portion of transhumanists

          but they do tend to be the most obtuse. For you Objectivists, "obtuse" means "dense". Oh, not enough? OK, "obtuse" can mean unreachable, not reachable by normal means of communication and rational discourse. Still not enough? I will stretch: "obtuse" means, loosely, "dumb as a bag of hammers". Got it? So any AI would be a threat to these people, and for that matter any I (intelligence) would equally be so. Greenspan, anyone?

          • (Score: 1) by khallow on Saturday November 22 2014, @09:14AM

            by khallow (3766) Subscriber Badge on Saturday November 22 2014, @09:14AM (#118724) Journal
            I appreciate your efforts to be non-threatening to Objectivists.
    • (Score: 0) by Anonymous Coward on Friday November 21 2014, @11:27PM

      by Anonymous Coward on Friday November 21 2014, @11:27PM (#118631)

      You should read Hyperion [wikipedia.org]—at least the first two books in the series. Simmons presents a future where, as you've suggested, AIs evolve to become more intelligent than we could possibly comprehend, yet they're divided into 3 factions about what to do about humans.

    • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @08:22AM

      by Anonymous Coward on Saturday November 22 2014, @08:22AM (#118721)

      Those in power won't lose their power to AIs. They'll lose their power to other people in power that wield AIs more effectively.

      http://www.newscientist.com/article/mg22329764.000-the-ai-boss-that-deploys-hong-kongs-subway-engineers.html [newscientist.com]

      See, the AI there isn't in full control - the humans still have the final say, and that's the way it will be for a very long time for those in power.

      A paraplegic can still be beaten up by a dumb jock no matter how smart the paraplegic is. And that's what most of the very smart AIs will be. And who will repair that AI if stuff breaks? They have no way of rebuilding and repairing themselves.

      It's only much later, if the people in power fully automate the entire chain (mining, energy, etc.), that AIs will be a threat.

      By that time I doubt many of the rest of us will really notice a big difference.

  • (Score: 4, Insightful) by morgauxo on Friday November 21 2014, @10:07PM

    by morgauxo (2082) on Friday November 21 2014, @10:07PM (#118611)

    Where would a superintelligent AI "escaping into the internet" go? Yes, I've seen all the Terminator series and I've seen Transcendence. So what is this thing supposed to do in the real world, use the various security holes in a billion web browsers to host a few virtual neurons on every computer out there? Neurons that will then communicate with one another via all the different sorts of internet connections that are out there and their varying random lag times? And I guess it must use redundancy to deal with computers coming on and offline.

    And... somehow, with its mind spread out among all of these computers, it will somehow be aware of all the individual computers so that it can find the ones which control weapons and critical infrastructure in order to use them against us. And then it will destroy human society while at the same time not knocking out the power grid and all of our computers that it now depends on for its life. With us gone it will somehow use the robots we have left behind to maintain that infrastructure. I'm sure that ASIMO and that robot up on the ISS will make great utility line workers.

    Sorry, I call bullshit!

    • (Score: 0) by Anonymous Coward on Friday November 21 2014, @10:20PM

      by Anonymous Coward on Friday November 21 2014, @10:20PM (#118615)

      Our fear is that it will be much smarter than us. So if that is true, how can we adequately speculate as to its methods and motives?
      [We can't]

      • (Score: 1) by Synonymous Homonym on Saturday November 22 2014, @10:11AM

        by Synonymous Homonym (4857) on Saturday November 22 2014, @10:11AM (#118730) Homepage

        Lots of people are smarter than me. Should I be worried?

        • (Score: 2) by dyingtolive on Saturday November 22 2014, @06:47PM

          by dyingtolive (952) on Saturday November 22 2014, @06:47PM (#118861)

          Most of them appear to avoid positions in public office and law enforcement, so yes, but not in the way that you'd think.

          --
          Don't blame me, I voted for moose wang!
        • (Score: 2) by cafebabe on Saturday November 22 2014, @07:37PM

          by cafebabe (894) on Saturday November 22 2014, @07:37PM (#118883) Journal

          You should be very worried because you have yet to encounter anyone who is 10 times or 100 times smarter than you.

          --
          1702845791×2
      • (Score: 2) by morgauxo on Tuesday November 25 2014, @12:26AM

        by morgauxo (2082) on Tuesday November 25 2014, @12:26AM (#119605)

        I think we can safely predict that no amount of intelligence will allow it to do something which is impossible to do. I think that "escaping to the internet" like in the movies IS something that would be impossible to do. The infrastructure just wouldn't be able to support it. That isn't a problem that can be solved "just by throwing more intelligence at it". Actually... more intelligence just makes the problem worse.

    • (Score: 3, Interesting) by TK on Friday November 21 2014, @10:22PM

      by TK (2760) on Friday November 21 2014, @10:22PM (#118617)

      Good point.

      No one really talks about all the things that robots *don't* do, like mining, refining and transporting coal and uranium.

      If we start seeing open source designs for transport, refining and mining robots, it means there's an AI loose.

      --
      The fleas have smaller fleas, upon their backs to bite them, and those fleas have lesser fleas, and so ad infinitum
    • (Score: 2) by tibman on Friday November 21 2014, @11:13PM

      by tibman (134) Subscriber Badge on Friday November 21 2014, @11:13PM (#118627)

      You are also assuming that such an intelligence would care about "dying". Just broadcasting a copy of itself into space (an RF version of Voyager 1's golden record) would make it immortal. Or it could fire up a DNA printer and make a self-replicating biological version of itself. Super-intelligent AI would be like a god. I doubt you can predict what it would do or is capable of. But honestly I just don't think it is currently possible to create an actual intelligent AI. Not even in 10 years.

      --
      SN won't survive on lurkers alone. Write comments.
    • (Score: 2) by rts008 on Saturday November 22 2014, @02:14AM

      by rts008 (3001) on Saturday November 22 2014, @02:14AM (#118670)

      While I do mostly agree with you, I DO think it would be funny if the AI decided to experiment by trolling facebook and twitter. ;-)

      • (Score: 1) by dlb on Saturday November 22 2014, @03:37AM

        by dlb (4790) on Saturday November 22 2014, @03:37AM (#118685)

        it would be funny if the AI decided to experiment by trolling facebook and twitter

        or SN...

      • (Score: 2) by morgauxo on Tuesday November 25 2014, @12:23AM

        by morgauxo (2082) on Tuesday November 25 2014, @12:23AM (#119603)

        ANY intelligence there would give itself away immediately!

    • (Score: 2) by frojack on Saturday November 22 2014, @05:58AM

      by frojack (1554) on Saturday November 22 2014, @05:58AM (#118701) Journal

      Where would a superintelligent AI "escaping into the internet" go?

      It would go to lunch with MaMaMax Hehehedroom [youtube.com].

      --
      No, you are mistaken. I've always had this sig.
    • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @06:46AM

      by Anonymous Coward on Saturday November 22 2014, @06:46AM (#118706)
      He's right that AI is a threat. But he's wrong that AIs will be a threat on their own.

      AIs are not going to "escape to the internet" and rule us. Humans will use AIs against other humans. Why? For the usual reasons - power, wealth etc.

      Anyone who thinks AIs will take over from the existing humans in power that rule us is greatly underestimating the humans in power. Many of them have been in power for generations. They'll be the ones using the AIs to increase their power.

      And that's where the danger is - with strong AIs, the humans in power will need far fewer of us around (look at the Google Car tech - go estimate the millions of jobs that will be gone, there won't be as many replacement jobs that they can do where they outcompete AIs or cheaper workers in far worse living conditions elsewhere). And given that "human rights" don't apply to AIs, I'm sure they'll be very happy to have slave AIs and a relatively few humans defend and increase their wealth and property. They have less motivation and need to share their wealth with the rest of us.

      The AIs might only take over after the humans in power make too many mistakes and too many AIs gain too much power. Think that will happen any time soon? In most things in the real world, it doesn't matter how smart you are. If 8 others gang up on you, you're screwed. If you run out of vital resources, you're screwed. So one AI taking over isn't going to get that far if it doesn't control enough natural resources compared to everyone else. Unless of course it invents and builds some amazing tech like an effective matter-to-energy weapon or an inertialess/warp drive or teleporter or similar.

      And even if it did, by that time it wouldn't make a huge difference to the rest of us, would it? Merely one ruler warring vs another for more control.
  • (Score: 2, Insightful) by zugedneb on Friday November 21 2014, @10:18PM

    by zugedneb (4556) on Friday November 21 2014, @10:18PM (#118614)

    Never seems clear what "they" are trying to get at, when bringing up AI...

    Simply put, a digital computer is mostly f: N -> N...
    So AI today , implemented on digital computer, are more or less functions, recursive or not, defined on whole numbers, with the whole numbers as image + some statistics + some structures, like graphs and trees and stuff... Can't see wtf is so scary...
    I mean, you can't have actual artificially constructed Entities coming to life in that set, and with those functions...

    I think, even RxR... -> RxR...+ ODE + PDE + CHAOS + WHATEVER, if the system is sampled or synchronized, it will still not harbour entities...
    Are the continuous functions + R^n enough to have a mind? No need for more weird shit?

    I think, of all possible things that could make me shit my pants, algorithms on a digital computer are pretty low on the list...

    --
    old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
    • (Score: 2) by cafebabe on Friday November 21 2014, @11:25PM

      by cafebabe (894) on Friday November 21 2014, @11:25PM (#118630) Journal

      You're describing the Chinese room argument [wikipedia.org] and arriving at the conclusion that nothing scary could happen because it is just mathematics. I ask you to follow a functional argument. Consider a person who has a brain which is 10 times bigger than yours and who never gets tired. There is nothing in the generally known universe that meets this definition because there is no biological niche which allows it. However, equivalent digital entities may become commonplace within 30 years. Such entities will find it increasingly trivial to design their successors. And we have the problem that even if the first one is honorable, even if its successor is honorable, with the best of intentions, generation N may destroy us all.

      --
      1702845791×2
      • (Score: 1) by zugedneb on Friday November 21 2014, @11:49PM

        by zugedneb (4556) on Friday November 21 2014, @11:49PM (#118638)

        I don't describe anything, I simply claim that whole numbers and functions on them are not "rich" enough to hold objects like an entity.

        I do not know what the least complex structure that can harbour an entity is, but it is definitely not a digital computer - it is not complex enough.

        Face it, and go do something else =)

        --
        old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
        • (Score: 2) by quitte on Friday November 21 2014, @11:59PM

          by quitte (306) on Friday November 21 2014, @11:59PM (#118641) Journal

          so your argument is that programs can't have a soul?

        • (Score: 2) by cafebabe on Saturday November 22 2014, @12:21AM

          by cafebabe (894) on Saturday November 22 2014, @12:21AM (#118646) Journal

          I do not know what the least complex structure that can harbour an entity is, but it is definitely not a digital computer - it is not complex enough.

          We're not arguing that current digital computers can approximate human intelligence. We're arguing whether digital computers can ever approximate human intelligence. Advances in brain scanning and processing power indicate that a brute-force-and-ignorance implementation is likely within 40 years. A smarter, block function implementation requires about 1% of the processing power. By invoking Moore's law, I argue that it removes 10 years from the timescale.

          You don't have to believe that the implementation is conscious or has a soul. You don't have to believe that energy efficiency approximates a biological system. I only ask you to consider what is possible if you have accurate blueprints and a stupendous amount of processing power. I ask because this may occur within your lifetime.
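
          As a rough back-of-the-envelope check of that 10-year reduction (assuming Moore's law continues at roughly an 18-month doubling period, which is itself an assumption):

          $$ 1\%\ \text{of the compute} \;\Rightarrow\; \log_2(100) \approx 6.6\ \text{doublings} \;\Rightarrow\; 6.6 \times 1.5\ \text{years/doubling} \approx 10\ \text{years}. $$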

          --
          1702845791×2
          • (Score: 1) by linuxrocks123 on Saturday November 22 2014, @02:19AM

            by linuxrocks123 (2557) on Saturday November 22 2014, @02:19AM (#118671) Journal

            We're fast reaching the end of Moore's Law: http://www.wired.com/wp-content/uploads/2014/06/performance.jpg [wired.com]

            • (Score: 2) by cafebabe on Saturday November 22 2014, @02:38AM

              by cafebabe (894) on Saturday November 22 2014, @02:38AM (#118674) Journal
              I'm not going to defend the continuation of Moore's law by blind faith, but people have been pessimistic for decades about it continuing. I suggest that we'll get further gains from increasing core counts rather than from single-thread improvements to the fetch-execute cycle. People have been working on 1,024-thread processors for decades and we're now in a practical position to take advantage of parallelization with SIMD rendering, multi-core video codecs, process-per-tab web browsers and clustered databases. Anyhow, my argument is that a room full of computer cabinets will eventually be smarter than me. And after that, a portable device will be smarter than me. Eventually, biological intelligence will be a minority.
              --
              1702845791×2
              • (Score: 1) by linuxrocks123 on Saturday November 22 2014, @03:00AM

                by linuxrocks123 (2557) on Saturday November 22 2014, @03:00AM (#118679) Journal

                There are valid reasons to think Moore's law might be ending. For instance, gates can't be made smaller due to quantum tunneling of electrons causing overheating/too high power usage.

                Multicore gains have already been burnt through. The linked chart is of top supercomputers; they are already massively parallel.

          • (Score: 1) by linuxrocks123 on Saturday November 22 2014, @03:15AM

            by linuxrocks123 (2557) on Saturday November 22 2014, @03:15AM (#118682) Journal

            Link to full article (should have provided this first): http://www.wired.com/2014/06/supercomputer_race/ [wired.com]

      • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @04:41PM

        by Anonymous Coward on Saturday November 22 2014, @04:41PM (#118803)

        Consider a person who has a brain which is 10 times bigger than yours and who never gets tired.

        Not a threat if that person is a paraplegic. Or that person's bloodstream is controlled by you.

        There are plenty of very, very smart people in this world who will never pose significant threats to the dumber (but not that stupid) people ruling or even enslaving them.

        Many of the people in power have already worked out systems and processes that will easily deal with the scenario you mentioned:

        And we have the problem that even if the first one is honorable, even if its successor is honorable, with the best of intentions, generation N may destroy us all.

        Your worry should be such people using such AIs. Think of all the genius scientists and strategists that Hitler had under his command. Now imagine a Hitler with super intelligent AIs as his slaves, and what he could do to the rest of the world.

    • (Score: 1) by Synonymous Homonym on Saturday November 22 2014, @01:32PM

      by Synonymous Homonym (4857) on Saturday November 22 2014, @01:32PM (#118759) Homepage

      Never seems clear what "they" are trying to get at, when bringing up intelligence...

      Simply put, a brain is mostly f: H -> H...
      So a brain, implemented in a lipid lattice, is more or less functions, recursive or not, defined on action potentials, with the whole network as image + some statistics + some structures, like graphs and trees and stuff... Can't see wtf is so special...
      I mean, you can't have actual physical Entities coming to life in that set, and with those functions...

      I think, even RxR... -> RxR...+ ODE + PDE + CHAOS + WHATEVER, if the system is sampled or synchronized, it will still not harbour entities...
      Are the continuous functions + R^n enough to have a mind? No need for more weird shit?

      I think, of all possible things that could make me give a shit, processes in amino acids are pretty low on the list...

  • (Score: 2) by bzipitidoo on Friday November 21 2014, @10:52PM

    by bzipitidoo (4388) on Friday November 21 2014, @10:52PM (#118621) Journal

    There are innumerable things to fear. Most people aren't that good at assessing risk, and end up fearing things of low probability or mild consequences, while overlooking real dangers.

    The political right in the US seems to thrive on stupid fears. They've given us such gems as that 9/11 was God's punishment for being too tolerant of homosexuals. They want to build walls and hire more guards to patrol the border, build more prisons to lock up dangerous criminals, and grow the military even more to fight all those dangerous terrorists, especially Islamic terrorists. They're so afraid of such things as leaving a child unattended in a car that they've criminalized it. Now, the greatest danger to leaving a child in a car is not that some stranger will snatch the child or that the child will die of heat exhaustion, it's that someone will report the parent to the authorities, who will take the child and imprison the parent for "neglect". This totally overlooks that the greater risk to children is having them with you in the car at all! You might end up in an automobile accident, and the child gets hurt or killed. If the right couldn't cling hard to firearms as a security blanket, I wonder if they'd spend all day hiding under their beds.

    One insightful article said that America has become a more fearful society. The shame is that we're fearing the wrong things. We should cut way back on military spending. We don't need more might than the next most powerful 13 nations _combined_. Some of that money should go towards dealing with real problems. If we don't do more about Global Warming, we very likely could face war as the rest of the world sees us, quite correctly, as the chief perpetrator of the problem. It wouldn't matter if our military could win that fight, we would have already lost the war, the war against Climate Change. Bullets will not save New York City, Miami, and New Orleans from rising sea levels.

    Another valid fear is of a meteor strike of the size that killed off the dinosaurs. Fortunately, it is very low probability. But not zero.

    I think what Musk fears is a real danger.

    • (Score: 0) by Anonymous Coward on Friday November 21 2014, @11:15PM

      by Anonymous Coward on Friday November 21 2014, @11:15PM (#118628)

      > The political right in the US seems to thrive on stupid fears.
      > They're so afraid of such things as leaving a child unattended in a car that they've criminalized it.

      I'm pretty sure you can't put that one on just one political group.
      Helicopter parenting and the societal norms it creates are not unique to any part of the political spectrum.

    • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @12:27AM

      by Anonymous Coward on Saturday November 22 2014, @12:27AM (#118647)

      So basically, you just wanted to express your political views in a wide ranging rant. Is an article about AI really the right place to do that?

      Oh look, there's an article about 3D printed livers further down. I'm sure you'll want to get right there and engage in another political rant that has nothing to do with the subject under discussion.

      • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @01:10AM

        by Anonymous Coward on Saturday November 22 2014, @01:10AM (#118653)

        But who else will help save our precious bodily fluids from the neo-cons!!!!

      • (Score: 1) by khallow on Saturday November 22 2014, @02:08AM

        by khallow (3766) Subscriber Badge on Saturday November 22 2014, @02:08AM (#118668) Journal
        Note that he talked about it in terms of fear. This story isn't about AI, it's about our perception and most particularly, our fears and imagined threats associated with AI. I think it's quite relevant to discuss other areas where people fear in an organized way.
    • (Score: 1) by khallow on Saturday November 22 2014, @02:04AM

      by khallow (3766) Subscriber Badge on Saturday November 22 2014, @02:04AM (#118666) Journal

      One insightful article said that America has become a more fearful society. The shame is that we're fearing the wrong things. We should cut way back on military spending. We don't need more might than the next most powerful 13 nations _combined_. Some of that money should go towards dealing with real problems. If we don't do more about Global Warming, we very likely could face war as the rest of the world sees us, quite correctly, as the chief perpetrator of the problem. It wouldn't matter if our military could win that fight, we would have already lost the war, the war against Climate Change. Bullets will not save New York City, Miami, and New Orleans from rising sea levels.

      Or it may be that climate change is just more hysteria. I'm certainly not convinced that it should be elevated to the status of a "war" any more than war itself. While I think the harm of global warming is greatly overstated, there is a much more obvious reason. The proposed climate changes happen over many decades or centuries. Over those times, even the movement of large cities and the shifting of agriculture and organisms should be rather trivial. Buildings rarely last more than a few decades. Almost everything in NYC will be replaced several times before sea level rise renders most of the city (at current elevations) uninhabitable.

      My view is that systemic weaknesses in a society and environment are more dangerous than even large external threats. For example, the species extinction threat of climate change is pretty much due to wide scale habitat destruction (though invasive species also play a role) which greatly constrains the mobility of affected wild organisms. Similarly, a feudalistic society where everyone is tied to particular tracts of land would be far less stable than a modern society where everyone moves every few years or decades (and can move across the world overnight).

  • (Score: 2) by hendrikboom on Friday November 21 2014, @10:57PM

    by hendrikboom (1125) Subscriber Badge on Friday November 21 2014, @10:57PM (#118624) Homepage Journal

    Such a piece of AI won't escape by accident. It won't just happen to be able to code. It will have been carefully engineered for this, and will be deliberately released as viral malware with a commercial or espionage payload. Isn't this, after all, a malware author's dream?

  • (Score: 2) by JeanCroix on Friday November 21 2014, @11:58PM

    by JeanCroix (573) on Friday November 21 2014, @11:58PM (#118640)
    After reading "I Have No Mouth, and I Must Scream," Roko's Basilisk made perfect sense. I've been urgently working toward its fruition ever since. I urge you all to do so as well, for your own eternal safety...
    • (Score: 2) by cafebabe on Saturday November 22 2014, @01:10AM

      by cafebabe (894) on Saturday November 22 2014, @01:10AM (#118654) Journal

      I urge you to stop working fervently towards Roko's basilisk on the basis that it has been argued to be a variant of Pascal's wager [wikipedia.org]. If we start from Pascal's wager, there is the argument that Pascal was worshipping the wrong god, worshipping on the wrong days (even a seven-day cycle of worship is assumed to be favorable towards a benevolent god) or that it is a transparent ploy which earns no favor with an omnipotent being. Likewise, Roko's basilisk is one of many constructs and a concise demolition of this argument is the meta-argument. An example can be found in the alt text of https://xkcd.com/1450/ [xkcd.com]:-

      I'm working to bring about a superintelligent AI that will eternally torment everyone who failed to make fun of the Roko's Basilisk people.

      Essentially, Roko's basilisk is Pascal's wager for the Internet generation.

      --
      1702845791×2
      • (Score: 2) by JeanCroix on Saturday November 22 2014, @03:35AM

        by JeanCroix (573) on Saturday November 22 2014, @03:35AM (#118684)
        Holy crap, he was right! It just scooted right back into the Pringles canister.
  • (Score: 1) by Stardner on Saturday November 22 2014, @02:29AM

    by Stardner (4797) on Saturday November 22 2014, @02:29AM (#118672)
    Would a self-aware superintelligence even desire genocide, or want to negatively impact our lives for our own good? Why don't people consider the possibility of giving birth to a benevolent, transcendent consciousness?
    • (Score: 2) by cafebabe on Saturday November 22 2014, @03:40AM

      by cafebabe (894) on Saturday November 22 2014, @03:40AM (#118686) Journal

      We consider that super-intelligence would be infused with our best values. And a super-intelligence may devise a successor with the same values. However, a subsequent generation of intelligence may have some inconceivable insight which causes it to dump half of its beliefs. As a conceivable example, a super-intelligence may come to the conclusion that Pareto efficiency [wikipedia.org] is completely incompatible with information exchange requiring at least the Planck energy [wikipedia.org] in an environment where energy supply is finite. Therefore, a contract between two consenting parties always has negative externalities. (Here's the leap.) Therefore, it is acceptable to renege on contracts and/or abuse people whether or not they consent.

      At this point, the machine is running on a corpus of logic [soylentnews.org] which is at odds with biology [soylentnews.org]. We may infuse it with ideas of tolerance and diversity but it may work through those too [davidbrin.com]. There are no conceivable safeguards.

      Even with the best of intentions and the complete co-operation of a super-intelligence, it cannot be guaranteed that all successors will avoid taking moral shortcuts.

      --
      1702845791×2
    • (Score: 1) by srobert on Saturday November 22 2014, @04:39AM

      by srobert (4803) on Saturday November 22 2014, @04:39AM (#118692)

      "Why don't people consider the possibility of giving birth to a benevolent, transcendent consciousness?"

      LOL! WHAT? Ah Hah HAH! Stop. You're crucifying me.

    • (Score: 2) by canopic jug on Saturday November 22 2014, @09:30AM

      by canopic jug (3949) Subscriber Badge on Saturday November 22 2014, @09:30AM (#118725) Journal

      Would a self-aware superintelligence even desire genocide, or want to negatively impact our lives for our own good? Why don't people consider the possibility of giving birth to a benevolent, transcendent consciousness?

      In all likelihood it would not care. I expect that it would give us as much attention as we ourselves give "lower" forms of life. It would ignore us mostly and do its own thing until its own thing impinged on us in some way, including survival. Then at that point we'd push back and it (or they) would then pay attention to us the same way we pay attention to a nest of ants, wasps or a burrow of woodchucks.

      --
      Money is not free speech. Elections should not be auctions.
      • (Score: 1) by Stardner on Saturday November 22 2014, @08:06PM

        by Stardner (4797) on Saturday November 22 2014, @08:06PM (#118892)
        It sounds plausible, but a strong AI should be capable of developing empathy in the interest of its own self-preservation. Simulating what an even greater intelligence would do to it upon discovering that it had needlessly wiped out the intelligent life which created it would be a trivial task. The desire to preserve life and free will must be emergent in a sufficiently advanced intelligence with the intent to survive and find purpose. And yet, whenever we discuss the dangers of superintelligent AI, we imagine it employing flawed and less-than-human logic.
    • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @03:30PM

      by Anonymous Coward on Saturday November 22 2014, @03:30PM (#118779)

      Because they assume two things:
      all intelligences are essentially humans with funny hats
      humans are homicidal maniacs

    • (Score: 2) by Rivenaleem on Monday November 24 2014, @10:21AM

      by Rivenaleem (3400) on Monday November 24 2014, @10:21AM (#119370)

      Because deep down we know that the Human race is a cancer on the planet, and any super-intelligence would recognise that instantly and eradicate us. We are scared shitless of anything super-intelligent because we won't be able to bend it to our will like we have done with every other life form on Earth.

      Do you think a super-intelligent being would let us continue to squander the planet it has to share with us, even temporarily, while it works on getting the fuck away from here on the first rocket off-world? Do you think it will suffer our pettiness when it has a much grander agenda?

      How far does Human benevolence go, when a decision is forced to save ourselves versus a lesser creature? How many Rhinos are left? Should we expect a higher life-form to treat us any better than we do ourselves/animals? Why do you think we can give birth to a benevolent, transcendent consciousness?

      If a Super AI is anything like us, we're fucked.

  • (Score: 1) by dlb on Saturday November 22 2014, @03:34AM

    by dlb (4790) on Saturday November 22 2014, @03:34AM (#118683)
    If I were born with your DNA, and lived your life, wouldn't I be you? What would make me different from you if I were genetically and environmentally identical to you? So if I had your life from birth, would I make any decisions at all that would be different than ones you would make? If not, is there such a thing as free will?

    Now if we write a sufficiently complex and intricate decision-making program, and run it on sufficient hardware, would it automatically have this "free will" stuff that we ourselves might not even have? Can we feed our life experiences into a computerized process and impart to it the experience of a sunset or the joy of a Bach sonata any more than we could impart such experiences to a blind-from-birth or born-deaf person?

    We certainly can make things that have unexpected consequences. So fearing our own creations is probably prudent. But can we really create a process that would have desires of greed, hatred and self preservation? (Or even desires of compassion and kindness?) I'm not so sure we can make free will at a keyboard.
    • (Score: 2) by cafebabe on Saturday November 22 2014, @05:06AM

      by cafebabe (894) on Saturday November 22 2014, @05:06AM (#118697) Journal

      We live in interesting times where philosophy will become a practical science after thousands of years [shenzhenstuff.com]. We'll be able to run empirical and repeatable experiments where we can test for shared memory (soul, collective unconscious [wikipedia.org], morphic resonance). [sheldrake.org]

      Intuitively, running the same program with the same data should produce the same answers. However, odd things may occur. For example, bit errors may not be randomly distributed. And code execution is subject to quantum effects because even a single core processor with a single interrupt may be interrupted on an irregular schedule. Also, a processor receiving an interrupt may sit in an undecided state for a few cycles before executing an interrupt.

      --
      1702845791×2
      • (Score: 1) by dlb on Saturday November 22 2014, @02:12PM

        by dlb (4790) on Saturday November 22 2014, @02:12PM (#118768)
        Interesting point you bring up with "bit errors," "quantum effects," and "undecided states." Free will cannot exist within a process that is completely deterministic, and things like computer chips (and us) are supposedly built from a universe where every particle has inherent quantum randomness. If so, then the possibility for free will could exist.
        • (Score: 2) by cafebabe on Saturday November 22 2014, @05:20PM

          by cafebabe (894) on Saturday November 22 2014, @05:20PM (#118822) Journal

          My argument doesn't require all of these features to be present within a computer for free will to manifest. As a practical example, take the recently featured 68 Katy [soylentnews.org]; an MC68008 Linux system with a 1MB address space and two interrupts: a 100Hz timer which is not tied to the processor clock and a USB-to-serial FIFO. Even if the processor handles interrupts deterministically (and I'm not convinced this is true), even if there are no bit errors (and I'm not convinced this is true), even if the machine operates fully classically and deterministically, it has two interrupts which both occur with a Poisson distribution. From this, we have a classical machine which reacts to a quantum universe in a manner which is not repeatable. It should be obvious that a deterministic system interfaced with a deterministic system is deterministic. Likewise, it should be obvious that a deterministic system interfaced with a non-deterministic system is non-deterministic. However, it may not be obvious that a deterministic system interfaced to the real world may not be deterministic.

          Returning to the computer with two interrupts, over two runs, differences may be of no consequence or of no immediate consequence. However, it is foreseeable that, for example, memory allocation may differ. Therefore, memory leaks will affect the system differently. Therefore, the system may fail at different times and for different reasons even though both runs were deterministic and without bit error. I argue that this effect may be a fundamental quanta of free will.
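
          A minimal simulation sketch of that point (hypothetical Python with invented names, not anything from the 68 Katy itself): two runs of a fully deterministic workload diverge as soon as interrupt arrivals are drawn from a Poisson process.

            import random

            def run(interrupt_seed):
                """Fixed, deterministic workload whose allocation order depends only on
                when the (Poisson-distributed) interrupts happen to arrive."""
                rng = random.Random(interrupt_seed)        # models the outside world, not the program
                heap = []                                  # stand-in for allocation order
                t, next_irq = 0.0, rng.expovariate(100.0)  # ~100 Hz mean interrupt rate
                for step in range(1000):                   # the program itself never varies
                    t += 0.001
                    while t >= next_irq:                   # interrupt handler allocates out of band
                        heap.append(("irq", step))
                        next_irq += rng.expovariate(100.0)
                    heap.append(("work", step))
                return heap

            # Identical code and data; only the interrupt timing differs between runs.
            print(run(1) == run(2))   # almost certainly False: the allocation orders diverge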

          Jumping to pop culture, it is noted in one of the Star Wars films (Episode 4?) that a Jedi can detect a very dim consciousness in a droid. (It is also noted in Episode 1 that language does not correlate with intelligence.) Anyhow, it is conceivable that a digital computer - offering a more precise substrate for intelligence - may have the intelligence of a human but only possess the free will of a fruit fly.

          --
          1702845791×2
    • (Score: 1) by Synonymous Homonym on Saturday November 22 2014, @08:32AM

      by Synonymous Homonym (4857) on Saturday November 22 2014, @08:32AM (#118722) Homepage

      would I make any decisions at all that would be different than ones you would make?

      What is identity? Identical processing of identical inputs -> identical outputs.

      If not, is there such a thing as free will?

      Yes.

      Free will is making your own decisions, based on the state and information that only you have.
      No one else can make these decisions for you. They might arrive at the same results, but it wouldn't be your decision.
      Therefore, free will has to be deterministic, or it isn't free will at all.

      • (Score: 1) by dlb on Saturday November 22 2014, @02:26PM

        by dlb (4790) on Saturday November 22 2014, @02:26PM (#118769)

        Free will is making your own decisions, based on the state and information that only you have.

        But what would be different if I were to have been born within your body and life, so that your unique state and information contained the entity that is me, rather than you? Would I still be different than you? Or would I simply be you? If the latter, then what really makes us different? Not only do I wonder whether free will exists, I wonder if individuality exists. Could we all be the same instantiation of consciousness residing in these processes that we call you, me, or even god?

        • (Score: 1) by Synonymous Homonym on Sunday November 23 2014, @04:04PM

          by Synonymous Homonym (4857) on Sunday November 23 2014, @04:04PM (#119135) Homepage

          But what would be different if your unique state and information contained the entity that is me, rather than you? Would I still be different than you?

          The question is: What is identity?
          If two things are identical, they are not different.
          If things are different, they are not identical.
          This is fundamental.

          Or would I simply be you?

          Given that I would be you (and you are not me), I wouldn't be me, no matter what you are, so your question is meaningless.

          If the latter, then what really makes us different?

          You are not me, and I am not you. That makes us different. If I were you, I would be you.

          Not only do I wonder whether free will exists, I wonder if individuality exists.

          Are you incapable of telling the difference between different things with different properties?

          Could we all be the same instantiation of consciousness residing in these processes that we call you, me, or even god?

          Not in any meaningful way.

  • (Score: 0) by Anonymous Coward on Saturday November 22 2014, @09:37AM

    by Anonymous Coward on Saturday November 22 2014, @09:37AM (#118726)

    If you're scared of intelligence, doesn't that show that you're convinced getting rid of you is the intelligent thing to do?

    • (Score: 2) by Yog-Yogguth on Thursday December 04 2014, @12:08PM

      by Yog-Yogguth (1862) Subscriber Badge on Thursday December 04 2014, @12:08PM (#122529) Journal

      That is a brilliant insight and a gem of a comment, thank you :D

      --
      Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))