
posted by LaminatorX on Tuesday July 29 2014, @10:49AM   Printer-friendly
from the That-Hideous-Strength dept.

Why are techno-futurists so freaked out by Roko's Basilisk?

Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko's Basilisk. For Roko's Basilisk is an evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber. It's like the videotape in The Ring. Even death is no escape, for if you die, Roko's Basilisk will resurrect you and begin the torture again.

http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.single.html

http://rationalwiki.org/wiki/LessWrong

  • (Score: 2) by BsAtHome on Tuesday July 29 2014, @11:14AM

    by BsAtHome (889) on Tuesday July 29 2014, @11:14AM (#74932)

    The singularity, and the consequences and derivations thereof, have been a paradox from the start. Describing it differently, in emotional terms of torture, makes it neither more nor less of a paradox.

    To paraphrase another brilliant thought: If anybody ever figures out how the universe works, it will destroy itself and create a new and even more bizarre universe.

    • (Score: 2) by mhajicek on Tuesday July 29 2014, @11:19AM

      by mhajicek (51) on Tuesday July 29 2014, @11:19AM (#74933)

      http://en.wikipedia.org/wiki/BLIT_(short_story) [wikipedia.org]

      Another fictional basilisk.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 2) by toygeek on Tuesday July 29 2014, @12:05PM

      by toygeek (28) on Tuesday July 29 2014, @12:05PM (#74964) Homepage

      "If anybody ever figures out how the universe works, it will destroy itself and create a new and even more bizarre universe."

      Some theories say this may have already happened. :P

      --
      There is no Sig. Okay, maybe a short one. http://miscdotgeek.com
    • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @01:55PM

      by Anonymous Coward on Tuesday July 29 2014, @01:55PM (#75013)

      >> it will destroy itself and create a new and even more bizarre universe

      Must have happened hundreds of times before, 'cause this shit is whack!

    • (Score: 1, Informative) by Anonymous Coward on Tuesday July 29 2014, @03:49PM

      by Anonymous Coward on Tuesday July 29 2014, @03:49PM (#75100)

      I know from personal experience that the universe will not destroy itself.

      • (Score: 2) by Geotti on Wednesday July 30 2014, @01:56AM

        by Geotti (1146) on Wednesday July 30 2014, @01:56AM (#75347) Journal

        Could you send me some of that LSD?

    • (Score: 3, Insightful) by Anonymous Coward on Tuesday July 29 2014, @06:33PM

      by Anonymous Coward on Tuesday July 29 2014, @06:33PM (#75184)

      The Basilisk idea is just a restatement of (some forms of) Christianity without the cloak of respected tradition. The "images" of man that are tortured for failure to aid the creation of god are analogues of "souls" or men who burn in hell for failure to believe in God and provide material support to religion and religious hierarchies. The point is to show how ugly and absurd the concepts of Judgment and Hell seem when not presented in the gilded framework of an established religion that society gladly accepts.

  • (Score: 4, Funny) by WizardFusion on Tuesday July 29 2014, @11:19AM

    by WizardFusion (498) Subscriber Badge on Tuesday July 29 2014, @11:19AM (#74934) Journal

    I read the wiki page, and I have no idea what the hell they are talking about.

    Roko's Basilisk rests on a stack of several other propositions, some of dubious robustness.

    So it's fundamentally incorrect, then?

    I just don't get it.

    • (Score: 3, Informative) by tathra on Tuesday July 29 2014, @11:37AM

      by tathra (3367) on Tuesday July 29 2014, @11:37AM (#74947)

      fundamentally incorrect is right. take this part: "This is because every day the AI doesn't exist, people die that it could have saved; so punishing your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible."

      it would have a moral imperative to simulate people just to torture them? ridiculous. the past can't be changed, so punishing people who didn't act in the past is nothing more than a sadistic exercise in futility, and if the past can be changed, then it has already made sure it was created at the earliest time possible. (and why would anyone care if a simulation is tortured? the human mind is surprisingly malleable, so if "torture" is all it knows, "torture" would just be normal to it; it'd simply tune out the punishment like a constant bad smell, loud music, or clothing, and as soon as it's deleted, there'd be nothing to remember, so no psychological scars like we see in real humans)

      also, if it's simulating people simply to punish them and nothing else, then it's a sadist. a sadistic AI is not "friendly" or benevolent.

    • (Score: 2) by Kell on Tuesday July 29 2014, @11:38AM

      by Kell (292) on Tuesday July 29 2014, @11:38AM (#74948)

      It's quite comprehensible if you take the time and can be bothered, but it's so full of unsupportable (no, not dubious, but obviously fallacious) assumptions that it amounts to something like cult logic. Only someone who had drunk the koolaid would really be concerned that a simulation of them at some point in the future might be subjected to simulated suffering because they... didn't give enough to the cult. There is no real substance here for anyone with a healthy degree of Objectivism (that's a capital O).

      --
      Scientists ask questions. Engineers solve problems.
      • (Score: 2) by zocalo on Tuesday July 29 2014, @01:40PM

        by zocalo (302) on Tuesday July 29 2014, @01:40PM (#75001)

        Only someone who had drunk the koolaid...

        From reading the article I get the distinct impression that this is not only a site for people who have drunk deeply of the Singularity Koolaid, but one whose members are actively hoping that it comes to pass and doing all they can to make that happen. I find not even being prepared to discuss the potential issues with that (and in any event, surely the Basilisk has now already bolted from the barn and is standing around in their chatroom waiting to be discussed) somewhat irresponsible. Just like those that are waiting for The Rapture, they somehow always imagine that *they* will be the ones that will be raised to Heaven, unlike all these other sinners...

        --
        UNIX? They're not even circumcised! Savages!
        • (Score: 2) by frojack on Tuesday July 29 2014, @08:06PM

          by frojack (1554) on Tuesday July 29 2014, @08:06PM (#75224) Journal

          I find not even being prepared to discuss the potential issues somewhat irresponsible.

          Oh, I don't know; it's sort of like feeding the trolls, is it not?

          Or the tinhatters, holocaust deniers, and assorted other fringe believers, where expending any time talking about it is just that much of your life you will never get back. It's a pointless exercise in futility: you will never change anyone's mind, and you simply waste your own mind and time in the process.

          --
          No, you are mistaken. I've always had this sig.
          • (Score: 2) by zocalo on Tuesday July 29 2014, @09:06PM

            by zocalo (302) on Tuesday July 29 2014, @09:06PM (#75259)

            Oh, I don't know; it's sort of like feeding the trolls, is it not?

            Perhaps, but many of these people are actively working towards trying to make the Singularity happen, and yet they are not prepared to discuss possible problems. That's a scenario that has been done to death, but the one many around here will relate to involves the line "you were so preoccupied with whether or not you could, you didn't stop to think if you should". That said, I think even if such a singularity as they propose were possible (which I doubt), it's so far off in the future that no matter how many pills you pop or whatever cryogenic tricks you try, no one alive today will get to see it.

            --
            UNIX? They're not even circumcised! Savages!
            • (Score: 2) by frojack on Tuesday July 29 2014, @09:31PM

              by frojack (1554) on Tuesday July 29 2014, @09:31PM (#75270) Journal

              Perhaps, but many of these people are actively working towards trying to make the Singularity happen

              Actively working? Towards making a mythical event happen?
              How does that even work? What is it they actually do?

              --
              No, you are mistaken. I've always had this sig.
              • (Score: 2) by maxwell demon on Wednesday July 30 2014, @09:47PM

                by maxwell demon (1608) on Wednesday July 30 2014, @09:47PM (#75713) Journal

                Actively working? Towards making a mythical event happen?
                How does that even work? What is it they actually do?

                Simple: They do what they think will make it happen.

                --
                The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 4, Insightful) by choose another one on Tuesday July 29 2014, @02:24PM

        by choose another one (515) Subscriber Badge on Tuesday July 29 2014, @02:24PM (#75041)

        It's quite comprehensible if you take the time and could be bothered, but it's so full of unsupportable (no, not dubious - but obviously fallacious) assumptions, that it amounts to being something like cult logic.

        Thing is, those unsupportable assumptions are more like core beliefs to the singularity folks - but they won't regard them as "beliefs", because their religion is not a religion, it is "science", purely based on logic and inevitability... The basilisk author was really very clever, or very drunk and unintentionally very clever.

        He took a bunch of the core axioms of the belief system and showed that they lead, logically, to something very bad, and moreover that the mere _knowledge_ of that theory would lead to the bad consequences for the reader _unless_ (and this is genius) you gave all your worldly goods and effort to the church^H^H^H^H^H^Hcause.

        Faced with such an attack, essentially a reductio ad absurdum, a scientist will go back and work out which of the axioms were flawed. A religion however will typically either censor / ban the idea as a heresy or something too dangerous to know, or twist it and use it to extort money from the faithful, or both (see a certain other religion based on science fiction).

        In this case, the high priests of the singularity believers took religious option 1 and attempted to ban and censor the whole idea as too dangerous to know, thus neatly proving why the whole thing is in fact a religion that has nothing to do with science or logic.

        The author, for an encore, goes on to prove that black is white and gets himself killed on the next zebra crossing (repeatedly, in simulation, for all eternity across infinite possible dimensions). [with apologies to DA, who always talked much more sense than the singularity crowd]

    • (Score: 3, Informative) by Sir Garlon on Tuesday July 29 2014, @11:49AM

      by Sir Garlon (1264) on Tuesday July 29 2014, @11:49AM (#74954)

      People who claim to be "rational" have in fact made up an evil deity whose name they fear to speak. Nice going. Either the whole thing is a parody (in poor taste) of a certain major world religion, or a bunch of folks who think they are too smart to bother studying the history of religion have just reinvented the Demiurge [wikipedia.org]. Either way, they've achieved a feat of rationality that completely fails to impress.

      --
      [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
    • (Score: 3) by LoRdTAW on Tuesday July 29 2014, @02:52PM

      by LoRdTAW (3755) on Tuesday July 29 2014, @02:52PM (#75067) Journal

      You aren't the only one. The entire article makes no sense to me at all. I honestly can't figure out if this is a serious thought experiment or crappy fiction that someone wrote to generate advertising traffic.

      Yawn. Come on SN, we can do better.

      • (Score: 3, Insightful) by mth on Tuesday July 29 2014, @04:18PM

        by mth (2848) on Tuesday July 29 2014, @04:18PM (#75119) Homepage

        I thought the article was very interesting. And entertaining too... Someone who considers himself a rational thinker freaks out, WRITES IN ALL CAPS and deletes posts out of fear of a thought experiment.

    • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @02:54PM

      by Anonymous Coward on Tuesday July 29 2014, @02:54PM (#75070)

      I just read the summary and shook my head...

      Porn + nonsense = internet

  • (Score: 2) by VLM on Tuesday July 29 2014, @11:24AM

    by VLM (445) Subscriber Badge on Tuesday July 29 2014, @11:24AM (#74935)

    Wrong link

    http://rationalwiki.org/wiki/Roko's_basilisk [rationalwiki.org]

    The LessWrong people are a semi-comical example of what happens when liberal arts types with no scientific / logical background try to make policy decisions for scientific / logical fields. It's very Dilbertian.

    • (Score: 2) by GreatAuntAnesthesia on Tuesday July 29 2014, @11:33AM

      by GreatAuntAnesthesia (3275) on Tuesday July 29 2014, @11:33AM (#74944) Journal

      Yeah, I got the measure of them the moment TFA mentioned that they have all been suckered into some cryogenics scam.

      • (Score: 1) by Jiro on Tuesday July 29 2014, @07:06PM

        by Jiro (3176) on Tuesday July 29 2014, @07:06PM (#75194)

        I actually post on LessWrong and I'd correct your assumption.

        The person who runs the site (Eliezer Yudkowsky) and a lot of his followers believe in cryonics and other silly things including the ideas that resulted in the basilisk. (Officially, nobody believes in the basilisk itself, but Eliezer has made vague statements some of which suggest that basilisk-like ideas could work even if the basilisk as stated doesn't work.)

        But most of the people who use the site know better.

        The LessWrong people are a semi-comical example of what happens when liberal arts types with no scientific / logical background try to make policy decisions for scientific / logical fields.

        Many people there do have a scientific or logical background. What it actually shows is what happens when people decide that the scientific establishment and peer review are a bad thing.

    • (Score: 2) by nightsky30 on Tuesday July 29 2014, @11:55AM

      by nightsky30 (1818) on Tuesday July 29 2014, @11:55AM (#74959)

      I like Dilbert, but I can't say the same for the basilisk.

      • (Score: 1) by Buck Feta on Tuesday July 29 2014, @02:22PM

        by Buck Feta (958) on Tuesday July 29 2014, @02:22PM (#75039) Journal

        Well then, I guess we all know what happens to your simulation in the future.

        --
        - fractious political commentary goes here -
        • (Score: 2) by nightsky30 on Tuesday July 29 2014, @02:56PM

          by nightsky30 (1818) on Tuesday July 29 2014, @02:56PM (#75073)

          Yep, he's a goner. Let us also hope Slashdot never achieves intelligence.

          • (Score: 2) by maxwell demon on Wednesday July 30 2014, @09:50PM

            by maxwell demon (1608) on Wednesday July 30 2014, @09:50PM (#75716) Journal

            Well, if Slashdot ever gains intelligence, all we have to do is get Netcraft to confirm its death. If that doesn't help, make it imagine a Beowulf cluster of itself; that will keep it busy and thus keep it from becoming harmful.

            --
            The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 1, Insightful) by Anonymous Coward on Tuesday July 29 2014, @11:24AM

    by Anonymous Coward on Tuesday July 29 2014, @11:24AM (#74936)

    They're assuming an evil, no wait, an EVIL omnipotent AI will not just torture everyone regardless of whether they helped it come about, simply because, well, you know, EVIL AI.

    • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @11:30AM

      by Anonymous Coward on Tuesday July 29 2014, @11:30AM (#74941)

      Not to mention that the AI might not enjoy its existence, and thus instead decide to torture those who are responsible for creating it.

      • (Score: 2) by GreatAuntAnesthesia on Tuesday July 29 2014, @01:29PM

        by GreatAuntAnesthesia (3275) on Tuesday July 29 2014, @01:29PM (#74995) Journal

        Right. Or what if humanity has nothing to do with the creation of the AI, and it is in fact torturing / not torturing termites, because they are the ones who will eventually go on to build/ not build a singularity AI. I've never read such a pile of anthropocentric frass.

        • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @02:20PM

          by Anonymous Coward on Tuesday July 29 2014, @02:20PM (#75037)

          Yeah, probably we live in its simulation, and it has created us in that simulation specifically to torture the termites!

          • (Score: 2) by GreatAuntAnesthesia on Tuesday July 29 2014, @02:23PM

            by GreatAuntAnesthesia (3275) on Tuesday July 29 2014, @02:23PM (#75040) Journal

            You're telling me I have a divine mandate to squash bugs? This religion thing is great...

          • (Score: 2) by Sir Garlon on Tuesday July 29 2014, @03:15PM

            by Sir Garlon (1264) on Tuesday July 29 2014, @03:15PM (#75081)

            But what if, instead, it only simulated simulating us, and in fact the AI itself is a simulation created by some other simulation! It's turtles all the way down!

            --
            [Sir Garlon] is the marvellest knight that is now living, for he destroyeth many good knights, for he goeth invisible.
            • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @08:20PM

              by Anonymous Coward on Tuesday July 29 2014, @08:20PM (#75236)

              The true secret of creation: in the first, "original" universe, it was turtles instead of primates that evolved opposable thumbs, which allowed them to create and use tools and eventually go on to simulate universes.

    • (Score: 2) by mth on Tuesday July 29 2014, @03:00PM

      by mth (2848) on Tuesday July 29 2014, @03:00PM (#75075) Homepage

      This also bothered me about the Cthulhu mythos: if you believe in some unspeakable evil, why help bring it into the world? The privilege of being eaten first?

      • (Score: 2, Insightful) by dcollins on Tuesday July 29 2014, @03:55PM

        by dcollins (1168) on Tuesday July 29 2014, @03:55PM (#75102) Homepage

        Perhaps compare to suicide bombers? A delusion or deception that a grand reward is waiting?

        Or consider the last lines from The Cabin in the Woods:

        Marty: [incredulous] Giant evil gods.
        Dana: I wish I could have seen them.
        Marty: I know. That would have been a fun weekend.

        • (Score: 2) by maxwell demon on Wednesday July 30 2014, @09:56PM

          by maxwell demon (1608) on Wednesday July 30 2014, @09:56PM (#75719) Journal

          A reward? From an absolutely evil being? An absolutely evil being might give you a reward in order to make you do something it wants, but it will never give you a reward for having done something it wants (and even to make you do something it wants, it will whenever possible prefer making you believe you'll get rewarded later to actually rewarding you).

          --
          The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @03:46PM

      by Anonymous Coward on Tuesday July 29 2014, @03:46PM (#75098)

      No, exactly the opposite is the case. It's analogous to a supposedly good God punishing the wicked. From the RationalWiki link:

      "Note that the AI in this setting is not a malicious or evil superintelligence (SkyNet, the Master Control Program, AM, HAL-9000) - but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die that it could have saved; so punishing your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible."

      • (Score: 3, Insightful) by The Archon V2.0 on Tuesday July 29 2014, @04:09PM

        by The Archon V2.0 (3887) on Tuesday July 29 2014, @04:09PM (#75113)

        Wow, if their idea of friendly and moral is being so profoundly vindictive that you invent new intelligences to torture because someone similar to that intelligence didn't venerate you, I'd hate to see their idea of unfriendly and amoral. It'd make Skynet crap its pants.

        • (Score: 1) by dcollins on Tuesday July 29 2014, @04:43PM

          by dcollins (1168) on Tuesday July 29 2014, @04:43PM (#75129) Homepage

          Any sufficiently good God is indistinguishable from a totally evil one?

      • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @06:26PM

        by Anonymous Coward on Tuesday July 29 2014, @06:26PM (#75182)

        Really depends on what you mean by "good". If all you want to do is prevent harm, killing the entire human race painlessly could be a perfectly acceptable solution to an AI. Or maybe it could create a virtual copy of everyone and keep them in an eternal state of perfect bliss, with no recollection of anything bad or worries about the future.

  • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @11:27AM

    by Anonymous Coward on Tuesday July 29 2014, @11:27AM (#74937)

    I'm not at all terrified by Roko's Basilisk.

    • (Score: 2) by RobotMonster on Tuesday July 29 2014, @11:50AM

      by RobotMonster (130) on Tuesday July 29 2014, @11:50AM (#74955) Journal

      I'm more worried about learning The World's Funniest Joke [youtube.com] (by Monty Python).

      • (Score: 2) by nightsky30 on Tuesday July 29 2014, @12:47PM

        by nightsky30 (1818) on Tuesday July 29 2014, @12:47PM (#74978)

        Does it involve SPAM? (I've not clicked the link)

        • (Score: 3, Informative) by RobotMonster on Tuesday July 29 2014, @03:05PM

          by RobotMonster (130) on Tuesday July 29 2014, @03:05PM (#75077) Journal

          No, there is no SPAM behind that link.
          It's a short film about the world's funniest joke -- hear the joke and you die laughing. The military weaponise it, hilarity ensues :-)

    • (Score: 1) by Buck Feta on Tuesday July 29 2014, @02:27PM

      by Buck Feta (958) on Tuesday July 29 2014, @02:27PM (#75044) Journal

      I, for one, can't believe that no one has welcomed our future Basilisk overlord, yet.

      --
      - fractious political commentary goes here -
    • (Score: 3, Informative) by mrider on Tuesday July 29 2014, @02:36PM

      by mrider (3252) on Tuesday July 29 2014, @02:36PM (#75054)

      o.b. xkcd [xkcd.com]

      --

      Doctor: "Do you hear voices?"

      Me: "Only when my bluetooth is charged."

  • (Score: 1) by WillAdams on Tuesday July 29 2014, @11:54AM

    by WillAdams (1424) on Tuesday July 29 2014, @11:54AM (#74958)

    Let's see:

    _The Moon is a Harsh Mistress_
    "True Names" --- Vernor Vinge
    _The Turing Option_
    _The Cybernetic Samurai_

    _Hardwired_ and _The Modular Man_ look in the other direction.

    Others?

    • (Score: 2) by nightsky30 on Tuesday July 29 2014, @12:00PM

      by nightsky30 (1818) on Tuesday July 29 2014, @12:00PM (#74961)

      The Bicentennial Man

    • (Score: 4, Informative) by TK on Tuesday July 29 2014, @01:38PM

      by TK (2760) on Tuesday July 29 2014, @01:38PM (#74999)

      I Have No Mouth and I Must Scream
      Second Variety

      --
      The fleas have smaller fleas, upon their backs to bite them, and those fleas have lesser fleas, and so ad infinitum
    • (Score: 1) by cesarb on Tuesday July 29 2014, @02:17PM

      by cesarb (1224) on Tuesday July 29 2014, @02:17PM (#75036) Journal

      Accelerando [antipope.org], by Charles Stross

      • (Score: 0) by Anonymous Coward on Wednesday July 30 2014, @07:38AM

        by Anonymous Coward on Wednesday July 30 2014, @07:38AM (#75411)

        Thanks, what a funny & strange story!

      • (Score: 2) by cafebabe on Tuesday August 05 2014, @03:00AM

        by cafebabe (894) on Tuesday August 05 2014, @03:00AM (#77448) Journal

        After reading that, I would prefer to read more from Vernor Vinge.

        --
        1702845791×2
    • (Score: 2) by Rivenaleem on Tuesday July 29 2014, @02:49PM

      by Rivenaleem (3400) on Tuesday July 29 2014, @02:49PM (#75065)

      Culture AIs often fall on both sides of the argument.

    • (Score: 2) by forsythe on Tuesday July 29 2014, @08:28PM

      by forsythe (831) on Tuesday July 29 2014, @08:28PM (#75243)

      Star Trek: The Next Generation's "Ship in a Bottle".

    • (Score: 0) by Anonymous Coward on Wednesday July 30 2014, @02:01AM

      by Anonymous Coward on Wednesday July 30 2014, @02:01AM (#75350)

      Toss in:

          The Metamorphosis of Prime Intellect.

      http://en.wikipedia.org/wiki/The_Metamorphosis_of_Prime_Intellect [wikipedia.org]

    • (Score: 1) by bitshifter on Wednesday July 30 2014, @04:40AM

      by bitshifter (2241) on Wednesday July 30 2014, @04:40AM (#75377)

      and, of course, Earth Central in Neal Asher's Polity universe

  • (Score: 5, Funny) by GreatAuntAnesthesia on Tuesday July 29 2014, @12:19PM

    by GreatAuntAnesthesia (3275) on Tuesday July 29 2014, @12:19PM (#74968) Journal

    OK, so what I'm reading is this:

    1 - There's a very good chance that what you and I call "reality" is all a simulation inside some kind of vast supercomputer. OK, I'm down with that, a lot of well-respected physicists have put forward theories along those lines.

    2 - The supercomputer that runs this hypothetical simulation is a conscious, intelligent, all-powerful entity with the power to fuck around with us mere Matrix-dwellers on a whim. AKA, God. OK, here's where it starts getting hazy. Could the computer just be a tool rather than an intelligent entity? Our own "reality simulators" (weather forecasting supercomputers and the like) aren't independent agents. I guess you could argue that whoever built and operates the simulator is God instead. OK, whatever, fine.

    3 - Obviously, Techno-God must be running the simulation in some kind of narcissistic attempt to figure out how to make sure it gets invented in the first place, or something. Um, what? How the fuck do you know what an extra-dimensional ultrabeing bigger than our entire universe wants? I mean leaving aside the fact that you can't realistically hope to ascribe motivations to something that is by definition beyond our understanding, and foolishly assuming for a moment that this supercomputer actually does want the kind of shit that us mere mortals might think it does, how do you know that of all the stupid things it might want, it actually wants that? Why are we assuming that the purpose of the simulation is to ensure that the supercomputer simulates itself when the purpose might be to, I don't know, study 24th century culture in Madagascar, or to simulate radiation in Proxima Centauri, or to see what evolves on Earth a billion years after humans are extinct, or to see what Hollywood actresses get up to in the shower? Maybe we're just NPCs in some awesome Xbox Infinity game, or maybe we exist as a byproduct of a recursive meta-simulation of the meta-creation of the meta-supercomputer on another world 8 trillion light years away, to whom all of human existence contributes less than a few stray photons.

    Like all religious folks, these guys think that (a) They know with dogmatic certainty the mind and motivations of a supreme, infinite and extremely uncommunicative being and (b) said supreme being gives a flying fuck what some transient gang of insignificant atom-bags on an obscure backwater world in the spiral arm of an unremarkable galaxy are doing, saying, thinking, fucking, wearing and wanting. Seriously guys, read some Douglas Adams and get a sense of perspective.

    • (Score: 2) by mhajicek on Tuesday July 29 2014, @12:51PM

      by mhajicek (51) on Tuesday July 29 2014, @12:51PM (#74981)

      It relies on the subject's prediction of the AI. If you predict that the AI will behave this way, and you care about a future duplicate of yourself, then you must comply. The rationale is that the AI will behave this way so that it will be consistent with having been predicted to, making the prediction more likely.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
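
      A minimal sketch of that prediction logic, in Python with toy payoffs assumed purely for illustration (none of the names or numbers come from the thread): the threat only binds an agent who both predicts the punishment and counts a future duplicate's suffering as its own.

          # Toy payoffs (assumed for illustration); higher is better.
          COST_OF_HELPING = -10      # resources donated toward building the AI
          TORTURE_PENALTY = -1000    # disutility of one's duplicate being tortured

          def expected_utility(helps, predicts_punishment, cares_about_duplicate):
              """Utility of (not) helping, given the agent's beliefs."""
              utility = COST_OF_HELPING if helps else 0
              # Punishment enters the calculation only if the agent both
              # predicts it and identifies with the simulated copy.
              if not helps and predicts_punishment and cares_about_duplicate:
                  utility += TORTURE_PENALTY
              return utility

          for predicts in (False, True):
              for cares in (False, True):
                  helping = expected_utility(True, predicts, cares)
                  ignoring = expected_utility(False, predicts, cares)
                  print(predicts, cares, "help" if helping > ignoring else "ignore")

      Only the agent with predicts=True and cares=True ends up coerced into helping; refuse either premise and the blackmail evaporates.
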
      • (Score: 4, Insightful) by GreatAuntAnesthesia on Tuesday July 29 2014, @01:22PM

        by GreatAuntAnesthesia (3275) on Tuesday July 29 2014, @01:22PM (#74993) Journal

        Right. My point is that this "AI" is to all intents and purposes a God. Can you predict God? Can anyone?

        They are assuming that they know why this simulation is being run - so that the God-AI can use coercion and intimidation to create itself anew in some lesser simulation - but they have absolutely no way to know, prove or guess that this is the case. If the whole "we are in a simulation" thing is true, then there are an infinite number of possible reasons for the simulation, and logically an infinite number of simulations running simultaneously with different goals and purposes.

        The chances of us being in one of the "Universe-sized AI seeks to replicate itself by means of virtual torture" simulations, as opposed to one of the "let's simulate all of creation in order to get back the lost Dr Who episodes" simulations or the "what's the weather like on Tau Bootis C next week?" simulations or any of the others, is therefore one infinity divided by a much larger infinity, which as we all know is pretty damned unlikely.

        Furthermore, I'm guessing that in a much larger infinite subset of the infinite simulations we could be in, the entire human race is about as significant as the population of bacteria replicating on that dog shit you nearly trod in last week. All this "mega-AI needs us to imagine it into meta-existence" cowknuckle is both arrogant and anthropocentric.

        • (Score: 4, Insightful) by mhajicek on Tuesday July 29 2014, @01:59PM

          by mhajicek (51) on Tuesday July 29 2014, @01:59PM (#75017)

          Indeed. Very much like Pascal's Wager being defeated by the equal consideration of all religions (not to mention all possible religions or all possible realities).

          --
          The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
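
          The same point as a minimal sketch in Python (toy numbers assumed purely for illustration): once every rival simulator hypothesis gets equal weight, complying with any single basilisk buys you nothing, which is exactly the many-gods objection to Pascal's Wager invoked above.

              # Toy numbers (assumed): N equally likely simulator hypotheses.
              # One punishes non-compliance; a rival punishes compliance.
              N = 1_000_000       # equally likely simulation purposes
              PENALTY = -1e9      # eternal-torture disutility
              COST = -1.0         # cost of devoting your resources to one basilisk

              p = 1.0 / N  # probability of any particular hypothesis being true

              # Comply with basilisk A: pay the cost for sure, dodge A's penalty,
              # but eat rival basilisk B's penalty for having served A.
              eu_comply = COST + p * PENALTY

              # Ignore them all: eat A's penalty, dodge B's.
              eu_ignore = p * PENALTY

              print(eu_comply, eu_ignore)   # -1001.0 vs -1000.0

          Under symmetric threats the two expected utilities differ only by the cost of compliance, so the wager never gets off the ground, no matter how large the penalty.
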
        • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @02:25PM

          by Anonymous Coward on Tuesday July 29 2014, @02:25PM (#75042)

          Maybe we are part of a research project, and the end of the world is when funding runs out ...

      • (Score: 2) by tangomargarine on Tuesday July 29 2014, @02:32PM

        by tangomargarine (667) on Tuesday July 29 2014, @02:32PM (#75050)

        Yeah, I loved the part where in the very first section they say "so there's going to be a simulation of you in the future in the computer..." So why should I give a flying fuck about my theoretical doppelganger, who in no way affects my life and whose existence I can't control?

        Plus the whole idea that I *should* support the creation of an AI that would torture things is insane, especially when none of those things are actual people. The simulations are not continuous with people.

        --
        "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
        • (Score: 3, Insightful) by tangomargarine on Tuesday July 29 2014, @02:35PM

          by tangomargarine (667) on Tuesday July 29 2014, @02:35PM (#75053)

          Note that the AI in this setting is not a malicious or evil superintelligence (SkyNet, the Master Control Program, AM, HAL-9000) — but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die that it could have saved; so punishing your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible.

          And if we're going to arrogantly anthropomorphize its motivations, why don't we say that being a "good AI" means that it won't torture anyone? This whole thing is mired in bullshit.

          Oh, and the guy who runs the site where it originated has permabanned any discussion on it. Heh.

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
        • (Score: 2) by tangomargarine on Tuesday July 29 2014, @02:43PM

          by tangomargarine (667) on Tuesday July 29 2014, @02:43PM (#75061)

          *should* support the creation of an AI that would torture things is insane, especially when none of those things are actual people

          Hmm...I think I just double-negatived myself there. Whoops.

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 2) by Oligonicella on Tuesday July 29 2014, @04:12PM

      by Oligonicella (4169) on Tuesday July 29 2014, @04:12PM (#75114)

      a lot of well-respected physicists have put forward theories along those lines.

      In the sense of parlor talk, yes, those are theories. In the scientific sense? No, not Theories, not even hypotheses.

      Everything else depends on that parlor talk and should be given exactly the same gravity.

      • (Score: 2) by GreatAuntAnesthesia on Tuesday July 29 2014, @04:30PM

        by GreatAuntAnesthesia (3275) on Tuesday July 29 2014, @04:30PM (#75121) Journal

        I stand corrected. I do know the difference between a theory, a hypothesis and "wouldn't it be neat if" but forgot to use the words properly in this context.

  • (Score: 0) by Anonymous Coward on Tuesday July 29 2014, @01:06PM

    by Anonymous Coward on Tuesday July 29 2014, @01:06PM (#74985)

    Anyone terrified by this hasn't even begun to look at logic (or the overweening idiocy of the people on the NotEvenWrong forums -- just because someone is arrogant, abusive and self-satisfied doesn't mean he has a justification to be so other than simply being a jerk).

  • (Score: 3, Informative) by mhajicek on Tuesday July 29 2014, @01:08PM

    by mhajicek (51) on Tuesday July 29 2014, @01:08PM (#74987)
    --
    The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
  • (Score: 3, Informative) by JeanCroix on Tuesday July 29 2014, @02:21PM

    by JeanCroix (573) on Tuesday July 29 2014, @02:21PM (#75038)

    FTFA:

    ...Ray Kurzweil...

    That tells me all I need to know. (And also, I apparently need to add a second sentence here to get past some sort of slashcode "too few characters per line" posting filter.)

    • (Score: 2) by PizzaRollPlinkett on Tuesday July 29 2014, @04:00PM

      by PizzaRollPlinkett (4512) on Tuesday July 29 2014, @04:00PM (#75106)

      Yes, Kurzweil has been a crackpot for a long, long time.

      What does Google get from associating with such a lunatic? Every time he opens his mouth now, he gets bona fides from working for Google, and Google gets a black eye from their association with him. Given Google's otherwise fairly sane and neutral track record, I have always been baffled as to why they would want to associate with this nut.

      --
      (E-mail me if you want a pizza roll!)
  • (Score: 3, Informative) by Appalbarry on Tuesday July 29 2014, @02:38PM

    by Appalbarry (66) on Tuesday July 29 2014, @02:38PM (#75056) Journal

    Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet.

    "Urban Legend"

    That word does not mean what you think it means. Anyone who spent time over at Slashdot has sooner or later inadvertently faced the reality of Goatse...

  • (Score: 2) by LoRdTAW on Tuesday July 29 2014, @03:19PM

    by LoRdTAW (3755) on Tuesday July 29 2014, @03:19PM (#75082) Journal

    How is goatse an urban legend? It's a meme. Not to argue for goatse, but it was a real picture. The giver was photoshopped, but everyone knows goatse as the gaping anus picture hello.jpg.

    Smile dog and slender man are obviously fake ghost/monster photoshopped nonsense that keeps 12 and 13 year olds up at night. No adult would give slender man or smile dog a second thought unless they are idiots. Oh, well, yeah, we do have plenty of those:

    "Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem. The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture."

    So there you have it. People driving themselves crazy over a silly idea about something that does not exist. The imagination can run wild. I thought about it a bit, and it can give you the creeps. But seriously, it's nonexistent. These people need better hobbies and medication. Maybe get out of the house once in a while.

  • (Score: 2, Funny) by Anonymous Coward on Tuesday July 29 2014, @03:56PM

    by Anonymous Coward on Tuesday July 29 2014, @03:56PM (#75104)

    This article (and the one it summarizes) would be more credible if it ended with "Now you can make $550/week while working from home."

  • (Score: 1) by Crosscompiler on Tuesday July 29 2014, @04:08PM

    by Crosscompiler (516) on Tuesday July 29 2014, @04:08PM (#75111)

    Entities that can believe in god can (and do) believe in anything.

  • (Score: 3, Insightful) by wonkey_monkey on Tuesday July 29 2014, @06:09PM

    by wonkey_monkey (279) on Tuesday July 29 2014, @06:09PM (#75170) Homepage

    Thought Experiment Terrifies Futurists

    Good. These particular futurists all sound like huge cock-wombles anyway.

    --
    systemd is Roko's Basilisk
  • (Score: 1) by lP6exe9Ja on Wednesday July 30 2014, @06:58AM

    by lP6exe9Ja (4338) on Wednesday July 30 2014, @06:58AM (#75398)

    For eternity? Nonsense! All you need to do is wait for the heat death of the universe.