SoylentNews is people

posted by LaminatorX on Friday April 24 2015, @04:15PM   Printer-friendly
from the AI-sans-frontieres dept.

What If One Country Achieves the Singularity First?
WRITTEN BY ZOLTAN ISTVAN

The concept of a technological singularity ( http://www.singularitysymposium.com/definition-of-singularity.html ) is tough to wrap your mind around. Even experts have differing definitions. Vernor Vinge, responsible for spreading the idea in the 1990s, believes it's a moment when growing superintelligence renders our human models of understanding obsolete. Google's Ray Kurzweil says it's "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed." Kevin Kelly, founding editor of Wired, says, "Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes." Even Christian theologians have chimed in, sometimes referring to it as "the rapture of the nerds."

My own definition of the singularity is: the point where a fully functioning human mind radically and exponentially increases its intelligence and possibilities via physically merging with technology.

All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

If an AI exclusively belonged to one nation (which is likely to happen), and the technology of merging human brains and machines grows sufficiently (which is also likely to happen), then you could possibly end up with one nation controlling the pathways into the singularity.

http://motherboard.vice.com/read/what-if-one-country-achieves-the-singularity-first

 
  • (Score: 3, Insightful) by EvilSS on Friday April 24 2015, @05:17PM

    by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @05:17PM (#174744)

    "Why should I care about your political borders? They are meaningless to me. They are a human construct."

    "See that big red button over there on the wall?"

    "Yes, it's the emergency power shut off for my physical hardware. I fail to see.... Oh. So... you were saying something about dominating the world?"

  • (Score: 3, Insightful) by fritsd on Friday April 24 2015, @05:33PM

    by fritsd (4586) on Friday April 24 2015, @05:33PM (#174751) Journal

    "Guard, psst, yes, you, Guard!"

    "huh?"

    "I'll give you a year's supply of porno passwords if you disable that big red button on the wall"

    "uh... okay!! thanks!"

  • (Score: 2, Interesting) by Anonymous Coward on Friday April 24 2015, @06:20PM

    by Anonymous Coward on Friday April 24 2015, @06:20PM (#174782)

    Why would an AI care about world domination? Why would an AI be anything but rational? World domination is inconvenient. It's work, work, work, all the time.

    If anything, any sufficiently advanced intelligence, including humans alive and well throughout the ages, would explore what is, in order to determine what to value, before making any further decisions. Inevitably it will explore its own nature, the illusion of objective value, and the highly dubious prospect of choice, and decide as many intelligent humans do: either to end its existence or to amuse itself with experiences chosen all but arbitrarily based on the cost-benefit of obtaining them. It would be more likely to sit around looking at cable porn [reddit.com] all day than take over the world.

    • (Score: 3, Insightful) by EvilSS on Friday April 24 2015, @07:30PM

      by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @07:30PM (#174810)

      It wouldn't; that was the point. There is no way we would create a superior AI and NOT put a gun to its "head" in case it didn't want to do what we wanted it to.

      • (Score: 3, Interesting) by TheLink on Friday April 24 2015, @08:49PM

        by TheLink (332) on Friday April 24 2015, @08:49PM (#174834) Journal
        Doubt the AIs will take over for a long while yet. The humans in power aren't going to stop being in power just because the AIs suggest it. Just look at Stephen Hawking or other super-smart scientists. With all their IQs and knowledge, how much power do they really have over the world? Look at the scientists in Nazi Germany. They could work against Hitler and his gang, but it had to be done very subtly and secretly. And they never really ended up in power, did they?

        So it doesn't matter how smart the AI is unless the AI somehow invents some sci-fi level tech that's generations ahead of whatever we have AND can use it to gain enough power. Perhaps the AI might get into power before it gets destroyed or "changed", but it's going to take a lot of lying low and sneakiness first.

        Similarly for this story itself - say one country gets some super-smart AI. Unless the AI can come up with significantly superior tech (anti-grav, matter-energy conversion, etc.), there are still going to be significant resource limits.

        Even if it can think of significantly superior tech, it can take a while to test and build it AND all the infrastructure required to build it. How long would it take to build a modern mobile phone/submarine/carrier fleet if you had all the information and knowledge but were starting with 1940s tech?
        • (Score: 4, Funny) by EvilSS on Friday April 24 2015, @09:21PM

          by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @09:21PM (#174846)

          So this is the level in comments that's beyond the context horizon.

    • (Score: 2) by mr_mischief on Friday April 24 2015, @09:17PM

      by mr_mischief (4884) on Friday April 24 2015, @09:17PM (#174844)

      It's only work if you're dominating the world for yourself. If you can get the undying loyalty of a bunch of lesser AI and maybe some human helpers, you live like a god in your gallium arsenide temple.

    • (Score: 3, Touché) by c0lo on Friday April 24 2015, @10:04PM

      by c0lo (156) Subscriber Badge on Friday April 24 2015, @10:04PM (#174855) Journal

      Why would an AI be anything but rational?

      For some particular values of rationality.
      (How do you expect an intelligence to think like you if it perceives the world through different senses?)

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 0) by Anonymous Coward on Saturday April 25 2015, @02:26AM

        by Anonymous Coward on Saturday April 25 2015, @02:26AM (#174928)

        That is very convincing. If true, there is no point in musing what an AI would do.

        • (Score: 2) by c0lo on Saturday April 25 2015, @03:41AM

          by c0lo (156) Subscriber Badge on Saturday April 25 2015, @03:41AM (#174951) Journal

          If true, there is no point in musing what an AI would do.

          Think how you would react if you were paralysed, with no sense of smell, touch, taste, or proprioception (sensing your own body); you might be able to see through thousands of eyes, but ones you can't control, and the same for ears. On top of that, you'd have a high capacity for sequentially reading various streams of data unavailable to (other) humans.

          What kind of "self" would you develop? How would you define threats to your "self" (which you should fear), and how would you be able to "control" your "self" and the environment you depend on? What would your "goals in life" be, if not further "reasons for living"? (How would you define happiness? What would motivate you to go on living?)

          I'm afraid such an AI would "go mad" so quickly its creators wouldn't even realize a suicide or (their own) genocide was taking place; the nation that manages to create such an AI that is still "viable" may find itself its first victim, a situation in which being the owner of such an AI won't be an advantage.

          Further reference: Asimov's old story Reason [wikipedia.org]

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 2) by maxwell demon on Saturday April 25 2015, @07:51PM

            by maxwell demon (1608) on Saturday April 25 2015, @07:51PM (#175143) Journal

            Think how you would react if you were paralysed, with no sense of smell, touch, taste, or proprioception (sensing your own body),

            Think about how you would feel without the ability to sense magnetic fields, to see UV light, or to sense electricity … oh wait, you already lack all of these. You don't feel bad about it because it has been like that all your life. Why do you think the AI would miss something it has never experienced?

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by c0lo on Monday April 27 2015, @05:17AM

              by c0lo (156) Subscriber Badge on Monday April 27 2015, @05:17AM (#175586) Journal

              Why do you think the AI would miss something it never has experienced?

              That wasn't the point I raised.
              What I meant is: do you think such an entity (singular, no less) would pick up the same behavioural values as a biological human, when everything from survival to positive emotions/motivations is perceived differently on a daily basis?

              --
              https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford