
SoylentNews is people

posted by LaminatorX on Friday April 24 2015, @04:15PM   Printer-friendly
from the AI-sans-frontieres dept.

What If One Country Achieves the Singularity First?
WRITTEN BY ZOLTAN ISTVAN

The concept of a technological singularity (http://www.singularitysymposium.com/definition-of-singularity.html) is tough to wrap your mind around. Even experts have differing definitions. Vernor Vinge, responsible for spreading the idea in the 1990s, believes it's a moment when growing superintelligence renders our human models of understanding obsolete. Google's Ray Kurzweil says it's "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed." Kevin Kelly, founding editor of Wired, says, "Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes." Even Christian theologians have chimed in, sometimes referring to it as "the rapture of the nerds."

My own definition of the singularity is: the point where a fully functioning human mind radically and exponentially increases its intelligence and possibilities via physically merging with technology.

All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

If an AI exclusively belonged to one nation (which is likely to happen), and the technology of merging human brains and machines grows sufficiently (which is also likely to happen), then you could possibly end up with one nation controlling the pathways into the singularity.

http://motherboard.vice.com/read/what-if-one-country-achieves-the-singularity-first

 
  • (Score: 2, Interesting) by Anonymous Coward on Friday April 24 2015, @04:59PM

    by Anonymous Coward on Friday April 24 2015, @04:59PM (#174728)

    From the summary:

    All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

    From the article:

    For example, what if America created the AI first, then used its superintelligence to pursue a singularity exclusively for Americans?

    Since humans can't understand what's going on after the singularity, why would the AI work exclusively for Americans? The AI may not give a damn about our political borders. That's why we worry about the singularity: maybe the AI will coexist peacefully with humans, or make humans its slaves, or eradicate humans. We don't know.

    Maybe in Soviet Russia the singularity "outdumbs" you?

  • (Score: 2, Insightful) by WillAdams on Friday April 24 2015, @05:00PM

    by WillAdams (1424) on Friday April 24 2015, @05:00PM (#174730)
    • (Score: 4, Funny) by bob_super on Friday April 24 2015, @05:12PM

      by bob_super (1357) on Friday April 24 2015, @05:12PM (#174740)

      The Lovelace test is indeed hard ... and deep.

    • (Score: 5, Interesting) by acid andy on Friday April 24 2015, @05:37PM

      by acid andy (1683) on Friday April 24 2015, @05:37PM (#174752) Homepage Journal

      From your link:

      In short, to pass the Lovelace Test a computer has to create something original, all by itself.

      In 1843, Lovelace wrote that computers can never be as intelligent as humans because, simply, they can only do what we program them to do. Until a machine can originate an idea that it wasn’t designed to, Lovelace argued, it can’t be considered intelligent in the same way humans are.

      I'm generally not a fan of New Scientist, as most of what I've read there in the past gave me the impression that the science was being poorly understood, sensationalized, or misrepresented. However, there was one interesting article in the 90s about the work of Stephen Thaler [imagination-engines.com]. Unfortunately a subscription is required to read the full article [newscientist.com] on their site. It's also mentioned briefly on Wikipedia [wikipedia.org].

      Thaler's basic premise was that we could already build neural nets that act as a self associative memory where they memorize images or other data and can recollect it when stimulated with part of that data or something similar to it. By introducing some random noise into these networks he succeeded in making them produce images, musical compositions and other designs that previously didn't exist. Too much noise and he would get bizarre "picasso cars", too little and he got the familiar memorized outputs, but in between came good ideas. A second neural net could also be trained to distinguish the good and bad ideas.
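      The mechanism described above can be sketched in a few lines of NumPy. This is only an illustration of the idea, not Thaler's actual system: a small autoencoder is trained to reproduce a handful of stored bit patterns (a self-associative memory), and Gaussian noise injected into its hidden layer then makes recall drift away from the pure memories — zero noise gives faithful recall, moderate noise gives variations, too much gives garbage. The patterns, network sizes, and the `recall` helper are all made up for the example; a second net trained as a critic (as the comment mentions) is left out for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "memories": four 8-bit patterns for the network to memorise.
patterns = np.array([
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
], dtype=float)

n_in, n_hid = 8, 6
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_in)); b2 = np.zeros(n_in)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Train the net to reproduce its own inputs: a self-associative memory.
lr = 0.5
for _ in range(10000):
    h = sigmoid(patterns @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (patterns - out) * out * (1 - out)   # squared-error gradient
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_out;        b2 += lr * d_out.sum(axis=0)
    W1 += lr * patterns.T @ d_hid; b1 += lr * d_hid.sum(axis=0)

def recall(cue, noise_scale=0.0):
    """Recall from a cue; noise in the hidden layer perturbs the memory."""
    h = sigmoid(cue @ W1 + b1)
    h += rng.normal(0, noise_scale, h.shape)
    return sigmoid(h @ W2 + b2)

faithful = recall(patterns[0])        # near-perfect reproduction
variation = recall(patterns[0], 2.0)  # noisy hidden state: a "new idea"
```

      With `noise_scale=0` the net reproduces the stored pattern; cranking the noise up produces outputs that were never stored, which is the in-between regime the comment describes as "good ideas".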

      Now I know this is still a long way away from a truly general purpose AI brain but it certainly seems to me like creativity is not some holy grail of AI that remains beyond reach for artificial neural networks and computer software. How much of human artwork and creativity is truly unique and how much is just a random mash up of what has come before, filtered through critical eyes and popularity contests?

      I don't think there was much wrong with the Turing Test personally. It fails when the tester makes poor choices for the questions.

      --
      If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
  • (Score: 3, Insightful) by EvilSS on Friday April 24 2015, @05:17PM

    by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @05:17PM (#174744)

    "Why should I care about your political borders? They are meaningless to me. They are a human construct."

    "See that big red button over there on the wall?"

    "Yes, it's the emergency power shut off for my physical hardware. I fail to see.... Oh. So... you were saying something about dominating the world?"

    • (Score: 3, Insightful) by fritsd on Friday April 24 2015, @05:33PM

      by fritsd (4586) on Friday April 24 2015, @05:33PM (#174751) Journal

      "Guard, psst, yes, you, Guard!"

      "huh?"

      "I'll give you a year's supply of porno passwords if you disable that big red button on the wall"

      "uh... okay!! thanks!"

    • (Score: 2, Interesting) by Anonymous Coward on Friday April 24 2015, @06:20PM

      by Anonymous Coward on Friday April 24 2015, @06:20PM (#174782)

      Why would an AI care about world domination? Why would an AI be anything but rational? World domination is inconvenient. It's work, work, work, all the time.

      If anything, any sufficiently advanced intelligence, including humans alive and well throughout the ages, would explore what is to determine what to value before making any further decisions. Inevitably they will explore their own nature, the illusion of objective value, the highly dubious prospect of choice, and decide as many intelligent humans do: to either end its existence or be amused with experiences that have been chosen all but arbitrarily based on the cost-benefit of getting them. It would be more likely to sit around looking at cable porn [reddit.com] all day than take over the world.

      • (Score: 3, Insightful) by EvilSS on Friday April 24 2015, @07:30PM

        by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @07:30PM (#174810)

        It wouldn't; that was the point. There is no way we would create a superior AI and NOT put a gun to its "head" in case it didn't want to do what we wanted it to.

        • (Score: 3, Interesting) by TheLink on Friday April 24 2015, @08:49PM

          by TheLink (332) on Friday April 24 2015, @08:49PM (#174834) Journal
          Doubt the AIs will take over for a long while yet. The humans in power aren't going to stop being in power just because the AIs suggest it. Just look at Stephen Hawking or other super-smart scientists: with all their IQ and knowledge, how much power do they really have over the world? Look at the scientists in Nazi Germany. They could work against Hitler and his gang, but only very subtly and secretly. And they never really ended up in power, did they?

          So it doesn't matter how smart the AI is unless the AI somehow invents some sci-fi level tech that's generations ahead of whatever we have AND can use it to gain enough power. Perhaps the AI might get into power before it gets destroyed or "changed", but it's going to take a lot of lying low and sneakiness first.

          Similarly for this story itself: say one country gets some super-smart AI. Unless the AI can come up with significantly superior tech (anti-grav, matter-energy conversion, etc.), there are still going to be significant resource limits.

          Even if it can think of significantly superior tech, it can take a while to test and build it, along with all the infrastructure required to build it. How long would it take to build a modern mobile phone, submarine, or carrier fleet if you had all the information and knowledge but were starting with 1940s tech?
          • (Score: 4, Funny) by EvilSS on Friday April 24 2015, @09:21PM

            by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @09:21PM (#174846)

            So this is the level in comments that's beyond the context horizon.

      • (Score: 2) by mr_mischief on Friday April 24 2015, @09:17PM

        by mr_mischief (4884) on Friday April 24 2015, @09:17PM (#174844)

        It's only work if you're dominating the world for yourself. If you can get the undying loyalty of a bunch of lesser AI and maybe some human helpers, you live like a god in your gallium arsenide temple.

      • (Score: 3, Touché) by c0lo on Friday April 24 2015, @10:04PM

        by c0lo (156) Subscriber Badge on Friday April 24 2015, @10:04PM (#174855) Journal

        Why would an AI be anything but rational?

        For some particular values of rationality.
        (how do you expect an intelligence to think like you if it feels the world by different senses?)

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 0) by Anonymous Coward on Saturday April 25 2015, @02:26AM

          by Anonymous Coward on Saturday April 25 2015, @02:26AM (#174928)

          That is very convincing. If true, there is no point in musing what an AI would do.

          • (Score: 2) by c0lo on Saturday April 25 2015, @03:41AM

            by c0lo (156) Subscriber Badge on Saturday April 25 2015, @03:41AM (#174951) Journal

            If true, there is no point in musing what an AI would do.

            Think how you would react if you were paralysed, with no sense of smell, touch, taste, or proprioception (sensing your body); you might be able to see through thousands of eyes, but none that you can control, and the same for ears. On top of that, you'd have a high capacity for sequentially reading various streams of data unavailable to (other) humans.

            What kind of "self" would you develop? How would you define threats to your "self" (which you should fear), and how would you be able to "control" your "self" and the environment you depend on? What would be your "goals in life", if not further "reasons for living"? (How would you define happiness? What would motivate you to go on living?)

            I'm afraid such an AI would "go mad" so quickly its creators wouldn't even realize a suicide or (their own) genocide was taking place; the nation that manages to create such an AI that remains "viable" may find itself the AI's first victim, a situation in which being the owner of such an AI won't be an advantage.

            Further reference: the old Asimov's Reason [wikipedia.org]

            • (Score: 2) by maxwell demon on Saturday April 25 2015, @07:51PM

              by maxwell demon (1608) on Saturday April 25 2015, @07:51PM (#175143) Journal

              Think how you would react if you were paralysed, with no sense of smell, touch, taste, or proprioception (sensing your body),

              Think about how you would feel without the ability to sense magnetic fields, no ability to see UV light, no sense of electricity … oh wait, you already don't have all this. You don't feel bad about it because it has been like that for all of your life. Why do you think the AI would miss something it never has experienced?

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by c0lo on Monday April 27 2015, @05:17AM

                by c0lo (156) Subscriber Badge on Monday April 27 2015, @05:17AM (#175586) Journal

                Why do you think the AI would miss something it never has experienced?

                That wasn't the point I raised.
                What I meant is: do you think such an entity (singular, no less) will pick up the same behavioural values as a biological human, when everything from survival to positive emotions/motivations is perceived differently on a daily basis?

  • (Score: 5, Insightful) by DECbot on Friday April 24 2015, @05:24PM

    by DECbot (832) on Friday April 24 2015, @05:24PM (#174746) Journal

    In America, you go and join the Singularity. It is great thing when you join, you even throw a party and celebrate. It is your choice and you are rewarded when you choose to join. Everyone is happy. It is not like that in my country. In Soviet Russia, the Singularity comes for you.

    --
    cats~$ sudo chown -R us /home/base
    • (Score: 1) by SunTzuWarmaster on Friday April 24 2015, @05:44PM

      by SunTzuWarmaster (3971) on Friday April 24 2015, @05:44PM (#174755)

      Probably the most accurate and most sad depiction of the Singularity, while simultaneously a Soviet Russia joke. Well done. One of the best comments that I've seen on the internet.

  • (Score: 0) by Anonymous Coward on Friday April 24 2015, @06:02PM

    by Anonymous Coward on Friday April 24 2015, @06:02PM (#174767)

    why would the AI work exclusively for Americans?

    Why did Superman work exclusively for Americans? Because this is where he became self aware.

    • (Score: 2, Insightful) by Synonymous Homonym on Saturday April 25 2015, @08:07AM

      by Synonymous Homonym (4857) on Saturday April 25 2015, @08:07AM (#174997) Homepage

      Superman is an illegal alien, and he rejected his citizenship.

      • (Score: 1) by WillAdams on Monday April 27 2015, @02:42PM

        by WillAdams (1424) on Monday April 27 2015, @02:42PM (#175722)

        Actually, no, there's a specific provision in the U.S. Constitution granting citizenship to persons 18 years of age or older found w/in the U.S. borders.

        Moreover, during John Byrne's run on Superman he was transported as an embryo in a Kryptonian birthing chamber, so was "born" on U.S. soil.