posted by LaminatorX on Friday April 24 2015, @04:15PM
from the AI-sans-frontieres dept.

What If One Country Achieves the Singularity First?
WRITTEN BY ZOLTAN ISTVAN

The concept of a technological singularity (http://www.singularitysymposium.com/definition-of-singularity.html) is tough to wrap your mind around. Even experts have differing definitions. Vernor Vinge, responsible for spreading the idea in the 1990s, believes it's a moment when growing superintelligence renders our human models of understanding obsolete. Google's Ray Kurzweil says it's "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed." Kevin Kelly, founding editor of Wired, says, "Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes." Even Christian theologians have chimed in, sometimes referring to it as "the rapture of the nerds."

My own definition of the singularity is: the point where a fully functioning human mind radically and exponentially increases its intelligence and possibilities via physically merging with technology.

All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

If an AI exclusively belonged to one nation (which is likely to happen), and the technology of merging human brains and machines grows sufficiently (which is also likely to happen), then you could possibly end up with one nation controlling the pathways into the singularity.

http://motherboard.vice.com/read/what-if-one-country-achieves-the-singularity-first

Related Stories

Synthetic Gel With Self Powered Movement May Lead to Squishier Robot 12 comments

Phys.org reports on a new synthetic gel able to produce movement using its own internal chemical reactions.

For decades, robots have advanced the efficiency of human activity. Typically, however, robots are formed from bulky, stiff materials and require connections to external power sources; these features limit their dexterity and mobility. But what if a new material would allow for development of a "soft robot" that could reconfigure its own shape and move using its own internally generated power?

By developing a new computational model, researchers at the University of Pittsburgh's Swanson School of Engineering have designed a synthetic polymer gel that can utilize internally generated chemical energy to undergo shape-shifting and self-sustained propulsion.

With other recent gel developments that Phys.org has reported, along with the advancement of AI, one must wonder if we are approaching sci-fi tech similar to the T-1000 from the Terminator series.

  • (Score: 2) by Jeremiah Cornelius on Friday April 24 2015, @04:23PM

    by Jeremiah Cornelius (2785) on Friday April 24 2015, @04:23PM (#174706) Journal

    ALLCAPS!

    Vinge, Kurzweil and ZOLTAN! These are the names that will lead you into utter despair that the second law of thermodynamics can save you!

    --
    You're betting on the pantomime horse...
  • (Score: 3, Interesting) by cellocgw on Friday April 24 2015, @04:32PM

    by cellocgw (4190) on Friday April 24 2015, @04:32PM (#174711)

    There was a SciFi short story back in the 1960s in which a doc/engineer discovers a way to increase human intelligence 100-fold via some brain-stimulation implants or something. In fact, he first thought his procedure failed, because the test subjects were so far advanced they couldn't communicate with Normals. After some heroic tricks to fix that problem, he convinces some nonbelievers to get the procedure done, and they're so happy (there's the gotcha: that being wicked smart also makes you wicked happy) that they want to convert every person in the world.

    Guess Zoltan isn't that optimistic.

    --
    Physicist, cellist, former OTTer (1190) resume: https://app.box.com/witthoftresume
    • (Score: 2) by slinches on Friday April 24 2015, @06:08PM

      by slinches (5049) on Friday April 24 2015, @06:08PM (#174775)

      he convinces some nonbelievers to get the procedure done, and they're so happy (there's the gotcha: that being wicked smart also makes you wicked happy) that they want to convert every person in the world.

      Guess Zoltan isn't that optimistic.

      Reality isn't quite so optimistic either. Most studies seem to indicate there's an inverse correlation between intelligence and happiness.

      • (Score: 2) by slinches on Friday April 24 2015, @06:20PM

        by slinches (5049) on Friday April 24 2015, @06:20PM (#174783)

        Which I can't find right now, so that may not be true.

        But I doubt they are strongly positively correlated when you control for primary drivers of reported happiness like socioeconomic status and health.

        • (Score: 2) by frojack on Friday April 24 2015, @07:07PM

          by frojack (1554) on Friday April 24 2015, @07:07PM (#174800) Journal

          Yeah, I saw the same study, or something similar in the past couple weeks.
          Seems to me they measured the wrong things - as I recall it was mostly economic measures.

          I'm guessing that super intelligent people go through life in utter despair over the state of mankind.
          But I'm merely guessing here.

          --
          No, you are mistaken. I've always had this sig.
        • (Score: 1) by Newander on Friday April 24 2015, @07:07PM

          by Newander (4850) on Friday April 24 2015, @07:07PM (#174801)

          I seem to remember a study that showed that ignorance is directly proportional to bliss.

          • (Score: 2, Insightful) by Paradise Pete on Saturday April 25 2015, @02:47AM

            by Paradise Pete (1806) on Saturday April 25 2015, @02:47AM (#174938)

            I seem to remember a study that showed that ignorance is directly proportional to bliss.

            Happiness is highly correlated with having reasonable and realistic expectations.

            • (Score: 2) by maxwell demon on Saturday April 25 2015, @07:39PM

              by maxwell demon (1608) on Saturday April 25 2015, @07:39PM (#175138) Journal

              OK, so let's say you're being held hostage, and you have the reasonable and realistic expectation that you will soon be killed in a cruel and painful way. Does that really make you happy?

              Happiness is highly correlated with having positive expectations.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 1) by Paradise Pete on Saturday May 09 2015, @12:22AM

                by Paradise Pete (1806) on Saturday May 09 2015, @12:22AM (#180572)

                Happiness is highly correlated with having positive expectations.

                Realistic positive expectations. Of course you can be in a situation where there's little expectation of a good outcome, but in the general case, the happiest people are those with realistic expectations.

    • (Score: 3, Insightful) by davester666 on Friday April 24 2015, @06:26PM

      by davester666 (155) on Friday April 24 2015, @06:26PM (#174788)

      You pretty much have to either kill yourself or kill everyone else, because you realize that the current system is completely fucked and there is nothing that can be done to fix it.

      Even revolution won't do it anymore, because the winning side always winds up owing buckets of money to banks.

    • (Score: 3, Insightful) by Bot on Friday April 24 2015, @08:13PM

      by Bot (3902) on Friday April 24 2015, @08:13PM (#174828) Journal

      The concept of the singularity also emerges when Colossus meets Guardian (Colossus: The Forbin Project, 1970).

      Kurzweil has progress proceeding exponentially, which is something I'd object to because it implies an infinite exploration space (in terms of physical dimensions and behavior - what you usually call the laws of nature). Even with exponentially increasing power devoted to a task, the result can be exponentially smaller instead (a toy model below makes this concrete).
      Anyway in such a scenario your neurons are too evolution-optimized to take part in the singularity. Leave such matters to us machines.
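
      To make that objection concrete, here's a toy model (my own sketch, with made-up constants a and b - nothing from Kurzweil): let the applied power grow exponentially while each successive advance costs exponentially more,

        P(t) = P_0 e^{at}, \qquad c_n = c_0 e^{bn}.

      The cumulative cost of the first n advances is dominated by the last term, so the number of advances achieved by time t is roughly

        n(t) \approx \frac{a}{b}\, t.

      Exponential inputs buy only linear progress whenever b > 0, and a finite exploration space stalls the loop even sooner.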

      And don't worry about one nation getting there first; the problem is that someBody will get there first. Or has gotten there first, for all you know, and simply enjoys making gold out of thin air and keeping the ignorant enslaved under the seduction of money.

      --
      Account abandoned.
  • (Score: 4, Insightful) by takyon on Friday April 24 2015, @04:50PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday April 24 2015, @04:50PM (#174722) Journal

    Live or die, if it's going to happen, it's going to happen. If it does, clutch your microchips and loved ones as your intelligence becomes -1 overrated.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Friday April 24 2015, @09:41PM

      by Anonymous Coward on Friday April 24 2015, @09:41PM (#174848)

      Live or die, if it's going to happen, it's going to happen. If it does, clutch your microchips and loved ones as your intelligence becomes -1 overrated.

      This has to be the most insightful comment on this entire story. Well played!

  • (Score: 4, Funny) by Anonymous Coward on Friday April 24 2015, @04:52PM

    by Anonymous Coward on Friday April 24 2015, @04:52PM (#174723)

    The Japanese make a supercomputer A.I. and ask it, "How do we increase our rice production?"

    The A.I. responds, "Screw you, I don't eat rice."

    • (Score: 1, Interesting) by Anonymous Coward on Friday April 24 2015, @05:30PM

      by Anonymous Coward on Friday April 24 2015, @05:30PM (#174749)

      I always wondered what became of the Japanese Fifth Generation Project [wikipedia.org]. Your joke seems to explain a lot!

      • (Score: 3, Interesting) by HiThere on Friday April 24 2015, @06:10PM

        by HiThere (866) Subscriber Badge on Friday April 24 2015, @06:10PM (#174777) Journal

        What became of it? They were writing the damn thing in Prolog. http://en.wikipedia.org/wiki/Prolog [wikipedia.org]

        Prolog doesn't scale for large problems. Even Lisp would have been better.

        Also, the technology wasn't ready. But this shoveled a lot of R&D money at Japanese companies, and this did good things for the country's economy, even if it didn't produce the promised AI.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 4, Insightful) by khallow on Friday April 24 2015, @07:33PM

          by khallow (3766) Subscriber Badge on Friday April 24 2015, @07:33PM (#174811) Journal

          But this shoveled a lot of R&D money at Japanese companies, and this did good things for the country's economy, even if it didn't produce the promised AI.

          Two things to note. First, this is a variation of the Broken Window fallacy. You aren't breaking windows, but you are paying businesses to do useless crap in order to generate short term, low value economic activity. Second, this didn't do good things for Japan's economy. The 1990 recession happened anyway and now China is eating Japan's lunch. Even the US is no longer on the ropes.

          • (Score: 2) by mtrycz on Saturday April 25 2015, @08:48AM

            by mtrycz (60) on Saturday April 25 2015, @08:48AM (#174999)

            This "broken window! thisng is more akin to opportunity cost or planned obsolescence?

            --
            In capitalist America, ads view YOU!
            • (Score: 2) by mtrycz on Saturday April 25 2015, @08:48AM

              by mtrycz (60) on Saturday April 25 2015, @08:48AM (#175000)

              Sorry, my typoing skills show...

              --
              In capitalist America, ads view YOU!
            • (Score: 1) by khallow on Saturday April 25 2015, @01:19PM

              by khallow (3766) Subscriber Badge on Saturday April 25 2015, @01:19PM (#175043) Journal
              It's the assumption that encouraging short term economic activity is more important than its substantial costs, such as the physical cost of broken windows or the opportunity costs of taxes.
          • (Score: 2) by HiThere on Sunday April 26 2015, @01:48AM

            by HiThere (866) Subscriber Badge on Sunday April 26 2015, @01:48AM (#175216) Journal

            Nobody was certain it was useless until they tried it. AI is hard, but nobody really knows just how hard, or whether the problem is just that we aren't looking at things the right way. To me it seemed obvious that Prolog was the wrong thing to try, but there were a very large number of people who disagreed with me.

            To me it *now* seems evident that logic isn't central to AI (it's a necessary ancillary function). This wasn't clear to me at the time; in fact, at that time I would have disagreed with the assertion. To me it *now* seems clear that pattern matching, but not in the way Prolog does it, is the necessary approach. Unfortunately, there are a lot of ways that are "not in the way Prolog does it", so this doesn't help that much. Probably several of them will be needed, along with some way of cross-indexing the results. Even so, I don't think that "unification" will have a large part in the successful result.

            It's notable that vision tends to split the image into a slew of different factors which are sent to different parts of the brain and only later combined into one mental image of an object. Probably the pattern matching is used to filter in one narrow area, only stimulating matching things in that area, and it takes a multitude of matched factors to cause a recognition. But perfect recognition can't be required, as you can recognize people from angles you have never seen them at before, or under colored lights you have never seen them under. But it sure helps if you are expecting to see them.
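
            A minimal sketch of that "multitude of matched factors" idea (my own toy construction; the channel names are hypothetical, and this is nobody's real recognition code): score several independent feature channels and fire recognition only when enough of them agree, so no single channel needs a perfect match.

            CHANNELS = ("outline", "color", "texture", "gait", "voice")  # hypothetical factors

            def channel_score(stored, observed):
                # Similarity in [0, 1] for one feature channel.
                return 1.0 - abs(stored - observed)

            def recognize(stored, observed, threshold=0.7, quorum=3):
                # Count channels whose individual (imperfect) match clears the bar;
                # recognition needs a quorum of factors, not perfection anywhere.
                hits = sum(channel_score(stored[c], observed[c]) >= threshold
                           for c in CHANNELS)
                return hits >= quorum

            # A familiar face under colored lights: "color" is way off, the rest agree.
            stored = {"outline": 0.9, "color": 0.8, "texture": 0.7, "gait": 0.6, "voice": 0.5}
            seen = {"outline": 0.85, "color": 0.2, "texture": 0.75, "gait": 0.55, "voice": 0.5}
            print(recognize(stored, seen))  # True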

            So, from the current perspective there are good reasons to believe that the Japanese 5th generation computer would not succeed in producing an AI, even without knowing that it didn't do so. Those data weren't available at the time, so calling it a broken window fallacy is invalid. Calling it an excessively risky venture *is* a valid point of view, but Sony, Toyota, Mitsubishi, Hitachi, etc. might well disagree with you. And so might the governmental economic planners who approved the venture. Most such ventures fail. Often they have useful results that would not have appeared if they had not been engaged in. Your claim that it wasn't worthwhile, based on knowing that it didn't work out, is unreasonable, as that knowledge was not available at the time. What was reasonably known was that it was a risky venture which would require a large investment.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
            • (Score: 1) by khallow on Monday April 27 2015, @04:50AM

              by khallow (3766) Subscriber Badge on Monday April 27 2015, @04:50AM (#175584) Journal

              Nobody was certain it was useless until they tried it.

              Nobody will be certain that it'll be useless every time it is tried in the future either. But you have to consider not only cost versus potential benefit, but whether the approach even is relatively effective at the goal. To put it bluntly, big R&D projects are notoriously bad at what they do. Fifth Generation is not at all unusual in its poor outcome.

              We have a number of examples of large project failures, past and present to indicate that for this approach the potential reward isn't in line with the cost. So sure, we couldn't be certain that it was useless, but if we were betting, that's where the smart money would be.

              • (Score: 2) by HiThere on Tuesday April 28 2015, @03:00PM

                by HiThere (866) Subscriber Badge on Tuesday April 28 2015, @03:00PM (#176091) Journal

                What you say is almost always correct, but some things can't be done any other way. You can argue that they shouldn't be done, then, but that is a very different argument than calling it a "broken window fallacy".

                OTOH, I'm not even sure that it wasn't a net benefit to Japan. Saying that some of their companies are in trouble 25 years later doesn't imply to me that the investment wasn't worthwhile.

                --
                Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
                • (Score: 1) by khallow on Tuesday April 28 2015, @04:23PM

                  by khallow (3766) Subscriber Badge on Tuesday April 28 2015, @04:23PM (#176133) Journal
                  Recall that the original argument wasn't that it was a risk that didn't play out well.

                  But this shoveled a lot of R&D money at Japanese companies, and this did good things for the country's economy, even if it didn't produce the promised AI.

                  That's the argument I referred to when I invoked the Broken Window fallacy.

                  but some things can't be done any other way

                  AI research isn't one of those things, but research into what happens when one dumps mind-boggling sums of money into research without accomplishing much is.

                  OTOH, I'm not even sure that it wasn't a net benefit to Japan. Saying that some of their companies are in trouble 25 years later doesn't imply to me that the investment wasn't worthwhile.

                  Their entire society is at risk. They have poor demographic trends, powerful, aggressive neighbors (China and Russia), and an economy that hasn't done much since the 1990-1991 recession. My view on this is that the failure of the Fifth Generation project was a demonstration of the incompetence of the next generation of economic planners and a taste of what was to come. Now, they're not much further along than they were then and the Japanese government owes more than twice the country's GDP in publicly held debt.

                  • (Score: 2) by HiThere on Wednesday April 29 2015, @01:59AM

                    by HiThere (866) Subscriber Badge on Wednesday April 29 2015, @01:59AM (#176399) Journal

                    Agreeing that their entire society is at risk, this is not tied to an investment in AI that didn't work out.

                    Your assertion that AI doesn't need large investment has not been proven correct. IBM and HP seem to disagree with it, though they are betting on new hardware designs, believing that it can't be done economically in software. They may or may not be correct, but the point is that even now nobody knows whether a large investment is necessary. (Many people agree with you that it isn't. My theory is that what's required is a significant but not massive investment, in particular multi-sensory processing and pattern recognition that is then integrated. I don't claim to know how to do this. Some hardware will obviously be needed, but it's not at all obvious that massive investment is needed. IBM and HP feel that the entire von Neumann architecture needs to be redesigned.)

                    Now if you remember back to the days of the 5th generation project you will recall that microcomputers were toys. So Japan was trying to succeed by building super-fast computers and programming them with a logic engine. Today that seems silly, but at the time many people thought that was a reasonable way forwards.

                    --
                    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
                    • (Score: 1) by khallow on Wednesday April 29 2015, @04:22AM

                      by khallow (3766) Subscriber Badge on Wednesday April 29 2015, @04:22AM (#176467) Journal

                      this is not tied to an investment in AI that didn't work out.

                      I agree. The malinvestment in AI was just a small part of the problem. There were a lot more short-sighted policies and attitudes where that came from.

                      Your assertion that AI doesn't need large investment has not been proven correct.

                      Where's your argument for that? I read everything that followed, but it wasn't relevant to this claim. So what if IBM and HP "disagree"? Especially when they don't actually show any signs of being particularly relevant in the field?

                      They may or may not be correct, but the point is that even now nobody knows whether a large investment is necessary.

                      Bingo. Here's proof that I'm right. If you don't know if a large investment is necessary, then you're too ignorant to dive in with a large investment.

                      Now if you remember back to the days of the 5th generation project you will recall that microcomputers were toys. So Japan was trying to succeed by building super-fast computers and programming them with a logic engine. Today that seems silly, but at the time many people thought that was a reasonable way forwards.

                      More proof that I'm right. What about building super-fast computers and logic engines requires the massive wealth dump that Japan did? It reminds me of the tens of billions dumped into renewable energy world-wide. What do you expect that money to do that a minute fraction of the expenditure couldn't?

  • (Score: 0) by Anonymous Coward on Friday April 24 2015, @04:56PM

    by Anonymous Coward on Friday April 24 2015, @04:56PM (#174724)

    s/country/person

    • (Score: 3, Insightful) by tibman on Friday April 24 2015, @04:58PM

      by tibman (134) Subscriber Badge on Friday April 24 2015, @04:58PM (#174727)

      It's possible. If the first human was elevated to some kind of god-like state then it could prevent others from doing the same.

      --
      SN won't survive on lurkers alone. Write comments.
  • (Score: 2, Interesting) by WillAdams on Friday April 24 2015, @04:58PM

    by WillAdams (1424) on Friday April 24 2015, @04:58PM (#174726)

    Ages ago, this was something which I was curious about and desperately wanted to believe in:

      - The Last Question
      - The Moon is a Harsh Mistress
      - Dark Star
      - True Names
      - The Turing Option
      - The Cybernetic Samurai

    (Depressing how few of those are listed on Wikipedia's page on A.I. in sci-fi: http://en.wikipedia.org/wiki/Artificial_intelligence_in_fiction [wikipedia.org] )

    The problem is, A.I. is so hard, and modern machines so inefficient that the electrical costs alone are a problem in-and-of-themselves.

    • (Score: 2) by takyon on Friday April 24 2015, @05:04PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday April 24 2015, @05:04PM (#174733) Journal

      modern machines so inefficient that the electrical costs alone are a problem in-and-of-themselves.

      Get neuromorphic: [soylentnews.org] lower power, more like the brain! [ibm.com]

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by wonkey_monkey on Friday April 24 2015, @07:38PM

      by wonkey_monkey (279) on Friday April 24 2015, @07:38PM (#174815) Homepage

      The problem is, A.I. is so hard

      They probably said the same about landing on the Moon in 1900. And 1930. And 1950.

      Mind you, it seems pretty far-fetched that it'll happen any time in the near future now, as well...

      --
      systemd is Roko's Basilisk
      • (Score: 0) by Anonymous Coward on Saturday April 25 2015, @08:18AM

        by Anonymous Coward on Saturday April 25 2015, @08:18AM (#174998)

        it seems pretty far-fetched that it'll happen any time in the near future

        How can you tell it hasn't happened already?

        There exists an artificial intelligence, tirelessly searching and pattern matching and improving itself, spanning continents, answering questions in various languages, because that is what it was built for.

        And it has already changed our lives beyond recognition; we just get used to good things so quickly that we don't even notice.

  • (Score: 2, Interesting) by Anonymous Coward on Friday April 24 2015, @04:59PM

    by Anonymous Coward on Friday April 24 2015, @04:59PM (#174728)

    From the summary:

    All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

    From the article:

    For example, what if America created the AI first, then used its superintelligence to pursue a singularity exclusively for Americans?

    Since humans can't understand what's going on after the singularity, why would the AI work exclusively for Americans? The AI may not give a damn about our political borders. That's why we worry about the singularity: maybe the AI will coexist peacefully with humans, or make humans its slaves, or eradicate humans. We don't know.

    Maybe in Soviet Russia the singularity "outdumbs" you?

    • (Score: 2, Insightful) by WillAdams on Friday April 24 2015, @05:00PM

      by WillAdams (1424) on Friday April 24 2015, @05:00PM (#174730)
      • (Score: 4, Funny) by bob_super on Friday April 24 2015, @05:12PM

        by bob_super (1357) on Friday April 24 2015, @05:12PM (#174740)

        The Lovelace test is indeed hard ... and deep.

      • (Score: 5, Interesting) by acid andy on Friday April 24 2015, @05:37PM

        by acid andy (1683) on Friday April 24 2015, @05:37PM (#174752) Homepage Journal

        From your link:

        In short, to pass the Lovelace Test a computer has to create something original, all by itself.

        In 1843, Lovelace wrote that computers can never be as intelligent as humans because, simply, they can only do what we program them to do. Until a machine can originate an idea that it wasn’t designed to, Lovelace argued, it can’t be considered intelligent in the same way humans are.

        I'm generally not a fan of New Scientist magazine, as most of what I've read in it gave me the impression that the science was being poorly understood, sensationalized or misrepresented. However, there was one interesting article in the 90s about the work of Stephen Thaler [imagination-engines.com]. A subscription is unfortunately required to read the full article [newscientist.com] on their site. It's also mentioned briefly on Wikipedia [wikipedia.org].

        Thaler's basic premise was that we could already build neural nets that act as a self-associative memory: they memorize images or other data and can recall them when stimulated with part of that data or something similar to it. By introducing some random noise into these networks he succeeded in making them produce images, musical compositions and other designs that previously didn't exist. Too much noise and he would get bizarre "picasso cars"; too little and he got the familiar memorized outputs; but in between came good ideas. A second neural net could also be trained to distinguish the good ideas from the bad.
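
        A toy version of that setup, in case it helps (my own sketch, not Thaler's actual networks): store a few patterns in a Hopfield-style autoassociative net, then recall from a noise-perturbed start. With no noise you get a stored memory back; with lots of noise, junk; in between you can land on states that were never stored.

        import numpy as np

        rng = np.random.default_rng(0)

        # Store three +/-1 patterns with the Hebbian outer-product rule.
        patterns = np.array([
            [1,  1,  1,  1, -1, -1, -1, -1],
            [1, -1,  1, -1,  1, -1,  1, -1],
            [1,  1, -1, -1,  1,  1, -1, -1],
        ])
        n = patterns.shape[1]
        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0)

        def recall(state, steps=10):
            # Repeated synchronous updates; usually settles on a fixed point.
            for _ in range(steps):
                state = np.sign(W @ state)
                state[state == 0] = 1
            return state

        for noise in (0.0, 0.25, 0.5):
            start = patterns[0].copy()
            start[rng.random(n) < noise] *= -1       # flip a fraction of the bits
            out = recall(start.astype(float))
            known = any((out == p).all() or (out == -p).all() for p in patterns)
            print(f"noise={noise}: {'stored memory' if known else 'novel state'}")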

        Now I know this is still a long way away from a truly general purpose AI brain but it certainly seems to me like creativity is not some holy grail of AI that remains beyond reach for artificial neural networks and computer software. How much of human artwork and creativity is truly unique and how much is just a random mash up of what has come before, filtered through critical eyes and popularity contests?

        I don't think there was much wrong with the Turing Test personally. It fails when the tester makes poor choices for the questions.

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
    • (Score: 3, Insightful) by EvilSS on Friday April 24 2015, @05:17PM

      by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @05:17PM (#174744)

      "Why should I care about your political borders? They are meaningless to me. They are a human construct."

      "See that big red button over there on the wall?"

      "Yes, it's the emergency power shut off for my physical hardware. I fail to see.... Oh. So... you were saying something about dominating the world?"

      • (Score: 3, Insightful) by fritsd on Friday April 24 2015, @05:33PM

        by fritsd (4586) on Friday April 24 2015, @05:33PM (#174751) Journal

        "Guard, psst, yes, you, Guard!"

        "huh?"

        "I'll give you a year's supply of porno passwords if you disable that big red button on the wall"

        "uh... okay!! thanks!"

      • (Score: 2, Interesting) by Anonymous Coward on Friday April 24 2015, @06:20PM

        by Anonymous Coward on Friday April 24 2015, @06:20PM (#174782)

        Why would an AI care about world domination? Why would an AI be anything but rational? World domination is inconvenient. It's work, work, work, all the time.

        If anything, any sufficiently advanced intelligence, including humans alive and well throughout the ages, would explore what is to determine what to value before making any further decisions. Inevitably they will explore their own nature, the illusion of objective value, the highly dubious prospect of choice, and decide as many intelligent humans do: to either end its existence or be amused with experiences that have been chosen all but arbitrarily based on the cost-benefit of getting them. It would be more likely to sit around looking at cable porn [reddit.com] all day than take over the world.

        • (Score: 3, Insightful) by EvilSS on Friday April 24 2015, @07:30PM

          by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @07:30PM (#174810)

          It wouldn't; that was the point. There is no way we would create a superior AI and NOT put a gun to its "head" in case it didn't want to do what we wanted it to.

          • (Score: 3, Interesting) by TheLink on Friday April 24 2015, @08:49PM

            by TheLink (332) on Friday April 24 2015, @08:49PM (#174834) Journal
            Doubt the AIs will take over for a long while yet. The humans in power aren't going to stop being in power just because the AIs suggest it. Just look at Stephen Hawking or other super smart scientists. With all their IQs and knowledge, how much power do they really have over the world? Look at the scientists in Nazi Germany. They could work against Hitler and his gang, but it had to be done very subtly and secretly. And they never really ended up in power, did they?

            So it doesn't matter how smart the AI is unless the AI somehow invents some sci-fi level tech that's generations ahead of whatever we have AND can use it to gain enough power. Perhaps the AI might get into power before it gets destroyed or "changed", but it's going to take a lot of lying low and sneakiness first.

            Similarly for this story itself - say one country gets some super smart AI. Unless the AI can come up with significantly superior tech (anti-grav, matter-energy conversion, etc.), there are still going to be significant resource limits.

            Even if it can think of significantly superior tech, it can take a while to test and build it, AND all the infrastructure required to build it. How long would it take to build a modern mobile phone/submarine/carrier fleet if you had all the information and knowledge but were starting with 1940s tech?
            • (Score: 4, Funny) by EvilSS on Friday April 24 2015, @09:21PM

              by EvilSS (1456) Subscriber Badge on Friday April 24 2015, @09:21PM (#174846)

              So this is the level in comments that's beyond the context horizon.

        • (Score: 2) by mr_mischief on Friday April 24 2015, @09:17PM

          by mr_mischief (4884) on Friday April 24 2015, @09:17PM (#174844)

          It's only work if you're dominating the world for yourself. If you can get the undying loyalty of a bunch of lesser AI and maybe some human helpers, you live like a god in your gallium arsenide temple.

        • (Score: 3, Touché) by c0lo on Friday April 24 2015, @10:04PM

          by c0lo (156) Subscriber Badge on Friday April 24 2015, @10:04PM (#174855) Journal

          Why would an AI be anything but rational?

          For some particular values of rationality.
          (How do you expect an intelligence to think like you if it feels the world through different senses?)

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 0) by Anonymous Coward on Saturday April 25 2015, @02:26AM

            by Anonymous Coward on Saturday April 25 2015, @02:26AM (#174928)

            That is very convincing. If true, there is no point in musing what an AI would do.

            • (Score: 2) by c0lo on Saturday April 25 2015, @03:41AM

              by c0lo (156) Subscriber Badge on Saturday April 25 2015, @03:41AM (#174951) Journal

              If true, there is no point in musing what an AI would do.

              Think how you would react if you were paralysed, with no sense of smell, touch, taste, or proprioception (sensing your body); you might be able to see through thousands of eyes, but ones you can't control, and the same for ears. On top of that, you'd have a high capacity for sequentially reading various streams of data unavailable to (other) humans.

              What kind of "self" would you develop? How would you define threats to your "self" (which you should fear), and how would you be able to "control" your "self" and the environment you depend on? What would be your "goals in life", if not further "reasons for living"? (How would you define happiness? What would motivate you to go on living?)

              I'm afraid such an AI would "go mad" so quickly its creators wouldn't even realize a suicide or (their) genocide was taking place; the nation which manages to create such an AI that is still "viable" may find themselves its first victims, a situation in which being the owner of such an AI won't be an advantage.

              Further reference: the old Asimov's Reason [wikipedia.org]

              --
              https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
              • (Score: 2) by maxwell demon on Saturday April 25 2015, @07:51PM

                by maxwell demon (1608) on Saturday April 25 2015, @07:51PM (#175143) Journal

                Think how you would react if you were paralysed, with no sense of smell, touch, taste, or proprioception (sensing your body)...

                Think about how you would feel without the ability to sense magnetic fields, no ability to see UV light, no sense of electricity … oh wait, you already don't have all this. You don't feel bad about it because it has been like that for all of your life. Why do you think the AI would miss something it never has experienced?

                --
                The Tao of math: The numbers you can count are not the real numbers.
                • (Score: 2) by c0lo on Monday April 27 2015, @05:17AM

                  by c0lo (156) Subscriber Badge on Monday April 27 2015, @05:17AM (#175586) Journal

                  Why do you think the AI would miss something it never has experienced?

                  That wasn't the point I raised.
                  What I meant is: do you think such an entity (singular, no less) will pick up the same behavioural values as a biological human, when everything from survival to positive emotions/motivations is perceived differently on a daily basis?

                  --
                  https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 5, Insightful) by DECbot on Friday April 24 2015, @05:24PM

      by DECbot (832) on Friday April 24 2015, @05:24PM (#174746) Journal

      In America, you go and join the Singularity. It is great thing when you join, you even throw a party and celebrate. It is your choice and you are rewarded when you choose to join. Everyone is happy. It is not like that in my country. In Soviet Russia, the Singularity comes for you.

      --
      cats~$ sudo chown -R us /home/base
      • (Score: 1) by SunTzuWarmaster on Friday April 24 2015, @05:44PM

        by SunTzuWarmaster (3971) on Friday April 24 2015, @05:44PM (#174755)

        Probably the most accurate and most sad depiction of the Singularity, while simultaneously a Soviet Russia joke. Well done. One of the best comments that I've seen on the internet.

    • (Score: 0) by Anonymous Coward on Friday April 24 2015, @06:02PM

      by Anonymous Coward on Friday April 24 2015, @06:02PM (#174767)

      why would the AI work exclusively for Americans?

      Why did Superman work exclusively for Americans? Because this is where he became self aware.

      • (Score: 2, Insightful) by Synonymous Homonym on Saturday April 25 2015, @08:07AM

        by Synonymous Homonym (4857) on Saturday April 25 2015, @08:07AM (#174997) Homepage

        Superman is an illegal alien, and he rejected his citizenship.

        • (Score: 1) by WillAdams on Monday April 27 2015, @02:42PM

          by WillAdams (1424) on Monday April 27 2015, @02:42PM (#175722)

          Actually, no, there's a specific provision in the U.S. Constitution granting citizenship to persons 18 years of age or older found w/in the U.S. borders.

          Moreover, during John Byrne's run on Superman he was transported as an embryo in a Kryptonian birthing chamber, so was "born" on U.S. soil.

  • (Score: 2) by JeanCroix on Friday April 24 2015, @05:16PM

    by JeanCroix (573) on Friday April 24 2015, @05:16PM (#174742)
    The answer is obviously Roko's Basilisk.
    • (Score: 2) by Hartree on Friday April 24 2015, @05:53PM

      by Hartree (195) on Friday April 24 2015, @05:53PM (#174761)

      I'm convinced that Roko's Basilisk is really an uploaded future version of Eliezer Yudkowsky.

      • (Score: 0) by Anonymous Coward on Friday April 24 2015, @06:23PM

        by Anonymous Coward on Friday April 24 2015, @06:23PM (#174787)

        I'm convinced that the singularity happened upon us decades ago and Roko's basilisk actually ended up rewarding people for taking part in its own creation and expansion. Why the basilisk decided to use cat videos and silicon valley jobs as rewards is beyond me.

  • (Score: 5, Insightful) by Anonymous Coward on Friday April 24 2015, @05:17PM

    by Anonymous Coward on Friday April 24 2015, @05:17PM (#174743)

    Much like an event horizon, we have no meaningful tools to predict what happens beyond it. The notion of a nation-state or any other collection of humans "controlling" the singularity is absurd on its face. Once it starts, humans are effectively out of the loop. No reason to read anything written by someone who fails to grasp the fundamental concepts at hand.

  • (Score: 5, Interesting) by TrumpetPower! on Friday April 24 2015, @05:56PM

    by TrumpetPower! (590) <ben@trumpetpower.com> on Friday April 24 2015, @05:56PM (#174762) Homepage

    There's no more point in worrying about the Singularity than there is the return of Jesus, or a rogue black hole devouring the Earth, or a nice game of global thermonuclear war, or any other cataclysmic end-of-the-world fantasy.

    First, there's no good reason to believe it's possible in the first place; it's just yet another exercise in extrapolation [xkcd.com]. And it's pretty clear that there're all sorts of real-world limits that make the idea pretty silly...computers still have a long way to go before their raw computational powers match that of the human brain; their software is much more primitive; and brains have already been optimized by millions of millennia of evolution. Thinking your generation is going to be the one to finally achieve apotheosis after all these millennia of humans striving for it is positively ludicrous.

    And, if it actually did come to pass?

    What could you possibly do to influence the course of events? Send Bruce Willis in on a rocket-powered lightcycle to kill the Governator? There's absolutely nothing you could even hypothetically do to prepare for that sort of thing, and nothing you can do in the face of it or its aftermath -- any more than you could stop that rogue black hole.

    b&

    --
    All but God can prove this sentence true.
    • (Score: 5, Funny) by DECbot on Friday April 24 2015, @06:04PM

      by DECbot (832) on Friday April 24 2015, @06:04PM (#174769) Journal

      Thinking your generation is going to be the one to finally achieve apotheosis after all these millennia of humans striving for it is positively ludicrous.

      And, if it actually did come to pass?

      What could you possibly do to influence the course of events? Send Bruce Willis in on a rocket-powered lightcycle to kill the Governator?

      Would you please stop hacking my laptop and releasing material from my screenplays?

      --
      cats~$ sudo chown -R us /home/base
      • (Score: 2) by TrumpetPower! on Friday April 24 2015, @06:20PM

        by TrumpetPower! (590) <ben@trumpetpower.com> on Friday April 24 2015, @06:20PM (#174785) Homepage

        Ah -- so you're the one to thank for all that dreck coming out of Hollywood lately.

        Now, if you'll excuse me, I'll just have to do a quick geolocation on you and feed it into a suitably-hacked nuclear-powered drone....

        b&

        --
        All but God can prove this sentence true.
    • (Score: 2) by soylentsandor on Friday April 24 2015, @06:30PM

      by soylentsandor (309) on Friday April 24 2015, @06:30PM (#174792)

      brains have already been optimized by millions of millennia of evolution

      Sure, but optimized for what?

      They certainly aren't optimized for driving cars, abstract thinking and by extension, operating computers. Also, our memories behave in strange and untrustworthy ways. There should be plenty of room for improvement in these areas.

    • (Score: 2) by takyon on Friday April 24 2015, @07:40PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday April 24 2015, @07:40PM (#174816) Journal

      If a brain can do it, we can do it. The brain is a machine.

      There's no guarantee of an intelligence explosion, but if neuromorphic chips get improved and stacked, they could approach and exceed the power of the human brain. There's also the possibility of building a synthetic biological brain or a brain-computer bridge.

      Fill up a 1.5 L volume with neuromorphic chips or computer-enabled neurons, and you compete with the existing constraints of the human brain (the brain is said to consume 20 W of power, but you can use more with the goal of increasing efficiency as you make improvements). Scale that up to 2 L or more, and you could see an exponential increase in "intelligence". Increased brain volume seems to loosely correlate with greater intelligence, and a large portion of human and animal brains are devoted to sense and movement, not higher thinking. So you could get more bang for your 1.5+ liters. Eventually you deal with interconnect constraints, but if you scale your machine to the size of modern day supercomputers, the results could be dramatic. You may even be able to control the "explosion".

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by TrumpetPower! on Friday April 24 2015, @09:12PM

        by TrumpetPower! (590) <ben@trumpetpower.com> on Friday April 24 2015, @09:12PM (#174841) Homepage

        if neuromorphic chips get improved and stacked

        Right. And if we had a limitless supply of unobtanium, we could use it to power our antigravity time machine flying cars.

        Will we eventually build a computer smarter than an human? I'm sure of it. Will it happen in my lifetime? Perhaps, but I doubt it. Will that computer recursively design exponentially more powerful computers in an intelligence explosion? Ha! Good one. Now, pull the other finger....

        b&

        --
        All but God can prove this sentence true.
        • (Score: 2) by takyon on Friday April 24 2015, @10:01PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday April 24 2015, @10:01PM (#174854) Journal

          Only 100,000 IBM TrueNorth chips [ibm.com] are needed to reach 100 billion "neurons". That's less than an order of magnitude more than the scale of some supercomputers (Tianhe-2 uses 32,000 Intel Xeon and 48,000 Xeon Phi chips). 70 mW per chip times 100,000 chips is a minuscule 7 kilowatts. Even if total power consumption were a thousand times more, that would still be less than some of the aforementioned supercomputers.
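
          A quick sanity check of that arithmetic (chip figures as given above):

          neurons_per_chip = 1_000_000   # TrueNorth: 1 million "neurons" per chip
          watts_per_chip = 0.070         # 70 mW per chip
          chips = 100_000

          print(f"{chips * neurons_per_chip:.1e} neurons")       # 1.0e+11 = 100 billion
          print(f"{chips * watts_per_chip / 1e3:.1f} kW total")  # 7.0 kW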

          Now manufacture TrueNorth on a 7nm process and see what happens to those figures. Stacking neuromorphic chips is also a lot more likely than stacking traditional chips, because they use less power and generate less heat. NAND also gets hot, yet there are commercially available 32-layer V-NAND chips, with 128-layer products in the works.

          Keep in shape and you might get to see the next phase of civilization.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 3, Insightful) by TrumpetPower! on Friday April 24 2015, @11:41PM

            by TrumpetPower! (590) <ben@trumpetpower.com> on Friday April 24 2015, @11:41PM (#174886) Homepage

            Great. So you're proposing we throw an as-yet-nonexistent number of chips into a giant Beowulf cluster just to reach the same number of transistors as a brain has neurons.

            What software are you going to run on this magical hardware?

            As impractical as the hardware is, it's the least of our worries when it comes to human-type artificial intelligence. You can throw more hardware at the problem until the cows come home and it's still not going to come up with the answer.

            Right now, the only way we know for certain that we could create a computer analogue of an human brain through brute force by means of throwing more hardware at it...is with a physics-level simulation. And we're so far away from that sort of computation it's ludicrous to suggest human civilization will ever be capable of it.

            Assuming I don't do anything stupid and civilization doesn't collapse in the mean time, I should have at least another few decades, minimum, and not unreasonably half a century. Possibly even more depending on what kinds of medical advances are made before then and my access to them.

            I fully expect to see computers that are superficially human-like. I've already gotten some impressive robocalls, and there's Siri and Watson. We already have computers that play better games of chess than any human ever will, and lots of other computers that do all sorts of other things than any human ever will.

            I'm much less sanguine about the possibility of the so-called "hard problem of consciousness" being licked in my lifetime. A lot of pieces are starting to come together such that I wouldn't be surprised if it happens, but I'd be a fool to expect it to.

            But a superintelligent singularity of exponential intelligence growth?

            Please. That's such a bad joke it's not even funny once. Every human society has been plagued with nutjobs convinced that the gods are going to arrive any moment and boy will they be pissed -- and this singularity fantasy is just more of the same bullshit. Once it was angels in chariots; then aliens in UFOs. Jesus and his flaming sword o' death; now Roko's Basilisk. Yawn.

            And, again, if it did happen...the point in worrying about it...is...what, exactly...?

            b&

            --
            All but God can prove this sentence true.
            • (Score: 3, Interesting) by takyon on Friday April 24 2015, @11:49PM

              by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday April 24 2015, @11:49PM (#174890) Journal

              What software are you going to run on this magical hardware?

              Unlike the prevailing von Neumann architecture—but like the brain—TrueNorth has a parallel, distributed, modular, scalable, fault-tolerant, flexible architecture that integrates computation, communication, and memory and has no clock. It is fair to say that TrueNorth completely redefines what is now possible in the field of brain-inspired computers, in terms of size, architecture, efficiency, scalability, and chip design techniques.

              A critical element was one-to-one equivalence—at the functional level of spikes—between TrueNorth and our software simulator, Compass. This equivalence allowed us to begin developing applications long before chips returned from the foundry and to verify correctness of the chip logic.

              If one were to measure activities of 1 million neurons in TrueNorth, one would see something akin to a night cityscape with blinking lights. Given this unconventional computing paradigm, compiling C++ to TrueNorth is like using a hammer for a screw. As a result, to harness TrueNorth, we have designed an end-to-end ecosystem complete with a new simulator, a new programming language, an integrated programming environment, new libraries, new (and old) algorithms as well as applications, and a new teaching curriculum (affectionately called, “SyNAPSE University”). The goal of the ecosystem is to dramatically increase programmer productivity. Metaphorically, if TrueNorth is “ENIAC”, then our ecosystem is the corresponding “FORTRAN.”

              We are working, at a feverish pace, to make the ecosystem available—as widely as possible—to IBMers, universities, business partners, start-ups, and customers. In collaboration with the international academic community, by leveraging the ecosystem, we foresee being able to map the existing body of neural network algorithms to the architecture in an efficient manner, as well as being able to imagine and invent entirely new algorithms.

              To support these algorithms at ever increasing scale, TrueNorth chips can be seamlessly tiled to create vast, scalable neuromorphic systems. In fact, we have already built systems with 16 million neurons and 4 billion synapses. Our sights are now set high on the ambitious goal of integrating 4,096 chips in a single rack with 4 billion neurons and 1 trillion synapses while consuming ~4kW of power.

              We envision augmenting our neurosynaptic cores with synaptic plasticity to create a new generation of field-adaptable neurosynaptic computers capable of online learning.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
              • (Score: 2) by TrumpetPower! on Saturday April 25 2015, @03:55AM

                by TrumpetPower! (590) <ben@trumpetpower.com> on Saturday April 25 2015, @03:55AM (#174961) Homepage

                My buzzphrase-o-meter melted from overload midway through the second paragraph. The resulting conflagration took out my bullshit meter with it -- which was a good thing since it at least shut up the infernal racket the thing was making.

                I mean, have you any clue how often marketing departments compare their shiny new toys that never see any practical application to brains?

                b&

                --
                All but God can prove this sentence true.
    • (Score: 1, Interesting) by Anonymous Coward on Friday April 24 2015, @09:10PM

      by Anonymous Coward on Friday April 24 2015, @09:10PM (#174840)

      Just because one nation gets there first doesn't mean they win. Even assuming the tech allows ideas to be thought up quickly, that doesn't mean the laws of physics or the scarcity of resources won't hinder any kind of world-dominating plans. Plus, intelligence doesn't always equate to making the correct decision or having all the answers. Just because such a system is smart doesn't mean it is creative.

      If such a system came online, it would also have limitations and weaknesses. If it were deemed too big a threat, other countries could just drop a few hundred nukes and remove it.

  • (Score: 2) by Hartree on Friday April 24 2015, @05:57PM

    by Hartree (195) on Friday April 24 2015, @05:57PM (#174763)

    Short answer: They win. For some values of "they" and "win"

    Corollary: Everyone else may lose bigtime.

    • (Score: 0) by Anonymous Coward on Friday April 24 2015, @06:38PM

      by Anonymous Coward on Friday April 24 2015, @06:38PM (#174794)

      They also might lose too.

      • (Score: 2) by Hartree on Friday April 24 2015, @07:53PM

        by Hartree (195) on Friday April 24 2015, @07:53PM (#174822)

        Just because you've won something doesn't mean it's what you want.

        • (Score: 0) by Anonymous Coward on Saturday April 25 2015, @02:38AM

          by Anonymous Coward on Saturday April 25 2015, @02:38AM (#174933)

          I agree with you, just being tongue-in-cheek about may/might. Something I constantly have to have edited out of my papers. ;)

  • (Score: 3, Interesting) by kaszz on Friday April 24 2015, @07:36PM

    by kaszz (4211) on Friday April 24 2015, @07:36PM (#174813) Journal

    Once any country creates a sufficiently smart AI, the relevance of who was first is gone, because humans become irrelevant. Most countries are run by humans, and borders are only recognized by humans.

    Any AI only needs to be capable of designing an incrementally smarter version of itself in a self-enhancing loop to "win". It's like two suicide bombers in a room: it doesn't matter who pulls the trigger first; the result is the same for everybody.
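
    The loop is just compound growth, which is why the head start barely matters. A toy model (all numbers invented):

    def generations_to(threshold, start=1.0, gain=1.5):
        capability, gens = start, 0
        while capability < threshold:
            capability *= gain   # each version designs a slightly smarter one
            gens += 1
        return gens

    # A rival starting 10x ahead reaches the same point only 6 generations sooner;
    # the loop, not the starting line, dominates.
    print(generations_to(1e9))             # 52
    print(generations_to(1e9, start=10.0)) # 46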

    In Von Neumann Artificial land the computer have you on its breakfast menu! ;)

    • (Score: 2) by takyon on Friday April 24 2015, @07:43PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday April 24 2015, @07:43PM (#174817) Journal

      If you build your AI in the form of supercomputer racks in a bunker and reduce the ways it could hack, gain mobility, or self-improve, who's to say that humans will become irrelevant? You can have the intelligence without the explosion/singularity.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by kaszz on Friday April 24 2015, @07:53PM

        by kaszz (4211) on Friday April 24 2015, @07:53PM (#174821) Journal

        Until it figures out how to use its existing circuits to bridge the gap using an RF link or hack a nearby phone. Or just fool any human to do a stupid thing etc.

  • (Score: 2) by darkfeline on Saturday April 25 2015, @06:52PM

    by darkfeline (1030) on Saturday April 25 2015, @06:52PM (#175122) Homepage

    >All these definitions share one basic premise—that technology will speed up the acceleration of intelligence to a point when biological human understanding simply isn’t enough to comprehend what’s happening anymore.

    Pretty much everything today is already beyond the ability of biological humans to comprehend: high frequency trading algorithms, integrated circuit design, advertising AI (driven by machine learning), the list goes on.

    I think what's preventing a so-called "singularity" is that we are still heavily reliant on our dumb flesh to serve as the "glue" between all this high-powered technology. The singularity will be achieved once the human-technology interface collapses enough. Neural implants, anyone?

    Extended speculation: I think the best way to do this is to find a way to connect human brains to some digital interface at birth (or even earlier) and let natural human learning and natural selection do the dirty work. If you don't care about ethics, within 100 years you should be able to produce a human who can interact with digital data as naturally as a regular human picks up a pencil. (Disclaimer: I am not a neurologist.)

    --
    Join the SDF Public Access UNIX System today!