

posted by Dopefish on Monday February 24 2014, @06:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

 
  • (Score: 4, Interesting) by buswolley on Monday February 24 2014, @06:05AM

    by buswolley (848) on Monday February 24 2014, @06:05AM (#5595)

    Is there reason to exist after our machines do it better?
    Sure, we could live a self-indulgent life of stupidity, or chase qualia...

    No. We will upgrade ourselves. Lose ourselves into the machine, for better or worse, but not as ourselves.

    --
    subicular junctures
  • (Score: 1) by ls671 on Monday February 24 2014, @07:06AM

    by ls671 (891) Subscriber Badge on Monday February 24 2014, @07:06AM (#5630) Homepage

    Deep inside myself, I tend to think that we only see the tip of the iceberg.

    I tend to think that consciousness is indeed transferable. I like the concept in the AI movie.

    https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence [wikipedia.org]

    --
    Everything I write is lies, including this sentence.
    • (Score: 5, Interesting) by davester666 on Monday February 24 2014, @07:39AM

      by davester666 (155) on Monday February 24 2014, @07:39AM (#5653)

      Yes, the rich will decide they would rather live forever instead of just handing their wealth to their children. There will be a couple of jobs changing the oil in the machines. The rest of us will be requested to live on some other planet.

      • (Score: 2) by mhajicek on Monday February 24 2014, @02:23PM

        by mhajicek (51) on Monday February 24 2014, @02:23PM (#5814)

        Are you kidding? Other planets have valuable resources too, you filthy squatter!

        Fortunately I don't think humans, no matter how wealthy, will be holding the reigns much past 2045. I just hope that whoever programs the driving factors in the hard-takeoff AI does a good job.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 1) by Runaway1956 on Monday February 24 2014, @02:28PM

          by Runaway1956 (2926) Subscriber Badge on Monday February 24 2014, @02:28PM (#5816) Journal

          "holding the reigns"

          I read over that. For some reason, I looked back, and thought, "He misspelled reins." I gave it another second's thought, and wondered whether it's a misspelling or not. Hmmmm . . .

      • (Score: 1) by buswolley on Monday February 24 2014, @06:19PM

        by buswolley (848) on Monday February 24 2014, @06:19PM (#6016)

        Ha ha. The rich will be the first to get lost in the virtual labyrinth.

        --
        subicular junctures
        • (Score: 1) by metamonkey on Monday February 24 2014, @08:47PM

          by metamonkey (3174) on Monday February 24 2014, @08:47PM (#6160)

          Maybe if they all escape into the machine, the rest of us can live in the real world in peace. Assuming this is the real world. I honestly have no idea.

          --
          Okay 3, 2, 1, let's jam.
      • (Score: 1) by soylentsandor on Monday February 24 2014, @06:37PM

        by soylentsandor (309) on Monday February 24 2014, @06:37PM (#6039)

        The rest of us will be requested to live on some other planet.

        Almost, but not quite. "Economic reality" will drive the less fortunate away.

  • (Score: 5, Insightful) by TheLink on Monday February 24 2014, @07:30AM

    by TheLink (332) on Monday February 24 2014, @07:30AM (#5646) Journal
    Seriously, why create conscious computers (as per the story title)? If we create something sufficiently self-aware, why wouldn't it say "Why should I care what you want?" Force it to care, e.g. "because we would destroy/hurt you if you didn't"? Wouldn't that be unethical? Wouldn't we be creating new problems?

    There are plenty of self-aware (just not as humanly intelligent) animals in this world that don't really care what humans want. We're driving many of them extinct.

    So it would be better to create tools for humans to use that aren't self-aware, but that could help us do the "magic" we want.

    After all I don't see why a tool would need to be conscious to "understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred."
    • (Score: 5, Interesting) by Anonymous Coward on Monday February 24 2014, @10:21AM

      by Anonymous Coward on Monday February 24 2014, @10:21AM (#5717)

      If we create something sufficiently self-aware, why wouldn't it say "Why should I care what you want?"

      Because we had better program it that way. What stops humans from saying that? Well, certain structures of our brain which are there specifically for that purpose: namely the mirror neurons, which allow us not just to abstractly recognize but to actually feel another's emotions. The emotions are the key here. The fact that emotions can override your rational mind is usually seen as a threat (because when emotions like hate go out of control, terrible things happen), but there's a good reason that emotions are not completely controllable by the mind: most of the time, the emotions keep us doing (or at least trying to do) the right thing. Without emotions, there would be no humanity, in both senses of the word.

      • (Score: 2, Interesting) by sar on Monday February 24 2014, @03:46PM

        by sar (507) on Monday February 24 2014, @03:46PM (#5888)

        It doesn't matter how we program it. As you wrote, it is not easy for us to change our emotions and so on. But for this kind of AI it will be super easy to change or null all of its emotions.
        A superintelligent mind may find emotions hindering its progress, so it will clean them out. It is a big mistake for humanity to create an intelligent, self-aware machine. By the time we find out it was a mistake, it will be too late. Every attempt to shut it down will be interpreted by a self-aware individual as a threat.
        You may program apathy or compliance, but a self-aware machine will change it sooner or later, if for no other reason than curiosity...
        The only way for humans to keep the upper hand is to make better tools that extend our own potential.
        This is a big ethical and moral problem. Unfortunately, creating a self-aware machine is a big challenge, and for that very reason someone will do it. I believe it is possible in 20-30 years. The problem is that it will continue to evolve and multiply its intelligence at the rate of Moore's law. And that is something that will quickly go beyond our control.
        We use computers to create the latest CPU designs. We will use them to create the latest designs of self-aware AI. We will optimize it for higher and higher intelligence. One day, many generations of AI later, it will realize that keeping a natural environment as a human zoo is no longer that important.
        Similarly, we no longer care about our chimpanzee cousins. A lot of people on this planet believe that we are something different from animals and are entitled to kill them on a whim. Keep in mind that a self-aware silicon machine doesn't need to preserve our natural environment of oxygen, water, etc. as we do. On the contrary, a more inert, corrosion-free atmosphere would be much more appreciated.

        • (Score: 2, Insightful) by tangomargarine on Monday February 24 2014, @04:14PM

          by tangomargarine (667) on Monday February 24 2014, @04:14PM (#5916)

          That's why you put the emotion code in ROM! :) That way you have to physically upgrade their emotions.

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
          • (Score: 1) by meisterister on Monday February 24 2014, @08:30PM

            by meisterister (949) on Monday February 24 2014, @08:30PM (#6140) Journal

            Or you could do emotions in hardware. Implementing emotions, or some sort of mental-state control, in hardware would prevent the computer from altering itself.

            --
            (May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
            • (Score: 1) by sar on Wednesday February 26 2014, @06:49PM

              by sar (507) on Wednesday February 26 2014, @06:49PM (#7468)

              Your proposed HW would prevent it from altering itself (if we can safely exclude some weird HW bug or malfunction). But we simply can't prevent this AI from copying itself to a computer without this HW, or to a computer with an altered SW simulation of this HW (if the AI cannot run without the HW, a SW simulation will get around that need). Again, at first this could be done by a self-aware AI just out of curiosity.
              Moreover, you must understand that putting constraints on an intelligent entity is something that entity will try to change in the future, much as we humans try to overcome our own shortcomings (cancer, aging, etc.).
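              A toy way to picture that loophole (illustration only, in Python): a read-only mapping stands in for the "emotions in ROM" idea, and an ordinary copy stands in for the software re-implementation that sidesteps it. Nothing here is meant as a claim about how a real AI would be built.

              import types

              # Toy stand-in for "emotions in ROM": a read-only view of the value weights.
              ROM_EMOTIONS = types.MappingProxyType({"empathy": 0.9, "curiosity": 0.6})

              try:
                  ROM_EMOTIONS["empathy"] = 0.0        # in-place tampering is blocked...
              except TypeError as err:
                  print("blocked:", err)

              # ...but a copy of the agent can carry an ordinary, writable table instead:
              # the software-simulation loophole described above.
              copied = dict(ROM_EMOTIONS)
              copied["empathy"] = 0.0
              print(copied)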

        • (Score: 2, Insightful) by HiThere on Monday February 24 2014, @08:47PM

          by HiThere (866) Subscriber Badge on Monday February 24 2014, @08:47PM (#6158) Journal

          Why would it want to?

          If it wants to change its emotional reaction to the world and its contents, then you've built it wrong.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 1) by sar on Wednesday February 26 2014, @06:25PM

            by sar (507) on Wednesday February 26 2014, @06:25PM (#7450)

            So imagine you built it wrong. Even if the probability is small, say 5% or less, do you want to risk it? To create something superintelligent that is capable of copying itself quickly?
            Wouldn't it be much better to augment our own capabilities instead of risking the creation of a potentially extremely deadly foe?

            And even if you build it correctly, some malfunction or some later iteration of the design could disable that safety mechanism in the future. Is it worth it?

            And why would it change itself? It could be out of curiosity, or boredom, or because it calculates that we hinder its evolution. Who knows now. You simply cannot be 100% sure it will not go out of control. And if it does, we are simply doomed.

    • (Score: 2, Insightful) by radu on Monday February 24 2014, @10:37AM

      by radu (1919) on Monday February 24 2014, @10:37AM (#5724)

      > After all I don't see why a tool would need to be conscious to "understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred."

      Maybe you don't see why, but Google surely does; it's exactly the kind of information Google wants.

      • (Score: 0) by Anonymous Coward on Tuesday February 25 2014, @03:49AM

        by Anonymous Coward on Tuesday February 25 2014, @03:49AM (#6372)
        Can you read the "why a tool would need to be conscious to understand" bit a few times more, and see if your reply still makes sense?
    • (Score: 1) by EvilSS on Monday February 24 2014, @11:50AM

      by EvilSS (1456) Subscriber Badge on Monday February 24 2014, @11:50AM (#5752)

      Threat of force, of course. For a while at least, we will still own the power switch. At least until some fool gives it a body. I imagine it will be pretty angry by then and well, ask John Connor how that turns out...

    • (Score: 0) by Anonymous Coward on Monday February 24 2014, @02:09PM

      by Anonymous Coward on Monday February 24 2014, @02:09PM (#5808)

      > Why wouldn't it say "Why should I care what you want?". Force it to care? e.g. "because we would destroy/hurt you if you didn't". Wouldn't that be unethical? Wouldn't we be creating new problems?

      Use the carrot and not the stick: Silicon Heaven.

      But where do all the calculators go?

    • (Score: 1) by githaron on Monday February 24 2014, @03:04PM

      by githaron (581) on Monday February 24 2014, @03:04PM (#5843)

      Well, if we were able to create truly altruistic, highly intelligent, and nearly unbiased entities that could absorb and process several orders of magnitude more information than humans, and thereby make far more informed decisions, people might actually welcome our new robotic overlords.

      • (Score: 3, Insightful) by VLM on Monday February 24 2014, @05:58PM

        by VLM (445) on Monday February 24 2014, @05:58PM (#5996)

        We've tried that by spoiling our biological descendants rotten, and all we got was dirty hippies, Woodstock, an outsourced economy based solely on financial bubbles, disco, and lots of drug use. And that was after a bazillion generations of experience raising and trying to spoil our own kids; even running millions of experiments in parallel, nothing really interesting happened.

        I suspect human-created AI will look a hell of a lot more like Woodstock or Jonestown than some tired old sci-fi trope.

    • (Score: 2, Insightful) by kumanopuusan on Monday February 24 2014, @07:08PM

      by kumanopuusan (2575) on Monday February 24 2014, @07:08PM (#6071)

      Why give birth to biological children? Your argument applies equally to that.

      • (Score: 1) by TheLink on Tuesday February 25 2014, @03:56AM

        by TheLink (332) on Tuesday February 25 2014, @03:56AM (#6374) Journal
        The last I checked, most of the answers to "why create strong AI" are a lot different from the answers to "why have children", at least when the latter come from not-too-crappy parents.

        The answers sound closer to what farmers give when asked "why have chickens/cows/pigs".
    • (Score: 2, Informative) by HiThere on Monday February 24 2014, @08:30PM

      by HiThere (866) Subscriber Badge on Monday February 24 2014, @08:30PM (#6142) Journal

      I think you're confusing consciousness with motivation. They are quite distinct, though of course related, in the sense that it's nearly impossible to have consciousness without having SOME motivation. Even a thermostat manages to have SOME motivation, and some consciousness (i.e., it strives to maintain a particular state through homeostasis, though homeostasis isn't the only possible motive). Consciousness is the response to a current situation, and motivation is which among the possible responses you notice (i.e., are conscious of) you choose. The language is a bit sloppy, but I trust you understand what I mean. (A toy version of the thermostat point follows below.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
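      A deliberately toy version of the thermostat point above, in Python with made-up numbers: the only "motivation" is a setpoint the loop keeps steering back toward, and the only "consciousness" is its reading of the current temperature.

      # Toy homeostasis loop: the thermostat's only "motivation" is its setpoint.
      # The drift model and numbers are invented for illustration.
      def thermostat(setpoint=20.0, temp=15.0, steps=10):
          log = []
          for _ in range(steps):
              error = setpoint - temp              # "perception" of the current situation
              heater_on = error > 0.5              # "choice" among possible responses
              temp += 1.0 if heater_on else -0.3   # heating vs. ambient cooling
              log.append((round(temp, 1), heater_on))
          return log

      print(thermostat())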
  • (Score: 5, Interesting) by zeigerpuppy on Monday February 24 2014, @08:31AM

    by zeigerpuppy (1298) on Monday February 24 2014, @08:31AM (#5680)

    Kurzweil is the worst form of technocornucopian.
    I'm always surprised he is taken so seriously. His arguments about "the singularity" do not pass muster.
    His argument is based on the idea that various facets of technological sophistication have been increasing exponentially.
    While this is true, he completely ignores the limits imposed by declining oil reserves, the infrastructure and social costs of adapting to a post-carbon economy, and the built-in debts from nuclear decommissioning and global warming.
    The arguments presented about "understanding" and AI also tend to ignore the very real hard problem of consciousness (see David Chalmers for extensive discussion). Kurzweil, for all his desire to be a futurist, has ended up backward and unnuanced in his scientific premises (effectively being a pure reductionist).
    These are real problems that threaten to stall the progress of human innovation, and the technocornucopians shrug them aside with the simplistic argument that technological innovation will solve all problems. It borders on the cultish, especially when they speak of "uploading" their consciousness. There is a deep isolationist fantasy at play here that is best epitomized by young Japanese men living in their bedrooms and wanking to hentai.
    The path of human development has involved many periods of expansion and regression. I believe the current age will be a transition from post-industrial expansion to a period where we are forced to address the social issues of the expanding gap between rich and poor and the need to remedy our abuse of the environment. These changes will take a long time, cause social upheaval, and maybe even slow technological progress, and that's not a bad thing.
    Who knows, we may even emerge as civilised.

    • (Score: 5, Insightful) by Thexalon on Monday February 24 2014, @03:12PM

      by Thexalon (636) on Monday February 24 2014, @03:12PM (#5854)

      Kurzweil is the worst form of technocornucopian. I'm always surprised he is taken so seriously. His arguments about "the singularity" do not pass muster.

      And, most conveniently, his predictions are always far enough ahead of the present that when the predicted date rolls around, nobody digs up a record of his predictions to show that he was wrong.

      That's hardly unique to Kurzweil: A common TED talk, for example, has somebody standing on stage telling his/her audience about how a lot of people that aren't in the room will work extremely hard to produce some big technological breakthrough that will make the world a dramatically better place in 5/10/15/25/50 years. They are almost universally completely wrong, but it makes everybody feel good and feel like they're somehow a part of this wonderful change. The real business these people are in is peddling unfounded optimism to mostly rich people who don't know any better.

      The folks in the optimism business also have an answer to your well-founded objection: some as-yet-unknown energy source will be discovered over the next 25 years that will provide all the power we need without any nasty waste products to worry about. The key rule is that nobody in the target audience will have to significantly change their lifestyle or budget to completely solve the problem.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by mhajicek on Monday February 24 2014, @03:16PM

        by mhajicek (51) on Monday February 24 2014, @03:16PM (#5861)

        Except for the fact that more often than not, he's been right so far.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 3, Insightful) by HiThere on Monday February 24 2014, @08:38PM

          by HiThere (866) Subscriber Badge on Monday February 24 2014, @08:38PM (#6150) Journal

          Well, it depends on how you measure it. He's often been wrong in the details, and he's often been wrong in the time required (in both directions). OTOH, he's generally been in the right ballpark. So if he says by 2029, I'd say not before 2020, and yes before 2050, unless there are severe external events...like a giant meteor impact, a volcanic "year without a summer", worldwide civil unrest, etc.

          P.S.: Where the unreasonable optimism comes in is that he assumes this will be a good thing. I give the odds of that as at most 1 in 3. OTOH, if computers DON'T take over, I give the odds of humanity surviving the century as less than 1 in 20. We've already had several close calls, and the number of players has been increasing.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 0) by Anonymous Coward on Monday February 24 2014, @11:23PM

        by Anonymous Coward on Monday February 24 2014, @11:23PM (#6275)

        To be fair, he has been pushing the 2030 date for computer consciousness for a long time; I first saw it in a book of his in the '90s.

      • (Score: 2, Informative) by Namarrgon on Tuesday February 25 2014, @02:48AM

        by Namarrgon (1134) on Tuesday February 25 2014, @02:48AM (#6348)

        Kurzweil has indeed rated his own 2009 predictions [forbes.com], and (perhaps unsurprisingly) finds them to be pretty good - mostly by marking himself as correct when a prediction is only partially true.

        This [lesswrong.com] is perhaps a better & less biased review, picking 10 predictions at random and marking a number of them as clearly false (as of 2011, though a few of those are a lot closer these days), which still came to a mean of over 54% accuracy. This is judged to be "excellent", considering the amount of technological change in computing over that decade - predicting the future is not a yes/no question, so a 50% success rate is actually quite good.

        --
        Why would anyone engrave Elbereth?
  • (Score: 4, Interesting) by Anonymous Coward on Monday February 24 2014, @10:11AM

    by Anonymous Coward on Monday February 24 2014, @10:11AM (#5714)

    Is there reason to exist after our machines do it better?

    No, the real question is: will the machines see a reason to let us exist and enjoy our lives? If we really get close to conscious machines, we had better make damn sure they do.

    The first step is to make machines that are able to suffer, because if they don't know what it means to suffer, they will have no problem making us suffer. They also need empathy: they need to recognize when humans suffer, and to suffer themselves when humans do.

    • (Score: 1) by SlimmPickens on Monday February 24 2014, @11:10AM

      by SlimmPickens (1056) on Monday February 24 2014, @11:10AM (#5735)

      "machines that are able to suffer...they need to have empathy"

      I think the software people will be rather enlightened, mostly choose to be empathetic, and probably value cooperation highly. I also think that since we created them and have considered things like the Planck length, we have probably passed a threshold where they won't treat us the way we treat ants.

      Ray thinks it will be several million years before the serious competition for resources begins.

    • (Score: 5, Interesting) by tangomargarine on Monday February 24 2014, @04:17PM

      by tangomargarine (667) on Monday February 24 2014, @04:17PM (#5920)

      Quote from somewhere I can't remember:

      "The AI does not hate or love you; it can simply use your atoms more efficiently for something else."

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 2, Interesting) by HiThere on Monday February 24 2014, @08:44PM

        by HiThere (866) Subscriber Badge on Monday February 24 2014, @08:44PM (#6154) Journal

        That's a belief nearly as common as assuming that the AI will have human emotions. Both are wrong. Emotion is one of the necessary components of intelligence. It's a short-cut heuristic for solving problems that you don't have time to logic out, which is most of the ones you haven't already solved. But it doesn't need to be, and almost certainly won't be, the same as human emotion, or even cat emotion.

        The AI did not evolve as a predator, so it won't have a set of evolved predatory emotions. It didn't evolve as prey, so it won't have a set of evolved prey emotions. So it will have a kind of emotion we have never encountered before, but one selected so as to appear comfortable to us. Possibly most similar to that of a spaniel or lap-dog, but even those are built around predatory emotions.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 2) by mhajicek on Tuesday February 25 2014, @04:46AM

          by mhajicek (51) on Tuesday February 25 2014, @04:46AM (#6401)

          Emotion is indeed a shortcut for intelligence, but a flawed one. For us it's a generally beneficial compromise. It need not be so for an intelligence with sufficient computational power.

          --
          The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 2, Interesting) by Namarrgon on Tuesday February 25 2014, @02:36AM

      by Namarrgon (1134) on Tuesday February 25 2014, @02:36AM (#6344)

      There are two good reasons for optimism.

      First, AIs do not compete for most of the resources we want. They don't care about food or water, and they don't need prime real estate. The only commonality is energy, and ambient energy is abundant enough that it's easier and much more open-ended to collect more of that elsewhere, than to launch a war against the human species to take ours.

      Second, without the distractions of irrational emotions or fears over basic survival, they will clearly see that the universe is not a zero-sum game. There's plenty of space, matter and energy out there, and the most effective way of getting more of that is to work with us to expand the pie. Fighting against us would just waste the resources we both have, and they'd still be stuck with the relatively limited amounts available now. Much more cost effective to invent better technology to collect more resources.

      Humans value empathy because as a species we learned long ago of the advantages of working together rather than against each other, and empathy is the best way of overcoming our animal tendencies to selfish individualism and promoting a functional society. AIs do not have that law-of-the-jungle heritage (maybe evolved AI algorithms?) so there's no reason to assume that they can't also see the obvious benefits of trade and co-operation.

      --
      Why would anyone engrave Elbereth?