

posted by Dopefish on Monday February 24 2014, @06:00AM   Printer-friendly
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
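Kurzweil's "John sold his red Volvo to Mary" example contrasts surface pattern matching with explicitly encoded meaning. A minimal sketch of what such an encoding could look like, as structured facts rather than raw text, is below; the class and relation names are illustrative assumptions, not Google's or IBM's actual representation.

```python
# Hypothetical sketch: representing "John sold his red Volvo to Mary"
# as structured facts. A "sold" event implies a transaction and a
# transfer of ownership. All relation names here are made up for
# illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

def encode_sale(seller: str, buyer: str, item: str) -> list[Triple]:
    """Expand a sale into the facts it implies."""
    return [
        Triple(seller, "participated_in", "sale_event"),
        Triple(buyer, "participated_in", "sale_event"),
        Triple(item, "object_of", "sale_event"),
        Triple(item, "owned_by_before", seller),
        Triple(item, "owned_by_after", buyer),
    ]

facts = encode_sale("John", "Mary", "red Volvo")

# Pattern matching over the sentence alone can't answer
# "who owns the Volvo now?"; the encoded facts can.
owner_now = next(t.obj for t in facts if t.predicate == "owned_by_after")
print(owner_now)  # Mary
```

The point of the sketch is the inference step: the word "sold" never mentions ownership, so a system that only matches text patterns cannot derive the new owner, while one that expands events into implied facts can.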

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by zeigerpuppy on Monday February 24 2014, @08:31AM

    by zeigerpuppy (1298) on Monday February 24 2014, @08:31AM (#5680)

    Kurzweil is the worst form of technocornucopian.
    I'm always surprised he is taken so seriously. His arguments about "the singularity" do not pass muster.
    His argument rests on the idea that various facets of technological sophistication have been increasing exponentially.
    While this is true, he completely ignores the limits imposed by declining oil reserves,
    the infrastructure and social costs of adapting to a post-carbon economy, and the built-in debts of
    nuclear decommissioning and global warming.
    His arguments about "understanding" and AI also tend to ignore the very real hard problem of consciousness (see David Chalmers for extensive discussion). Kurzweil, for all his desire to be a futurist, has ended up backward and unnuanced in his scientific premises (effectively a pure reductionist).
    These are real problems that threaten to stall human innovation, and the technocornucopians shrug them aside with the simplistic argument that technological innovation will solve everything. It borders on the cultish, especially when they speak of "uploading" their consciousness. There is a deep isolationist fantasy at play here, best epitomized by young Japanese men living in their bedrooms and wanking to hentai.
    Human development has involved many periods of expansion and regression. I believe the current age will be a transition from post-industrial expansion to a period in which we are forced to address the widening gap between rich and poor and to remedy our abuse of the environment. These changes will take a long time, cause social upheaval, and perhaps even slow technological progress, and that's not a bad thing.
    Who knows, we may even emerge as civilised.

  • (Score: 5, Insightful) by Thexalon on Monday February 24 2014, @03:12PM

    by Thexalon (636) on Monday February 24 2014, @03:12PM (#5854)

    Kurzweil is the worst form of technocornucopian. I'm always surprised he is taken so seriously. His arguments about "the singularity" do not pass muster.

    And, most conveniently, his predictions are always far enough ahead of the present that when the predicted time rolls around, nobody digs up a record of his predictions to show he was wrong.

    That's hardly unique to Kurzweil: a typical TED talk, for example, has somebody standing on stage telling the audience that a lot of people who aren't in the room will work extremely hard to produce some big technological breakthrough that will make the world a dramatically better place in 5/10/15/25/50 years. These predictions are almost universally wrong, but they make everybody feel good and feel like part of this wonderful change. The real business these people are in is peddling unfounded optimism to mostly rich people who don't know any better.

    The folks in the optimism business also have an answer to your well-founded objection: some as-yet-unknown energy source will be discovered over the next 25 years that will provide all the power we need without any nasty waste products to worry about. The key rule is that nobody in the target audience will have to significantly change their lifestyle or budget to completely solve the problem.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by mhajicek on Monday February 24 2014, @03:16PM

      by mhajicek (51) on Monday February 24 2014, @03:16PM (#5861)

      Except for the fact that more often than not, he's been right so far.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 3, Insightful) by HiThere on Monday February 24 2014, @08:38PM

        by HiThere (866) on Monday February 24 2014, @08:38PM (#6150) Journal

        Well, it depends on how you measure it. He's often been wrong in the details, and he's often been wrong in the time required (in both directions). OTOH, he's generally been in the right ballpark. So if he says by 2029, I'd say not before 2020, and yes before 2050, unless there are severe external events...like a giant meteor impact, a volcanic "year without a summer", worldwide civil unrest, etc.

        P.S.: Where the unreasonable optimism comes in is that he assumes this will be a good thing. I give the odds of that as at most 1 in 3. OTOH, if computers DON'T take over, I give the odds of humanity surviving the century as less than 1 in 20. We've already had several close calls, and the number of players has been increasing.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 0) by Anonymous Coward on Monday February 24 2014, @11:23PM

      by Anonymous Coward on Monday February 24 2014, @11:23PM (#6275)

      To be fair, he has been pushing the 2030 date for computer consciousness for a long time; I first saw it in one of his books in the '90s.

    • (Score: 2, Informative) by Namarrgon on Tuesday February 25 2014, @02:48AM

      by Namarrgon (1134) on Tuesday February 25 2014, @02:48AM (#6348)

      Kurzweil has indeed rated his own 2009 predictions [forbes.com], and (perhaps unsurprisingly) finds them to be pretty good - mostly by marking himself as correct when a prediction is only partially true.

      This [lesswrong.com] is perhaps a better and less biased review: it picks 10 predictions at random and marks a number of them as clearly false (as of 2011, though a few of those are a lot closer these days), yet still arrives at a mean accuracy of over 54%. The reviewer judges this "excellent" given the amount of technological change in computing over that decade; predicting the future is not a yes/no question, so a 50% success rate is actually quite good.
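      The grading scheme described above (score each sampled prediction individually, then average) can be sketched as follows. The entries and scores are placeholders for illustration, not the actual data from either review.

```python
# Hypothetical sketch of grading a random sample of predictions:
# score each on a 0-1 scale (1 = clearly true, 0.5 = partially
# true, 0 = clearly false), then report the mean. The predictions
# and scores below are placeholders, not real review data.
predictions = {
    "prediction A": 1.0,   # clearly true
    "prediction B": 0.5,   # partially true
    "prediction C": 0.0,   # clearly false
    "prediction D": 1.0,   # clearly true
    "prediction E": 0.5,   # partially true
}

mean_accuracy = sum(predictions.values()) / len(predictions)
print(f"{mean_accuracy:.0%}")  # 60%
```

      Note how the "partially true" grade matters: counting partial hits as full hits (as Kurzweil's self-review arguably does) inflates the mean, while a graded scale keeps the distinction visible.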

      --
      Why would anyone engrave Elbereth?