
posted by Dopefish on Monday February 24 2014, @06:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

  • (Score: 5, Informative) by the_ref on Monday February 24 2014, @06:50AM

    by the_ref (2268) on Monday February 24 2014, @06:50AM (#5619)

    so nine years after the world government he predicted would be in place by 2020?

    Ray always likes bumping his gums about the future, but if you check his record, his predictions don't seem to have been particularly prescient: en.wikipedia.org/wiki/Ray_Kurzweil []

  • (Score: 3, Informative) by omoc on Monday February 24 2014, @11:02AM

    by omoc (39) on Monday February 24 2014, @11:02AM (#5733)

    I would say he is very bad at predictions. I remember a book from the 90s (?), and IIRC all of his predictions for 2009 turned out wrong or wildly inaccurate. At a TED talk in 2005 (?) he said something like: by 2010 computers will disappear, images will be written directly onto our retinas, and everyone will have full-immersion augmented virtual reality. I stopped paying attention to him, but you can easily google these false predictions.

    • (Score: 2, Insightful) by webcommando on Monday February 24 2014, @01:55PM

      by webcommando (1995) on Monday February 24 2014, @01:55PM (#5799)

      "I stopped paying attention to him but you can easily google these false predictions."

      Honestly, I never pay much attention to self-proclaimed "futurists". They tend to suffer from the same thing many engineers suffer from: overly optimistic estimates. Another thing I notice is that they predict big leaps in technology far out in the future (e.g. twenty years or more).

      I've always thought the predictions would be more accurate if the "futurists" looked at incremental changes over time and really thought about when the major changes would occur: what is possible in the next year or two? If those things came true, what would happen in the year or two after that? Before you know it, you are many years in the future, but with a more grounded (in my opinion, obviously) basis for your predictions.
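
      A toy illustration of that point, with all growth numbers invented for the example: a modest error in an assumed annual growth rate barely matters over a one-to-two-year horizon, but compounds badly over a single twenty-year leap.

          # Invented numbers: capability actually grows 30%/yr, futurist assumes 40%/yr.
          true_rate, assumed_rate = 0.30, 0.40
          for years in (2, 5, 10, 20):
              actual = (1 + true_rate) ** years
              predicted = (1 + assumed_rate) ** years
              print(f"{years:2d} years out: forecast is {predicted / actual:.1f}x too optimistic")
          # ~1.2x off at 2 years, ~4.4x off at 20 -- re-anchoring every year or
          # two keeps the compounding error from ever getting that large.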

      First SN post...glad to be here

      • (Score: 1) by ZombieBait on Monday February 24 2014, @07:43PM

        by ZombieBait (3100) on Monday February 24 2014, @07:43PM (#6099)

        This reminds me of the article about Isaac Asimov's predictions, isaac-asimov-2014_n_4530785.html []. While some are wrong and some are a bit vague, he certainly seems to have thought through where technology was heading. I thought “Robots will neither be common nor very good in 2014, but they will be in existence.” was particularly appropriate for this story.

  • (Score: 3, Interesting) by Foobar Bazbot on Monday February 24 2014, @05:48PM

    by Foobar Bazbot (37) on Monday February 24 2014, @05:48PM (#5987) Journal

    Yes, as the length of that article demonstrates, Kurzweil is very good at predictions.

    Wait, did you mean good at accurate predictions?

    Anyway, looking at the predictions for 2009 from his 1999 book, they fall into several classes. First, there are the ones that have now been technically possible for at least a couple of years (hey, I'll cut him 2-3 years' slack, especially if the alternative is dredging up info to verify my memories' timestamps), but which haven't materialized due to non-technical considerations:

    • Most books will be read on screens rather than paper.
        This may already be true, though I don't think it is yet. Certainly it was technically possible, but unrealized, in 2009.
    • Intelligent roads and driverless cars will be in use, mostly on highways.
        Given sufficient investment in building beacons, etc. into roadways, the tech was quite ready for vehicles to self-drive on limited-access highways in 2009.
    • Personal worn computers provide monitoring of body functions, automated identity and directions for navigation.
        Phones (whether or not one considers them "worn") are 2/3 of the way there, and body-function monitoring is technically simple to add -- but people don't seem to want it much.
    • Computer displays built into eyeglasses for augmented reality are used.
        I don't think Google Glass quite counts as "built into eyeglasses", but we're getting there now. (It depends, too, on how "used" is defined -- do devices worn by researchers count? My immediate understanding requires consumer availability (even if only for very rich consumers), but it's debatable.)

    Well, you're not really a good futurist if you get the tech side right and the social side wrong, yet keep making predictions that depend on social uptake. But that's a limitation we can quantify and work with, so I can't get too worked up about it.

    And of course you've got the true stuff:

    • Cables are disappearing. Computer peripherals use wireless communication.
        Video is the main exception, so far, but stuff like Chromecast is eating into even that.
    • People can talk to their computer to give commands.
    • Computers can recognize their owner's face from a picture or video.

    I won't say any of those were obvious in 1999 (I don't know if they were or not, but it's impossible to make such a retrospective claim fairly), but they have one thing in common: all the tech was there in 1999; it just needed way more processing power than was then feasible. Tiny radios existed, but something like Bluetooth needed way too much CPU and DSP to think of putting in headphones. Audio recording worked great, but even domain-specific speech recognition needed too much muscle to run on a random PC. Webcams existed (Connectix QuickCam, anyone?), but again, PCs of the day couldn't do much with that video stream. So yeah, 10 years of Moore's Law, and these became solved problems.
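
    Back-of-envelope on that "10 years of Moore's Law" (the doubling period is an assumption; commonly quoted figures run from 18 to 24 months):

        # Transistor-count headroom gained between 1999 and 2009, assuming a
        # fixed doubling period of 18 or 24 months.
        for doubling_months in (18, 24):
            doublings = 10 * 12 / doubling_months
            print(f"{doubling_months}-month doubling: ~{2 ** doublings:.0f}x in 10 years")
        # Roughly 32x to 100x -- plenty for Bluetooth DSP, desktop speech
        # recognition, and real-time webcam processing.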

    But the most troubling category is these:

    • Most text will be created using speech recognition technology.
        General-purpose speech-to-text is a hard problem, and throwing bigger [CG]PUs at it doesn't solve it.
    • People use personal computers the size of rings, pins, credit cards and books.
        Battery tech just isn't there for rings, pins, and credit cards, as my Android wristwatch with 6-hour battery life (in use, playing music and reading ebooks -- standby is of course much longer) shows.
    • Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space.
        WTF? I can only assume he's thinking that with sufficiently-advanced DSP (which is indistinguishable from magic), you can beam-form directly into someone's ear, and thus need very little power to be audible. But "very small" just doesn't work -- you need a big aperture for high resolution. At best, you get an array of very small chip-based devices. (Rough numbers in the sketch after this list.)
    • Autonomous nanoengineered machines have been demonstrated and include their own computational controls.
        Nanobots. Yes, nanobots in 2009! Ok, ok, he said "nanoengineered", which could imply microbots with nanoscale components, rather than the whole bot being nanoscale. Still...
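
    Rough numbers behind the aperture point above (frequency and sizes picked for illustration; wavelength/aperture is only an approximation for the diffraction-limited beamwidth):

        import math

        SPEED_OF_SOUND = 343.0  # m/s in air

        def beamwidth_deg(freq_hz, aperture_m):
            # Beamwidth ~ wavelength / aperture (radians), capped at 180
            # degrees, i.e. "no directivity at all".
            wavelength = SPEED_OF_SOUND / freq_hz
            return math.degrees(min(wavelength / aperture_m, math.pi))

        for aperture_m in (0.01, 0.10, 1.00):  # chip, gadget, wall-sized array
            print(f"{aperture_m * 100:3.0f} cm aperture: ~{beamwidth_deg(1000, aperture_m):.0f} deg beam at 1 kHz")
        # A 1 cm "chip" can't focus 1 kHz sound at all; you need about a
        # metre of array before the beam tightens to ~20 degrees.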

    These failed predictions reveal a serious problem -- Kurzweil seems to assume one of two things: that every technology advances exponentially with a time constant similar to that of cramming more transistors onto a chip, or that every shortfall in some other technical field can be worked around by cramming more transistors onto a chip.

    Turns out some stuff is like that, and some isn't. In general, technology growth functions look exponential (with various time constants) for a while, but in many fields we've eventually seen a change to a constant or decreasing growth rate (i.e. linear or sigmoid growth) -- with audio transducers, for example, we've already hit that. Battery tech is still growing exponentially, but with a longer time constant than Moore's law.
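
    A toy comparison, with constants invented rather than fitted to any real technology: a logistic (sigmoid) curve tracks an exponential almost perfectly early on, then flattens as it nears its ceiling -- which is why a field can look "exponential" right up until it saturates.

        import math

        def exponential(t, rate=0.5):
            return math.exp(rate * t)

        def logistic(t, rate=0.5, ceiling=100.0):
            # Same early growth rate as the exponential, but saturates.
            return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

        for t in (0, 5, 10, 15, 20):
            print(f"t={t:2d}: exponential={exponential(t):8.1f}   logistic={logistic(t):5.1f}")
        # Nearly identical through t=5; by t=20 the exponential is ~220x higher.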