posted by Dopefish on Monday February 24 2014, @06:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
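
To make the "pattern matching vs. meaning" distinction concrete, here is a minimal sketch in Python of the kind of structured encoding Kurzweil is gesturing at. The classes and relation names are purely illustrative assumptions -- not Watson's or Google's actual representation:

    # Toy semantic encoding of "John sold his red Volvo to Mary".
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        attributes: dict = field(default_factory=dict)

    @dataclass
    class SaleEvent:
        seller: Entity
        buyer: Entity
        item: Entity

        def implications(self):
            # Encoding the meaning: a sale implies ownership transfer and payment.
            return [
                (self.item.name, "ownership_transferred_from", self.seller.name),
                (self.item.name, "ownership_transferred_to", self.buyer.name),
                (self.buyer.name, "paid", self.seller.name),
            ]

    john, mary = Entity("John"), Entity("Mary")
    volvo = Entity("Volvo", {"color": "red", "type": "car"})

    # A pattern matcher sees co-occurring words; this structure can answer
    # "who owns the red Volvo now?" by following the encoded implications.
    for fact in SaleEvent(seller=john, buyer=mary, item=volvo).implications():
        print(fact)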

 
  • (Score: 3, Interesting) by Foobar Bazbot (37) on Monday February 24 2014, @05:48PM (#5987) Journal

    Yes, as the length of that article demonstrates, Kurzweil is very good at predictions.

    Wait, did you mean good at accurate predictions?

    Anyway, looking at the predictions for 2009 from his 1999 book, there are several classes of predictions. First, there are the ones that have now been technically possible for at least a couple of years (hey, I'll cut him 2-3 years of slack, especially if the alternative is dredging up info to verify my memories' timestamps), but haven't materialized due to non-technical considerations:

    • Most books will be read on screens rather than paper.
        This may already be true, though I don't think it is yet. Certainly it was technically possible, but unrealized, in 2009.
    • Intelligent roads and driverless cars will be in use, mostly on highways.
        Given the investment in building beacons, etc. into roadways, the tech was quite ready in 2009 for vehicles to self-drive on limited-access highways.
    • Personal worn computers provide monitoring of body functions, automated identity and directions for navigation.
        Phones (whether or not one considers them "worn") are 2/3 of the way there, and body-function monitoring is technically simple to add -- but people don't seem to want it much.
    • Computer displays built into eyeglasses for augmented reality are used.
        I don't think Google Glass quite counts as "built into eyeglasses", but we're getting there now. (Depends too, on how "used" is defined -- do devices worn by researchers count? My immediate understanding requires consumer availability (even if it's only for very rich consumers), but it's debatable.)

    Well, you're not really a good futurist if you get the tech side right and the social side wrong, yet keep making predictions that depend on social uptake. But that's a limitation we can quantify and work with, so I can't get too worked up about it.

    And of course you've got the true stuff:

    • Cables are disappearing. Computer peripherals use wireless communication.
        Video is the main exception, so far, but stuff like Chromecast is eating into even that.
    • People can talk to their computer to give commands.
    • Computers can recognize their owner's face from a picture or video.

    I won't say any of those were obvious in 1999 (I don't know if they were or not, but it's impossible to make such a retrospective claim fairly), but one thing they have in common: all the tech was there in 1999; it just needed way more processing power than was then feasible. Tiny radios existed, but something like Bluetooth needed way too much CPU and DSP to think of putting it in headphones. Audio recording worked great, but even domain-specific speech recognition needed too much muscle to run on a random PC. Webcams existed (Connectix QuickCam, anyone?), but again, PCs of the day couldn't do much with that video stream. So yeah, 10 years of Moore's Law, and these became solved problems.
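
    Just to put a rough number on "10 years of Moore's Law" -- a back-of-the-envelope sketch; the 18- and 24-month doubling periods are my own assumptions, nothing from the book:

        # Rough compute-growth factor from 1999 to 2009 under assumed doubling periods.
        for months_per_doubling in (18, 24):
            doublings = 10 * 12 / months_per_doubling
            print(f"{months_per_doubling}-month doubling: ~{2 ** doublings:.0f}x over 10 years")
        # Roughly 100x at 18 months, 32x at 24 months -- enough to move Bluetooth audio,
        # speech commands, and webcam face recognition from lab demo to commodity.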

    But the most troubling category is these:

    • Most text will be created using speech recognition technology.
        General-purpose speech-to-text is a hard problem, and throwing bigger [CG]PUs at it doesn't solve it.
    • People use personal computers the size of rings, pins, credit cards and books.
        Battery tech just isn't there for rings, pins, and credit cards, as my Android wristwatch with its 6-hour battery life (in use, playing music and reading ebooks -- standby is of course much longer) shows.
    • Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space.
        WTF? I can only assume he's thinking that with sufficiently advanced DSP (which is indistinguishable from magic), you can beam-form directly into someone's ear, and thus need very little power to be audible. But "very small" just doesn't work -- you need a big aperture for high angular resolution (rough numbers in the sketch after this list). At best, you get an array of very small chip-based devices.
    • Autonomous nanoengineered machines have been demonstrated and include their own computational controls.
        Nanobots. Yes, nanobots in 2009! OK, OK, he said "nanoengineered", which could imply microbots with nanoscale components, rather than the whole bot being nanoscale. Still...
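
    Rough numbers for the "big aperture" point on the speaker prediction -- a sketch using the usual diffraction-limit approximation (beamwidth in radians ~ wavelength / aperture); the 5-degree target beamwidth is an arbitrary assumption:

        # Aperture needed to confine sound to a narrow beam, by frequency.
        import math

        SPEED_OF_SOUND = 343.0    # m/s in air
        TARGET_BEAMWIDTH_DEG = 5  # arbitrary "aim at one listener" target

        for freq_hz in (200, 1_000, 10_000):
            wavelength = SPEED_OF_SOUND / freq_hz
            aperture = wavelength / math.radians(TARGET_BEAMWIDTH_DEG)
            print(f"{freq_hz} Hz: wavelength {wavelength:.3f} m, aperture ~{aperture:.2f} m")
        # ~20 m at 200 Hz, ~4 m at 1 kHz, ~0.4 m at 10 kHz -- nothing "very small" there,
        # except perhaps the individual elements of the array.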

    These failed predictions reveal a serious problem -- Kurzweil seems to assume one of two things: that every technology advances exponentially with a time constant similar to that of cramming more transistors onto a chip, or that every problem, every shortfall in some other technical field, can be worked around by cramming more transistors onto a chip.

    Turns out some stuff is like that, and some isn't. In general, technology growth functions look exponential (with various time constants) for a while, but in many fields we've eventually seen a change to a constant or decreasing growth rate (i.e. linear or sigmoid growth) -- with audio transducers, for example, we've already hit that. Battery tech is still on an exponential, but with a longer time constant than Moore's Law.
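
    A quick sketch of why those curves are easy to confuse early on -- the logistic (sigmoid) parameters below are made up purely for illustration:

        # A logistic curve looks exponential well before its inflection point, then saturates.
        import math

        def logistic(t, K=100.0, r=0.5, t0=20.0):
            return K / (1 + math.exp(-r * (t - t0)))

        def exponential(t, a=logistic(0), r=0.5):
            return a * math.exp(r * t)

        for t in (0, 5, 10, 15, 20, 30):
            print(f"t={t:2d}: logistic {logistic(t):8.2f}   exponential {exponential(t):10.2f}")
        # The two track closely at first; approaching t0 the logistic flattens out while
        # the exponential keeps climbing -- the audio-transducer case above.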
