
posted by Dopefish on Monday February 24 2014, @06:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
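
To give a concrete sense of the kind of relation encoding Kurzweil is describing, here is a toy sketch in Python (the representation and the hand-written rule below are invented purely for illustration; they are not Watson's or Google's actual approach):

    # Toy illustration: encode "John sold his red Volvo to Mary" as typed
    # relations, then apply a hand-written rule so a program can answer
    # "who owns the Volvo now?". Purely illustrative.
    facts = [
        ("sale1", "is_a", "sale_event"),
        ("sale1", "seller", "John"),
        ("sale1", "buyer", "Mary"),
        ("sale1", "item", "red_volvo"),
        ("red_volvo", "owned_by", "John"),   # state before the event
    ]

    def apply_sale_rule(facts):
        """A sale event transfers ownership of the item from seller to buyer."""
        derived = []
        for sale in (s for s, p, o in facts if p == "is_a" and o == "sale_event"):
            get = lambda pred: next(o for s, p, o in facts if s == sale and p == pred)
            derived.append((get("item"), "owned_by", get("buyer")))
        return derived

    print(apply_sale_rule(facts))   # [('red_volvo', 'owned_by', 'Mary')]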

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Informative) by takyon on Monday February 24 2014, @06:40AM

    by takyon (881) on Monday February 24 2014, @06:40AM (#5611) Journal

    Kurzweil is often derided as the lead prophet (profit?) of the "rapture of the nerds". But if you throw enough hardware into the midst of our growing understanding of neuroscience, I could see strong AI happening. Massively parallel processing is becoming the norm as new supercomputers use more cores, "manycore" GPUs and Xeon Phis, and better software to help scientists take advantage of upcoming (exa)scale systems. The various "whole brain emulation" efforts being considered by the US and EU will be looking for 1+ exaflops to start out. Everyone is hoping that the first 1-exaflops machines will fit into a 20 megawatt power envelope, while the human brain (estimates of the brain's "flops" equivalent vary and don't matter) uses 20 watts.
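
    To put those power figures in perspective, here's a quick back-of-envelope in Python (the brain "flops" number below is a placeholder guess, since as noted the estimates vary wildly):

        # Back-of-envelope on the power figures above. brain_flops is a
        # hypothetical placeholder; only the orders of magnitude matter.
        exascale_flops = 1e18     # 1 exaflops target
        exascale_watts = 20e6     # hoped-for 20 MW power envelope
        brain_flops    = 1e16     # one commonly cited guess, purely illustrative
        brain_watts    = 20.0     # rough power draw of the human brain

        machine_eff = exascale_flops / exascale_watts   # ~5e10 flops per watt
        brain_eff   = brain_flops / brain_watts         # ~5e14 flops per watt

        print(f"exascale target : {machine_eff:.1e} flops/W")
        print(f"brain (assumed) : {brain_eff:.1e} flops/W")
        print(f"efficiency gap  : ~{brain_eff / machine_eff:,.0f}x")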

    The Moore's law pace of silicon CMOS scaling may be slowing or ending soon, but there may be a future in chip stacking or post-silicon technologies, although the transition is expected to be slow. The transition might speed up if going from 10nm to 3-7nm becomes more expensive than it's worth with current technologies. Meanwhile, photonic components will be speeding up interconnects, and stacked DRAM, memristors/RRAM and other technologies will be competing to improve memory and storage (and possibly unifying them).

    Quantum transistors at room temp [theregister.co.uk]
    Extreme ultraviolet litho: Extremely late and can't even save Moore's Law [theregister.co.uk]
    Moore's Law Blowout Sale Is Ending, Says Broadcom CTO [slashdot.org]
    Moore's Law in a Post-Silicon Era [hpcwire.com]
    Intel, Sun vet births fast, inexpensive 3D chip-stacking breakthrough [theregister.co.uk]

    Kurzweil undoubtedly has access to the D-Wave "quantum annealing" computer that Google bought. I don't think anyone knows whether the human brain needs quantum effects to work, but some kind of "quantumish" computer from D-Wave or real quantum coprocessor might be able to help supercomputers tackle problems that are inefficient for classical computing, making emulation easier.
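
    For anyone wondering what kind of problem an annealer like D-Wave's actually targets: minimizing a quadratic function over binary variables (a QUBO). A toy instance, brute-forced classically in Python just to show the shape of the problem (the coefficients are arbitrary):

        # Tiny QUBO: minimize sum of Q[i,j] * x[i] * x[j] over binary x.
        # Brute force works here; an annealer is aimed at sizes where it can't.
        from itertools import product

        Q = {  # arbitrary example coefficients
            (0, 0): -1.0, (1, 1): -1.0, (2, 2): -2.0,
            (0, 1):  2.0, (1, 2):  1.0, (0, 2):  0.5,
        }

        def energy(x):
            return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

        best = min(product((0, 1), repeat=3), key=energy)
        print(best, energy(best))   # (1, 0, 1) -2.5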

    Don't forget biology. You can always try growing neurons in vitro and hooking them up to computers to create some kind of franken-AI. Or networks of multiple rat brains and the like. Mother nature already has self-assembly in its favor, so we may be able to leapfrog Kurzweil's time frame by creating artificial brains that can coexist with computers.

    One rat brain 'talks' to another using electronic link [bbc.co.uk]

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 5, Insightful) by buswolley on Monday February 24 2014, @06:51AM

    by buswolley (848) on Monday February 24 2014, @06:51AM (#5620)

    http://www.scholarpedia.org/article/Models_of_hippocampus [scholarpedia.org]

    One region of the brain that has attracted a great deal of attention in the computational modeling literature is the hippocampus. Several reasons the hippocampus has received so much attention from modelers are 1) the importance of this region to memory, 2) the great amount of neuroscience knowledge that has been acquired about this brain region, and 3) the elegant structure of the subfields of the hippocampus.
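
    As a concrete example of the kind of model that literature deals with, here is a toy sketch of one classic idea: CA3 as an autoassociative network that completes a stored pattern from a degraded cue. It's a generic Hopfield-style sketch in Python/NumPy, not any specific published hippocampus model:

        # Toy autoassociative memory: store random patterns via a Hebbian rule,
        # then recover one of them from a partially corrupted cue.
        import numpy as np

        rng = np.random.default_rng(0)
        n_units, n_patterns = 100, 5
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0)                      # no self-connections

        cue = patterns[0].copy()
        flip = rng.choice(n_units, size=20, replace=False)
        cue[flip] *= -1                             # corrupt 20% of the cue

        state = cue
        for _ in range(10):                         # synchronous recall updates
            state = np.where(W @ state >= 0, 1, -1)

        print("overlap with stored pattern:", int(state @ patterns[0]), "/", n_units)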

    --
    subicular junctures
  • (Score: 4, Insightful) by Anaqreon on Monday February 24 2014, @11:18AM

    by Anaqreon (2999) on Monday February 24 2014, @11:18AM (#5742)

    I appreciate the use of "quantumish" to describe D-Wave's computer until better evidence is presented to say more.

    As a quantum mechanic myself, I will say with some confidence that it seems highly unlikely that quantum entanglement or coherence plays a role in the formulation of thoughts. The rate of decoherence is so fast it's hard to believe any quantum information that might exist in one neuron could influence any of the others. We're talking about a difference of many orders of magnitude in timescales.
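
    Just to put rough numbers on "many orders of magnitude" (the decoherence figure follows the commonly cited estimates from the decoherence literature, e.g. Tegmark's calculation; everything here is order-of-magnitude only):

        # Order-of-magnitude comparison of decoherence vs. neural timescales.
        import math

        decoherence_s       = 1e-13   # slow end of the commonly cited estimates
        action_potential_s  = 1e-3    # rough duration of a neural spike

        gap = action_potential_s / decoherence_s
        print(f"neural signaling is ~10^{round(math.log10(gap))} times slower "
              f"than the estimated decoherence time")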

    That's great news for AI researchers, of course, but part of me is hoping that there are many barriers left in the path to strong AI for our sake as well as the AIs.

  • (Score: 2, Funny) by Anonymous Coward on Monday February 24 2014, @01:58PM

    by Anonymous Coward on Monday February 24 2014, @01:58PM (#5800)

    It had to happen. Someone proposing a Beorat cluster!

  • (Score: 4, Insightful) by mhajicek on Monday February 24 2014, @04:18PM

    by mhajicek (51) on Monday February 24 2014, @04:18PM (#5921)

    People have been predicting the death of Moore's Law for decades, citing the limits of present technology. Someone always gets a new idea though, and the law marches on.

    --
    The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
  • (Score: 2, Insightful) by recurse on Monday February 24 2014, @09:05PM

    by recurse (2731) on Monday February 24 2014, @09:05PM (#6174)

    So, my issue with this is that the CPU processing power available seems largely irrelevant to me. I think we already have enough CPU power to run (very) long-running simulations of NNs to evaluate 'intelligence'.
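
    For scale, here is a minimal sketch of a long-running recurrent-net simulation in Python/NumPy (illustrative of the raw compute involved only, not a model of intelligence):

        # Step a small random recurrent network many times and measure throughput.
        import numpy as np, time

        rng = np.random.default_rng(1)
        n = 1000
        W = rng.normal(0, 1.0 / np.sqrt(n), size=(n, n))   # random recurrent weights
        x = rng.normal(size=n)

        steps = 10_000
        t0 = time.time()
        for _ in range(steps):
            x = np.tanh(W @ x)                              # one recurrent update
        rate = steps / (time.time() - t0)
        print(f"{rate:,.0f} updates/s for a {n}-unit recurrent net in NumPy")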

    My issue with all this is that, to me, intelligence is inseparable from biology. We aren't just meat bags carrying our smart parts around in our skulls. The whole body, from gut flora to genitals to the CNS, is intimately involved in 'intelligence'.

    It is from our biological imperatives that our intelligence is derived. How can we possibly create a form of intelligence that we would recognize as such without those things?