kef writes:
"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.
Kurzweil says:
Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.
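Kurzweil's Volvo example can be made concrete. The following is a purely illustrative sketch (not Watson's or Google's actual representation): it contrasts surface pattern matching, where the sentence is just tokens, with a hypothetical event frame that explicitly encodes the ownership transfer he describes. The frame fields and the `owner_after` helper are my own invented names.

```python
sentence = "John sold his red Volvo to Mary"

# Pattern matching: the sentence is just a bag of tokens.
tokens = sentence.lower().split()
matches_sale_pattern = "sold" in tokens  # matches, but no meaning attached

# Semantic encoding: a hypothetical frame for a commercial transaction,
# making the transfer of ownership explicit and queryable.
event = {
    "type": "transaction",
    "seller": "John",
    "buyer": "Mary",
    "item": {"kind": "Volvo", "color": "red"},
}

def owner_after(event):
    """In a transaction, ownership passes from seller to buyer."""
    return event["buyer"] if event["type"] == "transaction" else event["seller"]

print(matches_sale_pattern)  # True
print(owner_after(event))    # Mary
```

The point of the contrast: the token matcher can find the word "sold" but cannot answer "who owns the Volvo now?", while the frame can, because possession transfer is encoded rather than inferred from surface patterns.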
Skynet anyone?"
(Score: 2, Insightful) by recurse on Monday February 24 2014, @09:05PM
So, my issue with this is that the CPU processing power available seems largely irrelevant to me. I think we already have enough CPU power to model (very) long-running simulations of NNs to evaluate 'intelligence'.
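For a sense of what "long-running simulations of NNs" means at toy scale, here is a minimal sketch: stepping a small random recurrent network for many synchronous updates on a single CPU. The network size, the tanh update rule, and the step count are all illustrative assumptions, not a real experiment.

```python
import math
import random

random.seed(0)
N = 32  # neurons (illustrative size)

# Random recurrent weight matrix, scaled so activity stays bounded.
W = [[random.gauss(0, 1 / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
state = [random.uniform(-1, 1) for _ in range(N)]

def step(state):
    # One synchronous update: s' = tanh(W @ s)
    return [math.tanh(sum(w * s for w, s in zip(row, state))) for row in W]

for _ in range(10_000):  # "long-running" is cheap at this scale
    state = step(state)

print(len(state))
```

Even 10,000 updates of a 32-neuron net finish in well under a second, which is the commenter's point: raw cycles are not the bottleneck for this kind of modelling.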
My issue with all this is that, to me, intelligence is inseparable from biology. We aren't just meat bags carrying our smart parts around in our skulls. The whole body, from gut flora to genitals to the CNS, is intimately involved in 'intelligence'.
It is from our biological imperatives that our intelligence is derived. How can we possibly create a form of intelligence that we would recognize as such without those things?