kef writes:
"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.
Kurzweil says:
Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.
Skynet anyone?"
(Score: 4, Interesting) by Anonymous Coward on Monday February 24 2014, @10:11AM
No, the real question is: will the machines see a reason to let us exist and enjoy our lives? If we really get close to conscious machines, we had better make damn sure they do.
The first step toward that is to build machines that are able to suffer. If they don't know what it means to suffer, they will have no problem making us suffer. They also need empathy: they need to recognize when humans suffer, and to suffer themselves when humans do.
(Score: 1) by SlimmPickens on Monday February 24 2014, @11:10AM
"machines that are able to suffer...they need to have empathy"
I think the software people will be rather enlightened, will mostly choose to be empathetic, and will probably value cooperation highly. I also think that since we created them, and have pondered things like the Planck length, we have probably passed a threshold where they won't treat us the way we treat ants.
Ray thinks it will be several million years before the serious competition for resources begins.
(Score: 5, Interesting) by tangomargarine on Monday February 24 2014, @04:17PM
Quote from somewhere I can't remember:
"The AI does not hate or love you; it can simply use your atoms more efficiently for something else."
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 2, Interesting) by HiThere on Monday February 24 2014, @08:44PM
That belief is nearly as common as the assumption that the AI will have human emotions, and both are wrong. Emotion is one of the necessary components of intelligence: it's a shortcut heuristic for solving problems you don't have time to reason out, which is most of the problems you haven't already solved. But its emotions don't need to be, and almost certainly won't be, the same as human emotions, or even cat emotions.
The AI did not evolve as a predator, so it won't have a set of evolved predatory emotions. It did not evolve as prey, so it won't have a set of evolved prey emotions. It will have a kind of emotion we have never encountered before, but one selected to appear comfortable to us. Possibly it will be most similar to that of a spaniel or lap-dog, though even those are built around predatory emotions.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by mhajicek on Tuesday February 25 2014, @04:46AM
Emotion is indeed a shortcut for intelligence, but a flawed one. For us it's a generally beneficial compromise. It need not be so for an intelligence with sufficient computational power.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 2, Interesting) by Namarrgon on Tuesday February 25 2014, @02:36AM
There are two good reasons for optimism.
First, AIs do not compete with us for most of the resources we want. They don't care about food or water, and they don't need prime real estate. The only overlap is energy, and ambient energy is abundant enough that it's easier and far more open-ended to collect more of it elsewhere than to launch a war against the human species to take ours.
Second, without the distractions of irrational emotions or fears over basic survival, they will see clearly that the universe is not a zero-sum game. There's plenty of space, matter and energy out there, and the most effective way of getting more of it is to work with us to expand the pie. Fighting us would just waste the resources we both have, and they'd still be stuck with the relatively limited amounts available now. It's much more cost-effective to invent better technology and collect more resources.
Humans value empathy because, as a species, we learned long ago the advantages of working together rather than against each other, and empathy is the best way of overcoming our animal tendency toward selfish individualism and promoting a functional society. AIs won't have that law-of-the-jungle heritage (unless perhaps they were produced by evolutionary algorithms?), so there's no reason to assume they can't also see the obvious benefits of trade and cooperation.
Why would anyone engrave Elbereth?