kef writes:
"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.
Kurzweil says:
Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.
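The distinction Kurzweil draws (pattern matching over text vs. encoding what a sentence entails) can be sketched as a toy knowledge-representation example. This is not IBM's or Google's actual system; every class and function name here is invented for illustration:

```python
# Toy sketch of "understanding" as encoding entailments, per the
# Kurzweil quote above. A pattern matcher sees only the words of
# "John sold his red Volvo to Mary"; an encoded event also yields
# facts that never appear verbatim in the text.

from dataclasses import dataclass

@dataclass
class Sale:
    seller: str
    buyer: str
    item: str

def entailments(event: Sale) -> list[str]:
    """Facts a reader who understands the sentence can infer."""
    return [
        f"a transaction occurred between {event.seller} and {event.buyer}",
        f"{event.seller} previously owned the {event.item}",
        f"{event.buyer} now owns the {event.item}",
        f"{event.seller} no longer owns the {event.item}",
    ]

sale = Sale(seller="John", buyer="Mary", item="red Volvo")
for fact in entailments(sale):
    print(fact)
```

The point of the sketch: ownership transfer is represented explicitly in the event structure, not left implicit in word co-occurrence statistics.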
Skynet anyone?"
(Score: 2, Interesting) by melikamp on Monday February 24 2014, @04:17PM
"I think defining it in an objective scientific way is almost a non-starter; you can't separate out the subjective from the objective, and the subjective/self-knowing/experiential aspect of consciousness is essentially its defining feature. It really is largely a mystery. Sure there is growing data on the physiological mechanisms, but the real essence of consciousness (i.e. experience) appears to be unknowable to anything except the conscious entity itself."
I disagree. When I was taking an AI class with Rudy Rucker [wikipedia.org], he said, almost as an aside, that abstract thinking is like having pictures (or simple data structures) modeling real-life phenomena, and consciousness can be understood as having a distinct data structure for yourself. So I am sitting by a computer in a room, and I have a picture in my head: me sitting by a computer in a room; that's all it takes. When I heard it about 10 years ago, I was largely in denial, thinking along your lines. But with time, this simple explanation made more and more sense to me, to the point that I no longer believe that consciousness is mysterious at all. It is much easier to design a self-conscious robot than an intelligent robot. Indeed, the Curiosity [wikipedia.org] rover is quite self-conscious, being able to emulate its own driving over the terrain it's observing, but at the same time dumb as a log when it comes to picking a destination.
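Rucker's idea, as relayed here, amounts to: the "self" is just one more data structure inside the world model, and self-consciousness is the ability to run that structure forward. A minimal sketch under that reading (all names invented; this is not how Curiosity's flight software actually works):

```python
# Sketch of "consciousness as a data structure for yourself": the rover
# emulates its own driving by moving its self-model, not its real body,
# over the terrain it has observed.

from dataclasses import dataclass

@dataclass
class Body:
    x: int
    y: int

@dataclass
class WorldModel:
    terrain: set        # impassable (x, y) cells observed so far
    me: Body            # the rover's model *of itself* -- the self-model

    def simulate_drive(self, dx: int, dy: int) -> bool:
        """Imagine the move before committing to it."""
        imagined = Body(self.me.x + dx, self.me.y + dy)
        return (imagined.x, imagined.y) not in self.terrain

rover = WorldModel(terrain={(1, 0)}, me=Body(0, 0))
print(rover.simulate_drive(1, 0))  # False: the imagined move hits a rock
print(rover.simulate_drive(0, 1))  # True: safe to drive
```

Note what the sketch deliberately leaves out: picking a *destination*, which is the "intelligent robot" part that the comment says is the hard bit.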
(Score: 1) by drgibbon on Tuesday February 25 2014, @02:06AM
Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom [rudyrucker.com]? Fantastic book! Had no idea he taught AI. In any case, I'd certainly disagree with Mr Rucker. While it's an appealing concept on the surface, I just don't think it holds much weight (no denial required ;). The mystery of consciousness is that a conscious being's only proof of it is his/her/its own experience. Consciousness is evidenced in the first place by its own experiential content; nothing else. This is the divide between subjective and objective. The "consciousness defining thing" (the subjective experiential content) is not accessible to others, thus we can't properly prove it in others (apart from its self-evident nature in ourselves; followed by inference for humans, animals, etc).
If you ascribe consciousness to a bot then you surely must ascribe consciousness to trees and plants. They seem to know where they are, they move towards the sun and so on (some catch flies etc). And should we then say that they are rudimentary consciousnesses and lack intelligence? Based on what? That they are slow? Confined in space? Have no brain? Perhaps we simply lack the means to communicate with them (they may be sources of wisdom for all we know). An expert meditator might be doing absolutely nothing, sitting completely still, and having a mystical experience. Is he in a lower state of consciousness because he's not actively carrying out "intelligent tasks"? I like Rudy Rucker, but I think his position on consciousness (based on what you've said) is somewhat facile. The mistake is that you simply throw away the core meaning of consciousness. I write a program, it has sensors for where it is in the room, hey cool it's conscious! You solve the problem by avoiding the difficulties of the thing.
The question is not, "does this thing have models of itself and react in the environment?", the question is "is this thing subjectively experiencing itself in reality?". IMO, they are just not the same. To equate the two certainly makes the problem of consciousness a lot easier, but unfortunately it does this by rendering the question (and therefore the answer) essentially meaningless.
Certified Soylent Fresh!
(Score: 1) by melikamp on Tuesday February 25 2014, @03:06AM
The very same :) He was teaching computer science at San José[1] State till 2004, and, even though I am an avid science fiction fan, I did not find out about his writing until years later. Great class though.
As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P
[1] So, what's up with UTF support?
(Score: 1) by drgibbon on Tuesday February 25 2014, @04:28AM
Well, I think that simple idea grossly misrepresents the terrain and provides a pseudo-solution that does more harm than good, but hey who knows? :P
Possibly only Rucker's aliens could zip through time and tell us ;)
Certified Soylent Fresh!
(Score: 2) by mhajicek on Tuesday February 25 2014, @04:56AM
I think it's a matter of semantics.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 2, Informative) by TheLink on Tuesday February 25 2014, @07:21AM
Are the laws of this Universe such that merely having a data structure for "yourself" (whatever that means) will magically generate consciousness? Can't a robot be self-aware without being conscious?
In theory, can't I behave as if I am self-aware without the experience/phenomenon of consciousness that I (I'm not sure about other people) undergo? Or is it inevitably emergent because of some law in this universe?
Is it an emergent result of an entity recursively predicting itself (and the rest of the universe) with a quantum parallel/many-worlds computer? Or will any computation do? Or is even computation necessary?