By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.
Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.
I think there's an important difference between intelligence and consciousness. The interesting thing about consciousness is that we never have access to it in others; it's always inferred from behaviour and/or physiology. Intelligence is almost a way of doing things, a kind of action or thought, while consciousness has this aspect of being attached to it. The sole proof of consciousness available to a human is that individual's own experience of life. I think defining it in an objective scientific way is almost a non-starter; you can't separate out the subjective from the objective, and the subjective/self-knowing/experiential aspect of consciousness is essentially its defining feature. It really is largely a mystery. Sure there is growing data on the physiological mechanisms, but the real essence of consciousness (i.e. experience) appears to be unknowable to anything except the conscious entity itself. Computers might be able to start understanding language in an intelligent sense, but to me this does not equate with consciousness. Would the computer be experiencing anything? One would suspect not.
However a truly intelligent machine could be extremely useful. For instance, if it could really understand language, say to the point where it could read scientific papers, it would be fantastic to run hypotheses past an AI that has synthesised all human scientific knowledge of the brain. It might even be able to function as a translator between different branches of academic knowledge (social scientists could have access to a system that actually reads and understands the full sum of neuroscience knowledge, and so on).
I could imagine some dire scenarios too, e.g. the machine comes to be seen as some kind of all-knowing oracle (when in reality it would be limited by the human information fed into it), society is led down the wrong track because our assumptions about fundamentals are already incorrect, and we get stuck in a feedback loop between our intellectual output (fed into the machine) and its subsequent analysis and recommendations.
Hmm, anyway, I don't see any fundamental change in the computer being an information processor, even as computers gain aspects of intelligence. To get back to your incentives and so on: they would need to be embodied and programmed with sensations, needs, etc., which seems extremely foolish, since we are already putting an enormous strain on the planet and we already have conscious entities exactly like that (i.e. people), so there would seem to be no point (reinventing the wheel? ;). I suppose you could give it psychological needs (to be accepted by others and so on), but I don't see the value in this. In my opinion, an artificially intelligent system shouldn't need incentives; it just processes information but has no experience of being a machine, or of its place in reality. Which to me rules out consciousness, but not intelligence.
I think any program that uses a world model that contains a representation of itself (such as a bot that maps the room and knows where it is within said room) has a rudimentary degree of consciousness. It is technically self aware. Like intelligence, consciousness has degrees.
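To make the claim concrete, here is a minimal sketch (all names hypothetical, not from any real robotics library) of a world model that contains a representation of the bot itself: the bot stores a model of the room and its own position within that same model, and can report on its own state.

```python
# Illustrative sketch: a bot whose world model includes an entry
# for the bot itself. Names (RoomBot, where_am_i) are hypothetical.
class RoomBot:
    def __init__(self, width, height, x, y):
        # World model: the known room dimensions...
        self.world = {"width": width, "height": height}
        # ...plus a record of the bot's own position within that model.
        self.self_model = {"x": x, "y": y}

    def move(self, dx, dy):
        # Update the bot's representation of itself, clamped to the room.
        nx = min(max(self.self_model["x"] + dx, 0), self.world["width"] - 1)
        ny = min(max(self.self_model["y"] + dy, 0), self.world["height"] - 1)
        self.self_model = {"x": nx, "y": ny}

    def where_am_i(self):
        # The "self-aware" query: the bot reports on its own state.
        return (self.self_model["x"], self.self_model["y"])

bot = RoomBot(10, 10, 2, 3)
bot.move(1, 0)
print(bot.where_am_i())  # (3, 3)
```

Whether answering `where_am_i()` amounts to self awareness in any meaningful sense is, of course, exactly what the rest of this thread disputes.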
A fuller answer to this idea is in my reply to the poster below, but I think this position is pretty much untenable. Why is the program self-aware? On what basis is it experiencing itself in reality? If we try to imagine what it's like to be the bot, all we do is insert the substance of our own consciousness into our representation of the bot (i.e. imagining what it's like to be the bot). And presumably the substance of our own consciousness depends on having a human body, so it should not transfer so easily. I grant that it is at least conceivable that the bot is subjectively experiencing something, but it seems far, far less likely that the bot is experiencing itself than that DNA-containing lifeforms, such as people, animals and plants, are.
Substitute "bot" with any other intelligence and reread your post and it is equally valid. If we try to imagine what it's like to be a dog, all we do is insert the substance of our own consciousness inside our dog representation, for example. But with a bot, a dog, or any number of other things, we know it's thinking about itself: its location, orientation, velocity, energy level, etc. Is not thinking about oneself the definition of self awareness?
"Substitute 'bot' with any other intelligence and reread your post and it is equally valid."
Of course, and this is what makes the problem so difficult. One can never know, with absolute certainty, that anyone but oneself is experiencing consciousness; but that doesn't mean we can't make working judgements. We infer consciousness in others (usually based on behaviour). But a conscious entity never infers its own consciousness; it must be self-evident.
I am suggesting that the inference that a bot is consciously experiencing reality is not evidenced by the simple fact that it responds to the environment (or has models of the environment specified in code). True, neither of us has definitive proof, but I see no compelling reason to believe that it is so (other than a theoretical possibility, which IMO is exceedingly small).
For instance, a mobile phone has what you might call "awareness" of its energy levels and location in space; it responds to light, orientation, touch and so on. By your definition, the phone is conscious. It is "thinking" about its location in space, etc. I cannot prove that the phone is not conscious (just as you cannot prove that it is), but I make a working judgement that it is not. At present, everything that has a semblance of consciousness (which we must infer) is alive and contains DNA. Computer programs/bots/AI seem to be more akin to models of conscious life, rather than conscious life itself.
Someone else posted something about David Chalmers, and I found some interesting discussion here [consc.net] about the easy vs. hard problems of consciousness (although I only skimmed the intro). What he talks about there is what I mean by consciousness, the phenomena of experience.
I agree that the cell phone is self aware. The thing is that "self awareness" has degrees just like intelligence. A calculator has some intelligence, just not very much. An average computer has significantly more, and an average person much more than that. Same with self awareness. An ant and a cell phone both have some self awareness; there is nothing special about being DNA based that gives a magical attribute of "consciousness".
Am I right in inferring that when you say "consciousness" you're referring to the higher level of self awareness by which one is aware of one's own mind and thoughts? If so, even a significant portion of the human population may not be conscious. It seems many of them operate on instinct.
In terms of consciousness in the experiential sense (which I would associate with self-awareness), I would say the phone is not self-aware (of course we both have no direct proof either way; I could say a plastic bag is imbued with a universal consciousness and neither of us could prove or disprove it definitively). Consciousness may have qualitative degrees (I'm sure it does in fact), but that does not mean that we must attribute it to telephones.
Regarding DNA, I do not claim that it is magical, or that DNA alone gives consciousness (although it is at least conceivable); I was merely pointing out that everything so far that we would attribute with consciousness (in the experiential sense) is alive and contains DNA.
As I have said many times, by consciousness I am referring to a subjective experience of being in the world (check out the Chalmers paper [consc.net] for a more thorough description of this). I cannot find any sympathy for your view that "a significant portion of the human population may not be conscious". Operating on instinct in no way rules out an experiential sense of being. I strongly doubt that mobile phones are imbued with a subjective experiential sense of being in reality. If you believe they are, we might have to agree to disagree!
I think defining it in an objective scientific way is almost a non-starter; you can't separate out the subjective from the objective, and the subjective/self-knowing/experiential aspect of consciousness is essentially its defining feature. It really is largely a mystery. Sure there is growing data on the physiological mechanisms, but the real essence of consciousness (i.e. experience) appears to be unknowable to anything except the conscious entity itself.
I disagree. When I was taking an AI class with Rudy Rucker [wikipedia.org], he said, almost as an aside, that abstract thinking is like having pictures (or simple data structures) modeling real-life phenomena, and consciousness can be understood as having a distinct data structure for yourself. So I am sitting by a computer in a room, and I have a picture in my head: me sitting by a computer in a room; that's all it takes. When I heard it about 10 years ago, I was largely in denial, thinking along your lines. But with time, this simple explanation made more and more sense to me, to the point that I no longer believe that consciousness is mysterious at all. It is much easier to design a self-conscious robot than an intelligent robot. Indeed, the Curiosity [wikipedia.org] rover is quite self-conscious, being able to emulate its own driving over the terrain it's observing, but at the same time dumb as a log when it comes to picking a destination.
Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom [rudyrucker.com]? Fantastic book! Had no idea he taught AI. In any case, I'd certainly disagree with Mr Rucker. While it's an appealing concept on the surface, I just don't think it holds much weight (no denial required ;). The mystery of consciousness is that a conscious being's only proof of it is his/her/its own experience. Consciousness is evidenced in the first place by its own experiential content; nothing else. This is the divide between subjective and objective. The "consciousness defining thing" (the subjective experiential content) is not accessible to others, thus we can't properly prove it in others (apart from its self-evident nature in ourselves; followed by inference for humans, animals, etc).
If you ascribe consciousness to a bot then you surely must ascribe consciousness to trees and plants. They seem to know where they are, they move towards the sun and so on (some catch flies etc). And should we then say that they are rudimentary consciousnesses and lack intelligence? Based on what? That they are slow? Confined in space? Have no brain? Perhaps we simply lack the means to communicate with them (they may be sources of wisdom for all we know). An expert meditator might be doing absolutely nothing, sitting completely still, and having a mystical experience. Is he in a lower state of consciousness because he's not actively carrying out "intelligent tasks"? I like Rudy Rucker, but I think his position on consciousness (based on what you've said) is somewhat facile. The mistake is that you simply throw away the core meaning of consciousness. I write a program, it has sensors for where it is in the room, hey cool it's conscious! You solve the problem by avoiding the difficulties of the thing.
The question is not, "does this thing have models of itself and react in the environment?", the question is "is this thing subjectively experiencing itself in reality?". IMO, they are just not the same. To equate the two certainly makes the problem of consciousness a lot easier, but unfortunately it does this by rendering the question (and therefore the answer) essentially meaningless.
Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom?
The very same :) He was teaching computer science at San José State till 2004, and, even though I am an avid science fiction fan, I did not find out about his writing until years later. Great class though.
As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P
 So, what's up with UTF support?
"As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P"
Well, I think that simple idea grossly misrepresents the terrain and provides a pseudo-solution that does more harm than good, but hey who knows? :P
Possibly only Rucker's aliens could zip through time and tell us ;)
I think it's a matter of semantics.