
posted by Dopefish on Monday February 24 2014, @06:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

 
  • (Score: 5, Insightful) by SadEyes on Monday February 24 2014, @06:25AM

    by SadEyes (2930) on Monday February 24 2014, @06:25AM (#5603)

    So Google can build a computer and write a program that parses billions of pages. So what? How does this parsing affect the behavior of the program? The Google spider parses billions of pages every day, and no one would say that it is intelligent.

    The article talks about building a machine with real natural-language understanding. It would be easier to communicate with such a program, but why would you? People alter their behavior in response to stimuli, and respond to incentives both primitive (food, air) and sophisticated (group identity). What would the incentives for an AI even look like? Who would give them to the machine, and why would they ever give an AI an incentive other than "make money for my company"?

    I understand: we're nerds, we get excited about displays of technical wizardry, it's cool. I'm not exactly throwing in with the philosophers here, but I'd like some answers to the human-scale questions (above) as well.

  • (Score: 2, Funny) by ls671 on Monday February 24 2014, @07:31AM

    by ls671 (891) on Monday February 24 2014, @07:31AM (#5650) Homepage

    "The article talks about building a machine with real natural-language"

    What is important is neural languages, not "natural language", because the latter may vary depending on where you are from.

    https://en.wikipedia.org/wiki/Artificial_neural_network [wikipedia.org]

    I talked about that during a break in a meeting attended by representatives from a bunch of well-known companies, and after I was done, somebody asked me: "What are you talking about? A urinal network?"

    That was really funny.

    --
    Everything I write is lies, including this sentence.
    • (Score: 0, Redundant) by ls671 on Monday February 24 2014, @10:28AM

      by ls671 (891) on Monday February 24 2014, @10:28AM (#5719) Homepage

      OK, re-reading, it might be hard to spot. So here it is again in bold:

      somebody asked me: "What are you talking about? A urinal network?"

      --
      Everything I write is lies, including this sentence.
    • (Score: 0) by Anonymous Coward on Monday February 24 2014, @10:43AM

      by Anonymous Coward on Monday February 24 2014, @10:43AM (#5727)

      Actually, I've heard that people talking when they encounter each other at the toilet is an important factor in corporate communication (and one of the reasons why women have problems in male-dominated companies: they obviously cannot participate in that). So you might indeed speak of a urinal network.

  • (Score: 3, Interesting) by Anonymous Coward on Monday February 24 2014, @07:43AM

    by Anonymous Coward on Monday February 24 2014, @07:43AM (#5656)

    The AI heavies have probably all asked the same questions.

    The natural language stuff is more so the machine can learn how to understand us, not the other way around.

    This project touches on some of what you're talking about: http://sfist.com/2012/06/26/google_geniuses_teach_supercomputer.php [sfist.com] They pointed a 'neural net' at YouTube and told it to look for things without providing any reference, and it figured out that the internet haz cats (a toy sketch of that kind of label-free learning follows at the end of this comment).

    Saw another one somewhere about a theory that sensory input/physicality might be a fundamental and necessary part of an AI system getting the spark. That project sounds a bit wacky to me, but lord knows being conscious is definitely a bit wacky at times to say the least...
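
    For anyone curious what "figuring out the internet haz cats without being told" means mechanically, here is a deliberately tiny analogy. The real experiment used an enormous sparse autoencoder on YouTube frames; this sketch just runs plain k-means (NumPy only, synthetic 8x8 "images", entirely made up for illustration) and shows a recurring shape being discovered from unlabelled data:

import numpy as np

rng = np.random.default_rng(0)

def make_image(has_shape):
    # 8x8 "image": background noise, optionally with a recurring square object.
    img = rng.normal(0.0, 0.2, (8, 8))
    if has_shape:
        img[2:6, 2:6] += 1.0
    return img.ravel()

# Unlabelled dataset: half of the images contain the square, half are pure noise.
X = np.array([make_image(i % 2 == 0) for i in range(2000)])

# Plain k-means with two centroids, written out directly; start from two examples.
k = 2
centroids = X[:k].copy()
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
    centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])

# One centroid ends up looking like the square, the other like plain noise,
# even though no label or "reference" image was ever provided.
for c in centroids:
    print(np.round(c.reshape(8, 8), 1), "\n")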

  • (Score: 5, Interesting) by drgibbon on Monday February 24 2014, @07:46AM

    by drgibbon (74) on Monday February 24 2014, @07:46AM (#5657) Journal

    I think there's an important difference between intelligence and consciousness. The interesting thing about consciousness is that we never have access to it in others; it's always inferred from behaviour and/or physiology. Intelligence is almost a way of doing things, a kind of action or thought, while consciousness has this aspect of being attached to it. The sole proof of consciousness to the human is the individual's experience of life itself. I think defining it in an objective scientific way is almost a non-starter; you can't separate out the subjective from the objective, and the subjective/self-knowing/experiential aspect of consciousness is essentially its defining feature. It really is largely a mystery. Sure there is growing data on the physiological mechanisms, but the real essence of consciousness (i.e. experience) appears to be unknowable to anything except the conscious entity itself. Computers might be able to start understanding language in an intelligent sense, but to me this does not equate with consciousness. Would the computer be experiencing anything? One would suspect not.

    However a truly intelligent machine could be extremely useful. For instance, if it could really understand language, say to the point where it could read scientific papers, it would be fantastic to run hypotheses past an AI that has synthesised all human scientific knowledge of the brain. It might even be able to function as a translator between different branches of academic knowledge (social scientists could have access to a system that actually reads and understands the full sum of neuroscience knowledge, and so on).

    I could imagine some dire scenarios too, e.g. the machine becomes seen as some kind of all knowing oracle (when in reality it would be limited by the human information fed into it), and society is led down some wrong track because our assumptions about fundamentals are already incorrect, and we get stuck in a feedback loop between our intellectual output (into the machine) and its subsequent analysis and recommendations.

    Hmm, anyway I don't see any fundamental change in the computer being an information processor, even as they gain aspects of intelligence. To get back to your incentives and so on, they would need to be embodied, and programmed with sensations, needs, etc; which seems extremely foolish since we are already putting an enormous strain on the planet, and we have conscious entities exactly like that already (i.e. people), so there would seem to be no point (reinventing the wheel? ;). I suppose you could give it psychological needs (to be accepted by others and so on), but I don't see the value in this. In my opinion, an artificially intelligent system shouldn't need incentives; it just processes information but has no experience of being a machine, or of its place in reality. Which to me rules out consciousness, but not intelligence.

    --
    Certified Soylent Fresh!
    • (Score: 3, Interesting) by mhajicek on Monday February 24 2014, @03:32PM

      by mhajicek (51) on Monday February 24 2014, @03:32PM (#5878)

      I think any program that uses a world model that contains a representation of itself (such as a bot that maps the room and knows where it is within said room) has a rudimentary degree of consciousness. It is technically self aware. Like intelligence, consciousness has degrees.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 2) by drgibbon on Tuesday February 25 2014, @02:25AM

        by drgibbon (74) on Tuesday February 25 2014, @02:25AM (#6334) Journal

        A fuller answer to this idea is in my reply to the poster below, but I think this position is pretty much untenable. Why is the program self-aware? Based on what is it experiencing itself in reality? If we try to imagine what it's like to be the bot, all we do is insert the substance of our own consciousness inside our bot representation (e.g. imagining what it's like to be the bot). And presumably, the substance of our own consciousness is dependent on having a human body, so it should not transfer so easily. I grant that it is at least conceivable that the bot is subjectively experiencing something, but it seems far, far less likely that the bot is experiencing itself as compared with DNA-containing lifeforms, such as people, animals, plants and so on.

        --
        Certified Soylent Fresh!
        • (Score: 2) by mhajicek on Tuesday February 25 2014, @04:41AM

          by mhajicek (51) on Tuesday February 25 2014, @04:41AM (#6398)

          Substitute "bot" with any other intelligence and reread your post and it is equally valid. If we try to imagine what it's like to be the dog, all we do is insert the substance of our own consciousness inside our dog representation, for example. But with a bot, a dog, or any number of other things, we know it's thinking about itself; it's location, orientation, velocity, energy level, etc. Is not thinking about ones self the definition of self awareness?

          --
          The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
          • (Score: 1) by drgibbon on Tuesday February 25 2014, @06:32AM

            by drgibbon (74) on Tuesday February 25 2014, @06:32AM (#6425) Journal

            "Substitute 'bot' with any other intelligence and reread your post and it is equally valid."

            Of course, and this is what makes the problem so difficult. One can never know, with absolute certainty, that anyone but oneself is experiencing consciousness; but that doesn't mean we can't make working judgements. We infer consciousness in others (usually based on behaviour). But a conscious entity never infers its own consciousness; it must be self-evident.

            I am suggesting that the inference that a bot is consciously experiencing reality is not evidenced by the simple fact that it responds to the environment (or has models of the environment specified in code). True, neither of us have definitive proof, but I see no compelling reason to believe that it is so (other than a theoretical possibility, which IMO, is exceedingly small).

            For instance, a mobile phone has what you might call "awareness" of its energy levels and location in space; it responds to light, orientation, touch and so on. By your definition, the phone is conscious. It is "thinking" about its location in space, etc. I cannot prove that the phone is not conscious (just as you cannot prove that it is), but I make a working judgement that it is not. At present, everything that has a semblance of consciousness (which we must infer) is alive and contains DNA. Computer programs/bots/AI seem to be more akin to models of conscious life, rather than conscious life itself.

            Someone else posted something about David Chalmers, and I found some interesting discussion here [consc.net] about the easy vs. hard problems of consciousness (although I only skimmed the intro). What he talks about there is what I mean by consciousness, the phenomena of experience.

            --
            Certified Soylent Fresh!
            • (Score: 2) by mhajicek on Tuesday February 25 2014, @04:43PM

              by mhajicek (51) on Tuesday February 25 2014, @04:43PM (#6723)

              I agree that the cell phone is self-aware. The thing is that "self-awareness" has degrees, just like intelligence. A calculator has some intelligence, just not very much. An average computer has significantly more, and an average person much more than that. Same with self-awareness. An ant and a cell phone both have some self-awareness; there is nothing special about being DNA-based that gives a magical attribute of "consciousness".

              Am I right in inferring that when you say "consciousness" you're referring to the higher level of self awareness by which one is aware of one's own mind and thoughts? If so, even a significant portion of the human population may not be conscious. It seems many of them operate on instinct.

              --
              The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
              • (Score: 1) by drgibbon on Tuesday February 25 2014, @11:20PM

                by drgibbon (74) on Tuesday February 25 2014, @11:20PM (#6998) Journal

                In terms of consciousness in the experiential sense (which I would associate with self-awareness), I would say the phone is not self-aware (of course we both have no direct proof either way; I could say a plastic bag is imbued with a universal consciousness and neither of us could prove or disprove it definitively). Consciousness may have qualitative degrees (I'm sure it does in fact), but that does not mean that we must attribute it to telephones.

                Regarding DNA, I do not claim that it is magical, or that DNA alone gives consciousness (although it is at least conceivable); I was merely pointing out that everything so far that we would attribute with consciousness (in the experiential sense) is alive and contains DNA.

                As I have said many times, by consciousness I am referring to a subjective experience of being in the world (check out the Chalmers paper [consc.net] for a more thorough description of this). I cannot find any sympathy for your view that "a significant portion of the human population may not be conscious". Operating on instinct in no way rules out an experiential sense of being. I strongly doubt that mobile phones are imbued with a subjective experiential sense of being in reality. If you believe they are, we might have to agree to disagree!

                --
                Certified Soylent Fresh!
      • (Score: 1) by TheLink on Tuesday February 25 2014, @06:32AM

        by TheLink (332) on Tuesday February 25 2014, @06:32AM (#6424) Journal
        That's the interesting thing about this universe. By current popular scientific theories there really is no need for the actual consciousness phenomenon that we (or at least I) experience. In theory we could be behaving like self aware robots without actually experiencing the "self-aware" thing we currently experience.

        Of course one could go the other way and say that everything in this universe is actually self-aware. Just that different things have different abilities and powers- e.g. what rocks can do and feel is different from what we can do.

        Either way it's still rather interesting.
    • (Score: 2, Interesting) by melikamp on Monday February 24 2014, @04:17PM

      by melikamp (1886) on Monday February 24 2014, @04:17PM (#5919) Journal

      I think defining it in an objective scientific way is almost a non-starter; you can't separate out the subjective from the objective, and the subjective/self-knowing/experiential aspect of consciousness is essentially its defining feature. It really is largely a mystery. Sure there is growing data on the physiological mechanisms, but the real essence of consciousness (i.e. experience) appears to be unknowable to anything except the conscious entity itself.

      I disagree. When I was taking an AI class with Rudy Rucker [wikipedia.org], he said, almost as an aside, that abstract thinking is like having pictures (or simple data structures) modeling real-life phenomena, and consciousness can be understood as having a distinct data structure for yourself. So I am sitting by a computer in a room, and I have a picture in my head: me sitting by a computer in a room; that's all it takes. When I heard it about 10 years ago, I was largely in denial, thinking along your lines. But with time, this simple explanation made more and more sense to me, to the point that I no longer believe that consciousness is mysterious at all. It is much easier to design a self-conscious robot than an intelligent robot. Indeed, the Curiosity [wikipedia.org] rover is quite self-conscious, being able to emulate its own driving over the terrain it's observing, but at the same time dumb as a log when it comes to picking a destination.
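
      To make the "distinct data structure for yourself" idea concrete, here is a minimal toy sketch (hypothetical Python, nothing to do with how Curiosity's planner actually works): a bot whose world model contains its own position, and which "emulates" a move against that model before committing to it.

# 0 = free cell, 1 = obstacle; the bot keeps itself in its own map.
WORLD = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

class Bot:
    def __init__(self, x, y):
        self.pos = (x, y)                     # the bot's model of itself

    def simulate_move(self, dx, dy):
        """Emulate the move against the internal map without acting."""
        x, y = self.pos[0] + dx, self.pos[1] + dy
        in_bounds = 0 <= y < len(WORLD) and 0 <= x < len(WORLD[0])
        return (x, y) if in_bounds and WORLD[y][x] == 0 else None

    def move(self, dx, dy):
        target = self.simulate_move(dx, dy)   # consult the self-model first
        if target is not None:
            self.pos = target
        return self.pos

bot = Bot(0, 0)
print(bot.move(1, 0))   # (1, 0): the simulated move was safe
print(bot.move(0, 1))   # still (1, 0): (1, 1) is an obstacle, move refused

      Whether keeping and consulting such a self-entry amounts to any degree of consciousness is exactly what the rest of this thread argues about.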

      • (Score: 1) by drgibbon on Tuesday February 25 2014, @02:06AM

        by drgibbon (74) on Tuesday February 25 2014, @02:06AM (#6328) Journal

        Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom [rudyrucker.com]? Fantastic book! Had no idea he taught AI. In any case, I'd certainly disagree with Mr Rucker. While it's an appealing concept on the surface, I just don't think it holds much weight (no denial required ;). The mystery of consciousness is that a conscious being's only proof of it is his/her/its own experience. Consciousness is evidenced in the first place by its own experiential content; nothing else. This is the divide between subjective and objective. The "consciousness defining thing" (the subjective experiential content) is not accessible to others, thus we can't properly prove it in others (apart from its self-evident nature in ourselves; followed by inference for humans, animals, etc).

        If you ascribe consciousness to a bot then you surely must ascribe consciousness to trees and plants. They seem to know where they are, they move towards the sun and so on (some catch flies etc). And should we then say that they are rudimentary consciousnesses and lack intelligence? Based on what? That they are slow? Confined in space? Have no brain? Perhaps we simply lack the means to communicate with them (they may be sources of wisdom for all we know). An expert meditator might be doing absolutely nothing, sitting completely still, and having a mystical experience. Is he in a lower state of consciousness because he's not actively carrying out "intelligent tasks"? I like Rudy Rucker, but I think his position on consciousness (based on what you've said) is somewhat facile. The mistake is that you simply throw away the core meaning of consciousness. I write a program, it has sensors for where it is in the room, hey cool it's conscious! You solve the problem by avoiding the difficulties of the thing.

        The question is not, "does this thing have models of itself and react in the environment?", the question is "is this thing subjectively experiencing itself in reality?". IMO, they are just not the same. To equate the two certainly makes the problem of consciousness a lot easier, but unfortunately it does this by rendering the question (and therefore the answer) essentially meaningless.

        --
        Certified Soylent Fresh!
        • (Score: 1) by melikamp on Tuesday February 25 2014, @03:06AM

          by melikamp (1886) on Tuesday February 25 2014, @03:06AM (#6356) Journal

          Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom?

          The very same :) He was teaching computer science at San José[1] State till 2004, and, even though I am an avid science fiction fan, I did not find out about his writing until years later. Great class though.

          As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P

          [1] So, what's up with UTF support?

          • (Score: 1) by drgibbon on Tuesday February 25 2014, @04:28AM

            by drgibbon (74) on Tuesday February 25 2014, @04:28AM (#6388) Journal

            "As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P"

            Well, I think that simple idea grossly misrepresents the terrain and provides a pseudo-solution that does more harm than good, but hey who knows? :P

            Possibly only Rucker's aliens could zip through time and tell us ;)

            --
            Certified Soylent Fresh!
            • (Score: 2) by mhajicek on Tuesday February 25 2014, @04:56AM

              by mhajicek (51) on Tuesday February 25 2014, @04:56AM (#6404)

              I think it's a matter of semantics.

              --
              The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 2, Informative) by TheLink on Tuesday February 25 2014, @07:21AM

        by TheLink (332) on Tuesday February 25 2014, @07:21AM (#6453) Journal
        Not so mysterious? Explain the actual experience you experience then.

        Are the laws of this Universe such that merely putting a data structure for "yourself" (whatever that means) will magically generate consciousness? Can't a robot be self-aware without being conscious?

        In theory can't I behave as if I am self-aware without that consciousness experience/phenomenon that I (I'm not sure about other people) experience? Is it inevitably emergent because of some law in this universe?

        Is it an emergent result of an entity recursively predicting itself (and the rest of the universe) with a quantum parallel/many-worlds computer? Or will any computation do? Or is even computation necessary?
  • (Score: 3, Insightful) by dmc on Monday February 24 2014, @10:06AM

    by dmc (188) on Monday February 24 2014, @10:06AM (#5712)

    The Google spider parses billions of pages every day, and no one would say that it is intelligent.

    While I've got a lot against Google [soylentnews.org], I do fondly remember the early days of Google search. Not only were its search results uncannily well ordered (this was back when it appeared to be more a tool written by geeks, for geeks, than personalized search suggestions for the masses), but I'd even use it because it was faster to, e.g., Google search "wikipedia ..terms.." and get to the right page than to go to Wikipedia itself and search for the same thing. I know that isn't the sentient kind of intelligence you are referring to, but it was pretty amazing.

    So Google can build a computer and write a program that parses billions of pages. So what? How does this parsing affect the behavior of the program?

    Huh? I don't have the source code in front of me, but I can't imagine that the parsing of those billions of pages isn't affecting/effecting the behavior of the program. Part of me wants to say that it's probably not self-modifying code in the traditional sense, but that's actually more of an assumption than I'd chance to make as an outsider. I'd be surprised if genetic algorithms and other a-life principles haven't worked their way into the search code.

    What would the incentives for an AI even look like? Who would give them to the machine, and why would they ever give an AI an incentive other than "make money for my company"?

    You've seen the Terminator movies and all the other sci-fi, right? Self-preservation is an obvious incentive, and one doesn't have to imagine too hard to suspect it becoming emergent without human help. Beyond self-preservation, I'm reminded of ST:DS9's Dominion philosophy: "That which you can control, cannot control you." But really, your last point is probably the obvious winner for the early stages that involve human-given goals. Though after that, using personalized search for personalized political harassment to further entrench the rich in power seems pretty obvious to me.

    • (Score: 1) by TheLink on Tuesday February 25 2014, @07:45AM

      by TheLink (332) on Tuesday February 25 2014, @07:45AM (#6463) Journal
      The first AIs won't necessarily be interested in self-preservation. The first few living creatures might not have been either; it's just that the ones that didn't care enough died out.

      Seems like humans don't have enough awareness either - we are doing too many things without being aware of what will happen as a result or we don't even care.

      The Skynet thing is unlikely to happen. Many of the people at the top didn't get to the top and stay there because they let others take control. So I doubt they are going to ever let a Skynet take over everything. What is more likely to happen is these people at the top will use their Skynets to dominate and control most of the resources of the Earth, and eventually they won't need the rest of us anymore, except as toys, warriors and worshippers. Maybe as a reserve DNA pool and "raw material" just in case.

      If we're lucky we'll be kept around as pets and status symbols. But note that pets don't get to vote and are often spayed ;).
  • (Score: 2, Insightful) by Anonymous Coward on Monday February 24 2014, @10:37AM

    by Anonymous Coward on Monday February 24 2014, @10:37AM (#5725)

    What would the incentives for an AI even look like? Who would give them to the machine, and why would they ever give an AI an incentive other than "make money for my company"?

    Well, "make money for my company" would already be an incentive (and a measurable one, therefore perfect for a computer to optimize). Another incentive could be "raise the stock price".

    Of course the upper management might be in for a big surprise if the computer identifies it as a cost factor that can be safely removed ... ;-)