
SoylentNews is people

posted by Dopefish on Monday February 24 2014, @06:00AM   Printer-friendly
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
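The "red Volvo" example in the quote can be made concrete. Here is a purely hypothetical sketch (mine, not anything Google has published) of the difference between pattern-matching a sentence and encoding it as a structured event from which implications like ownership transfer can be derived:

```python
# Hypothetical sketch only: "John sold his red Volvo to Mary" encoded as a
# structured event instead of matched as a string of words. All names and
# fields are illustrative assumptions.
sale = {
    "type": "CommercialTransaction",
    "seller": "John",
    "buyer": "Mary",
    "item": {"kind": "car", "make": "Volvo", "color": "red"},
}

def owner_after(event):
    """Derive an implication a pure pattern matcher misses:
    after a sale, ownership has transferred to the buyer."""
    if event["type"] == "CommercialTransaction":
        return event["buyer"]
    return None

print(owner_after(sale))  # Mary
```

A system that only matched surface text could find the words "sold" and "Volvo" but could not answer "who owns the car now?"; the encoded event makes that a one-line lookup.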

 
This discussion has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by TGV on Monday February 24 2014, @09:04AM

    by TGV (2838) on Monday February 24 2014, @09:04AM (#5687)

    Kurzweil repeats this message so regularly, always keeping the date 15 years in the future, that to me he's become a laughing stock. We cannot even define consciousness. How are we going to recognize it if it ever presents itself?

    Furthermore, a physical brain needs about 15 years to develop consciousness, at least at any interesting level. I'm not talking about "self-recognition in a mirror" here, but even that cannot be expected in the next 15 years, at least not as something that emerges from an autonomous machine that is in no way programmed to recognize itself in a mirror. So how would we suddenly jump to full consciousness in such a short time?

  • (Score: 2, Insightful) by SlimmPickens on Monday February 24 2014, @10:51AM

    by SlimmPickens (1056) on Monday February 24 2014, @10:51AM (#5730)

    "a physical brain needs about 15 years to develop consciousness"

    The AI we have now learns much, much faster than that. Ray pointed out many years ago that the switching speed of a transistor was already 1000 times faster than that of a neuron, and on top of that there's parallelisation to exploit. Even if our software is inefficient, a 2029 software human should experience time orders of magnitude faster than a fleshy human. I think one year will be a very long time.

    Also, while it does take 15 years to become useful, the baby is conscious before birth.
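The speed argument above is easy to check as back-of-the-envelope arithmetic. The numbers here are the comment's own assumptions (the "1000x" claim plus a made-up software-overhead factor), not measurements:

```python
# Back-of-the-envelope version of the speed claim above.
# All figures are assumptions from the comment, not measurements.
neuron_hz = 200.0                  # rough upper bound on neuron firing rate
transistor_hz = neuron_hz * 1000   # "1000 times faster", per the claim

raw_speedup = transistor_hz / neuron_hz      # 1000x

# Suppose the software is grossly inefficient, costing a factor of 100:
overhead = 100.0
subjective_speedup = raw_speedup / overhead  # still 10x real time

# One calendar year would then feel like a decade to the "software human".
print(subjective_speedup)  # 10.0
```

Even granting a two-orders-of-magnitude software penalty, the claimed hardware ratio still leaves a 10x subjective speedup, which is the shape of the "one year will be a very long time" argument.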

    • (Score: 1) by TGV on Monday February 24 2014, @11:01AM

      by TGV (2838) on Monday February 24 2014, @11:01AM (#5732)

      A baby cannot be considered conscious before birth. If that's the level Kurzweil is aiming at, he can be reassured. We implemented that level a long time ago.

      The speed of the transistor (who is talking about transistors anyway? their speed may be higher, but their size is at least a million times that of a neuron) is not really relevant. And what parallelism? Our brain works in parallel. Transistors work in parallel? Are you suddenly talking about CPUs?

      Anyway, I already feel sorry for the "software human". If one of our hours looks like a week to him, he'll be bored pretty soon.

      • (Score: 3, Insightful) by drgibbon on Monday February 24 2014, @12:07PM

        by drgibbon (74) on Monday February 24 2014, @12:07PM (#5755) Journal

        A baby cannot be considered conscious before birth? What a strange notion!

        But I agree that the speed of transistors is not really relevant. There's a hell of a lot going on inside a neuron that is not going on inside a transistor (DNA, epigenetics). It seems a far too simplified view to think that transistors must have the potential for rapidly developing consciousness (or consciousness at all for that matter) because "they're faster than neurons".

        --
        Certified Soylent Fresh!
        • (Score: 1) by SlimmPickens on Monday February 24 2014, @01:50PM

          by SlimmPickens (1056) on Monday February 24 2014, @01:50PM (#5796)

          "There's a hell of a lot going on inside a neuron that is not going on inside a transistor (DNA, epigenetics)"

          While that surely matters if you're simulating everything, most practitioners of AGI are taking the algorithmic approach. We don't need to implement all that detail, we just need algorithms that describe the function of the brain regions, and since such a wide variety of animal brains work, I'm sure we can stray pretty far from what a human is and still have something equivalent or better.

          I think it's implicit that AIs run on computers. Obviously the speed of the computer is paramount.

          • (Score: 1) by drgibbon on Monday February 24 2014, @03:15PM

            by drgibbon (74) on Monday February 24 2014, @03:15PM (#5860) Journal

            "We don't need to implement all that detail, we just need algorithms that describe the function of the brain regions, and since such a wide variety of animal brains work I'm sure we can stray pretty far from what a human is and have something equivalent or better."

            Not sure what you meant here regarding animal vs human brains, my comment wasn't meant to be restricted to humans since DNA is found in all known life forms. Anyway, I don't think you can so easily dismiss such a fundamental part of the workings of sentient beings as "all that detail" (at least if you want to talk about consciousness). Simulation at the neural level might end up with some intelligent output/action, but that does not automatically equate with consciousness. By consciousness, I mean the entity's subjective knowing, or personal experience of itself in reality and its relationship within, and to, reality. This is not the same thing as intelligence. I can understand that faster computation might lead to higher intelligence (but obviously will not be sufficient alone), but I see no reason to link faster processing with consciousness per se.

            --
            Certified Soylent Fresh!
            • (Score: 1) by Namarrgon on Tuesday February 25 2014, @03:39AM

              by Namarrgon (1134) on Tuesday February 25 2014, @03:39AM (#6369)

              One could make the argument that consciousness ("self-awareness" at least) is largely a sufficiently detailed model of oneself, where "sufficiently" is a sliding scale and of course need not be complete or even completely accurate.

              We're making early progress on general computer models of the physical world (thinking more Cyc [wikipedia.org], Wolfram Alpha [wikipedia.org], Google's Knowledge Graph [wikipedia.org] here). It'll be interesting when those models start to move beyond generalities, then include the specific instance of the system they run on.

              --
              Why would anyone engrave Elbereth?
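The "sufficiently detailed model of oneself" idea in the comment above can be pictured with a toy example (purely illustrative Python of my own; no claim that Cyc, Wolfram Alpha, or the Knowledge Graph work this way):

```python
# Toy agent whose world model may or may not contain an entry for the
# agent itself. Illustrative only.
class Agent:
    def __init__(self, name):
        self.name = name
        self.world_model = {}  # entity name -> description

    def observe(self, entity, description):
        self.world_model[entity] = description

    def has_self_model(self):
        # The "sufficiently detailed model of oneself" from the comment
        # above, reduced to its crudest possible form: does the system
        # appear anywhere in its own system of models?
        return self.name in self.world_model

a = Agent("system-1")
a.observe("red Volvo", {"kind": "car"})
print(a.has_self_model())   # False: it models only its environment

a.observe("system-1", {"kind": "agent", "note": "this very system"})
print(a.has_self_model())   # True: the system now appears in its own models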
              • (Score: 2, Insightful) by drgibbon on Tuesday February 25 2014, @04:08AM

                by drgibbon (74) on Tuesday February 25 2014, @04:08AM (#6380) Journal

                Those things do sound interesting, but the model is not the thing itself. No one would believe that a computer model of the physical world is that physical world, why should anyone think that a model of consciousness is consciousness? It is, well, a model!

                --
                Certified Soylent Fresh!
                • (Score: 1) by Namarrgon on Wednesday February 26 2014, @03:18AM

                  by Namarrgon (1134) on Wednesday February 26 2014, @03:18AM (#7094)

                  The model is not of consciousness, but of the entity itself as distinct from the environment around it. I'm suggesting that the inclusion of "self" in the system of models is a major factor in what we call "consciousness".

                  All animals model the environment around them, to gain understanding of how to use or avoid it. Is that scent good to eat? Is that shadow likely to eat me? Babies rapidly learn to do this too, and it's interesting to observe a child's environmental models growing in complexity.

                  At a certain age (usually 2-3), toddlers start to include themselves in this system of models, to predict how they themselves will react to a situation. This comes with a growing awareness of "I" as distinct from "other" (which is usually reflected in their increasing use of personal pronouns), and enables them to grow beyond pure reaction and instinct towards making "conscious" decisions to manipulate the environment to better suit this new self they have become aware of. A fascinating process that most parents are familiar with.

                  Of course, there may be many more important factors in consciousness at work, but at the least it strikes me as an approach that's worth exploring.

                  --
                  Why would anyone engrave Elbereth?
                  • (Score: 1) by drgibbon on Wednesday February 26 2014, @07:14AM

                    by drgibbon (74) on Wednesday February 26 2014, @07:14AM (#7170) Journal

                    Understood. But again, it does not follow that a model of an entity that has consciousness will necessarily have consciousness itself. I agree that people and animals build what could be called cognitive models of the environment and of themselves, and it's very interesting, but that does not mean we should equate the phenomenon of conscious experience with these models. Although having a sense of self may come to mind when one thinks about consciousness, there are states of consciousness where the self does not even exist! So yes, self-awareness (in the usual sense of "my mind", "my body", "my life", and so on) can be divorced from conscious experience. What I was really getting at there was your statement:

                    "One could make the argument that consciousness ("self-awareness" at least) is largely a sufficiently detailed model of oneself".

                    I would say that the fundamental aspects of consciousness are not captured by this definition. If we confuse the things that seem to rely on consciousness (e.g. self-awareness, intelligence, and so on) with the phenomenon of consciousness itself (i.e. the capacity to subjectively experience reality), we run into problems. Not only does the word consciousness cease to have any precise or useful meaning, but we are led down the (IMO) garden path of attributing consciousness to anything that can mimic these models.

                    I think that the development of children and so on is a fascinating area, but studies in that direction would seem to be more properly called cognitive/developmental, rather than of consciousness per se.

                    --
                    Certified Soylent Fresh!
                    • (Score: 1) by Namarrgon on Thursday February 27 2014, @09:29AM

                      by Namarrgon (1134) on Thursday February 27 2014, @09:29AM (#7861)

                      "the phenomenon of consciousness itself (i.e. the capacity to subjectively experience reality)"

                      Not really the definition I had in mind, but that's part of the problem: nobody really knows.

                      I would have said that in order to subjectively experience anything, one had to be aware of oneself first, and be aware of the effect that experience has on oneself - which to me implies a self-model.

                      But I'll happily concede my opinion is no better than any other, and we won't really know anything much for sure until we try. Which I guess was the original point; it's an approach worth trying, and we'll see how it turns out. At the least, we'll learn something.

                      --
                      Why would anyone engrave Elbereth?
        • (Score: 0) by Anonymous Coward on Monday February 24 2014, @03:01PM

          by Anonymous Coward on Monday February 24 2014, @03:01PM (#5841)

          A baby cannot be considered conscious before birth? What a strange notion!

          Not really. Our justice system has decided that women have the right to an abortion and it's legal up until the time of birth, regardless of the fact that most people consider a 9th month abortion absurd. A baby can't be considered conscious before birth because that would mean that the courts have sanctioned murder. Further, it would open up the discussion to the question of exactly what changes when a fetus turns into a person and that would potentially lead to admitting that millions of people have been murdered with the sanction of the courts by their own mothers.

          The real issue is what makes a person a person. That has always been the issue. But neither side can let the debate become based in that fact because it is a compromise and neither side can bear the thought of compromise.

          For what it's worth, I believe that brain activity is what defines a person with rights, and the courts already support that in people who have been in an accident, but since I am clicking that anonymous checkbox, I doubt my opinion counts for much.

          • (Score: 3, Insightful) by drgibbon on Monday February 24 2014, @03:58PM

            by drgibbon (74) on Monday February 24 2014, @03:58PM (#5901) Journal

            But whose justice system are you referring to? Not nitpicking, but it's important to actually state the place, rather than assume anyone else will automatically know what "our justice system" refers to.

            "A baby can't be considered conscious before birth because that would mean that the courts have sanctioned murder."

            Hmm, so because we cannot have courts sanctioning murder, it logically follows that, in reality, the baby was never conscious until it left the womb? The courts are without question sanctioning the killing of living people (the baby lives, yes), but I don't really want to get into an ethical discussion about that (for what it's worth, I believe women have the right to an abortion). The way that legal institutions around the world might define consciousness for the purposes of abortion doesn't seem particularly relevant to me. Who would honestly say that a baby has no consciousness whatsoever until it leaves the womb (other than for legal purposes)? I mean, first of all, we cannot truly know, and secondly, it seems to quite plainly deny the reality of life. There may be some transition period before the baby is considered to have consciousness, but I find the original statement, "a baby cannot be considered conscious before birth", honestly pretty bizarre!

            Btw, nothing wrong with anon comments IMO.

            --
            Certified Soylent Fresh!
            • (Score: 1) by bucc5062 on Monday February 24 2014, @06:58PM

              by bucc5062 (699) on Monday February 24 2014, @06:58PM (#6061)

              And to pull this a little back on topic, if something within a computer system becomes "conscious" and upon our knowing, we pull the plug and kill it, have we committed murder?

              --
              The more things change, the more they look the same
    • (Score: 0) by Anonymous Coward on Monday February 24 2014, @11:16AM

      by Anonymous Coward on Monday February 24 2014, @11:16AM (#5741)

      Also, while it does take 15 years to become useful, the baby is conscious before birth.

      How do you get a mirror into the womb to test that?

    • (Score: 1) by WillR on Monday February 24 2014, @04:51PM

      by WillR (2012) on Monday February 24 2014, @04:51PM (#5951)

      "The AI we have now learns much much faster than that. Ray pointed out many years ago that the switching speed of a transistor was already 1000 times faster than the switching speed of a neuron, plus there's parallelisation to exploit."

      And yet here we sit, a predicted 20-30 years away from conscious software. Same as in the 1990s, and the 80s, and the 70s.

      It's like the problem is just not amenable to being solved by throwing bigger storage and faster neural nets at it, or something.

  • (Score: 1) by threedigits on Monday February 24 2014, @11:13AM

    by threedigits (607) on Monday February 24 2014, @11:13AM (#5738)

    We cannot even define consciousness. How are we going to recognize it if it ever presents itself?

    Easy, because it/she/he will start asking interesting questions, especially about him/her/itself. At this point you have a conscious intelligent being.

    • (Score: 5, Funny) by TGV on Monday February 24 2014, @11:16AM

      by TGV (2838) on Monday February 24 2014, @11:16AM (#5740)

      An intelligent 5 year old "software human"

      10 PRINT "Why?"
      20 GOTO 10

      • (Score: 0) by Anonymous Coward on Monday February 24 2014, @01:07PM

        by Anonymous Coward on Monday February 24 2014, @01:07PM (#5769)

        You forgot a line:

        15 INPUT A$: REM answer waited for, but then ignored

        • (Score: 1) by githaron on Monday February 24 2014, @03:26PM

          by githaron (581) on Monday February 24 2014, @03:26PM (#5873)

          He was trying to increase the functional efficiency of the algorithm.

    • (Score: 1) by c0lo on Monday February 24 2014, @11:49AM

      by c0lo (156) on Monday February 24 2014, @11:49AM (#5751) Journal

      We cannot even define consciousness. How are we going to recognize it if it ever presents itself?

      Easy, because it/she/he will start asking interesting questions, especially about him/her/itself. At this point you have a conscious intelligent being.

      Whoa there, cowboy, hold your horses.
      I can guarantee you that all the primates now in existence are self-conscious. However, I have yet to hear of an ape that asks interesting questions; why, a lot of Homo sapiens primates would fail this test.
      Want proof, you say? When was the last time you had a "townhall meeting with the upper management", and how much interest did it awaken in you? (I mean... leaving aside the excitement of being the first to shout bingo [wikipedia.org].)

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0
      • (Score: 1) by TGV on Monday February 24 2014, @01:28PM

        by TGV (2838) on Monday February 24 2014, @01:28PM (#5783)

        I think the comment was meant in jest.

    • (Score: 0) by Anonymous Coward on Monday February 24 2014, @02:14PM

      by Anonymous Coward on Monday February 24 2014, @02:14PM (#5811)

      If the history of AI research has proven anything, it's that conscious awareness is not as simple as "throwing more computing power" at the problem. We can't build it if we really don't understand what we're building.