
SoylentNews is people

posted by n1 on Friday January 08 2016, @08:54AM   Printer-friendly
from the something-to-think-about dept.

The idea of a thinking machine is an amazing one. It would be like humans creating artificial life, only more impressive because we would be creating consciousness. Or would we? It's tempting to think that a machine that could think would think like us. But a bit of reflection shows that's not an inevitable conclusion.

To begin with, we'd better be clear about what we mean by "think". A comparison with human thinking might be intuitive, but what about animal thinking? Does a chimpanzee think? Does a crow? Does an octopus?

The philosopher Thomas Nagel said that there was "something that it is like" to have conscious experiences. There's something that it is like to see the colour red, or to go water skiing. We are more than just our brain states.

Could there ever be "something that it's like" to be a thinking machine? In an imagined conversation with the first intelligent machine, a human might ask "Are you conscious?", to which it might reply, "How would I know?".

http://theconversation.com/what-does-it-mean-to-think-and-could-a-machine-ever-do-it-51316

[Related Video]: They're Made Out of Meat


Original Submission

  • (Score: 1, Touché) by Anonymous Coward on Friday January 08 2016, @09:08AM

    by Anonymous Coward on Friday January 08 2016, @09:08AM (#286516)

    Humans can think?

    • (Score: 3, Insightful) by ticho on Friday January 08 2016, @09:10AM

      by ticho (89) on Friday January 08 2016, @09:10AM (#286517) Homepage Journal

      They can, they just usually choose not to.

      • (Score: 3, Funny) by sudo rm -rf on Friday January 08 2016, @04:01PM

        by sudo rm -rf (2357) on Friday January 08 2016, @04:01PM (#286661) Journal

        One of the things Ford Prefect had always found hardest to understand about humans was their habit of continually stating and repeating the very very obvious, as in It's a nice day, or You're very tall, or Oh dear you seem to have fallen down a thirty-foot well, are you alright? At first Ford had formed a theory to account for this strange behaviour. If human beings don't keep exercising their lips, he thought, their mouths probably seize up. After a few months' consideration and observation he abandoned this theory in favour of a new one. If they don't keep on exercising their lips, he thought, their brains start working. After a while he abandoned this one as well as being obstructively cynical.

        -- Douglas Adams

      • (Score: 3, Funny) by aristarchus on Friday January 08 2016, @11:51PM

        by aristarchus (2645) on Friday January 08 2016, @11:51PM (#287001) Journal

        Bertrand Russell:

        Most people would sooner die than think; in fact, they do so.

    • (Score: 2) by khchung on Friday January 08 2016, @09:16AM

      by khchung (457) on Friday January 08 2016, @09:16AM (#286522)

      Good point, really.

      Show me how one can conclude a human is "conscious", then we can ask how we would know if a machine could do it.

      I.e. what is the test one can devise to show that any given human is "conscious"?

    • (Score: 0) by Anonymous Coward on Friday January 08 2016, @10:04AM

      by Anonymous Coward on Friday January 08 2016, @10:04AM (#286536)

      Well, at least they think so.

  • (Score: 5, Insightful) by julian on Friday January 08 2016, @09:16AM

    by julian (6003) on Friday January 08 2016, @09:16AM (#286521)

    I recommend the book Superintelligence by Nick Bostrom, which deals with the technical paths and implications of developing an AI that might be able to answer, or at least meaningfully attempt to answer, this question. We've actually broken a few barriers before and people generally still say computers can't "think". It was once said they'd never be able to beat a human at chess; that was too hard and required a sort of understanding that wasn't reducible to mathematics and thus not amenable to running on a computer. Computers would never be able to discern objects in pictures from context, but Google can now do this with their image search. One memorable line from the book is a quote from computer scientist John McCarthy, "As soon as it works, no one calls it AI any more."

    People are going to be reluctant to ever admit a machine is thinking even if it's indistinguishable from other humans. It eventually boils down to a sort of biological chauvinism, and a stubborn commitment to dualism usually driven by the wish-dream to survive death through the intercession of a deity. I'm afraid to die > I have a soul that can't die > my soul is the seat of my intellect and thus my thoughts > machines are man made and don't have souls > machines can't think. QED. This reasoning is acceptable to literally billions of people and it is also unfalsifiable.

    And it's not just harmless differences in opinion on a philosophical question. They'll try to stop real progress using similar justifications, wait and see. They did it with stem cells.

    • (Score: 4, Interesting) by q.kontinuum on Friday January 08 2016, @09:59AM

      by q.kontinuum (532) on Friday January 08 2016, @09:59AM (#286534) Journal

      I'm afraid to die > I have a soul that can't die > my soul is the seat of my intellect and thus my thoughts > machines are man made and don't have souls > machines can't think. QED. This reasoning is acceptable to literally billions of people and it is also unfalsifiable.

      Thought experiment: Consider that a single neuron of a living person can be replaced by a system on a chip. It measures hormone and electrolyte states and acts accordingly, stimulating other neurons. Now imagine that whenever a neuron fails, it is replaced by such a chip. At what percentage did the living person stop existing? Did they ever, or does the person still exist even after all neurons are replaced? Is the eventually resulting machine alive? If so, what about an exact copy of the machine, or the same machine after it was switched off for a year or so? What if a part of the brain is replaced by a simulation which doesn't resemble single neurons, but a whole segment of the brain?

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 5, Insightful) by Non Sequor on Friday January 08 2016, @11:55AM

        by Non Sequor (1005) on Friday January 08 2016, @11:55AM (#286549) Journal

        I don't think everyone necessarily thinks the implementation of the brain is magical. It's just that I can't figure out these two things:

        1. What I call "my reality" is a mapping from states in my brain to "colors", "sounds", "tastes", and "textures" which have no direct, concrete, physical reality other than that they are consistently triggered by the same concrete, physical stimuli (although the senses can be fooled).
        2. This mapping is apparently observable by the brain.

         

        #1 can be dealt with by saying that, mathematically, the mapping between the physical states of the brain and a narrative relating them among themselves and to external stimuli is unique up to isomorphism, while a simpler system may have multiple plausible narratives. For example, if you write a simple program and you have some variables used to track what you intend to be emotional states, even if you can put together a plausible narrative describing how they change in response to the environment, you could just as easily relabel them and create a different narrative.

        So even though I don't really think I understand it, #1 almost kind of makes sense. But #2 just seems nuts. If I'm straining to understand the physicality of #1, how on earth is it observed by the brain which seems to be a physical object with no stray wires trailing off into a platonic realm of mappings that are unique up to isomorphism?

        --
        Write your congressman. Tell him he sucks.
        • (Score: 4, Insightful) by q.kontinuum on Friday January 08 2016, @12:35PM

          by q.kontinuum (532) on Friday January 08 2016, @12:35PM (#286556) Journal

          Interestingly, #2 seems to be a question on which neither science nor philosophy has made any progress. Basically, this is the question of what the difference between data processing and consciousness is. Solving it is actually quite important, as it determines to what extent artificial intelligences / machines should have rights from a moral perspective, or whether we could theoretically transfer our consciousness into a sufficiently sophisticated computer to achieve immortality.

          For me personally, the missing answer to this question is why I call myself an agnostic, but not an atheist. As long as science has no suggestion *at all* what consciousness means, I cannot take it as fact that no superior consciousness exists. Maybe consciousness is an emergent feature of any complex system? Would that mean "earth" has a consciousness as well? Or the universe? I'm not saying I believe that to be the case, just that it could be a hypothesis I wouldn't discard immediately...

          --
          Registered IRC nick on chat.soylentnews.org: qkontinuum
          • (Score: 3, Insightful) by acid andy on Friday January 08 2016, @01:15PM

            by acid andy (1683) on Friday January 08 2016, @01:15PM (#286568) Homepage Journal

            For me personally, the missing answer to this question is why I call myself an agnostic, but not an atheist. As long as science has no suggestion *at all*, what consciousness means, I can not take it as fact that no superior consciousness exists.

            I broadly agree with your points. I agree with you that we cannot prove that no superior consciousness exists. I think we probably can't prove it does exist either, with any of the information currently available. Never mind superior, it's a struggle to prove that human consciousness exists. We just have to accept our own individual consciousness sort of axiomatically.

            Maybe consciousness is an emerging feature of any complex system?

            The philosopher David Chalmers would say yes to this. I like the idea myself as well.

            Would that mean, "earth" has a consciousness as well? Or the universe? I'm not saying I believe that to be the case, just that it could be a hypothesis I wouldn't discard immediately...

            Intuitively, to me, consciousness feels like a sort of restrictive bubble of experience. A single point of view at an instant in space and time. It feels like, in one given instant, I am me and I have access only to the memories of this one particular brain, at this one particular moment. A nanosecond ago I might not have been conscious, or I might have experienced another brain's thoughts and sensations. The important thing is that I am conscious of this one brain's state precisely due to restrictions in connectivity. I have access to information stored in various parts of this brain because there are physical connections between those parts. It's difficult to imagine a planet or the universe having the same kind of vivid conscious experience because the parts of it seem less directly connected - but maybe I'm looking for the connections in the wrong way.

            --
            Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
          • (Score: 2) by Non Sequor on Friday January 08 2016, @01:28PM

            by Non Sequor (1005) on Friday January 08 2016, @01:28PM (#286574) Journal

            It kind of leads you back to dualism which is generally seen (for a variety of good reasons) as a kind of embarrassing pre-modern philosophy. Material/informational dualism? That starts leading you towards something not terribly different from middle ages theology, which is interesting. See https://en.wikipedia.org/wiki/Thomas_Aquinas#Thoughts_on_afterlife_and_resurrection [wikipedia.org]

            --
            Write your congressman. Tell him he sucks.
            • (Score: 2) by acid andy on Friday January 08 2016, @01:58PM

              by acid andy (1683) on Friday January 08 2016, @01:58PM (#286584) Homepage Journal

              It's only embarrassing if you're a hardcore physicalist or materialist. The trouble is physics itself is founded on metaphysics. If physical laws themselves exist, then the laws are metaphysical rather than physical. Perhaps the physicalists could one day be persuaded to consider consciousness as another fundamental physical entity, on the same level as matter and energy are now. The reason physicalists shy away from this sort of thing now and it's seen as unfashionable or embarrassing is because it's something that seems impossible (or at least extremely hard) to draw any testable conclusions from. Consciousness seems to be a sort of axiomatic thing, that doesn't need to be there to explain any other physical phenomenon. It only needs to be part of an understanding of reality if you seek a theorem to describe your own private subjective first person experience.

              --
              Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
              • (Score: 2) by FatPhil on Friday January 08 2016, @02:43PM

                by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Friday January 08 2016, @02:43PM (#286610) Homepage
                > The trouble is physics itself is founded on metaphysics. If physical laws themselves exist, then the laws are metaphysical rather than physical.

                That's not how I view metaphysics. I view the actual laws of physics as being as much physical as the axioms of mathematics are mathematical. The approximations to the actual laws of physics (which can always be reduced to just one law, through trivial means) that we study, represent, and test, and our agreement on how we study, represent, and test reality against them - that's the metaphysics.
                --
                Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
              • (Score: 4, Insightful) by Non Sequor on Friday January 08 2016, @05:18PM

                by Non Sequor (1005) on Friday January 08 2016, @05:18PM (#286704) Journal

                Magnetic field polarization patterns on a metal disc=data
                Electrical signals read by hardware=data
                Bytes in JPEG format=data
                Signal to monitor=data
                Pattern of transmitted photons=data
                Stimulus pattern in eye=data
                Optic nerve signal=data
                Analysis of signal in terms of image feature detectors=data
                Association of image features to recognizable subjects=data
                My private reality of viewing an image=magic

                If you're a dualist, the best you can say is that the last step in that sequence is really weird. However, historically, trying to reason about the world based on that weirdness has been a losing bet.

                The hardcore materialists by many accounts seem to be backing the winning horse, but it's somewhat intellectually dishonest to not own up to the fact that the last step in the sequence doesn't seem the slightest bit less weird with all of the intermediate details filled in.

                So you have two camps, one that tries to pay attention to all of the details, but tends to screw things up when it tries to define its argument, and another that better defines its argument, but basically tries to change the subject when it comes to this one little problem.

                Consciousness being something not needed to explain the physical universe but being wedded to it seems like an out, but if that's the case, how do I think that consciousness is weird? How do I observe its weirdness?

                (If I posit a mechanism, I'll probably be a crackpot.)

                --
                Write your congressman. Tell him he sucks.
                • (Score: 2) by acid andy on Friday January 08 2016, @05:51PM

                  by acid andy (1683) on Friday January 08 2016, @05:51PM (#286727) Homepage Journal

                  Well said. People like Dennett use words like "magic" to show disdain for the attitudes of the dualists. Really though that word is just a shorthand to say that this is an extraordinary and significant enough phenomenon that it deserves further research or investigation. Of course, such research can be intensely frustrating due to thousands of years of little to no progress.

                  The hardcore materialists by many accounts seem to be backing the winning horse, but it's somewhat intellectually dishonest to not own up to the fact that the last step in the sequence doesn't seem the slightest bit less weird with all of the intermediate details filled in.

                  Yes, absolutely. If the purest physicalist honestly does not believe that there is any important difference between the first person and third person experience, then why do they perform any selfish actions at all in their lives? Is everything they do purely out of altruism? If, however, they do believe there is an important difference but are simply afraid to admit it, then yes, that's intellectually dishonest.

                  --
                  Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
            • (Score: 2) by q.kontinuum on Friday January 08 2016, @03:38PM

              by q.kontinuum (532) on Friday January 08 2016, @03:38PM (#286647) Journal

              My hypothesis (that consciousness might be an emergent feature of complex systems) was the exact opposite of dualism, because it ties consciousness to the existing complex structure. I agree that the question is quite old, but that was actually my point: neither philosophy nor the other sciences have made any progress in answering it, but progress in artificial intelligence makes it IMO far more practically relevant to finally get an answer.

              --
              Registered IRC nick on chat.soylentnews.org: qkontinuum
              • (Score: 2) by Non Sequor on Friday January 08 2016, @07:34PM

                by Non Sequor (1005) on Friday January 08 2016, @07:34PM (#286834) Journal

                Even once we have non-human examples of consciousness that we can converse with, I'm not sure that we will be any less confused. I could say that "emergent phenomenon of complex systems" is just dualism with extra steps.

                The AI answer may be further off anyway. We don't know what fidelity of reproduction is needed for artificial neurons to reconstitute the whole brain. We know that the brain is robust against perturbations of individual neurons, but if all of the neurons are slightly out of tune, the AI may effectively be drunk. Look how much work has had to go into micro-tuning for VR. Small issues with the visual input being out of tune with the brain's current expectations cause dysfunction. To simulate the brain, you may need to overshoot its computational complexity and use the slack for tweaks to get the simulation consistent with the reference.

                --
                Write your congressman. Tell him he sucks.
      • (Score: 1) by dak664 on Friday January 08 2016, @04:16PM

        by dak664 (2433) on Friday January 08 2016, @04:16PM (#286671)

        It would have to interact with the environment in exactly the same way, depleting the same number of neurotransmitters and ions, converting the same amount of ATP, producing the same waste products, with identical local heating. A large external machine might be able to do such emulation, but for your continuum of consciousness to work you'd basically have to replace each neuron with an identical clone. Thought is the collective property of a 3D volume of chemical species and electrical potentials, the neurons providing a configurable "skeleton" that steers the volume towards its gestalt.

        That's my take, anyway.

        • (Score: 2) by q.kontinuum on Friday January 08 2016, @04:27PM

          by q.kontinuum (532) on Friday January 08 2016, @04:27PM (#286680) Journal

          I agree this is hardly a practical proposal. I believe most people would agree that if it were possible to actually perform it, the expectation would be that the behaviour of the resulting cyborg would be indistinguishable from the original person. If anyone disagrees, that would open new discussions; if all agree, we can discuss the consequences. It was more of a thought experiment / philosophical question.

          --
          Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 3, Insightful) by gman003 on Friday January 08 2016, @03:21PM

      by gman003 (4155) on Friday January 08 2016, @03:21PM (#286636)

      I'm afraid to die > I have a soul that can't die > my soul is the seat of my intellect and thus my thoughts > machines are man made and don't have souls > machines can't think. QED. This reasoning is acceptable to literally billions of people and it is also unfalsifiable.

      What do you mean, unfalsifiable? There's an easy experiment to disprove that.

      If the non-physical soul is the seat of consciousness, then damage to physical organs should not impair it. So let's go find someone, stick a whisk in their brain and turn their prefrontal cortex into pudding. If they show any impairment in thought or consciousness, then obviously the soul is not involved with at least those parts of intelligence.

      Fortunately, we don't have to actually do that. Instead we can just look at accidents involving head injuries... and we find that severe enough brain damage is enough to leave you permanently unconscious. QED souls are not the seat of conscious thought.

      Now, people probably won't accept that evidence, but let's not give them the unearned dignity of calling it "unfalsifiable".

      • (Score: 2) by julian on Friday January 08 2016, @08:21PM

        by julian (6003) on Friday January 08 2016, @08:21PM (#286884)

        The way they usually get around that is by saying that the body and brain are like a computer monitor, while the soul is the computer. If you destroy the monitor you can no longer meaningfully interact with the computer, but it still exists and functions normally. They just think this "soul" exists in another, inaccessible dimension, unlike the computer, which is in the same physical reality as the monitor.

    • (Score: 0) by Anonymous Coward on Friday January 08 2016, @09:45PM

      by Anonymous Coward on Friday January 08 2016, @09:45PM (#286937)

      This:

      People are going to be reluctant to ever admit a machine is thinking

      A significant proportion of society still doesn't even believe in climate change; people believe what is convenient to them.

  • (Score: 3, Interesting) by moo kuh on Friday January 08 2016, @10:04AM

    by moo kuh (2044) on Friday January 08 2016, @10:04AM (#286535) Journal

    There is some interesting neurology research (I am not a neurologist) being done on how the brain stores and uses information:

    https://www.google.com/search?q=chunking+neurology&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US&client=palemoon [google.com]
    http://neurosciencenews.com/motor-chunking-sensorimotor-putaman-frontoparietal-cortex/ [neurosciencenews.com]

    I would say yes, machines can "think". I'll speculate that human thinking is nothing more than the brain running the biological equivalent of pattern matching algorithms on information it has stored. I think thinking boils down to either re-arranging how information is stored and/or searching through it with pattern matching algorithms. Let's say a human is trying to answer a question. When an answer is not found, the decision tree could choose the closest match if it is good enough. If the match isn't close enough, it could decide to admit defeat, seek more information, or refactor its database and try again. The decision of whether to accept the best match could be weighted by the importance of the question, the information available, and the time available; so could the decision about what to do when the best match is rejected. Those two decision trees sound a lot like a greedy algorithm. Our brain's way of searching its information could be nothing more than meat-space breadth-first or depth-first graph searches.

    If human thinking is nothing more than running the meat space version of regular expressions or SQL queries, storing new information, and refactoring, then there is no reason a non-biological machine can't do it. Our brains just have really good compression, database refactoring, and pattern matching algorithms.
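    As a toy sketch of that "accept the closest match if it is good enough" loop (the `similarity` stand-in, the tiny `memory` list, and the 0.6 threshold are all invented for illustration here, not a model of real neural machinery):

```python
from difflib import SequenceMatcher

def similarity(question, key):
    # Crude stand-in for the brain's pattern matcher.
    return SequenceMatcher(None, question, key).ratio()

def answer(question, memory, threshold=0.6):
    """Greedy retrieval: accept the closest stored fact if it is
    "good enough", otherwise admit defeat and return None."""
    key, value = max(memory, key=lambda fact: similarity(question, fact[0]))
    if similarity(question, key) >= threshold:
        return value
    return None

memory = [("capital of france", "Paris"),
          ("capital of spain", "Madrid")]
print(answer("what is the capital of france?", memory))  # close enough -> Paris
print(answer("how do birds fly?", memory))               # no good match -> None
```

    A real system would presumably also do the "seek more information or refactor and retry" branch; this only shows the accept-or-give-up decision.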

    I admit I have speculated a lot, but hopefully some of you will find my thoughts interesting.

    • (Score: 1, Interesting) by Anonymous Coward on Friday January 08 2016, @03:35PM

      by Anonymous Coward on Friday January 08 2016, @03:35PM (#286645)

      All my thinking to this point leads me to conclude that the human brain is a "Story Engine" ... where the minimal story is the metaphor. Pattern matching is done by comparing stories. Outcomes are predicted based on outcomes of known stories.

      As far as I can determine, no one is using this realization in the AI field. Computationally intensive comparison of individual facts seems to be the direction of development. I'm curious about what a Story Engine approach to AI might produce.

      • (Score: 2) by SlimmPickens on Friday January 08 2016, @10:07PM

        by SlimmPickens (1056) on Friday January 08 2016, @10:07PM (#286953)

        Jeff Hawkins has something like this he calls Hierarchical Temporal Memory. He has a great book called On Intelligence.

    • (Score: 2) by FatPhil on Friday January 08 2016, @03:54PM

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Friday January 08 2016, @03:54PM (#286656) Homepage
      I'm pretty sure human thinking has plenty of Monte Carlo modelling going on too. We can't search all of the solution space, so we just send out some probes and see which of the outcomes we prefer. Recent (last 10 years) advances in computer Go show that Monte Carlo search is surprisingly powerful; many dans have been toppled in the last decade because of it and its variants.
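      For a toy version of the "send out some probes" idea, here is flat Monte Carlo move selection for the take-away game of Nim (take 1-3 stones; whoever takes the last stone wins). This is a minimal sketch, far simpler than the tree-search variants actually used in computer Go:

```python
import random

def playout(pile, my_turn):
    """Finish the game with uniformly random moves (the "probes").
    Whoever takes the last stone wins. Returns True if "we" won."""
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return my_turn        # the player who just moved took the last stone
        my_turn = not my_turn

def best_move(pile, rollouts=2000):
    # Flat Monte Carlo: score each legal move by random playouts
    # from the position it leaves, then pick the highest win rate.
    scores = {}
    for move in range(1, min(3, pile) + 1):
        if pile - move == 0:
            scores[move] = 1.0    # taking the last stone wins outright
        else:
            wins = sum(playout(pile - move, my_turn=False)
                       for _ in range(rollouts))
            scores[move] = wins / rollouts
    return max(scores, key=scores.get)

random.seed(1)
print(best_move(5))
```

      With enough rollouts the random probes reliably favour taking 1 from a pile of 5, which leaves the opponent facing the losing pile of 4, even though no playout ever plays optimally.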

      And yes, that means I too think that machines will eventually think. We've had less than a century working seriously on the problem, and technology shows no signs of running out of new paths for potential improvement. I'm perfectly prepared to not see anything in my lifetime, but I'm sure in a few more centuries cyborgs will be laughing at how naive we 20th century meat machines were.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by aristarchus on Friday January 08 2016, @11:57PM

      by aristarchus (2645) on Friday January 08 2016, @11:57PM (#287006) Journal

      I'll speculate that human thinking is nothing more than the brain running the biological equivalent of pattern matching algorithms on information it has stored.

      Puny human! We superior thinking things find your thoughtless speculations quite amusing! Meat that thinks! Ha! https://www.youtube.com/watch?v=7tScAyNaRdQ [youtube.com] They're Made of Meat

  • (Score: 1) by bart on Friday January 08 2016, @10:19AM

    by bart (2844) on Friday January 08 2016, @10:19AM (#286538)
    There's a pretty good SF book on this topic: The Two Faces of Tomorrow by James Patrick Hogan [amazon.com].
    This book explores the evolution of smart systems, where the systems start to create solutions to human orders that no one had foreseen. In order to see where this leads, they set up an experiment on a large satellite, where they deliberately try to provoke the computer systems to see how far they can get.
    The end is quite exciting :-)
  • (Score: 0) by Anonymous Coward on Friday January 08 2016, @12:50PM

    by Anonymous Coward on Friday January 08 2016, @12:50PM (#286560)

    If mind is generated by the brain, yes. If mind is not generated by the brain, and is external to it, then a machine built in the likeness of a human brain would surely attract such an external mind to live in it, I think. (lol)

    I tend to think consciousness is an emergent property, if not simply what a converged neural architecture feels like to YOU when it's implementing YOU...?

    There was this dude, Manuel DeLanda or something, who wrote this book, War in the Age of Intelligent Machines... lemme find a link

    http://monoskop.org/images/c/c0/DeLanda_Manuel_War_in_the_Age_of_Intelligent_Machines.pdf [monoskop.org] which is a discourse analysis of war machinery over the ages. There's a summary of it on Wikipedia.

    "He draws on chaos theory to show how the biosphere reaches singularities (or bifurcations) which mark self-organization thresholds where emergent properties are displayed and claims that the "mecanosphere", constituted by the machinic phylum, possesses similar qualities. "

    So some people argue that thinking machines already exist; it's just that they use humans for the components they don't yet have. That is how I understood that book, anyway...

  • (Score: 3, Interesting) by acid andy on Friday January 08 2016, @12:53PM

    by acid andy (1683) on Friday January 08 2016, @12:53PM (#286561) Homepage Journal

    There are two separate questions here. The first is whether the machine can be shown to "think" by an objective third person scientific analysis of its behavior (which may include analysis of its internal state). I'd argue this kind of thinking has probably already been achieved to some degree with artificial neural networks, but taking this question to its strictest level would be asking whether you could build an artificial android whose behavior is indistinguishable from a human's.

    The second, harder question, relates to the quote from Nagel. This is the so called "Hard Problem of Consciousness". The question of whether some entity (I'll avoid using a word like "soul" because I'll be dismissed without a second thought and likely start a flame war) could have a direct, subjective first person experience of the machine's internal state. This question is considered by many to be impossible to answer through any objective, logical analysis, even for humans, let alone artificially created machines.

    --
    Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
  • (Score: 2, Insightful) by Anonymous Coward on Friday January 08 2016, @02:04PM

    by Anonymous Coward on Friday January 08 2016, @02:04PM (#286587)
    The question of whether machines can think is about as relevant as the question of whether submarines can swim.
  • (Score: 3, Informative) by Covalent on Friday January 08 2016, @02:27PM

    by Covalent (43) on Friday January 08 2016, @02:27PM (#286598) Journal

    Yes.

    How can I be so sure? It will be possible to perfectly simulate a human brain using artificial neurons, and probably in my lifetime. We're already able to do this with smaller, less complex brains: http://www.artificialbrains.com/openworm [artificialbrains.com]

    If we can completely simulate a human brain, then how can we call what it does anything but "thinking"? At that point the philosophers and pedants can argue until the cows come home, but those arguments will be irrelevant. Thinking is like pornography: I know it when I see it.

    And when that simulated human brain writes a symphony or a poem, or falls in love, or forgets where it put its car keys, then it will be thinking.

    --
    You can't rationally argue somebody out of a position they didn't rationally get into.
    • (Score: 2) by Justin Case on Saturday January 09 2016, @06:15PM

      by Justin Case (4239) on Saturday January 09 2016, @06:15PM (#287335) Journal

      Thinking is like pornography: I know it when I see it.

      Ahhh, sooo close to the answer...

      When an electronic device says "Thinking is like pornography: I know it when I see it, and oh by the way, porn makes me horny" then you'll have a machine that can at least mimic human consciousness.

      And no, the electronic device you're presently using to read "Thinking is like pornography..." doesn't count. It's just doing what it's told.

  • (Score: 2) by wisnoskij on Friday January 08 2016, @03:27PM

    by wisnoskij (5149) <{jonathonwisnoski} {at} {gmail.com}> on Friday January 08 2016, @03:27PM (#286640)

    We already know that our thoughts, a.k.a. the voice in your head, are in control of nothing. We have no idea what thought is used for, other than that it is not directly in control of anything and that all your decisions are made before your thoughts come up with the reasoning behind them. So if all decisions are made subconsciously, why would we build a machine based around the illusion that is human conscious thought?

  • (Score: 0) by Anonymous Coward on Friday January 08 2016, @03:44PM

    by Anonymous Coward on Friday January 08 2016, @03:44PM (#286652)

    Same AC here again. Nagel's statement should perhaps be amended to "There is something that it FEELS like to be a bat" ... all conscious experience to me seems indistinguishable from feeling. Vision is a feeling. Hearing is a feeling. Touch is a feeling. Speech is a feeling. Proprioception is a feeling.

    These things aren't simply states of existence, as in "There is something that it IS like".

    • (Score: 2) by acid andy on Friday January 08 2016, @05:22PM

      by acid andy (1683) on Friday January 08 2016, @05:22PM (#286707) Homepage Journal

      I think I see what you're getting at. I prefer Nagel's original words personally because the word "feeling" is very open to being misinterpreted. People will start to associate "feeling" with simple physical processes like nerve impulses and the chemistry that goes on while emotions occur. Analysis of those things usually excludes the direct, subjective experience of the feeling that consciousness seems to entail. You can only directly experience your own pleasure or pain or experience of colour (these fundamental conscious experiences are often called "qualia", by the way). An analysis of all the electrical impulses and chemical processes going on in another individual's brain during such an experience would seem to miss out something of that experience. That's what Nagel's getting at.

      I personally think it might be theoretically possible (but probably isn't) to objectively model how someone does experience the colour red or the sensation of heat, if we could somehow crack the code of the entire internal representation of it in their neurons and synapses. For me, that still leaves consciousness itself unexplained because consciousness is the person's first person point of view.

      --
      Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
  • (Score: 0) by Anonymous Coward on Friday January 08 2016, @04:00PM

    by Anonymous Coward on Friday January 08 2016, @04:00PM (#286660)

    maybe the computer/machine knows everything already; you just have to ask it the right question.
    A thinking computer would be a consciousness that can anticipate our next question ^_^
    and hopefully (not) answer with "42".

  • (Score: 0) by Anonymous Coward on Friday January 08 2016, @04:15PM

    by Anonymous Coward on Friday January 08 2016, @04:15PM (#286670)

    If thinking = making decisions, then yes, machines can think, and so can animals, even unicellular ones.

    If thinking = consciousness, then we can't even prove to other humans that we are conscious. But our consciousness is the only thing we can be 100% sure of while we experience it; everything else could be an illusion, a virtual world.

    See: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness [wikipedia.org]
    https://en.wikipedia.org/wiki/Philosophical_zombie [wikipedia.org]

    I have faith that I'm nothing special, and so many/most[1] other humans experience consciousness, and even many animals do. But how can one even prove such a thing?

    [1] I have seen some replies/responses on this subject by entities that seem to indicate a possibility that those entities don't actually experience the consciousness phenomenon, assuming they truly understand what the topic is about and are responding honestly. I'm not 100% sure whether those entities are human, since they are replying on forums, but it seems likely they are humans and not AIs.

    • (Score: 2) by acid andy on Friday January 08 2016, @06:05PM

      by acid andy (1683) on Friday January 08 2016, @06:05PM (#286743) Homepage Journal

      I have seen some replies/responses on this subject by entities that seem to indicate a possibility that those entities don't actually experience the consciousness phenomenon, assuming they truly understand what the topic is about and are responding honestly. I'm not 100% sure whether those entities are human, since they are replying on forums, but it seems likely they are humans and not AIs.

      I know exactly what you mean! The difficult counter-intuitive bit though is to remember that we can theoretically imagine an AI (or zombie if you like) that internally runs through all the same narrative and reasoning as we do when we ponder what it is like to be conscious and claims that it is 100% sure that it is conscious, but isn't. Of course you realise this, because you admit no-one can prove it to anyone else. It's very odd though that the belief in one's own consciousness seems to be kicking off these narratives when at the same time we know that consciousness doesn't necessarily have to exist for them to happen!

      I feel so lucky because I seem to have scored a front row seat in this particular human's brain (at least for the time being). Science tells me that this human's brain would have got along just fine thinking and feeling and remembering and acting even if I wasn't there. Yet here I remain. It's all very odd. We've hit the jackpot on this planet being human too. Colour vision, opposable thumbs.

      --
      Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
  • (Score: 0) by Anonymous Coward on Friday January 08 2016, @04:37PM

    by Anonymous Coward on Friday January 08 2016, @04:37PM (#286686)

    If a machine becomes self aware, then it's thinking. Otherwise it's only processing data.

  • (Score: 2) by srobert on Friday January 08 2016, @08:01PM

    by srobert (4803) on Friday January 08 2016, @08:01PM (#286865)

    About 35 years ago there was a popular book of philosophical speculation about this and other issues: Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas Hofstadter. It occurs to me that some young people here will never have heard of it.

  • (Score: 2) by darkfeline on Friday January 08 2016, @10:00PM

    by darkfeline (1030) on Friday January 08 2016, @10:00PM (#286949) Homepage

    There's an "unfair" bias, because when we ask "can machines think?", we're really asking whether machines can think like humans do, and preferably like how I think in particular.

    Take for example,

    >a human might ask "Are you conscious?", to which it might reply, "How would I know?".

    Why would the machine speak in English? If we were to find a previously unknown, isolated colony of humans, would we expect them to speak English, or any previously known language, for that matter? Would we expect them to think and hold the same cultural values as previously known cultures?

    --
    Join the SDF Public Access UNIX System today!
  • (Score: 2) by HiThere on Friday January 08 2016, @10:28PM

    by HiThere (866) on Friday January 08 2016, @10:28PM (#286966) Journal

    1) You have an experience.
    2) You notice that you had the experience.
    3) You notice that you noticed that you had the experience.

    The first step is to have a mind.
    The second step is to have awareness.
    The third step is consciousness. It can be extremely rudimentary.

    N.B.: I will grant that my use of the terms is idiosyncratic, but so is everyone else's. There is no common language for things that cannot be pointed at. When things are difficult to point at, lots of "you know what I mean" terms evolve that don't really have consensual definitions. Thoughts, minds, consciousness, etc. have been such terms, and are just starting to become pointable at as computer science evolves. (Please note, I haven't tried to define intelligence. That's one where there are multiple distinct schools about what it means that give firm answers which disagree with each other.)
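    The three-step ladder above can be sketched as nested records: a raw "experience", a record of it ("awareness"), and a record of that record ("consciousness"). This is only a toy illustration of the nesting, under the commenter's own idiosyncratic terms, not a theory of mind:

    ```python
    # Toy sketch of the three steps: each level wraps the one below it.

    def have_experience(stimulus):
        return {"experience": stimulus}      # step 1: a mind has an experience

    def notice(experience):
        return {"noticed": experience}       # step 2: awareness of the experience

    def notice_noticing(awareness):
        return {"noticed": awareness}        # step 3: (rudimentary) consciousness

    state = notice_noticing(notice(have_experience("red")))
    print(state)  # {'noticed': {'noticed': {'experience': 'red'}}}
    ```

    Applying `notice` again and again gives the arbitrarily deep recursion discussed further down the thread; nothing stops the nesting except, as it were, stack depth.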

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by aristarchus on Saturday January 09 2016, @12:04AM

      by aristarchus (2645) on Saturday January 09 2016, @12:04AM (#287010) Journal

      1) You have an experience.
      2) You notice that you had the experience.
      3) You notice that you noticed that you had the experience.

      4) You notice that you noticed that you noticed that you had the experience.
      5) Rinse, Apply, Lather, Rinse, Repeat.

      You see, immediate regressus ad infinitum! That way lies Madness! Or at least a Husserlian Phenomenological Reduction, but those never really end well, in any case.

      • (Score: 2) by HiThere on Saturday January 09 2016, @01:44AM

        by HiThere (866) on Saturday January 09 2016, @01:44AM (#287071) Journal

        Well, consciousness is arbitrarily recursive, limited only by your "stack depth". But you don't get it until step 3.

        N.B.: You can also explain the same thing in terms of direct experience, model of experience, model of the model of the experience, etc. It means the same thing, but I don't think it's as clear.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 1) by o_o on Sunday January 10 2016, @07:39PM

    by o_o (1544) on Sunday January 10 2016, @07:39PM (#287731)

    God Knows, Man Thinks. So thinking can be viewed as a process that leads to knowing.