SoylentNews is people

posted by n1 on Friday January 08 2016, @08:54AM   Printer-friendly
from the something-to-think-about dept.

The idea of a thinking machine is an amazing one. It would be like humans creating artificial life, only more impressive because we would be creating consciousness. Or would we? It's tempting to think that a machine that could think would think like us. But a bit of reflection shows that's not an inevitable conclusion.

To begin with, we'd better be clear about what we mean by "think". A comparison with human thinking might be intuitive, but what about animal thinking? Does a chimpanzee think? Does a crow? Does an octopus?

The philosopher Thomas Nagel said that there was "something that it is like" to have conscious experiences. There's something that it is like to see the colour red, or to go water skiing. We are more than just our brain states.

Could there ever be "something that it's like" to be a thinking machine? In an imagined conversation with the first intelligent machine, a human might ask "Are you conscious?", to which it might reply, "How would I know?".

http://theconversation.com/what-does-it-mean-to-think-and-could-a-machine-ever-do-it-51316

[Related Video]: They're Made Out of Meat


Original Submission

 
  • (Score: 3, Interesting) by acid andy on Friday January 08 2016, @12:53PM

    by acid andy (1683) on Friday January 08 2016, @12:53PM (#286561) Homepage Journal

    There are two separate questions here. The first is whether the machine can be shown to "think" by an objective, third-person, scientific analysis of its behavior (which may include analysis of its internal state). I'd argue this kind of thinking has probably already been achieved to some degree with artificial neural networks, but the strictest form of the question is whether you could build an android whose behavior is indistinguishable from a human's.
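    The "thinking" attributed above to artificial neural networks is, at bottom, learned input-output behavior. As a purely illustrative sketch (not anything from the article or the comment; network size, learning rate, and iteration count are arbitrary choices), here is a tiny two-layer network trained by backpropagation to reproduce XOR, the classic function a single neuron cannot represent:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR truth table: inputs and target outputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialized weights for a 2-4-1 network with sigmoid units.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(X):
        h = sigmoid(X @ W1 + b1)          # hidden layer
        return h, sigmoid(h @ W2 + b2)    # output layer

    _, out = forward(X)
    loss_before = np.mean((out - y) ** 2)

    lr = 1.0
    for _ in range(10000):
        h, out = forward(X)
        # Backpropagate the mean-squared-error gradient.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    _, out = forward(X)
    loss_after = np.mean((out - y) ** 2)
    print(f"loss: {loss_before:.3f} -> {loss_after:.3f}")
    ```

    Whatever one makes of the Hard Problem, nothing in this loop is mysterious: it is gradient descent on a fixed error function, which is exactly why behavioral success alone settles only the first question.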

    The second, harder question relates to the quote from Nagel. This is the so-called "Hard Problem of Consciousness": whether some entity (I'll avoid a word like "soul", because I'd be dismissed without a second thought and likely start a flame war) could have a direct, subjective, first-person experience of the machine's internal state. Many consider this question impossible to answer through any objective, logical analysis, even for humans, let alone artificially created machines.

    --
    Welcome to Edgeways. Words should apply in advance as spaces are highly limite—