
posted by martyb on Wednesday May 22 2019, @12:21PM   Printer-friendly
from the Marvin dept.

No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT


Original Submission

 
  • (Score: 3, Interesting) by AthanasiusKircher on Wednesday May 22 2019, @04:37PM (1 child)

    by AthanasiusKircher (5291) on Wednesday May 22 2019, @04:37PM (#846290) Journal

    Wouldn't mental illness, feelings, or psychosis be a form of data corruption, since that was not a programmed behavior?

    What does "programmed behavior" mean in relation to a modern AI algorithm, though? Most AI these days depends on huge statistical models (like neural nets, often stacked in multiple layers) trained on datasets. What comes out is essentially an algorithm driven by enormous tables of learned weights, with no clear interpretation of the individual values that actually run the algorithm as it evolves.

    This is often the problem with such AI algorithms, too -- you don't know how they will react to novel situations (or even situations moderately tweaked from those they've encountered before), because you can't actually deconstruct their behavior. So it seems perfectly possible for bad "behavior" or a sort of "mental illness" to creep into even a rather simple algorithm of this type. There's no "programmed behavior" for a machine that can TRULY learn and adapt well to novel situations.
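    As a rough illustration of that opacity (not anything from the comment itself -- just a toy NumPy sketch with made-up sizes and training settings): a tiny two-layer network trained on sin(x) over a narrow input range ends up as nothing but arrays of weights, and its output far outside the training range is whatever those weights happen to extrapolate to.

        import numpy as np

        rng = np.random.default_rng(0)

        # Training data confined to a narrow range: learn y = sin(x) for x in [0, 3].
        X = rng.uniform(0.0, 3.0, size=(200, 1))
        Y = np.sin(X)

        # A tiny MLP (1 -> 16 -> 1) with a tanh hidden layer, trained by plain
        # full-batch gradient descent on mean squared error.
        W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
        W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
        lr = 0.05

        for _ in range(5000):
            H = np.tanh(X @ W1 + b1)              # hidden activations
            P = H @ W2 + b2                       # predictions
            E = P - Y                             # error
            gW2 = H.T @ E / len(X);  gb2 = E.mean(axis=0)
            dH = (E @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
            gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2

        # The learned "program" is just these numbers; nothing in them says "sine".
        print("learned W1:", np.round(W1, 2))

        def predict(x):
            h = np.tanh(np.array([[x]]) @ W1 + b1)
            return (h @ W2 + b2).item()

        print("inside the training range,  f(1.5)  =", predict(1.5))   # roughly sin(1.5)
        print("outside the training range, f(20.0) =", predict(20.0))  # whatever the weights extrapolate to

    Printing W1 yields a wall of numbers with no per-value meaning; the prediction at x = 1.5 lands roughly on sin(1.5), while the prediction at x = 20 is essentially arbitrary.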

  • (Score: 2) by hemocyanin on Wednesday May 22 2019, @05:29PM

    by hemocyanin (186) on Wednesday May 22 2019, @05:29PM (#846321) Journal

    I recently watched the Netflix documentary about AlphaGo -- the AI that recently beat a top-ranked, world-class Go player. The show is called "AlphaGo".

    It was really intriguing to see how the players' perspectives (as well as the reactions of the live commentators) toward the AI changed as play progressed. It is hard for me to pin down exactly what I was observing, but there was something really deep going on.

    I don't think watching AlphaGo answers the question posed in this thread, but it is perhaps a bit of information that could be useful to throw into one's thinking about the topic.