
SoylentNews is people

posted by martyb on Wednesday May 22 2019, @12:21PM   Printer-friendly
from the Marvin dept.

No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by HiThere (866) Subscriber Badge on Wednesday May 22 2019, @05:02PM (#846304) Journal

    Mammalian emotions are hormone-driven, but that's not a definition, since so are, e.g., blood pressure, the need to urinate, etc.

    I would define emotions as a technique for handling conflicting goal states. In that case, once you get beyond simple classifier systems you *will* have emotional conflicts which need to be resolved. This is neither good nor bad, but simply a design requirement. It only becomes a problem if the way of resolving those conflicts is a problem.

    That said, don't expect robot emotional conflicts to be similar to human emotional conflicts. There are a lot of very basic design decisions that are going to be different, and those are what the conflict and resolution mechanisms are based on. But you can minimize the conflicts if you minimize the conflicts between goal states. One way to do this is to give them a clear ranking, but Maslow's hierarchy of needs provides some excellent reasons why that is only going to have limited success. And Buridan's Ass informs us that sometimes an arbitrary choice will be necessary.
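    The ranking-plus-arbitrary-choice idea above can be sketched in a few lines. This is a purely illustrative toy (all names are hypothetical, not from any real robotics framework): goals carry priorities, the highest priority wins, and equal priorities are settled by a random pick so the agent never stalls like Buridan's Ass between two equally attractive options.

    ```python
    import random

    def resolve_conflict(goals):
        """Pick one goal from a list of (name, priority) pairs.

        Highest priority wins; ties are broken by an arbitrary
        (random) choice so the agent is never deadlocked.
        """
        if not goals:
            return None
        top = max(priority for _, priority in goals)
        candidates = [name for name, priority in goals if priority == top]
        return random.choice(candidates)

    # "recharge" outranks both movement goals, so it is always chosen.
    print(resolve_conflict([("explore", 1), ("recharge", 3), ("return_home", 1)]))
    # → recharge
    ```

    As the comment notes, a fixed ranking like this only goes so far: real priorities shift with internal state (a low battery should promote "recharge" dynamically), which is exactly where something emotion-like starts to look useful.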

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.