
SoylentNews is people

posted by martyb on Wednesday May 22 2019, @12:21PM
from the Marvin dept.

No one yet knows how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? And what about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT


Original Submission

 
  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:32PM (#846186) (2 children)

    And what makes you think the AI will let you unplug it?

  • (Score: 2) by choose another one (515) Subscriber Badge on Wednesday May 22 2019, @03:52PM (#846268) (1 child)

    Nuke it from orbit, only way to be sure.

    • (Score: 0) by Anonymous Coward on Thursday May 23 2019, @10:12AM (#846582)

      Well, unless the AI actually controls the nukes.