No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness, along with the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?
Nervously awaiting learned opinions,
VT
(Score: 3, Interesting) by JoeMerchant on Wednesday May 22 2019, @04:33PM
Oh, clearly there's the high likelihood that whales _might_ be able to speak English, but just don't care to attempt it, because, from their perspective, we're so obviously not worth the effort.