No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?
Nervously awaiting learned opinions,
VT
(Score: 2, Insightful) by Anonymous Coward on Wednesday May 22 2019, @06:26PM
Nobody can predict all reasonably expectable inputs for an autonomous vehicle. Proving its outputs correct isn't even a question.
Unless you have a planet-sized supercomputer hiding in your tesseract. You aren't holding out on us, are you?
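To put a number on the parent's point, here's a rough back-of-the-envelope sketch (the 16x16 binary "camera" is an assumption chosen purely for illustration; real sensor suites are vastly larger):

```python
# Count the distinct inputs of an absurdly simplified sensor:
# a single 16x16 image where each pixel is just on or off.
pixels = 16 * 16
frame_states = 2 ** pixels  # every pixel doubles the input space

# 2**256 has 78 decimal digits -- roughly the number of atoms
# in the observable universe (~10**80), for ONE toy frame.
print(frame_states)
print(len(str(frame_states)))  # 78
```

Exhaustively testing even this toy input space is hopeless, never mind real-valued lidar, radar, and video streams over time, which is why verification efforts focus on bounded properties rather than total correctness.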