No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?
Nervously awaiting learned opinions,
VT
(Score: 2, Disagree) by The Mighty Buzzard on Wednesday May 22 2019, @04:21PM (6 children)
Incorrect. Code absolutely can be bug free. Code is math, and math can be proven to be without flaw. We just normally don't bother because it takes a lot of time and money.
My rights don't end where your fear begins.
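[To make the parent's claim concrete: over a *finite* input domain, correctness genuinely can be proven by checking every case. A minimal Python sketch; the `clamp` function and the chosen input range are hypothetical examples, not anything from the thread.]

```python
# Exhaustive "proof" that clamp() always returns a value within bounds.
# For a finite input domain, brute-force enumeration IS a proof; the hard
# part in real verification is doing this symbolically for infinite domains.
def clamp(x, lo, hi):
    """Pin x into the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

# Check every integer input in a bounded range against the spec.
assert all(0 <= clamp(x, 0, 255) <= 255 for x in range(-512, 512))
print("property holds for all tested inputs")
```

Real proof systems (SMT solvers, proof assistants) generalize this idea to unbounded inputs, which is where the time and money go.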
(Score: 2, Insightful) by Anonymous Coward on Wednesday May 22 2019, @06:26PM
Nobody can predict all possible reasonably expectable inputs for an autonomous vehicle, so proving its outputs correct isn't even a question.
Unless you have a planet-sized supercomputer hiding in your tesseract. You aren't holding out on us, are you?
(Score: 4, Insightful) by Immerman on Wednesday May 22 2019, @07:10PM (1 child)
Technically true - but it's all but impossible to formally prove correctness in anything much more complicated than "Hello World". Add in the necessity of dealing with real-world inputs from a chaotic, imperfect operating environment, and you've got no chance at all of achieving perfection, much less proving it.
And things get far worse when we start talking about neural networks and other "grown" AI - the entire point of training a neural network is that we don't know how to accomplish the same thing algorithmically. The behavior of individual pseudoneurons may be provable, but all the really useful behavior emerges from the network as a whole, where our understanding still lags far behind our achievements.
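[The provable-neuron / emergent-network point can be shown in a few lines. Each unit below is trivially verifiable (a weighted sum plus a threshold), yet XOR - which no single such unit can compute - emerges from wiring three of them together. The weights are hand-picked for illustration; in a trained network they would be learned and opaque.]

```python
def step(z):
    # A single pseudoneuron's activation: trivially provable behavior.
    return 1 if z > 0 else 0

def neuron(xs, ws, b):
    # Weighted sum plus bias, thresholded.
    return step(sum(x * w for x, w in zip(xs, ws)) + b)

def xor_net(x1, x2):
    # Two hidden units (OR and NAND) feed an AND unit: XOR emerges,
    # though no individual unit computes anything like XOR.
    h_or   = neuron((x1, x2), (1, 1), -0.5)
    h_nand = neuron((x1, x2), (-1, -1), 1.5)
    return neuron((h_or, h_nand), (1, 1), -1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # prints the XOR truth table
```

Proving each `neuron` correct is easy; saying anything useful about what a million of them do together is the open problem.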
(Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @01:40AM
So, you don't know and have no way to find out why or even what it's programmed to do but you're going to trust your life to it? Darwin is coming for you.
My rights don't end where your fear begins.
(Score: 2) by maxwell demon on Wednesday May 22 2019, @08:22PM
Wrong. Peano arithmetic (i.e. natural number arithmetic) can't even prove its own consistency, assuming it is consistent. See Gödel's incompleteness theorems.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by DeVilla on Friday May 24 2019, @04:40AM (1 child)
Thing is, it's not code. It's "Machine Learning". It's trained from some input data set. If you need to change the behavior (because a local municipality decided to use flashing yellow arrows instead of a solid green circle) you can't just tell it the new rule or expect it to read the sign next to the light. Teaching it is like teaching a horse. It learns by screwing up and being corrected.
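[The "teach it like a horse" point, sketched with the simplest possible learner, a perceptron. It cannot be *told* the new rule; its behavior only changes through repeated labeled corrections. The feature vector and weights here are hypothetical, purely for illustration.]

```python
def predict(w, x):
    # The learner's current behavior: a thresholded weighted sum.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def correct(w, x, target, lr=1.0):
    # Classic perceptron update: nudge the weights only when it screws up.
    err = target - predict(w, x)
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]
x = [1, 1, 1]        # hypothetical encoding of the new signal type
for _ in range(3):   # rounds of "screw up and be corrected"
    w = correct(w, x, target=1)
print(predict(w, x))  # prints 1: it learned the behavior, it wasn't told a rule
```

There is no line of code anywhere that says what a flashing yellow arrow means; the "rule" exists only implicitly in the weights, which is exactly why you retrain instead of patching.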
(Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @01:41AM
It doesn't matter if you call it Taco Learning, if it runs on a CPU, it is code.
My rights don't end where your fear begins.