
SoylentNews is people

posted by martyb on Wednesday May 22 2019, @12:21PM   Printer-friendly
from the Marvin dept.

No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT



 
  • (Score: 2, Disagree) by The Mighty Buzzard on Wednesday May 22 2019, @04:21PM (6 children)

    Incorrect. Code absolutely can be bug free. Code is math and math can be proven to be without flaw. We just normally don't bother because it takes a lot of time and money.
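    [Ed. note: the claim above is the premise of formal verification. As a minimal sketch in Lean 4 (the function and theorem names here are made up for illustration, not from any library), a program and a machine-checked proof of its specification can live side by side:]

    ```lean
    -- A tiny program: double a natural number by adding it to itself.
    def double (n : Nat) : Nat := n + n

    -- Its specification: double n always equals 2 * n. If this
    -- type-checks, the property holds for every possible input --
    -- no testing required.
    theorem double_correct (n : Nat) : double n = 2 * n := by
      unfold double
      omega
    ```

    [Scaling this up is exactly the "time and money" problem the comment mentions: real verified systems, such as the seL4 microkernel, took many person-years of proof effort for a few thousand lines of C.]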

    --
    My rights don't end where your fear begins.
  • (Score: 2, Insightful) by Anonymous Coward on Wednesday May 22 2019, @06:26PM

    by Anonymous Coward on Wednesday May 22 2019, @06:26PM (#846345)

    Nobody can predict all the inputs an autonomous vehicle can reasonably be expected to encounter. Proving its outputs correct isn't even in question.

    Unless you have a planet-sized supercomputer hiding in your tesseract. You aren't holding out on us, are you?
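    [Ed. note: a back-of-envelope calculation makes the point concrete. The resolution and bit depth below are illustrative assumptions, not any particular vehicle's sensor spec:]

    ```python
    import math

    # Raw input space for a single camera frame, under assumed specs.
    width, height = 1280, 720      # pixels (assumed)
    bits_per_pixel = 24            # 8 bits per RGB channel (assumed)
    total_bits = width * height * bits_per_pixel

    # Distinct possible frames = 2 ** total_bits. Rather than build
    # that integer, just count its decimal digits.
    digits = math.floor(total_bits * math.log10(2)) + 1
    print(f"one frame alone has a state count with {digits:,} digits")
    ```

    [That is the space for one frame, before considering sequences of frames, lidar, radar, or weather. Exhaustive analysis is not merely expensive; it is physically off the table.]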

  • (Score: 4, Insightful) by Immerman on Wednesday May 22 2019, @07:10PM (1 child)

    by Immerman (3985) on Wednesday May 22 2019, @07:10PM (#846356)

    Technically true - but it's all but impossible to formally prove correctness in anything much more complicated than "Hello World". Add in the necessity of dealing with real-world inputs from a chaotic, imperfect operating environment, and you've got no chance at all of achieving perfection, much less proving it.

    And things get far worse when we start talking about neural networks and other "grown" AI - the entire point of training a neural network is that we don't know how to accomplish the same thing algorithmically. The behavior of individual pseudoneurons may be provable, but all the really useful behavior emerges from the network as a whole, where our understanding still lags far behind our achievements.
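    [Ed. note: the provable-units/emergent-whole distinction can be shown with the classic XOR example. Each unit below is a trivially analyzable threshold function; the weights are hand-picked for illustration, not learned:]

    ```python
    # Each neuron is a simple, fully provable threshold function.
    def step(x):
        return 1 if x > 0 else 0

    def neuron(inputs, weights, bias):
        return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

    # XOR cannot be computed by any single such unit (it is not
    # linearly separable) -- it only appears when units are composed.
    def xor_net(a, b):
        h1 = neuron([a, b], [1, 1], -0.5)       # fires if a OR b
        h2 = neuron([a, b], [1, 1], -1.5)       # fires if a AND b
        return neuron([h1, h2], [1, -1], -0.5)  # OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_net(a, b))  # 0 0→0, 0 1→1, 1 0→1, 1 1→0
    ```

    [With three hand-set neurons this is still inspectable; a trained network with millions of learned weights has the same character but no human-readable decomposition.]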

  • (Score: 2) by maxwell demon on Wednesday May 22 2019, @08:22PM

    by maxwell demon (1608) on Wednesday May 22 2019, @08:22PM (#846366) Journal

    Code is math and math can be proven to be without flaw.

    Wrong. By Gödel's second incompleteness theorem, Peano arithmetic (i.e. natural number arithmetic) can't even prove its own consistency; any proof of it requires a stronger system whose consistency is just as much in question.

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by DeVilla on Friday May 24 2019, @04:40AM (1 child)

    by DeVilla (5354) on Friday May 24 2019, @04:40AM (#846951)

    Thing is, it's not code. It's "Machine Learning". It's trained from some input data set. If you need to change the behavior (because a local municipality decided to use flashing yellow arrows instead of a solid green circle) you can't just tell it the new rule or expect it to read the sign next to the light. Teaching it is like teaching a horse. It learns by screwing up and being corrected.
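    [Ed. note: the contrast sketched above, between editing a rule and retraining a model, looks roughly like this. All names are hypothetical, and the "model" is a trivial nearest-match stand-in for a real learned system:]

    ```python
    # A rule-based controller absorbs a new traffic rule as a
    # one-line change:
    def rule_based_action(signal):
        if signal == "solid_green_circle":
            return "proceed"
        if signal == "flashing_yellow_arrow":  # the new municipal rule:
            return "yield_then_turn"           # added by editing one line
        return "stop"

    # A learned model has no rule to edit. Changing its behavior means
    # gathering labeled examples and retraining -- teaching the horse
    # again. Sketched with an exact-match lookup standing in for a
    # real model:
    def retrain(training_data):
        def model(signal):
            # label of an exact match, else a safe default
            return dict(training_data).get(signal, "stop")
        return model

    model = retrain([("solid_green_circle", "proceed")])
    print(model("flashing_yellow_arrow"))  # "stop" -- wrong for the new rule

    # Only new training data, not a stated rule, changes the behavior:
    model = retrain([("solid_green_circle", "proceed"),
                     ("flashing_yellow_arrow", "yield_then_turn")])
    print(model("flashing_yellow_arrow"))  # "yield_then_turn"
    ```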