posted by martyb on Wednesday May 22 2019, @12:21PM
from the Marvin dept.

No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead in the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT


Original Submission

 
  • (Score: 4, Interesting) by Immerman (3985) on Wednesday May 22 2019, @08:33PM (#846370)

    Take a step back for a moment if you will, and let's ask a more fundamental question: how do you feel about letting other people drive? A ride with a friend, a taxi, a bus, an airplane - anything where someone else is ultimately in control of piloting the vehicle?

    Do you object to a loyal human chauffeur as strenuously as you do to an AI driver?

    Because I think that's the starting point to look at it from - would you like to have someone else do the driving for you? Done right, it's a stressful, time-consuming job, and most people are nowhere near as good as a professional driver.
    If not, then obviously an AI driver will be no better.
    But if you are okay with having a driver, then the proper comparison is not "myself or an AI", it's "a professional driver or an AI". And if the AI is (as it will almost certainly *eventually* become) a better, safer driver in every measurable way, then it becomes an aesthetic choice - do you pick the objectively more dangerous driver because they offer "the human touch"? If I were doing something unusual, more likely to confuse the AI, I'd strongly prefer the human. But just getting from A to B through normal traffic and normal traffic problems? As long as the AI has a track record of proven reliability behind it, I see no reason to distrust it.

    Other than security - the destructive potential of a software driving system that can be remotely corrupted should not be underestimated. Not that I'm likely to be targeted personally, but painting an internet-facing bullseye on an inherently dangerous vehicle is just asking for trouble.

    As for who's liable in the case of an AI malfunction that kills someone? Seems a pretty clear-cut case of manufacturer liability for a product defect to me, and from what little I've heard the car companies mostly agree. As usual when someone rich is at fault, nobody goes to jail - but they've got deep enough pockets that generous payouts are reasonable to expect, if only for the PR impact. The current Tesla, etc. stories muddy the waters precisely because Autopilot is *not* self-driving, only a driver-assist system, and thus, since it explicitly requires you in the loop as safety overseer, it can be reasonably argued that you are ultimately responsible for safe behavior. That's one of the many reasons I think "almost self-driving" systems should not be allowed. But once a system is warranted to operate fully autonomously, with no oversight whatsoever, then liability for its actions, when it is used for the purpose for which it was marketed, resides squarely with the manufacturer.
