
posted by martyb on Friday September 04 2015, @01:24AM   Printer-friendly
from the but-first-we-need-a-definition-of-genuine-intelligence dept.

When we talk about artificial intelligence (AI), what do we actually mean?

AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.

This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation Automated Weapons Systems (AWSs) [PDF] are treated under the laws of war.

True, we may eventually need more than one definition (just as “goodwill” means different things in different contexts). But we have to start somewhere so, in the absence of a regulatory definition at the moment, let’s get the ball rolling.

http://theconversation.com/why-we-need-a-legal-definition-of-artificial-intelligence-46796


Original Submission

 
  • (Score: 2, Interesting) by meustrus on Friday September 04 2015, @03:04AM

    by meustrus (4961) on Friday September 04 2015, @03:04AM (#232104)

    We don't have to solve the hard philosophical problems about "intelligence" to solve the legal problems. Most of the time it comes down to liability. And when it comes to liability, I have one solid rule about what makes an intelligence liable for its own actions:

    It understands the consequences and can feel remorse.

    AI is good at finding optimal solutions to problems. That is why it exists. And if it can feel remorse - if it can factor in some negative value for the results of its actions, past or future - then it has surpassed its creators' responsibility for its own actions. Think about it in terms of a clock versus a person with a mental disability. If one sells a clock to a hospital, and somebody dies because medication was administered at the wrong time, then the person who made (or more likely the person who sold) the clock is responsible. But what if a person dies because a mentally disabled person was hired to manage their medication and failed to do so correctly? Is the disabled person at fault? Probably, unless the person who hired them knew they weren't up to the job. And that's where we need to get with AI.

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
  • (Score: 3, Touché) by c0lo on Friday September 04 2015, @04:24AM

    by c0lo (156) Subscriber Badge on Friday September 04 2015, @04:24AM (#232125) Journal
    It understands the consequences and can feel remorse... [etc]

    Ah, I see. So, this is why we haven't seen many banksters in jail - they fall outside the "intelligence" definition and you can't punish a moron.
    My only problem: why do we reward them with our money?

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 2) by penguinoid on Saturday September 05 2015, @02:24AM

    by penguinoid (5331) on Saturday September 05 2015, @02:24AM (#232489)

    My rule for responsibility would be something along the lines of "entities bear responsibility in direct proportion to how much their actions contributed to the result". Liability is similar, but should attach first to whatever criminal actions led to the result, then to whatever actions were performed with the result as their intent, and finally to negligent actions. Where AI fits in here would be rather complicated and would depend on all kinds of details, including the AI's complexity and the decision to deploy it.

    As to the actions of self-driving cars, the solution is simple -- the AI company pays for the car insurance, and any criminal liability disappears into a cloud of corporations.

    --
    RIP Slashdot. Killed by greedy bastards.