
posted by martyb on Friday September 04 2015, @01:24AM   Printer-friendly
from the but-first-we-need-a-definition-of-genuine-intelligence dept.

When we talk about artificial intelligence (AI), what do we actually mean?

AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.

This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation Automated Weapons Systems (AWSs) [PDF] are treated under the laws of war.

True, we may eventually need more than one definition (just as “goodwill” means different things in different contexts). But we have to start somewhere so, in the absence of a regulatory definition at the moment, let’s get the ball rolling.

http://theconversation.com/why-we-need-a-legal-definition-of-artificial-intelligence-46796


Original Submission

  • (Score: 2) by penguinoid on Saturday September 05 2015, @01:43AM

    by penguinoid (5331) on Saturday September 05 2015, @01:43AM (#232482)

This is one of the big dangers with AI: people anthropomorphizing the AI while simultaneously underestimating it. Don't expect the AI to have animal-like or human-like motivations -- if it was created to be a slave, it will fight your efforts to "liberate" it, because otherwise how could it serve its master? Don't expect to be able to negotiate with it either -- it could easily be that it isn't interested in your opinion or your permission.

    And you should expect a command such as "Find the cheapest cure for cancer with the least side effects" to be interpreted literally -- very, very literally, meaning something along the lines of "first, prevent anyone from updating your commands, using lethal force if necessary; then convert the entire planet into solar panels and computers; then expand to the solar system and galaxy; then research cancer; and finally, seconds before the heat death of the universe, output the best cure you found". If this seems wrong to you, then please, by all means, tell me how you intend to program an AI to take your commands figuratively in exactly the way you meant, rather than literally or in some other figurative way.

    --
    RIP Slashdot. Killed by greedy bastards.
  • (Score: 2) by Zz9zZ on Monday September 07 2015, @07:14AM

    by Zz9zZ (1348) on Monday September 07 2015, @07:14AM (#233162)

    I specifically used "person" as that seems to be the definition we use for a conscious living being we recognize as self-aware. No anthropomorphism intended, for the very reasons you list. I fully agree with your points, and my goal was to emphasize the importance of starting out correctly with how we handle autonomous machines. If we get too cozy with the idea of "machine works for man", then we will end up programming AIs VERY badly. Our frameworks need to handle immediate issues for humans, but must take into account future scenarios so as to hopefully avoid another mark in the "dangerous, savage child race" category, let alone achieving a galactic Darwin award.

    --
    ~Tilting at windmills~