posted by martyb on Friday September 04 2015, @01:24AM
from the but-first-we-need-a-definition-of-genuine-intelligence dept.

When we talk about artificial intelligence (AI), what do we actually mean?

AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.

This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation Automated Weapons Systems (AWSs) [PDF] are treated under the laws of war.

True, we may eventually need more than one definition (just as “goodwill” means different things in different contexts). But we have to start somewhere so, in the absence of a regulatory definition at the moment, let’s get the ball rolling.

http://theconversation.com/why-we-need-a-legal-definition-of-artificial-intelligence-46796


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by penguinoid on Saturday September 05 2015, @01:12AM

    by penguinoid (5331) on Saturday September 05 2015, @01:12AM (#232479)

    I'd be in favor of any law that would increase the probability that the first general AI be created by an ethical and responsible programmer/group. This would probably have to be some sort of government sponsored research or education focusing on safe AI design.

    I see the development of a general AI as incredibly dangerous (to the extent of potentially exterminating all life on Earth, even the cockroaches) but also inevitable: it's impossible to ban because others won't stop the hardware research, hardware is approaching the calculating power of a human being, and there's already 800 MB of sample code for a general AI along with blueprints for its hardware. Hopefully, someone can create one that has all the correct human values... else bye bye humanity.

    --
    RIP Slashdot. Killed by greedy bastards.
  • (Score: 1) by khallow on Saturday September 05 2015, @03:55AM

    by khallow (3766) on Saturday September 05 2015, @03:55AM (#232503)

    > I'd be in favor of any law that would increase the probability that the first general AI be created by an ethical and responsible programmer/group.

    Since laws don't do that, what else do we have here?

    > This would probably have to be some sort of government sponsored research or education focusing on safe AI design.

    And which government do you trust enough to do that? I probably could get behind a Swiss- or Swedish-led project, but even then, where are the experience and capabilities to keep bad things from happening?

    I think a better approach here is to improve humanity while we do the AI stuff. That way, the AI doesn't have to be smarter than we are now; it has to be smarter than a substantially advanced human.