When we talk about artificial intelligence (AI), what do we actually mean?
AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance because laws and policies simply will not operate without one.
This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation Autonomous Weapons Systems (AWSs) are treated under the laws of war.
True, we may eventually need more than one definition (just as “goodwill” means different things in different contexts). But we have to start somewhere, so in the absence of a regulatory definition at the moment, let’s get the ball rolling.
http://theconversation.com/why-we-need-a-legal-definition-of-artificial-intelligence-46796
(Score: 2) by penguinoid on Saturday September 05 2015, @02:24AM
My rule for liability would be something along the lines of "entities bear responsibility in direct proportion to how much their actions contributed to the result". Liability is similar, but should attach first to whatever criminal actions led to the result, then to actions performed with the result as their intent, and finally to negligent actions. Where AI fits in here would be rather complicated and would depend on all kinds of details, including AI complexity and the decision to deploy the AI.
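To make the proportionality idea concrete, here's a rough Python sketch of that rule. The entity names, contribution weights, and damages figure are made up for illustration; it's just one toy reading of "responsibility in direct proportion to contribution", not anything from the article.

```python
# Toy illustration of proportional responsibility: split a damages amount
# among entities in direct proportion to how much each contributed to the result.
# All names and numbers below are hypothetical.

def apportion_liability(damages, contributions):
    """Return each entity's share of damages, proportional to its contribution."""
    total = sum(contributions.values())
    if total == 0:
        return {entity: 0.0 for entity in contributions}
    return {entity: damages * share / total
            for entity, share in contributions.items()}

# Example: a crash involving an AI vendor, the operator who deployed it,
# and a negligent human driver (invented weights).
shares = {"ai_vendor": 0.2, "operator": 0.3, "other_driver": 0.5}
print(apportion_liability(100_000, shares))
# {'ai_vendor': 20000.0, 'operator': 30000.0, 'other_driver': 50000.0}
```

Of course, the hard part is the weights themselves: deciding how much the AI's complexity versus the deployment decision actually contributed is exactly the complicated bit.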
As to the actions of self-driving cars, the solution is simple -- the AI company pays for the car insurance, and any criminal liability disappears into a cloud of corporations.