posted by janrinok on Friday October 14 2016, @05:28PM   Printer-friendly
from the avoiding-Samaritan dept.

The UK government has been urged to establish an AI ethics board to tackle the creeping influence of machine learning on society.

The call comes from a Robotics and Artificial Intelligence report published yesterday by the House of Commons Science and Technology select committee. It quotes experts who warned the panel that AI "raises a host of ethical and legal issues".

"We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI," the report said.

It highlighted the need for methods to verify that AI systems operate transparently, that their behaviour is not unpredictable, and that any decisions they make can be explained.

Innovate UK – an agency of UK.gov's Department of Business – said that "no clear paths exist for the verification and validation of autonomous systems whose behaviour changes with time."

They think they can stop Samaritan?


Original Submission

  • (Score: 2) by Murdoc (2518) on Friday October 14 2016, @11:38PM (#414475)
    I don't think it is as impossible as you think. It's not as if we have no idea how to analyse complex systems. We don't have to predict the exact outcome of every input with perfect precision; we can get a good idea in the short term of what is likely, and of the overall range of outputs the system can produce.

    Think of the weather: we have a decent idea of what it will be like over the next few days, and overall we know it's not going to spontaneously develop a storm that wipes out all life on Earth. Nor will all weather calm to nothing, everywhere at once, and stay that way. I think we can study AI with at least that level of precision, and probably better, since we can manipulate the inputs in a controlled environment, unlike the weather. There are numerous other analogues of complex systems already being studied as well.

    And really, I think those developing AI are probably already doing all the low-level analysis anyway; after all, they don't want their Rosie the Robot to start putting your house guests in the stew. This commission would just provide a way to make sure the higher-level analysis is done, and in a setting that is (ostensibly, anyway) directed toward the public good rather than run by myopic, profit-seeking enterprises.
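
    As a rough, hypothetical sketch of the kind of low-level check described above (toy_model, the swept inputs, and the bounds are all made-up stand-ins, not anything from the report): sweep controlled inputs and confirm the system's outputs stay inside an expected envelope.

    # Minimal sketch: estimate a model's output envelope over controlled inputs
    # and assert it stays within expected bounds. Everything here is a toy
    # stand-in for illustration only.
    import random

    def toy_model(x: float) -> float:
        """Hypothetical stand-in for a learned system under test."""
        return 0.5 * x + 0.1 * random.gauss(0, 1)

    def output_envelope(model, inputs, trials=100):
        """Observed min/max output over repeated runs on controlled inputs."""
        outputs = [model(x) for x in inputs for _ in range(trials)]
        return min(outputs), max(outputs)

    if __name__ == "__main__":
        controlled_inputs = [i / 10 for i in range(-10, 11)]  # swept, not sampled from the wild
        low, high = output_envelope(toy_model, controlled_inputs)
        assert -2.0 < low and high < 2.0, "behaviour drifted outside the expected envelope"
        print(f"observed output range: [{low:.3f}, {high:.3f}]")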
