

posted by janrinok on Friday October 14 2016, @05:28PM
from the avoiding-Samaritan dept.

The UK government has been urged to establish an AI ethics board to tackle the creeping influence of machine learning on society.

The call comes from a Robotics and Artificial Intelligence report published yesterday by the House of Commons science and technology select committee. It quotes experts who warned the panel that AI "raises a host of ethical and legal issues".

"We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI," the report said.

It highlighted that methods are required to verify that AI systems operate transparently, that their behaviour is not unpredictable, and that any decisions they make can be explained.

Innovate UK – an agency of UK.gov's Department of Business – said that "no clear paths exist for the verification and validation of autonomous systems whose behaviour changes with time."

They think they can stop Samaritan?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1, Insightful) by Anonymous Coward on Friday October 14 2016, @11:34PM (#414473)

    Your definitions of artificial intelligence and real intelligence are flawed. "Artificial" means man-made, while "real" means grown biologically in nature. Being able to fully understand something doesn't change it from artificial to real.

    You can fully audit any AI; computers are still deterministic. If you have the code, the data sets, and a record of all the inputs, you can calculate exactly what it'll do in every situation. This assumes the AI is all software: if you run genetic algorithms on reconfigurable chips, those algorithms end up exploiting the physical properties of their environment, and we don't have control over the environment. So if you have all the data, the AI is completely predictable, even if some stranger walking by is startled by something he didn't predict.
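
    A minimal sketch of that claim in Python (numpy only; the model and data are made up for illustration). Fix the code, the data, and the seed, and two training runs produce bit-identical weights, so any decision the model makes can be replayed and audited exactly:

        import numpy as np

        def train(seed, X, y, steps=500, lr=0.1):
            # Logistic regression by gradient descent. The seed pins the
            # weight initialization, the only source of randomness here.
            rng = np.random.default_rng(seed)
            w = rng.normal(size=X.shape[1])
            for _ in range(steps):
                p = 1 / (1 + np.exp(-(X @ w)))    # predictions
                w -= lr * X.T @ (p - y) / len(y)  # gradient step
            return w

        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 5))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)

        # Same code + same data + same seed = the same AI, bit for bit.
        assert np.array_equal(train(7, X, y), train(7, X, y))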

    However, AI techniques like neural nets (where all the advancements are coming from) have a vast search space, so predicting what will happen for a given input without actually simulating it is very difficult. These AIs have 'tiny holes' in them: if the right pixel is off in exactly the right way, it triggers a completely unexpected outcome. Well, not unexpected in the sense that you couldn't trace through everything and watch it happen, but unexpected as in unwanted behavior. Like when you're suddenly startled because you thought for a moment that a speck of dust floating in the air was a spider trying to land on your face. Misfires like that. Neural nets get startled too.
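
    Those 'tiny holes' are what the literature calls adversarial examples, and the standard one-step recipe (the fast gradient sign method) fits in a few lines. A hedged sketch with an invented linear "network": nudge every input by a tiny amount in the direction that hurts the model most, and a classification flips.

        import numpy as np

        rng = np.random.default_rng(0)
        d = 100
        w = rng.normal(size=d)            # pretend these are trained weights
        x = rng.normal(size=d)
        x += (0.5 - w @ x) * w / (w @ w)  # place x just on the class-1 side

        def predict(x):
            return 1 / (1 + np.exp(-(w @ x)))  # probability of class 1

        # Gradient of the cross-entropy loss wrt the input, for true label y=1.
        grad = (predict(x) - 1.0) * w
        x_adv = x + 0.02 * np.sign(grad)  # each input shifts by at most 0.02

        print(predict(x))       # ~0.62: classified as class 1
        print(predict(x_adv))   # ~0.25: the 'startled' misfire

    Everything above is still deterministic and traceable; the point is only that nothing about the clean input warned you the hole was there.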

    Many AI algorithms are quite compact. They don't take a ton of code, so you can validate that all the algorithms and components are correct and then simulate them using whatever data you have. That's how you audit AI.
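
    As an illustration of how compact (data invented for the example): here is a complete nearest-neighbour classifier, with the kind of hand-checkable validation cases such an audit would rest on.

        import numpy as np

        def knn_predict(X_train, y_train, x, k=3):
            # The whole algorithm: rank training points by distance,
            # take the k nearest, majority-vote their labels.
            dist = np.linalg.norm(X_train - x, axis=1)
            nearest = y_train[np.argsort(dist)[:k]]
            return np.bincount(nearest).argmax()

        # Validate the component against cases whose answers we know by hand,
        # then simulate on whatever recorded data you have.
        X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [4.9, 5.1]])
        y = np.array([0, 0, 1, 1, 1])
        assert knn_predict(X, y, np.array([0.05, 0.05])) == 0
        assert knn_predict(X, y, np.array([5.0, 5.1])) == 1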

  • (Score: 0) by Anonymous Coward on Saturday October 15 2016, @10:42AM (#414574)

    Artificial vs. actual intelligence in the AI debate comes down to understanding. This is a common problem in the field of artificial intelligence. Once you can fully audit an intelligent system and there is no "mystery" to how it works, it ceases to be artificial intelligence and becomes simply a program (this is a philosophical distinction rather than a programmatic one). The idea being that once an AI is fully audited or understood, it ceases to mirror the complexity and magic of the human mind, which has functions that we cannot understand or audit.

    The distinction here is between Strong or General AI and ML algorithms that "attempt" to emulate human thought. While I agree that ML is fairly well understood and, to some extent, auditable, it is also not to be considered "intelligent" in a Strong or General AI sense. ML performs tasks that can be fairly well understood and validated at a reasonably low level of function. The issue with any proposed Strong or General AI is that if it can be validated at a low level, but our brain cannot be, then the SAI/AGI is not performing intelligently; rather, it is simply following a set of understood and validated "rules".

    Again, perhaps only a distinction of philosophy, but still a useful one when attempting to "define" Artificial Intelligence against comparative "learning" processes like ML.

    One frequently cited example is the understanding of colors. We cannot accurately define or explain colors. One cannot necessarily prove that the red I see, the red you see, and the red an ML sees are all "the same". An ML does not have the same understanding of a color that you have, or that I have, or that a person with one of the myriad forms of color-blindness has. Furthermore, the ML cannot describe the color it sees any better than we can.

    In a similar way, an ML that produces a story/screenplay/poem does not "understand" the work it creates. It cannot be asked to explain its thought processes or to "show its work" in the way a human writer can. Furthermore, it cannot critically evaluate its writing and compare it to another piece of writing. These statements may all be appended with a "YET", but for now this remains an obstacle to describing ML as "intelligent".

    • (Score: 2) by maxwell demon (1608) on Saturday October 15 2016, @03:57PM (#414601) Journal

      Once you can fully audit an intelligent system and there is no "mystery" to how it works it ceases to be artificial intelligence and becomes simply a program (this is a philosophical distinction rather than a programmatic distinction.) The idea being that once an AI is fully audited or understood, it ceases to mirror the complexity and magic of the human mind, which has functions that we can not understand or audit.

      So as soon as we fully understand human intelligence, humans cease to be intelligent?

      --
      The Tao of math: The numbers you can count are not the real numbers.