posted by janrinok on Friday October 14 2016, @05:28PM   Printer-friendly
from the avoiding-Samaritan dept.

The UK government has been urged to establish an AI ethics board to tackle the creeping influence of machine learning on society.

The call comes from a Robotics and Artificial Intelligence report published yesterday by the House of Commons science and technology select committee. It quotes experts who warned the panel that AI "raises a host of ethical and legal issues".

"We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI," the report said.

It highlighted that methods are required to verify that AI systems are operating in a transparent manner, to make sure that their behaviour is not unpredictable, and that any decisions made can be explained.

Innovate UK – an agency of UK.gov's Department of Business – said that "no clear paths exist for the verification and validation of autonomous systems whose behaviour changes with time."

They think they can stop Samaritan?


Original Submission

 
  • (Score: 2) by SomeGuy on Friday October 14 2016, @06:58PM

    by SomeGuy (5632) on Friday October 14 2016, @06:58PM (#414410)

    Exactly this. I think the last time this sort of topic came up, someone claiming to be an AI expert was going on about how they could audit the entire thing... right. Two problems: first, if every possible decision is audited and approved, then it is no longer "artificial" intelligence, it is real intelligence - the process of programming is just different, possibly simplified but with a more difficult auditing process. Second, when you are dealing with any sizable intelligence, who in the heck is realistically going to audit all of that? We have a hard enough time getting people to audit traditional program code.

    On the other hand, without auditing, do we really know what is in "modern" program code anyway? Which system should we trust more: a trillion seemingly random bits of organically learned AI knowledge, or a trillion lines of code pumped out by the cheapest outsourced contractors, working from business requirements penned by a PHB with a brain the size of a flea?

  • (Score: 2) by tibman on Friday October 14 2016, @07:46PM

    by tibman (134) Subscriber Badge on Friday October 14 2016, @07:46PM (#414423)

    Yeah, emergent behavior is probably impossible to audit. It's like auditing a bunch of ants to determine the exact shape/size/design the colony is going to end up with. Any prediction is a guess at best.
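    A toy illustration of that point (Langton's ant, a standard example I'm borrowing here, not anything specific to the systems under discussion): two trivial, fully deterministic local rules, yet in practice the only way to know what the grid looks like after N steps is to run all N steps.

    ```python
    # Langton's ant: two trivial, fully deterministic local rules, yet the global
    # pattern (a long chaotic phase, then a repeating "highway") is effectively
    # impossible to predict without simply running every step.
    def langtons_ant(steps=11000, size=80):
        grid = [[0] * size for _ in range(size)]   # 0 = white cell, 1 = black cell
        x = y = size // 2                           # ant starts in the middle
        dx, dy = 0, -1                              # facing "up"
        for _ in range(steps):
            if grid[y][x] == 0:
                dx, dy = -dy, dx                    # white cell: turn right
            else:
                dx, dy = dy, -dx                    # black cell: turn left
            grid[y][x] ^= 1                         # flip the cell, then step forward
            x, y = (x + dx) % size, (y + dy) % size
        return grid

    if __name__ == "__main__":
        print("black cells after 11000 steps:", sum(map(sum, langtons_ant())))
    ```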

    --
    SN won't survive on lurkers alone. Write comments.
    • (Score: 0) by Anonymous Coward on Friday October 14 2016, @08:30PM

      by Anonymous Coward on Friday October 14 2016, @08:30PM (#414440)

      We are basically going to come back around to the messy, imperfect nature of human behavior so abhorred by literalist geeks - the need for "good" judgment where "good" is some sort of consensus (aka cultural) concept that has no fixed, absolute definition.

  • (Score: 1, Insightful) by Anonymous Coward on Friday October 14 2016, @11:34PM

    by Anonymous Coward on Friday October 14 2016, @11:34PM (#414473)

    Your definitions of artificial intelligence and real intelligence are flawed. "Artificial" means man-made, while "real" means grown biologically in nature. Being able to fully understand something doesn't change it from artificial to real.

    You can fully audit any AI; computers are still deterministic. If you have the code, the data sets, and a record of all the input, you can calculate exactly what it'll do in every situation. This assumes the AI is all software: if you run genetic algorithms on reconfigurable chips, those algorithms end up exploiting properties of their physical environment, and we don't have control over the environment. So if you have all the data, the AI is completely predictable, even if some stranger walked by and was startled by something he didn't predict.
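    A minimal sketch of that determinism claim (the tiny logistic model below is my own stand-in, not anything from the article): pin down the code, the data, and the random seed, and two training runs produce bit-for-bit identical models.

    ```python
    import numpy as np

    def train_tiny_model(seed, X, y, steps=500, lr=0.1):
        """Logistic-regression training where every source of variation is pinned down."""
        rng = np.random.default_rng(seed)       # fixed seed -> fixed initial weights
        w = rng.normal(size=X.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities
            w -= lr * X.T @ (p - y) / len(y)    # gradient descent step
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    # Same code + same data + same seed -> exactly the same model, every time.
    print(np.array_equal(train_tiny_model(42, X, y), train_tiny_model(42, X, y)))  # True
    ```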

    However, AI techniques like neural nets (where all the advancements are coming from) have a vast search space, so trying to predict what will happen given some input, without actually simulating it, is very difficult. These AIs have 'tiny holes' in them, meaning that if the right pixel is off in exactly the right way, it'll trigger a completely unexpected outcome. Well, not unexpected in the sense that you couldn't trace through everything and see it happen, but unexpected as in unwanted behavior. Like when you're suddenly startled because you thought for a moment that a speck of dust floating in the air was a spider trying to land on your face. Misfires like that. Neural nets get startled too.
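    A hedged sketch of what such a 'tiny hole' looks like, using a plain linear classifier as a stand-in for a real neural net (the weights and input are made up): a per-feature change that is individually small flips the decision when it is aimed along the model's own gradient. This is the linear-model analogue of the fast gradient sign method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=100)        # weights of a (pretend) trained linear classifier
    x = rng.normal(size=100)        # one input, e.g. 100 "pixels"

    score = x @ w
    print("original decision:", score > 0)

    # Push every feature a tiny, equal amount in the direction that hurts the score
    # most; eps is chosen to be just enough to cross the decision boundary.
    eps = 2 * abs(score) / np.sum(np.abs(w))
    x_adv = x - eps * np.sign(w) * np.sign(score)

    print("perturbed decision:", (x_adv @ w) > 0)                    # flipped
    print("largest per-feature change:", np.max(np.abs(x_adv - x)))  # small
    ```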

    Many AI algorithms are quite compact. They don't take a ton of code, so you can validate that all the algorithms and components are correct and then you simulate using whatever data you have. That's how you audit AI stuff.
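    In the same spirit, one crude way to do that simulation step (my own sketch, not an established tool): replay logged inputs through the trained model and flag any decision that breaks an invariant you care about.

    ```python
    def replay_audit(model, recorded_inputs, invariant):
        """Re-run a model over logged inputs and collect every decision that
        violates a human-specified invariant."""
        violations = []
        for i, x in enumerate(recorded_inputs):
            decision = model(x)
            if not invariant(x, decision):
                violations.append((i, x, decision))
        return violations

    # Toy usage: both the "model" and the invariant are placeholders.
    model = lambda x: x > 0.5                  # pretend decision rule
    inputs = [0.2, 0.7, 0.9, 0.4]              # logged inputs from production
    print(replay_audit(model, inputs, lambda x, d: not (d and x < 0.8)))
    # -> [(1, 0.7, True)]  i.e. one decision the invariant says should not happen
    ```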

    • (Score: 0) by Anonymous Coward on Saturday October 15 2016, @10:42AM

      by Anonymous Coward on Saturday October 15 2016, @10:42AM (#414574)

      Artificial vs Actual intelligence in the AI debate comes down to understanding. This is a common problem in the field of artificial intelligence. Once you can fully audit an intelligent system and there is no "mystery" to how it works it ceases to be artificial intelligence and becomes simply a program (this is a philosophical distinction rather than a programmatic distinction.) The idea being that once an AI is fully audited or understood, it ceases to mirror the complexity and magic of the human mind, which has functions that we can not understand or audit.

      The distinction here is between Strong or General AI versus ML algorithms that "attempt" to emulate human thought. While I agree that ML is fairly well understood and to some extent "audit-able", it is also not to be considered "intelligent" in a Strong or General AI sense. ML performs tasks that can be fairly well understood and validated at a reasonably low level of function. The issue with any proposed Strong or General AI is that if it can be validated at a low level, but our brain cannot be, then the SAI/AGI is not performing intelligently, rather it is simply following a set of understood and validated "rules".

      Again perhaps only a distinction of philosophy, but still a useful distinction when attempting to "define" Artificial Intelligence vs comparative "learning" processes like ML.

      One frequently cited example is the understanding of colors. We cannot accurately define or explain colors. One cannot necessarily prove that the red I see, the red you see, and the red an ML sees are all "the same". An ML does not have the same understanding of a color that you have, or that I have, or that a person with one of the myriad forms of color-blindness has. Furthermore, the ML cannot describe the color it sees any better than we can.

      In a similar way, an ML that produces a story/screenplay/poem does not "understand" the work it creates. It cannot be asked to explain its thought processes or "show its work" in the way a human writer can. Furthermore, it cannot critically evaluate and compare its writing to another piece of writing. These statements may all be appended with a "YET", but currently this is still an issue for describing ML as "intelligent".

      • (Score: 2) by maxwell demon on Saturday October 15 2016, @03:57PM

        by maxwell demon (1608) on Saturday October 15 2016, @03:57PM (#414601) Journal

        Once you can fully audit an intelligent system and there is no "mystery" to how it works it ceases to be artificial intelligence and becomes simply a program (this is a philosophical distinction rather than a programmatic distinction.) The idea being that once an AI is fully audited or understood, it ceases to mirror the complexity and magic of the human mind, which has functions that we can not understand or audit.

        So as soon as we fully understand human intelligence, humans cease to be intelligent?

        --
        The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by Murdoc on Friday October 14 2016, @11:38PM

    by Murdoc (2518) on Friday October 14 2016, @11:38PM (#414475)

    I don't think that it is as impossible as you think. It's not like we have no idea how to analyse complex systems. In such systems we don't have to be able to predict the exact outcome of every input with perfect precision, but we can get a good idea in the short term of what is likely, and also of the overall range of outputs that the system can produce.

    Think of weather: we have a decent idea of what it will be like in the next few days, and overall we know that it's not going to spontaneously develop a storm that will wipe out all life on Earth. Nor will all weather calm to nothing, everywhere at once, and stay that way. I think we can study AI with at least that level of precision, and probably better, since we can manipulate the inputs in a controlled environment, unlike the weather. There are numerous other analogues of complex systems already being studied as well.

    And really, I think that those who are developing AI are probably already doing all the low-level analysis anyway. After all, they don't want their Rosie the Robot to start putting your house guests in the stew. This commission will just be there to provide a way to make sure that the higher-level analysis is done, and in a setting that is (ostensibly anyway) directed toward the public good, instead of by myopic, profit-seeking enterprises.
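    A small sketch of that weather-style analysis (the "system" below is an arbitrary toy function of my own, not a real AI): instead of predicting each individual output, sample lots of inputs and characterise the empirical range of behaviour.

    ```python
    import numpy as np

    def output_envelope(system, sample_input, n_trials=10_000):
        """Weather-forecast-style audit: no exact prediction per input, just an
        empirical picture of what the system is and isn't likely to do."""
        outputs = np.array([system(sample_input()) for _ in range(n_trials)])
        return {
            "mean": outputs.mean(),
            "p01": np.percentile(outputs, 1),
            "p99": np.percentile(outputs, 99),
            "worst_seen": outputs.max(),
        }

    # Toy stand-in "system": an arbitrary nonlinear response to a 3-dimensional input.
    rng = np.random.default_rng(0)
    system = lambda x: float(np.tanh(x @ np.array([1.0, -2.0, 0.5])) + 0.1 * x[0] ** 2)
    print(output_envelope(system, lambda: rng.normal(size=3)))
    ```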