UK Government Urged to Establish an Artificial Intelligence Ethics Board
posted by janrinok on Friday October 14 2016, @05:28PM   Printer-friendly
from the avoiding-Samaritan dept.

The UK government has been urged to establish an AI ethics board to tackle the creeping influence of machine learning on society.

The call comes from a Robotics and Artificial Intelligence report published yesterday by the House of Commons science and technology select committee. It quotes experts who warned the panel that AI "raises a host of ethical and legal issues".

"We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI," the report said.

It highlighted that methods are required to verify that AI systems are operating in a transparent manner, to make sure that their behaviour is not unpredictable, and that any decisions made can be explained.

Innovate UK – an agency of UK.gov's Department of Business – said that "no clear paths exist for the verification and validation of autonomous systems whose behaviour changes with time."

They think they can stop Samaritan?


Original Submission

Related Stories

Is Ethical A.I. Even Possible? 35 comments

When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.



Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Friday October 14 2016, @05:53PM

    by Anonymous Coward on Friday October 14 2016, @05:53PM (#414390)

    'The UK Government Urged to Establish an Artificial Intelligence Ethics Board'

    Interesting, maybe they could first try having a bloody UK Government Ethics [theguardian.com] Board...

  • (Score: 1, Informative) by Anonymous Coward on Friday October 14 2016, @06:03PM

    by Anonymous Coward on Friday October 14 2016, @06:03PM (#414392)

    It highlighted that methods are required to verify that AI systems are operating in a transparent manner, to make sure that their behaviour is not unpredictable, and that any decisions made can be explained.

    The thing about these systems is that we literally don't know how they work. We "train" them by feeding them data, but the associations they create between inputs and results are completely informal. For all we know they are built on statistical anomalies that are not apparent and not actually meaningful. We have a hard enough time dealing with bad logic in humans (prejudice, stereotyping, etc.), but at least we inherently have the capacity to understand those mechanisms by virtue of us all sharing similar equipment. We won't even have that in common with AI.
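
    A toy sketch of that worry (all of the data, features, and numbers below are invented purely for illustration): a model can look perfect on its training data while keying on a statistical coincidence that means nothing anywhere else.

        # Hypothetical example: a tiny logistic-regression "AI" latches onto a spurious feature.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000

        signal = rng.normal(size=n)                                   # genuinely meaningful feature
        labels = (signal + rng.normal(size=n) > 0).astype(float)      # noisy ground truth
        spurious = 2 * labels - 1 + rng.normal(scale=0.05, size=n)    # coincidence: tracks the label almost perfectly in training

        X = np.column_stack([signal, spurious])
        w = np.zeros(2)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(5000):                                         # plain gradient descent on the logistic loss
            w -= 0.1 * X.T @ (sigmoid(X @ w) - labels) / n

        print("learned weights [signal, spurious]:", w)               # most of the weight piles onto the spurious column

        # Deployment: the coincidence is gone and the spurious column is just noise.
        X_new = np.column_stack([signal, rng.normal(size=n)])
        accuracy = np.mean((sigmoid(X_new @ w) > 0.5) == labels)
        print("accuracy once the coincidence disappears:", accuracy)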

    • (Score: 2) by SomeGuy on Friday October 14 2016, @06:58PM

      by SomeGuy (5632) on Friday October 14 2016, @06:58PM (#414410)

      Exactly this. I think the last time this sort of topic came up, someone claiming to be an AI expert was going on about how they could audit the entire thing... right. Two problems: first, if every possible decision is audited and approved, then it is no longer "artificial" intelligence, it is real intelligence - the process of programming is just different, possibly simplified but with a more difficult auditing process. Second, when you are dealing with any sizable intelligence, who in the heck is realistically going to audit all of that? We have a hard enough time getting people to audit traditional program code.

      On the other hand, without auditing do we really know what is in "modern" program code anyway? Which system should we trust more? A trillion seemingly random bits of organically learned AI knowledge, or a trillion lines of code pumped out by drooling outsourced golden-brown sacks of crap working from their hovels in India from business requirements penned by a PHB with a brain the size of a flea.

      • (Score: 2) by tibman on Friday October 14 2016, @07:46PM

        by tibman (134) Subscriber Badge on Friday October 14 2016, @07:46PM (#414423)

        Yeah, emergent behavior is probably impossible to audit. It's like auditing a bunch of ants to determine what exact shape/size/design the colony is going to be. Any prediction is a guess at best.

        --
        SN won't survive on lurkers alone. Write comments.
        • (Score: 0) by Anonymous Coward on Friday October 14 2016, @08:30PM

          by Anonymous Coward on Friday October 14 2016, @08:30PM (#414440)

          We are basically going to come back around to the messy, imperfect nature of human behavior so abhorred by literalist geeks - the need for "good" judgment where "good" is some sort of consensus (aka cultural) concept that has no fixed, absolute definition.

      • (Score: 1, Insightful) by Anonymous Coward on Friday October 14 2016, @11:34PM

        by Anonymous Coward on Friday October 14 2016, @11:34PM (#414473)

        Your definitions of artificial intelligence and real intelligence are flawed. "Artificial" means man-made, while "real" means grown biologically by nature. Being able to fully understand something doesn't change it from artificial to real.

        You can fully audit any AI; computers are still deterministic. If you have the code, the data sets, and a record of all the input, you can calculate exactly what it'll do in every situation. This is assuming the AI is all software: if you run genetic algorithms on reconfigurable chips, those algorithms end up exploiting properties of their environment, and we don't have control over the environment. So if you have all the data, the AI is completely predictable, even if a bystander would be startled by behaviour he didn't predict.

        However, AI techniques like neural nets (where all the advancements are coming from) have a vast search space, so trying to predict what will happen given some input, without actually simulating it, is very difficult. These AIs have 'tiny holes' in them, meaning that if the right pixel is off in exactly a certain way it'll trigger a completely unexpected outcome. Well, not unexpected in the sense that you can trace through everything and see it happen, but unexpected as in unwanted behavior. Like when you're suddenly startled because you thought for a moment that a speck of dust floating in the air was a spider trying to land on your face. Misfires like that. Neural nets get startled too.

        Many AI algorithms are quite compact. They don't take a ton of code, so you can validate that all the algorithms and components are correct and then you simulate using whatever data you have. That's how you audit AI stuff.
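
        A minimal sketch of both points (the network weights and the "input log" below are invented stand-ins, not anything from a real system): replaying a recorded input is perfectly deterministic, yet a small nudge to one component can flip the decision, which is the kind of 'tiny hole' you only find by simulating.

            # Hypothetical stand-in: a compact, frozen two-layer net plus a recorded input.
            import numpy as np

            rng = np.random.default_rng(42)              # fixed seed, so the whole run replays identically

            W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
            w2 = rng.normal(size=8)

            def decide(x):
                hidden = np.maximum(0.0, W1 @ x + b1)    # ReLU layer
                return bool(w2 @ hidden > 0.0)           # yes/no decision

            logged_input = rng.normal(size=4)            # "a record of all the input"

            # Determinism: replaying the log always gives the identical answer.
            assert decide(logged_input) == decide(logged_input.copy())

            # "Tiny hole": scan small nudges to one component and see whether the decision flips.
            for eps in np.linspace(-3.0, 3.0, 601):
                nudged = logged_input.copy()
                nudged[0] += eps
                if decide(nudged) != decide(logged_input):
                    print(f"decision flips when input[0] is shifted by {eps:+.3f}")
                    break
            else:
                print("no flip found in the scanned range")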

        • (Score: 0) by Anonymous Coward on Saturday October 15 2016, @10:42AM

          by Anonymous Coward on Saturday October 15 2016, @10:42AM (#414574)

          Artificial vs Actual intelligence in the AI debate comes down to understanding. This is a common problem in the field of artificial intelligence. Once you can fully audit an intelligent system and there is no "mystery" to how it works it ceases to be artificial intelligence and becomes simply a program (this is a philosophical distinction rather than a programmatic distinction.) The idea being that once an AI is fully audited or understood, it ceases to mirror the complexity and magic of the human mind, which has functions that we can not understand or audit.

          The distinction here is between Strong or General AI versus ML algorithms that "attempt" to emulate human thought. While I agree that ML is fairly well understood and to some extent "audit-able", it is also not to be considered "intelligent" in a Strong or General AI sense. ML performs tasks that can be fairly well understood and validated at a reasonably low level of function. The issue with any proposed Strong or General AI is that if it can be validated at a low level, but our brain can not be, then the SAI/AGI is not performing intelligently; rather it is simply following a set of understood and validated "rules".

          Again, perhaps only a distinction of philosophy, but still a useful distinction when attempting to "define" Artificial Intelligence vs comparative "learning" processes like ML.

          One frequently cited example is the understanding of colors. We can not accurately define or explain colors. One can not necessarily prove that the red I see, the red you see, and the red an ML sees are all "the same". An ML does not have the same understanding of a color that you have, or that I have, or that a person with one of the myriad forms of color-blindness has. Furthermore, the ML can not describe the color it sees any better than we can.

          In a similar way, an ML that produces a story/screenplay/poem does not "understand" the work it creates. It can not be asked to explain its thought processes or "show its work" in the way a human writer can. Furthermore, it can not critically evaluate and compare its writing to another piece of writing. These statements may all be appended with a "YET", but currently this is still an issue for describing ML as "intelligent".

          • (Score: 2) by maxwell demon on Saturday October 15 2016, @03:57PM

            by maxwell demon (1608) on Saturday October 15 2016, @03:57PM (#414601) Journal

            Once you can fully audit an intelligent system and there is no "mystery" to how it works it ceases to be artificial intelligence and becomes simply a program (this is a philosophical distinction rather than a programmatic distinction.) The idea being that once an AI is fully audited or understood, it ceases to mirror the complexity and magic of the human mind, which has functions that we can not understand or audit.

            So as soon as we fully understand human intelligence, humans cease to be intelligent?

            --
            The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by Murdoc on Friday October 14 2016, @11:38PM

        by Murdoc (2518) on Friday October 14 2016, @11:38PM (#414475)

        I don't think it is as impossible as you think. It's not like we have no idea how to analyse complex systems. In such systems, we don't have to be able to predict the exact outcome of every input with perfect precision, but we can get a good idea in the short term of what is likely, and also of the overall range of outputs the system can produce. Think of the weather: we have a decent idea of what it will be like in the next few days, and overall we know that it's not going to spontaneously develop a storm that will wipe out all life on Earth. Nor will all weather calm to nothing, everywhere at once, and stay that way. I think we can study AI with at least that level of precision, and probably better, since we can manipulate the inputs in a controlled environment, unlike the weather. There are numerous other analogues of complex systems already being studied as well. And really, I think that those who are developing AI are probably already doing all the low-level analysis anyway. After all, they don't want their Rosie the Robot to start putting the house guests in the stew. This commission will just be there to provide a way to make sure that higher-level analysis is done, and in a setting that is (ostensibly, anyway) directed for the public good, instead of by myopic, profit-seeking enterprises.
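
        Something like the following is what that kind of analysis looks like in practice (the "black box" and the input distribution below are made-up stand-ins, not any real system): rather than predicting every output exactly, you sample the operating conditions and characterise the range of behaviour statistically.

            # Hypothetical stand-in for the system being studied; in a real audit this
            # would be the AI under test, driven by inputs drawn from its operating conditions.
            import numpy as np

            rng = np.random.default_rng(7)

            def black_box(x):
                w = np.array([0.8, -1.2, 0.3])
                return np.tanh(x @ w) + 0.05 * np.sin(10.0 * x[..., 0])

            samples = rng.normal(size=(100_000, 3))      # assumed distribution of operating inputs
            outputs = black_box(samples)                 # Monte Carlo sweep over likely conditions

            print("observed output range  :", outputs.min(), "to", outputs.max())
            print("1% / 99% quantiles     :", np.quantile(outputs, [0.01, 0.99]))
            print("fraction outside [-1, 1]:", np.mean(np.abs(outputs) > 1.0))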

  • (Score: 0) by Anonymous Coward on Friday October 14 2016, @06:35PM

    by Anonymous Coward on Friday October 14 2016, @06:35PM (#414399)

    I envision they will have Watson start reading all the ethics journals instead of the medical ones.

    /drtfa

    • (Score: 2) by maxwell demon on Saturday October 15 2016, @03:59PM

      by maxwell demon (1608) on Saturday October 15 2016, @03:59PM (#414602) Journal

      What will happen if you ask Watson whether it is ethical to leave ethical decisions to Watson, and Watson answers "no"?

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by urza9814 on Monday October 17 2016, @08:11PM

        by urza9814 (3954) on Monday October 17 2016, @08:11PM (#415362) Journal

        What will happen if you ask Watson whether it is ethical to leave ethical decisions to Watson, and Watson answers "no"?

        We'll set our best and brightest philosophers to the task of redefining 'Ethics' until that answer changes...

  • (Score: 2) by RamiK on Friday October 14 2016, @06:53PM

    by RamiK (1813) on Friday October 14 2016, @06:53PM (#414405)

    To what end? What could the board possibly accomplish? They can't influence foreign corporations or governments regarding anything to do with AI. Worse, with Brexit underway they can't even force the EU to adopt some kind of product regulations as part of the common market.

    If they cut research funding, plenty of other governments will recruit the researchers to their own universities.

    Honestly, a tribal village elder has as much say over the price of tea in China as the UK government has in shaping artificial intelligence research world-wide.

    --
    compiling...
    • (Score: 2) by Murdoc on Friday October 14 2016, @11:44PM

      by Murdoc (2518) on Friday October 14 2016, @11:44PM (#414478)

      I don't believe that the purpose of the board is to directly affect anything. It's just for research. Once that research is available, governments like the UK and possibly others can use it as the basis for their policy decisions, whether just in their own countries or in the formation of international treaties. Other countries will of course have the option of creating their own commissions to verify or invalidate the research, but I think it's a step in the right direction, and hopefully humanity will come to conclusions that lead to better legislation in this area for the benefit of all. You may or may not have faith that humanity can do that at all in any area, but assuming it is possible, this would be the first step.

      • (Score: 0) by Anonymous Coward on Saturday October 15 2016, @05:28PM

        by Anonymous Coward on Saturday October 15 2016, @05:28PM (#414616)

        Nah:

        The call comes from a Robotics and Artificial Intelligence report published yesterday by the House of Commons science and technology select committee. It quotes experts who warned the panel that AI “raises a host of ethical and legal issues”.

        Try again.

  • (Score: 2, Funny) by Anonymous Coward on Friday October 14 2016, @08:24PM

    by Anonymous Coward on Friday October 14 2016, @08:24PM (#414438)

    The chairman must be a fellow named Dave.

    • (Score: 3, Funny) by JNCF on Friday October 14 2016, @10:44PM

      by JNCF (4317) on Friday October 14 2016, @10:44PM (#414465) Journal

      I'm sorry man, I'm afraid Dave's not here.

  • (Score: 0) by Anonymous Coward on Friday October 14 2016, @10:45PM

    by Anonymous Coward on Friday October 14 2016, @10:45PM (#414468)

    Are relative.

    And those who have none really don't care about rules that try to impose arbitrary ones...