posted by Fnord666 on Wednesday May 29 2019, @11:58AM   Printer-friendly
from the I'm-sorry-Dave dept.

Artificial intelligence is ubiquitous. Mobile maps route us through traffic, algorithms can now pilot automobiles, virtual assistants help us smoothly toggle between work and life, and smart code is adept at surfacing our next favorite song.

But AI could prove dangerous, too. Tesla CEO Elon Musk once warned that biased, unmonitored and unregulated AI could be the "greatest risk we face as a civilization." More immediately, AI experts are concerned that automated systems are likely to absorb bias from their human programmers. And when bias is coded into the algorithms that power AI, it may be nearly impossible to remove.

[...] To better understand how AI might be governed, and how to prevent human bias from altering the automated systems we rely on every day, CNET spoke with Salesforce AI experts Kathy Baxter and Richard Socher in San Francisco. Regulating the technology might be challenging, and the process will require nuance, said Baxter.

The industry is working to develop "trusted AI that is responsible, that is mindful, and safeguards human rights," she said. "We make sure [the process] does not infringe on those human rights. It also needs to be transparent. It has to be able to explain to the end user what it is doing, and give them the opportunity to make informed choices with it."

Salesforce and other tech firms, Baxter said, are developing cross-industry guidance on the criteria for data used in AI data models. "We will show the factors that are used in a model, like age, race, gender. And we're going to raise a flag if you're using one of those protected data categories."
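
The flag Baxter describes can be pictured as a simple feature audit. A minimal sketch of the idea in Python (the function name and category list below are illustrative assumptions, not Salesforce's actual tooling):

    # Illustrative sketch only -- not Salesforce's actual implementation.
    # Flag any model input feature that falls into a protected data category.
    PROTECTED_CATEGORIES = {"age", "race", "gender", "religion", "disability"}

    def flag_protected_features(feature_names):
        """Return the features matching a protected category, warning a
        reviewer before the model is trained on them."""
        flagged = {f for f in feature_names if f.lower() in PROTECTED_CATEGORIES}
        for feature in sorted(flagged):
            print(f"WARNING: model uses protected data category: {feature}")
        return flagged

    flag_protected_features(["zip_code", "Age", "income", "Gender"])
    # -> warns about "Age" and "Gender"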


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by physicsmajor on Wednesday May 29 2019, @03:34PM (4 children)

    by physicsmajor (1471) on Wednesday May 29 2019, @03:34PM (#848941)

    This is the right answer.

    Deep Learning is very, very good at finding patterns, and it does so via the easiest and best shortcuts. Thus, if you feed it headshots of the perpetrators of all violent firearm homicides, you should not be shocked when it becomes biased toward identifying young black men; they are wildly overrepresented in this cohort due to gang violence. I make no commentary on this other than stating the facts. This sort of thing is how we get periodic stories in the lay press about AI being biased.

    But there's the rub. It is biased, but biased by reality. The only way to prevent this is to start faking data to manipulate the result back to [normal/acceptable], but what is acceptable when it's apparently not even okay to be white? Regardless, when we do any manipulation to the inputs, the output becomes worthless - so either we don't try to train on situations with a skin-color bias, or we accept that the result will be biased by reality.
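
    The "faking" in question usually takes the mechanical form of reweighting or resampling the training set so each group contributes equally. A toy sketch of reweighting, with invented numbers:

        # Toy sketch of the kind of input manipulation described above:
        # reweight a skewed training set so each group carries equal total weight.
        from collections import Counter

        samples = ["a"] * 90 + ["b"] * 10   # invented data: group "a" is 9x overrepresented
        counts = Counter(samples)
        n_groups = len(counts)

        # Weight each example inversely to its group's frequency.
        weights = {g: len(samples) / (n_groups * c) for g, c in counts.items()}
        print(weights)  # {'a': 0.555..., 'b': 5.0}; each group now sums to 50.0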

  • (Score: 4, Informative) by HiThere on Wednesday May 29 2019, @04:34PM (2 children)

    by HiThere (866) Subscriber Badge on Wednesday May 29 2019, @04:34PM (#848969) Journal

    I accept that you believe that you are stating facts, but you aren't. They are slightly overrepresented in that group due to gang violence. Another reason they are overrepresented is that they get arrested for things that others don't. Another reason is that most of them can't afford decent lawyers, so even with equivalent evidence (and an unbiased jury?), they will be convicted more often. There are probably other reasons that didn't occur to me off the top of my head, and likely also some ameliorating factors.

    N.B.: These are primary reasons, not secondary ones like "lower classes are always more violent because they're more frustrated." The secondary reasons are important if you're trying to figure out how to address the result, but not in surface explanations. For those you only want direct observables, not the reasons why they are observed.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 3, Interesting) by physicsmajor on Thursday May 30 2019, @02:09AM (1 child)

      by physicsmajor (1471) on Thursday May 30 2019, @02:09AM (#849156)

      Mod me troll all you like, but please bring evidence next time.

      The data is pretty clear here. Over half of the actual mug shots in this cohort would be black, despite blacks being about 14% of the US population. That's not a "slight" over-representation; when referenced within each race, the magnitude is closer to an order of magnitude (about 9x). Within that group they are also far more likely to be male and young, which is a further serious bias problem. But honestly, any bias along racial lines at all WILL be picked up by a deep learning algorithm, because overall skin tone is very easy for the algorithm to train itself to see.
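
      As pure arithmetic (taking the figures above at face value, with no claim about the underlying data), the within-group rate ratio implied by a cohort share and a population share can be computed directly, and it is steeply sensitive to the exact cohort share:

          def rate_ratio(cohort_share, population_share):
              """Per-capita rate of a group relative to everyone else, given
              its share of a cohort and its share of the population."""
              group_rate = cohort_share / population_share
              other_rate = (1 - cohort_share) / (1 - population_share)
              return group_rate / other_rate

          # With a 14% population share (the figure quoted above):
          print(rate_ratio(0.50, 0.14))  # ~6.1x at exactly half the cohort
          print(rate_ratio(0.60, 0.14))  # ~9.2x at sixty percent of the cohort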

      My point is really simple: reality is biased. I am deliberately tabling the discussion of why it is (socioeconomics is certainly a major factor); the point is that there IS bias. You feed reality in, you will get biased results (or lay press articles screaming about racist AI). Fixing this is not trivial, because it is the ground truth - any manipulation you do makes the algorithm in question untrustworthy and non-generalizable, as it skews the algorithm farther from reality.

      • (Score: 2) by HiThere on Thursday May 30 2019, @03:51AM

        by HiThere (866) Subscriber Badge on Thursday May 30 2019, @03:51AM (#849184) Journal

        I'm not denying that over half the mug shots would be blacks; I'm denying that there is evidence supporting your reason for why that is true... or rather for the extent of its truth. The objective fact is true. The justification does not match available evidence. If you go back a bit, the Irish gangs and the tongs were just as violent. And in both cases large numbers of innocent folks were swept into prison on the wings of prejudice. So you can't say it's because of the gangs. That's a part of the reason, but only a part... and often not a large part.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 3, Informative) by DeathMonkey on Wednesday May 29 2019, @05:36PM

    by DeathMonkey (1380) on Wednesday May 29 2019, @05:36PM (#848993) Journal

    Thus if you feed it headshots of the perpetrators of all violent firearm homicides, you should not be shocked when it becomes biased toward identifying young black men;

    If all you feed it is headshots, the AI is going to finger white men first, because they are by far the most common perpetrators.

    But, that's only because there are far more white men in this country than black men.

    So are we "faking" the inputs by including population numbers in order to derive a rate so we can compare populations?
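
    As a toy illustration of that counts-versus-rates distinction (the numbers below are invented, chosen only so the two measures diverge):

        # Invented numbers: raw counts and per-capita rates can rank groups
        # in opposite orders, and which one the training data encodes is
        # exactly the modelling choice being argued about.
        population = {"group_x": 200_000_000, "group_y": 40_000_000}
        incidents  = {"group_x": 6_000,       "group_y": 3_000}

        for g in population:
            rate = incidents[g] / population[g] * 100_000   # per 100k people
            print(f"{g}: {incidents[g]} incidents, {rate:.1f} per 100k")

        # group_x: 6000 incidents, 3.0 per 100k  <- most incidents in absolute terms
        # group_y: 3000 incidents, 7.5 per 100k  <- higher per-capita rate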