
SoylentNews is people

posted by martyb on Thursday October 11 2018, @03:59AM   Printer-friendly
from the unseen-bias-is-still-bias dept.

Submitted via IRC for chromas

Amazon scraps secret AI recruiting tool that showed bias against women

SAN FRANCISCO (Reuters) - Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.

[...] But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. 

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.
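The mechanism is straightforward to reproduce on synthetic data: if historical hiring outcomes are negatively correlated with a gendered term, any model fit to those outcomes will assign the term a negative weight. A minimal sketch (all data and numbers are invented; plain logistic regression stands in for Amazon's undisclosed models):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical synthetic resumes: feature "term" marks a gendered
# phrase (e.g. "women's"); "skill" is an actual qualification signal.
term = rng.integers(0, 2, n)
skill = rng.normal(size=n)
# Biased historical labels: past hires skew heavily against resumes
# containing the term, independent of skill.
logits = 1.5 * skill - 2.0 * term
hired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit logistic regression by plain gradient descent on log-loss.
X = np.column_stack([np.ones(n), term, skill])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n

print(w[1] < 0)  # the term's learned weight is negative
```

The model is not told anything about gender; it simply reproduces whatever correlation the historical labels contain.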

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
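The "no guarantee" caveat is easy to demonstrate on synthetic data: hide the sensitive attribute from the model entirely and give it only a correlated proxy feature (say, a particular college or club), and the learned weight on the proxy is still negative. A purely illustrative sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
# Hidden gender-linked attribute the edited model never sees.
hidden = rng.integers(0, 2, n)
# Proxy feature that agrees with the hidden attribute 80% of the
# time (a hypothetical stand-in for a college or club name).
proxy = np.where(rng.random(n) < 0.8, hidden, 1 - hidden)
skill = rng.normal(size=n)
# Biased historical outcomes depend on the hidden attribute.
logits = 1.5 * skill - 2.0 * hidden
hired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Train only on (proxy, skill): the sensitive term has been "edited out".
X = np.column_stack([np.ones(n), proxy, skill])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n

print(w[1] < 0)  # the bias resurfaces through the proxy
```

Removing the explicit term changes nothing fundamental: the optimizer routes the same historical bias through whatever correlated features remain.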

The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. Amazon’s recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, they said.



Original Submission

 
  • (Score: 2) by acid andy on Thursday October 11 2018, @06:50AM


    That is the problem with the current "AI": these systems learn "what is the past" and can only drive you onward based on that past. They are nothing more than sophisticated "statistical correlation" machines.

    You could make a case that humans only have access to sensory input that came from the past as well. We can build mathematical models to perform simulations that extrapolate into the future, but it's very difficult to tell whether they're right until the future becomes the past.

    Yes, maybe they learn things from correlations not yet detected by humans, but they will not discover new experiences or validate new hypotheses or propose things that break the patterns of the past. Inherently so, because their learning process punishes them if they propose "revolutionary unseen things" and rewards them when they "predict the past" as a form of validation of their learning.

    You're right. It's an artifact of the way in which they are trained. If you had a large training data set with a very long and varied history, I suppose you could break it up into smaller sets chronologically and then train the network on an older set and reward it based on its performance against a newer set (the "generalization performance").
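    The chronological-split idea described here is essentially walk-forward validation, as used for time-series models: always train on the past and score on the next slice in time. A minimal sketch on synthetic data (all numbers invented; a least-squares fit stands in for the network):

```python
import numpy as np

# Hypothetical time-ordered data set: ten "years" of examples, with
# the label driven by one feature plus a little noise.
rng = np.random.default_rng(2)
years = np.repeat(np.arange(2005, 2015), 100)
n = len(years)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = (X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(float)

# Walk-forward evaluation: train only on years up to a cutoff,
# score on the following year, then slide the cutoff forward.
for cutoff in (2010, 2011, 2012, 2013):
    train, test = years <= cutoff, years == cutoff + 1
    w = np.linalg.lstsq(X[train], y[train], rcond=None)[0]
    acc = ((X[test] @ w > 0.5) == y[test]).mean()
    print(cutoff + 1, round(float(acc), 2))
```

The ordering matters: shuffling the years before splitting would let the model "predict the past" and overstate its forward-looking performance.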

    If you really wanted to reward "revolutionary unseen things", I suppose you'd have to have a cadre of arty human critics manually score every training output on an "imaginative forward-thinkingness" scale. Again, though, the humans don't have a crystal ball, so it's more than likely they'd be leading that particular neural network up a blind alley. You could maybe get around this somewhat by training up lots of neural networks instead of just one (with lots of different human trainers), but good luck finding enough data. In all honesty, I have to wonder whether just introducing a certain amount of randomness into the networks would handle the future as well as, or better than, any arbitrary team of human visionaries!
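    As a toy illustration of the randomness idea: perturbing a model's scores before ranking surfaces candidates that a deterministic top-k would never pick, which is a crude exploration mechanism (purely synthetic sketch; no claim that the explored candidates are better):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical model scores for 100 candidates.
scores = rng.normal(size=100)

# A deterministic policy always shortlists the same top 10.
det = set(np.argsort(scores)[-10:].tolist())

# A noisy policy adds random perturbations before ranking, so
# lower-scored "pattern-breaking" candidates occasionally surface.
picks = set()
for _ in range(20):
    noisy = scores + rng.normal(scale=1.0, size=scores.size)
    picks |= set(np.argsort(noisy)[-10:].tolist())

print(len(det), len(picks))  # the noisy policy explores far more candidates
```

This is the same explore/exploit trade-off that bandit algorithms formalize: a little injected noise trades some short-term ranking accuracy for exposure to options the past data would never recommend.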

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?