Submitted via IRC for chromas
Amazon scraps secret AI recruiting tool that showed bias against women
SAN FRANCISCO (Reuters) - Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.
Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.
[...] But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.
Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. Amazon’s recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, they said.
(Score: 2) by acid andy on Thursday October 11 2018, @06:50AM
You could make a case that humans only have access to sensory input that came from the past as well. We can build mathematical models to perform simulations that extrapolate into the future, but it's very difficult to tell if they're right until the future becomes the past.
You're right. It's an artifact of the way in which they are trained. If you had a large training data set with a very long and varied history, I suppose you could break it up into smaller sets chronologically and then train the network on an older set and reward it based on its performance against a newer set (the "generalization performance").
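The chronological train/evaluate split described above can be sketched roughly like this. This is a hypothetical illustration, not anything from Amazon's system; the dataset, the `train_fraction` parameter, and the `chronological_split` helper are all invented for the example.

```python
from datetime import date

# Toy dataset of (submission_date, feature_vector, label) tuples.
# All values are invented purely to demonstrate the split.
records = [
    (date(2010, 1, 1), [0.2, 1.0], 0),
    (date(2012, 6, 1), [0.8, 0.1], 1),
    (date(2014, 3, 1), [0.5, 0.5], 1),
    (date(2016, 9, 1), [0.9, 0.2], 0),
    (date(2018, 2, 1), [0.1, 0.7], 1),
]

def chronological_split(records, train_fraction=0.6):
    """Sort by date and cut once, so the model is trained only on the
    older records and scored on the newer ones (the "generalization
    performance" the comment mentions)."""
    ordered = sorted(records, key=lambda r: r[0])
    cut = int(len(ordered) * train_fraction)
    return ordered[:cut], ordered[cut:]

train_set, test_set = chronological_split(records)

# Every training record predates every evaluation record.
assert max(r[0] for r in train_set) < min(r[0] for r in test_set)
print(len(train_set), len(test_set))  # 3 2
```

The key property is that no "future" example leaks into training; with a long enough history you could repeat the split at several cut points and average the scores.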
If you really wanted to reward "revolutionary unseen things", I suppose you'd have to have a cadre of arty human critics manually score every training output on an "imaginative forward-thinkingness" scale. Again though, the humans don't have a crystal ball, so it's more than likely they'd be leading that particular neural network up a blind alley. You could maybe get around this somewhat by training up lots of neural networks instead of just one (with lots of different human trainers), but good luck finding enough data. In all honesty, I have to wonder whether just introducing a certain amount of randomness into the networks would handle the future as well as, or better than, any arbitrary team of human visionaries!
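The "many networks plus randomness" idea can be sketched as a toy ensemble. Everything here is invented for illustration: `train_model` stands in for training one network with a different random seed, and the linear scorer is just a placeholder, not a real recruiting model.

```python
import random

def train_model(seed):
    """Stand-in for training one network. The seed injects the
    randomness the comment suggests; each 'model' ends up slightly
    different. Returns a toy scoring function."""
    rng = random.Random(seed)
    noise = rng.uniform(-0.1, 0.1)  # per-model random perturbation
    return lambda x: 0.5 * x + noise

def ensemble_score(models, x):
    """Average the scores of independently randomized models, so no
    single model's quirks dominate the ranking."""
    return sum(m(x) for m in models) / len(models)

models = [train_model(seed) for seed in range(10)]
score = ensemble_score(models, 4.0)

# The per-model noise is bounded by 0.1, so the average stays near 2.0.
assert 1.9 <= score <= 2.1
```

Averaging over many independently perturbed models is essentially what ensembling does in practice; whether it "handles the future" better than a panel of humans is, of course, the open question in the comment.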
If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?