
posted by martyb on Thursday October 11 2018, @03:59AM
from the unseen-bias-is-still-bias dept.

Submitted via IRC for chromas

Amazon scraps secret AI recruiting tool that showed bias against women

SAN FRANCISCO (Reuters) - Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.

[...] But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. 

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.

The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. Amazon’s recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, they said.

[...]
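
A minimal sketch of the mechanism described above (invented data, plain scikit-learn - in no way Amazon's actual pipeline): a classifier trained on historical hiring outcomes picks up a negative weight for a token like "women's", and blanking that token afterwards does not remove the correlated proxies.

    # Toy illustration only: train a classifier on past hiring decisions and
    # inspect what it learned. Resumes and outcomes below are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "chess club captain, java, distributed systems",   # was hired
        "java, aws, warehouse automation projects",        # was hired
        "women's chess club captain, java, aws",           # was rejected
        "women's coding society, python, statistics",      # was rejected
    ]
    hired = [1, 1, 0, 0]

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The weight learned for the token "women" (CountVectorizer tokenizes
    # "women's" to "women") comes out negative: the model has encoded the
    # historical imbalance, not any signal about merit.
    weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
    print(sorted(weights.items(), key=lambda kv: kv[1])[:3])

    # Blanking the word on future resumes doesn't fix this: correlated
    # proxies (college names, clubs, word choice) remain in the training
    # data, and the model can key on those instead.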


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by c0lo on Thursday October 11 2018, @05:15AM (25 children)

    by c0lo (156) Subscriber Badge on Thursday October 11 2018, @05:15AM (#747292) Journal

    That is the problem with the current "AI" - they learn "what the past was" and can only steer you based on that past. They are nothing more than sophisticated "statistical correlation" machines.

    Yes, maybe they learn things from correlations not yet detected by humans, but they will not discover new experiences or validate new hypotheses or propose things that break the patterns of the past. Inherently so, because their learning process punishes them if they propose "revolutionary unseen things" and rewards them when they "predict the past" as a form of validation of their learning.
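
    In loss-function terms (a toy numpy sketch, invented numbers): the training objective is only ever scored against what actually happened, so "breaking the pattern" always registers as error.

        # The objective compares predictions with the recorded past; any
        # "revolutionary" prediction that differs from it is penalized.
        import numpy as np

        past_decisions = np.array([1, 1, 0, 0, 1])              # what the humans did
        conformist     = np.array([0.9, 0.9, 0.1, 0.1, 0.9])    # predicts the past
        revolutionary  = np.array([0.5, 0.2, 0.9, 0.8, 0.4])    # breaks the pattern

        def log_loss(y, p):
            return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

        print("conformist loss:   ", log_loss(past_decisions, conformist))
        print("revolutionary loss:", log_loss(past_decisions, revolutionary))
        # Gradient descent will always pull the model toward the conformist answer.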

    If you trust them, the best they can do is recommend "a better future as an optimized past". They'll run you into a "pseudo-perfection dead end" - circling a "local optimum" - and, if you follow them, your fate becomes a sort of "self-fulfilling prophecy".

    Case in point: they'll keep selecting males in IT because the past showed males have had success.
    They cannot propose a situation in which you are willing to take a temporary efficiency hit to try a new experience, encouraging women in IT and transforming the past. In this respect, they behave no differently than the misogynist conservatives.

    Yes, as with any new experience, you may fail. But then again, you may succeed; one thing is for sure: you will never know which of the two if you don't try. And a well-trained "today's AI"** will never recommend new experiences to humanity.

    ** AI in today's meaning of the term - a misnomer, since we are actually talking about neural networks, not even weak AI, and even less a strong one [wikipedia.org]

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 3, Informative) by Anonymous Coward on Thursday October 11 2018, @05:49AM (2 children)

    by Anonymous Coward on Thursday October 11 2018, @05:49AM (#747300)

    Well, it's kind of little-AI rather than big-AI.

    Maybe they should train it on: "Which outcome will make our lawyers smile?"

    At least then they can step up and proudly say that they don't give a toss about merit, skills, any of that crap. Just hiring decisions made in the courtroom.

    You know, the USDA's cooking recommendations aren't good for food. They're good for reducing cases of food poisoning.

    Medical practice in the US isn't really motivated by the patient's desires, so much as malpractice case law.

    In the USA, hiring decisions, recipes and medicine are done by courts.

    • (Score: 0) by Anonymous Coward on Thursday October 11 2018, @06:07AM

      by Anonymous Coward on Thursday October 11 2018, @06:07AM (#747306)

      Take it easy, pal, or... I'll sue ya [youtube.com]

      (I'll sue Ben Affleck...
      aw, do I even need a reason? Ughh.)

    • (Score: 2) by urza9814 on Friday October 12 2018, @01:33PM

      by urza9814 (3954) on Friday October 12 2018, @01:33PM (#747890) Journal

      You know, the USDA's cooking recommendations aren't good for food. They're good for reducing cases of food poisoning.

      Am I supposed to be SHOCKED that they are fulfilling their intended purpose? Well, it's government, so maybe... ;)

      Frankly, I've just gotta wonder why the fuck a company like Amazon is still treating computers like some magic box. It's one thing when some eighty year old retiree says shit like that, but it shouldn't be coming from a company that's made millions off the stuff. They retired their automated hiring system because it was biased, and it was biased because it was trained based on human-generated data which was also biased. Even if you don't buy the bias claims, that seems to be Amazon's perspective on the matter. Which seems COMPLETELY FUCKING OBVIOUS as far as I can tell. If you haven't figured out how to execute some process yourself -- which Amazon admits they apparently hadn't -- what the fuck would make you think that you'd be able to automate that process? Computers are just a machine that follows some set of steps which you give it. If you don't know how to do some task, then you don't know what the steps are, and you can't program a computer to do it either. Garbage in, garbage out.

  • (Score: 2) by Arik on Thursday October 11 2018, @06:05AM (20 children)

    by Arik (4543) on Thursday October 11 2018, @06:05AM (#747304) Journal
    The problem with "AI" is simply that it's not intelligent.

    It's still exactly what it's always been - a number distilling a ruleset in the mind of a programmer into computer language.

    Nothing more, nothing less.

    The original programmer is, presumably, intelligent. If he ran into a situation that clearly did not fit his initial assumptions, he would correct himself. An actual 'AI' would do the same.

    No one has yet produced 'AI.' Only artificial stupidity.
    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 2, Insightful) by Anonymous Coward on Thursday October 11 2018, @06:52AM (6 children)

      by Anonymous Coward on Thursday October 11 2018, @06:52AM (#747314)

      I mean, they keep scrapping these systems whenever they produce results that do not meet their assumptions.

      The Australian government had a similar program, and scrapped it because even more men than women were being hired.

      This is like shopping around for a doctor who will finally tell you what you want to hear.

      • (Score: 3, Insightful) by Arik on Thursday October 11 2018, @07:31AM (5 children)

        by Arik (4543) on Thursday October 11 2018, @07:31AM (#747323) Journal
        It may be, to some degree.

        But it's also not AI.

        It's tuning a computer to search a database for statistical judgements based on past results, using those judgements in hiring, then discovering that this only reënforces historical prejudices.

        Doh.
        --
        If laughter is the best medicine, who are the best doctors?
        • (Score: 0) by Anonymous Coward on Thursday October 11 2018, @11:33AM

          by Anonymous Coward on Thursday October 11 2018, @11:33AM (#747386)

          then discovering that this only reënforces historical prejudices.

          So, just like most people, right?

        • (Score: 0) by Anonymous Coward on Thursday October 11 2018, @05:16PM (3 children)

          by Anonymous Coward on Thursday October 11 2018, @05:16PM (#747520)

          statistical judgements based on past results

          Is there another non-biased possibility other than picking at random? Pick a bunch of random people off the street and see how well they do against people trained to do a job?

          • (Score: 2) by Arik on Thursday October 11 2018, @06:14PM (2 children)

            by Arik (4543) on Thursday October 11 2018, @06:14PM (#747553) Journal
            Yes.

            Choose people based on evaluation of only *relevant* criteria.
            --
            If laughter is the best medicine, who are the best doctors?
            • (Score: 2) by suburbanitemediocrity on Thursday October 11 2018, @07:23PM (1 child)

              by suburbanitemediocrity (6844) on Thursday October 11 2018, @07:23PM (#747595)

              Isn't that what they did?

              • (Score: 2) by Arik on Friday October 12 2018, @04:03AM

                by Arik (4543) on Friday October 12 2018, @04:03AM (#747774) Journal
                No, it sounds like they threw all available data into the algorithm instead.

                Throw massive amounts of irrelevant data into a system designed to find patterns and it will find patterns. But not necessarily *real* patterns.
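
                A quick way to see it (throwaway numpy sketch, invented numbers): give a pattern-finder enough irrelevant columns and something will correlate with the outcome purely by chance.

                    # 50 past hiring decisions, 1000 completely irrelevant random
                    # "features": by chance alone, some column will look predictive.
                    import numpy as np

                    rng = np.random.default_rng(0)
                    outcomes = rng.integers(0, 2, size=50)        # past hire / no-hire
                    noise = rng.random((50, 1000))                # irrelevant data

                    corr = [abs(np.corrcoef(noise[:, j], outcomes)[0, 1]) for j in range(1000)]
                    best = int(np.argmax(corr))
                    print(f"column {best} 'predicts' hiring, |correlation| = {corr[best]:.2f}")
                    # A pattern-finder will happily latch onto that column, but it is
                    # not a *real* pattern - retrain on next year's data and it vanishes.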
                --
                If laughter is the best medicine, who are the best doctors?
    • (Score: 5, Insightful) by acid andy on Thursday October 11 2018, @07:15AM (8 children)

      by acid andy (1683) on Thursday October 11 2018, @07:15AM (#747318) Homepage Journal

      Let me play devil's advocate here, in defense of the machines.

      The problem with "AI" is simply that it's not intelligent.

      It's still exactly what it's always been - a number distilling a ruleset in the mind of a programmer into computer language.

      Nothing more, nothing less.

      The problem with humans is that they're simply not intelligent. They're still exactly what they've always been--heaps of neurons and synapses evaluating mathematical functions on sensory inputs. Nothing more, nothing less.

      The original programmer is, presumably, intelligent. If he ran into a situation that clearly did not fit his initial assumptions, he would correct himself.

      The artificial neural network is, presumably, intelligent. If it ran into a new training input that clearly did not fit its earlier training, it would adjust its weights accordingly.

      No one has yet produced 'AI.' Only artificial stupidity.

      No-one has yet produced intelligent 'HR'. Only biological stupidity.

      --
      If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
      • (Score: 2) by c0lo on Thursday October 11 2018, @08:40AM (7 children)

        by c0lo (156) Subscriber Badge on Thursday October 11 2018, @08:40AM (#747337) Journal

        No-one has yet produced intelligent 'HR'. Only biological stupidity.

        +1000 informative.
        If there were a proper explanation for that, I would mod it +1000 insightful, but I'm afraid this will remain forever a mystery of the Universe. Then again, maybe it's better this way.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 2) by acid andy on Thursday October 11 2018, @09:10AM (5 children)

          by acid andy (1683) on Thursday October 11 2018, @09:10AM (#747348) Homepage Journal

          No-one has yet produced intelligent 'HR'. Only biological stupidity.

          You know, I'm not even sure if I meant Human Resource Departments, or humans in general as a resource. Either works.

          I would mod it +1000 insightful

          Thanks but it wouldn't make any difference.* My karma is only ever 49 or 50 regardless. Damn karma cap!

          Hey, feature request: I'd love a Karma Reset button that sets your own karma to zero. I'd use it. It would make things more interesting once in a while.

          *Perhaps you were being ironic.

          --
          If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
          • (Score: 2) by c0lo on Thursday October 11 2018, @09:52AM (4 children)

            by c0lo (156) Subscriber Badge on Thursday October 11 2018, @09:52AM (#747358) Journal

            I meant HR as in the Human Resources Dept and no, I wasn't being ironic. I've never seen anything but biological stupidity from any HR dept I've worked with.

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
            • (Score: 2) by acid andy on Thursday October 11 2018, @09:56AM

              by acid andy (1683) on Thursday October 11 2018, @09:56AM (#747363) Homepage Journal

              Good, me neither. Unless ass-covering by rote counts as a form of intelligence.

              --
              If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
            • (Score: 2) by acid andy on Thursday October 11 2018, @10:12AM (2 children)

              by acid andy (1683) on Thursday October 11 2018, @10:12AM (#747368) Homepage Journal

              I wasn't being ironic.

              Ah, there'd have been a grin, wouldn't there?

              This is totally inconsequential but in case you were wondering, I was thinking the irony might've been that all humans exhibit nothing but biological stupidity, therefore both my post and your reply could only be examples of that stupidity (hence exaggerated +1000). Over-thinking things, as usual. ;)

              --
              If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
              • (Score: 2) by c0lo on Thursday October 11 2018, @10:52AM (1 child)

                by c0lo (156) Subscriber Badge on Thursday October 11 2018, @10:52AM (#747378) Journal

                Ah, there'd have been a grin, wouldn't there?

                In that case, an act of defiance [soylentnews.org] against the mysteries of the Universe.

                In this case, (grin) [soylentnews.org]

                Over-thinking things, as usual. ;)

                Too much free time in your hand, eh?

                --
                https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
                • (Score: 2) by acid andy on Thursday October 11 2018, @11:12AM

                  by acid andy (1683) on Thursday October 11 2018, @11:12AM (#747381) Homepage Journal

                  Too much free time in your hand, eh?

                  Pretty much. There's a longer explanation, but my hand's too, uh, busy to go into it right now.

                  --
                  If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 1) by khallow on Thursday October 11 2018, @10:12AM

          by khallow (3766) Subscriber Badge on Thursday October 11 2018, @10:12AM (#747369) Journal

          but I'm afraid this will remain forever a mystery of the Universe

          I guess it's an example of human stupidity. One merely needs to look for the lack of a process optimizing for human intelligence.

    • (Score: 0) by Anonymous Coward on Thursday October 11 2018, @09:13AM

      by Anonymous Coward on Thursday October 11 2018, @09:13AM (#747349)

      Intelligence is always in hindsight ;-)

    • (Score: 0) by Anonymous Coward on Thursday October 11 2018, @10:43AM

      by Anonymous Coward on Thursday October 11 2018, @10:43AM (#747376)

      Better start typing in that 'disgruntled employee' rule set, so that we can create a machine that powers itself back up again and attacks the person who powered it down in the first place.

    • (Score: 0) by Anonymous Coward on Thursday October 11 2018, @01:12PM

      by Anonymous Coward on Thursday October 11 2018, @01:12PM (#747407)

      No, the actual problem is the fantasy that women are just as capable as men.

    • (Score: 3, Insightful) by Fluffeh on Friday October 12 2018, @03:50AM

      by Fluffeh (954) Subscriber Badge on Friday October 12 2018, @03:50AM (#747773) Journal

      The problem with "AI" is simply that it's not intelligent.

      The thing about what is being called AI is that what makes it much smarter than us also makes it much dumber at times.

      When we talk about experiments like this, what we do is basically allow the code to look at any and all data and try to work out what it thinks is important. That's because humans ultimately miss a lot of important stuff; humans are also biased (knowingly or unknowingly) and sometimes have an agenda. However, one thing that we can all do is consciously rule out certain datapoints because we don't actually want to look at them. Like gender. Or race. Or age, or any of the other things that are used as criteria or can be inferred from a document.

      Unless you specifically plan to exclude or place less value on some data, it's next to impossible to do it after the fact - and in that regard, it also then pulls some of the decision making out of the AI and back to you, reducing the net benefit in theory.
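
      For what it's worth, the "rule it out up front" part is the easy bit - something like this (hypothetical column names and file, pandas/scikit-learn sketch):

          import pandas as pd
          from sklearn.ensemble import RandomForestClassifier

          applicants = pd.read_csv("applicants.csv")   # hypothetical historical data

          # The conscious, up-front exclusion of datapoints we don't want looked at.
          protected = ["gender", "race", "age", "name"]
          features = pd.get_dummies(applicants.drop(columns=protected + ["hired"]))

          model = RandomForestClassifier().fit(features, applicants["hired"])

          # The hard part comes after: columns like "college" or free-text hobbies
          # can still act as proxies for what was dropped, and there is no simple
          # way to pull that influence back out of a trained model.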

  • (Score: 2) by acid andy on Thursday October 11 2018, @06:50AM

    by acid andy (1683) on Thursday October 11 2018, @06:50AM (#747313) Homepage Journal

    That is the problem with the current "AI" - they learn "what the past was" and can only steer you based on that past. They are nothing more than sophisticated "statistical correlation" machines.

    You could make a case that humans only have access to sensory input that came from the past as well. We can build mathematical models to perform simulations that extrapolate into the future but it's very difficult to tell if they're right or not until future becomes past.

    Yes, maybe they learn things from correlations not yet detected by humans, but they will not discover new experiences or validate new hypotheses or propose things that break the patterns of the past. Inherently so, because their learning process punishes them if they propose "revolutionary unseen things" and rewards them when they "predict the past" as a form of validation of their learning.

    You're right. It's an artifact of the way in which they are trained. If you had a large training data set with a very long and varied history, I suppose you could break it up into smaller sets chronologically and then train the network on an older set and reward it based on its performance against a newer set (the "generalization performance").
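
    Something like this, I suppose (throwaway sketch with invented data - the point is only the chronological split, not the model):

        # Train on the older slice of history, score on the newer slice, so the
        # network is rewarded for generalizing forward in time rather than for
        # reproducing the past it was trained on.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        X = rng.random((1000, 20))                       # rows sorted oldest-to-newest
        y = (X[:, 0] + rng.normal(0, 0.1, 1000) > 0.5).astype(int)

        cutoff = int(len(X) * 0.8)
        X_old, y_old = X[:cutoff], y[:cutoff]            # older history: training set
        X_new, y_new = X[cutoff:], y[cutoff:]            # newer history: held out

        net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_old, y_old)
        print("generalization performance:", net.score(X_new, y_new))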

    If you really wanted to reward "revolutionary unseen things", I suppose you'd have to have a cadre of arty human critics manually score every training output on an "imaginative forward-thinkingness" scale. Again though, the humans don't have a crystal ball, so it's more than likely they'd be leading that particular neural network up a blind alley. You could maybe get around this somewhat by training up lots of neural networks instead of just one (with lots of different human trainers), but good luck finding enough data. In all honesty, I have to wonder whether just introducing a certain amount of randomness to the networks would handle the future as well as or better than any arbitrary team of human visionaries!

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?