
posted by n1 on Friday April 14 2017, @06:37AM
from the machines-like-us dept.

Surprise: If you use the Web to train your artificial intelligence, it will be biased:

One of the great promises of artificial intelligence (AI) is a world free of petty human biases. Hiring by algorithm would give men and women an equal chance at work, the thinking goes, and predicting criminal behavior with big data would sidestep racial prejudice in policing. But a new study shows that computers can be biased as well, especially when they learn from us. When algorithms glean the meaning of words by gobbling up lots of human-written text, they adopt stereotypes very similar to our own. "Don't think that AI is some fairy godmother," says study co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "AI is just an extension of our existing culture."

The work was inspired by a psychological tool called the implicit association test, or IAT. In the IAT, words flash on a computer screen, and the speed at which people react to them indicates subconscious associations. Both black and white Americans, for example, are faster at associating names like "Brad" and "Courtney" with words like "happy" and "sunrise," and names like "Leroy" and "Latisha" with words like "hatred" and "vomit" than vice versa.

To test for similar bias in the "minds" of machines, Bryson and colleagues developed a word-embedding association test (WEAT). They started with an established set of "word embeddings," basically a computer's definition of a word, based on the contexts in which the word usually appears. So "ice" and "steam" have similar embeddings, because both often appear within a few words of "water" and rarely with, say, "fashion." But to a computer an embedding is represented as a string of numbers, not a definition that humans can intuitively understand. Researchers at Stanford University generated the embeddings used in the current paper by analyzing hundreds of billions of words on the internet.
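(Concretely, each embedding is just a vector of numbers, and "similar" means the vectors point in roughly the same direction. Here is a minimal sketch of that comparison in Python; the vectors below are made-up toy values, not the actual Stanford-built embeddings used in the paper.)

    import math

    # Toy embeddings, invented for illustration; real ones have hundreds of
    # dimensions and are learned from the contexts words appear in.
    embeddings = {
        "ice":     [0.8, 0.1, 0.7],   # often near "water", "cold", ...
        "steam":   [0.7, 0.2, 0.6],   # also often near "water"
        "fashion": [0.1, 0.9, 0.0],   # appears in very different contexts
    }

    def cosine_similarity(a, b):
        """Similarity of two word vectors: near 1.0 = very similar, near 0 = unrelated."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    print(cosine_similarity(embeddings["ice"], embeddings["steam"]))    # high
    print(cosine_similarity(embeddings["ice"], embeddings["fashion"]))  # low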

Instead of measuring human reaction time, the WEAT computes the similarity between those strings of numbers. Using it, Bryson's team found that the embeddings for names like "Brett" and "Allison" were more similar to those for positive words like "love" and "laughter," while those for names like "Alonzo" and "Shaniqua" were more similar to those for negative words like "cancer" and "failure." To the computer, bias was baked into the words.
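(The score itself is simple: for each name, take its average similarity to a list of pleasant words minus its average similarity to a list of unpleasant words, then compare the totals for the two groups of names. A rough sketch of that calculation follows, assuming a get_vector() lookup into pretrained embeddings; the function name and toy word lists are placeholders, not the authors' code.)

    import math

    def cosine(a, b):
        """Cosine similarity between two word vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    def association(word_vec, pleasant_vecs, unpleasant_vecs):
        """How much closer one word sits to pleasant words than to unpleasant ones."""
        mean_pleasant = sum(cosine(word_vec, v) for v in pleasant_vecs) / len(pleasant_vecs)
        mean_unpleasant = sum(cosine(word_vec, v) for v in unpleasant_vecs) / len(unpleasant_vecs)
        return mean_pleasant - mean_unpleasant

    def weat_statistic(names_x, names_y, pleasant, unpleasant, get_vector):
        """Difference in total pleasant-vs-unpleasant association between two
        groups of names; a value far from zero means the groups are treated differently."""
        p_vecs = [get_vector(w) for w in pleasant]
        u_vecs = [get_vector(w) for w in unpleasant]
        x_score = sum(association(get_vector(n), p_vecs, u_vecs) for n in names_x)
        y_score = sum(association(get_vector(n), p_vecs, u_vecs) for n in names_y)
        return x_score - y_score

    # Tiny made-up example: a positive result means the first group leans "pleasant".
    toy = {"Brett": [0.9, 0.1], "Alonzo": [0.1, 0.9],
           "love": [0.8, 0.2], "cancer": [0.2, 0.8]}
    print(weat_statistic(["Brett"], ["Alonzo"], ["love"], ["cancer"], toy.get))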

I swear this is not a politics story.


Original Submission

 
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday April 14 2017, @06:00PM (#494118)

    It does seem orchestrated. Not exactly a surprise in light of organizations like JournoList [wikipedia.org]-cum-Cabalist [theatlantic.com] and other similar groups. What I always wonder about, though, is the motivation for this sort of thing. In many cases it's obvious, but in this case I wonder. The 'messaging' here is predictably backfiring. Even on that amazing fount of human intelligentsia that is Twitter, people are quick to observe that the algorithms aren't biased, but rather that objective analysis of the data is yielding conclusions that many people would rather not hear.

    In any case, I imagine the release is political. So we're left to assume it's either a smart right-leaning group or a dumb left-leaning group. My money's on Hanlon's razor.