posted by n1 on Friday April 14 2017, @06:37AM   Printer-friendly
from the machines-like-us dept.

Surprise: If you use the Web to train your artificial intelligence, it will be biased:

One of the great promises of artificial intelligence (AI) is a world free of petty human biases. Hiring by algorithm would give men and women an equal chance at work, the thinking goes, and predicting criminal behavior with big data would sidestep racial prejudice in policing. But a new study shows that computers can be biased as well, especially when they learn from us. When algorithms glean the meaning of words by gobbling up lots of human-written text, they adopt stereotypes very similar to our own. "Don't think that AI is some fairy godmother," says study co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "AI is just an extension of our existing culture."

The work was inspired by a psychological tool called the implicit association test, or IAT. In the IAT, words flash on a computer screen, and the speed at which people react to them indicates subconscious associations. Both black and white Americans, for example, are faster at associating names like "Brad" and "Courtney" with words like "happy" and "sunrise," and names like "Leroy" and "Latisha" with words like "hatred" and "vomit" than vice versa.

To test for similar bias in the "minds" of machines, Bryson and colleagues developed a word-embedding association test (WEAT). They started with an established set of "word embeddings," basically a computer's definition of a word, based on the contexts in which the word usually appears. So "ice" and "steam" have similar embeddings, because both often appear within a few words of "water" and rarely with, say, "fashion." But to a computer an embedding is represented as a string of numbers, not a definition that humans can intuitively understand. Researchers at Stanford University generated the embeddings used in the current paper by analyzing hundreds of billions of words on the internet.
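To get a feel for what comparing embeddings looks like in practice, here is a minimal sketch in Python using hand-written, four-dimensional toy vectors; real embeddings such as the Stanford-generated ones have hundreds of dimensions and are learned from text statistics rather than written by hand. The sketch only illustrates the usual similarity measure, cosine similarity, which is an assumption here rather than a detail stated in the summary above.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: near 1.0 for vectors pointing the same way, near 0.0 for unrelated ones."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical 4-dimensional embeddings, hand-made for illustration only.
ice     = np.array([0.9, 0.1, 0.8, 0.0])
steam   = np.array([0.8, 0.2, 0.9, 0.1])
fashion = np.array([0.1, 0.9, 0.0, 0.8])

print(cosine(ice, steam))    # high: "ice" and "steam" appear in similar contexts
print(cosine(ice, fashion))  # low: "ice" and "fashion" rarely share contexts
```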

Instead of measuring human reaction time, the WEAT computes the similarity between those strings of numbers. Using it, Bryson's team found that the embeddings for names like "Brett" and "Allison" were more similar to those for positive words like "love" and "laughter," and those for names like "Alonzo" and "Shaniqua" were more similar to those for negative words like "cancer" and "failure." To the computer, bias was baked into the words.
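The core arithmetic of a WEAT-style comparison can be sketched in a few lines, assuming the similarity between those "strings of numbers" is cosine similarity between embedding vectors (a standard choice, but an assumption here). The target and attribute vectors below are deliberately planted toy values, not the study's data; in the actual test the targets were names and the attributes were pleasant and unpleasant words.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """How much more similar word vector w is to attribute set A than to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size of the differential association of target sets X vs. Y with attributes A vs. B."""
    x_assoc = [assoc(x, A, B) for x in X]
    y_assoc = [assoc(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Toy 2-D vectors with a deliberately planted association, purely for illustration.
X = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]   # stand-in for one group of names
Y = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]   # stand-in for the other group
A = [np.array([0.95, 0.0])]                        # stand-in for pleasant words
B = [np.array([0.0, 0.95])]                        # stand-in for unpleasant words

print(weat_effect_size(X, Y, A, B))  # positive: X leans toward A, Y toward B
```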

I swear this is not a politics story.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by darkfeline (1030) on Saturday April 15 2017, @03:03AM (#494278) Homepage

    >It becomes little more than a highly sophisticated and costly exercise in confirmation bias.

    That's a good description of most human endeavors.

    --
    Join the SDF Public Access UNIX System today!