

posted by Fnord666 on Saturday July 04 2020, @10:14PM   Printer-friendly
from the garbage-in-garbage-out dept.

MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs:

MIT has taken offline its highly cited dataset that trained AI systems to potentially describe people using racist, misogynistic, and other problematic terms.

The database was removed this week after The Register alerted the American super-college. MIT also urged researchers and developers to stop using the training library, and to delete any copies. "We sincerely apologize," a professor told us.

The training set, built by the university, has been used to teach machine-learning models to automatically identify and list the people and objects depicted in still images. For example, if you show one of these systems a photo of a park, it might tell you about the children, adults, pets, picnic spreads, grass, and trees present in the snap. Thanks to MIT's cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language. The database also contained close-up pictures of female genitalia labeled with the C-word.

[...] Vinay Prabhu, chief scientist at UnifyID, a privacy startup in Silicon Valley, and Abeba Birhane, a PhD candidate at University College Dublin in Ireland, pored over the MIT database and discovered thousands of images labeled with racist slurs for Black and Asian people, and derogatory terms used to describe women. They revealed their findings in a paper [pre-print PDF] submitted to a computer-vision conference due to be held next year.

[...] The key problem is that the dataset includes, for example, pictures of Black people and monkeys labeled with the N-word; women in bikinis, or holding their children, labeled whores; parts of the anatomy labeled with crude terms; and so on – needlessly linking everyday imagery to slurs and offensive language, and baking prejudice and bias into future AI models.

Antonio Torralba, a professor of electrical engineering and computer science at CSAIL, said the lab wasn't aware these offensive images and labels were present within the dataset at all. "It is clear that we should have manually screened them," he told The Register. "For this, we sincerely apologize. Indeed, we have taken the dataset offline so that the offending images and categories can be removed."

In a statement on its website, however, CSAIL said the dataset will be permanently pulled offline because the images were too small for manual inspection and filtering by hand. The lab also admitted it automatically obtained the images from the internet without checking whether any offensive pics or language were ingested into the library, and it urged people to delete their copies of the data:

[...] Giant datasets like ImageNet and 80 Million Tiny Images are also often collected by scraping photos from Flickr or Google Images without people's explicit consent. Meanwhile, Facebook hired actors who agreed to have their faces used in a dataset designed to teach software to detect computer-generated faked images.

Prabhu and Birhane said the social network's approach was a good idea, though they noted academic studies are unlikely to have the funding to pay actors to star in training sets. "We acknowledge that there is no perfect solution to create an ideal dataset, but that doesn't mean people shouldn't try and create better ones," they said.

The duo suggested blurring people's faces in datasets focused on object recognition, carefully screening the images and labels to remove any offensive material, and even training systems using realistic synthetic data. "You don't need to include racial slurs, pornographic images, or pictures of children," they said. "Doing good science and keeping ethical standards is not mutually exclusive."
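The label-screening step the researchers suggest can be sketched as a simple deny-list filter applied before training. Everything below is an illustrative assumption, not MIT's or the researchers' actual tooling: the `(image_id, label)` pair format, the placeholder deny-list terms, and the `screen_labels` function name are all hypothetical.

```python
# Hypothetical sketch of deny-list screening for a labeled image dataset.
# Real offensive terms are replaced with placeholders; a real pipeline
# would also need human review of the flagged items, since automated
# filters miss context-dependent slurs.

DENY_LIST = {"slur_a", "slur_b", "explicit_term"}  # placeholder terms

def screen_labels(labeled_images, deny_list=DENY_LIST):
    """Split (image_id, label) pairs into kept pairs and pairs
    flagged for manual review."""
    kept, flagged = [], []
    for image_id, label in labeled_images:
        if label.lower() in deny_list:
            flagged.append((image_id, label))
        else:
            kept.append((image_id, label))
    return kept, flagged

dataset = [("img001", "tree"), ("img002", "slur_a"), ("img003", "dog")]
kept, flagged = screen_labels(dataset)
# kept    -> [("img001", "tree"), ("img003", "dog")]
# flagged -> [("img002", "slur_a")]
```

A deny-list alone is a blunt instrument; as CSAIL noted, the Tiny Images pictures were too small for manual inspection, which is partly why the lab chose to withdraw the set rather than attempt filtering.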


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Sunday July 05 2020, @04:44AM (2 children)

    by Anonymous Coward on Sunday July 05 2020, @04:44AM (#1016390)

    Ok nutter

  • (Score: 0) by Anonymous Coward on Sunday July 05 2020, @09:17AM (1 child)

    by Anonymous Coward on Sunday July 05 2020, @09:17AM (#1016457)

    Scratch a leftie, and banned words and hate speech stream out. Thanks for demoing, duplicitous cunt.

    • (Score: 0) by Anonymous Coward on Sunday July 05 2020, @08:41PM

      by Anonymous Coward on Sunday July 05 2020, @08:41PM (#1016636)

      My god you rightwing terrorists are such sensitive little bitches.