AI Trained on 4Chan Becomes 'Hate Speech Machine':
AI researcher and YouTuber Yannic Kilcher trained an AI using 3.3 million threads from 4chan's infamously toxic Politically Incorrect /pol/ board. He then unleashed the bot back onto 4chan with predictable results—the AI was just as vile as the posts it was trained on, spouting racial slurs and engaging with antisemitic threads. After Kilcher posted his video and a copy of the program to Hugging Face, a kind of GitHub for AI, ethicists and researchers in the AI field expressed concern.
The bot, which Kilcher called GPT-4chan, "the most horrible model on the internet"—the name a nod to GPT-3, a language model developed by OpenAI that uses deep learning to produce text—was shockingly effective and replicated the tone and feel of 4chan posts. "The model was good in a terrible sense," Kilcher said in a video about the project. "It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/."
According to Kilcher's video, he activated nine instances of the bot and allowed them to post for 24 hours on /pol/. In that time, the bots posted around 15,000 times. This was "more than 10 percent of all posts made on the politically incorrect board that day," Kilcher said in his video about the project.
AI researchers viewed Kilcher's video as more than just a YouTube prank. For them, it was an unethical experiment using AI. "This experiment would never pass a human research #ethics board," Lauren Oakden-Rayner, the director of Research at the NeuroRehab Allied Health Network in Australia, said in a Twitter thread.
"Open science and software are wonderful principles but must be balanced against potential harm," she said. "Medical research has a strong ethics culture because we have an awful history of causing harm to people, usually from disempowered groups...he performed human experiments without informing users, without consent or oversight. This breaches every principle of human research ethics."
Just because something can be done doesn't mean it should be done. What are your views? Is this a harmless prank, a justified experiment, or something potentially more sinister?
(Score: 3, Funny) by looorg on Thursday June 09 2022, @10:14AM (5 children)
I assume they wanted to replicate Microsoft and Tay? After all, that one turned into a white supremacist just from "normal" interaction with people (and/or trolls). So how did they not see this coming when they used 4chan to nanny their bot? But it's nice that they managed to replicate the previous research. Now we know: 4chan etc. turns you into a "hate speech machine". It can probably be generalized: people in some kind of bubble or echo chamber take on and reinforce those tendencies. The bot is just trying to belong to the group it's copying.
Still, as someone noted here, it would be kind of funny if they found the most woke place on the net (I don't know where that would be), trained one there, and then let the two of them duke it out afterwards.
(Score: 3, Funny) by FatPhil on Thursday June 09 2022, @10:22AM (4 children)
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 2) by looorg on Thursday June 09 2022, @10:27AM (3 children)
How else are we going to find out which is the supreme ideology? Bots just have to duke it out ... for science!
(Score: 2) by FatPhil on Thursday June 09 2022, @11:00AM (2 children)
(Score: 2) by SunTzuWarmaster on Wednesday June 15 2022, @09:03PM (1 child)
Talk's cheap! Put up or shut up!
(Score: 2) by FatPhil on Thursday June 16 2022, @06:58AM