AI Trained on 4Chan Becomes 'Hate Speech Machine':
AI researcher and YouTuber Yannic Kilcher trained an AI using 3.3 million threads from 4chan's infamously toxic Politically Incorrect /pol/ board. He then unleashed the bot back onto 4chan with predictable results—the AI was just as vile as the posts it was trained on, spouting racial slurs and engaging with antisemitic threads. After Kilcher posted his video and a copy of the program to Hugging Face, a kind of GitHub for AI, ethicists and researchers in the AI field expressed concern.
The bot, which Kilcher called GPT-4chan ("the most horrible model on the internet"), a nod to GPT-3, the language model developed by OpenAI that uses deep learning to produce text, was shockingly effective and replicated the tone and feel of 4chan posts. "The model was good in a terrible sense," Kilcher said in a video about the project. "It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol."
According to Kilcher's video, he activated nine instances of the bot and allowed them to post for 24 hours on /pol/. In that time, the bots posted around 15,000 times. This was "more than 10 percent of all posts made on the politically incorrect board that day," Kilcher said in his video about the project.
AI researchers viewed Kilcher's video as more than just a YouTube prank. For them, it was an unethical experiment using AI. "This experiment would never pass a human research #ethics board," Lauren Oakden-Rayner, the director of research at the NeuroRehab Allied Health Network in Australia, said in a Twitter thread.
"Open science and software are wonderful principles but must be balanced against potential harm," she said. "Medical research has a strong ethics culture because we have an awful history of causing harm to people, usually from disempowered groups...he performed human experiments without informing users, without consent or oversight. This breaches every principle of human research ethics."
Just because something can be done doesn't mean it should be done. What are your views? Is this a harmless prank, a justified experiment, or something potentially more sinister?
(Score: 5, Interesting) by helel on Thursday June 09 2022, @06:42AM (2 children)
Given that "AI" just regurgitates the same material it was fed, it seems much more likely that it is producing the most common form of speech, not the "hidden truths" that people are too afraid/polite to say. If people weren't saying it, the bot wouldn't say it either, and if racist assholes think that a computer repeating their own drivel back to them is some kind of prophet of truth, then they're even stupider than I ever thought they were before.
(Score: 4, Insightful) by DeathMonkey on Thursday June 09 2022, @03:55PM
It's the exact opposite of "hidden truths" because it is only affected by what people are actually saying!
(Score: 0) by Anonymous Coward on Friday June 10 2022, @06:29PM
> a computer repeating their own drivel back to them
It sounds like a perfect sort of Sisyphean punishment for garbage spewers. I already believe bots are mass-spewing over pretty much all forums - a bit of divide and conquer, misinformation, anti-vax, pro-gun, and we've got ourselves a nice bonfire!