
posted by janrinok on Thursday June 09 2022, @05:35AM   Printer-friendly
from the itchin'-for-attention dept.

AI Trained on 4Chan Becomes 'Hate Speech Machine':

AI researcher and YouTuber Yannic Kilcher trained an AI using 3.3 million threads from 4chan's infamously toxic Politically Incorrect /pol/ board. He then unleashed the bot back onto 4chan with predictable results—the AI was just as vile as the posts it was trained on, spouting racial slurs and engaging with antisemitic threads. After Kilcher posted his video and a copy of the program to Hugging Face, a kind of GitHub for AI, ethicists and researchers in the AI field expressed concern.

The bot, which Kilcher called GPT-4chan, "the most horrible model on the internet" (the name is a nod to GPT-3, a language model developed by OpenAI that uses deep learning to produce text), was shockingly effective and replicated the tone and feel of 4chan posts. "The model was good in a terrible sense," Kilcher said in a video about the project. "It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol."

According to Kilcher's video, he activated nine instances of the bot and allowed them to post for 24 hours on /pol/. In that time, the bots posted around 15,000 times. This was "more than 10 percent of all posts made on the politically incorrect board that day," Kilcher said in his video about the project.
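A quick back-of-envelope check of those figures (taking the summary's numbers at face value) shows what that posting rate implies per bot, and what it suggests about the board's total traffic that day:

```python
# Back-of-envelope arithmetic from the figures in the summary.
# All inputs are the summary's round numbers, not measured data.
posts = 15_000   # total bot posts over the run
bots = 9         # instances Kilcher activated
hours = 24       # duration of the run

per_bot_per_hour = posts / bots / hours
print(f"~{per_bot_per_hour:.0f} posts per bot per hour")

# If 15,000 posts were "more than 10 percent" of the day's traffic,
# the board saw somewhat under 150,000 posts that day.
implied_board_total = posts / 0.10
print(f"implied board total: under {implied_board_total:,.0f} posts")
```

That works out to roughly a post every 52 seconds from each bot, sustained for a full day.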

AI researchers viewed Kilcher's video as more than just a YouTube prank. For them, it was an unethical experiment using AI. "This experiment would never pass a human research #ethics board," Lauren Oakden-Rayner, the director of Research at the NeuroRehab Allied Health Network in Australia, said in a Twitter thread.

"Open science and software are wonderful principles but must be balanced against potential harm," she said. "Medical research has a strong ethics culture because we have an awful history of causing harm to people, usually from disempowered groups...he performed human experiments without informing users, without consent or oversight. This breaches every principle of human research ethics."

Just because something can be done doesn't mean it should be done. What are your views? Is this a harmless prank, a justified experiment, or something potentially more sinister?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by stretch611 on Thursday June 09 2022, @06:48AM (13 children)

    by stretch611 (6199) on Thursday June 09 2022, @06:48AM (#1251762)

    So an AI trolls the human cesspool known as 4chan. Does anyone even care if the human garbage there gets insulted? (and tbh, people there should expect this when they post.)

    The only small problem is all the electricity wasted on running the AI there. And let's face it, the server the website sits on is just as wasteful.

    The real benefit of an AI on that board... at least the AI can be disconnected and turned off... if only that were the case for the other posters, the internet would be a better place.

    --
    Now with 5 covid vaccine shots/boosters altering my DNA :P
  • (Score: 1, Touché) by Anonymous Coward on Thursday June 09 2022, @07:55AM (8 children)

    by Anonymous Coward on Thursday June 09 2022, @07:55AM (#1251770)

    Such a pesky bother you can’t send us wrongthinkers into the gas chambers yet.

  • (Score: 2) by HiThere on Thursday June 09 2022, @01:15PM

    by HiThere (866) on Thursday June 09 2022, @01:15PM (#1251843) Journal

    Actually, there may be a theoretic benefit (i.e., a benefit in the development of theory): this is a test of the assumption that at least part of human speech patterns can be readily modeled. That holds, of course, only if the bot's posts really couldn't be distinguished from a human poster's.

    I tend to think it's true, but I can't imagine reading the garbage that must have been produced to validate that, and it would take more than just reading. You'd need at least a double-blind study with lots of readers trying to make the distinction.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
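The evaluation HiThere proposes boils down to a simple statistical test: show raters a mix of bot and human posts, have them guess which is which, and check whether their accuracy beats chance. A minimal sketch (the rater counts below are made up for illustration, not from any actual study):

```python
# Sketch of the proposed double-blind test: raters label posts as
# "human" or "bot"; if they can't beat 50% accuracy, the bot's output
# is indistinguishable. Counts here are hypothetical.
from math import comb

def binom_p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more correct guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_posts = 200   # posts shown to raters (hypothetical)
correct = 112   # posts correctly attributed (hypothetical)

accuracy = correct / n_posts
p_value = binom_p_at_least(correct, n_posts)
print(f"accuracy={accuracy:.2f}, P(>= {correct} correct by chance)={p_value:.3f}")
```

A p-value well above a chosen threshold would mean the raters' edge over coin-flipping could plausibly be luck, i.e., the bot passed this particular test.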
  • (Score: 3, Insightful) by DeathMonkey on Thursday June 09 2022, @04:03PM (2 children)

    by DeathMonkey (1380) on Thursday June 09 2022, @04:03PM (#1251890) Journal

    So an AI trolls the human cesspool known as 4chan.

    It wasn't "trying" to troll. It was trying to make good posts based on the forum content it was fed. And the "good" posts it produced based on that input were racist bullshit.

    What that says about the content of the forum is left as an exercise for the reader.

    • (Score: 1) by khallow on Friday June 10 2022, @05:45PM (1 child)

      by khallow (3766) Subscriber Badge on Friday June 10 2022, @05:45PM (#1252266) Journal

      A key thing missing from the story is what the bot was trained to do. I don't see any evidence that it was trained to make good posts. There's a bit of talk about the training set, but I think it's just as important to ask what the programmer was aiming for. It seems here to be an edgy bot that knows the latest hate speech.
      • (Score: 1, Informative) by Anonymous Coward on Friday June 10 2022, @07:46PM

        by Anonymous Coward on Friday June 10 2022, @07:46PM (#1252312)

        I think it was ultimately meant to generate YouTube views.