SoylentNews is people

posted by janrinok on Thursday October 17, @10:09PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Activision Blizzard reports that exposure to toxic voice chat in Call of Duty has declined by 43 percent since the beginning of this year. The publisher credits the recent implementation of AI-based moderation for the results, which have convinced it to expand its use when Call of Duty: Black Ops 6 launches on October 25.

The publisher introduced the moderator, ToxMod, when it launched Call of Duty: Modern Warfare III last November. According to ToxMod's website, the system analyzes text transcripts of in-game voice chat and detects keywords based on multiple factors.

To tell the difference between trash talk and harassment, ToxMod monitors keywords and reactions to comments, recognizing player emotions and understanding the game's code of conduct. Furthermore, the system estimates the perceived age and gender of the speaker and recipients to understand each conversation's context better.
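The combination described above — a keyword hit that only counts as harassment when the recipients' reactions suggest harm — can be sketched as a toy classifier. Everything here (the `Utterance` type, the keyword and reaction sets, the `flag_for_review` function) is an invented illustration of the idea; ToxMod's actual model is proprietary and not public.

```python
from dataclasses import dataclass

# Placeholder word lists; the real system's vocabulary and signals are unknown.
HARASSMENT_KEYWORDS = {"slur1", "slur2", "threat"}
NEGATIVE_REACTIONS = {"mute", "report", "leaves_chat"}

@dataclass
class Utterance:
    transcript: str        # text transcript of the voice-chat line
    reactions: list        # recipients' observed responses to it

def flag_for_review(utterance: Utterance) -> bool:
    """Flag an utterance for human review; the system never acts on its own.

    A keyword alone is treated as trash talk; only a keyword combined
    with a hostile reaction from the recipients gets flagged.
    """
    words = set(utterance.transcript.lower().split())
    keyword_hit = bool(words & HARASSMENT_KEYWORDS)
    hostile_context = any(r in NEGATIVE_REACTIONS for r in utterance.reactions)
    return keyword_hit and hostile_context
```

So `flag_for_review(Utterance("gg threat lol", ["laugh"]))` stays unflagged as trash talk, while the same words followed by a `"report"` reaction would be queued for a human moderator.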

ToxMod can't ban anyone on its own, but it can quickly flag violations for review by human moderators. Activision then decides whether to warn or mute players, only issuing bans after repeated infractions. The number of repeat offenders in Modern Warfare III and Call of Duty: Warzone fell by 67 percent since Activision implemented improvements in June 2024. In July alone, 80 percent of players caught violating voice chat rules didn't re-offend.
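The warn-then-mute-then-ban escalation described above amounts to a simple ladder keyed on prior infractions. The sketch below is a guess at the shape of such a policy; the actual thresholds Activision uses are not published, and the three-step ladder here is an invented placeholder.

```python
# Hypothetical escalation ladder: warn or mute first, ban only after
# repeated infractions, per the summary. Step count is an assumption.
ACTIONS = ["warn", "mute", "ban"]

def next_action(prior_infractions: int) -> str:
    """Return the moderation action for a player's next confirmed violation."""
    return ACTIONS[min(prior_infractions, len(ACTIONS) - 1)]
```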

All regions except Asia currently support ToxMod, and Call of Duty uses the system to moderate voice chat in English, Spanish, and Portuguese. When Black Ops 6 launches, Activision will add support for French and German.

Meanwhile, text-based chat and username moderation expanded from 14 to 20 languages in August. Community Sift has been Call of Duty's text moderation partner since the first Modern Warfare reboot game launched in 2019, blocking 45 million messages since Modern Warfare III's November release.

Using AI to moderate player behavior is far less controversial than employing the technology for in-game assets. Late last year, AI-generated art appeared in content skins for Modern Warfare III. Amid the historic number of gaming industry layoffs occurring around that time, some feared that publishers would try to replace artists with AI models.



This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Touché) by Rosco P. Coltrane on Thursday October 17, @11:39PM

    by Rosco P. Coltrane (4757) on Thursday October 17, @11:39PM (#1377503)

    Now 43% less toxic.

  • (Score: 5, Funny) by JoeMerchant on Friday October 18, @12:01AM (7 children)

    by JoeMerchant (3937) on Friday October 18, @12:01AM (#1377505)

    AI constitutes an algorithm that "knows toxicity when it sees it."

    Same algorithm bans toxic speech.

    Same algorithm measures a reduction in toxic speech... duh.

    --
    🌻🌻🌻 [google.com]
    • (Score: 3, Interesting) by aafcac on Friday October 18, @12:53AM (2 children)

      by aafcac (17646) on Friday October 18, @12:53AM (#1377507)

I was wondering about that. Did they go by how many reports they were getting, was the AI itself saying so, or was there just a 40% reduction in people in those chats?

      • (Score: 3, Funny) by Gaaark on Friday October 18, @02:09AM (1 child)

        by Gaaark (41) on Friday October 18, @02:09AM (#1377512) Journal

The AI measured the reduction in toxicity by 'Exterminating' banning all the toxic players.

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
        • (Score: 2) by aafcac on Friday October 18, @08:59PM

          by aafcac (17646) on Friday October 18, @08:59PM (#1377614)

          The next version will cut it in half by simply swatting the people deemed to be problematic.

    • (Score: 3, Insightful) by Mykl on Friday October 18, @04:52AM (1 child)

      by Mykl (1112) on Friday October 18, @04:52AM (#1377537)

      Great success! Medals all around!

This reminds me of the humor of our state's High School graduation marks each year. The press and the government go all breathless talking about the "highest scoring cohort ever!" without mentioning that, since it's a standardized test, the scores are entirely up to them to decide.

      • (Score: 2) by JoeMerchant on Friday October 18, @01:21PM

        by JoeMerchant (3937) on Friday October 18, @01:21PM (#1377569)

By the time a senior class graduates, 20% or more have dropped out. That tends to skew average test scores dramatically higher through the years, as dropouts are dropped from the average.

        --
        🌻🌻🌻 [google.com]
    • (Score: 3, Informative) by KritonK on Friday October 18, @05:10AM (1 child)

      by KritonK (465) on Friday October 18, @05:10AM (#1377538)

      Same algorithm bans toxic speech.

      According to the summary, ToxMod can't ban anyone on its own, but it can quickly flag violations for review by human moderators.

  • (Score: 3, Interesting) by KritonK on Friday October 18, @05:22AM

    by KritonK (465) on Friday October 18, @05:22AM (#1377539)

A 43 percent drop in toxicity means that 57 percent of the toxicity remains.

    I wonder how they measured that. If they already had a system in place that can identify 100% of toxic speech, so that they can measure ToxMod's performance, why don't they actually use it, to reduce toxicity to 0% and obviate the need for something like ToxMod? If they do not, how did they measure ToxMod's performance?

    It's more likely that the best that they can do is to measure the toxicity of a random sample of comments, and that this random sample has less toxicity after the use of ToxMod, presumably because the comments flagged by ToxMod and presented to human moderators for review contain more true positives than the random samples that they had been reviewing up to now.
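The sampling approach this comment describes — measure the toxic fraction of a random sample of chat before and after, and report the relative drop — can be sketched directly. The `estimate_toxicity_rate` helper, the 10% / 5.7% base rates, and the sample sizes below are all invented for illustration; nothing here reflects how Activision actually measured its 43 percent figure.

```python
import math
import random

def estimate_toxicity_rate(sample, flag, z=1.96):
    """Estimate a population's toxic fraction from a random sample.

    `flag` is any classifier (human reviewer, keyword filter, ...).
    Returns the point estimate plus a normal-approximation 95%
    confidence interval.
    """
    n = len(sample)
    p = sum(1 for msg in sample if flag(msg)) / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical before/after populations: a 10% toxic rate falling to
# 5.7% would correspond to the reported 43% relative drop.
random.seed(0)
before = ["toxic" if random.random() < 0.100 else "ok" for _ in range(20000)]
after = ["toxic" if random.random() < 0.057 else "ok" for _ in range(20000)]
is_toxic = lambda msg: msg == "toxic"

p_before, *_ = estimate_toxicity_rate(before, is_toxic)
p_after, *_ = estimate_toxicity_rate(after, is_toxic)
relative_drop = 1 - p_after / p_before
print(f"before={p_before:.3f} after={p_after:.3f} drop={relative_drop:.0%}")
```

Note that this only measures what the classifier in `flag` can see — which is exactly the commenter's point about true positives in the reviewed samples.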

  • (Score: 2) by looorg on Friday October 18, @08:46AM

    by looorg (578) on Friday October 18, @08:46AM (#1377557)

So it's the AI algo, then. Sure. It has nothing to do with the fact that if they find you out, they'll BAN your account. It's basically self-control, because you don't want your toy (or game) taken away from you if you sit around the CoD chat sounding like you have Tourette syndrome.

Do you really want to pay €69.99 every time you feel like verbally abusing other players? That probably gets old quite fast. I would think that would be a bigger factor than players just starting to behave because the "ToxMod" AI is there to monitor your chatting habits.

  • (Score: 2, Funny) by echostorm on Friday October 18, @09:08AM

    by echostorm (210) on Friday October 18, @09:08AM (#1377560)

    more like 43% less fun!
    The XBOX lobby was the best part of the console in the 360 days.
    The constant chatter, autistic screeching, and in-depth analysis of every mother's indiscretion was truly the golden age of gaming.

  • (Score: 2) by ikanreed on Friday October 18, @06:39PM (1 child)

    by ikanreed (3164) on Friday October 18, @06:39PM (#1377597) Journal

    Part of me fears that there will be no place to go when I want toxicity.

    It's never come up, but the loss of the option still scares me.

    • (Score: 1) by khallow on Saturday October 19, @05:53AM

      by khallow (3766) Subscriber Badge on Saturday October 19, @05:53AM (#1377665) Journal

      Part of me fears that there will be no place to go when I want toxicity.

Go back to Candy Crush, loser.

  • (Score: 0) by Anonymous Coward on Friday October 18, @10:20PM

    by Anonymous Coward on Friday October 18, @10:20PM (#1377625)

You can disable comments or ratings for an even greater reduction in "toxicity". One-way mediums are truly the future. Better yet, turn all user participation into advertisements "posted" by bots.
