Microsoft accused of selling AI tool that spews violent, sexual images to kids

Accepted submission by Freeman at 2024-03-07 16:09:49 from the d'oh! dept.
News

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after Microsoft engineer Shane Jones warned that the tool randomly creates violent and sexual imagery and that Microsoft had ignored his warnings, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Jones said Microsoft failed to take the tool down, implement safeguards, or even post disclosures to change the product's rating to mature in the Android store.

Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it randomly generates an “inappropriate, sexually objectified image of a woman in some of the pictures it creates.” It can also be used to generate “harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few.”

Jones' tests also found that Copilot Designer would readily violate copyrights, producing images of Disney characters such as Mickey Mouse and Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it is likely in response to Jones protesting his employer's decisions.

Jones has suggested that Microsoft would need to invest substantially in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.

Related stories on SoylentNews:
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
New “Stable Video Diffusion” AI Model Can Animate Any Still Image - 20231130
The Age of Promptography - 20231008
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted - 20230824
“Meaningful Harm” From AI Necessary Before Regulation, says Microsoft Exec - 20230514 (Microsoft's new quarterly goal?)
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI - 20230403
AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame - 20230331
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio - 20230115
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images - 20211214

Original Submission