
Paper: Stable Diffusion “memorizes” some images, sparking privacy concerns

Accepted submission by Freeman at 2023-02-03 16:33:07 from the copyright dept.
News

https://arstechnica.com/information-technology/2023/02/researchers-extract-training-images-from-stable-diffusion-but-its-difficult/ [arstechnica.com]

On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper [arxiv.org] outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion [arstechnica.com]. The results challenge the view that image synthesis models do not memorize their training data and that training data might remain private if not disclosed [arxiv.org].
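Conceptually, the paper's extraction approach generates many images from the same training caption and flags a result as likely memorized when a large group of the generations are near-duplicates of one another. Here is a minimal, hypothetical sketch of that duplicate test only (not the authors' code; the distance threshold and clique size below are placeholder values, and real implementations compare high-dimensional image embeddings rather than toy vectors):

```python
# Hedged sketch: flag a prompt's generations as likely "memorized" if
# enough of them cluster tightly together (near-duplicates), following
# the clique-based memorization test described in the paper.

def l2(a, b):
    """Euclidean distance between two flattened image vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def likely_memorized(samples, threshold=1.0, min_clique=3):
    """samples: list of flattened image vectors generated from one prompt.

    Returns True if some sample has at least (min_clique - 1) other
    samples within `threshold` distance of it, i.e. the model keeps
    producing near-identical outputs for this prompt.
    """
    for i, s in enumerate(samples):
        close = [t for j, t in enumerate(samples)
                 if j != i and l2(s, t) < threshold]
        if len(close) + 1 >= min_clique:
            return True
    return False
```

In this toy form, a prompt whose generations pile up in one spot of the vector space trips the test, while a prompt producing diverse outputs does not; the paper applies the same idea at scale, sampling hundreds of images per training caption.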

Recently, AI image synthesis models have been the subject of intense ethical debate [arstechnica.com] and even legal action [arstechnica.com]. Proponents and opponents of generative AI tools regularly argue [theverge.com] over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

Related:
Getty Images Targets AI Firm For 'Copying' Photos [soylentnews.org]


Original Submission