posted by hubie on Wednesday August 17 2022, @10:18PM   Printer-friendly
from the counting-on-the-integrity-of-the-Internet-to-do-what's-right dept.

A startup wants to democratize the tech behind DALL-E 2, consequences be damned – TechCrunch:

DALL-E 2, OpenAI's powerful text-to-image AI system, can create photos in the style of cartoonists, 19th century daguerreotypists, stop-motion animators and more. But it has an important, artificial limitation: a filter that prevents it from creating images depicting public figures and content deemed too toxic.
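The prompt-level gate described above can be sketched as a simple blocklist check. This toy Python version is an illustration only, not OpenAI's actual filter (real systems use trained classifiers, not keyword lists), and the term and name lists here are invented for the example:

```python
# Toy sketch of a prompt-level content filter, loosely modeled on the kind
# of gate DALL-E 2 applies before generation. The blocklist and matching
# logic are invented for illustration; production filters are trained
# classifiers, not keyword lists.
import re

BLOCKED_TERMS = {"gore", "nude"}        # hypothetical "toxic content" terms
PUBLIC_FIGURES = {"queen elizabeth"}    # hypothetical public-figure list

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt names a blocked term or public figure."""
    text = prompt.lower()
    words = set(re.findall(r"[a-z']+", text))
    if words & BLOCKED_TERMS:
        return False
    return not any(name in text for name in PUBLIC_FIGURES)
```

Under this sketch, `is_allowed("a corgi in a park")` passes while `is_allowed("Queen Elizabeth with a beard")` is rejected; Stable Diffusion's release simply omits any gate of this kind.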

Now an open source alternative to DALL-E 2 is on the cusp of being released, and it'll have few — if any — such content filters.

London- and Los Altos-based startup Stability AI this week announced the release of a DALL-E 2-like system, Stable Diffusion, to just over a thousand researchers ahead of a public launch in the coming weeks. A collaboration between Stability AI, media creation company RunwayML, Heidelberg University researchers and the research groups EleutherAI and LAION, Stable Diffusion is designed to run on most high-end consumer hardware, generating 512×512-pixel images in just a few seconds given any text prompt.

"Stable Diffusion will allow both researchers and soon the public to run this under a range of conditions, democratizing image generation," Stability AI CEO and founder Emad Mostaque wrote in a blog post. "We look forward to the open ecosystem that will emerge around this and further models to truly explore the boundaries of latent space."

But Stable Diffusion's lack of safeguards compared to systems like DALL-E 2 poses tricky ethical questions for the AI community. Even if the results aren't perfectly convincing yet, making fake images of public figures opens a large can of worms. And making the raw components of the system freely available leaves the door open to bad actors who could train them on subjectively inappropriate content, like pornography and graphic violence.

[...] "Our benchmark models that we release are based on general web crawls and are designed to represent the collective imagery of humanity compressed into files a few gigabytes big," Mostaque said. "Aside from illegal content, there is minimal filtering, and it is on the user to use it as they will."

[...] Mostaque acknowledged that the tools could be used by bad actors to create "really nasty stuff," and CompVis says that the public release of the benchmark Stable Diffusion model will "incorporate ethical considerations." But Mostaque argues that — by making the tools freely available — it allows the community to develop countermeasures.

"We hope to be the catalyst to coordinate global open source AI, both independent and academic, to build vital infrastructure, models and tools to maximize our collective potential," Mostaque said. "This is amazing technology that can transform humanity for the better and should be open infrastructure for all."

[...] Stable Diffusion contains little in the way of mitigations besides training dataset filtering. So what's to prevent someone from generating, say, photorealistic images of protests, pornographic pictures of underage actors, "evidence" of fake moon landings and general misinformation? Nothing really. But Mostaque says that's the point.

"A percentage of people are simply unpleasant and weird, but that's humanity," Mostaque said. "Indeed, it is our belief this technology will be prevalent, and the paternalistic and somewhat condescending attitude of many AI aficionados is misguided in not trusting society ... We are taking significant safety measures including formulating cutting-edge tools to help mitigate potential harms across release and our own services. With hundreds of thousands developing on this model, we are confident the net benefit will be immensely positive and as billions use this tech harms will be negated."

What could possibly go wrong?


Original Submission

Related Stories

Ethical AI art generation? Adobe Firefly may be the answer. 13 comments

https://arstechnica.com/information-technology/2023/03/ethical-ai-art-generation-adobe-firefly-may-be-the-answer/

On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, making its output clear for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.

Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.

Related:
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Adobe Stock Begins Selling AI-Generated Artwork
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned
Adobe Creative Cloud Experience Makes It Easier to Run Malware
Adobe Goes After 27-Year Old 'Pirated' Copy of Acrobat Reader 1.0 for MS-DOS
Adobe Critical Code-Execution Flaws Plague Windows Users
When Adobe Stopped Flash Content from Running it Also Stopped a Chinese Railroad
Adobe Has Finally and Formally Killed Flash
Adobe Lightroom iOS Update Permanently Deleted Users' Photos


Original Submission

Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI 15 comments

https://arstechnica.com/tech-policy/2023/04/stable-diffusion-copyright-lawsuits-could-be-a-legal-earthquake-for-ai/

The AI software Stable Diffusion has a remarkable ability to turn text into images. When I asked the software to draw "Mickey Mouse in front of a McDonald's sign," for example, it generated the picture you see above.

Stable Diffusion can do this because it was trained on hundreds of millions of example images harvested from across the web. Some of these images were in the public domain or had been published under permissive licenses such as Creative Commons. Many others were not—and the world's artists and photographers aren't happy about it.

In January, three visual artists filed a class-action copyright lawsuit against Stability AI, the startup that created Stable Diffusion. In February, the image-licensing giant Getty filed a lawsuit of its own.
[...]
The plaintiffs in the class-action lawsuit describe Stable Diffusion as a "complex collage tool" that contains "compressed copies" of its training images. If this were true, the case would be a slam dunk for the plaintiffs.

But experts say it's not true. Eric Wallace, a computer scientist at the University of California, Berkeley, told me in a phone interview that the lawsuit had "technical inaccuracies" and was "stretching the truth a lot." Wallace pointed out that Stable Diffusion is only a few gigabytes in size—far too small to contain compressed copies of all or even very many of its training images.

Related:
Ethical AI art generation? Adobe Firefly may be the answer. (20230324)
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns (20230206)
Getty Images Targets AI Firm For 'Copying' Photos (20230117)
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI (20220904)
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned (20220817)


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday August 17 2022, @10:33PM (1 child)

    by Anonymous Coward on Wednesday August 17 2022, @10:33PM (#1267251)

    It's a Silicon Valley startup
    Who is bankrolling this, and why?

    • (Score: 3, Informative) by Anonymous Coward on Wednesday August 17 2022, @10:59PM

      by Anonymous Coward on Wednesday August 17 2022, @10:59PM (#1267257)

      From TFA:

      Stable Diffusion is the brainchild of Mostaque. Having graduated from Oxford with a Masters in mathematics and computer science, Mostaque served as an analyst at various hedge funds before shifting gears to more public-facing works. In 2019, he co-founded Symmitree, a project that aimed to reduce the cost of smartphones and internet access for people living in impoverished communities. And in 2020, Mostaque was the chief architect of Collective & Augmented Intelligence Against COVID-19, an alliance to help policymakers make decisions in the face of the pandemic by leveraging software.

      He co-founded Stability AI in 2020, motivated both by a personal fascination with AI and what he characterized as a lack of “organization” within the open source AI community.

      “Nobody has any voting rights except our 75 employees — no billionaires, big funds, governments or anyone else with control of the company or the communities we support. We’re completely independent,” Mostaque told TechCrunch in an email. “We plan to use our compute to accelerate open source, foundational AI.”

  • (Score: 0) by Anonymous Coward on Wednesday August 17 2022, @10:53PM (1 child)

    by Anonymous Coward on Wednesday August 17 2022, @10:53PM (#1267254)

    AI gatekeepers can get bent.

    • (Score: 2) by DannyB on Thursday August 18 2022, @02:08PM

      by DannyB (5839) Subscriber Badge on Thursday August 18 2022, @02:08PM (#1267353) Journal

      They have a content filter in place to prevent that.

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
  • (Score: 4, Funny) by isj on Wednesday August 17 2022, @11:07PM (2 children)

    by isj (5249) on Wednesday August 17 2022, @11:07PM (#1267263) Homepage

    I've had craiyon work on "Queen Elizabeth with a beard, roasting corgies in a pit". No problem. The results were not that convincing, though.

    • (Score: 3, Troll) by bussdriver on Thursday August 18 2022, @12:41AM

      by bussdriver (6876) Subscriber Badge on Thursday August 18 2022, @12:41AM (#1267274)

      As we've all seen in recent years, you don't need any fake images to get millions of gullible people to believe any backward insane ideas and believe them to their death, if not clearly causing their own death. Maybe this will just increase the number of people who actually drink the bleach by a little bit.

    • (Score: 2) by sonamchauhan on Thursday August 18 2022, @05:39AM

      by sonamchauhan (6546) on Thursday August 18 2022, @05:39AM (#1267303)

      roasting corgies in a pit

      Maybe it's only because you misspelt 'Corgis'? (Says Google: https://www.google.com/search?q=corgies) [google.com]

  • (Score: 2, Troll) by corey on Wednesday August 17 2022, @11:26PM (2 children)

    by corey (2202) on Wednesday August 17 2022, @11:26PM (#1267268)

    As per subject. Now.

    • (Score: -1, Troll) by Anonymous Coward on Wednesday August 17 2022, @11:36PM

      by Anonymous Coward on Wednesday August 17 2022, @11:36PM (#1267269)

      As per subject. Now.

      Regulations are out of control! It's madness I tell ya!

    • (Score: 2) by DannyB on Thursday August 18 2022, @02:14PM

      by DannyB (5839) Subscriber Badge on Thursday August 18 2022, @02:14PM (#1267355) Journal

      We need AI to be in charge of regulating AI.

      Now before you inhale your diet coke and have a choking fit . . .

      Please consider. All sorts of self-dealing regulation is allowed:

      The music industry regulates copyright -- to the detriment of free speech and other rights.

      Corporations regulate corporate behavior.

      Rich people regulate everything about how rich people are not taxed and get special treatment to acquire more wealth on the backs of everyone else.

      Legislators regulate how legislators are permitted to conduct insider trading.

      And much more.

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
  • (Score: 2, Touché) by anubi on Thursday August 18 2022, @12:24AM

    by anubi (2828) on Thursday August 18 2022, @12:24AM (#1267272) Journal

    Maybe keep marking instruments out of the hands of cartoonists too?

    Geez, they have the skillz to show things as they see it!

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
  • (Score: 0) by Anonymous Coward on Thursday August 18 2022, @12:44AM (2 children)

    by Anonymous Coward on Thursday August 18 2022, @12:44AM (#1267275)

    "the paternalistic and somewhat condescending attitude of many AI aficionados is misguided in not trusting society"

    Will this guy be funding unrestricted genetic engineering too?

    • (Score: 0, Troll) by Anonymous Coward on Thursday August 18 2022, @02:12AM (1 child)

      by Anonymous Coward on Thursday August 18 2022, @02:12AM (#1267287)

      I hope so. We can fuck catgirls while our desktops churn out millions of racist memes.

      • (Score: -1, Troll) by Anonymous Coward on Thursday August 18 2022, @04:30AM

        by Anonymous Coward on Thursday August 18 2022, @04:30AM (#1267300)

        I hope so. We can fuck catgirls

        With our penis tentacles
