

posted by hubie on Tuesday February 07 2023, @06:26AM   Printer-friendly
from the copyright dept.

But out of 300,000 high-probability images tested, researchers found a 0.03% memorization rate:

On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. It challenges views that image synthesis models do not memorize their training data and that training data might remain private if not disclosed.
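The attack described in the paper works, at a high level, by generating many images for captions drawn from the training set and flagging a caption as "memorized" when the generations collapse onto near-identical outputs. A minimal, illustrative sketch of that clustering test follows; the function name, distance metric, and threshold are assumptions for illustration, not the paper's actual code:

```python
import numpy as np

def is_memorized(generations: list, threshold: float = 0.1) -> bool:
    """Flag a prompt as memorized if its generations cluster tightly.

    generations: list of images as flat float arrays with values in [0, 1].
    threshold: max mean pairwise L2 distance (normalized per pixel) to count.
    """
    n = len(generations)
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            # Normalize by sqrt(size) so the score is per-pixel RMS difference
            d = np.linalg.norm(generations[i] - generations[j]) / np.sqrt(generations[i].size)
            dists.append(d)
    return float(np.mean(dists)) < threshold

# Toy demo: near-identical "generations" vs. diverse ones.
rng = np.random.default_rng(0)
base = rng.random(64)
dup_like = [base + rng.normal(0, 0.01, 64) for _ in range(5)]  # tight cluster
diverse = [rng.random(64) for _ in range(5)]                   # unrelated outputs
print(is_memorized(dup_like))   # tight cluster suggests memorization
print(is_memorized(diverse))    # diverse outputs do not
```

The real attack also compares candidate clusters back against the training images; this sketch only shows the duplicate-detection step.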

Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

Related:
Getty Images Targets AI Firm For 'Copying' Photos


Original Submission

Related Stories

Getty Images Targets AI Firm For 'Copying' Photos 19 comments

US firm Getty Images on Tuesday threatened to sue a tech company it accuses of illegally copying millions of photos for use in an artificial intelligence (AI) art tool:

Getty, which distributes stock images and news photos including those of AFP, accused Stability AI of profiting from its pictures and those of its partners. Stability AI runs a tool called Stable Diffusion that allows users to generate mash-up images from a few words of text, but the firm trains it on material scraped from the web, often without permission.

The question of copyright is still in dispute, with creators and artists arguing that the tools infringe their intellectual property and AI firms claiming they are protected under "fair use" rules.

Tools like Stable Diffusion and Dall-E 2 exploded in popularity last year, quickly becoming a global sensation with absurd images in the style of famous artists flooding social media.

Original Submission

You Can Now Run a GPT-3-Level AI Model on Your Laptop, Phone, and Raspberry Pi 30 comments

https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/

Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly).
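For readers who want to try this themselves, the basic llama.cpp workflow at the time looked roughly like the following. The commands are illustrative only: flag names, script names, and model paths changed frequently between early releases, and you must supply your own LLaMA weights.

```shell
# Build llama.cpp from source (requires git and a C/C++ toolchain)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Convert and quantize LLaMA weights you already have (paths illustrative)
python3 convert-pth-to-ggml.py models/7B/ 1
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# Run inference locally on the CPU
./main -m ./models/7B/ggml-model-q4_0.bin -p "The first man on the moon was" -n 128
```

The 4-bit quantization step is what makes the 7B model small enough to fit in the RAM of a laptop or even a Raspberry Pi, at the cost of some output quality.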

If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.
[...]
For example, here's a list of notable LLaMA-related events based on a timeline Willison laid out in a Hacker News comment:

Related:
DuckDuckGo's New Wikipedia Summary Bot: "We Fully Expect It to Make Mistakes"
Robots Let ChatGPT Touch the Real World Thanks to Microsoft (Article has a bunch of other SoylentNews related links as well.)
Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI


Original Submission

Ethical AI art generation? Adobe Firefly may be the answer. 13 comments

https://arstechnica.com/information-technology/2023/03/ethical-ai-art-generation-adobe-firefly-may-be-the-answer/

On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, making its output clear for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.

Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.

Related:
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Adobe Stock Begins Selling AI-Generated Artwork
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned
Adobe Creative Cloud Experience Makes It Easier to Run Malware
Adobe Goes After 27-Year Old 'Pirated' Copy of Acrobat Reader 1.0 for MS-DOS
Adobe Critical Code-Execution Flaws Plague Windows Users
When Adobe Stopped Flash Content from Running it Also Stopped a Chinese Railroad
Adobe Has Finally and Formally Killed Flash
Adobe Lightroom iOS Update Permanently Deleted Users' Photos


Original Submission

Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI 15 comments

https://arstechnica.com/tech-policy/2023/04/stable-diffusion-copyright-lawsuits-could-be-a-legal-earthquake-for-ai/

The AI software Stable Diffusion has a remarkable ability to turn text into images. When I asked the software to draw "Mickey Mouse in front of a McDonald's sign," for example, it generated the picture you see above.

Stable Diffusion can do this because it was trained on hundreds of millions of example images harvested from across the web. Some of these images were in the public domain or had been published under permissive licenses such as Creative Commons. Many others were not—and the world's artists and photographers aren't happy about it.

In January, three visual artists filed a class-action copyright lawsuit against Stability AI, the startup that created Stable Diffusion. In February, the image-licensing giant Getty filed a lawsuit of its own.
[...]
The plaintiffs in the class-action lawsuit describe Stable Diffusion as a "complex collage tool" that contains "compressed copies" of its training images. If this were true, the case would be a slam dunk for the plaintiffs.

But experts say it's not true. Eric Wallace, a computer scientist at the University of California, Berkeley, told me in a phone interview that the lawsuit had "technical inaccuracies" and was "stretching the truth a lot." Wallace pointed out that Stable Diffusion is only a few gigabytes in size—far too small to contain compressed copies of all or even very many of its training images.
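Wallace's size argument is easy to sanity-check with back-of-the-envelope arithmetic. Assuming a LAION-scale training set of roughly two billion images and a checkpoint of a few gigabytes (both figures are assumptions for illustration, not taken from the article), the model has room for only a couple of bytes per training image, far less than any usable compressed copy:

```python
# Back-of-the-envelope: how many bytes per training image could a
# few-gigabyte checkpoint possibly devote to "compressed copies"?
model_bytes = 4 * 1024**3      # assume a ~4 GiB Stable Diffusion checkpoint
num_images = 2_000_000_000     # assume a LAION-2B-scale training set
bytes_per_image = model_bytes / num_images
print(f"{bytes_per_image:.2f} bytes per training image")
```

Even a heavily compressed thumbnail takes kilobytes, so wholesale storage of the training set is impossible; what the memorization paper shows is that a tiny fraction of images (often heavily duplicated in the training data) can nonetheless be reproduced.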

Related:
Ethical AI art generation? Adobe Firefly may be the answer. (20230324)
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns (20230206)
Getty Images Targets AI Firm For 'Copying' Photos (20230117)
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI (20220904)
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned (20220817)


Original Submission

As Europeans Strike First to Rein in AI, the US Follows 9 comments

The European Union is writing legislation that would hold accountable companies that create generative AI platforms:

A proposed set of rules by the European Union would, among other things, require makers of generative AI tools such as ChatGPT to publicize any copyrighted material used by the technology platforms to create content of any kind.

A new draft of the European Parliament's legislation, a copy of which was obtained by The Wall Street Journal, would allow the original creators of content used by generative AI applications to share in any profits that result.

The European Union's "Artificial Intelligence Act" (AI Act) is the first of its kind by a western set of nations. The proposed legislation relies heavily on existing rules, such as the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act. The AI Act was originally proposed by the European Commission in April 2021.

The bill's provisions also require that the large language models (LLMs) behind generative AI tech, such as GPT-4, be designed with adequate safeguards against generating content that violates EU laws; that could include child pornography or, in some EU countries, denial of the Holocaust, according to The Washington Post.

[...] But the solution to keeping AI honest isn't easy, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research. It's likely that LLM creators, such as San Francisco-based OpenAI and others, will need to develop powerful LLMs to check that the ones trained initially contain no copyrighted materials. Rules-based systems to filter out copyrighted materials are likely to be ineffective, Litan said.

  • (Score: 1, Insightful) by Anonymous Coward on Tuesday February 07 2023, @09:01AM (6 children)

    by Anonymous Coward on Tuesday February 07 2023, @09:01AM (#1290581)

    challenges views that image synthesis models do not memorize their training data

    That supports my view that lots of AI people don't actually have a good idea of how their stuff does what it does or what actually is happening behind the scenes. They're like the alchemists of yore, who could get stuff done (e.g. gunpowder) but they didn't actually have good theories or models on what actually is happening. Lots of trial and error to get results that could be useful but they don't really know how it works or how and when it'll fail.

    tldr; AI is still in the alchemy stage and hasn't got to the Chemistry stage yet.

    The mistakes that many current top AIs make tell me that those AIs still don't actually understand stuff. It's the mistakes you make that tell me how smart you actually were when you made that mistake.

    The AIs need lots of samples because they're not gaining understanding but gathering statistics. Most dogs won't need millions of samples or training sessions to learn the difference between a bus and a car; or cactus and a fire hydrant. And when a dog mistakes a car for a bus, the car might actually look like a bus.

    The problem is too many people's jobs/$$$ are depending on the hype/BS. I'm OK with AI being used, just not OK when too many people want to rely on AI for stuff that it's not ready/good for. And don't give me that bullshit that it's not copyright infringement just because an AI does it.

    • (Score: 0) by Anonymous Coward on Tuesday February 07 2023, @09:27AM (5 children)

      by Anonymous Coward on Tuesday February 07 2023, @09:27AM (#1290585)

      And don't give me that bullshit that it's not copyright infringement just because an AI does it.

      It doesn't matter what bullshit you give/get. This will be decided by the courts.

      There is no copyright for art style, and this 0.03% memorization rate isn't the end of the world.

      • (Score: 0) by Anonymous Coward on Tuesday February 07 2023, @10:39AM (2 children)

        by Anonymous Coward on Tuesday February 07 2023, @10:39AM (#1290589)

        Go tell the judges that this is bullshit: https://en.wikipedia.org/wiki/Rogers_v._Koons [wikipedia.org]

        • (Score: 0) by Anonymous Coward on Tuesday February 07 2023, @10:55AM (1 child)

          by Anonymous Coward on Tuesday February 07 2023, @10:55AM (#1290591)
          • (Score: 0) by Anonymous Coward on Wednesday February 08 2023, @03:19AM

            by Anonymous Coward on Wednesday February 08 2023, @03:19AM (#1290692)
            Sure, but there's a whole spectrum between copying the work 100% and copying only the style. The paper shows cases where the AI's output is a lot closer to a 100% copy than to a mere style imitation.

            And Rogers v. Koons shows where one of the lines is drawn in the USA.
      • (Score: 1, Insightful) by Anonymous Coward on Tuesday February 07 2023, @10:43AM (1 child)

        by Anonymous Coward on Tuesday February 07 2023, @10:43AM (#1290590)
        How certain are you that it's only 0.03% and can't be higher in a different future attack? Do you have any strong mathematical proof that it'll only ever be 0.03%?
        • (Score: 2, Insightful) by Anonymous Coward on Tuesday February 07 2023, @11:00AM

          by Anonymous Coward on Tuesday February 07 2023, @11:00AM (#1290592)

          The "attack" is just exposing duplicates in the training set. I'm surprised it's not higher.

  • (Score: 3, Informative) by Freeman on Tuesday February 07 2023, @03:11PM

    by Freeman (732) on Tuesday February 07 2023, @03:11PM (#1290612) Journal

    (Same issue as the listed related article, but here's another one.)
    https://arstechnica.com/tech-policy/2023/02/getty-sues-stability-ai-for-copying-12m-photos-and-imitating-famous-watermark/ [arstechnica.com]

    Getty sues Stability AI for copying 12M photos and imitating famous watermark

    Getty Images is well-known for its extensive collection of millions of images, including its exclusive archive of historical images and its wider selection of stock images hosted on iStock. On Friday, Getty filed a second lawsuit against Stability AI Inc to prevent the unauthorized use and duplication of its stock images using artificial intelligence.

    According to the company's newest lawsuit filed in a US district court in Delaware, “Stability AI has copied more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata, without permission from or compensation to Getty Images, as part of its efforts to build a competing business.”

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"