
posted by janrinok on Wednesday January 18 2023, @08:06PM   Printer-friendly
from the stable-confusion dept.

Getty Images Targets AI Firm For 'Copying' Photos

US firm Getty Images on Tuesday threatened to sue a tech company it accuses of illegally copying millions of photos for use in an artificial intelligence (AI) art tool:

Getty, which distributes stock images and news photos including those of AFP, accused Stability AI of profiting from its pictures and those of its partners. Stability AI runs a tool called Stable Diffusion that allows users to generate mash-up images from a few words of text, but the firm uses material it scrapes from the web, often without permission.

The question of copyright is still in dispute, with creators and artists arguing that the tools infringe their intellectual property and AI firms claiming they are protected under "fair use" rules.

Tools like Stable Diffusion and Dall-E 2 exploded in popularity last year, quickly becoming a global sensation with absurd images in the style of famous artists flooding social media.

Original Submission

Related Stories

Alien Dreams: An Emerging Art Scene 9 comments

From Machine Learning @ Berkeley Blog

In recent months there has been a bit of an explosion in the AI generated art scene.

Ever since OpenAI released the weights and code for their CLIP model, various hackers, artists, researchers, and deep learning enthusiasts have figured out how to utilize CLIP as an effective "natural language steering wheel" for various generative models, allowing artists to create all sorts of interesting visual art merely by inputting some text – a caption, a poem, a lyric, a word – to one of these models.

[The linked story provides about 3 dozen stunning examples of inputs and generated images as well as extensive links to resources, preprints, and journal articles.--martyb]


Original Submission

NVIDIA Research's GauGAN AI Art Demo Responds to Words 4 comments

NVIDIA Research's GauGAN AI Art Demo Responds to Words:

A picture worth a thousand words now takes just three or four words to create, thanks to GauGAN2, the latest version of NVIDIA Research's wildly popular AI painting demo.

The deep learning model behind GauGAN allows anyone to channel their imagination into photorealistic masterpieces — and it's easier than ever. Simply type a phrase like "sunset at a beach" and AI generates the scene in real time. Add an additional adjective like "sunset at a rocky beach," or swap "sunset" to "afternoon" or "rainy day" and the model, based on generative adversarial networks, instantly modifies the picture.

With the press of a button, users can generate a segmentation map, a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like sky, tree, rock and river, allowing the smart paintbrush to incorporate these doodles into stunning images.

The new GauGAN2 text-to-image feature can now be experienced on NVIDIA AI Demos, where visitors to the site can experience AI through the latest demos from NVIDIA Research. With the versatility of text prompts and sketches, GauGAN2 lets users create and customize scenes more quickly and with finer control.

Direct link to YouTube video.

Kinda makes Turtle graphics from the 70s look rather basic. However, beware Rule 34…


Original Submission

Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI 24 comments

https://arstechnica.com/gaming/2022/09/pixel-art-comes-to-life-fan-upgrades-classic-ms-dos-games-with-ai/

Last night, a Reddit user by the name of frigis9 posted a series of six images that feature detailed graphical upgrades to classic MS-DOS computer games such as Commander Keen 6 and The Secret of Monkey Island. The most interesting part is how they did it: by using an image synthesis technique called "img2img" (image to image), which takes an input image, applies a written text prompt, and generates a similar output image as a result. It's a feature of the Stable Diffusion image synthesis model released last week.

[...] Art quality in image synthesis currently requires much trial and error with prompts and cherry-picking to achieve the kinds of results frigis9 posted—likely hours of work. But with some incremental advances in image synthesis techniques and GPU power, we could imagine an emulator upgrading vintage game graphics in real time within a few years.


Original Submission

Adobe Stock Begins Selling AI-Generated Artwork 15 comments

https://arstechnica.com/information-technology/2022/12/adobe-stock-begins-selling-ai-generated-artwork/

On Monday, Adobe announced that its stock photography service, Adobe Stock, would begin allowing artists to submit AI-generated imagery for sale, Axios reports. The move comes during Adobe's embrace of image synthesis and also during industry-wide efforts to deal with the rapidly growing field of AI artwork in the stock art business, including earlier announcements from Shutterstock and Getty Images.

Submitting AI-generated imagery to Adobe Stock comes with a few restrictions. The artist must own (or have the rights to use) the image, AI-synthesized artwork must be submitted as an illustration (even if photorealistic), and it must be labeled with "Generative AI" in the title.

Further, each AI artwork must adhere to Adobe's new Generative AI Content Guidelines, which require the artist to include a model release for any real person depicted realistically in the artwork. Artworks that incorporate illustrations of people or fictional brands, characters, or properties require a property release that attests the artist owns all necessary rights to license the content to Adobe Stock.
[...]
AI-generated artwork has proven ethically problematic among artists. Some criticized the ability of image synthesis models to reproduce artwork in the styles of living artists, especially since the AI models gained that ability from unauthorized scrapes of websites.


Original Submission

Paper: Stable Diffusion “Memorizes” Some Images, Sparking Privacy Concerns 8 comments

But out of 300,000 high-probability images tested, researchers found a 0.03% memorization rate:

On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. It challenges views that image synthesis models do not memorize their training data and that training data might remain private if not disclosed.
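As a back-of-the-envelope check of the figures quoted above, a 0.03% memorization rate over 300,000 tested candidates works out to roughly 90 extracted images:

```python
# Rough arithmetic for the memorization rate quoted above.
images_tested = 300_000
memorization_rate = 0.0003  # 0.03%

extracted = images_tested * memorization_rate
print(f"~{extracted:.0f} images extracted out of {images_tested:,} tested")  # ~90
```

A tiny fraction, but a nonzero one — which is exactly the paper's point: the models do memorize some of their training data.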

Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

Related:
Getty Images Targets AI Firm For 'Copying' Photos


Original Submission

Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film 15 comments

https://arstechnica.com/information-technology/2023/02/netflix-taps-ai-image-synthesis-for-background-art-in-the-dog-and-the-boy/

Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the industrial revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called The Dog and the Boy that utilizes AI image synthesis to help generate its background artwork.

Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to "AI (+Human)" in the end credit sequence.

[...] Netflix and the production company WIT Studio tapped Japanese AI firm Rinna for assistance with generating the images. They did not announce exactly what type of technology Rinna used to generate the artwork, but the process looks similar to a Stable Diffusion-powered "img2img" process that can take an image and transform it based on a written prompt.

Related:
ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Controversy Erupts Over Non-consensual AI Mental Health Experiment
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio
AI Everything, Everywhere
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy
Adobe Stock Begins Selling AI-Generated Artwork
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms


Original Submission

Robots Let ChatGPT Touch the Real World Thanks to Microsoft 15 comments

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
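The workflow described above — expose a small robotics API to the model, let it emit code against that API, and have a human review the code before it runs — can be sketched in a few lines. Everything here is invented for illustration (the `RobotArm` class and the `move_to`/`grab`/`release` names are hypothetical stand-ins, not the API from the Microsoft paper):

```python
class RobotArm:
    """Toy stand-in for a robot controller that generated code would drive."""
    def __init__(self):
        self.position = (0, 0)
        self.holding = None

    def move_to(self, x, y):
        self.position = (x, y)

    def grab(self, obj):
        self.holding = obj

    def release(self):
        obj, self.holding = self.holding, None
        return obj

# The kind of code a language model might emit for the instruction
# "pick up the ball and put it in the bin", received as plain text:
generated_code = """
arm.move_to(3, 4)   # ball location
arm.grab("ball")
arm.move_to(0, 9)   # bin location
arm.release()
"""

# The human-in-the-loop step: a reviewer reads the code, then approves
# execution against the controller object.
print(generated_code)
arm = RobotArm()
exec(generated_code, {"arm": arm})
print(arm.position, arm.holding)  # (0, 9) None
```

The key design point from the paper survives even in this toy: the model only ever produces text, and nothing moves until a human has inspected that text.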

Ethical AI art generation? Adobe Firefly may be the answer. 13 comments

https://arstechnica.com/information-technology/2023/03/ethical-ai-art-generation-adobe-firefly-may-be-the-answer/

On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, making its output clear for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.

Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.

Related:
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Adobe Stock Begins Selling AI-Generated Artwork
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned
Adobe Creative Cloud Experience Makes It Easier to Run Malware
Adobe Goes After 27-Year Old 'Pirated' Copy of Acrobat Reader 1.0 for MS-DOS
Adobe Critical Code-Execution Flaws Plague Windows Users
When Adobe Stopped Flash Content from Running it Also Stopped a Chinese Railroad
Adobe Has Finally and Formally Killed Flash
Adobe Lightroom iOS Update Permanently Deleted Users' Photos


Original Submission

Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI 15 comments

https://arstechnica.com/tech-policy/2023/04/stable-diffusion-copyright-lawsuits-could-be-a-legal-earthquake-for-ai/

The AI software Stable Diffusion has a remarkable ability to turn text into images. When I asked the software to draw "Mickey Mouse in front of a McDonald's sign," for example, it generated the picture you see above.

Stable Diffusion can do this because it was trained on hundreds of millions of example images harvested from across the web. Some of these images were in the public domain or had been published under permissive licenses such as Creative Commons. Many others were not—and the world's artists and photographers aren't happy about it.

In January, three visual artists filed a class-action copyright lawsuit against Stability AI, the startup that created Stable Diffusion. In February, the image-licensing giant Getty filed a lawsuit of its own.
[...]
The plaintiffs in the class-action lawsuit describe Stable Diffusion as a "complex collage tool" that contains "compressed copies" of its training images. If this were true, the case would be a slam dunk for the plaintiffs.

But experts say it's not true. Erik Wallace, a computer scientist at the University of California, Berkeley, told me in a phone interview that the lawsuit had "technical inaccuracies" and was "stretching the truth a lot." Wallace pointed out that Stable Diffusion is only a few gigabytes in size—far too small to contain compressed copies of all or even very many of its training images.
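Wallace's size argument is easy to sanity-check. Assuming a checkpoint of roughly 4 GB and on the order of two billion training images (approximate, LAION-scale figures, not exact numbers from the lawsuit), the model has only a couple of bytes of capacity per training image:

```python
# Back-of-the-envelope version of the "too small to contain copies" argument.
# Both figures are rough assumptions: ~4 GiB checkpoint, ~2 billion images.
model_bytes = 4 * 1024**3        # ~4 GiB checkpoint
training_images = 2_000_000_000  # order of magnitude for the training set

bytes_per_image = model_bytes / training_images
print(f"~{bytes_per_image:.1f} bytes of model capacity per training image")  # ~2.1
```

Even an aggressively compressed thumbnail runs to thousands of bytes, so the model cannot be storing "compressed copies" of its training set in any ordinary sense — which is the gap between the lawsuit's collage framing and how diffusion models actually work.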

Related:
Ethical AI art generation? Adobe Firefly may be the answer. (20230324)
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns (20230206)
Getty Images Targets AI Firm For 'Copying' Photos (20230117)
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI (20220904)
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned (20220817)


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by Revek on Wednesday January 18 2023, @08:13PM (2 children)

    by Revek (5022) on Wednesday January 18 2023, @08:13PM (#1287423)

Let's face it, Getty Images steals photos. It's been shown time and time again. I wonder, given the copyright status of AI right now, whether they can even sue for this, since any evidence will have no copyright?

    --
    This page was generated by a Swarm of Roaming Elephants
    • (Score: 4, Funny) by DannyB on Wednesday January 18 2023, @09:43PM (1 child)

      by DannyB (5839) Subscriber Badge on Wednesday January 18 2023, @09:43PM (#1287433) Journal

      If everyone gets their popcorn ready for the inevitable legal battle, won't so much similarity of popcorn and how it is prepared be some kind of infringement of the work of someone somewhere who did popcorn first?

      Intellectual Property gone mad.

      --
      The lower I set my standards the more accomplishments I have.
      • (Score: 3, Insightful) by choose another one on Thursday January 19 2023, @09:23AM

        by choose another one (515) Subscriber Badge on Thursday January 19 2023, @09:23AM (#1287521)

        Me, I'm just idly wondering (while eating popcorn) if the lawyers:

        a) are so arrogant they think they won't eventually be replaced by AIs
        b) have realized it's inevitable and are just trying to get as much cash out before it happens
        c) are actually already AIs...

  • (Score: 5, Insightful) by DannyB on Wednesday January 18 2023, @09:41PM (15 children)

    by DannyB (5839) Subscriber Badge on Wednesday January 18 2023, @09:41PM (#1287432) Journal

    Suppose you train your own brain by going to art museums, art exhibits, looking at books of fine art. Now you create a painting. Why is that not an infringement? All you did was train a neural network (your own) to learn to mimic part of style, color, texture, etc of existing works.

    Why all of a sudden does it matter if the neural network happens to be in silly con?

    --
    The lower I set my standards the more accomplishments I have.
    • (Score: 2) by SomeRandomGeek on Wednesday January 18 2023, @10:27PM (5 children)

      by SomeRandomGeek (856) on Wednesday January 18 2023, @10:27PM (#1287445)

      The government assigns to the creators of intellectual property the exclusive use of that property as an incentive for them to create it. There is never any harm caused by any intellectual property violation except the loss of incentive to intellectual property creators. What is different from a human seeing the work and an AI seeing the work? The owner of the work sold the right for a human to see it, and hasn't sold the right for an AI to see it (yet). By using a work in a way that it is not licensed for, you are depriving the owner of that work the opportunity to extract more money from you. Ethically that is not much, but legally it is rock solid.

      • (Score: 2) by mhajicek on Wednesday January 18 2023, @11:33PM (3 children)

        by mhajicek (51) on Wednesday January 18 2023, @11:33PM (#1287460)

        They also haven't sold the right for household pets to see it. If your pets watch tv with the family you should go to jail!

        /S

        There is no such thing as an audience specific viewing right. If you let your work be seen, it will be seen.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 3, Interesting) by SomeRandomGeek on Wednesday January 18 2023, @11:51PM (2 children)

          by SomeRandomGeek (856) on Wednesday January 18 2023, @11:51PM (#1287465)

          There is no such thing as an audience specific viewing right. If you let your work be seen, it will be seen.

          There is not any kind of viewing right at all, except as granted by the rights holder. And rights holders can and do subdivide the rights into all kinds of bizarre little categories. Personal use, audio performance, radio performance, broadcast TV performance, cable TV performance, all different categories. These categories don't originate in law. They originate in contracts created by rights holders. The only reason that there is no performance to pets category is that there's no money in it.

          • (Score: 3, Informative) by mhajicek on Thursday January 19 2023, @03:05AM (1 child)

            by mhajicek (51) on Thursday January 19 2023, @03:05AM (#1287497)

            Those are transmission rights, and cannot distinguish between viewers.

            --
            The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 2) by choose another one on Thursday January 19 2023, @09:20AM

        by choose another one (515) Subscriber Badge on Thursday January 19 2023, @09:20AM (#1287520)

        What is different from a human seeing the work and an AI seeing the work?

        Oh dear, we've dug that one up again.

        "The movie is over, please choose exit A for mandatory memory erasure, or exit B if you have pre-paid for right-to-remember or wish to pay as you go"
        "Please note, if you are choosing right-to-remember you must be able to show evidence of membership of an MPAA authorised all-future-creative-output-review-and-approve subscription plan"

        [Wishing I could credit where I read this concept first, but I can't remember, ironic?]

    • (Score: 2) by Tork on Wednesday January 18 2023, @10:29PM (7 children)

      by Tork (3914) Subscriber Badge on Wednesday January 18 2023, @10:29PM (#1287446)

      Why is that not an infringement? All you did was train a neural network (your own) to learn to mimic part of style, color, texture, etc of existing works.

It has been a long time since I've looked into this so take what I have to say with a grain of salt: The legality of it is determined by working out a percentage of change from the original work. Whether you used software or wetware to arrive there is not a consideration. Do keep in mind that a lot of artwork is digital now, so unlike with a paintbrush, you can copy & paste.

      Having said that I've just explained a huge problem with the lawsuit. Heh! Good thing I'm not taking a position on it (or even read the article...) or I'd look silly! I've never heard of an artist being able to rescind permission to create a derivative work. I wonder if this case is more like "I didn't okay my work being in their seed database", which would be akin to saying: "I do not approve of my painting hanging in a museum"... I think.

      I really am talking out of my hinder right now so I'm gonna just stop here. Mainly I wanted to mention the 'percent of change' detail when legal issues with this sort of stuff come up.

      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 2) by Freeman on Wednesday January 18 2023, @10:42PM (5 children)

        by Freeman (732) on Wednesday January 18 2023, @10:42PM (#1287453) Journal

As they say, Fair Use is a defense, not a right, as you have to prove Fair Use if sued. Essentially the same thing goes for something like a Weird Al song.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2) by Tork on Wednesday January 18 2023, @10:48PM (1 child)

          by Tork (3914) Subscriber Badge on Wednesday January 18 2023, @10:48PM (#1287456)
          Are you saying that instead of suing the AI developer they should be suing the people that use the software?

          I hope I didn't misread you because I think that's a fair point. I was just reading about a dude that AI-generated a kid's book and it caused a lot of trouble for him.
          --
          🏳️‍🌈 Proud Ally 🏳️‍🌈
          • (Score: 2) by Freeman on Thursday January 19 2023, @02:47PM

            by Freeman (732) on Thursday January 19 2023, @02:47PM (#1287553) Journal

            What I'm saying is, in this case, the people that trained their AI on other people's data sets without permission need to defend themselves. Fair Use is just one of the defenses they could make, but I think they have an uphill battle to fight.

            --
            Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2, Informative) by Anonymous Coward on Thursday January 19 2023, @12:56AM (2 children)

          by Anonymous Coward on Thursday January 19 2023, @12:56AM (#1287477)

          Essentially same thing goes for something like a Weird Al song.

No, it's not.
Al gets permission from the rights holder before releasing a parody.

          • (Score: 3, Funny) by optotronic on Thursday January 19 2023, @02:43AM (1 child)

            by optotronic (4285) on Thursday January 19 2023, @02:43AM (#1287491)

            It sounds like you're saying Al gets permission whereas AI doesn't. This is likely to confuse some people unless they're using a different font than I am.

            Who designed the lowercase print "L", anyway?

            • (Score: 2) by bzipitidoo on Thursday January 19 2023, @03:37AM

              by bzipitidoo (4388) on Thursday January 19 2023, @03:37AM (#1287499) Journal

              Argh, alright drat it, I'll force my browser to render websites in a font that has visual distinction between I and l.

      • (Score: 1, Touché) by Anonymous Coward on Thursday January 19 2023, @05:32AM

        by Anonymous Coward on Thursday January 19 2023, @05:32AM (#1287504)

        According to US law the changes could be near 100% and yet still infringe: https://en.wikipedia.org/wiki/Rogers_v._Koons [wikipedia.org]

    • (Score: 0) by Anonymous Coward on Friday January 20 2023, @03:26AM

      by Anonymous Coward on Friday January 20 2023, @03:26AM (#1287669)

      1) Your brain isn't just a neural network AND you can still get sued if you infringe.
      2) The AI bunch don't seem to know that much about what their AI stuff is actually doing. So how can they be sure it isn't infringing?

      That said as long as the AI stuff hasn't been producing any works yet (which in the USA have to be judged in the light of: https://en.wikipedia.org/wiki/Rogers_v._Koons [wikipedia.org] ) then what counts is the downloading of the stuff. If Google etc are legally allowed to download and process the images then sure that AI stuff or similar should also be allowed to.

But if the AI has been producing works then what should be checked is whether the works infringe. Even humans who reuse short snippets of songs can be found guilty of copyright infringement.

      So are typefaces, textures and patterns copyrightable? If some of these are then the AI bunch might be in trouble if they reuse them.
