from the artificial-artificial-intelligence dept.
On Monday, Adobe announced that its stock photography service, Adobe Stock, would begin allowing artists to submit AI-generated imagery for sale, Axios reports. The move comes amid Adobe's embrace of image synthesis and industry-wide efforts to deal with the rapidly growing field of AI artwork in the stock art business, including earlier announcements from Shutterstock and Getty Images.
Submitting AI-generated imagery to Adobe Stock comes with a few restrictions. The artist must own (or have the rights to use) the image, AI-synthesized artwork must be submitted as an illustration (even if photorealistic), and it must be labeled with "Generative AI" in the title.
Further, each AI artwork must adhere to Adobe's new Generative AI Content Guidelines, which require the artist to include a model release for any real person depicted realistically in the artwork. Artworks that incorporate illustrations of people or fictional brands, characters, or properties require a property release that attests the artist owns all necessary rights to license the content to Adobe Stock.
[...]
AI-generated artwork has proven ethically contentious among artists. Some have criticized the ability of image synthesis models to reproduce artwork in the styles of living artists, especially since the AI models gained that ability from unauthorized scrapes of websites.
Related Stories
Dick Clark's New Year's Rockin' Eve has become a woke, sanitized shell of its former self. The crowd of rowdy, inebriated locals and tourists is long gone. What you see now is a crowd bouncing and screaming for the latest flash-in-the-pan artists while industry veterans like Duran Duran barely elicit a cheer.
YouTuber and music industry veteran Rick Beato recently posted an interesting video on how Auto-Tune has destroyed popular music. Beato quotes from an interview he did with Smashing Pumpkins' Billy Corgan, where the latter stated, "AI systems will completely dominate music. The idea of an intuitive artist beating an AI system is going to be very difficult." AI is making inroads into visual art as well, and hackers, artists and others seem to be embracing it with enthusiasm.
AI seems to be everywhere lately, from retrofitting decades-old manufacturing operations to online help desk shenanigans to a wearable assistant to helping students cheat. Experts predict AI will usher in the next cybersecurity crisis and the end of programming as we know it.
Will there be a future where AI can and will do everything? Where artists are judged on their talents with a keyboard/mouse instead of a paintbrush or guitar? And what about those of us who will be developing the systems AI uses to produce its work? Will tomorrow's artist be the programming genius who devises a profound algorithm that produces work faster, or makes it more appealing to the eye and ear, with everything completely computerized and lacking any humanity? Beato makes a good point in his video on Auto-Tune: most people don't notice when something has been digitally altered, and quite frankly, they don't care either.
Will the "purists" among us be disparaged and become the new "Boomers"? What do you think?
US firm Getty Images on Tuesday threatened to sue a tech company it accuses of illegally copying millions of photos for use in an artificial intelligence (AI) art tool:
Getty, which distributes stock images and news photos including those of AFP, accused Stability AI of profiting from its pictures and those of its partners. Stability AI runs a tool called Stable Diffusion that allows users to generate mash-up images from a few words of text, but the firm uses material it scrapes from the web, often without permission.
The question of copyright is still in dispute, with creators and artists arguing that the tools infringe their intellectual property and AI firms claiming they are protected under "fair use" rules.
Tools like Stable Diffusion and DALL-E 2 exploded in popularity last year, quickly becoming a global sensation with absurd images in the style of famous artists flooding social media.
Related:
- Adobe Stock Begins Selling AI-Generated Artwork
- Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI
- NVIDIA Research's GauGAN AI Art Demo Responds to Words
- Alien Dreams: An Emerging Art Scene
Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the Industrial Revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called The Dog & The Boy that utilizes AI image synthesis to help generate its background artwork.
Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to "AI (+Human)" in the end credit sequence.
[...] Netflix and the production company WIT Studio tapped Japanese AI firm Rinna for assistance with generating the images. They did not announce exactly what type of technology Rinna used to generate the artwork, but the process looks similar to a Stable Diffusion-powered "img2img" process that can take an image and transform it based on a written prompt.
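Neither Netflix nor Rinna has detailed its pipeline, but for readers unfamiliar with img2img, here is a minimal sketch of what such a step can look like using the open-source Hugging Face diffusers library and a public Stable Diffusion checkpoint. The checkpoint, file names, prompt, and strength value are illustrative assumptions only, not anything the production has confirmed using.

    # Minimal img2img sketch with Hugging Face diffusers (an illustration only;
    # Rinna has not said what tooling it actually used).
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load a public Stable Diffusion checkpoint (hypothetical choice).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from a rough layout painting of the background scene.
    init_image = Image.open("background_layout.png").convert("RGB").resize((768, 512))

    # "strength" controls how far the model may drift from the input image;
    # lower values keep more of the original composition.
    result = pipe(
        prompt="lush hillside, anime background art, soft light",
        image=init_image,
        strength=0.6,
        guidance_scale=7.5,
    ).images[0]

    result.save("background_final.png")

The key knob is strength: a low value preserves the hand-drawn layout and only restyles it, which is the kind of human-plus-machine collaboration the "AI (+Human)" credit hints at.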
Related:
ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Controversy Erupts Over Non-consensual AI Mental Health Experiment
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio
AI Everything, Everywhere
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy
Adobe Stock Begins Selling AI-Generated Artwork
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms
Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.
The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.
In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.
To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
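Neither the exact prompts nor Microsoft's function library appear in the summary above, but the general pattern can be sketched as follows; the function names (take_off, fly_to, capture_image, land) and the prompt wording are hypothetical stand-ins for whatever high-level API the researchers actually exposed to ChatGPT.

    # A minimal sketch of the prompt-then-review pattern described above.
    # The robot functions below are hypothetical stand-ins, not the paper's
    # actual API, and the generated code is never executed automatically.

    API_DESCRIPTION = """
    You may only use these functions to control the drone:
      take_off()                -- arm the drone and lift off to a safe hover
      fly_to(x, y, z)           -- move to a position given in meters
      capture_image()           -- photograph whatever is in front of the camera
      land()                    -- return to the pad and land
    """

    def build_prompt(task: str) -> str:
        """Combine the API description with a natural-language task,
        asking the model to reply with Python code only."""
        return (API_DESCRIPTION
                + "\nTask: " + task
                + "\nRespond only with Python code that uses the functions above.")

    prompt = build_prompt("Inspect the top shelf and photograph its contents.")

    # Send `prompt` to ChatGPT (or any chat model); per the paper, a human then
    # reads and edits the returned code before running it on the robot and
    # evaluating the outcome.
    print(prompt)

The design hinges on two things: the model only emits calls to a small, known function library, and a human reviews the generated code before it ever runs on hardware.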
On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, meaning its output is cleared for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.
Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.
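Firefly itself is only available as a closed beta, but text-to-image generation with an open model of the same class looks roughly like this minimal sketch using the Hugging Face diffusers library; the checkpoint name and prompt are illustrative assumptions, and Firefly's own interface differs.

    # Minimal text-to-image sketch with an open Stable Diffusion checkpoint
    # (an illustration of the same class of tool, not Adobe's Firefly).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The prompt is the only input; everything else the model "knows" comes
    # from the image-text pairs it was trained on, which is exactly where the
    # copyright dispute described above arises.
    image = pipe(
        "a lighthouse on a cliff at sunset, watercolor",
        guidance_scale=7.5,
        num_inference_steps=30,
    ).images[0]
    image.save("lighthouse.png")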
Related:
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Adobe Stock Begins Selling AI-Generated Artwork
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned
Adobe Creative Cloud Experience Makes It Easier to Run Malware
Adobe Goes After 27-Year Old 'Pirated' Copy of Acrobat Reader 1.0 for MS-DOS
Adobe Critical Code-Execution Flaws Plague Windows Users
When Adobe Stopped Flash Content from Running it Also Stopped a Chinese Railroad
Adobe Has Finally and Formally Killed Flash
Adobe Lightroom iOS Update Permanently Deleted Users' Photos
In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI's recently announced AI video generator Sora can do.
"I have been watching AI very closely," Perry said in the interview. "I was in the middle of, and have been planning for the last four years... an $800 million expansion at the studio, which would've increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I'm seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it's able to do. It's shocking to me."
[...] "It makes me worry so much about all of the people in the business," he told The Hollywood Reporter. "Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I'm thinking this will touch every corner of our industry."
You can read the full interview at The Hollywood Reporter.
[...] Perry also looks beyond Hollywood and says that it's not just filmmaking that needs to be on alert, and he calls for government action to help retain human employment in the age of AI. "If you look at it across the world, how it's changing so quickly, I'm hoping that there's a whole government approach to help everyone be able to sustain."
Previously on SoylentNews:
OpenAI Teases a New Generative Video Model Called Sora - 20240222
(Score: 2) by Sjolfr on Saturday December 10 2022, @03:48AM
... the movement for AI rights will begin shortly.
(Score: 0) by Anonymous Coward on Saturday December 10 2022, @04:04AM (11 children)
Aren't these AI images in the public domain?
(Score: 0) by Anonymous Coward on Saturday December 10 2022, @06:49AM (10 children)
Especially if they are trying to profit from this.
(Score: 3, Interesting) by sjames on Saturday December 10 2022, @08:37AM (9 children)
It's highly questionable that any sort of license would even be needed other than permission to view. AI image generators train a neural net on many images (with descriptive text encoded in the inputs) in order to do what they do. They do not 'sample' like in some forms of music. The generated images contain no pixels from the training set. If I don't need anything beyond a license to view (implied if it's on the web) a bunch of images to train myself to paint/draw/photograph better, why would the AI?
(Score: 0) by Anonymous Coward on Saturday December 10 2022, @09:34AM (8 children)
People have been successfully sued for derivative works.
Those people didn't need a license to view copyrighted work but they sure needed to pay to produce derivative works.
(Score: 2) by bzipitidoo on Saturday December 10 2022, @04:04PM (1 child)
>they sure needed to pay to produce derivative works.
Why?
Such a requirement just gums up the works. When did you have in mind that payment should be made? Likely, you thought of it as a "sale", to be paid at the point of sale? If so, it puts a lot of risk and burden on the buyers. They don't know if their expenses will ever be recovered. If there is to be any payment at all, maybe it should be a cut of the profits, if any of those are made?
(Score: 0) by Anonymous Coward on Sunday December 11 2022, @04:44PM
I used past tense for a reason. Go ask the judges why or read the judgement: https://en.wikipedia.org/wiki/Rogers_v._Koons [wikipedia.org]
(Score: 0) by Anonymous Coward on Saturday December 10 2022, @05:01PM
Fonts are a bit weird when it comes to copyright. As usual, rules vary between countries. In the US, the actual appearance of a typeface is not eligible for copyright protection so in the US such images will not be any problem at all.
What is copyrightable are the font files since Type 1, TrueType, etc. font files are actually computer programs (and software is subject to copyright). But once text is rasterized/printed/whatever, the resulting images are not.
(Score: 2) by sjames on Sunday December 11 2022, @01:25AM (4 children)
What if it does? As long as they're not copies. Are you saying there can be one and only one painting of a cowboy riding an ostrich in the entire world?
(Score: 0) by Anonymous Coward on Sunday December 11 2022, @04:42PM (3 children)
https://en.wikipedia.org/wiki/Rogers_v._Koons [wikipedia.org]
(Score: 2) by sjames on Sunday December 11 2022, @11:37PM (2 children)
That's not quite the same issue. Koons made an actual intentional copy of Rogers' specific photo. Had he just taken a different picture of a man and woman holding puppies, there wouldn't have been an issue. Had the sculpture been meant as a parody of that picture in particular, his parody defense would have held up.
(Score: 0) by Anonymous Coward on Tuesday December 13 2022, @05:50AM (1 child)
(Score: 2) by sjames on Thursday December 15 2022, @08:57AM
So I guess you're all out of ammo? If you can cite something that might show there can be only one picture of a cowboy riding an ostrich in the whole wide world, please present it. If you can present a case where damages were awarded for something that was not a deliberate copy of the original work, please do present it.
(Score: 2) by oumuamua on Saturday December 10 2022, @05:43PM (1 child)
Here we have AI producing art competitive with real Artists.
Already in the works: AI producing music and videos.
With ChatGPT we have AI producing flawless article compositions and basic code competitive with entry level programmers: https://www.youtube.com/watch?v=0A8ljAkdFtg [youtube.com]
AIs now beat humans in every known game.
Expect full self driving soon.
The AI naysayers have been proven wrong, especially considering the models will keep improving every year.
Now something has to give: the old advice for those out of work, "learn to code," no longer applies, as the AIs will soon be coding.
Time to support UBI.
(Score: 2) by takyon on Sunday December 11 2022, @12:19AM
I'm not convinced by ChatGPT code yet but someone will probably turn Stack Overflow into a coding monster soon. I was impressed by the ChatGPT poems that I saw, and I would like to see more of them and try it myself.
The AI art is best with a certain amount of human intervention (i.e. work/labor), particularly a sketch or something for image-to-image translation, and fine-tuning with tools like DreamBooth. But it is amazing how you can get something you like (not necessarily 100% what you were going for) in very little time with so few tries, even using Stable Diffusion v1.5. Stable Diffusion v2 has taken steps to limit the artists included in the model and the ability to generate NSFW images [archive.ph], so there will have to be a great forking event soon. The depth2img and inpainting changes [stability.ai] look great.
You can get great AI art in a few seconds, but spending some time on it helps. Graphic designers will continue to exist, but maybe fewer of them. Artists being paid by commission will take big or small hits depending on their area. Their starvation might happen slowly enough that they don't notice until 25-50% of the work is gone. I don't think Greg Rutkowski [artstation.com] will starve, but maybe artists that aspire to be Greg Rutkowski will just give up before reaching that level and adopt the AI tools instead.
Music/voice look very hard. I thought voice would have been completely conquered a couple years ago, but the latest I've heard still sounded janky to me. Maybe the best code is proprietary and will be "responsibly" milked for cash for a while, or you'll need to do something like "voice-to-voice" translation to make your deepfaked politician sound convincing. I hope we see one of those markup languages for text-to-speech catch on, and the reverse, speech-to-text+markup. Once this is nailed down, it eliminates low-level Mechanical Turk style transcription jobs which I believe there are a lot of. Obviously, improving language translation is another big fish, and we have a subtle way of knowing when ML translations accessible to the public have reached stupendously high levels of quality.
Video may be easier than it sounds. We see that AI can already be used to create 3D models. What if it were to arrange 3D models and motion vectors in one step, and then convert each frame to something photorealistic while preserving temporal stability? Things off-screen would be factored in, and you can have fully ray-traced lighting (which can be an imitation rather than classic ray tracing).
Games like Chess and Go don't matter. Get back in the cobalt mines, fucker! I think AI will have some good implications for video games, particularly open world games. It could eliminate jobs, or you could make the game world 10x larger with the same amount of man hours. We'll have to see how NPC AI can be improved without introducing cascading problems that break largely linear storylines.
Regulators, market forces, and public apprehension will suppress self-driving cars. It has to work so damn well that getting it right 99.9% of the time doesn't cut it. If you want to replace big rig truck drivers, it has to be nearly flawless, since those vehicles have a lot of killing potential and crashes can lead to multi-million dollar settlements.