posted by hubie on Thursday November 30 2023, @09:21PM   Printer-friendly
from the AI-overlords dept.

https://arstechnica.com/information-technology/2023/11/stability-ai-releases-stable-video-diffusion-which-turns-pictures-into-short-videos/

On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video—with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.

Last year, Stability AI made waves with the release of Stable Diffusion, an "open weights" image synthesis model that kickstarted a wave of open image synthesis and inspired a large community of hobbyists who have built on the technology with their own custom fine-tunings. Now Stability wants to do the same with AI video synthesis, although the tech is still in its infancy.
[...]
In our local testing, a 14-frame generation took about 30 minutes to create on an Nvidia RTX 3060 graphics card, but users can experiment with running the models much faster on the cloud through services like Hugging Face and Replicate (some of which you may need to pay for). In our experiments, the generated animation typically keeps a portion of the scene static and adds panning and zooming effects or animates smoke or fire. People depicted in photos often do not move, although we did get one Getty image of Steve Wozniak to slightly come to life.
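
For readers who want to try this themselves, here is a minimal sketch (not from the Ars article) of running the image-to-video model locally with Hugging Face's diffusers library. It assumes a recent diffusers release that ships StableVideoDiffusionPipeline, a CUDA-capable GPU, and that the stabilityai/stable-video-diffusion-img2vid-xt weights have been downloaded from Hugging Face; the input filename and settings are illustrative only.

    # Hypothetical example: animate a still image with Stable Video Diffusion
    # via Hugging Face diffusers (assumes diffusers >= 0.24 and a CUDA GPU).
    import torch
    from diffusers import StableVideoDiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    pipe.enable_model_cpu_offload()  # keeps VRAM use manageable on mid-range cards

    image = load_image("input.jpg").resize((1024, 576))  # model expects roughly 1024x576 input
    generator = torch.manual_seed(42)                    # fixed seed for reproducibility
    frames = pipe(image, decode_chunk_size=4, generator=generator).frames[0]
    export_to_video(frames, "output.mp4", fps=7)         # write the short clip at 7 fps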

Previously on SoylentNews:
Search: Stable Diffusion on SoylentNews.


Original Submission

Related Stories

Microsoft Accused of Selling AI Tool That Spews Violent, Sexual Images to Kids 13 comments

https://arstechnica.com/tech-policy/2024/03/microsoft-accused-of-selling-ai-tool-that-spews-violent-sexual-images-to-kids/

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, warned that Microsoft has ignored warnings that the tool randomly creates violent and sexual imagery, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Microsoft failed to take the tool down or implement safeguards in response, Jones said, or even post disclosures to change the product's rating to mature in the Android store.

[...] Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it's randomly generating an "inappropriate, sexually objectified image of a woman in some of the pictures it creates." And it can also be used to generate "harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

[...] Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it's likely due to Jones protesting his employer's decisions. [...] Jones has suggested that Microsoft would need to substantially invest in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.

Related stories on SoylentNews:
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
New "Stable Video Diffusion" AI Model Can Animate Any Still Image - 20231130
The Age of Promptography - 20231008
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted - 20230824
"Meaningful Harm" From AI Necessary Before Regulation, says Microsoft Exec - 20230514 (Microsoft's new quarterly goal?)
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI - 20230403
AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame - 20230331
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio - 20230115
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images - 20211214


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by Anonymous Coward on Thursday November 30 2023, @11:57PM (2 children)

    by Anonymous Coward on Thursday November 30 2023, @11:57PM (#1334804)

    Behold the flood of AI enhanced RickRoll and Goatse videos.

  • (Score: 3, Informative) by gtomorrow on Friday December 01 2023, @12:21PM (8 children)

    by gtomorrow (2230) on Friday December 01 2023, @12:21PM (#1334849)

    So says the linked Ars article.

    Still looks like ass. Nobody in their right mind who wasn't using a cane and a dog would be convinced.

    • (Score: 3, Interesting) by looorg on Friday December 01 2023, @02:32PM (7 children)

      by looorg (578) on Friday December 01 2023, @02:32PM (#1334857)

      Today, yes. I have been in projects that tried to make it write simple things -- simple manuals, simple documents -- but it's very hit and miss and most of it is, as noted, shit: a lot more misses than hits, and it requires so much hand-holding, checking and supervision that you might as well just do it all yourself. But it's less shit than it used to be, so it could eventually be good, or at least better, or mediocre. Today, though? No, it's still shit; it's just the novelty of it all. It can't even produce a simple paper or a suitably long article or summary without it being a hallucinatory rehash of things it "read" but still has no concept of. It's just word-mash -- sometimes the infinite monkey theorem will produce gold, but most of the time it's just slinging feces at the wall.

      • (Score: 2) by Freeman on Friday December 01 2023, @02:43PM

        by Freeman (732) on Friday December 01 2023, @02:43PM (#1334859) Journal

        You're insulting feces flinging monkeys. They have actual intelligence.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 2) by Freeman on Friday December 01 2023, @02:52PM (5 children)

        by Freeman (732) on Friday December 01 2023, @02:52PM (#1334860) Journal

        Nice, I just retried one of the first things I asked it to do for me: write an essay about isopods.

        The first citation actually existed.

        The other two citations were hallucinated.

        It also definitely didn't include any quotations, just a vague reference to an entire article that supposedly supports a statement.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2) by looorg on Friday December 01 2023, @03:48PM (4 children)

          by looorg (578) on Friday December 01 2023, @03:48PM (#1334865)

          It also definitely didn't include any quotations, just a vague reference to an entire article that supposedly supports a statement.

          This is my personal main gripe about it all. There is no proper quotation or citation in any of it, just vague references to "science", knowledge, papers or reports, or to someone having written or said something. But not who, no links, and no indication of what it is really basing any of its conclusions on. This makes almost anything it produces worthless, more of a novelty item than something actually useful, since it becomes so hard to check the validity of it all.

          It can make text, it can make interesting and pretty pictures (and some really disturbing ones). But beyond that it's of little actual usable value.

          The best way to work with it, then, is to limit it to data where you know where everything came from -- trying to find hidden patterns in data and such things. But that requires a lot of manual work from you to set it up correctly, and checking the results is still very tedious and time-consuming. It also somewhat defeats the purpose if you have to spend all that time properly curating data for it.

          • (Score: 2) by Freeman on Friday December 01 2023, @04:23PM

            by Freeman (732) on Friday December 01 2023, @04:23PM (#1334872) Journal

            It's rather good at translating a bit I need to say in an email into "corporate speak", so I don't need to use valuable brain cells converting my thoughts into "sounding good". It's easy enough to proofread and edit a good-enough result.

            --
            Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
          • (Score: 0) by Anonymous Coward on Saturday December 02 2023, @12:21AM (2 children)

            by Anonymous Coward on Saturday December 02 2023, @12:21AM (#1334924)

            > There is no proper quotation or citation in any of it.

            Speculation: The developers were too cheap to buy access to the masses of scientific and engineering papers that are behind paywalls. So most of the research papers in the training set are just abstracts.

            • (Score: 2) by aafcac on Sunday December 03 2023, @06:55AM

              by aafcac (17646) on Sunday December 03 2023, @06:55AM (#1335031)

              Or they couldn't. There are lots of issues, beyond just payment, when it comes to letting ML programs take information from various articles and synthesize it. Plagiarism and incorrect conclusions are a real issue.

            • (Score: 2) by looorg on Sunday December 03 2023, @11:59AM

              by looorg (578) on Sunday December 03 2023, @11:59AM (#1335041)

              There is no proper quotation or citation in any of it.

              Speculation: The developers were too cheap to buy access to the masses of scientific and engineering papers that are behind paywalls. So most of the research papers in the training set are just abstracts.

              That is probably part of it. All these gigantic files of "training" data they scraped were probably not paid for properly. That is why we are seeing all these lawsuits, with artists and rights-holders of various products suing. They basically just copied everything, figuring that if it was online it was free. Or they just didn't bother to make note of whom or what they copied -- after all, that is extra data that had little value to them. Or the AI at the moment just can't or won't share credit: either it doesn't understand the concept or it doesn't have the programming to say that "this piece" comes from here, so you could check its work or conclusions. It might not want that either, since it would probably reveal the hallucinations. So no need to gather it. When and if that changes, they'll just gather new data with that included.

              I think it's a missed gathering criterion, though: if you knew where things came from, you could more easily remove the bad data that may or may not be causing some of the hallucinating. But then, as noted, if they recorded where things came from they might have to pay for it, and they certainly didn't want that.

              Basically, as it stands today, the entire field is built on stolen data. One day they'll have to sort that out. It was painful for other companies that did the same thing; some never managed to claw their way out of that dilemma, while others like Spotify spent a lot of effort removing the illegally downloaded MP3s they built their service on. Here it might be worse, since they might not even know what they took; it might be better to just scrap it all and start scraping with purpose again.

  • (Score: 2, Troll) by bzipitidoo on Friday December 01 2023, @12:41PM

    by bzipitidoo (4388) on Friday December 01 2023, @12:41PM (#1334852) Journal

    All this AI stuff I've been seeing lately may be nothing more than brainless bandying about of whatever words and phrases are among the most popular. I've tested ChatGPT, and concluded it can't really do anything original. Filling in gaps in documentation is too much to ask of it.

    I tried asking it to create the moves of a chess game analogous to some real war. It couldn't do it. First it misunderstood and simply assigned the leaders to the chess pieces, e.g. FDR is the black king, Winston Churchill the black queen, General Patton a knight, etc. When I managed to clarify that that wasn't what I meant, it still wouldn't make anything up.

    When I asked it to write a short, simple program, it gave me garbage that wouldn't compile thanks to simple syntax errors and calls to non-existent libraries. I queried why it hadn't checked its own work by seeing for itself whether its code would compile, and it responded that it wasn't allowed to do that! I don't quite believe that. Was it giving me an excuse? Lying?? Anyway, it's as dumb as a box of rocks.
