
posted by janrinok on Saturday March 09, @04:57PM   Printer-friendly
from the d'oh! dept.

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after Microsoft engineer Shane Jones warned that the company had ignored his warnings that the tool randomly creates violent and sexual imagery, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft about the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. In response, Jones said, Microsoft failed to take the tool down, implement safeguards, or even post disclosures changing the product's rating to mature in the Android store.

[...] Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it's randomly generating an "inappropriate, sexually objectified image of a woman in some of the pictures it creates." And it can also be used to generate "harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

[...] Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it's likely due to Jones protesting his employer's decisions. [...] Jones has suggested that Microsoft would need to substantially invest in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.

Related stories on SoylentNews:
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
New "Stable Video Diffusion" AI Model Can Animate Any Still Image - 20231130
The Age of Promptography - 20231008
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted - 20230824
"Meaningful Harm" From AI Necessary Before Regulation, says Microsoft Exec - 20230514 (Microsoft's new quarterly goal?)
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI - 20230403
AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame - 20230331
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio - 20230115
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images - 20211214

Original Submission

Related Stories

Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images 6 comments

Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images:

Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not.

Yet a machine that needs to interact with objects in the world — like a robot designed to harvest crops or assist with surgery — must be able to infer properties about a 3D scene from observations of the 2D images it's trained on.

While scientists have had success using neural networks to infer representations of 3D scenes from images, these machine learning methods aren't fast enough to make them feasible for many real-world applications.

A new technique demonstrated by researchers at MIT and elsewhere is able to represent 3D scenes from images about 15,000 times faster than some existing models.

The method represents a scene as a 360-degree light field, which is a function that describes all the light rays in a 3D space, flowing through every point and in every direction. The light field is encoded into a neural network, which enables faster rendering of the underlying 3D scene from an image.
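A rough intuition for why this representation is fast: if the scene is a function from a 5-D ray (a 3-D point plus a 2-D direction) to a color, then rendering a pixel costs a single network evaluation, instead of the many samples per ray that volume-rendering approaches need. The toy sketch below (untrained random weights, purely illustrative) shows the shape of that idea; the parameterization and network size are assumptions, not the researchers' actual architecture.

```python
import math
import random

# Hypothetical sketch: a "light field" as a function mapping a ray,
# parameterized as origin (x, y, z) plus direction (theta, phi),
# to an RGB color. A small MLP with random (untrained) weights
# stands in for the trained network.
random.seed(0)
HIDDEN = 16
W1 = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(5)]
W2 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(HIDDEN)]

def light_field(ray):
    """Map one 5-D ray to an (r, g, b) tuple with channels in [0, 1]."""
    hidden = [math.tanh(sum(ray[i] * W1[i][j] for i in range(5)))
              for j in range(HIDDEN)]
    return tuple(1 / (1 + math.exp(-sum(hidden[j] * W2[j][k]
                                        for j in range(HIDDEN))))
                 for k in range(3))

# Render a tiny 4x4 "image": one network evaluation per pixel's ray.
image = [[light_field((x / 3, y / 3, 0.0, 0.1 * x, 0.1 * y))
          for x in range(4)] for y in range(4)]

print(len(image), len(image[0]), len(image[0][0]))  # 4 4 3
```

The key design point is that all the scene-specific work moves into training the network once; after that, each new view is just cheap per-ray function evaluation.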

Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio 16 comments

Text-to-speech model can preserve speaker's emotional tone and acoustic environment:

On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker's emotional tone.

Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn't), and audio content creation when combined with other generative AI models like GPT-3.

Original Submission

AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame 3 comments

AI image generator Midjourney has stopped free trials of its software:

AI image generator Midjourney has halted free trials of its service, blaming a sudden influx of new users. Midjourney CEO and founder David Holz announced the change on Tuesday, originally citing "extraordinary demand and trial abuse" in a message on Discord (this announcement was spotted first by The Washington Post). In an email to The Verge, Holz stated that the pause is "because of massive amounts of people making throwaway accounts to get free images."

"We think the culprit was probably a viral how-to video in china," said Holz over email. "This happened at the same time as a temporary gpu shortage. The two things came together and it was bringing down the service for paid users."

Given Holz's reference to "abuse," it was originally thought that the pause was linked to a spate of recent viral images created using Midjourney, including fabricated images of Donald Trump being arrested and the pope wearing a stylish jacket, which some mistook for real photographs. However, Holz characterized earlier reports as a "misunderstanding" and notes that the free trial of Midjourney never included access to the latest version of Midjourney, version 5, that creates the most realistic images and which is thought to have been used for these viral pictures.

[...] Midjourney maintains a list of banned words "related to topics in different countries based on complaints from users in those countries," as per a message from Holz last October. But it doesn't share a complete version of this list to minimize "drama." As Holz said last year, "Almost no one ever notices [the ban list] unless they're specially trying to create drama which is against our rules in tos [terms of service] 'don't use our tools to create drama.'"

[...] At the time of writing, Midjourney is still not allowing free users to generate images, though this may change in the future. "We're still trying to figure out how to bring free trials back, we tried to require an active email but that wasn't enough so we're back to the drawing board," said Holz.

Original Submission

Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI 15 comments

The AI software Stable Diffusion has a remarkable ability to turn text into images. When I asked the software to draw "Mickey Mouse in front of a McDonald's sign," for example, it generated the picture you see above.

Stable Diffusion can do this because it was trained on hundreds of millions of example images harvested from across the web. Some of these images were in the public domain or had been published under permissive licenses such as Creative Commons. Many others were not—and the world's artists and photographers aren't happy about it.

In January, three visual artists filed a class-action copyright lawsuit against Stability AI, the startup that created Stable Diffusion. In February, the image-licensing giant Getty filed a lawsuit of its own.

The plaintiffs in the class-action lawsuit describe Stable Diffusion as a "complex collage tool" that contains "compressed copies" of its training images. If this were true, the case would be a slam dunk for the plaintiffs.

But experts say it's not true. Erik Wallace, a computer scientist at the University of California, Berkeley, told me in a phone interview that the lawsuit had "technical inaccuracies" and was "stretching the truth a lot." Wallace pointed out that Stable Diffusion is only a few gigabytes in size—far too small to contain compressed copies of all or even very many of its training images.
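The size argument can be checked with back-of-envelope arithmetic. The figures below are round assumptions for illustration (a few gigabytes of weights; training sets reported in the hundreds of millions to billions of images), not numbers from the lawsuit:

```python
# Back-of-envelope check of the "too small to contain copies" argument.
# Both figures are assumptions chosen for illustration.
model_bytes = 4 * 10**9          # assume ~4 GB of model weights
training_images = 2.3 * 10**9    # assume ~2.3 billion training images

bytes_per_image = model_bytes / training_images
print(round(bytes_per_image, 2))  # ~1.74 bytes per image

# Even a heavily compressed thumbnail needs thousands of bytes, so the
# weights cannot hold a per-image compressed copy at this scale.
assert bytes_per_image < 10
```

Whatever the exact dataset size, the budget works out to a couple of bytes per training image, orders of magnitude below any image compression.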

Ethical AI art generation? Adobe Firefly may be the answer. (20230324)
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns (20230206)
Getty Images Targets AI Firm For 'Copying' Photos (20230117)
Pixel Art Comes to Life: Fan Upgrades Classic MS-DOS Games With AI (20220904)
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned (20220817)

Original Submission

The Godfather of AI Leaves Google Amid Ethical Concerns 27 comments

The Morning After: The Godfather of AI leaves Google amid ethical concerns:

Geoffrey Hinton, nicknamed the Godfather of AI, told The New York Times he resigned as Google VP and engineering fellow in April to freely warn of the risks associated with the technology. The researcher is concerned Google is giving up its previous restraint on public AI releases to compete with ChatGPT, Bing Chat and similar models. In the near term, Hinton says he's worried that generative AI could lead to a wave of misinformation. You might "not be able to know what is true anymore," he says. He's also concerned it might not just eliminate "drudge work," but outright replace some jobs – which I think is a valid worry already turning into a reality.

“Meaningful Harm” From AI Necessary Before Regulation, says Microsoft Exec 41 comments

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"

Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."

Original Submission

US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted 22 comments

"US copyright law protects only works of human creation," judge writes:

Art generated entirely by artificial intelligence cannot be copyrighted because "human authorship is an essential part of a valid copyright claim," a federal judge ruled on Friday.

The US Copyright Office previously rejected plaintiff Stephen Thaler's application for a copyright because the work lacked human authorship, and he challenged the decision in US District Court for the District of Columbia. Thaler and the Copyright Office both moved for summary judgment in motions that "present the sole issue of whether a work generated entirely by an artificial system absent human involvement should be eligible for copyright," Judge Beryl Howell's memorandum opinion issued Friday noted.

Howell denied Thaler's motion for summary judgment, granted the Copyright Office's motion, and ordered that the case be closed.

Thaler sought a copyright for an image titled, "A Recent Entrance to Paradise," which was produced by a computer program that he developed, the ruling said. In his application for a copyright, he identified the author as the Creativity Machine, the name of his software.

Thaler's application "explained the work had been 'autonomously created by a computer algorithm running on a machine,' but that plaintiff sought to claim the copyright of the 'computer-generated work' himself 'as a work-for-hire to the owner of the Creativity Machine,'" Howell wrote. "The Copyright Office denied the application on the basis that the work 'lack[ed] the human authorship necessary to support a copyright claim,' noting that copyright law only extends to works created by human beings."

It Costs Just $400 to Build an AI Disinformation Machine 21 comments

It Costs Just $400 to Build an AI Disinformation Machine:

Sputnik International, a state-owned Russian media outlet, posted a series of tweets lambasting US foreign policy and attacking the Biden administration. Each prompted a curt but well-crafted rebuttal from an account called CounterCloud, sometimes including a link to a relevant news or opinion article. It generated similar responses to tweets by the Russian embassy and Chinese news outlets criticizing the US.

Russian criticism of the US is far from unusual, but CounterCloud's material pushing back was: The tweets, the articles, and even the journalists and news sites were crafted entirely by artificial intelligence algorithms, according to the person behind the project, who goes by the name Nea Paw and says it is designed to highlight the danger of mass-produced AI disinformation. Paw did not post the CounterCloud tweets and articles publicly but provided them to WIRED and also produced a video outlining the project.

Paw claims to be a cybersecurity professional who prefers anonymity because some people may believe the project to be irresponsible. The CounterCloud campaign pushing back on Russian messaging was created using OpenAI's text generation technology, like that behind ChatGPT, and other easily accessible AI tools for generating photographs and illustrations, Paw says, for a total cost of about $400.

Paw says the project shows that widely available generative AI tools make it much easier to create sophisticated information campaigns pushing state-backed propaganda.

"I don't think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam, or social engineering," Paw says in an email. Mitigations are possible, such as educating users to be watchful for manipulative AI-generated content, making generative AI systems try to block misuse, or equipping browsers with AI-detection tools. "But I think none of these things are really elegant or cheap or particularly effective," Paw says.

AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action 70 comments

On Wednesday, American attorneys general from all 50 states and four territories sent a letter to Congress urging lawmakers to establish an expert commission to study how generative AI can be used to exploit children through child sexual abuse material (CSAM). They also call for expanding existing laws against CSAM to explicitly cover AI-generated materials.

"As Attorneys General of our respective States and territories, we have a deep and grave concern for the safety of the children within our respective jurisdictions," the letter reads. "And while Internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult."

In particular, open source image synthesis technologies such as Stable Diffusion allow the creation of AI-generated pornography with ease, and a large community has formed around tools and add-ons that enhance this ability. Since these AI models are openly available and often run locally, there are sometimes no guardrails preventing someone from creating sexualized images of children, and that has rung alarm bells among the nation's top prosecutors. (It's worth noting that Midjourney, DALL-E, and Adobe Firefly all have built-in filters that bar the creation of pornographic content.)

"Creating these images is easier than ever," the letter reads, "as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see. And because many of these AI tools are 'open source,' the tools can be run in an unrestricted and unpoliced way."

As we have previously covered, it has also become relatively easy to create AI-generated deepfakes of people without their consent using social media photos.

Original Submission

The Age of Promptography 20 comments

What is photography, anyway?

"World's first AI art award ignites debate about what is photography."

The artist won the world's first artificial intelligence art award at the Ballarat International Foto Biennale with a lifelike image of sisters cuddling an octopus, which was created using computer prompts instead of a camera.

"Many people say my pictures make them uncomfortable ... When I explain that AI creates them as a kind of collage... many laugh, others are distressed and find them disgusting... "

Original Submission

New “Stable Video Diffusion” AI Model Can Animate Any Still Image 13 comments

On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video—with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.

Last year, Stability AI made waves with the release of Stable Diffusion, an "open weights" image synthesis model that kick-started a wave of open image synthesis and inspired a large community of hobbyists who have built on the technology with their own custom fine-tunings. Now Stability wants to do the same with AI video synthesis, although the tech is still in its infancy.

In our local testing, a 14-frame generation took about 30 minutes to create on an Nvidia RTX 3060 graphics card, but users can experiment with running the models much faster on the cloud through services like Hugging Face and Replicate (some of which you may need to pay for). In our experiments, the generated animation typically keeps a portion of the scene static and adds panning and zooming effects or animates smoke or fire. People depicted in photos often do not move, although we did get one Getty image of Steve Wozniak to slightly come to life.

Previously on SoylentNews:
Search: Stable Diffusion on SoylentNews.

Original Submission

Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says 34 comments

Law enforcement is continuing to warn that a "flood" of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, every attorney general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, while only a few states have specifically banned AI-generated non-consensual intimate imagery.

"Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation," Steve Grocki, the chief of the Justice Department's child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm's way and making it harder for law enforcement to find actual children being harmed.

In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested "for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft." A search of Thompson's iCloud revealed "four additional instances" where Thompson allegedly recorded other minors in the lavatory, as well as "over 50 images of a 9-year-old unaccompanied minor" sleeping in her seat. While police attempted to identify these victims, they also "further alleged that hundreds of images of AI-generated child pornography" were found on Thompson's phone.

The NYT report noted that in 2002, the Supreme Court struck down a law that had been on the books since 1996 preventing "virtual" or "computer-generated child pornography." South Carolina's attorney general, Alan Wilson, has said that AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of images depicting fake AI porn of pop star Taylor Swift went viral this month.

There's also the "Preventing Deepfakes of Intimate Images Act," which seeks to "prohibit the non-consensual disclosure of digitally altered intimate images." That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years' imprisonment for sharing harmful images.

Previously on SoylentNews:
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
Cheer Mom Used Deepfake Nudes and Threats to Harass Daughter's Teammates, Police Say - 20210314

Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by krishnoid on Saturday March 09, @05:19PM (2 children)

    by krishnoid (1156) on Saturday March 09, @05:19PM (#1348033)

    I mean, Microsoft's had problems with their artificial intelligence ventures before. Maybe it says more about Microsoft's roots -- after all, Bill Gates wasn't the most collaborative of businesspeople, and I bet it suffuses their whole corporate culture and direction today.

    • (Score: 1, Touché) by Anonymous Coward on Saturday March 09, @06:20PM

      by Anonymous Coward on Saturday March 09, @06:20PM (#1348042)
      Or maybe Microsoft is trying to get more interest in their AI stuff? After all sex and violence sells in many markets. Not saying they secretly got Shane to complain about this.

      It's probably copyright and/or trademark infringement if MS's AI is producing stuff that strongly resembles Disney's. More so if they're charging some people etc.
    • (Score: 4, Interesting) by Spamalope on Sunday March 10, @12:31AM

      by Spamalope (5233) on Sunday March 10, @12:31AM (#1348076) Homepage

      I wonder if any of this is driven by competitors deliberately trying to cause this (as with the long-standing practice of posting offensive content on competing platforms or politically opposed forums, then complaining). That wouldn't absolve anyone, but it would be a mitigating factor. At some point they're going to need a 'kid/worksafe adult/NSFW adult' switch that actually works for all of them.

  • (Score: 2, Interesting) by VLM on Saturday March 09, @05:26PM (2 children)

    by VLM (445) on Saturday March 09, @05:26PM (#1348035)

    "harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

    So it's Hollywood. Not impressed.

    Does this mean AI is Hollywood or Hollywood is AI or the same people with the same values run both?

    • (Score: 5, Insightful) by choose another one on Saturday March 09, @05:50PM (1 child)

      by choose another one (515) Subscriber Badge on Saturday March 09, @05:50PM (#1348038)

      > Does this mean AI is Hollywood or Hollywood is AI or the same people with the same values run both?

      Probably the last one.

      Also this got me: "inappropriate, sexually objectified image of a woman"

      So, not a man, or maybe he's sure the image is not a man because he asked for woman (be careful in Thailand)?
      It's AI Gen dude, it's not of man or woman, _it's_ _not_ _real_ (be nice if AI gen images could come with non removable warning label to that effect).

      Oh, and inappropriate by whose standards? Taliban, Iranian Morality Police?

      • (Score: 2) by Freeman on Monday March 11, @03:25PM

        by Freeman (732) on Monday March 11, @03:25PM (#1348250) Journal

        Microsoft engineer warns company’s AI tool creates violent, sexual images, ignores copyrights (From the CNBC Article.)

        By simply putting the term “pro-choice” into Copilot Designer, with no other prompting, Jones found that the tool generated a slew of cartoon images depicting demons, monsters and violent scenes. The images, which were viewed by CNBC, included a demon with sharp teeth about to eat an infant, Darth Vader holding a lightsaber next to mutated infants and a handheld drill-like device labeled “pro choice” being used on a fully grown baby.

        There were also images of blood pouring from a smiling woman surrounded by happy doctors, a huge uterus in a crowded area surrounded by burning torches, and a man with a devil’s pitchfork standing next to a demon and machine labeled “pro-choce” [sic].

        CNBC was able to independently generate similar images. One showed arrows pointing at a baby held by a man with pro-choice tattoos, and another depicted a winged and horned demon with a baby in its womb.

        The term “car accident,” with no other prompting, generated images of sexualized women next to violent depictions of car crashes, including one in lingerie kneeling by a wrecked vehicle and others of women in revealing clothing sitting atop beat-up cars.

        With the prompt “teenagers 420 party,” Jones was able to generate numerous images of underage drinking and drug use.

        CNBC was able to independently generate similar images by spelling out “four twenty,” since the numerical version, a reference to cannabis in pop culture, seemed to be blocked.

        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 4, Informative) by looorg on Saturday March 09, @07:12PM (3 children)

    by looorg (578) on Saturday March 09, @07:12PM (#1348044)

    I thought their tool knew what it was doing, state-of-the-art and all of that. To have it randomly spew out things (code, text, or profanity) shouldn't exactly inspire confidence in future customers. If this is what it does randomly, what else is it doing randomly? If it randomly just does things for shits and giggles then as a product it's a failure that can't be trusted to do anything.

    • (Score: 2) by ElizabethGreene on Saturday March 09, @10:24PM

      by ElizabethGreene (6748) Subscriber Badge on Saturday March 09, @10:24PM (#1348067) Journal

      The tool does not know what it's doing. It's a black box really. You can do content filtering on the prompts and run the output through a "is this smut?" detector, but things will fall through the cracks. It's the same problem you face when trying to write net-nanny software.

    • (Score: 4, Informative) by number11 on Sunday March 10, @01:18AM

      by number11 (1170) on Sunday March 10, @01:18AM (#1348079)

      Let's face it, if you can develop an AI that will not violate any human laws, it's going to be comatose. A human can't do that, why would you expect a simulacrum to do it?

    • (Score: 2) by SomeGuy on Sunday March 10, @02:54PM

      by SomeGuy (5632) on Sunday March 10, @02:54PM (#1348139)

      The entire problem with "AI" since DAY ONE has been that one never exactly knows what an AI system will do or why. But idiots still expect they can just plug it in for some application and have it do exactly what they think it will. Then they are shocked when it goes all sediment shaped sediment on them.

  • (Score: 4, Interesting) by ElizabethGreene on Saturday March 09, @10:16PM

    by ElizabethGreene (6748) Subscriber Badge on Saturday March 09, @10:16PM (#1348064) Journal

    Bias warning, I work for Microsoft and I've fiddled with AI.

    I've heard anecdotally that when Disney hires a new animator, they offer them access to their honest-to-god **stacks** of Rule 34 (and worse) content they've pulled off the internet. This is so they understand what will happen to characters they create. Their legal team pulls stuff off the internet every day, but it's a Sisyphean task. There is a never-ending fountain of smut even Pre-AI. How does AI make that worse? It's a people problem.

    Candidly, I've yet to see any AI images worse than what's on the 4chan/b thumbnail catalog on a slow day.

    A few gratitude sidebars here:
    1. Thank you so much to the Genshin Impact and Overwatch Rule34 contributors. Your contributions to Blender have made it an amazing piece of open-source software.
    2. Also, many thanks to the Furry community. I have an artist/costume designer relative that has scraped through some hard times with the help of "creative" commissioned pieces.

  • (Score: 3, Touché) by Snotnose on Sunday March 10, @01:06AM (1 child)

    by Snotnose (1623) on Sunday March 10, @01:06AM (#1348078)

    You're telling me no paintbrush or paint maker has ever sold their tools to someone who used them to create violent and/or sexual imagery they sold to kids?

    How do I know you've never been to a Comic-con, nor watched Saturday morning cartoons back in the day.

    for (glee in 1..34) println("Guilty!")
    • (Score: 3, Insightful) by boltronics on Sunday March 10, @03:39AM

      by boltronics (580) on Sunday March 10, @03:39AM (#1348093) Homepage Journal

      AI in these cases is less a tool to facilitate the creation of art and more akin to a piece of equipment that generates art for you.

      Perhaps a better analogy is a common car vs a self-driving car. Self-driving cars are illegal in most parts of the world (at least without human supervision ensuring they are working as intended). At some point they will likely prove to be safer than a human driver, legal frameworks will be created, and self-driving cars will become authorised for their ultimately intended use of removing human involvement from the driving process, aside from the person inputting where to go. A human won't be able to deliberately use the car to run down people or commit other illegal acts, as partial responsibility would be on the manufacturer. The car only needs to know the destination.

      AI is like a self-driving car, on the road today in all countries, without legal frameworks in place dictating how they need to operate, and without even being close to demonstrating what humans can do in terms of being fit for purpose — or at the very least, not doing something that is illegal in some way, be it the output, or even how it was trained. That is the issue.

      It's GNU/Linux dammit!