A Spanish youth court has sentenced 15 minors to one year of probation each for spreading AI-generated nude images of female classmates in two WhatsApp groups.
The minors were charged with 20 counts of creating child sex abuse images and 20 counts of offenses against their victims' moral integrity.
[...] Many of the victims were too ashamed to speak up when the inappropriate fake images began spreading last year. Prior to the sentencing, a mother of one of the victims told The Guardian that girls like her daughter "were completely terrified and had tremendous anxiety attacks because they were suffering this in silence."
[...] Teens' use of AI to sexualize and harass classmates has become an alarming global trend. Police have probed disturbing cases in both high schools and middle schools in the US, and earlier this year, the European Union proposed expanding its definition of child sex abuse to more effectively "prosecute the production and dissemination of deepfakes and AI-generated material." Last year, US President Joe Biden issued an executive order urging lawmakers to pass more protections.
[...] In an op-ed for The Guardian today, journalist Lucia Osborne-Crowley advocated for laws restricting sites used to both generate and surface deepfake pornography, including regulating this harmful content when it appears on social media sites and search engines.
[...] An FAQ says that "WhatsApp has zero tolerance for child sexual exploitation and abuse, and we ban users when we become aware they are sharing content that exploits or endangers children," but it does not mention AI.
Previously on SoylentNews:
A High School's Deepfake Porn Scandal is Pushing US Lawmakers Into Action - 20231203
Cheer Mom Used Deepfake Nudes and Threats to Harass Daughter's Teammates, Police Say - 20210314
Related stories on SoylentNews:
Microsoft Unveils Deepfake Tech That's Too Good To Release - 20240422
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
Taylor Swift Deepfakes Spark Calls in Congress for New Legislation - 20240127
Jail Terms in UK for Sharing or Creating Explicit Images Without Consent - 20230627
Deepfakes Pose a Growing Danger, New Research Says - 20220809
Man Arrested for Uncensoring Japanese Porn with AI in First Deepfake Case - 20211023
FBI Warns Imminent Deepfake Attacks "Almost Certain" - 20210328
MIT Team Creates Deepfake of President Nixon Reading "Moon Disaster" Apollo 11 Contingency Speech - 20200721
This Open-Source Program Deepfakes You During Zoom Meetings, in Real Time - 20200421
I Created My Own Deepfake—It Took Two Weeks and Cost $552 - 20191219
Related Stories
Submitted via IRC for SoyCow4408
Deepfake technology uses deep neural networks to convincingly replace one face with another in a video. The technology has obvious potential for abuse and is becoming ever more widely accessible. Many good articles have been written about the important social and political implications of this trend.
This isn't one of those articles. Instead, in classic Ars Technica fashion, I'm going to take a close look at the technology itself: how does deepfake software work? How hard is it to use—and how good are the results?
I thought the best way to answer these questions would be to create a deepfake of my own. My Ars overlords gave me a few days to play around with deepfake software and a $1,000 cloud computing budget. A couple of weeks later, I have my result, which you can see above. I started with a video of Mark Zuckerberg testifying before Congress and replaced his face with that of Lieutenant Commander Data (Brent Spiner) from Star Trek: The Next Generation. Total spent: $552.
The video isn't perfect. It doesn't quite capture the full details of Data's face, and if you look closely you can see some artifacts around the edges.
Still, what's remarkable is that a neophyte like me can create fairly convincing video so quickly and for so little money. And there's every reason to think deepfake technology will continue to get better, faster, and cheaper in the coming years.
In this article I'll take you with me on my deepfake journey. I'll explain each step required to create a deepfake video. Along the way, I'll explain how the underlying technology works and explore some of its limitations.
This Open-Source Program Deepfakes You During Zoom Meetings, in Real Time:
Video conferencing apps like Zoom and Skype are usually boring and often frustrating. With more people than ever using this software to work from home, users are finding new ways to spice up endless remote meetings and group hangs: looping videos of themselves looking engaged, adding wacky backgrounds, and now using deepfake filters to impersonate celebrities when they're tired of their own faces staring back from the front-facing camera window.
Avatarify is a program that superimposes someone else's face onto yours in real time during video meetings. The code is available on GitHub for anyone to use.
Programmer Ali Aliev used the open-source code from the "First Order Motion Model for Image Animation," published on the arXiv preprint server earlier this year, to build Avatarify. First Order Motion, developed by researchers at the University of Trento in Italy as well as Snap, Inc., drives a photo of a person using a video of another person—such as footage of an actor—without any prior training on the target image.
With other face-swap technologies, like deepfakes, the algorithm is trained on the face you want to swap, usually requiring several images of the person's face you're trying to animate. This model can do it in real-time, by training the algorithm on similar categories of the target (like faces).
"I ran [the First Order Model] on my PC and was surprised by the result. What's important, it worked fast enough to drive an avatar real-time," Aliev told Motherboard. "Developing a prototype was a matter of a couple of hours and I decided to make fun of my colleagues with whom I have a Zoom call each Monday. And that worked. As they are all engineers and researchers, the first reaction was curiosity and we soon began testing the prototype."
A Nixon Deepfake, a 'Moon Disaster' Speech and an Information Ecosystem at Risk
What can former U.S. president Richard Nixon possibly teach us about artificial intelligence today and the future of misinformation online? Nothing. The real Nixon died 26 years ago.
But an AI-generated likeness of him shines new light on a quickly evolving technology with sizable implications, both creative and destructive, for our current digital information ecosystem. Starting in 2019, media artists Francesca Panetta and Halsey Burgund at the Massachusetts Institute of Technology teamed up with two AI companies, Canny AI and Respeecher, to create a posthumous deepfake. The synthetic video shows Nixon giving a speech he never intended to deliver—half a century after the subject it addresses.
Here's the (real) backstory: In July 1969, as the Apollo 11 astronauts glided through space on their trajectory toward the moon, William Safire, then one of Nixon's speechwriters, wrote "In Event of Moon Disaster" as a contingency. The speech is a beautiful homage to Neil Armstrong and Edwin "Buzz" Aldrin, the two astronauts who descended to the lunar surface—never to return in this version of history. It ends by saying, "For every human being who looks up at the moon in the nights to come will know that there is some corner of another world that is forever mankind."
The full deepfake speech can be viewed at https://moondisaster.org.
Cheer mom used deepfake nudes and threats to harass daughter's teammates, police say:
An anonymous cyberbully in Pennsylvania seemed to have one goal in mind: Force a trio of cheerleaders off their formidable local team, the Victory Vipers.
Doctored images were sent to the coach of the competitive squad that appeared to show the teen girls in humiliating or compromising situations that could get them kicked off the team, like appearing nude, drinking alcohol and using drugs, according to the criminal complaint.
In anonymous texts and calls, the bully told one girl "you should kill yourself."
When police unmasked the alleged culprit late last year, they found the bully hiding within the Victory Viper circle.
Raffaela Spone, a local cheer mom whose daughter is on the team, was charged last week with three misdemeanor counts of cyber harassment of a child and related offenses, according to the Bucks County District Attorney.
[...] If convicted, Spone could face between six months and a year in prison, though Weintraub, the district attorney, said the maximum penalty is unlikely for low-level misdemeanors.
Citron said the criminal justice system still lags behind deepfake technology when it comes to investigations and prosecutions. She and Weintraub each said deepfakes and similar technology pose a broader threat to the truth by muddying the information ecosystem.
"It's disturbing to me because we rely on being able to authenticate evidence as a foundation of the criminal justice system," Weintraub said. "If everyday people are capable of using deepfakes, that's going to make doing our job a lot more difficult."
[Ed. note: As much as this goes against the norm here, I strongly encourage folk to read the entire linked article. We continue to witness dramatic advances in computer capabilities. Just consider what we already have today: AMD's Epyc and Threadripper processors, Apple Silicon (of which the M1 processor is only a taste), multi-terabyte DDR6 memories, huge farms of SSD storage all help leverage the tremendous capabilities of the latest ray-tracing video cards. Consider this a PSA (Public Service Announcement): You've Been Warned.-martyb)
FBI Warns Imminent Deepfake Attacks "Almost Certain" - The Debrief:
The Federal Bureau of Investigation (FBI) has issued a unique Private Industry Notification (PIN) on deepfakes, warning companies that "malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months."
[...] Creating or manipulating images and videos to depict events that never actually happened is hardly new. However, advances in machine learning and artificial intelligence have allowed for the creation of compelling and nearly indistinguishable fake videos and images.
Legacy photo editing software uses various graphic editing techniques to alter, change, or enhance images. Photo editing software such as Photoshop can manipulate pictures to include details or even people that weren't originally in a photo. However, creating convincing false images is highly dependent on a user's skill with the editing software.
In contrast, deepfakes use machine learning, specifically a type of neural network called an autoencoder. An encoder compresses an image into a lower-dimensional latent space, and a decoder then reconstructs an image from that latent representation.
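The encode/decode idea can be sketched in a few lines of numpy. This is a hypothetical, stripped-down illustration: a *linear* autoencoder whose optimal weights are computed directly with an SVD rather than trained, on toy data instead of face images. Real deepfake autoencoders are deep convolutional networks trained per face, but the compress-then-reconstruct structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 flattened 8x8 "images" (64 pixels each) that secretly
# live on an 8-dimensional subspace, so a small latent space suffices.
latent_dim = 8
X = rng.normal(size=(200, latent_dim)) @ rng.normal(size=(latent_dim, 64))

# For a linear autoencoder, the optimal weights are the top principal
# components of the data, obtainable in closed form via SVD (no training).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:latent_dim].T          # shape (64, latent_dim)

def encode(x):
    """Compress an image into the lower-dimensional latent space."""
    return x @ W               # (..., 64) -> (..., 8)

def decode(z):
    """Reconstruct an image from its latent representation."""
    return z @ W.T             # (..., 8) -> (..., 64)

z = encode(X)                  # latent codes, shape (200, 8)
X_hat = decode(z)              # reconstructions, shape (200, 64)

print(z.shape)                 # (200, 8)
print(np.allclose(X, X_hat))   # True: 64 pixels recovered from 8 numbers
```

A face-swap pipeline exploits this by training one shared encoder with two decoders, one per face; feeding person A's latent code into person B's decoder produces B's face with A's expression.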
Japanese police on Monday arrested a 43-year-old man for using artificial intelligence to effectively unblur pixelated porn videos, in the first criminal case in the country involving the exploitative use of the powerful technology.
Masayuki Nakamoto, who runs his own website in the southern prefecture of Hyogo, lifted images of porn stars from Japanese adult videos and doctored them with the same method used to create realistic face swaps in deepfake videos.
But instead of changing faces, Nakamoto used machine learning software to reconstruct the blurred parts of the video based on a large set of uncensored nudes and sold the content online. Penises and vaginas are pixelated in Japanese porn because an obscenity law forbids the explicit depictions of genitalia.
Nakamoto reportedly made about 11 million yen ($96,000) by selling over 10,000 manipulated videos, though he was arrested specifically for selling 10 fake photos at about 2,300 yen ($20) each.
Deepfakes Pose a Growing Danger, New Research Says:
A new report from VMware shows that cybersecurity professionals are seeing more deepfakes being used in cyber attacks.
Deepfakes use artificial intelligence to manipulate video and audio to make it seem like someone is saying or doing something that they're not. Deepfakes are increasingly being used in cyberattacks, a new report said, as the threat of the technology moves from hypothetical harms to real ones.
Reports of attacks using the face- and voice-altering technology jumped 13% last year, according to VMware's annual Global Incident Response Threat Report, which was released Monday. In addition, 66% of the cybersecurity professionals surveyed for this year's report said they had spotted one in the past year.
"Deepfakes in cyberattacks aren't coming," Rick McElroy, principal cybersecurity strategist at VMware, said in a statement. "They're already here."
Deepfakes use artificial intelligence to make it look as if a person is doing or saying things he or she actually isn't. The technology entered the mainstream in 2019, sparking fears it could convincingly re-create other people's faces and voices. Victims could see their likeness used for artificially created pornography and the technique could be used to sow political upheaval, experts warned.
While early deepfakes were largely easy to spot, the technology has since evolved and become much more convincing. In March, a video posted to social media appeared to show Ukrainian President Volodymyr Zelenskyy directing his soldiers to surrender to Russian forces. It was quickly denounced by Zelenskyy but showed the potential for harm posed by deepfakes.
Jail terms for sharing or creating explicit images without consent:
People caught sharing or creating explicit images without consent could face time in jail in England and Wales.
Amendments to the Online Safety Bill will introduce a six-month prison term for sharing deepfake and revenge porn. This would rise to two years if intent to cause distress, alarm or humiliation, or to obtain sexual gratification can be proved.
Those who share an image for sexual gratification could also be placed on the sex offenders' register.
"Revenge porn" is sharing an intimate image without consent. "Deepfake porn" involves creating a fake explicit image or video of a person. Revenge porn was criminalised in 2015 but up until now prosecutors had to prove there was an intention to cause humiliation or distress.
[...] The government announced its intention to legislate last year, and the amendments are part of the Online Safety Bill, which is due to be voted on by MPs later this month before it becomes law.
A high school's deepfake porn scandal is pushing US lawmakers into action:
Efforts from members of Congress to clamp down on deepfake pornography are not entirely new. In 2019 and 2021, Representative Yvette Clarke introduced the DEEPFAKES Accountability Act, which requires creators of deepfakes to watermark their content. And in December 2022, Representative Morelle, who is now working closely with Francesca, introduced the Preventing Deepfakes of Intimate Images Act. His bill focuses on criminalizing the creation and distribution of pornographic deepfakes without the consent of the person whose image is used. Both efforts, which didn't have bipartisan support, stalled in the past.
But recently, the issue has reached a "tipping point," says Hany Farid, a professor at the University of California, Berkeley, because AI has grown much more sophisticated, making the potential for harm much more serious. "The threat vector has changed dramatically," says Farid. Creating a convincing deepfake five years ago required hundreds of images, he says, which meant those at greatest risk for being targeted were celebrities and famous people with lots of publicly accessible photos. But now, deepfakes can be created with just one image.
Farid says, "We've just given high school boys the mother of all nuclear weapons for them, which is to be able to create porn with [a single image] of whoever they want. And of course, they're doing it."
Clarke and Morelle, both Democrats from New York, have reintroduced their bills this year. Morelle's now has 18 cosponsors from both parties, four of whom joined after the incident involving Francesca came to light—which indicates there could be real legislative momentum to get the bill passed. Then just this week, Representative Kean, one of the cosponsors of Morelle's bill, released a related proposal intended to push forward AI-labeling efforts—in part in response to Francesca's appeals.
Taylor Swift deepfakes spark calls in Congress for new legislation:
Deepfakes use artificial intelligence (AI) to make a video of someone by manipulating their face or body. A study in 2023 found that there has been a 550% rise in the creation of doctored images since 2019, fuelled by the emergence of AI.
US Representative Joe Morelle called the spread of the pictures "appalling".
In a statement, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.
It added: "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed." While many of the images appear to have been removed at the time of publication, one photo of Swift was viewed a reported 47 million times before being taken down.
[...] There are currently no federal laws against the sharing or creation of deepfake images, though there have been moves at state level to tackle the issue.
In the UK, the sharing of deepfake pornography became illegal as part of its Online Safety Act in 2023.
Law enforcement is continuing to warn that a "flood" of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.
Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, every attorney general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, while only a few states have specifically banned AI-generated non-consensual intimate imagery.
[...]
"Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation," Steve Grocki, the chief of the Justice Department's child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm's way and making it harder for law enforcement to find actual children being harmed.In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested "for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft." A search of Thompson's iCloud revealed "four additional instances" where Thompson allegedly recorded other minors in the lavatory, as well as "over 50 images of a 9-year-old unaccompanied minor" sleeping in her seat. While police attempted to identify these victims, they also "further alleged that hundreds of images of AI-generated child pornography" were found on Thompson's phone.
[...]
The NYT report noted that in 2002, the Supreme Court struck down a law that had been on the books since 1996 preventing "virtual" or "computer-generated child pornography." South Carolina's attorney general, Alan Wilson, has said that AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of images depicting fake AI porn of pop star Taylor Swift went viral this month.
[...]
There's also the "Preventing Deepfakes of Intimate Images Act," which seeks to "prohibit the non-consensual disclosure of digitally altered intimate images." That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years' imprisonment for sharing harmful images.
Previously on SoylentNews:
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
Cheer Mom Used Deepfake Nudes and Threats to Harass Daughter's Teammates, Police Say - 20210314
Arthur T Knackerbracket has processed the following story:
Microsoft this week demoed VASA-1, a framework for creating videos of people talking from a still image, audio sample, and text script, and claims – rightly – it's too dangerous to be released to the public.
These AI-generated videos, in which people can be convincingly animated to speak scripted words in a cloned voice, are just the sort of thing the US Federal Trade Commission warned about last month, after previously proposing a rule to prevent AI technology from being used for impersonation fraud.
Microsoft's team acknowledge as much in their announcement, which explains the technology is not being released due to ethical considerations. They insist that they're presenting research for generating virtual interactive characters and not for impersonating anyone. As such, there's no product or API planned.
"Our research focuses on generating visual affective skills for virtual AI avatars, aiming for positive applications," the Redmond boffins state. "It is not intended to create content that is used to mislead or deceive.
"However, like other related content generation techniques, it could still potentially be misused for impersonating humans. We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection."
Kevin Surace, Chair of Token, a biometric authentication biz, and frequent speaker on generative AI, told The Register in an email that while there have been prior technology demonstrations of faces animated from a still frame and cloned voice file, Microsoft's demonstration reflects the state of the art.
"The implications for personalizing emails and other business mass communication is fabulous," he opined. "Even animating older pictures as well. To some extent this is just fun and to another it has solid business applications we will all use in the coming months and years."
(Score: 4, Funny) by Rosco P. Coltrane on Saturday July 13 2024, @11:47AM (5 children)
I just can't wait. It's turning out to be all the greatness OpenAI promised it would be and more...
(Score: 2, Insightful) by khallow on Saturday July 13 2024, @12:37PM (2 children)
One can't regulate without tools to control.
(Score: 5, Insightful) by Thexalon on Saturday July 13 2024, @04:11PM (1 child)
To be fair, this instance of prosecution wasn't going after the social media platforms, but the kids who created what amounted to kiddie porn of their classmates. Which, you know, seems to be putting the responsibility in the right place. And probation for doing it seems appropriate as well, and since they're juveniles it's going to get erased soon enough from their record.
And a message to the teenage boys: If you want to know what your classmates look like nekkid, either have awkward teenage moments with them, or use your imagination. It's not that difficult, I promise you.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 2, Insightful) by Anonymous Coward on Saturday July 13 2024, @07:55PM
Much more harm is done to children when you teach them that sex is evil. It is the suppression that makes them crazy, and results in deviant anti-social behavior, making men more aggressive and women more submissive, which I guess is the purpose. Notice that control, prohibition of sex is at the very root of patriarchal authoritarianism. It is the ultimate weapon. Who are we to deny the tyrants?
(Score: 2, Disagree) by corey on Saturday July 13 2024, @11:47PM (1 child)
Yeah. The sad thing for me is that the kids (that's what they are, immature and dealing with stuff they don't understand) are the ones who got their arse kicked for it. No damage to WhatsApp or the AI product developer which enabled all this. Scot-free. It's so absurdly upside down. I did weird shit while growing up too, looking back I have no idea why I did it but I was just an immature kid growing up and trying things, learning, somewhat ignorant or unaware of either safety or the law or even good morality. I'm talking back in the late 80s and 90s before the internet.
(Score: 3, Insightful) by Freeman on Monday July 15 2024, @03:19PM
They're learning life lessons. Unfortunately, technology has gotten to the point where you can very easily do really stupid stuff for the entire world to see. This is more of a breakdown in parenting and/or morality than anything. Though, even with good parents, kids can do some really mean and/or really stupid stuff. It's almost as if they may make poor decisions due to hormones / inexperience / etc . . .
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 1, Troll) by Snotnose on Saturday July 13 2024, @01:12PM
At least they weren't watching South Korean videos in North Korea [msn.com]
Every time a Christian defends Trump an angel loses its lunch.
(Score: 1, Interesting) by Anonymous Coward on Saturday July 13 2024, @06:24PM (1 child)
We are being ruled by an angry mob. We have to neutralize these people. You just have to assume the images are fake, and thus harmless.
You certainly have no right to regulate cartoon drawings!
I have to admit though that these are ideal distractions from the political/social psychoses we suffer over sex
(Score: 0) by Anonymous Coward on Monday July 15 2024, @07:10PM
Flamebait
And I'm being modded by an angry mob (of one)
It's totally irrational to deny the truth...
word...