from the every-smile-you-fake-I'll-be-watching-you dept.
Deepfakes Pose a Growing Danger, New Research Says
Deepfakes Pose a Growing Danger, New Research Says:
A new report from VMware shows that cybersecurity professionals are seeing more deepfakes being used in cyber attacks.
Deepfakes use artificial intelligence to manipulate video and audio to make it seem like someone is saying or doing something they're not. Deepfakes are increasingly being used in cyberattacks, a new report said, as the threat of the technology moves from hypothetical harms to real ones.
Reports of attacks using the face- and voice-altering technology jumped 13% last year, according to VMware's annual Global Incident Response Threat Report, which was released Monday. In addition, 66% of the cybersecurity professionals surveyed for this year's report said they had spotted one in the past year.
"Deepfakes in cyberattacks aren't coming," Rick McElroy, principal cybersecurity strategist at VMware, said in a statement. "They're already here."
Deepfakes use artificial intelligence to make it look as if a person is doing or saying things he or she actually isn't. The technology entered the mainstream in 2019, sparking fears it could convincingly re-create other people's faces and voices. Victims could see their likeness used for artificially created pornography and the technique could be used to sow political upheaval, experts warned.
While early deepfakes were largely easy to spot, the technology has since evolved and become much more convincing. In March, a video posted to social media appeared to show Ukrainian President Volodymyr Zelenskyy directing his soldiers to surrender to Russian forces. It was quickly denounced by Zelenskyy but showed the potential for harm posed by deepfakes.
How to Spot a Deepfake? One Simple Trick is All You Need
How to spot a deepfake? One simple trick is all you need:
With criminals beginning to use deepfake video technology to spoof an identity in live online job interviews, security researchers have highlighted one simple way to spot a deepfake: just ask the person to turn their face sideways. The reason this works as a handy authentication check is that deepfake AI models, while good at recreating front-on views of a person's face, aren't good at side-on or profile views like the ones you might see in a mug shot.
Metaphysics.ai highlights the instability of recreating full 90° profile views in live deepfake videos, making the side profile check a simple and effective authentication procedure for companies conducting video-based online job interviews.
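As a rough illustration of how an interviewer might automate the compliance side of this check, here is a minimal Python sketch that scans a recording for a genuine side-on view using OpenCV's stock Haar cascades. Everything here is an assumption for illustration (the cascades as a crude stand-in for a real head-pose estimator, the detection thresholds, and the file name interview.mp4); the article's actual point is that the human eye catches the glitchy profile, so a script like this only confirms the candidate really turned.

import cv2

# Sketch only: OpenCV's bundled Haar cascades stand in for a proper head-pose
# estimator, and "interview.mp4" is a hypothetical recording of the interview.
frontal = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_profileface.xml")

cap = cv2.VideoCapture("interview.mp4")
frontal_frames = profile_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(frontal.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0:
        frontal_frames += 1
    # The stock profile cascade only detects left-facing profiles, so the
    # mirrored frame is checked as well to catch turns in either direction.
    if (len(profile.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0
            or len(profile.detectMultiScale(cv2.flip(gray, 1), scaleFactor=1.1, minNeighbors=5)) > 0):
        profile_frames += 1
cap.release()

# A candidate who was asked to turn but never registers a profile view
# is a cue to look more closely.
print(f"frontal frames: {frontal_frames}, profile frames: {profile_frames}")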
Deepfakes, or synthetic AI-enabled recreations of audio, image, and video content of humans, have been on the radar as a potential identity threat for several years.
However, in June, the Federal Bureau of Investigation warned it had seen an uptick in scammers using deepfake audio and video when participating in online job interviews, which became more widely used during the pandemic. The FBI noted that tech vacancies were targeted by deepfake candidates because the roles would give the attacker access to corporate IT databases, private customer data, and proprietary information.
The FBI warned that video participants could spot a deepfake when coughing, sneezing or other sounds don't line up with what's in the video. The side profile check could be a quick and easy-to-follow way for humans to check before beginning an online video meeting.
Writing for Metaphysics.ai, Martin Anderson details the company's experiments. Most of the deepfakes it created failed obviously when the head reached 90° and revealed elements of the person's actual side profile. The profile-view recreation fails because of a lack of good-quality training data for profiles, which forces the deepfake model to invent, or "inpaint", much of what's missing.
[...] Another useful way to rattle a live deepfake model is to ask the video participant to wave their hands in front of their face. It disrupts the model and reveals latency and quality issues with the superimposition over the deepfake face.
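If one wanted to flag that hand-wave disruption automatically rather than eyeball it, a crude heuristic is to watch for sudden frame-to-frame jumps in tracked face landmarks while the hand passes over the face. The Python sketch below does this with MediaPipe's FaceMesh; the spike threshold and the file name interview.mp4 are illustrative guesses, not anything the article specifies.

import cv2
import mediapipe as mp
import numpy as np

# Sketch only: assumes the occlusion glitches described above show up as sudden
# jumps in tracked landmarks; "interview.mp4" and the 10x-median threshold are
# hypothetical choices, not a tested detector.
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)

cap = cv2.VideoCapture("interview.mp4")
prev = None
jitter = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        prev = None  # face lost entirely, e.g. fully covered by the hand
        continue
    pts = np.array([(lm.x, lm.y) for lm in results.multi_face_landmarks[0].landmark])
    if prev is not None:
        # Mean landmark displacement between consecutive frames; a live
        # deepfake that loses its lock during the wave spikes here.
        jitter.append(np.linalg.norm(pts - prev, axis=1).mean())
    prev = pts
cap.release()
face_mesh.close()

if jitter and max(jitter) > 10 * np.median(jitter):
    print("Landmark tracking spiked during the clip; worth a closer look.")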
(Score: 0, Troll) by Anonymous Coward on Wednesday August 10 2022, @11:53PM
I'll take the synthetic revenge pornography and "political upheaval" rather than the continued dangerous sanitization of the Internet.
(Score: 4, Insightful) by Barenflimski on Thursday August 11 2022, @12:58AM (4 children)
This is another one of them "Duh" things.
Since when is it good for a human being to be imitated so that others believe it's them, while saying things they didn't say?
Do these people live in labs their whole lives without contact with humans and not know these things? The only thing these papers will ever be used for is for some politician, or some corporation, to say, "Well, this study says it's not as bad as you think, therefore deepfakes aren't that bad. We'll see how it plays out...."
I find it difficult to watch all of the parsing that goes on to justify things that ultimately harm most of us.
(Score: 0) by Anonymous Coward on Thursday August 11 2022, @03:20AM (1 child)
I don't know, that second story was pretty interesting.
(Score: 2) by deimtee on Thursday August 11 2022, @03:49AM
It was basically a call to deep-fake makers: "Hey, you need to improve your profile algos."
No problem is insoluble, but at Ksp = 2.943×10⁻²⁵ Mercury Sulphide comes close.
(Score: 2) by Freeman on Thursday August 11 2022, @01:16PM
Research seems to point to yes.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Insightful) by bussdriver on Thursday August 11 2022, @03:03PM
There are so many unbelievably gullible fools ALREADY who can be tricked so easily -- 5, nearing 6, years of Trump have made it clear to anybody paying the slightest attention who isn't a rube.
Just claim the opposite over and over while hurling ad hominems; it doesn't matter if you slipped and said the truth in the preceding paragraph. Producing deepfakes is expensive, and it provides a path for your enemies to retaliate; it's far easier to provide no evidence and just invoke the tech in the classic "lying press" strategy (rendered in modern English as "fake news"; FYI, the term originated in Germany a century ago).
(Score: 2) by inertnet on Thursday August 11 2022, @07:04AM (2 children)
Deepfakes are a perfect tool for dictators, for when they're sick or even dead.
(Score: 0, Offtopic) by Anonymous Coward on Thursday August 11 2022, @08:06AM (1 child)
I notice that you never see Biden in profile.
(Score: 2) by hendrikboom on Thursday August 11 2022, @08:16PM
If you did, I suspect the deep-fake trainers would jump at the opportunity to get profile training data.