New method detects deepfake videos with up to 99% accuracy:
Computer scientists at UC Riverside have developed a method that detects manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art approaches. It also matches existing methods in cases where the facial identity, but not the expression, has been swapped, yielding a generalized approach for detecting any kind of facial manipulation. The achievement brings researchers a step closer to automated tools for flagging manipulated videos that spread propaganda or misinformation.
Developments in video editing software have made it easy to exchange the face of one person for another and to alter the expressions on original faces. As unscrupulous leaders and individuals deploy manipulated videos to sway political or social opinion, the ability to identify these videos is considered by many to be essential to protecting free democracies. Methods exist that can detect with reasonable accuracy when faces have been swapped, but identifying faces where only the expressions have been changed is harder, and to date no reliable technique has existed.
[...] The UC Riverside method divides the task into two components within a deep neural network. The first branch discerns facial expressions and feeds information about the regions that contain the expression, such as the mouth, eyes, or forehead, into a second branch, known as an encoder-decoder. The encoder-decoder architecture is responsible for manipulation detection and localization.
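The two-branch design described above can be sketched roughly as follows. This is a hypothetical toy illustration, not the authors' code: an "expression branch" scores facial regions (stand-ins for mouth, eyes, forehead), and an "encoder-decoder" consumes the image weighted by those scores to predict a per-pixel manipulation map. All layer shapes and operations are illustrative placeholders for real convolutional layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def expression_branch(face, n_regions=3):
    """Toy stand-in for the expression branch: score horizontal bands
    of the face (e.g. forehead, eyes, mouth) and softmax the scores
    into attention weights over those regions."""
    h = face.shape[0] // n_regions
    scores = np.array([face[i * h:(i + 1) * h].mean() for i in range(n_regions)])
    e = np.exp(scores - scores.max())
    return e / e.sum()  # attention weights, sum to 1

def encoder_decoder(face, region_weights):
    """Toy encoder-decoder: weight each region by the expression
    branch's output (the cross-branch feed described in the article),
    downsample to a compact code, then upsample back to a per-pixel
    manipulation map in [0, 1]."""
    n = len(region_weights)
    h = face.shape[0] // n
    weighted = face.copy()
    for i, w in enumerate(region_weights):      # inject expression info
        weighted[i * h:(i + 1) * h] *= w
    encoded = weighted.reshape(n, h, -1).mean(axis=(1, 2))  # "encoder"
    decoded = np.repeat(encoded, h)[:, None] * np.ones((1, face.shape[1]))
    return 1 / (1 + np.exp(-decoded))           # sigmoid -> [0, 1] mask

face = rng.random((6, 6))                       # stand-in 6x6 "face"
weights = expression_branch(face)
mask = encoder_decoder(face, weights)           # same shape as the input
```

The point of the sketch is the data flow, not the math: region-level expression information is computed first and fed into the second branch, which produces a localization map the same size as the input, so the network can say not just *whether* a face was manipulated but *where*.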
More information: Ghazal Mazaheri, Amit K. Roy-Chowdhury, Detection and Localization of Facial Expression Manipulations. arXiv:2103.08134v1 [cs.CV], arxiv.org/abs/2103.08134
(Score: 2, Insightful) by MIRV888 on Friday May 06 2022, @10:39PM (9 children)
Sciencey words are not going to convince the unwashed masses. It is a question of what news outlets people trust.
Unfortunately we are in deep sh1t on that level.
(Score: 3, Insightful) by Anonymous Coward on Friday May 06 2022, @11:17PM (8 children)
Don't worry, friend. The deepfake makers will use this detector to improve their deepfakes.
(Score: 4, Insightful) by Opportunist on Friday May 06 2022, @11:47PM (7 children)
What for? Have you seen just what harebrained bullshit some people believe? Why bother improving your lies when the crappy and cheap ones are absolutely sufficient?
(Score: 0) by Anonymous Coward on Saturday May 07 2022, @12:49AM (5 children)
If the detection rate is close to 100%, the social media giants can flag or delete the videos automatically. If the detection rate craters due to deepfake improvements, the videos can survive and spread.
(Score: 4, Insightful) by maxwell demon on Saturday May 07 2022, @04:42AM (3 children)
Not only that, they will even have added credibility, since they all passed that filter.
The Tao of math: The numbers you can count are not the real numbers.
(Score: -1, Spam) by Anonymous Coward on Saturday May 07 2022, @07:50AM (2 children)
99% is certainly way beyond janrinok's 16.37% success rate on aristarchus detection. Just sayin', please don't Spam Mod me, bro!!
(Score: -1, Spam) by Anonymous Coward on Saturday May 07 2022, @09:08AM (1 child)
Make that 14.897%. (Incorrect Spam mod thrown in anger and error) janrinok fails again. So sad. And Runaway rages in the blood of her second mending fantasies. Really sad.
(Score: 0) by adamantine on Sunday May 08 2022, @07:17AM
And now the specificity is down around 9.36%? Repeated iterations of false guesses result in lower accuracy. We should know this from Runaway's days at the range. But, I, personally, do not know. Keep Calm, and janrinok on, I say.
(Score: 2) by Opportunist on Saturday May 07 2022, @09:47AM
So flag it; all that does is make it more credible to the rabbit-hole enthusiasts, because "the man" wants to slander "truths".
(Score: 2) by Michael on Saturday May 07 2022, @08:25PM
"Shallowfakes"
(Score: 5, Insightful) by DannyB on Saturday May 07 2022, @12:06AM (6 children)
1. Deep fakes
2. 99% accurate deep fake detector
3. New deep fakes, trained using item (2) so that they are 99% likely NOT to be detected as deep fakes.
4. GOSUB 2.
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Saturday May 07 2022, @12:58AM
Proving the news reader was fake does not do anything about the news itself.
(Score: 4, Touché) by Anonymous Coward on Saturday May 07 2022, @02:01AM (4 children)
deep fakes
deep deep fakes
deep deep deep fakes
deep deep deep deep fakes
deep deep deep deep deep *** STACK OVERFLOW ***
(Score: 2) by Mojibake Tengu on Saturday May 07 2022, @02:44AM (3 children)
Modded informative because the parent is so pre-conditioned by dogmatic scholarly hypnotic framing to hate GOTO that he improperly wrote GOSUB instead.
The edge of 太玄 cannot be defined, for it is beyond every aspect of design
(Score: 0) by Anonymous Coward on Saturday May 07 2022, @11:13AM (1 child)
Well, in practice, this is an infinite loop.
( We are screwed. )
(Score: 1, Informative) by Anonymous Coward on Sunday May 08 2022, @02:47PM
GOTO would be an infinite loop. GOSUB will eventually crash out due to depth of recursion.
(Score: 2) by DannyB on Sunday May 08 2022, @08:46PM
By 1980 it was frowned upon to use GOTO. Languages should not even have GOTO. So the next bestest thing is to use GOSUB. What could go wrong?
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Saturday May 07 2022, @01:48AM
For those of us shallow nerds.
Come on, don't discriminate - we may be shallow, but we're still nerds.
(Score: -1, Offtopic) by Anonymous Coward on Saturday May 07 2022, @02:29AM