Submitted via IRC for SoyCow8093
The rate at which deepfake videos are advancing is both impressive and deeply unsettling. But researchers have described a new method for detecting a "telltale sign" of these manipulated videos, which map one person's face onto the body of another. It's a flaw even the average person would notice: a lack of blinking.
Researchers from the University at Albany, SUNY's computer science department recently published a paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking." The paper details how they combined two neural networks to more effectively expose synthesized face videos, which often overlook "spontaneous and involuntary physiological activities such as breathing, pulse and eye movement."
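The paper's actual detector is a learned recurrent network, but the underlying signal is easy to illustrate with the classic eye-aspect-ratio (EAR) heuristic from the blink-detection literature, which the authors cite as a baseline. A minimal sketch, assuming six (x, y) eye landmarks ordered p1..p6 around each eye (the dlib-style convention); the 0.2 threshold and frame counts here are illustrative assumptions, not the paper's values:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) landmarks ordered p1..p6 around the eye:
    the two vertical landmark distances over twice the horizontal
    distance. EAR drops toward zero when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of
    at least `min_frames` consecutive frames below `threshold`.
    A synthesized face that never blinks yields a count of zero."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at end of clip
        blinks += 1
    return blinks
```

In practice the landmarks would come from a face tracker run per frame; a clip whose blink count stays at zero over many seconds is the "telltale sign" the researchers exploit.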
Source: https://gizmodo.com/most-deepfake-videos-have-one-glaring-flaw-1826869949
Related Stories
Currently, shots from multiple angles are needed to get a realistic deepfake. Russian researchers have now taken this a step further, generating realistic video sequences based on a single photo.
Researchers trained the algorithm to understand facial features' general shapes and how they behave relative to each other, and then to apply that information to still images. The result was a realistic video sequence of new facial expressions from a single frame.
As a demonstration, they provide details and synthesized video sequences of historical figures such as Albert Einstein and Salvador Dali, as well as sequences based on paintings such as the Mona Lisa.
The authors are aware of the potential downsides of their technology and address this:
We realize that our technology can have a negative use for the so-called "deepfake" videos. However, it is important to realize, that Hollywood has been making fake videos (aka "special effects") for a century, and deep networks with similar capabilities have been available for the past several years (see links in the paper). Our work (and quite a few parallel works) will lead to the democratization of the certain special effects technologies. And the democratization of the technologies has always had negative effects. Democratizing sound editing tools lead to the rise of pranksters and fake audios, democratizing video recording lead to the appearance of footage taken without consent. In each of the past cases, the net effect of democratization on the World has been positive, and mechanisms for stemming the negative effects have been developed. We believe that the case of neural avatar technology will be no different. Our belief is supported by the ongoing development of tools for fake video detection and face spoof detection alongside with the ongoing shift for privacy and data security in major IT companies.
While it works with as little as one frame to learn from, the technology gains accuracy and 'identity preservation' from having multiple frames available. This becomes obvious in the synthesized Mona Lisa sequences, which, while each faithful to the original painting, appear to the human eye to depict three different individuals.
Journal Reference: https://arxiv.org/abs/1905.08233v1
Related Coverage
Most Deepfake Videos Have One Glaring Flaw: A Lack of Blinking
My Struggle With Deepfakes
Discord Takes Down "Deepfakes" Channel, Citing Policy Against "Revenge Porn"
AI-Generated Fake Celebrity Porn Craze "Blowing Up" on Reddit
As Fake Videos Become More Realistic, Seeing Shouldn't Always be Believing
(Score: 3, Insightful) by Anonymous Coward on Sunday June 17 2018, @02:27AM (4 children)
Algorithm being updated as we speak.
Reminds me, I need to do some research on this topic for... uhh, research.
(Score: 1) by Ethanol-fueled on Sunday June 17 2018, @02:35AM (1 child)
Yet another deep-state fight percolating to the surface of American consciousness.
(Score: 2, Funny) by Anonymous Coward on Sunday June 17 2018, @02:52AM
AC back again.
Research was not successful. Somehow I ended up watching celeb "tribute videos" on xhamster. Would not recommend -1
(Score: 3, Informative) by Anonymous Coward on Sunday June 17 2018, @06:08PM (1 child)
Indeed - already fixed:
https://web.stanford.edu/~zollhoef/papers/SG2018_DeepVideo/page.html [stanford.edu]
(Score: 2) by takyon on Sunday June 17 2018, @06:53PM
That's what I'm talking about +1
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by physicsmajor on Sunday June 17 2018, @02:46AM (5 children)
A pair of adversarial CNNs will make adding these as easy as pushing a button.
(Score: 0) by Anonymous Coward on Sunday June 17 2018, @03:08AM (1 child)
Sounds f-ing hot! Post pics.
(Score: 0) by Anonymous Coward on Sunday June 17 2018, @04:41AM
Pics? I want video of people clicking that button!
(Score: 2) by takyon on Sunday June 17 2018, @05:05AM (2 children)
As long as you match the skin color, pretty much anyone's eyelids could do.
This is a minor blip in Deepfakes' great long journey, which is only beginning. The hardware will get at least an order of magnitude better, and the software will improve as well.
(Score: 0) by Anonymous Coward on Sunday June 17 2018, @05:47AM (1 child)
Can they help with plot-lines?
Actually, the ultimate would be to create Deepfake versions of real movies but with sex scenes added. Basically, porn parodies but with the original cast.
(Score: 3, Funny) by Anonymous Coward on Sunday June 17 2018, @12:27PM
The Cosby Show will never be the same again.
(Score: 2) by janrinok on Sunday June 17 2018, @08:50AM
Who the hell is looking at the eyes? Damn perverts!
(Score: 3, Funny) by maxwell demon on Sunday June 17 2018, @09:37AM
No, this isn't a deepfake. They are not blinking because weeping angels were around.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by srobert on Sunday June 17 2018, @12:58PM (1 child)
... in the videos with the urinating prostitutes?
(Score: 0) by Anonymous Coward on Monday June 18 2018, @09:29AM
He never blinks, never laughs, does not actually defecate. Android!