
posted by chromas on Sunday June 17 2018, @02:22AM
from the I-can-tell-by-the-pixels dept.

Submitted via IRC for SoyCow8093

The rate at which deepfake videos are advancing is both impressive and deeply unsettling. But researchers have described a new method for detecting a "telltale sign" of these manipulated videos, which map one person's face onto the body of another. It's a flaw even the average person would notice: a lack of blinking.

Researchers in the computer science department at the University at Albany, SUNY recently published a paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking." The paper details how they combined two neural networks to more effectively expose synthesized face videos, whose generation often overlooks "spontaneous and involuntary physiological activities such as breathing, pulse and eye movement."
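
The two networks in question are a per-frame convolutional feature extractor feeding a recurrent network that models how the eye state evolves over time, so a clip whose predicted eye-closure probability never spikes can be flagged. Below is a minimal sketch of that CNN-plus-LSTM idea in PyTorch; it is not the authors' code, and every layer size, input resolution, and threshold here is an assumption.

    # Minimal sketch of a CNN+LSTM blink detector (not the paper's code).
    # A CNN embeds each cropped eye-region frame; an LSTM models blink
    # dynamics across the sequence. All sizes are assumptions.
    import torch
    import torch.nn as nn

    class BlinkDetector(nn.Module):
        def __init__(self, feat_dim=256, hidden_dim=128):
            super().__init__()
            # Per-frame feature extractor over 64x64 grayscale eye crops.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, feat_dim), nn.ReLU(),
            )
            # The LSTM captures open -> closing -> closed -> opening dynamics.
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)   # per-frame closed-eye logit

        def forward(self, frames):                 # frames: (batch, time, 1, 64, 64)
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return torch.sigmoid(self.head(out)).squeeze(-1)   # (batch, time)

    # A clip whose closed-eye probability never rises over tens of seconds
    # of footage would be flagged as a likely deepfake.
    model = BlinkDetector()
    clip = torch.randn(1, 30, 1, 64, 64)           # one 30-frame eye-region sequence
    per_frame_closed = model(clip)

A real system would first detect the face and crop the eye regions using facial landmarks; the sketch starts from already-cropped sequences.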

Source: https://gizmodo.com/most-deepfake-videos-have-one-glaring-flaw-1826869949


Original Submission

Related Stories

Deep Fakes Advance to Only Needing a Single Two Dimensional Photograph

Currently, getting a realistic Deep Fake requires shots from multiple angles. Russian researchers have now taken this a step further, generating realistic video sequences based on a single photo.

Researchers trained the algorithm to understand facial features' general shapes and how they behave relative to each other, and then to apply that information to still images. The result was a realistic video sequence of new facial expressions from a single frame.

As a demonstration, they provide details and synthesized video sequences of historical figures such as Albert Einstein and Salvador Dali, as well as sequences based on paintings such as the Mona Lisa.
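
To make the mechanism concrete, here is a heavily simplified sketch (in PyTorch, not the paper's code) of the few-shot setup described above: an embedder distills an identity vector from a handful of source frames paired with their facial-landmark renderings, and a generator then renders a frame for any new target landmarks conditioned on that vector. Every architecture choice below is a stand-in assumption.

    # Minimal sketch of the few-shot talking-head data flow (not the paper's code).
    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.ReLU())

    class Embedder(nn.Module):
        """Distills K (frame, landmark image) pairs into one identity vector."""
        def __init__(self, dim=128):
            super().__init__()
            self.net = nn.Sequential(conv_block(6, 32), conv_block(32, 64),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(64, dim))

        def forward(self, frames, landmarks):       # each: (K, 3, H, W)
            e = self.net(torch.cat([frames, landmarks], dim=1))
            return e.mean(dim=0)                    # average over the K shots

    class Generator(nn.Module):
        """Renders a frame for target landmarks, conditioned on the identity."""
        def __init__(self, dim=128):
            super().__init__()
            self.enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
            self.fuse = nn.Linear(dim, 64)          # crude stand-in for AdaIN conditioning
            self.dec = nn.Sequential(
                nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

        def forward(self, target_landmarks, identity):
            h = self.enc(target_landmarks)          # pose comes from the landmarks alone
            h = h + self.fuse(identity)[None, :, None, None]
            return self.dec(h)

    embedder, generator = Embedder(), Generator()
    K, H, W = 1, 64, 64                             # a single source photo suffices
    src, src_lm = torch.randn(K, 3, H, W), torch.randn(K, 3, H, W)
    identity = embedder(src, src_lm)
    fake = generator(torch.randn(1, 3, H, W), identity)   # new pose, same identity

In the paper itself, the embedding drives adaptive instance normalization parameters inside a much larger generator, and embedder, generator, and a discriminator are meta-trained adversarially across many speakers before few-shot fine-tuning; the sketch keeps only the data flow.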

The authors are aware of the potential downsides of their technology and address this:

We realize that our technology can have a negative use for the so-called "deepfake" videos. However, it is important to realize that Hollywood has been making fake videos (aka "special effects") for a century, and deep networks with similar capabilities have been available for the past several years (see links in the paper). Our work (and quite a few parallel works) will lead to the democratization of certain special effects technologies. And the democratization of technologies has always had negative effects. Democratizing sound editing tools led to the rise of pranksters and fake audio; democratizing video recording led to the appearance of footage taken without consent. In each of the past cases, the net effect of democratization on the world has been positive, and mechanisms for stemming the negative effects have been developed. We believe that the case of neural avatar technology will be no different. Our belief is supported by the ongoing development of tools for fake video detection and face spoof detection, alongside the ongoing shift toward privacy and data security in major IT companies.

While it works with as few as one frame to learn from, the technology gains accuracy and 'identity preservation' from having multiple frames available. This becomes obvious in the synthesized Mona Lisa sequences, which, while each faithful to the original painting, appear to the human eye to be three essentially different individuals.

Journal Reference: https://arxiv.org/abs/1905.08233v1

Related Coverage
Most Deepfake Videos Have One Glaring Flaw: A Lack of Blinking
My Struggle With Deepfakes
Discord Takes Down "Deepfakes" Channel, Citing Policy Against "Revenge Porn"
AI-Generated Fake Celebrity Porn Craze "Blowing Up" on Reddit
As Fake Videos Become More Realistic, Seeing Shouldn't Always be Believing


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by Anonymous Coward on Sunday June 17 2018, @02:27AM (4 children)

    by Anonymous Coward on Sunday June 17 2018, @02:27AM (#694114)

    Algorithm being updated as we speak.

    Reminds me, I need to do some research on this topic for... uhh, research.

    • (Score: 1) by Ethanol-fueled on Sunday June 17 2018, @02:35AM (1 child)

      by Ethanol-fueled (2792) on Sunday June 17 2018, @02:35AM (#694118) Homepage

      Yet another deep-state fight percolating to the surface of American consciousness.

      • (Score: 2, Funny) by Anonymous Coward on Sunday June 17 2018, @02:52AM

        by Anonymous Coward on Sunday June 17 2018, @02:52AM (#694122)

        AC back again.

        Research was not successful. Somehow I ended up watching celeb "tribute videos" on xhamster. Would not recommend -1

  • (Score: 2) by physicsmajor on Sunday June 17 2018, @02:46AM (5 children)

    by physicsmajor (1471) on Sunday June 17 2018, @02:46AM (#694121)

    A pair of adversarial CNNs will make adding these as easy as pressing a button.

    • (Score: 0) by Anonymous Coward on Sunday June 17 2018, @03:08AM (1 child)

      by Anonymous Coward on Sunday June 17 2018, @03:08AM (#694126)

      Sounds f-ing hot! Post pics.

      • (Score: 0) by Anonymous Coward on Sunday June 17 2018, @04:41AM

        by Anonymous Coward on Sunday June 17 2018, @04:41AM (#694146)

        Pics? I want video of people clicking that button!

    • (Score: 2) by takyon on Sunday June 17 2018, @05:05AM (2 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday June 17 2018, @05:05AM (#694152) Journal

      Most training datasets fed to neural networks don’t include closed-eye photos, as photos of people posted online generally depict their eyes open. That’s consequential, given someone needs to collect plenty of photos of an individual in order to create a deepfake of them, and this can be done through an open-source photo-scraping tool which grabs publicly available photos of the target online.

      As long as you match the skin color, pretty much anyone's eyelids could do.

      This is a minor blip in Deepfakes' great long journey, which is only beginning. The hardware will get at least an order of magnitude better, and the software will improve as well. (A sketch of the open-eye check such a scraper might apply follows below.)

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
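
      Screening scraped photos for eye state is itself only a few lines of code. Here is a minimal sketch using dlib's 68-point facial landmarks and the eye aspect ratio (EAR); the landmark-model filename is dlib's published one, and the 0.2 threshold is an assumption, not a calibrated value.

          # Minimal sketch: test whether a subject's eyes are open, using the
          # eye aspect ratio (EAR) over dlib's 68-point facial landmarks.
          # The 0.2 threshold is an assumption, not a calibrated value.
          import dlib
          import numpy as np

          detector = dlib.get_frontal_face_detector()
          predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

          def eye_aspect_ratio(p):
              # p: six (x, y) landmarks around one eye, ordered p1..p6.
              # EAR = (|p2-p6| + |p3-p5|) / (2*|p1-p4|); it falls toward 0
              # as the eye closes.
              return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) \
                     / (2.0 * np.linalg.norm(p[0] - p[3]))

          def eyes_open(gray, threshold=0.2):
              faces = detector(gray)
              if not faces:
                  return False
              pts = np.array([(q.x, q.y) for q in predictor(gray, faces[0]).parts()],
                             dtype=float)
              left, right = pts[36:42], pts[42:48]  # 68-point indices of the two eyes
              return min(eye_aspect_ratio(left), eye_aspect_ratio(right)) > threshold

      Inverted, the same test mines the closed-eye frames a forger would need; run over video, a long stretch with no EAR dip is exactly the "no blinking" signal the detector in the story looks for.
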
      • (Score: 0) by Anonymous Coward on Sunday June 17 2018, @05:47AM (1 child)

        by Anonymous Coward on Sunday June 17 2018, @05:47AM (#694163)

        Can they help with plot-lines?

        Actually, the ultimate would be to create Deepfake versions of real movies but with sex scenes added. Basically, porn parodies but with the original cast.

        • (Score: 3, Funny) by Anonymous Coward on Sunday June 17 2018, @12:27PM

          by Anonymous Coward on Sunday June 17 2018, @12:27PM (#694220)

          The Cosby Show will never be the same again.

  • (Score: 2) by janrinok on Sunday June 17 2018, @08:50AM

    by janrinok (52) Subscriber Badge on Sunday June 17 2018, @08:50AM (#694186) Journal

    Who the hell is looking at the eyes? Damn perverts!

  • (Score: 3, Funny) by maxwell demon on Sunday June 17 2018, @09:37AM

    by maxwell demon (1608) on Sunday June 17 2018, @09:37AM (#694196) Journal

    No, this isn't a deepfake. They are not blinking because Weeping Angels were around.

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by srobert on Sunday June 17 2018, @12:58PM (1 child)

    by srobert (4803) on Sunday June 17 2018, @12:58PM (#694230)

    ... in the videos with the urinating prostitutes?

    • (Score: 0) by Anonymous Coward on Monday June 18 2018, @09:29AM

      by Anonymous Coward on Monday June 18 2018, @09:29AM (#694455)

      He never blinks, never laughs, does not actually defecate. Android!
