

posted by chromas on Thursday April 12 2018, @01:31PM
from the deeplearn-vs-deeplearn dept.

Submitted via IRC for SoyCow9228

We all know that AI can be used to swap faces in photos and videos. People have, of course, taken advantage of this tool for some disturbing uses, including face-swapping people into pornographic videos -- the ultimate revenge porn. But if AI can be used to face swap, can't it also be used to detect when such a practice occurs? According to a new paper on arXiv.org, a new algorithm promises to do just that, identifying forged videos as soon as they are posted online.

Source: https://www.engadget.com/2018/04/11/machine-learning-face-swaps-xceptionnet/

Technology Review continues:

But the work also has a sting in the tail. The same deep-learning technique that can spot face-swap videos can also be used to improve the quality of face swaps in the first place, and that could make them harder to detect.

The new technique relies on a deep-learning algorithm that [Andreas] Rossler and co have trained to spot face swaps. These algorithms can only learn from huge annotated data sets of good examples, which simply have not existed until now.

So the team began by creating a large data set of face-swap videos and their originals.
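
For a concrete sense of what "training an algorithm to spot face swaps" involves, here is a minimal sketch: fine-tuning a pretrained CNN as a binary real-vs-fake frame classifier. The paper uses XceptionNet; torchvision ships no Xception, so a ResNet-50 stands in here, and the frames/train directory layout is an assumption for illustration, not something from the paper.

    # Minimal sketch: fine-tune a pretrained CNN to classify video
    # frames as real or face-swapped. A stand-in for the paper's
    # XceptionNet approach; paths and backbone are assumptions.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Frames extracted from original and face-swapped videos, stored
    # under frames/train/real/ and frames/train/fake/ (hypothetical).
    preprocess = transforms.Compose([
        transforms.Resize((299, 299)),   # Xception-style input size
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("frames/train", transform=preprocess)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # torchvision has no Xception, so use ResNet-50 and replace the
    # head with a 2-class output (real vs fake).
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:        # one pass over the frames
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Once trained, scoring each frame of an uploaded video and averaging the predictions is one plausible way to get the "as soon as they are posted" detection the article describes.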


Original Submission

 
  • (Score: 2) by DannyB (5839) Subscriber Badge on Thursday April 12 2018, @02:52PM (#665951) Journal (1 child)

    This technique can be used for anything where machine learning does not do an entirely convincing job.

    I'll use the term "fakes" to refer to a machine learning result that attempts to reproduce the real thing, but is known to be a machine learning creation.

    You train a 2nd AI to spot the fakes. Use that to better train the 1st AI to produce better fakes. Repeat.

    The 2nd AI's training data consists of images of the real thing vs fakes produced by the 1st AI. Each item in that data set is KNOWN to be either the real thing, or a result produced by the 1st AI. So training of the 2nd AI is good.

    Thus the training of the 1st AI improves as well. So then you have to carefully watch the 2nd AI's results, because you're training it to distinguish fake images from real ones. At some point, the difference should disappear and the 2nd AI will be unable to actually tell the difference (see the sketch below).

    Now, I'm not just talking about fake pr0n here. This could be for AI creating fake cat pictures indistinguishable from the real thing. Or altered auto accident footage. Or altered "locker room talk" by politicians that goes way beyond what normal people talk about in the locker room.
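
    What this describes is essentially a generative adversarial network (GAN). A minimal, self-contained sketch, using toy linear models and random tensors in place of a real face generator and genuine images (all placeholders for illustration, not a real implementation):

        # Sketch of the loop above -- effectively a GAN. The toy models
        # and random "real" data are stand-ins for illustration only.
        import torch
        import torch.nn as nn

        gen  = nn.Sequential(nn.Linear(100, 784), nn.Tanh())  # 1st AI: makes fakes
        disc = nn.Sequential(nn.Linear(784, 1))               # 2nd AI: spots fakes
        opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        for step in range(1000):
            real = torch.rand(64, 784)        # placeholder for real images
            fake = gen(torch.randn(64, 100))
            ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

            # Train the 2nd AI: every label is KNOWN, because we
            # produced the fakes ourselves.
            opt_d.zero_grad()
            d_loss = bce(disc(real), ones) + bce(disc(fake.detach()), zeros)
            d_loss.backward()
            opt_d.step()

            # Train the 1st AI to fool the 2nd. Repeat until the 2nd
            # can no longer tell the difference.
            opt_g.zero_grad()
            g_loss = bce(disc(gen(torch.randn(64, 100))), ones)
            g_loss.backward()
            opt_g.step()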

    --
    The lower I set my standards the more accomplishments I have.
  • (Score: 3, Funny) by realDonaldTrump (6614) on Thursday April 12 2018, @05:33PM (#666051) Homepage Journal

    Some of these Fakes are amazing. Let me tell you, they fooled me. With the Access Hollywood "tape." They call it a "tape." It's not a tape. It's a Fake. But it sounds just like me. At first I thought it was me. And I apologized. But it turned out to be TOTALLY FAKE!!!