
posted by Fnord666 on Friday January 26 2018, @10:39PM   Printer-friendly
from the porn-driving-innovation dept.

Fake celebrity porn is blowing up on Reddit, thanks to artificial intelligence.

Back in December, the unsavory hobby of a Reddit user by the name of deepfakes became a new centerpiece of the artificial intelligence debate, specifically around the newfound ability to face-swap celebrities and porn stars. Using software, deepfakes took the faces of famous actresses and swapped them onto the bodies of porn performers, letting him live out a fantasy of watching famous people have sex. Now, just two months later, easy-to-use applications have sprouted up that let nearly anyone perform this editing, according to Motherboard, which also first reported on deepfakes late last year.

Thanks to AI training techniques like machine learning, scores of photographs can be fed into an algorithm that creates convincing human masks to replace the faces of anyone on video, all by using lookalike data and letting the software train itself to improve over time. In this case, users are putting famous actresses into existing adult films. According to deepfakes, this required some extensive computer science know-how. But Motherboard reports that one user in the burgeoning community of pornographic celebrity face swapping has created a user-friendly app that basically anyone can use.
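The approach widely reported for these face swaps is a pair of autoencoders that share a single encoder: each decoder learns to reconstruct one person's face, and the swap comes from routing person B's encoding through person A's decoder. A minimal sketch of that idea in plain NumPy, using linear layers and random arrays as stand-ins for aligned face crops (all shapes, learning rates, and the training loop here are illustrative assumptions, not the actual deepfakes code):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 64, 16                         # toy "image" and latent sizes

# One shared encoder, one decoder per identity.
We = rng.normal(0, 0.1, (D, H))
Wa = rng.normal(0, 0.1, (H, D))       # decoder for person A
Wb = rng.normal(0, 0.1, (H, D))       # decoder for person B

faces_a = rng.normal(0, 1, (200, D))  # stand-ins for aligned face crops
faces_b = rng.normal(0, 1, (200, D))

def step(X, Wd, lr=1e-2):
    """One gradient step of linear autoencoder reconstruction."""
    global We
    Z = X @ We                        # encode with the shared encoder
    err = Z @ Wd - X                  # decode and compare to the input
    gWd = Z.T @ err / len(X)
    gWe = X.T @ (err @ Wd.T) / len(X)
    Wd -= lr * gWd                    # in-place update of this identity's decoder
    We -= lr * gWe                    # shared encoder learns from both identities
    return float(np.mean(err ** 2))

loss0 = float(np.mean(((faces_a @ We) @ Wa - faces_a) ** 2))
for _ in range(1000):
    loss_a = step(faces_a, Wa)
    loss_b = step(faces_b, Wb)

# The "swap": encode a B face, decode it with A's decoder.
swapped = (faces_b[:1] @ We) @ Wa
print(loss0, loss_a, swapped.shape)
```

Because the encoder is shared, it is pushed toward a pose/lighting representation common to both identities, while each decoder fills in one person's appearance, which is what makes the cross-decoding trick work at all.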

The same technique can be used for non-pornographic purposes, such as inserting Nicolas Cage's face into classic movies. One user even claims to have "outperformed" the Princess Leia scene at the end of Disney's Rogue One (you be the judge; the original footage is at the top of the GIF).

The machines are learning.


Original Submission

 
  • (Score: 4, Informative) by buswolley on Friday January 26 2018, @11:27PM (9 children)

    by buswolley (848) on Friday January 26 2018, @11:27PM (#628620)

    They're not hard to detect, although some are more convincing than others. Check the Leia remake from the Force Awakens. It competes with the special effects crew's attempt pretty well.

    However, in the future, this is going to be a big problem. Video evidence is now weak, if there is ever a hint of conspiracy against someone. Politicians will use it. Media desperate for a story may use it. Russia will use it.

    --
    subicular junctures
  • (Score: 2) by MichaelDavidCrawford on Friday January 26 2018, @11:45PM (5 children)

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Friday January 26 2018, @11:45PM (#628628) Homepage Journal

    I swear I'm not making this up:

    I don't remember much about the episode but the 5-0 people had a photo whose authenticity was called into question.

    They consulted an image processing expert who advised them to enlarge the photo enough that the pixels could be seen clearly.

    If the photo were shopped, there would be a discontinuity at the edges of the bits that were pasted in.

    Now of course it won't be so easy to spot today's fakes, because one can blend adjacent edges together to make it less obvious there is a seam. But I expect the principle still applies.
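The expert's "enlarge until you can see the pixels" advice boils down to looking for abnormal brightness jumps where a patch was pasted in. A crude numeric version of that check, on a synthetic image (this is a toy statistic for illustration, not a forensic-grade method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Smooth "photo": a low-frequency field built from accumulated noise.
img = np.cumsum(np.cumsum(rng.normal(0, 0.01, (64, 64)), axis=0), axis=1)

# Paste in a patch at a different brightness -> hard seam at its border.
pasted = img.copy()
pasted[20:40, 20:40] += 5.0

def seam_energy(x):
    """Mean absolute brightness jump between neighbouring pixels."""
    dy = np.abs(np.diff(x, axis=0)).mean()
    dx = np.abs(np.diff(x, axis=1)).mean()
    return dx + dy

print(seam_energy(img), seam_energy(pasted))
```

Feathering or blending the patch edges lowers exactly this statistic, which is the caveat above: the principle survives, but the naive version of the test stops working once the seams are blended.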

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 5, Insightful) by takyon on Friday January 26 2018, @11:56PM (3 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday January 26 2018, @11:56PM (#628639) Journal

      But I expect

      I expect that the machine learning techniques will become better with more revisions, greater access to curated training data, and better hardware. Eventually, the fakes could be made in near real-time.

      In real life, you don't need to convince experts with your video, you just need to convince Facebook users who unironically share or get enraged by fake news stories.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1, Insightful) by Anonymous Coward on Saturday January 27 2018, @01:32AM

        by Anonymous Coward on Saturday January 27 2018, @01:32AM (#628671)

        Don't forget the confirmation bias that will occur when "they" say it is fake which will only make people think it is a cover up.

      • (Score: 4, Interesting) by edIII on Saturday January 27 2018, @03:49AM

        by edIII (791) on Saturday January 27 2018, @03:49AM (#628695)

        What's so funny is that his theory has worked from the beginning. Back in the '80s they had the same stuff, except it was, like, far more hilarious. Maybe an order of magnitude better than pasting a magazine cutout on the monitor, but not much better. Then the lines started being blurred on the neck at least, and from there it was iterative improvements to the point where we need extensive AI analysis to identify a fraud.

        I honestly wonder if we'll get to the point where we dismiss visual and audio evidence as something akin to hearsay, while giving weight only to that which has been verified. Even then, who is to say that the cryptographically signed surveillance video wasn't doctored at its inputs and is passing a transcoded stream? Security cameras will have to be tamper-proof and audited to be legally valid. Given our hilarious and almost scary lack of security right now, how could we ever say conclusively that the shit wasn't modified? It's going to be degrees of confidence at best.

        Didn't some researcher crack something about being able to create any protein, and they're working on a way of scaling the process? How much longer till you can replicate DNA and plant it as evidence somewhere?

        With plastic surgery getting better day by day, you might be fucking this celebrity and still be wondering if she isn't a fembot quietly stealing your cryptocurrency.

        You just can't trust shit about shit about shit :)

        --
        Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 0) by Anonymous Coward on Saturday January 27 2018, @12:50PM

        by Anonymous Coward on Saturday January 27 2018, @12:50PM (#628836)

        And BOOM orange clown is the president of the United States. Putin will be laughing all the way to the bank.

    • (Score: 4, Insightful) by Anonymous Coward on Saturday January 27 2018, @01:04AM

      by Anonymous Coward on Saturday January 27 2018, @01:04AM (#628666)

      If there's an algorithm/method to detect a fake, there's input to a training algorithm to ensure it doesn't trigger that algorithm.
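      A toy illustration of that arms race, assuming the detector is differentiable (here just a random linear logistic score standing in for a real forensic classifier): the forger can follow the detector's own gradient to push a fake below the detection threshold.

      ```python
      import numpy as np

      rng = np.random.default_rng(2)
      D = 32

      w = rng.normal(0, 1, D)          # stand-in weights for a "fake detector"

      def detect(x):
          """Logistic score; higher means 'looks fake'."""
          return 1.0 / (1.0 + np.exp(-(x @ w)))

      x = rng.normal(0, 1, D)          # the "fake" sample
      before = detect(x)

      for _ in range(100):
          s = detect(x)
          grad = s * (1 - s) * w       # d(score)/dx for the logistic detector
          x = x - 0.1 * grad           # nudge the fake toward "looks real"

      after = detect(x)
      print(before, after)
      ```

      This is the same dynamic behind adversarial training: any fixed, known detector just becomes another loss term for the generator to minimize.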

  • (Score: 5, Interesting) by takyon on Friday January 26 2018, @11:48PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday January 26 2018, @11:48PM (#628631) Journal

    GPU hardware is improving quite well, custom machine learning hardware such as TPUs could improve on that further, and there's talk of quantum machine learning or neuromorphic computing. 6-8 GB of VRAM is not uncommon, and memory bandwidth could increase a lot with GDDR6 and HBM. The quantity of RAM that home users can afford will probably double soon. We may have unified storage and memory in the future (current attempts like XPoint are weak).

    The users on the subreddit report having to match up angles, sizes, and such in the training data to get good results. Improvements to the software could reduce the amount of manual fiddling needed. Machine learning could be used on the selection of the training data itself.

    In the near term, instead of matching existing videos with the desired faces, you could film using a (small-time) actor with some similarity to the replacement face/body. You can do everything you need to do to ensure good training data and a convincing scene, and create your "Assange dies escaping the Ecuadorian embassy" video with controlled conditions.

    https://www.theverge.com/2017/7/12/15957844/ai-fake-video-audio-speech-obama [theverge.com]
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-creates-fake-obama [ieee.org]

    Don't forget the audio:

    https://www.theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-speech-lyrebird [theverge.com]

    Vocaloid has been around for years, but now there's intense research efforts by Amazon, Google, Facebook, Baidu, Apple, Samsung, etc. as they are all pitching voice assistants and want to make them sound more realistic.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by wonkey_monkey on Saturday January 27 2018, @05:58PM

    by wonkey_monkey (279) on Saturday January 27 2018, @05:58PM (#628998) Homepage

    Check the Leia remake from the Force Awakens. It competes with the special effects crew's attempt pretty well.

    Eh, barely. That video is horribly low resolution, and it certainly doesn't, as the article claims, "outperform Disney."

    --
    systemd is Roko's Basilisk
  • (Score: 2) by Joe Desertrat on Sunday January 28 2018, @12:12AM

    by Joe Desertrat (2454) on Sunday January 28 2018, @12:12AM (#629231)

    However, in the future, this is going to be a big problem. Video evidence is now weak, if there is ever a hint of conspiracy against someone. Politicians will use it. Media desperate for a story may use it. Russia will use it.

    People already believe and repost the most outlandish "evidence" on social media. It won't take much more advancement in video editing to convince them.