

posted by janrinok on Wednesday January 03 2018, @09:31PM   Printer-friendly
from the a-girl-for-all-geeks dept.

How an A.I. 'Cat-and-Mouse Game' Generates Believable Fake Photos (archive)

The woman in the photo seems familiar. She looks like Jennifer Aniston, the "Friends" actress, or Selena Gomez, the child star turned pop singer. But not exactly. She appears to be a celebrity, one of the beautiful people photographed outside a movie premiere or an awards show. And yet, you cannot quite place her. That's because she's not real. She was created by a machine.

The image is one of the faux celebrity photos generated by software under development at Nvidia, the big-name computer chip maker that is investing heavily in research involving artificial intelligence.

At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.

The project is part of a vast and varied effort to build technology that can automatically generate convincing images — or alter existing images in equally convincing ways. The hope is that this technology can significantly accelerate and improve the creation of computer interfaces, games, movies and other media, eventually allowing software to create realistic imagery in moments rather than the hours — if not days — it can now take human developers.
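The "cat-and-mouse game" in the headline refers to a generative adversarial network (GAN): a generator network fabricates images while a discriminator network tries to tell fakes from real photos, and each improves by competing with the other. Nvidia's actual system is far more elaborate, but the core loop can be illustrated with a toy one-dimensional GAN in plain NumPy — the Gaussian target distribution, the linear generator, and the logistic discriminator here are all invented for this sketch, not taken from the research:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real data": samples from a 1-D Gaussian the generator must imitate.
REAL_MU, REAL_SIGMA = 4.0, 0.5

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: x_fake = g_mu + g_sigma * z, with noise z ~ N(0, 1).
g_mu, g_sigma = 0.0, 1.0
# Discriminator: D(x) = sigmoid(w * x + b), probability that x is real.
w, b = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(5000):
    # --- Discriminator step: learn to separate real samples from fakes ---
    real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    z = rng.standard_normal(batch)
    fake = g_mu + g_sigma * z
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    # Gradient of -[log D(real) + log(1 - D(fake))] w.r.t. w and b.
    grad_w = np.mean(-(1.0 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(-(1.0 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # --- Generator step: adjust parameters to fool the discriminator ---
    z = rng.standard_normal(batch)
    fake = g_mu + g_sigma * z
    d_fake = sigmoid(w * fake + b)
    # Gradient of -log D(fake) w.r.t. the generator's parameters.
    g_common = -(1.0 - d_fake) * w
    g_mu -= lr * np.mean(g_common)
    g_sigma -= lr * np.mean(g_common * z)

samples = g_mu + g_sigma * rng.standard_normal(10000)
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MU})")
```

After a few thousand alternating steps the generator's output mean drifts toward the real data's mean, even though the generator never sees a real sample directly — it only sees the discriminator's verdicts. That indirect pressure is the "cat-and-mouse" dynamic scaled down to two scalar parameters.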


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by DannyB on Wednesday January 03 2018, @10:11PM (1 child)

    by DannyB (5839) Subscriber Badge on Wednesday January 03 2018, @10:11PM (#617384) Journal

    Nvidia might not have legitimate access to all of your photos.

    But FaceTwit, Google, YouTube, Microsoft, Apple, Instagram, Flickr, Imgur and others probably do. And you probably agreed to it. About 2/3 of the way through the EULA, on page 349, right after the clause that allows them to sneak in during the middle of the night and harvest your and your family's vital organs -- unless your ISP has already gotten them first.

    So surely Google has an absolutely immense collection of photos to use for this. And nobody would know or could prove that their photo was used as part of a large neural net training data set.

    I suspect that Google, et al, would argue that this is not a copyright infringing use. Your photo is not used in public. Never published. Google (or others with large photo data sets) might suddenly be able to generate fake faces, but none of them would look like you. They would merely meet the rules of looking realistic -- as trained by the large data set, of which your face is a part.

    If a human learns about faces by looking at a large collection of faces, and that isn't copyright infringement, then why is it suddenly copyright infringement if an AI learns about what makes a realistic face by looking at a large collection of faces?

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
  • (Score: 2) by HiThere on Wednesday January 03 2018, @11:57PM

    by HiThere (866) Subscriber Badge on Wednesday January 03 2018, @11:57PM (#617430) Journal

    Actually, people have a built-in face recognizer. The "large collection of faces" is used to narrow down what gets accepted as a valid face. And you still get people seeing faces in sandwiches, etc. It's sort of like "start off with everything that has two spots above a third spot counting as a face, then learn good restrictions". I'm not sure this is the normal form of learning. OTOH, we seem to learn phonemes the same way, so maybe it is.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.