
posted by martyb on Friday February 15 2019, @05:30AM   Printer-friendly
from the nowyouknowyouknow dept.

By now you're probably already aware of https://thispersondoesnotexist.com/

It is a website that uses Generative Adversarial Networks (GANs) to generate photos of people who do not exist. This story was sourced from https://www.inverse.com/article/53280-this-person-does-not-exist-gans-website

The article goes into some depth on how the researchers achieved this. But deepfakes of this nature present a problem. There are the obvious "safety concerns" for users of dating websites, but these fakes are also good enough to get onto LinkedIn and perhaps fool an employer or other unsuspecting person.

Fortunately, AI of any stripe, GANs included, is not and never will be perfect. So in this Soylent exclusive, I want to take a moment to ring the alarm bells and explain how you can detect fakes like this.

AIs are not magical. They are merely complex heuristic and statistical systems, built and trained for a specific purpose. As such, they are only as good as their training data.

Obviously, the researchers had to source a training set from somewhere, and it is clear that they used Facebook, LinkedIn, and Instagram.

As such, these photos, even the high-quality ones, all contain the kind of statistical mistakes an AI makes and a real human would not, and those mistakes are all tells.

Go ahead and visit the website and let it generate a photo. Eight out of ten look perfect at first glance; I even found a few that looked like me and my family members. But remember how people keep describing these as "eerie"? There is a reason they look eerie, even the best ones.

That reason is a statistical mistake in the eyes, specifically the pupils. Every one of these photos has at least two problems.

#1 is that the pupils are never dilated to the same extent. In a normal, healthy human the pupils always dilate equally. The only time they don't is when there is a brain injury such as acute head trauma or stroke. Since none of these people appear to be in a medical context where we might expect blown pupils, they come across as creepy, deranged, crazy, brain-injured, etc.
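
If you want to automate this first check, a rough heuristic is to crop both detected eyes and compare how many dark "pupil" pixels each one contains. Here is a minimal sketch using OpenCV; the Haar eye cascade, the intensity threshold of 40, and the 30% tolerance are values I picked purely for illustration, not anything the researchers used:

```python
# Rough pupil-asymmetry check: find both eyes, count the dark "pupil"
# pixels in each crop, and flag a large difference between the two.
# Threshold and tolerance are illustrative guesses, not tuned values.
import cv2

def pupils_look_asymmetric(image_path, dark_threshold=40, tolerance=0.30):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # could not find two eyes, so no verdict

    # Keep the two largest detections and treat them as the actual eyes.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]

    dark_fractions = []
    for (x, y, w, h) in eyes:
        crop = gray[y:y + h, x:x + w]
        # Dark pixels stand in for the pupil.
        _, mask = cv2.threshold(crop, dark_threshold, 255,
                                cv2.THRESH_BINARY_INV)
        dark_fractions.append(cv2.countNonZero(mask) / float(w * h))

    small, large = sorted(dark_fractions)
    if large == 0:
        return None
    # Flag the image if one "pupil" is much larger than the other.
    return (large - small) / large > tolerance

print(pupils_look_asymmetric("face.jpg"))
```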

#2 is that normal humans have no red in their irises. This appears to be a statistical mistake. Amateur photographers frequently capture an effect called "red eye", where the camera flash reflects off the retina and produces the red glow we are all familiar with. The training set for this AI appears to have included a large number of red-eye photos, and as a result there is red in the eye. But the red is not inside the pupils; it gets painted onto the irises. As a result, everyone appears to have toxic heavy metal poisoning (the usual cause of red splotches in the iris) on top of traumatic brain injury.
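
The red-iris tell can be checked the same way in code: inside each detected eye region, count the pixels whose red channel clearly dominates green and blue. Again, only a sketch; the 30-point dominance margin and the 2% cutoff are my own illustrative choices:

```python
# Rough "red in the iris" check: inside each detected eye region, count
# pixels whose red channel dominates green and blue, and flag the eye if
# that fraction is unusually high. Margin and cutoff are illustrative.
import cv2

def red_iris_flags(image_path, margin=30, cutoff=0.02):
    img = cv2.imread(image_path)  # OpenCV loads images in BGR order
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    flags = []
    for (x, y, w, h) in eyes:
        crop = img[y:y + h, x:x + w].astype(int)
        b, g, r = crop[:, :, 0], crop[:, :, 1], crop[:, :, 2]
        # A pixel counts as "red" if its red channel beats both others
        # by the chosen margin.
        red_dominant = (r > g + margin) & (r > b + margin)
        flags.append(red_dominant.mean() > cutoff)
    return flags  # one True/False per detected eye

print(red_iris_flags("face.jpg"))
```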

There is also a third problem, present in about 40% of the images, and it has to do with the hair. The way the hair is generated, especially on men, produces a "doll hair" effect, where hundreds, maybe thousands, of strands all pop up from clumps, the same way they do on a doll. Unless the person has recently had hair transplant surgery, this is just not a thing that happens in real life.

So now you know. You'll be able to pick out the very best fakes this AI has to offer.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Friday February 15 2019, @08:09AM (#801455)

    I didn't notice much red in the irises except in one case. What I did notice is that the light reflected in one eye is often quite different from the light in the other. Normally they should reflect the same scene; $distance_between_eyes is too small for big differences.