
from the seeing-isn't-necessarily-believing dept.
Microsoft launches a deepfake detector tool ahead of US election
Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.
The tool, called Video Authenticator, provides what Microsoft calls "a percentage chance, or confidence score" that the media has been artificially manipulated.
"In the case of a video, it can provide this percentage in real-time on each frame as the video plays," it writes in a blog post announcing the tech. "It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye."
Related Stories
The Subtle Effects of Blood Circulation Can Be Used to Detect Deep Fakes
This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled "FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals" [DOI: 10.1109/TPAMI.2020.3009287], the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye.
In particular, video of a person's face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
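To make the idea concrete, here is a rough sketch of how a PPG-like signal can be pulled out of a video: average the green channel over a cropped face region in each frame, then band-pass the result around plausible heart rates. This only illustrates the principle, not the authors' pipeline; the face crop and frame rate are assumed to be supplied.

```python
# Rough PPG sketch: mean green intensity per frame, band-passed to heart-rate
# frequencies. Not the FakeCatcher pipeline; face cropping is assumed done.
import numpy as np
from scipy.signal import butter, filtfilt

def raw_ppg(face_frames: np.ndarray) -> np.ndarray:
    """face_frames: (T, H, W, 3) RGB frames of the same cropped face region."""
    return face_frames[..., 1].mean(axis=(1, 2))   # mean green value per frame

def bandpass_heart_rate(signal: np.ndarray, fps: float) -> np.ndarray:
    # Keep roughly 0.7-4 Hz (about 42-240 bpm), where the pulse lives.
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    return filtfilt(b, a, signal - signal.mean())
```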
Deep fakes don't lack such circulation-induced shifts in color, but they don't recreate them with high fidelity. The researchers at SUNY and Intel found that "biological signals are not coherently preserved in different synthetic facial parts" and that "synthetic content does not contain frames with stable PPG." Translation: Deep fakes can't convincingly mimic how your pulse shows up in your face.
The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person's face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to showing what software was used to create it.
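A toy version of that core observation would compare PPG signals extracted from two different facial regions and flag the video when they disagree. FakeCatcher itself feeds much richer PPG-based representations to a learned classifier; the correlation threshold below is invented purely for illustration.

```python
# Toy illustration: in a real video, PPG signals from different facial regions
# should agree; in synthetic faces they tend not to. The threshold is made up.
import numpy as np

def looks_synthetic(ppg_left_cheek: np.ndarray,
                    ppg_right_cheek: np.ndarray,
                    min_correlation: float = 0.5) -> bool:
    r = np.corrcoef(ppg_left_cheek, ppg_right_cheek)[0, 1]
    return r < min_correlation   # incoherent pulse signals -> likely fake
```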
That newer work, posted to the arXiv pre-print server on 26 August, was titled, "How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals." In it, the researchers showed that they can distinguish with greater than 90 percent accuracy whether the video was real, or which of four different deep-fake generators (DeepFakes, Face2Face, FaceSwap or NeuralTex) was used to create a bogus video.
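In the same spirit, source attribution can be framed as a five-way classification over features derived from the residual PPG patterns. The paper uses a CNN over PPG-based representations; the sketch below substitutes a generic off-the-shelf classifier, and extract_ppg_features() is a hypothetical stand-in for the feature-extraction step.

```python
# Sketch of source attribution: residual PPG patterns differ by generator,
# so a classifier can name the tool. A generic model stands in for the
# paper's CNN, and extract_ppg_features() is hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = ["real", "DeepFakes", "Face2Face", "FaceSwap", "NeuralTex"]

def extract_ppg_features(video_path: str) -> np.ndarray:
    """Hypothetical: turn a video into a fixed-length PPG-residual feature vector."""
    raise NotImplementedError

def train_source_detector(feature_matrix: np.ndarray, labels: list) -> RandomForestClassifier:
    # labels: one entry from LABELS per row of feature_matrix
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(feature_matrix, labels)
    return clf
```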
6 months later: Deep Fakes Get the Blood Pumping.
Related:
Deep Fakes Advance to Only Needing a Single Two Dimensional Photograph
MIT Team Creates Deepfake of President Nixon Reading "Moon Disaster" Apollo 11 Contingency Speech
Microsoft Announces a Deepfake Detector Tool
(Score: 3, Insightful) by jelizondo on Friday September 04 2020, @12:50AM (1 child)
If it works like other stuff in Windows, expect real videos to be labeled "fake" and vice versa!
Not trusting this tool am I
(Score: 2) by RS3 on Friday September 04 2020, @04:02AM
Not to worry- it'll soon figure out that Windows itself is a deepfake and delete it.
(Score: 0) by Anonymous Coward on Friday September 04 2020, @12:53AM (1 child)
This is a good approach, as long as it has seen quite a few deepfakes in its time.
(Score: 3, Funny) by MostCynical on Friday September 04 2020, @01:28AM
artificially produced test sets..
https://www.arxiv-vanity.com/papers/1901.08971/ [arxiv-vanity.com]
https://www.arxiv-vanity.com/papers/2006.07397/ [arxiv-vanity.com]
It will detect the test videos, sometimes.
"I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
(Score: 5, Insightful) by Runaway1956 on Friday September 04 2020, @01:40AM (5 children)
I am very uncomfortable with Microsoft, Google, Facebook, or any other tech giant preparing these argument-from-authority spiels. "Our algorithm blah blah blah with an nn% accuracy rate blah blah blah."
It's all bullshit. CEOs and the like tell the engineers what they want, and the engineers set about creating what they've been ordered to create. CEOs trot out this algorithm to appease the public, as well as to appease the legal system. No one in the public, and no one in the courts, has any idea what the parameters were to begin with. Nor do we know how it works. We have no idea what the accuracy is - it's beta software, maybe. It might even be considered alpha software, if people could look under the hood. They're just ad libbing from the start. "We're going to detect fake stuff."
If/when this stuff is ever introduced into a court, a judge will almost certainly accept the word of the "experts". The "experts" built this thing after all, we gotta take their word!!
And, bottom line, someone programmed the software to do precisely whatever the CEO's demanded.
Tell me again, why I should trust any of the megacorporations? Doubly so, when they have displayed their partiality in legitimate political issues.
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 1) by fustakrakich on Friday September 04 2020, @04:47AM (2 children)
Who else will produce your canned beans?
La politica e i criminali sono la stessa cosa..
(Score: 2) by Runaway1956 on Friday September 04 2020, @05:08AM
Actually, I can prepare my own canned beans, LOL!
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 1, Informative) by Anonymous Coward on Friday September 04 2020, @05:36AM
https://growagoodlife.com/canning-dried-beans/ [growagoodlife.com]
(Score: 5, Touché) by mhajicek on Friday September 04 2020, @04:54AM
Two possibilities. One, it doesn't work reliably, in which case it's useless. Two, it works reliably, in which case it will be used to train systems to produce better, undetectable deepfakes.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: -1, Flamebait) by Anonymous Coward on Friday September 04 2020, @06:13AM
Finally, we will know who is the real Runaway1956, and who is the DeepFake, althought it is possible that he has been down there so long, that not even Runawy1965 knows who Runaway1956 is. Untie the Right! Dyslexics for Congerass!!
(Score: 2, Insightful) by shrewdsheep on Friday September 04 2020, @07:01AM (5 children)
Any such tool can be used in an adversarial approach to make the deep-fake network escape detection.
(Score: 2) by takyon on Friday September 04 2020, @02:25PM
Shh, don't tell anybody. We could enter an infinite funding loop.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by DeathMonkey on Friday September 04 2020, @05:13PM (3 children)
This is true of literally every type of security on the planet be it physical or digital.
(Score: 2, Interesting) by Anonymous Coward on Friday September 04 2020, @09:12PM (2 children)
It makes me wonder what can be left as evidence of sexual harassment. At my last job, I got in the habit of recording any time I was around the self-identified "womyn-born-womyn" (who will tell you that TERF is just a right-wing slur). The problem with the womyn-born-womyn is that they have the tacit support of the DNC's "sisterhood of the uterus." That is, in a rape culture, having AFAB status makes somebody's word supersede the word of anybody AMAB, so if you're AMAB, you need evidence. Audio recordings can be evidence of sexual harassment, for example Trump's pussy grabbing comment. #metoo also requires AMAB people to be constantly gathering evidence of innocence.
So it seems this technology means we can no longer trust recordings as reliable evidence. Sure, they could be edited before, but not completely fabricated. A tool like the one Micros~1 is selling here could easily declare any authentic recording of an AFAB person spewing hatred and sexual harassment as fake.
We can see plainly with Apple's problem with recognizing black faces that the people who get to work on this technology don't realize how easy it is to train one of these tools to recognize something that is merely correlated. Training sets are hard. And it would be very easy to train one of these algorithms to measure the perceived gender (granted could go both ways) of the voice spewing hatred and declare authenticity on that basis. Male voices spewing hatred will always be authentic recordings. Recordings of the hatred that TERFs spew would be declared fake. This could be done unintentionally.
Then of course there are the sentencing algorithms used in the criminal injustice system. In Apple's case, I doubt that it was intentional. With sentencing algorithms, it seems likely that the racism is a feature, not a bug. So we know these algorithms not only aren't perfect but can be used intentionally for reactionary and regressive ends.
What does this leave as a way to prove that an AFAB person is sexually harassing you? Recordings allow me to get around ESR's rule, which is a stupid rule because maybe 50% of AFAB people are not sexual harassers. It's just the other half that are poisoned* M&Ms that can wreck your life. So if an algorithm declares a recording I've made of AFAB abuse to be a deepfake, what other corroborating evidence can I hope to obtain? Or do I just need to start following ESR's stupid rule so there are witnesses?
* At least here we know who put the poison there: men like Trump and Biden. But it is still toxic, regardless of its origin.
(I would like to cancel gender out of both sides of this problem since some AFAB people claim to have experienced the same disbelief I've run into with sexual harassment, but AFAB people have #metoo, rape cultures, and shelters where they need no evidence whatsoever. So I find it hard to believe these claims of disbelief. Gender would cancel out in a perfect world, but I guess with the humans it is not possible to have gender equity.)
I just never want to be abused like I was ever again. But if I am, I want to know how I can prove it in an era where recordings are no longer the evidence they used to be. The reason I have to prove it is because of the reply to this comment calling me an "incel," which demonstrates the disbelief and further sexual abuse just for speaking up that AMAB survivors of sexual abuse have to put up with.
Well, maybe all I really can do is just never return to tech. Otoh, I can't really live my life on the basis of fear and avoidance of the gender war. I need some way to fight back the next time it happens.
(Score: 2) by The Vocal Minority on Saturday September 05 2020, @06:25AM
Incel! :p
Seriously though - welcome back. Your unique viewpoint has been notably missing from SN in recent times.
(Score: 2) by takyon on Saturday September 05 2020, @05:16PM
Audio synthesis sounds like garbage, at this time. Even an Obama or Trump deepfake made by CS researchers sounds tinny and unnatural, last time I checked.
That situation could change in as little as 6 months. But I think there are a few years before the post-truth world really kicks in for you at your level. It could take something like deepfake audio/video (not mere "deceptive editing") being used to try to cause a major political scandal, and the perpetrator being exposed. Then tens of millions of people will be exposed to the deepfake discussion and start to become more paranoid about audio/video.
But the post-truth world doesn't necessarily demand authenticity. People will continue to believe what they want to believe. Even if your recordings may be unverifiable, they can still be valuable. A real recording is at least as good as the best deepfake. If you have to show it to a boss, it *should* cause the boss to doubt the other person. Better yet, give it to a trusted/competent lawyer and sue your employer and the employee. If your name is being dragged through the mud on social media, you can release it to try and clear your name and take back control of the narrative.
There might be a way to use cloud services to your advantage. E.g. automatically record live audio to a cloud storage service, where it is timestamped, and only make it publicly accessible when you need to use it.
If you fear imminent destruction, you need to keep your head down, keep doing what you're doing, avoid making any waves, and eventually move to another industry, self-underemployment, or retire. Is the tech industry really such a problem? W*m*n are a minority in that industry. Maybe moving to a different company in a different state would solve all your problems. You could pick the state/city based on the culture, perceived friendliness to MTFs, or even the laws pertaining to defamation and employment. Some places are worse than others.
The teething problems with AI are temporary, and an excuse for money to be thrown around. Will you help build your destroyer?
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by Bot on Friday September 04 2020, @08:04AM
Unless they trained them with the videos Ghislaine Maxwell may leak, so that they are all tagged fake. We're talking about Microsoft here; that's one step lower than the average corporation, which is one step lower than the average mafia cartel.
Account abandoned.
(Score: 2) by ikanreed on Friday September 04 2020, @01:28PM
Can't they just feed this same algorithm back into the adversarial side of the generative adversarial network that is being used to create the deepfakes?
That's the whole underlying structure of the algorithm in the first place: "Evolve to fool this thing that evolves to correctly identify the thing you're trying to spoof." The result of this algorithm could be added as a single top-level neuron, and edge finding would do the rest.
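Roughly, if the detector's score is differentiable (or can be distilled into a differentiable surrogate), it can just be added as an extra term in the generator's loss. A sketch, where the generator, discriminator, and detector are all hypothetical torch modules:

```python
# Sketch: fold a deepfake detector into the generator's GAN objective.
# generator, discriminator, and detector are assumed, pre-built nn.Modules;
# detector(fake) is assumed to return a 0-1 "looks manipulated" score.
import torch

def generator_step(generator, discriminator, detector, z, opt_g,
                   detector_weight: float = 1.0) -> float:
    opt_g.zero_grad()
    fake = generator(z)
    # Usual non-saturating GAN term: fool the discriminator...
    loss = -torch.log(discriminator(fake) + 1e-8).mean()
    # ...plus a term rewarding frames the detector scores as "real".
    loss = loss + detector_weight * detector(fake).mean()
    loss.backward()
    opt_g.step()
    return loss.item()
```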