posted by Fnord666 on Friday September 04 2020, @12:37AM
from the seeing-isn't-necessarily-believing dept.

Microsoft launches a deepfake detector tool ahead of US election

Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.

The tool, called Video Authenticator, provides what Microsoft calls "a percentage chance, or confidence score" that the media has been artificially manipulated.

"In the case of a video, it can provide this percentage in real-time on each frame as the video plays," it writes in a blog post announcing the tech. "It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye."


Original Submission

 
  • (Score: 2, Insightful) by shrewdsheep on Friday September 04 2020, @07:01AM (5 children)

    by shrewdsheep (5215) on Friday September 04 2020, @07:01AM (#1046232)

    Any such tool can be used in an adversarial setup to train the deepfake network to evade detection.
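
    (A minimal sketch of that adversarial loop, assuming a hypothetical differentiable detector and a PyTorch-style generator; none of these names refer to Microsoft's actual, closed tool:)

    # Detector-guided evasion step, GAN-style. "detector" is an assumed
    # differentiable stand-in; against a black-box detector an attacker
    # would train a substitute model instead.
    import torch
    import torch.nn.functional as F

    def evasion_step(generator, detector, g_optimizer, latent):
        fakes = generator(latent)       # candidate deepfake frames
        score = detector(fakes)         # manipulation probability in [0, 1]
        # Push the generator toward outputs the detector scores as authentic (0).
        loss = F.binary_cross_entropy(score, torch.zeros_like(score))
        g_optimizer.zero_grad()
        loss.backward()
        g_optimizer.step()
        return loss.item()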

  • (Score: 2) by takyon on Friday September 04 2020, @02:25PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday September 04 2020, @02:25PM (#1046312) Journal

    Shh, don't tell anybody. We could enter an infinite funding loop.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by DeathMonkey on Friday September 04 2020, @05:13PM (3 children)

    by DeathMonkey (1380) on Friday September 04 2020, @05:13PM (#1046397) Journal

    This is true of literally every type of security on the planet, be it physical or digital.

    • (Score: 2, Interesting) by Anonymous Coward on Friday September 04 2020, @09:12PM (2 children)

      by Anonymous Coward on Friday September 04 2020, @09:12PM (#1046555)

      It makes me wonder what can be left as evidence of sexual harassment. At my last job, I got in the habit of recording any time I was around the self-identified "womyn-born-womyn" (who will tell you that TERF is just a right-wing slur). The problem with the womyn-born-womyn is that they have the tacit support of the DNC's "sisterhood of the uterus." That is, in a rape culture, having AFAB status makes somebody's word supersede the word of anybody AMAB, so if you're AMAB, you need evidence. Audio recordings can be evidence of sexual harassment, for example Trump's pussy grabbing comment. #metoo also requires AMAB people to be constantly gathering evidence of innocence.

      So it seems this technology means we can no longer trust recordings as reliable evidence. Sure, they could be edited before, but not completely fabricated. A tool like the one Micros~1 is selling here could easily declare any authentic recording of an AFAB person spewing hatred and sexual harassment to be fake.

      We can see plainly from Apple's problem with recognizing black faces that the people who get to work on this technology don't realize how easy it is to train one of these tools to recognize something that is merely correlated. Training sets are hard. And it would be very easy to train one of these algorithms to measure the perceived gender (granted, it could go both ways) of the voice spewing hatred and declare authenticity on that basis. Male voices spewing hatred would always be authentic recordings; recordings of the hatred that TERFs spew would be declared fake. This could be done unintentionally.

      Then of course there are the sentencing algorithms used in the criminal injustice system. In Apple's case, I doubt that it was intentional. With sentencing algorithms, it seems likely that the racism is a feature, not a bug. So we know these algorithms not only aren't perfect but can be used intentionally for reactionary and regressive ends.

      What does this leave as a way to prove that an AFAB person is sexually harassing you? Recordings allow me to get around ESR's rule, which is a stupid rule because maybe 50% of AFAB people are not sexual harassers. It's just the other half that are poisoned* M&Ms that can wreck your life. So if an algorithm declares a recording I've made of AFAB abuse to be a deepfake, what other corroborating evidence can I hope to obtain? Or do I just need to start following ESR's stupid rule so there are witnesses?

      * At least here we know who put the poison there: men like Trump and Biden. But it is still toxic, regardless of its origin.

      (I would like to cancel gender out of both sides of this problem since some AFAB people claim to have experienced the same disbelief I've run into with sexual harassment, but AFAB people have #metoo, rape cultures, and shelters where they need no evidence whatsoever. So I find it hard to believe these claims of disbelief. Gender would cancel out in a perfect world, but I guess with the humans it is not possible to have gender equity.)

      I just never want to be abused like I was ever again. But if I am, I want to know how I can prove it in an era where recordings are no longer the evidence they used to be. The reason I have to prove it is because of the reply to this comment calling me an "incel," which demonstrates the disbelief and further sexual abuse just for speaking up that AMAB survivors of sexual abuse have to put up with.

      Well, maybe all I really can do is just never return to tech. On the other hand, I can't really live my life on the basis of fear and avoidance of the gender war. I need some way to fight back the next time it happens.

      • (Score: 2) by The Vocal Minority on Saturday September 05 2020, @06:25AM

        by The Vocal Minority (2765) on Saturday September 05 2020, @06:25AM (#1046690) Journal

        Incel! :p

        Seriously though - welcome back. Your unique viewpoint has been notably missing from SN in recent times.

      • (Score: 2) by takyon on Saturday September 05 2020, @05:16PM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday September 05 2020, @05:16PM (#1046842) Journal

        Audio synthesis sounds like garbage, at this time. Even an Obama or Trump deepfake made by CS researchers sounds tinny and unnatural, last time I checked.

        That situation could change in as little as 6 months. But I think there are a few years before the post-truth world really kicks in for you at your level. It could take something like deepfake audio/video (not mere "deceptive editing") being used to try to cause a major political scandal, and the perpetrator being exposed. Then tens of millions of people will be exposed to the deepfake discussion and start to become more paranoid about audio/video.

        But the post-truth world doesn't necessarily demand authenticity. People will continue to believe what they want to believe. Even if your recordings may be unverifiable, they can still be valuable. A real recording is at least as good as the best deepfake. If you have to show it to a boss, it *should* cause the boss to doubt the other person. Better yet, give it to a trusted/competent lawyer and sue your employer and the employee. If your name is being dragged through the mud on social media, you can release it to try and clear your name and take back control of the narrative.

        There might be a way to use cloud services to your advantage. E.g. automatically record live audio to a cloud storage service, where it is timestamped, and only make it publicly accessible when you need to use it.
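
        (A minimal sketch of that fingerprint-then-store idea, assuming a hypothetical file path; the upload to whatever timestamped service you trust is left out:)

        # Fingerprint a recording as soon as it's made so its existence at a
        # given time can be corroborated later, even if the audio itself is
        # challenged as a deepfake.
        import hashlib, json, time
        from pathlib import Path

        def fingerprint(recording: Path) -> dict:
            digest = hashlib.sha256(recording.read_bytes()).hexdigest()
            return {
                "file": recording.name,
                "sha256": digest,
                "logged_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            }

        print(json.dumps(fingerprint(Path("2020-09-04-meeting.wav")), indent=2))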

        If you fear imminent destruction, you need to keep your head down, keep doing what you're doing, avoid making any waves, and eventually move to another industry, self-underemployment, or retire. Is the tech industry really such a problem? W*m*n are a minority in that industry. Maybe moving to a different company in a different state would solve all your problems. You could pick the state/city based on the culture, perceived friendliness to MTFs, or even the laws pertaining to defamation and employment. Some places are worse than others.

        The teething problems with AI are temporary, and an excuse for money to be thrown around. Will you help build your destroyer?

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]