
FBI Warns Imminent Deepfake Attacks "Almost Certain" - The Debrief

Accepted submission by upstart at 2021-03-27 14:40:28
News

████ # This file was generated bot-o-matically! Edit at your own risk. ████

FBI Warns Imminent Deepfake Attacks "Almost Certain" - The Debrief [thedebrief.org]:

The Federal Bureau of Investigation (FBI) has issued a unique Private Industry Notification [ic3.gov] (PIN) on deepfakes, warning companies that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”

The FBI’s grim warning comes at a time when cybersecurity and defense officials have been increasingly vocal [forbes.com] about the dangers of synthetic media content, more commonly referred to as “deepfakes.”

In the PIN, the FBI warns they anticipate deepfakes “will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft.”

BACKGROUND: WHAT IS A DEEPFAKE?

Creating or manipulating images and videos to depict events that never actually happened is hardly new. However, advances in machine learning and artificial intelligence have allowed for the creation of compelling and nearly indistinguishable fake videos and images.

Legacy photo editing software uses various graphic editing techniques to alter, change, or enhance images. Photo editing software such as Photoshop can manipulate pictures to include details or even people that weren’t originally in a photo. However, creating convincing false images this way is highly dependent on a user’s skill with the editing software.

In contrast, deepfakes use machine learning and a type of neural network called an autoencoder. The encoder reduces an image to a lower-dimensional latent space, and a decoder reconstructs an image from that latent representation.

Because the latent representation preserves critical features, such as a person’s facial features and body posture, a decoder trained on a specific target can reconstruct the image with that target’s likeness. The result is a persuasive, highly detailed representation of the target superimposed on the original video or image’s underlying facial and body features.
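The encode/decode idea can be shown in miniature with a linear autoencoder. This toy sketch is purely illustrative (it is not from the FBI notice or any deepfake tool, and real deepfake pipelines use deep convolutional networks on face images): it learns, by gradient descent, to compress 8-dimensional vectors into a 2-dimensional latent space and reconstruct them.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data lying near a 2-D subspace of an 8-D space, standing in for
# the low-dimensional structure a face encoder exploits.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 8))

# Encoder W_e maps 8-D input to a 2-D latent code; decoder W_d maps back.
W_e = rng.normal(size=(8, 2)) * 0.3
W_d = rng.normal(size=(2, 8)) * 0.3
lr = 0.02
for _ in range(3000):
    Z = X @ W_e                          # encode: project into latent space
    X_hat = Z @ W_d                      # decode: reconstruct from latent code
    err = X_hat - X                      # reconstruction error
    # Gradient descent on mean squared reconstruction error
    grad_Wd = Z.T @ err / len(X)
    grad_We = X.T @ (err @ W_d.T) / len(X)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

mse = float(np.mean((X @ W_e @ W_d - X) ** 2))
print(f"reconstruction MSE from 2-D latent: {mse:.4f}")
```

Because the data really does live near a 2-D subspace, the learned latent code is enough to rebuild each input almost exactly — the same reason a face-trained decoder can rebuild a convincing face from a compact code.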

The most commonly used type of deepfake processing attaches a generative adversarial network (GAN) to the decoder. The GAN trains a generator and a discriminator in an adversarial relationship, resulting in extraordinarily compelling images that closely mimic reality.

Recently, Belgian VFX specialist Chris Ume created a significant buzz when several compelling deepfake videos he made of actor Tom Cruise went viral on TikTok [theverge.com].

In an interview with The Verge [theverge.com], Ume said it took him two months to train the base AI models on Cruise’s footage to create the brief TikTok clips. Even then, Ume says he had to go through each video, frame by frame, making minor adjustments to keep the clips convincing. “The most difficult thing is making it look alive,” Ume told The Verge. “You can see it in the eyes when it’s not right.”

Because of the time and effort it takes to make false videos look realistic, Ume says he doesn’t believe deepfakes are something the public should be too concerned about right now. The visual effects artist likens the modern AI manipulations to “Photoshop 20 years ago.”

ANALYSIS OF DEEPFAKE THREATS

While Ume takes a relatively positive outlook on deepfake technology, the FBI strikes a different tone in the recently published PIN, saying the potential for highly sophisticated deepfake software to sow disinformation and change a person’s view of reality is a genuine, serious, and imminent threat.

In the warning, the FBI notes that in 2020, numerous instances of Russian, Chinese, and Chinese-language actors were detected using deepfake profile images to make fake online social media accounts known as “sock-puppets [techopedia.com].” These seemingly authentic accounts have been used by hostile governments to push propaganda and engage in social influence campaigns.

Highlighted in the PIN was a 2017 incident in which The Independent published [buzzfeed.com] an article that a fictitious “journalist” produced. According to the FBI, the use of deepfakes to develop a robust fake online presence and create fictitious “journalists” to generate content that can be unwittingly published and shared by various online and print media outlets will dramatically increase in the near future.

The FBI also warns of a “newly defined attack vector” called “Business Identity Compromise” (BIC), whereby malicious cyber actors will leverage synthetic media and deepfakes to commit attacks on the private sector.

The Bureau says bad actors will use deepfake tools to create “synthetic corporate personas” or impersonate existing employees to commit attacks that will likely have “very significant financial and reputational impacts to victim businesses and organizations.”

OUTLOOK: THE FUTURE OF SYNTHETIC MEDIA

Most security analysts have echoed warnings similar to the FBI’s recent PIN, with some saying coming advances in deepfake technology could “wreak havoc on society [forbes.com].” There are, however, encouraging signs that AI tools for detecting deepfakes are improving just as quickly.

In a study published by the IEEE International Conference on Acoustics, Speech and Signal Processing [arxiv.org], researchers from the University of Buffalo demonstrated they had developed an AI program that was 94% effective in spotting frauds by examining light reflection in the eyes of deepfake portraits.
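The intuition behind that detector can be sketched simply (this is my own toy illustration of the idea, not the researchers' actual pipeline): in a genuine photo, both corneas reflect the same light sources, so the bright specular highlights in the two eyes should roughly match, while GAN-generated faces often render them inconsistently. Here the "eye patches" are hand-built grayscale arrays; a real detector would first locate the eyes with a face-landmark model.

```python
import numpy as np

def highlight_similarity(left_eye, right_eye, thresh=0.8):
    """Intersection-over-union of the thresholded bright-spot masks
    of two grayscale eye patches (values in [0, 1])."""
    lm = left_eye >= thresh
    rm = right_eye >= thresh
    union = np.logical_or(lm, rm).sum()
    if union == 0:
        return 1.0                      # no highlights in either eye
    return np.logical_and(lm, rm).sum() / union

# Consistent highlights (same corner bright in both eyes) -> high score
real_l = np.zeros((8, 8)); real_l[1:3, 1:3] = 1.0
real_r = np.zeros((8, 8)); real_r[1:3, 1:3] = 1.0
# Inconsistent highlights (different corners) -> low score
fake_l = np.zeros((8, 8)); fake_l[1:3, 1:3] = 1.0
fake_r = np.zeros((8, 8)); fake_r[5:7, 5:7] = 1.0

print(highlight_similarity(real_l, real_r))   # → 1.0
print(highlight_similarity(fake_l, fake_r))   # → 0.0
```

A low similarity score is only a heuristic signal, not proof of forgery — lighting from two different directions can legitimately produce mismatched highlights — which is why the researchers report accuracy rather than certainty.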

Another study published in Nature [nature.com] just last week found that the vast majority of people actively seek not to share false information or “fake news.”

To guard against deepfakes, the FBI encourages using the “SIFT” methodology when consuming information online: Stop, Investigate the source, Find trusted coverage, and Trace the original content.

The PIN also provides some tips on visual clues to identify deepfakes, “such as distortions, warping, or inconsistencies in images and video.” The FBI gives some examples of where to look for these visual clues including, “consistent eye spacing and placement, noticeable glitches in head and torso movements, as well as syncing issues between face and lip movement, and any associated audio.”

The FBI concludes the recent PIN warning by encouraging anyone who wants to report suspicious or criminal cyber activity to contact the FBI by phone at (855) 292-3937 or by e-mail at CyWatch@fbi.gov [mailto].

Follow and connect with author Tim McMillan on Twitter:@LtTimMcMillan [twitter.com]



Original Submission