Automated Voice Imitation Can Fool Humans and Machines

Accepted submission by martyb at 2015-10-04 09:03:08
Software

dept idea: reference "Nipper" and "His master's voice."

In research presented at the 2015 European Symposium on Research in Computer Security, [sba-research.org] University of Alabama at Birmingham (UAB) researchers found that both automated and human verification for voice-based user authentication systems are vulnerable to voice impersonation attacks. [uab.edu] Nitesh Saxena, Ph.D., is the director of the Security and Privacy In Emerging computing and networking Systems (SPIES) lab and associate professor of computer and information sciences at UAB.

The researchers were able to fool automated systems 80%-90% of the time, and humans about 50% of the time. They warn that computer hardware and voice imitation software continue to improve while the human ability to distinguish real from imitation likely will not.

NOTE TO EDITORS: the following blockquote needs to be winnowed down, or summarized instead of quoted. (This submission was written between 3:30am and 5:00am. Though I did the best I could at this time, it definitely could benefit from more sleep and coffee!)

Using an off-the-shelf voice-morphing tool, the researchers developed a voice impersonation attack to attempt to penetrate automated and human verification systems.

[...] Advances in technology, specifically those that automate speech synthesis such as voice morphing, allow an attacker to build a very close model of a victim’s voice from a limited number of samples. Voice morphing can be used to transform the attacker’s voice to speak any arbitrary message in the victim’s voice.

[...] “Voice biometrics is the new buzzword among banks and credit card companies... Many banks and credit card companies are striving for giving their users a hassle-free experience in using their services in terms of accessing their accounts using voice biometrics.”

[...] Voice biometrics is based on the assumption that each person has a unique voice that depends not only on his or her physiological features of vocal cords but also on his or her entire body shape, and on the way sound is formed and articulated.

[...] Once the attacker defeats voice biometrics using fake voices, he could gain unfettered access to the system, which may be a device or a service, employing the authentication functionality.

[...] If an attacker can imitate a victim’s voice, the security of remote conversations could be compromised. The attacker could make the morphing system speak literally anything that the attacker wants to, in the victim’s tone and style of speaking, and can launch an attack that can harm a victim’s reputation, his or her security, and the safety of people around the victim.

[...] The results show that the state-of-the-art automated verification algorithms were largely ineffective against the attacks developed by the research team. The average rate for rejecting fake voices was less than 10 to 20 percent for most victims. Even human verification was vulnerable to the attacks. According to two online studies with about 100 users, researchers found that study participants rejected the morphed voice samples of celebrities as well as somewhat familiar users about half the time.

“Our research showed that voice conversion poses a serious threat, and our attacks can be successful for a majority of cases,” Saxena said. “Worryingly, the attacks against human-based speaker verification may become more effective in the future because voice conversion/synthesis quality will continue to improve, while it can be safely said that human ability will likely not.”
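To make the threat concrete, here is a minimal, purely illustrative sketch of how a threshold-based verifier can be defeated once an attacker's voice features are morphed toward a victim's. The four-dimensional "voiceprints", the interpolation step standing in for a morphing tool, and the 0.95 threshold are all invented for illustration; real systems compare high-dimensional spectral features (e.g., MFCCs) with far more sophisticated models.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(enrolled, sample, threshold=0.95):
    """Accept the sample if it is similar enough to the enrolled voiceprint."""
    return cosine_similarity(enrolled, sample) >= threshold

# Hypothetical low-dimensional voiceprints (invented numbers).
victim = [0.9, 0.1, 0.4, 0.7]
attacker = [0.2, 0.8, 0.6, 0.1]

# A morphing tool, trained on samples of the victim's voice, shifts the
# attacker's features toward the victim's; crudely simulated here as
# linear interpolation with morphing strength alpha.
alpha = 0.9
morphed = [(1 - alpha) * a + alpha * v for a, v in zip(attacker, victim)]

print(verify(victim, attacker))  # raw attacker voice: rejected (False)
print(verify(victim, morphed))   # morphed voice: accepted (True)
```

The point of the sketch is not the arithmetic but the failure mode: any verifier that reduces a voice to a feature vector and a similarity threshold will accept whatever lands inside that threshold, regardless of who actually produced the audio.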

I spent well over an hour trying, without success, to locate the original research paper and the "off-the-shelf voice-morphing tool." Do any Soylentils have experience in this realm, or pointers to where one could download such a tool?

I find the security implications to be staggering. Think of all the current and historical recordings one could use for samples: speeches, presentations, court testimony, movies, and simple YouTube postings.

See our recent story, Stealing Fingerprints — Authentication in the Digital Age [soylentnews.org], which drew several comments about using voice prints in lieu of fingerprints for biometric authentication.


Original Submission