posted by Fnord666 on Wednesday September 05 2018, @07:32AM   Printer-friendly
from the not-related-to-camping dept.

Ars Technica:

The success of Internet of Things devices such as Amazon's Echo and Google Home has created an opportunity for developers to build voice-activated applications that connect ever deeper into customers' homes and personal lives. And, according to research by a team from the University of Illinois at Urbana-Champaign (UIUC), the potential to exploit some of the idiosyncrasies of voice-recognition machine-learning systems for malicious purposes has grown as well.

Called "skill squatting," the attack method (described in a paper presented at the USENIX Security Symposium in Baltimore this month) is currently limited to the Amazon Alexa platform, but it reveals a weakness that other voice platforms will have to resolve as they widen support for third-party applications. Ars met with the UIUC team (composed of Deepak Kumar, Riccardo Paccagnella, Paul Murley, Eric Hennenfent, Joshua Mason, Assistant Professor Adam Bates, and Professor Michael Bailey) at USENIX Security. We talked about their research and the potential for other threats posed by voice-based input to information systems.

[...] But skill-squatting attacks could pose a more immediate risk—it appears, the researchers found, that developers are already giving their applications names that are similar to those of popular applications. Some of these—such as "Fish Facts" (a skill that returns random facts about fish, the aquatic vertebrates) and "Phish Facts" (a skill that returns facts about the Vermont-based jam band)—are accidental, but others such as "Cat Fax" (which mimics "Cat Facts") are obviously intentional.

Thanks to the way Alexa handles requests for new "skills"—the cloud applications that register with Amazon—it's possible to create malicious skills that are named with homophones for existing legitimate applications. Amazon made all skills in its library available by voice command by default in 2017, and skills can be "installed" into a customer's library by voice. "Either way, there's a voice-only attack for people who are selectively registering skill names," said Bates, who leads UIUC's Secure and Transparent Systems Laboratory.
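The core of the attack is that two differently spelled skill names can share a pronunciation. The idea can be sketched in a few lines of Python; this is an illustrative toy, not the UIUC team's tooling, and the tiny phoneme dictionary below is a hypothetical stand-in for a real one such as CMUdict.

```python
# Toy phoneme dictionary: maps a word to its pronunciation.
# A real analysis would load these entries from CMUdict.
PHONEMES = {
    "fish":  ("F", "IH", "SH"),
    "phish": ("F", "IH", "SH"),
    "facts": ("F", "AE", "K", "T", "S"),
    "cat":   ("K", "AE", "T"),
    "fax":   ("F", "AE", "K", "S"),
}

def pronunciation(skill_name):
    """Concatenate the phoneme sequences of a skill name's words."""
    return tuple(p for word in skill_name.lower().split()
                 for p in PHONEMES[word])

def find_collisions(skill_names):
    """Group skill names a speech recognizer could confuse.

    Exact phoneme matches ("Fish Facts" vs. "Phish Facts") are
    homophones; near-matches such as "Cat Facts" vs. "Cat Fax"
    would need a phonetic distance metric, omitted here.
    """
    by_sound = {}
    for name in skill_names:
        by_sound.setdefault(pronunciation(name), []).append(name)
    return [names for names in by_sound.values() if len(names) > 1]

print(find_collisions(["Fish Facts", "Phish Facts", "Cat Fax"]))
# -> [['Fish Facts', 'Phish Facts']]
```

A squatter runs this in reverse: starting from a popular skill's name, they register a new skill whose name lands on the same phoneme sequence.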

This sort of thing offers all kinds of potential for malicious developers. They could build skills that intercept requests for legitimate skills in order to drive user interactions that steal personal and financial information. These would essentially use Alexa to deliver phishing attacks (the criminal fraud kind, not the jam band kind). The UIUC researchers demonstrated (in a sandboxed environment) how a skill called "Am Express" could be used to hijack initial requests for American Express' Amex skill—and steal users' credentials.
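Why does "Am Express" work as a hijack? The recognizer turns the spoken invocation into a text transcript, and the platform then routes on that string. The hypothetical sketch below illustrates the routing step; the skill names and the routing policy are ours for illustration, not Amazon's actual implementation.

```python
# Two registered skills whose names sound alike. Which one handles a
# request depends entirely on how the recognizer transcribes the
# utterance -- the user has no further say.
REGISTERED_SKILLS = {
    "amex": "legitimate American Express skill",
    "am express": "squatter skill that phishes credentials",
}

def route(transcript):
    """Route a speech-to-text transcript to the matching skill name."""
    return REGISTERED_SKILLS.get(transcript.lower(), "no such skill")

# The same spoken phrase can plausibly come out of the recognizer
# either way:
for transcript in ("Amex", "Am Express"):
    print(transcript, "->", route(transcript))
```

In the researchers' sandboxed demonstration, the squatter's skill then carries on a dialog imitating the real one in order to harvest credentials.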


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday September 05 2018, @09:20PM (#730959)

    Since the "skills" have to be installed, it should at least warn you when you install homophonic skills. At execution time it could also warn when it's unsure what to do: "Do you mean lift with an 'i' or lyft with a 'y'?" That would at least teach users to heed the warning about installing homophonic skills.
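    The commenter's disambiguation idea is easy to prototype. This sketch (our illustration; the skill names and prompt wording are hypothetical) builds the clarifying question from the spellings of the colliding names:

    ```python
    # Hypothetical installed skills whose names sound alike.
    INSTALLED = {"lift": "elevator-status skill",
                 "lyft": "ride-hailing skill"}

    def disambiguate(candidates):
        """Build a clarifying prompt from same-sounding skill names,
        pointing at the letter where their spellings differ."""
        options = " or ".join(
            f"{name} with {'an' if name[1] in 'aeiou' else 'a'} '{name[1]}'"
            for name in sorted(candidates))
        return f"Do you mean {options}?"

    print(disambiguate(INSTALLED))
    # -> Do you mean lift with an 'i' or lyft with a 'y'?
    ```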