Researchers have learned to send commands, embedded in white noise, music, or even entirely different speech, that can fool the ubiquitous voice-recognition phone-home spy devices that are all the rage lately. Inaudible to you, but perfectly intelligible to the devices.
Per The New York Times:
Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple's Siri, Amazon's Alexa and Google's Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online — simply with music playing over the radio.
Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley...and his colleagues at Berkeley have incorporated commands into audio recognized by Mozilla's DeepSpeech voice-to-text translation software, an open-source platform. They were able to hide the command, "O.K. Google, browse to evil.com" in a recording of the spoken phrase, "Without the data set, the article is useless." Humans cannot discern the command. The Berkeley group also embedded the command in music files, including a four-second clip from Verdi's "Requiem."
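The core trick reported above is an adversarial perturbation: a small change to the audio, kept below a perceptibility budget, that pushes the recognizer toward an attacker-chosen output. The real attack optimizes iteratively against DeepSpeech's CTC loss; the sketch below only illustrates the idea on a hypothetical two-class linear "recognizer" (all names, weights, and the budget `eps` are made up), where a single fast-gradient-sign step is enough to flip the transcription.

```python
import numpy as np

# Toy stand-in for a speech recognizer: class 0 = benign transcript,
# class 1 = the hidden command. NOT DeepSpeech; purely illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))        # hypothetical classifier weights

def predict(x):
    return int(np.argmax(W @ x))

# A clean "recording" the model confidently labels as benign.
x = W[0] / np.linalg.norm(W[0])
assert predict(x) == 0

# FGSM-style step: nudge every sample in the direction that raises the
# target-class logit relative to the benign one, capped at eps so the
# perturbation stays small (the "inaudible" budget in the analogy).
eps = 0.5
delta = eps * np.sign(W[1] - W[0])

adversarial = x + delta
print(predict(adversarial))             # the model now hears the command
print(float(np.max(np.abs(delta))))     # perturbation never exceeds eps
```

A real audio attack replaces the toy model with the full network, runs many small gradient steps instead of one, and measures the budget psychoacoustically rather than with a simple amplitude cap, but the optimization loop has the same shape.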
(Score: 2) by wonkey_monkey on Sunday May 13 2018, @10:06PM
I don't know what they mean by "this hidden command." There's no demo, no specific examples given, and the other link in the summary only has attacks against Mozilla DeepSpeech, it seems (although they use the "OK Google" phrasing, they don't demonstrate that it works against Google's actual assistant).
Anyone seen a demo of "hidden" commands to an Alexa, or is that just a bit of a stretch by the writers of the article?
systemd is Roko's Basilisk