posted by Fnord666 on Thursday September 07 2017, @01:46PM
from the careless-whispers dept.

Submitted via IRC for SoyCow1937

Hacks are often caused by our own stupidity, but you can blame tech companies for a new vulnerability. Researchers from China's Zhejiang University found a way to attack Siri, Alexa and other voice assistants by feeding them commands in ultrasonic frequencies. Those are too high for humans to hear, but they're perfectly audible to the microphones on your devices. With the technique, researchers could get the AI assistants to open malicious websites and even your door if you had a smart lock connected.

The relatively simple technique is called DolphinAttack. Researchers first translated human voice commands into ultrasonic frequencies (over 20,000 Hz). They then simply played them back from a regular smartphone equipped with an amplifier, ultrasonic transducer and battery -- less than $3 worth of parts.
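The summary glosses over what "translating" a command into the ultrasonic range involves; the general idea reported for DolphinAttack is to amplitude-modulate the recorded voice command onto a carrier above the limit of human hearing, so that nonlinearities in the target device's microphone shift it back into the band the speech recognizer expects. Below is a minimal, purely illustrative sketch of that modulation step in Python (not the researchers' code); the input file name, the 25 kHz carrier, the modulation depth, and the 96 kHz output rate are all assumptions for the example.

    # Illustrative sketch only (not the researchers' code): amplitude-modulate a
    # recorded voice command onto an ultrasonic carrier. File names, carrier
    # frequency, and output sample rate are assumptions for the example.
    import numpy as np
    from scipy.io import wavfile

    CARRIER_HZ = 25_000   # above the ~20 kHz upper limit of human hearing
    OUT_RATE = 96_000     # output rate must exceed twice the carrier frequency

    rate, voice = wavfile.read("command.wav")   # hypothetical mono recording
    voice = voice.astype(np.float64)
    voice /= np.max(np.abs(voice))              # normalize to [-1, 1]

    # Resample the baseband command to the higher, ultrasonic-capable rate.
    n_out = int(len(voice) * OUT_RATE / rate)
    voice = np.interp(np.linspace(0, len(voice), n_out, endpoint=False),
                      np.arange(len(voice)), voice)

    # Classic AM: carrier * (1 + m * signal). The audible content now sits in
    # sidebands around 25 kHz -- silent to people, but recoverable by a
    # microphone front end that behaves nonlinearly.
    t = np.arange(n_out) / OUT_RATE
    modulated = np.cos(2 * np.pi * CARRIER_HZ * t) * (1 + 0.8 * voice)

    out = (modulated / np.max(np.abs(modulated)) * 32767).astype(np.int16)
    wavfile.write("ultrasonic_command.wav", OUT_RATE, out)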

What makes the attack scary is the fact that it works on just about anything: Siri, Google Assistant, Samsung S Voice and Alexa, on devices like smartphones, iPads, MacBooks, Amazon Echo and even an Audi Q3 -- 16 devices and seven systems in total. What's worse, "the inaudible voice commands can be correctly interpreted by the SR (speech recognition) systems on all the tested hardware." Suffice it to say, it works even if the attacker has no device access and the owner has taken the necessary security precautions.

Source: https://www.engadget.com/2017/09/06/alexa-and-siri-are-vulnerable-to-silent-nefarious-commands/


Original Submission

 
  • (Score: 2) by DannyB on Thursday September 07 2017, @03:44PM (5 children)

    by DannyB (5839) Subscriber Badge on Thursday September 07 2017, @03:44PM (#564611) Journal

    Voice control is stupid, pointless, and downright dangerous outside of literal toy applications.

    Once upon a time I thought of voice control as fantastically convenient. That was in an era when the PowerMac had crude voice recognition, scotch-taped to AppleScript actions in the voice commands folder. The AppleScript could then interoperate with X10 software controlling X10 devices. (Yes, this was the mid-1990s.) It seemed so amazingly cool. Except the voice recognition didn't work very well. You had to hold the microphone near your mouth. It was not tolerant of any kind of background noise. Etc.

    Voice control seemed great on Star Trek: The Next Generation. (ST:TNG) They never seemed to address the issue that someone could simulate someone else's voice and give commands. (Destruct sequence 1, code 1-1 A) In an episode of the original 1960s Star Trek series (ST:TOS), "A Taste of Armageddon", Spock, aboard the Enterprise, realizes that a command he is receiving from Kirk could not be genuine, even though his equipment indicated it wasn't produced by a "voice synthesizer", whatever that means.

    I suppose on ST:TNG, the computer would know the actual whereabouts of the person whose voice it is recognizing. Or it wouldn't care what the voice sounds like, but could identify exactly which person in the room is giving the command, and verify that they are authorized to do so.

    You might as well just leave a computer logged in and automatically accepting any Bluetooth request from any device, and let your neighbour type on it from their living room.

    In the meantime, I definitely won't use Bluetooth on that logged-in computer on my back porch.

    Better yet, why not just put the Alexa or Google Home (or both!) on the back porch? For your convenience.

    --
    Every performance optimization is a grate wait lifted from my shoulders.
  • (Score: 2) by tangomargarine on Thursday September 07 2017, @05:09PM (3 children)

    by tangomargarine (667) on Thursday September 07 2017, @05:09PM (#564653)

    Voice control seemed great on Star Trek: The Next Generation. (ST:TNG) They never seemed to address the issue that someone could simulate someone else's voice and give commands. (Destruct sequence 1, code 1-1 A)

    Wasn't there an episode where Data went crazy and took over the ship using Picard's command codes? Then he locked them out with like a 30-character password and everybody was flummoxed.

    here we go [wikipedia.org]

    I mean, voice recognition plus the command codes made a two-factor system; it's just that the codes themselves are super short, and saying them out loud to authenticate doesn't beat the eavesdropper test.

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 3, Informative) by DannyB on Thursday September 07 2017, @05:30PM (2 children)

      by DannyB (5839) Subscriber Badge on Thursday September 07 2017, @05:30PM (#564663) Journal

      The three factors for authentication:
      1. Something you know. (eg, a code, a password, a PIN, an algorithm)
      2. Something you have. (eg, a metal key, a credit card, a code-generating key-fob, a mobile phone)
      3. Something you are. (eg, your fingerprint, your voice, your retina scan, your blood, semen, DNA)
      (If you know of anything more than these three -- please publish at once and become famous!)

      Two factor authentication uses any two from the above list.

      The failure in the ST:TNG episode "Brothers" is that the system probably should have verified factor 3, something you are. It should be certain that the actual, authentic person is giving the command (authentication), and then verify that they are authorized to give that command (authorization).

      If Cmdr Data could simply use Picard's voice as the "something you are" factor and recite the command codes as the "something you know" factor, then there are two failures here.
      1. A voice that sounds like Picard is not a very good "something you are" test.
      2. Anything that is a "something you know" test should never be stated aloud in an episode, because now the TV audience knows.
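
      For what it's worth, a minimal, purely illustrative sketch of the distinction drawn above: check two independent factors to authenticate, then separately check whether that authenticated person is authorized to issue the command. All names here are invented, and the voiceprint comparison is only a stand-in for a real speaker-verification model.

      # Illustrative only: two-factor authentication (something you know +
      # something you are) followed by a separate authorization check.
      import hashlib
      from dataclasses import dataclass

      @dataclass
      class User:
          name: str
          code_hash: str        # "something you know", stored hashed
          voiceprint: bytes     # "something you are", enrolled template
          allowed_commands: set

      def hash_code(code: str) -> str:
          return hashlib.sha256(code.encode()).hexdigest()

      def voice_matches(sample: bytes, template: bytes) -> bool:
          # Stand-in for a real speaker-verification model; a voice that merely
          # *sounds* like the captain should not pass a serious version of this.
          return sample == template

      def authenticate(user: User, spoken_code: str, voice_sample: bytes) -> bool:
          # Both factors must pass independently.
          return (hash_code(spoken_code) == user.code_hash
                  and voice_matches(voice_sample, user.voiceprint))

      def authorize(user: User, command: str) -> bool:
          # A separate question: is this authenticated person allowed to do this?
          return command in user.allowed_commands

      def execute(user: User, command: str, spoken_code: str, voice_sample: bytes) -> None:
          if not authenticate(user, spoken_code, voice_sample):
              raise PermissionError("authentication failed")
          if not authorize(user, command):
              raise PermissionError("not authorized for this command")
          print(f"executing {command!r} for {user.name}")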

      Aside . . . remember back in the 1970s, a TV series called "Space: 1999"? Remember Barbara Bain's monotone, emotionless "acting"? I suspect THAT is where they got the idea for Cmdr Data!

      --
      Every performance optimization is a grate wait lifted from my shoulders.
      • (Score: 0) by Anonymous Coward on Thursday September 07 2017, @11:32PM

        by Anonymous Coward on Thursday September 07 2017, @11:32PM (#564826)

        Hock a loogie onto the machine and let it sequence your DNA.

        -- OriginalOwner_ [soylentnews.org]

      • (Score: 2) by Pslytely Psycho on Friday September 08 2017, @12:06PM

        by Pslytely Psycho (1218) on Friday September 08 2017, @12:06PM (#565030)

        "3. Something you are. (eg, your fingerprint, your voice, your retina scan, your blood, semen, DNA"

        Plot-lines of so very many movies and TV shows: Salt, SNG, Minority Report, Ultraviolet, Two Broke Girls (Beth Behrs did a lot of semen testing), Alien Resurrection's breath analyzer, or GATTACA to cover most of them at once.
        Yes, I watch a lot of bad movies, and your post brought examples bubbling to the surface. Especially when you mentioned Barbara Bain; man, by comparison she made Data and Spock look like emotional wrecks! I think she may have been an actual robot....

        I raised my children on MST3K. I am a horrible person with neither culture nor taste.

        --
        Alex Jones lawyer inspires new TV series: CSI Moron Division.
  • (Score: 2) by bob_super on Thursday September 07 2017, @07:00PM

    by bob_super (1357) on Thursday September 07 2017, @07:00PM (#564720)