
posted by Fnord666 on Thursday September 07 2017, @01:46PM   Printer-friendly
from the careless-whispers dept.

Submitted via IRC for SoyCow1937

Hacks are often caused by our own stupidity, but you can blame tech companies for a new vulnerability. Researchers from China's Zhejiang University found a way to attack Siri, Alexa and other voice assistants by feeding them commands at ultrasonic frequencies. Those are too high for humans to hear, but they're perfectly audible to the microphones on your devices. With the technique, researchers could get the AI assistants to open malicious websites and even your door, if you had a smart lock connected.

The relatively simple technique is called DolphinAttack. Researchers first translated human voice commands into ultrasonic frequencies (over 20,000 Hz). They then simply played them back from a regular smartphone equipped with an amplifier, ultrasonic transducer and battery -- less than $3 worth of parts.
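The "translation" step is essentially amplitude modulation: the audible command becomes the envelope of an ultrasonic carrier, and the nonlinearity of the target device's microphone demodulates it back into the audible band. Below is a minimal sketch of that idea in Python with NumPy; the sample rate, 25 kHz carrier, and the toy 400 Hz tone standing in for speech are illustrative assumptions, not the researchers' actual parameters.

```python
import numpy as np

FS = 192_000  # sample rate high enough to represent a 25 kHz carrier


def to_ultrasonic(voice, carrier_hz=25_000, fs=FS):
    """Amplitude-modulate a baseband 'voice' signal onto an ultrasonic
    carrier. A microphone's nonlinearity demodulates the envelope back
    into the audible band, where the speech recognizer picks it up."""
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: the +1 offset keeps the envelope non-negative,
    # so the demodulated signal tracks the original command.
    return (1.0 + voice) * carrier


# Toy "command": one second of a 400 Hz tone standing in for speech.
t = np.arange(FS) / FS
voice = 0.5 * np.sin(2 * np.pi * 400 * t)
ultrasonic = to_ultrasonic(voice)
```

The spectrum of `ultrasonic` sits entirely above 20 kHz (carrier at 25 kHz plus sidebands at ±400 Hz), which is why a human hears nothing while the hardware still recovers the command.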

What makes the attack scary is the fact that it works on just about anything: Siri, Google Assistant, Samsung S Voice and Alexa, on devices like smartphones, iPads, MacBooks, Amazon Echo and even an Audi Q3 -- 16 devices and seven systems in total. What's worse, "the inaudible voice commands can be correctly interpreted by the SR (speech recognition) systems on all the tested hardware." Suffice it to say, it works even if the attacker has no device access and the owner has taken the necessary security precautions.

Source: https://www.engadget.com/2017/09/06/alexa-and-siri-are-vulnerable-to-silent-nefarious-commands/


Original Submission

 
  • (Score: 2) by tangomargarine on Thursday September 07 2017, @05:09PM (3 children)

    by tangomargarine (667) on Thursday September 07 2017, @05:09PM (#564653)

    Voice control seemed great on Star Trek: The Next Generation. (ST:TNG) They never seemed to address the issue that someone could simulate someone else's voice and give commands. (Destruct sequence 1, code 1-1 A)

    Wasn't there an episode where Data went crazy and took over the ship using Picard's command codes? Then he locked them out with like a 30-character password and everybody was flummoxed.

    here we go [wikipedia.org]

    I mean, voice recognition and the command codes was a two-factor system; it's just that the codes themselves are super short and saying them out loud to authenticate doesn't beat the eavesdropper test.

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
  • (Score: 3, Informative) by DannyB on Thursday September 07 2017, @05:30PM (2 children)

    by DannyB (5839) Subscriber Badge on Thursday September 07 2017, @05:30PM (#564663) Journal

    The three factors for authentication:
1. Something you know. (e.g., a code, a password, a PIN, an algorithm)
2. Something you have. (e.g., a metal key, a credit card, a code-generating key fob, a mobile phone)
3. Something you are. (e.g., your fingerprint, your voice, your retina scan, your blood, semen, DNA)
    (If you know of anything more than these three -- please publish at once and become famous!)

    Two factor authentication uses any two from the above list.
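As a minimal sketch of that idea (every name, passphrase, and code below is hypothetical), a two-factor check combining factor 1 (something you know) with factor 2 (something you have) might look like:

```python
import hashlib
import hmac

# Hypothetical stored verifier for factor 1 (something you know).
# Store only a hash of the passphrase, never the passphrase itself.
STORED_HASH = hashlib.sha256(b"destruct-1-1A").hexdigest()


def authenticate(passphrase: str, token_code: str, expected_code: str) -> bool:
    """Both independent factors must pass. Constant-time comparison
    (hmac.compare_digest) avoids leaking information via timing."""
    knows = hmac.compare_digest(
        hashlib.sha256(passphrase.encode()).hexdigest(), STORED_HASH)
    has = hmac.compare_digest(token_code, expected_code)
    return knows and has


print(authenticate("destruct-1-1A", "123456", "123456"))  # True
print(authenticate("wrong-phrase", "123456", "123456"))   # False
```

In a real system the passphrase would go through a slow key-derivation function (e.g. PBKDF2) rather than bare SHA-256, and the token code would come from a TOTP fob or phone; the point is only that the two factors are checked independently and both must pass.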

The failure in the ST:TNG episode "Brothers" is that the system probably should have verified factor 3, something you are. It should first confirm that the actual, authentic person is giving the command (authentication), and then verify that they are authorized to give that command (authorization).

If Cmdr Data could simply use Picard's voice as the "something you are" factor, and recite the command codes as the "something you know" factor, then there are two failures here:
1. A voice that merely sounds like Picard is not a very good "something you are" test.
2. Anything serving as a "something you know" test should never be stated aloud in an episode, because now the TV audience knows it.

Aside . . . remember back in the 1970s, a TV series called "Space: 1999"? Remember Barbara Bain's monotone, emotionless "acting"? I suspect THAT is where they got the idea for Cmdr Data!

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 0) by Anonymous Coward on Thursday September 07 2017, @11:32PM

      by Anonymous Coward on Thursday September 07 2017, @11:32PM (#564826)

Hawk a loogie onto the machine and let it sequence your DNA.

      -- OriginalOwner_ [soylentnews.org]

    • (Score: 2) by Pslytely Psycho on Friday September 08 2017, @12:06PM

      by Pslytely Psycho (1218) on Friday September 08 2017, @12:06PM (#565030)

"3. Something you are. (eg, your fingerprint, your voice, your retina scan, your blood, semen, DNA)"

Plotlines of so very many movies and TV shows: Salt, SNG, Minority Report, Ultraviolet, Two Broke Girls (Beth Behrs did a lot of semen testing), Alien Resurrection's breath analyzer, or GATTACA to cover most of them at once.
Yes, I watch a lot of bad movies, and your post brought examples bubbling to the surface. Especially when you mentioned Barbara Bain; man, by comparison she made Data and Spock look like emotional wrecks! I think she may have been an actual robot....

I raised my children on MST3K. I am a horrible person with neither culture nor taste.

      --
      Alex Jones lawyer inspires new TV series: CSI Moron Division.