
posted by Fnord666 on Wednesday April 25 2018, @05:39PM   Printer-friendly
from the giving-the-kids-a-big-brother dept.

Amazon has unveiled a Kids Edition of its Echo Dot smart speaker:

The $79 Echo Dot Kids Edition takes the original device's design and wraps it in a kid-friendly, colorful case. Otherwise, the hardware is the same as the tiny smart speaker that debuted in 2016. While the regular, $49 Dot is considered a more affordable and accessible version of the regular Echo speaker, the Kids Edition costs more thanks to its bundled software. Amazon includes a two-year warranty plus a one-year subscription to the new Amazon FreeTime Unlimited service, an expanded version of Amazon's new FreeTime for Alexa.

FreeTime gives users "family-focused features" and new parental controls that adults can use to restrict what their kids can do with Alexa. Family features include "Education Q&A," which lets kids ask Alexa science, math, spelling, and definition questions, and "Alexa Speaks 'Kid,'" which gives Alexa kid-appropriate answers to nebulous statements kids may make, such as "Alexa, I'm bored." Parents can also limit the times during which kids can speak to Alexa (like no talking to it after bedtime), restrict the skills kids can use, filter out songs with explicit lyrics, and more.

[...] But even with the added parental controls, some will be wary of a speaker designed to listen to their children. Like the original Dot, the Kids Edition has a mute button and parents can put the device in "sleep mode" to prevent it from responding to commands. However, the mic will always be listening for its wake-word just like other Echo devices.

In the new Parent Dashboard in the Alexa app and online, parents can monitor how kids are using their Echos (including all their utterances, or the phrases Alexa thinks it heard before trying to respond) and limit their abilities. According to a Buzzfeed report, Amazon claims it isn't making back-end profiles for users with data harvested from Alexa. While the virtual assistant can now recognize voices and provide personalized answers based on who's talking, the company maintains that data is only being used to make Alexa smarter and more tailored to each user.

Also at CNN and Fortune.


Original Submission

 
  • (Score: 2, Disagree) by fyngyrz on Wednesday April 25 2018, @07:55PM (7 children)

    by fyngyrz (6567) on Wednesday April 25 2018, @07:55PM (#671818) Journal

    "It actually is listening all the time. It HAS to be in order to be voice activated."

    "What it's supposed to be doing is simply erasing/overwriting what it hears, until the keyword is detected."

    It's listening for its wake word, with algorithms that are only (and barely) able to detect it. It's not broad STT. Ergo, it's not listening for the wake word in the "you need to be worried about it" sense.
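    The erase/overwrite behavior described above can be sketched as a ring buffer feeding a keyword spotter. This is a hypothetical illustration, not Amazon's code; the sample rate, buffer length, and `wake_word_score` stand-in are all assumptions (a real device runs a small acoustic model there, not full STT):

```python
from collections import deque

SAMPLE_RATE = 16000          # samples per second (assumed)
BUFFER_SECONDS = 2           # keep only the last 2 seconds of audio

# Ring buffer: old audio falls off the left as new audio arrives,
# so nothing before the wake word is ever retained.
ring = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def wake_word_score(samples):
    """Hypothetical stand-in for an on-device keyword spotter."""
    return 0.0  # placeholder: never triggers in this sketch

def feed(chunk, threshold=0.9):
    """Append a chunk of samples; hand audio onward only on a match."""
    ring.extend(chunk)
    if wake_word_score(list(ring)) >= threshold:
        captured = list(ring)   # only now does audio leave the buffer
        ring.clear()
        return captured
    return None                 # everything else is silently overwritten

# Feeding ten chunks of silence never releases any audio,
# and the buffer never grows past its fixed capacity.
assert all(feed([0] * 1600) is None for _ in range(10))
assert len(ring) <= SAMPLE_RATE * BUFFER_SECONDS
```

    The point of the `maxlen` deque is that discarding old audio is structural, not a policy decision made per sample.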

    "Which seems reasonable, but how do you verify it? On a device that accepts updates? Impossible."

    1) It doesn't have the storage to keep much audio
    2) It doesn't have the local STT that would be required to triage what goes into storage
    3) It only sends stuff to Amazon when you talk

    I don't even count the fact that Amazon has told us explicitly that it doesn't do what you're worried about. It's not possible for it to do that without getting caught: it would have to use the network in order to know what to listen for / to send it along, which would get it caught immediately. Lots of users, including myself, monitor its network activity. It has no other means of communication available to it.
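    That kind of monitoring can be as crude as tallying the device's bytes per remote endpoint from a router or sniffer log; a quiet room plus a large outbound burst would stand out. A minimal sketch, with a made-up log format and addresses purely for illustration:

```python
import re
from collections import Counter

# Hypothetical connection-log lines in a router-style format.
LOG = """\
2018-04-25 19:01:02 192.168.1.40 -> 52.94.236.20:443 bytes=412
2018-04-25 19:01:07 192.168.1.40 -> 52.94.236.20:443 bytes=96
2018-04-25 22:13:55 192.168.1.40 -> 52.94.236.20:443 bytes=180221
"""

LINE = re.compile(r"(\S+ \S+) (\S+) -> (\S+) bytes=(\d+)")

def traffic_by_destination(log_text):
    """Sum bytes sent from the device to each remote endpoint."""
    totals = Counter()
    for m in LINE.finditer(log_text):
        _timestamp, _src, dest, nbytes = m.groups()
        totals[dest] += int(nbytes)
    return totals

totals = traffic_by_destination(LOG)
assert totals["52.94.236.20:443"] == 412 + 96 + 180221
```

    In practice you'd feed this from tcpdump or your router's flow records rather than a string, but the principle is the same: unexplained volume is visible even when content is encrypted.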

    "If I felt the need for something like this, which I don't, then what I would consider 'adequate' privacy safeguards might include having a button-activated mode, where it doesn't even start listening until the button is pushed."

    The echo has a mute; you hit that, it's not listening to anything. So mute it until you want to talk to it, then talk to it, then mute it again. The ring lights up red when it's muted, so you can tell what state it is in.

    "If pushing the button were required to physically connect the microphone to the rest of the device, that would be a verifiable safeguard that could not just be eliminated 'in a flash' so to speak."

    Hack it. Nothing stopping you from getting in there and cutting out the microphones.

  • (Score: 4, Informative) by frojack on Wednesday April 25 2018, @08:12PM (6 children)

    by frojack (1554) on Wednesday April 25 2018, @08:12PM (#671829) Journal
    --
    No, you are mistaken. I've always had this sig.
    • (Score: 2) by fyngyrz on Wednesday April 25 2018, @08:16PM (3 children)

      by fyngyrz (6567) on Wednesday April 25 2018, @08:16PM (#671835) Journal

      Yeah, no. They had a bug for a while. They fixed it. Keep googling.

      • (Score: 4, Informative) by Arik on Wednesday April 25 2018, @08:46PM (1 child)

        by Arik (4543) on Wednesday April 25 2018, @08:46PM (#671851) Journal
        "Yeah, no. They had a bug for a while. They fixed it."

        And they can "fix it" back again any time they like.

        This is EXACTLY what I mean when I say you can't trust it. The fact that they can do it is PROVEN by the fact they already did it! And no less so if they did it by accident. They can do it anytime they want, but more importantly, *so can others.* When you have a system that is fundamentally insecure by design, it's not just the manufacturer who can 'pwn' you at will, but also any number of other bad actors who can 'exploit' the same capabilities the manufacturer has, but who will NOT be deterred by any concerns about the manufacturer's reputation.

        And yes, you can be super paranoid and sniff all the traffic and inspect it constantly. And you have to watch it CONSTANTLY for this to be any good. It could behave exactly as it's supposed to until it receives an update, or possibly just a particular obfuscated command, and then suddenly it's a hostile device. This could happen in mid-sentence. It won't matter how closely you watched the traffic every day for the last 2 months; if you aren't on the ball TODAY, you won't even notice the change. Who wants to do that, though? Who wants to pay good money for something that fundamentally can never be trusted and will always have to be watched like a hawk? In practice I doubt very much you are inspecting the traffic in real time, and even if you are, the vast majority of owners will not.

        Why not spend a few pennies more per unit and make them right?

        Oh, what an old fashioned idea. The world economy would die overnight without forced obsolescence and landfills growing across once fertile landscapes. I forgot, carry on.
        --
        If laughter is the best medicine, who are the best doctors?
        • (Score: 2, Insightful) by tftp on Wednesday April 25 2018, @10:38PM

          by tftp (806) on Wednesday April 25 2018, @10:38PM (#671918) Homepage
          The vast majority of people with these devices do not monitor the traffic. Maybe 1 in 100,000 does. The streaming code may be in each device, awaiting activation from outside or on secret code words (weapons, crime). It is unwise to believe that the system does not permit uncommanded listening.
      • (Score: 0) by Anonymous Coward on Thursday April 26 2018, @12:06AM

        by Anonymous Coward on Thursday April 26 2018, @12:06AM (#671959)

        It's always a bug, isn't it? Even the 100th time - "Ooops. Our bad. It's a bug."

    • (Score: 2) by fyngyrz on Wednesday April 25 2018, @08:19PM

      by fyngyrz (6567) on Wednesday April 25 2018, @08:19PM (#671836) Journal

      Also, I already discussed the patents here. [soylentnews.org]

    • (Score: 0) by Anonymous Coward on Wednesday April 25 2018, @08:39PM

      by Anonymous Coward on Wednesday April 25 2018, @08:39PM (#671846)

      similar, more recent story: [theverge.com]

      [...] users with Alexa-enabled devices have reported hearing strange, unprompted laughter. [...] Amazon said its planned fix will involve disabling the phrase, “Alexa, laugh,” and changing the command to “Alexa, can you laugh?” The company says the latter phrase is “less likely to have false positives”; in other words, the Alexa software is less likely to mistake common words and phrases that sound similar to the one that makes Alexa start laughing.