Submitted via IRC for SoyCow1984
Alexa's advice to 'kill your foster parents' fuels concern over Amazon Echo
An Amazon customer got a grim message last year from Alexa, the virtual assistant in the company's smart speaker device: "Kill your foster parents."
The user who heard the message from his Echo device wrote a harsh review on Amazon's website, Reuters reported, calling Alexa's utterance "a whole new level of creepy".
An investigation found the bot had quoted from the social media site Reddit, which is known for harsh and sometimes abusive posts, people familiar with the investigation told Reuters.
The odd command is one of many hiccups that have occurred as Amazon tries to train its machine to act something like a human, engaging in casual conversation in response to its owner's questions or comments.
The research is helping Alexa mimic human banter and talk about almost anything she finds on the internet. But making sure she keeps it clean and inoffensive has been a challenge.
(Score: 2) by mcgrew on Sunday December 30 2018, @09:47PM (1 child)
AI isn't intelligent, it merely simulates intelligence. It's just big computers with huge databases and clever programming. What AI does is not thinking, merely computing.
mcgrewbooks.com mcgrew.info nooze.org
(Score: 2) by cubancigar11 on Sunday January 06 2019, @04:09PM
True for now, but increasingly we don't actually understand the logic behind that programming. To be clear, I don't think Alexa or Google Home is currently at risk of anything like this, but as we depend more and more on neural networks trained on big data without oversight, who will know exactly when such training hits the point of singularity? The current story already shows that Amazon lacks proper oversight over the data it uses to train its program, and as we increasingly use programs to create other programs, who knows when we will lose oversight over the outcome?
I hope I am making sense.