Google Assistant fired a gun: We need to talk
For better or worse, Google Assistant can do it all. From mundane tasks like turning on your lights and setting reminders to convincingly mimicking human speech patterns, the AI helper is so capable it's scary. Its latest (unofficial) ability, though, is a bit more sinister. Artist Alexander Reben recently taught Assistant to fire a gun. Fortunately, the victim was an apple, not a living being. The 30-second video, simply titled "Google Shoots," shows Reben saying "OK Google, activate gun." Barely a second later, a buzzer goes off, the gun fires, and Assistant responds, "Sure, turning on the gun." On the surface, the footage is underwhelming; nothing visually arresting happens. But peel back the layers even a little, and it's obvious this project is meant to provoke a conversation about the boundaries of what AI should be allowed to do.
As Reben told Engadget, "the discourse around such a(n) apparatus is more important than its physical presence." For this project he chose to use Google Assistant, but said it could have been an Amazon Echo "or some other input device as well." At the same time, the device triggered "could have been a back massaging chair or an ice cream maker."
But Reben chose to arm Assistant with a gun. And given the concerns raised by Google's Duplex AI since I/O earlier this month, as well as the seemingly never-ending mass shootings in America, his decision is astute.
"OK Google, No more talking." / "OK Google, No more Mr. Nice Guy." / "OK Google, This is America." / "OK Google, [Trigger word]."
(Score: 2) by DutchUncle on Friday June 01 2018, @02:19PM
Isaac Asimov's robot stories posited that - through government regulation, or engineers' good sense - *all* robots were based on the Three Laws, the first of which is "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Many of the stories then dealt with how wrong things could go despite - or because of - the details of each situation in which someone tries to follow that apparently simple law.

The most basic workaround, discussed between two characters in (I think) "The Naked Sun", involves lack of knowledge and/or lack of context. One robot could be instructed to mix up a poisonous liquid "for use in the garden"; a different robot, unaware of the contents, could be instructed to take the liquid and use it in cooking; a third could serve the poisoned food to humans. Since the Google Assistant has no idea what it is "activating" or what that device can do, all of the responsibility in this story rests on the human.