
posted by martyb on Thursday May 31 2018, @09:55PM
from the OK-Google,-open-the-pod-bay-doors dept.

Google Assistant fired a gun: We need to talk

For better or worse, Google Assistant can do it all. From mundane tasks like turning on your lights and setting reminders to convincingly mimicking human speech patterns, the AI helper is so capable it's scary. Its latest (unofficial) ability, though, is a bit more sinister. Artist Alexander Reben recently taught Assistant to fire a gun. Fortunately, the victim was an apple, not a living being. The 30-second video, simply titled "Google Shoots," shows Reben saying "OK Google, activate gun." Barely a second later, a buzzer goes off, the gun fires, and Assistant responds "Sure, turning on the gun." On the surface, the footage is underwhelming -- nothing visually arresting is really happening. But peel back the layers even a little, and it's obvious this project is meant to provoke a conversation on the boundaries of what AI should be allowed to do.

As Reben told Engadget, "the discourse around such a(n) apparatus is more important than its physical presence." For this project he chose to use Google Assistant, but said it could have been an Amazon Echo "or some other input device as well." At the same time, the device triggered "could have been a back massaging chair or an ice cream maker."

But Reben chose to arm Assistant with a gun. And given the concerns raised by Google's Duplex AI since I/O earlier this month, as well as the seemingly never-ending mass shootings in America, his decision is astute.

"OK Google, No more talking." / "OK Google, No more Mr. Nice Guy." / "OK Google, This is America." / "OK Google, [Trigger word]."


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by urza9814 (3954) on Friday June 01 2018, @02:35AM (#687038) Journal

    Eh, that's all this demo is, but I think you're missing the bigger picture here.

    You've been able to verbally command a computer to turn on a light or trigger whatever else for decades. The point of these new AI assistants is that they bring a lot more of their own decision-making capability (or at least they're attempting to). If you say "OK Google, fire the gun" and it shoots someone, that's no different from you pulling the trigger. But what if you tell it "OK Google, if any face you don't recognize comes through this door, fire the gun" -- intending to "protect from intruders", which is perfectly legal in many jurisdictions -- and then your house catches on fire, the neighbors call the fire department, and your Google gun kills a firefighter? These things are sold as being "intelligent", but they're far from being able to handle the kinds of decisions people are throwing at them.
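
    To make that distinction concrete, here's a rough Python sketch -- every name in it is made up, this isn't any real assistant SDK -- of where the decision-making moves when you go from a direct command to a standing rule:

        # A minimal sketch, all names hypothetical (no real assistant SDK),
        # contrasting a direct voice command, where the human makes the call,
        # with a standing rule, where the device is left to decide on its own.

        KNOWN_FACES = {"alice", "bob"}

        def activate_relay() -> None:
            # Stand-in for whatever the assistant actually switches on.
            print("relay closed")

        def recognize_face(face_label: str) -> bool:
            # Stand-in for a face recognizer; a real one returns a confidence
            # score and misclassifies under smoke, masks, or bad lighting.
            return face_label in KNOWN_FACES

        def on_voice_command(phrase: str) -> None:
            # Direct command: the human decided; the device only executes.
            if phrase.lower() == "activate gun":
                activate_relay()

        def on_door_event(face_label: str) -> None:
            # Standing rule: "if any face you don't recognize comes through
            # this door, fire." The decision now lives in the device, so every
            # recognition error becomes an action it will carry out.
            if not recognize_face(face_label):
                activate_relay()

        on_voice_command("activate gun")   # human-initiated
        on_door_event("firefighter")       # device-initiated, and wrong

    The first function is just a fancy light switch; the second is the device making a lethal call off a classifier that was never designed for that.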

    I don't think it's something that can really be prevented, but I do think it's worth discussing in terms of how these things are marketed. The way so many devices are sold these days, they market a dream of what the software might one day be capable of (and they're not even always wrong), but they give people a false impression of its capabilities today. If you tell people it can recognize faces and objects and make intelligent decisions, and you don't give proper context about the limitations of those abilities, then it's not entirely unreasonable that someone might think some crazy stunt -- like plugging it into a gun for a DIY defense turret -- is a reasonable idea.

    I'm sure most of us here have seen some horrors created by newbie developers...now imagine every person alive gets the ability to program nearly any device they own by having the device attempt to parse natural language into code....
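
    As a toy illustration (not any real assistant's parser, just made-up Python), here's roughly how a naive "parse natural language into code" step goes wrong -- the qualifiers that carried the speaker's intent just fall on the floor:

        # A toy illustration (not any real assistant's parser) of a naive
        # "speech to rule" mapper: it keys on a few keywords and silently
        # drops the qualifiers that carried the speaker's actual intent.

        def naive_rule_from_speech(utterance: str) -> dict:
            rule = {"trigger": None, "action": None}
            text = utterance.lower()
            if "comes through this door" in text:
                rule["trigger"] = "door_motion"   # "I don't recognize" is lost
            if "turn on" in text:
                rule["action"] = "power_on_outlet"
            return rule

        spoken = ("If someone I don't recognize comes through this door "
                  "at night, turn on the alarm outlet")
        print(naive_rule_from_speech(spoken))
        # {'trigger': 'door_motion', 'action': 'power_on_outlet'}
        # The "I don't recognize" and "at night" conditions vanished, so the
        # rule now fires on every single door event.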

    And a big part of our legal system is focused on intent. Can you prove intent if an action was carried out through a poorly programmed smart device? You can kinda prove intent if someone writes a crappy computer program, because they gave step-by-step instructions...but natural language has a lot more ambiguity and implied meaning. So that could cause some issues too...

    Starting Score:       1 point
    Karma-Bonus Modifier: +1
    Total Score:          2