
SoylentNews is people

posted by janrinok on Friday December 09 2022, @12:02PM
from the maybe-they'll-try-Evil-Otto-next dept.

San Francisco Decides Killer Police Robots Aren't Such a Great Idea

San Francisco decides killer police robots aren't such a great idea:

The robot police dystopia will have to wait. Last week the San Francisco Board of Supervisors voted to authorize the San Francisco Police Department to add lethal robots to its arsenal. The plan wasn't yet "robots with guns" (though some police bomb disposal robots already fire shotgun shells, and some are used by the military as gun platforms) but to arm the bomb disposal robots with bombs, allowing them to drive up to suspects and detonate. Once the public got wind of this, the protests started, and after last week's 8–3 vote authorizing the robots, the SF Board of Supervisors has now unanimously voted to (at least temporarily) ban lethal robots.

Shortly after the initial news broke, a "No Killer Robots" campaign started with the involvement of the Electronic Frontier Foundation, the ACLU, and other civil rights groups. Forty-four community groups signed a letter in opposition to the policy, saying, "There is no basis to believe that robots toting explosives might be an exception to police overuse of deadly force. Using robots that are designed to disarm bombs to instead deliver them is a perfect example of this pattern of escalation, and of the militarization of the police force that concerns so many across the city."

San Francisco cancels plans for 'killer police robots'

San Francisco supervisors have nixed their plan to allow police officers to use robots to kill in emergency situations, a board member confirmed on Tuesday.

"The people of San Francisco have spoken loud and clear: There is no place for killer police robots in our city," supervisor Dean Preston told ABC News in a statement. "There have been more killings at the hands of police than any other year on record nationwide. We should be working on ways to decrease the use of force by local law enforcement, not giving them new tools to kill people."

[...] The news comes a day after community groups protested outside San Francisco's City Hall condemning the ordinance, which the San Francisco Board of Supervisors approved in an 8-3 vote on Nov. 29.

San Francisco Reverses Approval of Killer Robot Policy

San Francisco reverses approval of killer robot policy:

[...] The San Francisco Police Department made the proposal after a law came into effect requiring California officials to define the authorized uses of their military-grade equipment. It would have allowed cops to equip robots with explosives "to contact, incapacitate, or disorient violent, armed, or dangerous suspects." Authorities could have used the robots for lethal force only after exhausting all other options, and a high-ranking official would have had to approve their deployment. However, critics are concerned that the machines could be abused.

[...] While the supervisors voted to ban the use of lethal force by police robots — for now, anyway — they also sent the original policy proposing the use of killer robots back for review. The board's Rules Committee could now amend it further to have stricter rules for use of bomb-equipped robots, or it could scrap the old proposal altogether.


Original Submission #1 | Original Submission #2 | Original Submission #3

Related Stories

Robot Dogs Armed With AI-Targeting Rifles Undergo US Marines Special Ops Evaluation 35 comments

https://arstechnica.com/gadgets/2024/05/robot-dogs-armed-with-ai-targeting-rifles-undergo-us-marines-special-ops-evaluation/

The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

[...] MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator that could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously.

On LinkedIn, Onyx Industries shared a video of a similar system in action.

[...] The prospect of deploying armed robotic dogs, even with human oversight, raises significant questions about the future of warfare and the potential risks and ethical implications of increasingly autonomous weapons systems. There's also the potential for backlash if similar remote weapons systems eventually end up used domestically by police. Such a concern would not be unfounded: In November 2022, we covered a decision by the San Francisco Board of Supervisors to allow the San Francisco Police Department to use lethal robots against suspects.
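The excerpt describes a specific control architecture: the robot's imaging system may detect and track targets autonomously, but it only *nominates* them to a remote human operator, and no fire action is possible without explicit operator approval. A minimal sketch of that human-in-the-loop gate might look like this (the names and structure are purely illustrative, not Onyx's actual SENTRY API):

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A target the imaging system has detected and is tracking."""
    track_id: int
    kind: str          # e.g. "person", "drone", "vehicle"
    confidence: float  # detector confidence, 0.0-1.0

def fire_decision(track: Track, operator_approved: bool) -> str:
    """Human-in-the-loop gate: the autonomous side can only nominate
    targets. Absent an explicit human approval, the only possible
    outcome is HOLD; the system cannot decide to fire on its own."""
    if not operator_approved:
        return "HOLD"
    return "FIRE"

# The detector nominates a target to the remote operator...
nominated = Track(track_id=7, kind="drone", confidence=0.92)
# ...and until a human says yes, nothing happens.
print(fire_decision(nominated, operator_approved=False))  # HOLD
```

The key design point is that the default path returns HOLD: autonomy is confined to detection and tracking, while the irreversible action sits entirely behind the human approval flag.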

Previously on SoylentNews:
    You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000 - 20240426
    San Francisco Decides Killer Police Robots Aren't Such a Great Idea - 20221208


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by inertnet on Friday December 09 2022, @12:50PM (6 children)

    by inertnet (4071) on Friday December 09 2022, @12:50PM (#1281862) Journal

I guess this now opens up an opportunity for deploying robots with non-lethal force capabilities? If you first suggest something that nobody wants, then follow it with something less evil (which you actually wanted in the first place), people are probably more willing to go along.

    • (Score: 2) by Freeman on Friday December 09 2022, @02:24PM (2 children)

      by Freeman (732) on Friday December 09 2022, @02:24PM (#1281869) Journal

Define non-lethal? Tasers? Batons? Bean-bag rounds?

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 2, Funny) by ikanreed on Friday December 09 2022, @05:07PM

        by ikanreed (3164) on Friday December 09 2022, @05:07PM (#1281889) Journal

        It doesn't matter. No matter what you equip a robot with, a human policeman will always be more efficient at delivering racial violence to our streets.

      • (Score: 2) by krishnoid on Saturday December 10 2022, @12:02AM

        by krishnoid (1156) on Saturday December 10 2022, @12:02AM (#1281912)

        In any event, make sure you keep the lethal option out of arm's reach [youtu.be].

    • (Score: 2) by Nobuddy on Saturday December 10 2022, @07:19PM (2 children)

      by Nobuddy (1626) on Saturday December 10 2022, @07:19PM (#1281948)

It was an absurd concept in the first place. It is a drone, not autonomous. The only reason an officer should use lethal force is to protect themselves or others. Non-lethal force is preferred, but sometimes lethal force is chosen to protect the officer.
There is NO WAY for the officer to be in danger, so NO REASON to choose lethal force. Period.

      • (Score: 2) by maxwell demon on Sunday December 11 2022, @12:10AM

        by maxwell demon (1608) on Sunday December 11 2022, @12:10AM (#1281963) Journal

        Do the robots have microphones? Then the suspect could throw an F-bomb on the officer. ;-)

        --
        The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by MIRV888 on Sunday December 11 2022, @08:22AM

        by MIRV888 (11376) on Sunday December 11 2022, @08:22AM (#1281992)

        A hostage situation could require lethal force, but that's about it.

  • (Score: 2) by looorg on Friday December 09 2022, @01:43PM (8 children)

    by looorg (578) on Friday December 09 2022, @01:43PM (#1281867)

I guess this was one of the stories that got dropped when the db went on the fritz the last time and about a week or so of stories went to digital heaven (not to be confused with the cloud). It was on the site on the 26th, 27th, or 28th of November (timezones be tricky, yo!).

It's kind of interesting to note how they are all now backing away from this, and it's suddenly a horrible idea, when they previously seemed very (or at least somewhat) encouraged by their RoboCop fantasies (or probably more ED-209 fantasies). Even though they never really specified what kind of "killer robots" they had access to, or whether this was just mounting some kind of rifle to a drone or platform rather than some Terminator-style robot. But apparently that's what people think it is, or will become, and not what it currently is or represents. So they can't have any of that.

    • (Score: 3, Interesting) by Phoenix666 on Friday December 09 2022, @04:20PM (1 child)

      by Phoenix666 (552) on Friday December 09 2022, @04:20PM (#1281881) Journal

They really ought to read and watch more sci-fi than they do, because most of these schemes have already been played out as thought exercises there. The Star Trek: The Next Generation episode "The Arsenal of Freedom [wikipedia.org]" hinged on a planet whose inhabitants were killed off by their own automated weapons. I also read an earlier short story, whose title and author I can't recall or find at the moment, that involved a future tyranny commissioning a generation of killbots that could detect whether you were a party loyalist before deciding to execute you; a clever scientist reversed the code shortly before going live so that they would only kill true party loyalists.

      The bottom line is that taking human agency out of life-or-death decisions is a bad idea.

      --
      Washington DC delenda est.
      • (Score: 2) by krishnoid on Friday December 09 2022, @09:31PM

        by krishnoid (1156) on Friday December 09 2022, @09:31PM (#1281903)

        If it's a burning question, you can post it to scifi.stackexchange.com with the 'story-identification' and 'short-story' tags and someone might be able to identify it for you.

        Also, when it comes to agency for life-or-death decisions, maybe artificial intelligence needs to be trained [mit.edu] better [mit.edu] by its current human custodians. When it comes to sending a remote-control device to do a meatbag's job under meatbag control, though ... ?

    • (Score: 0) by Anonymous Coward on Friday December 09 2022, @05:01PM (5 children)

      by Anonymous Coward on Friday December 09 2022, @05:01PM (#1281888)

      It was going to be remotely controlled weapons, or even explosives. Not autonomous.

      https://www.reuters.com/article/us-texas-crime/no-charges-for-dallas-officers-who-killed-sniper-with-robot-bomb-idUSKBN1FK35W [reuters.com]

      • (Score: 2) by RS3 on Friday December 09 2022, @06:40PM (4 children)

        by RS3 (6367) on Friday December 09 2022, @06:40PM (#1281896)

        My problem with the idea, much like self-driving cars, is: who is responsible when something goes wrong? (and it will go wrong)

        • (Score: 2) by krishnoid on Friday December 09 2022, @09:32PM (3 children)

          by krishnoid (1156) on Friday December 09 2022, @09:32PM (#1281904)

          Same way it works now -- the artificial intelligence with the best ability to argue right and wrong (and legal) in a court of law.

          • (Score: 2) by RS3 on Friday December 09 2022, @10:38PM (2 children)

            by RS3 (6367) on Friday December 09 2022, @10:38PM (#1281908)

            Is that a thing, or did you make that up? :)

            Joking, cynicism, etc. aside, the real question is (and I don't know the answer): what are our lawmakers (such as they are) saying / doing about this legal conundrum?

            • (Score: 2) by krishnoid on Friday December 09 2022, @11:33PM (1 child)

              by krishnoid (1156) on Friday December 09 2022, @11:33PM (#1281911)

              It's getting there [americanbar.org]. Don't know what the law*makers* are doing about it, but since a lot of them are ex-lawyers themselves, another generation and I bet they'll have internalized the presence of artificial intelligence in the entire field of law, from drafting to interpreting to arguing.

              • (Score: 2) by RS3 on Saturday December 10 2022, @12:34AM

                by RS3 (6367) on Saturday December 10 2022, @12:34AM (#1281913)

                Good, thank you, and someday I might have mod points!

  • (Score: 2) by Joe Desertrat on Saturday December 10 2022, @01:25AM (2 children)

    by Joe Desertrat (2454) on Saturday December 10 2022, @01:25AM (#1281915)

    You see, killbots have a preset kill limit. Knowing their weakness, just send wave after wave of your own men at them until they reach their limit and shut down.

    • (Score: 0) by Anonymous Coward on Saturday December 10 2022, @03:49AM

      by Anonymous Coward on Saturday December 10 2022, @03:49AM (#1281920)

      You suck!

    • (Score: 2) by Tork on Saturday December 10 2022, @04:57AM

      by Tork (3914) Subscriber Badge on Saturday December 10 2022, @04:57AM (#1281922) Journal
      Hopefully the killbots aren't 64-bit.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈 - Give us ribbiti or make us croak! 🐸
  • (Score: 2) by MIRV888 on Sunday December 11 2022, @08:34AM

    by MIRV888 (11376) on Sunday December 11 2022, @08:34AM (#1281994)

SF aside, multiple militaries around the world already have automated explosive drones in their arsenals. Israel is an example. There were no pilots for the drones that hit the Saudi refineries. As a drone hobbyist, I can tell you it's not very difficult: you put in detailed GPS coordinates, a rudimentary visual targeting system (think Roomba), and a shaped charge. Boom! Your refinery doesn't work anymore.
Swarms of antipersonnel drones are surely out there. It's just too easy and too cheap not to do.
