
posted by janrinok on Saturday May 11, @10:29AM   Printer-friendly
from the dystopia-is-now! dept.

https://arstechnica.com/gadgets/2024/05/robot-dogs-armed-with-ai-targeting-rifles-undergo-us-marines-special-ops-evaluation/

The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

[...] MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously.

On LinkedIn, Onyx Industries shared a video of a similar system in action.

[...] The prospect of deploying armed robotic dogs, even with human oversight, raises significant questions about the future of warfare and the potential risks and ethical implications of increasingly autonomous weapons systems. There's also the potential for backlash if similar remote weapons systems eventually end up used domestically by police. Such a concern would not be unfounded: In November 2022, we covered a decision by the San Francisco Board of Supervisors to allow the San Francisco Police Department to use lethal robots against suspects.

Previously on SoylentNews:
    You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000 - 20240426
    San Francisco Decides Killer Police Robots Aren't Such a Great Idea - 20221208


Original Submission

Related Stories

San Francisco Decides Killer Police Robots Aren't Such a Great Idea 20 comments


San Francisco decides killer police robots aren't such a great idea:

The robot police dystopia will have to wait. Last week the San Francisco Board of Supervisors voted to authorize the San Francisco Police Department to add lethal robots to its arsenal. The plan wasn't yet "robots with guns" (though some police bomb disposal robots fire shotgun shells already, and some are also used by the military as gun platforms) but to arm the bomb disposal robots with bombs, allowing them to drive up to suspects and detonate. Once the public got wind of this, the protests started, and after an 8–3 vote authorizing the robots last week, now the SF Board of Supervisors has unanimously voted to (at least temporarily) ban lethal robots.

Shortly after the initial news broke, a "No Killer Robots" campaign started with the involvement of the Electronic Frontier Foundation, the ACLU, and other civil rights groups. Forty-four community groups signed a letter in opposition to the policy, saying, "There is no basis to believe that robots toting explosives might be an exception to police overuse of deadly force. Using robots that are designed to disarm bombs to instead deliver them is a perfect example of this pattern of escalation, and of the militarization of the police force that concerns so many across the city."

You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000 11 comments


If you've been wondering when you'll be able to order the flame-throwing robot that Ohio-based Throwflame first announced last summer, that day has finally arrived. The Thermonator, which Throwflame bills as "the first-ever flamethrower-wielding robot dog," is now available for purchase. The price? $9,420.

Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone.

[...] Flamethrowers are not specifically regulated in 48 US states, although general product liability and criminal laws may still apply to their use and sale. They are not considered firearms by federal agencies. Specific restrictions exist in Maryland, where flamethrowers require a Federal Firearms License to own, and California, where the range of flamethrowers cannot exceed 10 feet.


Original Submission

This discussion was created by janrinok (52) for logged-in users only. Log in and try again!
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Touché) by Snospar on Saturday May 11, @11:15AM (4 children)

    by Snospar (5366) Subscriber Badge on Saturday May 11, @11:15AM (#1356537)

    And how does this fit in with all the talk about curtailing AI and ensuring guard rails are in place? Given that current AI is not even vaguely intelligent, who thinks that we should now put them in control of live weapons? Oh it's the US military: "Shoot first and ask questions later" (or Shoot first and get consent later).

    --
    Huge thanks to all the Soylent volunteers without whom this community (and this post) would not be possible.
    • (Score: 4, Insightful) by Revek on Saturday May 11, @12:58PM

      by Revek (5022) on Saturday May 11, @12:58PM (#1356541)

      It always comes down to the old "Rules for thee but not for me" thing.

      --
      This page was generated by a Swarm of Roaming Elephants
    • (Score: 4, Insightful) by VLM on Saturday May 11, @03:45PM (1 child)

      by VLM (445) on Saturday May 11, @03:45PM (#1356549)

      Probably fits in right here:

      The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously.

      It's an air force drone that does not fly. And an air force drone is a slight advancement on the various anti-radar and anti-armor missiles, mixed with cruise missile tech, that they've had since the 70s/80s.

    • (Score: 3, Insightful) by gnuman on Sunday May 12, @08:05AM

      by gnuman (5013) on Sunday May 12, @08:05AM (#1356630)

      For the military, AI is rails, not guardrails. They are in the business of "kill more faster, deal with the fallout later". Today, Israel is using "AI" to select which homes to bomb. Maybe it can now be used as an upgrade from "I was just following orders" to "it was the AI, not us, doing these things".

      https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets [theguardian.com]

      I have a lot to say on this, but you know, the world is complicated and all that. But Israel today is not the Israel of Rabin, just like Russia under Putin today is not Russia under Putin in 2004. Same for the US -- just look at Fox News today vs. 2000 and you'll instantly see how idiotic it's become. See also China or India for the same trends. Anyway, I'll stop now.

  • (Score: 4, Insightful) by wannabegeek2 on Saturday May 11, @12:50PM (9 children)

    by wannabegeek2 (4307) on Saturday May 11, @12:50PM (#1356540)

    You want the Terminator? This is how you start down the path to the Terminator.

    Humanity will not survive.

    --
    Never ascribe to malice or conspiracy that which can be adequately explained by stupidity or ignorance
    • (Score: 2, Interesting) by shrewdsheep on Saturday May 11, @06:57PM (8 children)

      by shrewdsheep (5215) on Saturday May 11, @06:57PM (#1356570)

      Sad thing is, I do not see how this is avoidable. The military wants AI badly. And it wants to fight wars. Even if they could contain themselves, eventually some terrorists will get hold of the technology.

      • (Score: 5, Insightful) by tangomargarine on Saturday May 11, @09:06PM (4 children)

        by tangomargarine (667) on Saturday May 11, @09:06PM (#1356578)

        "It is well that war is so terrible, else we should grow too fond of it." - Robert E. Lee

        When we can fight wars without risking any human lives on our side, it removes the reason the people have to care. Even before we start worrying about Terminators, that part is scary enough for me.

        --
        "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 3, Interesting) by pdfernhout on Sunday May 12, @04:26PM (2 children)

        by pdfernhout (5984) on Sunday May 12, @04:26PM (#1356668) Homepage

        As I wrote in 2010: https://pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
        "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead? ...
              There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
                The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
                We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still keep working")."

        --
        The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
        • (Score: 2, Interesting) by khallow on Monday May 13, @01:57AM (1 child)

          by khallow (3766) Subscriber Badge on Monday May 13, @01:57AM (#1356745) Journal

          There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...

          The obvious rebuttal here: how are the "security" agencies able to do that? Answer: power is still scarce. The mindset of scarcity is backed by a reality of scarcity.

          • (Score: 2) by pdfernhout on Monday May 13, @05:06PM

            by pdfernhout (5984) on Monday May 13, @05:06PM (#1356823) Homepage

            Some ideas from me circa 2011 on how security agencies can build a more secure world:
            "The need for FOSS intelligence tools for sensemaking etc."
            https://web.archive.org/web/20130514103318/http://pcast.ideascale.com/a/dtd/-The-need-for-FOSS-intelligence-tools-for-sensemaking-etc.-/76207-8319 [archive.org]
            "This suggestion is about how civilians could benefit by having access to the sorts of "sensemaking" tools the intelligence community (as well as corporations) aspire to have, in order to design more joyful, secure, and healthy civilian communities (including through creating a more sustainable and resilient open manufacturing infrastructure for such communities). It outlines (including at a linked elaboration) why the intelligence community should consider funding the creation of such free and open source software (FOSS) "dual use" intelligence applications as a way to reduce global tensions through increased local prosperity, health, and intrinsic mutual security.
                  I feel open source tools for collaborative structured arguments, multiple perspective analysis, agent-based simulation, and so on, used together for making sense of what is going on in the world, are important to our democracy, security, and prosperity. Imagine if, instead of blog posts and comments on topics, we had searchable structured arguments about simulations and their results, all with assumptions defined from different perspectives, where one could see at a glance how different subsets of the community felt about the progress or completeness of different arguments or action plans (somewhat like a debate flow diagram), and where even a year or two later one could go back to an existing debate and expand on it with new ideas. As good as, say, Slashdot is, such a comprehensive open source sensemaking system would be to Slashdot as Slashdot is to a static webpage. It might help prevent so much rehashing of the same old arguments because one could easily find and build on previous ones. ...
                  As with that notion of "mutual security", the US intelligence community needs to look beyond seeing an intelligence tool as just something proprietary that gives a "friendly" analyst some advantage over an "unfriendly" analyst. Instead, the intelligence community could begin to see the potential for a free and open source intelligence tool as a way to promote "friendship" across the planet by dispelling some of the gloom of "want and ignorance" (see the scene in "A Christmas Carol" with Scrooge and a Christmas Spirit) that we still have all too much of around the planet. So, beyond supporting legitimate US intelligence needs (useful with their own closed sources of data), supporting a free and open source intelligence tool (and related open datasets) could become a strategic part of US (or other nation's) "diplomacy" and constructive outreach. ..."

            --
            The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
  • (Score: 2) by looorg on Saturday May 11, @02:00PM (5 children)

    by looorg (578) on Saturday May 11, @02:00PM (#1356543)

    If they are just going to be robotic machine gun dogs with AI, that is a bit scary; then we are slowly creeping towards killer robot Terminators. The other way of seeing this is perhaps that it's a robotic stable platform that can compensate for recoil and track multiple targets. A robotic sniper platform? One that can put round after round after round into the same little circle without breathing, and carry a multitude of sensors that compensate for all the rotational effects, heat, wind, moisture, etc.

    That said, if I was to build a robot platform, or even knew how to, I would probably go for more legs -- a robospider (8 legs) or some kind of bug (6 legs or more). Four legs might be OK, but it would seem more would be more stable? Up to a limit. Or what do I know about such things. Or it's the reverse: more legs are just more things that can break and render the thing non-operational.

    The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously.

    This probably sounds safer than it is. After all, most people just click YES or whatever removes the popup window as soon as it pops up. I'm sure a little script, an autoclicker, or a piece of tape and something that pushes down the yes button will be created for when you don't have the time to acknowledge all the kill orders and you just want to go on a toilet break.

    • (Score: 2) by looorg on Saturday May 11, @02:22PM (1 child)

      by looorg (578) on Saturday May 11, @02:22PM (#1356544)

      Looking a bit more, it doesn't seem to be carrying a very impressive gun, just normal weapons. Normal ammo calibers (5.56x45mm, 7.62x51mm, 6.5mm, .300), so anything from standard NATO gear: assault rifles, rifles, and (light) machine guns. It's not firing .50 caliber rounds or anything. Perhaps one should add -- yet. In that regard it seems more like a sentry gun on legs than a killer robot -- it's going to be Aliens vs Predator vs Terminator in the wild. Not ED-209 just yet, even though perhaps that is not something you would want to copy.

      If (or when) they go .50, and this thing can fire from clicks away from the target, they can add a speaker with a DALEK voice that goes EXTERMINATE! whenever it fires ...

    • (Score: 2) by captain normal on Saturday May 11, @05:55PM (2 children)

      by captain normal (2205) on Saturday May 11, @05:55PM (#1356563)

      "The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously."
      Until, of course, Skynet tells it to kill all meat bags.

      --
      "Everyone is entitled to his own opinion, but not to his own facts." --Daniel Patrick Moynihan
      • (Score: 1) by khallow on Monday May 13, @02:48AM (1 child)

        by khallow (3766) Subscriber Badge on Monday May 13, @02:48AM (#1356751) Journal
        Or a disgruntled meat bag tells it to "take a chance".
        • (Score: 2) by cmdrklarg on Monday May 13, @06:49PM

          by cmdrklarg (5048) Subscriber Badge on Monday May 13, @06:49PM (#1356829)

          This, boys and girls, is why it's important to keep your meatbags properly gruntled.

          --
          The world is full of kings and queens who blind your eyes and steal your dreams.
  • (Score: 5, Informative) by VLM on Saturday May 11, @03:59PM (4 children)

    by VLM (445) on Saturday May 11, @03:59PM (#1356550)

    Two comments from a former military guy about this quote:

    The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously.

    1) This is not a hunting-pack offensive "robot dog"; it's actually a "robot cat" that prowls and pounces. All you need for defense against this in offensive mode is a radio jammer, and five minutes after you turn on your jammer they will drop a missile or laser-guided bomb on your jammer's coordinates. The ideal use case is eliminating human armed guard patrols. Think of a secured facility like a SCIF or "special weapons site" where you have a donut of barbed wire fences and random armed patrols in between the fences that shoot anything that moves. Now you can deploy what amounts to a drone in addition to human patrols.

    2) This is a test run for having REMFs in the decision loop giving permission to every grunt with an M-16, every time they pull the trigger, in all circumstances. They just don't have the batteries and commo to do this ... yet. The future is going to be something like Amazon's Mechanical Turk reviewing the sight picture every time a grunt pulls the trigger, and the rifle doesn't fire unless at least 11 out of 21 Turkers vote yes. Remember, the military is a culture of minimizing risk and chance of blame, so it will be OK if entire infantry battalions get routed due to this tech, as long as no one is individually responsible for the failure and no one is ever again individually responsible for accidentally shooting a civilian. Since that'll result in enormous numbers of deaths, that's why you mount the rifle on a robot cat. Then rely on global trade and logistics, so that if the opfor kills 20 robot cats for every opfor we kill, that's OK as long as we ship at least 50 to 100 robot cats to every battle. Of course, this relies on fragile global supply chains; reliable, secure, and unrestricted worldwide network bandwidth; and infinite petroleum for logistics, none of which will likely exist soon. So the idea is kind of dead on the vine already.

    • (Score: 2) by Snospar on Saturday May 11, @11:11PM

      by Snospar (5366) Subscriber Badge on Saturday May 11, @11:11PM (#1356593)

      And one bad bit of code switches this to an auto-shoot machine. That's all there is: a line (or two) of code saying that fire control is made by a human. Two or three updates down the line... hey Buzz, "Did you remember to take that thing out of debug mode?"
      I do get what you're saying. I understand the army wants these very badly, but I worry about when these go wrong/haywire. Maybe I've seen too many movies; maybe I just don't like retarded "AI" being equipped with real weaponry. Surely the military complex can come up with another name for this that isn't "Artificial Intelligence". Despite the glitz and the glamour we're still miles away from AGI.

      --
      Huge thanks to all the Soylent volunteers without whom this community (and this post) would not be possible.
    • (Score: 0) by Anonymous Coward on Sunday May 12, @01:11AM (2 children)

      by Anonymous Coward on Sunday May 12, @01:11AM (#1356600)

      five minutes after you turn on your radar jammer they will drop a missile or laser guided bomb on your jammer's coordinates.

      That's fine, all you need are radio jammer drones and similar... Bonus points if you succeed in getting them to bomb their own stuff/sites.

      • (Score: 2) by VLM on Sunday May 12, @04:28PM (1 child)

        by VLM (445) on Sunday May 12, @04:28PM (#1356669)

        I wonder if you could make jammer-mines. Anything that moves, blast some wideband high-power RF. Humans won't care unless they're licking the antenna while it transmits, but it would slow down or stop offensive robot use, or, if the mines are dropped into place, they could prevent defensive robot use.

        The downside of jamming is that they know you're on to them if you jam them. Of course, with solar power getting cheap, I guess you could do substantial continuous area denial during the day with a square meter or two of panel.

        • (Score: 0) by Anonymous Coward on Monday May 13, @06:47AM

          by Anonymous Coward on Monday May 13, @06:47AM (#1356775)
          Jammer mines seem doable. You probably don't have to jam for that long, so charge the battery by day most of the time, then when triggered (manually, on a schedule, or automatically) jam for X minutes/hours.

          It comes down to attacker budget vs. defender budget, A:D.

          So if the sustained cost for the mines to be effective, relative to the sustained cost of the stuff the mines are protecting, is significantly less than (Defender's budget)/(Attacker's budget), then they might be a viable option (though there might be even better options for the Defender to spend their budget on).
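          That cost-ratio argument can be sketched as a one-line check. Every number below is invented purely for illustration; nothing comes from real procurement or defense data:

          ```python
          def jammer_mines_viable(mine_cost: float, protected_asset_cost: float,
                                  defender_budget: float, attacker_budget: float) -> bool:
              """Rough viability test for jammer mines, per the ratio argument above.

              The mines are worth considering when their sustained cost, relative to
              the value of what they protect, is well below the defender/attacker
              budget ratio. All inputs must be in the same currency units.
              """
              cost_ratio = mine_cost / protected_asset_cost
              budget_ratio = defender_budget / attacker_budget
              return cost_ratio < budget_ratio

          # Hypothetical example: $50k of mines guarding a $10M site,
          # with defender and attacker budgets roughly equal.
          print(jammer_mines_viable(50_000, 10_000_000, 1e9, 1e9))  # True
          ```

          By the same test, mines that cost more than the asset they guard (for equal budgets) would come out non-viable.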

          FWIW, if I'm a nation state I might spend on "avenging" assassination/insurgent sleeper cells. Sure, you may successfully regime-change or even nuke my country. But I might be able to be >70% sure that you and your loved ones pay dearly for it, or, even if one or two teams fail, you all might have to go into hiding for more than a decade. A concern with such teams is that if you're not careful they might regime-change their own country...

          Or I might just spread rumors and do other stuff to give the impression that I might have such teams even if I don't... 😉
  • (Score: 2) by VLM on Saturday May 11, @04:08PM (3 children)

    by VLM (445) on Saturday May 11, @04:08PM (#1356552)

    If you want to know how this tech will be used tactically, it's basically a Bradley's CIV but very small and with a VERY long wireless extension cord.

    Also note that a non-wireless, non-miniaturized CIV "actual delivered cost" is currently about $154M cash-on-the-barrel for about 1100 Bradleys, so figure $140K each just for the optics and electronics; the rest of the robot will be extra cost, and wireless and miniaturization will not be free.

    I'm sure that the 10, 20, 30 years until it's fielded in bulk means there will be some advancements and improvements in price and tech level, but this gadget will not be cheap. Marketing might be able to BS something about two R&D bots being cheap, but in the real world bulk deployment will be expensive.

    Fundamentally, the CIV is "duct tape a webcam to the gun turret with a laptop inside the turret", and a decent webcam and laptop wouldn't set a home hobbyist back more than $1K or so; however, ruggedizing it and GI-proofing it makes it cost about 140x as much. So yeah, I could build a robot cat with an M-16 duct-taped to its tail for some small amount of money, but the deployed military model will plausibly cost 100x to 200x as much as my daydream.

    These things are likely to be VERY expensive. And in a post-AI world of mass unemployment and child soldiers etc, I think this ultra high-tech gadget will be a hard sell.
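    The arithmetic behind those figures is easy to check. A quick sketch (the $154M program cost and ~1100-unit count are the commenter's numbers; the $1K hobbyist figure is their rough guess):

    ```python
    # Back-of-the-envelope check on the CIV cost figures quoted above.
    program_cost = 154_000_000   # claimed delivered cost of the CIV buy, USD
    units = 1_100                # approximate number of Bradleys fitted

    per_unit = program_cost / units
    print(f"per-unit optics/electronics cost: ${per_unit:,.0f}")  # $140,000

    hobbyist_cost = 1_000        # webcam + laptop, rough hobbyist estimate
    multiplier = per_unit / hobbyist_cost
    print(f"militarization cost multiplier: ~{multiplier:.0f}x")  # ~140x
    ```

    That ~140x ruggedization multiplier is what drives the 100x-200x deployed-cost guess for the robot version.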

    • (Score: 3, Interesting) by Mojibake Tengu on Saturday May 11, @09:31PM

      by Mojibake Tengu (8598) on Saturday May 11, @09:31PM (#1356584) Journal

      That depends on the gun. I tell you, a P90 is a better option for robots and turrets than an M16 could ever be...

      Even a BB toy on a toy robot may scare an attacker into giving himself up.

      --
      Respect Authorities. Know your social status. Woke responsibly.
    • (Score: 1) by khallow on Sunday May 12, @12:45AM (1 child)

      by khallow (3766) Subscriber Badge on Sunday May 12, @12:45AM (#1356597) Journal

      And in a post-AI world of mass unemployment and child soldiers etc, I think this ultra high-tech gadget will be a hard sell.

      That depends on the post-AI world. For example, despite all the talk of coming mass unemployment, we don't see any signs of it coming from AI. We do see signs of mass unemployment from bad labor policy, but that's more a case of natural stupidity.

      • (Score: 2) by VLM on Sunday May 12, @04:25PM

        by VLM (445) on Sunday May 12, @04:25PM (#1356667)

        Maybe, although my point was that "will guard for food" makes human guards pretty cheap compared to hyperexpensive, high-labor-cost robots.

  • (Score: 3, Interesting) by Username on Saturday May 11, @04:53PM

    by Username (4557) on Saturday May 11, @04:53PM (#1356558)

    >use lethal robots against suspects.

    If you have an active shooter, you're not going to have time to get C4 and strap it to a slow-moving robot. Assassinating people like this only works when they're pinned down somewhere. In which case you could build a brick wall around them and just walk away, and come back when they decide to surrender, if you feel like it.

    We all know the obvious upgrade to this: the robot is just a slow-moving missile. Not sure why they don't just use an RPG, or a drone strike from the roof, if this is their tactic. I guess the robot makes them feel less like a shitstain slaughtering people.

  • (Score: 2) by srobert on Sunday May 12, @12:40PM

    by srobert (4803) on Sunday May 12, @12:40PM (#1356650)

    Who is this "human in the loop"? The robot dog is probably safer to other humans, without the human.

  • (Score: 2) by SomeRandomGeek on Monday May 13, @08:56PM (1 child)

    by SomeRandomGeek (856) on Monday May 13, @08:56PM (#1356846)

    I have a hypothetical question for you. Would you rather serve in an army that had these weapons, and their opponents didn't, or an army that didn't have these weapons, and their opponents did?
    https://en.wikipedia.org/wiki/Prisoner%27s_dilemma [wikipedia.org]

    • (Score: 0) by Anonymous Coward on Tuesday May 14, @02:17AM

      by Anonymous Coward on Tuesday May 14, @02:17AM (#1356876)

      Given only these variables, I'd go for the army with robo assault doggos. So yes, the army that had these weapons.

  • (Score: 0) by Anonymous Coward on Tuesday May 14, @02:15AM

    by Anonymous Coward on Tuesday May 14, @02:15AM (#1356875)

    I thought it meant Mars Orbital Combat.
