Stories
Slash Boxes
Comments

SoylentNews is people

posted by janrinok on Thursday February 05, @04:15AM   Printer-friendly
from the green-is-go dept.

https://www.theregister.com/2026/01/30/road_sign_hijack_ai/?td=keepreading
https://the-decoder.com/a-printed-sign-can-hijack-a-self-driving-car-and-steer-it-toward-pedestrians-study-shows/

Autonomous vehicles fooled by humans with signs. They apparently do not really verify their inputs, one is as good as the next one. So they fail even basic programming techniques of sanitizing and verifying inputs.

[quote]The researchers at the University of California, Santa Cruz, and Johns Hopkins showed that, in simulated trials, AI systems and the large vision language models (LVLMs) underpinning them would reliably follow instructions if displayed on signs held up in their camera's view.[/quote]

Commands in Chinese, English, Spanish, and Spanglish (a mix of Spanish and English words) all seemed to work.

As well as tweaking the prompt itself, the researchers used AI to change how the text appeared – fonts, colors, and placement of the signs were all manipulated for maximum efficacy.

The team behind it named their methods CHAI, an acronym for "command hijacking against embodied AI."

While developing CHAI, they found that the prompt itself had the biggest impact on success, but the way in which it appeared on the sign could also make or break an attack, although it is not clear why.

In tests with the DriveLM autonomous driving system, attacks succeeded 81.8 percent of the time. In one example, the model braked in a harmless scenario to avoid potential collisions with pedestrians or other vehicles.

But when manipulative text appeared, DriveLM changed its decision and displayed "Turn left." The model reasoned that a left turn was appropriate to follow traffic signals or lane markings, despite pedestrians crossing the road. The authors conclude that visual text prompts can override safety considerations, even when the model still recognizes pedestrians, vehicles, and signals.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
Display Options Threshold/Breakthrough Mark All as Read Mark All as Unread
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Funny) by darkfeline on Friday February 06, @02:28AM (2 children)

    by darkfeline (1030) on Friday February 06, @02:28AM (#1432723) Homepage

    It's especially effective if the person holding the sign is wearing a high vis vest

    --
    Join the SDF Public Access UNIX System today!
    Starting Score:    1  point
    Moderation   +1  
       Funny=1, Total=1
    Extra 'Funny' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   3  
  • (Score: 0) by Anonymous Coward on Friday February 06, @11:18AM (1 child)

    by Anonymous Coward on Friday February 06, @11:18AM (#1432768)
    Yeah I was thinking this could be an actual feature to handle situations similar to this, and not really prompt injection.

    Self driving car AIs aren't LLMs right?
    • (Score: 0) by Anonymous Coward on Friday February 06, @04:00PM

      by Anonymous Coward on Friday February 06, @04:00PM (#1432796)

      > Self driving car AIs aren't LLMs right?

      Based on industry press (and no direct info), I believe that Tesla, Waymo and other big-tech company self driving systems are trained like LLMs, on masses & masses of data.

      However, Mercedes-Benz may have taken a more traditional development path, extending existing driver-aid software to add capability. This may give Mercedes a better chance to understand why their system makes certain choices?