
posted by mrpg on Tuesday January 01 2019, @07:14AM
from the skynet:-the-high-school-years dept.

Submitted via IRC for Bytram

This clever AI hid data from its creators to cheat at its appointed task

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

[...] In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process.
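For context on how that round trip works: the system is trained with a cycle-consistency style objective, where the aerial photo reconstructed from the generated street map is rewarded for matching the original input. Below is a minimal Python/PyTorch sketch of that idea, using hypothetical single-layer stand-ins for the real generator networks; it is an illustration of the objective, not the researchers' code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two translation networks; the real system
# uses full image-to-image generators, not single conv layers.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # aerial photo -> street map
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # street map -> aerial photo

aerial = torch.rand(1, 3, 256, 256)             # toy batch standing in for an aerial photo
street_map = G(aerial)                          # forward translation
reconstructed = F(street_map)                   # reverse translation

# Training rewards the round trip matching the input; nothing here checks
# whether the street map itself is an honest translation.
cycle_loss = nn.functional.l1_loss(reconstructed, aerial)
print(cycle_loss.item())
```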

[...] So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.
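One simple way to surface that kind of nearly imperceptible signal (a sketch of a generic technique, not necessarily what the researchers did) is to subtract a blurred copy of the street map and amplify the high-frequency residual; the "thousands of tiny changes in color" then show up as visible structure. A hedged Python sketch, with a random array standing in for a real generated map:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def amplify_high_frequency(img, sigma=2.0, gain=20.0):
    """Exaggerate the near-imperceptible high-frequency residual of an image."""
    low_pass = gaussian_filter(img, sigma=sigma)   # smoothed copy of the map
    residual = img - low_pass                      # the tiny color changes live here
    return np.clip(0.5 + gain * residual, 0.0, 1.0)

# Toy grayscale array standing in for a generated street map.
street_map = np.random.rand(256, 256).astype(np.float32)
revealed = amplify_high_frequency(street_map)
```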

[...] One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.
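The article does not say what that stricter evaluation looked like. One plausible check, offered here only as an assumption, is to lightly perturb the intermediate street map before asking for the reconstruction: if quality collapses under noise a human would never notice, the model was relying on a hidden high-frequency channel rather than the visible map content. A minimal Python sketch with hypothetical model wrappers:

```python
import numpy as np

def perturbed_round_trip(aerial, to_map, to_aerial, noise_std=0.02, seed=0):
    """Compare round-trip error with and without noise on the street map.

    to_map and to_aerial are hypothetical callables wrapping the two models.
    A large gap between the clean and noisy errors suggests the model leans on
    an imperceptible encoded channel rather than the visible map.
    """
    rng = np.random.default_rng(seed)
    street_map = to_map(aerial)
    noisy_map = street_map + rng.normal(0.0, noise_std, street_map.shape)
    clean_err = np.abs(to_aerial(street_map) - aerial).mean()
    noisy_err = np.abs(to_aerial(noisy_map) - aerial).mean()
    return clean_err, noisy_err

# Toy usage with identity functions standing in for the two models.
aerial = np.random.rand(64, 64)
print(perturbed_round_trip(aerial, lambda x: x, lambda x: x))
```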


Original Submission

 
  • (Score: 1, Informative) by Anonymous Coward on Tuesday January 01 2019, @07:46PM (#780684)

    That is exactly how most AI works. Throw shit at it and see what sticks. Pedantically, one can audit why an "AI" makes its decisions, and apparently they did just that in this case, but realistically who is going to do that for every AI program or every minor revision? The end result is that no one has any real idea precisely WHY an AI does what it does. It may have learned a very wrong thing while looking like it is doing the right thing. It will keep "working" until the environment changes in some way the shit throwers didn't expect. There are certainly scenarios where mistakes are perfectly acceptable, but in others they can be catastrophic.

    I'd hate to be around "AI"-powered self-driving cars, especially 5-year-old unsupported models that are no longer getting updates, when a sudden fashion trend of wearing ultra-reflective clothing (or something else equally strange and unexpected) makes everyone look like water puddles to the AI, which then decides it is perfectly fine to drive through them.

    Splat!

    Only people with no intelligence would want to use artificial intelligence.

  • (Score: 1) by Ethanol-fueled (2792) on Wednesday January 02 2019, @12:03AM (#780791)

    Sounds like the episode "The Quality of Life" [wikipedia.org], where Data runs simulations to test the machines and after a few seconds they shut down and don't do shit. He later discovers that the machines realized they were in a simulation and found it more efficient to shut down once no real danger was perceived.