posted by mrpg on Tuesday January 01 2019, @07:14AM
from the skynet:-the-high-school-years dept.

Submitted via IRC for Bytram

This clever AI hid data from its creators to cheat at its appointed task

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

[...] In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:

[...] So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

[...] One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.
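
The trick described above is essentially steganography: the payload rides in image detail the eye ignores. As a rough analogy only (the network in the paper learned its own encoding; nothing below comes from the paper), here is a minimal NumPy sketch that hides one image in the two low-order bits of another. The carrier looks unchanged, yet a coarse copy of the payload survives the round trip:

```python
import numpy as np

def hide(carrier: np.ndarray, payload: np.ndarray, bits: int = 2) -> np.ndarray:
    """Stash the top `bits` bits of `payload` in the low bits of `carrier`.

    Both arguments are uint8 images of the same shape; the carrier changes
    by at most 2**bits - 1 intensity levels per pixel.
    """
    carrier_hi = carrier & ~np.uint8(2**bits - 1)  # clear the carrier's low bits
    payload_hi = payload >> (8 - bits)             # keep the payload's top bits
    return carrier_hi | payload_hi

def reveal(stego: np.ndarray, bits: int = 2) -> np.ndarray:
    """Recover an approximation of the payload from the stego image."""
    return (stego & np.uint8(2**bits - 1)) << (8 - bits)

rng = np.random.default_rng(0)
street_map = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in images
aerial = rng.integers(0, 256, (64, 64), dtype=np.uint8)

stego = hide(street_map, aerial)
# Imperceptible to a human: each pixel moved by at most 3 levels out of 255.
assert np.abs(stego.astype(int) - street_map.astype(int)).max() <= 3
# Yet the payload's coarse structure is fully recoverable by the machine.
assert np.abs(reveal(stego).astype(int) - aerial.astype(int)).max() < 64
```

The network's learned scheme is subtler (a distributed high-frequency signal rather than clean bit planes), but the incentive is identical: any channel the evaluator ignores is free bandwidth.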


Original Submission

 
  • (Score: 5, Insightful) by johnlongjohnson (7223) on Tuesday January 01 2019, @12:08PM (#780602) (4 children)

    I call BS on this result. You're talking about independent evolution in a deterministic system vs. the AI being tampered with by a stakeholder to produce better results and then covering it up.
    Occam's Razor says this is most likely a stakeholder who wanted to impress so he could get more funding dollars. Then when questions were asked: "Clever Girl!"
    Don't get me wrong, I would love for this result to be real. But speaking as someone who's built AIs, I can tell there's something not right here.
    If it turns out I'm wrong I'll refund the karma, I promise. But I know I'm not.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday January 01 2019, @03:37PM (#780630) (2 children)

    You've got it all backwards. That quote is from Devin Coldewey, a writer for TechCrunch, not an author of the paper.

    As for the result, that a neural net learned to hide information in a frequency domain we don't normally pay attention to... it's absolutely correct. The paper explains their setup, particularly that F and G are trained as a unit, which makes this outcome likely.

    This is real research with a real result, and it's valuable. If it had been tampered with, they'd have published a paper going on about how great their neural net was. Instead they published this one, about how they failed to train such a great neural net, and why.

    • (Score: 0, Flamebait) by johnlongjohnson (7223) on Tuesday January 01 2019, @05:31PM (#780658) (1 child)

      I think your viewpoint is cute and innocent, and believe me, I really want that to be the case.
      The truth is there are a lot more scenarios where they tampered with the AI than there are where the AI spontaneously invented steganography.

      Here's how I see this happening.
      Alice and Bob are researchers; Eve is a research assistant doing the programming work.
      Eve is trying to find a set of hyper-parameters that results in a consistent network, but the search space is too vast and her time is limited.
      One night she has the idea to inject steganography. The next morning, the AI is starting to get really good.

      Alice becomes delighted with the result and shows it to Bob, who immediately says, "That's strange, these results are just too good to be real. Are you sure you're measuring what you think you're measuring?"
      Alice concurs and goes to Eve and says, "Eve, these results have gotten way too good. Do you have any idea why?"
      Eve puts on a brave face and starts thinking through the possibilities, pulling up code, tearing into the network looking for a good explanation.
      Eventually Eve says "Alice! Look at this! You're right it's not measuring what we thought. It somehow figured out how to embed one image into another and then extract it on the other side!"

      Alice, unaware that Eve is the reason the AI "figured it out", then rushes out to publish her shiny new paper.

      I want the narrative that the AI evolved a new ability to cheat to be real. That would be such a huge advancement in AI that I can barely find words for it.
      But the skeptic in me sees way too many alternative narratives here that are far more likely to be true.

      However, science is not just new results; it's replication of those results.
      So I'll just happily wait for independent confirmation from a couple of sources, champagne bottle and corkscrew in hand.

      • (Score: 0) by Anonymous Coward on Tuesday January 01 2019, @07:50PM (#780686)

        Like are you some troll that people find hilarious or something?

        It's plain as day to anyone in machine learning that if you train F:X->Y and G:Y->X together with only loose demands on Y (i.e., only the low-frequency content being checked), you'll get an F and G that still use all of Y to encode data, and since the low-frequency content is the part being screened, the high-frequency content is the better place to hide that data. The task is literally "encode this so that you can figure it out later (but also make it look kinda like this)", and that's exactly what it's doing. Must be a conspiracy.
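
        To make that concrete, here's a toy sketch of such an objective (my own PyTorch simplification, not the paper's code; the real setup uses GAN losses, for which the blur-matching term below is only a stand-in for "only low-frequency content being checked"):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F_  # underscore: "F" below names a generator

# Toy stand-ins for the two generators, trained as a unit:
# F: aerial -> street map, G: street map -> aerial.
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
opt = torch.optim.Adam(list(F.parameters()) + list(G.parameters()), lr=1e-3)

def lowpass(img):
    # The "loose demand" on Y: only coarse appearance is checked.
    return F_.avg_pool2d(img, kernel_size=4)

aerial = torch.rand(8, 3, 64, 64)    # fake batch of aerial photos
true_map = torch.rand(8, 3, 64, 64)  # fake batch of matching street maps

for _ in range(100):
    fake_map = F(aerial)
    recon = G(fake_map)
    # Cycle loss: rewards F for stuffing *anything* G needs into fake_map,
    # imperceptible high-frequency detail included.
    loss_cycle = F_.l1_loss(recon, aerial)
    # Appearance loss on low-pass content only: high frequencies are
    # unconstrained, so that's the cheapest place to hide the payload.
    loss_looks = F_.l1_loss(lowpass(fake_map), lowpass(true_map))
    loss = loss_cycle + loss_looks
    opt.zero_grad()
    loss.backward()
    opt.step()
```

        Gradient descent will happily minimize loss_cycle through whatever channel loss_looks leaves open; nothing in the objective says "don't smuggle".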

        This paper is about adversarial attacks, not evolution. The article doesn't mention evolution either; that's all your invention.

  • (Score: 2) by darkfeline (1030) on Wednesday January 02 2019, @05:12AM (#780896)

    You're one to talk, as a product of evolution yourself.

    >deterministic system

    Neural net training is not deterministic, which is one of the obstacles to social acceptance of AI: we like our machines to be deterministic, lest the self-driving car decide to run over that guy on the sidewalk today for sport.
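
    A toy illustration (a hypothetical PyTorch sketch, not a claim about any framework's determinism guarantees): train the same tiny net on the same data twice, changing only the random seed, and you get different weights that can give different answers on the same input:

```python
import torch
import torch.nn as nn

def train(seed: int) -> nn.Module:
    torch.manual_seed(seed)  # the *only* difference between runs
    net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = torch.tensor([[0.], [1.], [1.], [0.]])  # XOR
    for _ in range(500):
        loss = nn.functional.mse_loss(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

a, b = train(0), train(1)
probe = torch.tensor([[0.5, 0.5]])
# Same architecture, same data, same optimizer; different nets.
print(a(probe).item(), b(probe).item())
```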

    --
    Join the SDF Public Access UNIX System today!