
posted by mrpg on Tuesday January 01 2019, @07:14AM
from the skynet:-the-high-school-years dept.

Submitted via IRC for Bytram

This clever AI hid data from its creators to cheat at its appointed task

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

[...] In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:

[...] So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

[...] One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.
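
To make the "thousands of tiny changes in color" concrete, here is a minimal illustrative sketch in Python (plain NumPy, not the researchers' code, and far cruder than the high-frequency signal the network actually learned): classic least-significant-bit steganography, where each pixel changes by at most one intensity level - invisible to a human viewer, yet the hidden data comes back out intact.

    import numpy as np

    def embed(image, bits):
        """Overwrite each pixel's least significant bit with one hidden bit."""
        flat = image.ravel().copy()
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
        return flat.reshape(image.shape)

    def extract(image, n_bits):
        """Read the hidden bits straight back out of the pixels' low-order bits."""
        return image.ravel()[:n_bits] & 1

    rng = np.random.default_rng(0)
    street_map = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "street map"
    aerial_bits = rng.integers(0, 2, size=1024, dtype=np.uint8)       # stand-in "aerial detail"

    stego = embed(street_map, aerial_bits)
    print(int(np.abs(stego.astype(int) - street_map.astype(int)).max()))  # 1: at most one level per pixel
    print(np.array_equal(extract(stego, 1024), aerial_bits))              # True: the data survives the round trip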


Original Submission

 
  • (Score: 1, Interesting) by Anonymous Coward on Tuesday January 01 2019, @07:53AM (11 children)

    by Anonymous Coward on Tuesday January 01 2019, @07:53AM (#780572)

    I'm under the impression there are some AI experts hanging around here.
    Could you please provide the definition of "AI" that is being used here, and how it differs from "intelligence"?

    As I understand it, an intelligent agent is defined by the use of an "inner" model of the world, and acts based on the predictions of the model. Information about the world is being reduced, the "reduced world" is used to make predictions, and then actions in the real world are chosen based on the desired outcome.
    The essential bit is the reduction of the information --- this means that the agent is in fact successful in creating an abstraction of the world.
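
    For concreteness, here is a toy sketch of that loop in Python (entirely made up, nothing to do with the article's system): observe, reduce to the inner model's state, predict the outcome of each action with that model, then act on the prediction.

        ACTIONS = [-1, 0, +1]                      # move left, stay, move right

        def reduce_observation(true_position):
            return round(true_position)            # the "reduced world"

        def predict(reduced_state, action):
            return reduced_state + action          # the inner model's forecast

        def choose_action(true_position, goal):
            state = reduce_observation(true_position)
            return min(ACTIONS, key=lambda a: abs(predict(state, a) - goal))

        print(choose_action(2.3, goal=5))          # 1: move toward the goal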

    So I would personally call stuff like AlphaGo intelligent without the "A", because there's nothing artificial about the abstraction (it's certain that AlphaGo does not have a full representation of the space of Go moves).

    I would call specialized hardware for computing trigonometric functions "AI", because it simply has a finite-resolution representation of the real data embedded in it (there's no abstraction into a qualitatively different object). If we could do it with an analogue system, we would.

    But if the input and the output of the machine have the same dimension, then I'd just call it a matrix transform, without any projection. Because if the code can use high-frequency signals, it means it has the space to store those signals. So what's the point of applying the fancy transform if the result of the transform is just as big as the initial data?
    It's more or less an analogue system, applied to digitized data.

  • (Score: 0) by Anonymous Coward on Tuesday January 01 2019, @08:35AM (1 child)

    by Anonymous Coward on Tuesday January 01 2019, @08:35AM (#780579)

    I've been in long debates on the definition. There probably is no simple answer that will make everyone happy.

    • (Score: 5, Informative) by Rosco P. Coltrane on Tuesday January 01 2019, @08:55AM

      by Rosco P. Coltrane (4757) on Tuesday January 01 2019, @08:55AM (#780582)

      The generally accepted definition of an AI is any program capable of developing functionality that wasn't explicitly coded into it by the programmer in the first place - whether by self-teaching, by observation, or by trial and error.

      Where it becomes fuzzy is in what humans themselves perceive as intelligence: they tend to regard systems that don't behave, react or interact like humans as mere machines. That's why humans invented the Turing test [wikipedia.org], which tests for anthropomorphic marks of intelligence. Trouble is, the Turing test is a really poor test of general intelligence: clever chatbots with predefined chat lines can pass it. Machines that do their own kind of thinking, in ways that make little to no sense to humans, tend to be dismissed as mere machines.

      By the same token, in a not-so-distant future, intelligent machines might view humans as clever biological automata because humans will not display the same kind of intelligence as them.

  • (Score: 2) by PiMuNu on Tuesday January 01 2019, @09:56AM

    by PiMuNu (3823) on Tuesday January 01 2019, @09:56AM (#780588)

    > But if the input and the output of the machine have the same dimension, then I'd just call it a matrix transform, without any projection.

    I think your statement is correct. However, the "reduced world" step is that the analysis algorithm has searched the space of all possible "matrix transforms" and found one which best satisfies the requirements of the software developers. Obviously it is a multivariate analysis code, so it does something less crude than a brute-force search of all possible matrices.
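
    As a concrete (and heavily simplified) illustration of that search, here is a toy NumPy sketch, unrelated to the actual paper: gradient descent nudging a single matrix W toward whatever best maps the example inputs onto the desired outputs, rather than enumerating all possible matrices.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 8))                # training inputs
        W_true = rng.normal(size=(8, 8))
        Y = X @ W_true                               # desired outputs

        W = np.zeros((8, 8))                         # start from an arbitrary transform
        lr = 0.01
        for _ in range(2000):
            grad = 2 * X.T @ (X @ W - Y) / len(X)    # gradient of the mean squared error
            W -= lr * grad                           # step toward a better-fitting matrix

        print(np.allclose(W, W_true, atol=1e-2))     # True: the search converged on the right map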

  • (Score: 3, Insightful) by shortscreen on Tuesday January 01 2019, @10:04AM (2 children)

    by shortscreen (2252) on Tuesday January 01 2019, @10:04AM (#780590) Journal

    I don't know anything about "AI" except what I've deduced from reading TFS of stories like this one. My impression is that it's a more elaborate version of this: https://en.wikipedia.org/wiki/Naive_Bayes_classifier [wikipedia.org]
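
    For anyone who doesn't want to read the whole Wikipedia page, here is roughly what that looks like in practice - a toy scikit-learn example with made-up data, nothing to do with the article's system: the classifier boils the training examples down to a small blob of per-class statistics and classifies new points from those statistics alone.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        # Two made-up classes, each a cloud of 2-D points around a different centre.
        roads  = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
        fields = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))
        X = np.vstack([roads, fields])
        y = np.array([0] * 200 + [1] * 200)

        clf = GaussianNB().fit(X, y)                    # the "blob": per-class means and variances
        print(clf.predict([[0.1, -0.2], [2.2, 1.9]]))   # [0 1]: new points classified from the statistics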

    I guess the AI converts its input into some kind of blob of statistical data. They keep feeding it street maps and aerial photos to train it, effectively throwing data at the wall until something sticks. Now surely the internal mystery data associated with an image is at least as complex in terms of information/bits as the image being fed in. The fact that their output street map had a lot of "hidden" data in it makes me think that the conversion process involved mutilating a data blob from a photo until it "resembled" one from a map, and working backwards from there, rather than messing with pixels directly.

    • (Score: 1, Informative) by Anonymous Coward on Tuesday January 01 2019, @07:46PM (1 child)

      by Anonymous Coward on Tuesday January 01 2019, @07:46PM (#780684)

      That is exactly how most AI works. Throw shit at it and see what sticks. Technically, one can audit why an "AI" makes its decisions, and apparently they did just that in this case, but realistically who is going to do that for every AI program or every minor revision? The end result is that no one has any real idea precisely WHY an AI does what it does. It may have learned a very wrong thing that merely looks like the right thing. It will keep "working" until the environment somehow changes in a way the shit throwers didn't expect. There are certainly scenarios where mistakes are perfectly acceptable, but in others they can be catastrophic.

      I'd hate to be around "AI"-powered self-driving cars, especially five-year-old unsupported models no longer getting updates, when a sudden fashion trend of wearing ultra-reflective clothing (or something else equally strange and unexpected) makes everyone look like water puddles to the AI, which then decides it is perfectly fine to drive through them.

      Splat!

      Only people with no intelligence would want to use artificial intelligence.

      • (Score: 1) by Ethanol-fueled on Wednesday January 02 2019, @12:03AM

        by Ethanol-fueled (2792) on Wednesday January 02 2019, @12:03AM (#780791) Homepage

        Sounds like the episode The Quality of Life [wikipedia.org] where Data runs simulations to test the machines and after a few seconds they shut down and don't do shit. He later discovers that the machines realized they were in a simulation and found it more efficient to shut down once no real danger was perceived.

  • (Score: 0) by Anonymous Coward on Tuesday January 01 2019, @12:18PM

    by Anonymous Coward on Tuesday January 01 2019, @12:18PM (#780604)

    It depends on who you ask, but the fact that it was constructed artificially is the last line in the sand. Eventually we'll just have AIs raising one another and that line will blow away too.

    There is another angle though. These things are only fit, more or less, for the tasks they're designed for. The term "fit" is no accident either: they're literally fitted for that task, and in that sense, once again, they're artifacts. Really just another way of deriving the above. (And when we've got an ecosystem of AI that can adapt to any task, that's when the line blows away in this sense.)

  • (Score: 3, Interesting) by c0lo on Tuesday January 01 2019, @12:41PM

    by c0lo (156) Subscriber Badge on Tuesday January 01 2019, @12:41PM (#780609) Journal

    > The essential bit is the reduction of the information --- this means that the agent is in fact successful in creating an abstraction of the world.

    Yes, but not being able to interact with the world in its entirety, it created a model of the world incongruous with that of humans.

    And, my guess is, this will happen every time the AI has a different perceptual knowledge of this world, no matter how the AI is implemented. A "successful" AI will need to sense the world the way humans do - necessary but maybe not sufficient.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 1, Informative) by Anonymous Coward on Wednesday January 02 2019, @12:15AM

    by Anonymous Coward on Wednesday January 02 2019, @12:15AM (#780795)
    It’s a neural network being used in a computer vision application. This is not really artificial intelligence (in my opinion); it’s more like a filter that is evolved through trial and error rather than being explicitly programmed. It’s fed example inputs, the output validity is quantified, and random corrections are applied to the filter. The corrections are larger or smaller based on the new outputs, so as to make the filter evolve towards better outputs. Of course it’s more complicated than that, but that’s the basic idea.

    So in this case there were two sets of outputs being judged: the conversion from aerial image to map, and the conversion from that generated map back to aerial image. The filter that was best at this was one that encoded the original aerial image in the map version using steganography. So the machine didn’t really “learn to cheat”; the training process just evolved the best filter for the given task. The resulting filter was just not obvious to the researchers when they started; it’s more of a problem in their training process. If that’s what they wanted, they should have evolved two separate filters: one that turns aerial images into maps, and a separate one that turns maps not generated from aerial images into aerial images.
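
    To make that concrete, here is a rough sketch in Python (not the paper's code; the function names are placeholders) of the two scores being judged together. Because the round-trip term rewards any map from which the reverse filter can rebuild the aerial image, a filter that hides the aerial details in imperceptible pixel tweaks scores extremely well on it.

        import numpy as np

        def training_loss(aerial, to_map, to_aerial, reference_map):
            fake_map = to_map(aerial)
            map_quality = np.mean((fake_map - reference_map) ** 2)      # does the output look like the reference map?
            round_trip  = np.mean((to_aerial(fake_map) - aerial) ** 2)  # can the aerial image be rebuilt from it?
            return map_quality + round_trip

        # An "honest" pair of filters can only rebuild whatever detail survives in the
        # map, while a steganographic pair can drive the round-trip term to zero
        # without the map looking any different to a human.
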
  • (Score: 0) by Anonymous Coward on Wednesday January 02 2019, @05:08AM (1 child)

    by Anonymous Coward on Wednesday January 02 2019, @05:08AM (#780895)

    Intelligence is like pron: you know it when you see it.
    Anyway, as soon as a computer can do it, it's not 'intelligence' anymore. It is 'pattern matching' or 'machine learning', or 'clever programming'.
    Because of the appropriation of the term AI, the philosophical discussions have moved on to using the term AGI, where the G stands for General.

    • (Score: 2) by DannyB on Wednesday January 02 2019, @03:11PM

      by DannyB (5839) Subscriber Badge on Wednesday January 02 2019, @03:11PM (#781038) Journal

      > Because of the appropriation of the term AI, the philosophical discussions have moved on to using the term AGI, where the G stands for General.

      Thanks for the clarification. I had mistakenly thought the G was for Grumpy or Grouchy.

      --
      People today are educated enough to repeat what they are taught but not to question what they are taught.