
posted by Fnord666 on Sunday August 06 2017, @01:48PM
from the fool-me-once dept.

Submitted via IRC for Bytram

It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren't in their training set.
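
To make that training step concrete, here is a minimal, hedged sketch in PyTorch of what fitting such a classifier can look like. It is not the model from the paper; the directory layout, image size, and network dimensions are all hypothetical.

    # Minimal sketch: train a tiny CNN to label road-sign images.
    # Assumes a hypothetical folder layout like signs/train/<class_name>/*.png
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.Resize((32, 32)),   # classifiers often work on small crops
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("signs/train", transform=transform)
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    class TinySignNet(nn.Module):
        def __init__(self, n_classes):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
            self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
            self.fc = nn.Linear(32 * 8 * 8, n_classes)

        def forward(self, x):
            x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
            x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
            return self.fc(x.flatten(1))                 # one score per class

    model = TinySignNet(n_classes=len(train_set.classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(10):
        for images, labels in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(images), labels)  # penalize wrong labels
            loss.backward()                                # gradients w.r.t. weights
            opt.step()

The "common features" the article refers to live in the learned convolution weights, which is exactly the part a human cannot easily read off.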

This works pretty well, but the common features that machine learning algorithms come up with generally are not "red octagons with the letters S-T-O-P on them." Rather, they're looking [at] features that all stop signs share, but would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.

Source: http://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

OpenAI has a captivating and somewhat frightening background article: Attacking Machine Learning with Adversarial Examples.
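
For a concrete sense of how digital adversarial images are generated, here is a hedged sketch of the fast gradient sign method (FGSM), one of the simplest attacks in this literature. It assumes a differentiable classifier such as the hypothetical model sketched above, and it is not the physical sticker attack from the paper.

    # FGSM sketch: nudge every pixel slightly in the direction that increases
    # the classifier's loss. `model` is assumed to be a trained torch.nn.Module;
    # `image` is a 3xHxW tensor in [0, 1]; `label` is a 0-dim tensor holding
    # the true class index.
    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, eps=0.03):
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
        loss.backward()
        # eps bounds the per-pixel change, so the tweak stays hard for a human to see.
        adv = image + eps * image.grad.sign()
        return adv.clamp(0, 1).detach()

The physical attack described in the paper has to go further, since the stickers must survive printing, viewing distance, angle, and lighting, but it rests on the same basic idea: search for a perturbation that pushes the classifier's output toward the wrong label.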


Original Submission

Related Stories

Hackers Can Trick a Tesla into Accelerating by 50 Miles Per Hour 41 comments

Hackers can trick a Tesla into accelerating by 50 miles per hour:

This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.

Mobileye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla's automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee's Advanced Threat Research team.

The researchers stuck a tiny and nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35, and in testing, both the 2016 Tesla Model X and that year's Model S sped up by 50 miles per hour.

This is the latest in an increasing mountain of research showing how machine-learning systems can be attacked and fooled in life-threatening situations.

[...] Tesla has since moved to proprietary cameras on newer models, and Mobileye EyeQ3 has released several new versions of its cameras that in preliminary testing were not susceptible to this exact attack.

There are still a sizable number of Tesla cars operating with the vulnerable hardware, Povolny said. He pointed out that Teslas with the first version of hardware cannot be upgraded to newer hardware.

"What we're trying to do is we're really trying to raise awareness for both consumers and vendors of the types of flaws that are possible," Povolny said "We are not trying to spread fear and say that if you drive this car, it will accelerate into through a barrier, or to sensationalize it."

So it seems the point is not so much that a particular adversarial attack was successful (and fixed), but that it was one instance of a potentially huge set. Obligatory xkcd.


Original Submission

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @02:11PM (5 children)

    by Anonymous Coward on Sunday August 06 2017, @02:11PM (#549516)

    Legally require an RFID tag on every stop sign. Make it a felony to remove the tag. Problem solved by leet lawyering, bros. You're welcome.

    • (Score: 1) by nitehawk214 on Sunday August 06 2017, @02:36PM (1 child)

      by nitehawk214 (1304) on Sunday August 06 2017, @02:36PM (#549527)

      RFID is really short range. You need to see a stop sign before you get within 10ft of it.

      The longer range ones are powered.

      --
      "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
      • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:07PM

        by Anonymous Coward on Sunday August 06 2017, @03:07PM (#549537)

        Powered? Really? If only we had some kind of electrical infrastructure. Power lines, street lights, traffic lights, this-is-your-speed digital display signs! Futuristic science fiction nonsense, all of it. Let's use AI instead, that's so much more realistic.

    • (Score: 2) by SomeGuy on Sunday August 06 2017, @07:04PM (1 child)

      by SomeGuy (5632) on Sunday August 06 2017, @07:04PM (#549601)

      A particular Department of Transportation once had the brilliant idea of using RFID tags to inventory their signs. They envisioned literally just driving around and automatically getting inventory information from RFID tags. The problem is you are putting the tag behind a *huge sheet of metal*. Since the techs had to actually get out and look at the signs anyway they eventually just went with barcodes.

      And laws never stopped anyone from stealing a sign. Oh, look. Billybob Redneck just added another I-20 sign to his collection.

      • (Score: 0) by Anonymous Coward on Monday August 07 2017, @04:15AM

        by Anonymous Coward on Monday August 07 2017, @04:15AM (#549767)

        Out in the country around here, rednecks seem to use the signs for target practice. Even if the bullet missed, I wonder if any consumer-grade electronics could live through the shock-loading & vibration from a gunshot?

    • (Score: 2) by LoRdTAW on Monday August 07 2017, @11:47AM

      by LoRdTAW (3755) on Monday August 07 2017, @11:47AM (#549869) Journal

      You've never been a stupid teenager, have you?

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:18PM

    by Anonymous Coward on Sunday August 06 2017, @03:18PM (#549538)

    As well as vision (video) algorithms, there are also problems with ranging sensors. From a recent story here, https://soylentnews.org/article.pl?sid=17/07/28/0131243 [soylentnews.org]

    > Tech Review https://www.technologyreview.com/s/608348/low-quality-lidar-will-keep-self-driving-cars-in-the-slow-lane/ [technologyreview.com] looks at the next generation of affordable LIDAR units and finds the tech lacking in resolution and/or range relative to the needs of self driving cars. The $80,000 "coffee can" LIDAR that has been used for R&D has 64 beams and 120 meter range. The low cost units have as few as 4 beams at wider spacing. This leads the author to suggest that the first generation of cars with LIDAR may only use the self-driving features at lower speeds.

    Tech Review article has nice pics showing different quality LIDAR.

  • (Score: 1, Informative) by Anonymous Coward on Sunday August 06 2017, @03:24PM (8 children)

    by Anonymous Coward on Sunday August 06 2017, @03:24PM (#549539)

    Here is an example of how a common neural network sees a car, with each horizontal block representing a step of processing in the method and each vertical block a neuron that is tuned to check for features of different objects such as horses, cars, and trucks. Essentially, whichever row has the strongest feature match to the object determines what the object will be classified as.

    https://3.bp.blogspot.com/-M5JZccnj1Ec/VY3zTmGV8YI/AAAAAAAAOP4/q0c-QCtmi2o/s1600/convnet.jpeg [blogspot.com]
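
    To make that "strongest row wins" description concrete, here is a minimal, hedged sketch of the final classification step. The model, label set, and image file are hypothetical (the network is untrained, so the output is meaningless); the point is only the mechanics of one score per class followed by an argmax.

        # One score per candidate class; the largest score wins.
        # Model, class list, and "car.jpg" are illustrative placeholders.
        import torch
        from torchvision import models, transforms
        from PIL import Image

        classes = ["horse", "car", "truck", "stop sign"]      # toy label set
        model = models.resnet18(num_classes=len(classes))     # untrained, for illustration
        model.eval()

        preprocess = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

        img = preprocess(Image.open("car.jpg").convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            scores = model(img)[0]                 # one number per class ("row")
        probs = scores.softmax(dim=0)              # scores -> probabilities
        print(classes[probs.argmax().item()], probs.max().item())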

    • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:34PM (7 children)

      by Anonymous Coward on Sunday August 06 2017, @03:34PM (#549542)

      I guess you can't edit as anonymous coward; many sites have cookies that let you do so.

      Well, to continue: since many vision sensors have difficulty with depth perception, lighting, odd patches, etc., placing artifacts in not-so-hard-to-find spots lets you distort things so that, after processing, the image ranks strongest in a row representing another object. In the look inside the network's processing that I linked above, the artifact would end up creating or erasing pixels in the pipeline, and those changes carry through. These pixels can end up being a significant portion of the image, especially if the network lowers the resolution for memory and speed purposes.
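
      A small sketch of that last point, with made-up numbers: after an aggressive downsample, a physically small sticker completely determines a handful of the values that the rest of the network gets to see.

          # Illustration only: a 32x32 "sticker" on a 512x512 frame is ~0.4% of
          # the pixels, but after a 16x average-pool downsample it fully owns
          # 4 of the 1024 values passed on to later processing stages.
          import numpy as np

          frame = np.zeros((512, 512))
          frame[192:224, 288:320] = 1.0   # sticker, aligned to blocks for a clean example

          def downsample(img, factor):
              h, w = img.shape
              return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

          small = downsample(frame, 16)          # 512x512 -> 32x32
          print((frame > 0).mean())              # ~0.0039: tiny fraction of camera pixels
          print(int((small == 1.0).sum()))       # 4 low-res values are 100% sticker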

      • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:43PM

        by Anonymous Coward on Sunday August 06 2017, @03:43PM (#549545)

        Your post shows how it works. The original post does not show how or why it is failing. It comes down to speed: the higher the resolution you use, the bigger the net you need. Basically, the point of the article is that aliasing is being leaned on heavily to guess what an object is, to a particular probability. If your net has not learned about those noise perturbations, then it will fail. But adding in that data could cause your other confidence scores to fall, even in cases where the net had been right.

      • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:47PM (2 children)

        by Anonymous Coward on Sunday August 06 2017, @03:47PM (#549546)

        So basically your neural network is stupider than a horse. Have you tried putting a horse brain in a car? Seriously just make a horse-car cyborg already. It would be better than any of your self driving fake AI.

        • (Score: 1, Interesting) by Anonymous Coward on Sunday August 06 2017, @03:52PM

          by Anonymous Coward on Sunday August 06 2017, @03:52PM (#549548)

          A simulation with actual rat neurons piloting a fighter jet

          https://www.newscientist.com/article/dn6573-brain-cells-in-a-dish-fly-fighter-plane/ [newscientist.com]

        • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @06:45PM

          by Anonymous Coward on Sunday August 06 2017, @06:45PM (#549594)

          That is an intriguing idea, and you might even be able to sell it as virtual paradise for the horse brain!! When not active the brain is networked with others and they run around an unimaginable paradise.

          Lol, the Matrix but for self driving cars. Oblig: "Are we the baddies?"

      • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @04:05PM

        by Anonymous Coward on Sunday August 06 2017, @04:05PM (#549553)

        Thanks for the link, very interesting way to show how NN works.

        > I guess you can't edit as anonymous coward, ...

        From one AC to another -- given the nature of many discussions here, I personally think that "no editing of posts" is a good thing; too many posters would be tempted to rewrite history. I Preview all but the shortest of my posts.

        There are cases like this where it would have made sense for you to continue your explanation, but sorting those from all the rest would be a job for a neural network (grin).

      • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @05:33PM

        by Anonymous Coward on Sunday August 06 2017, @05:33PM (#549579)

        I guess you can't edit as anonymous coward, many sites have cookies that let you do so.

        Nobody can edit their posts. We like it that way.

      • (Score: 2) by maxwell demon on Sunday August 06 2017, @05:45PM

        by maxwell demon (1608) on Sunday August 06 2017, @05:45PM (#549582) Journal

        I guess you can't edit as anonymous coward

        You can't edit after posting, even if logged in, and even as a subscriber.

        --
        The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 3, Interesting) by Spamalope on Sunday August 06 2017, @04:25PM (1 child)

    by Spamalope (5233) on Sunday August 06 2017, @04:25PM (#549558) Homepage

    So... speed trap towns will specially design 30 mph speed limit signs that read 55 mph to neural nets. You read it here 1st!

    • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @04:35PM

      by Anonymous Coward on Sunday August 06 2017, @04:35PM (#549560)

      No difference, the speed trap towns already do that for people (in various subtle ways).

  • (Score: 5, Insightful) by FakeBeldin on Sunday August 06 2017, @05:23PM (5 children)

    by FakeBeldin (3360) on Sunday August 06 2017, @05:23PM (#549574) Journal

    Interesting paper, thanks for submitting!

    Note that they're not talking about algorithms, but models.
    The key distinction is that they're discussing a trained classifier. If you were to train a classifier on a set of images that included their "adversarial" images, the researchers would never manage to fool it 100% of the time. I.e., this paper puts some strict limits on the ability of some classifiers to classify novel items.

    In other words: your takeaway from the paper could very well be "classifiers are not necessarily good at classifying objects that differ sufficiently from the training set". They showed such classification mistakes in cases where the classification is obvious for humans.

    It's a good thing to highlight and an interesting paper. When reading it, do keep in mind the distinction between "we are able to fool our own classifier 100% of the time by using images we did not train it for" and "we are able to fool any reasonable classifier most of the time using images generated without knowing the classifier's training set".
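
    One way to make that distinction measurable is to test transferability: craft adversarial images against a model you control (white-box) and then check how often they also fool a second, independently trained model. A hedged sketch, reusing the hypothetical fgsm() helper from the FGSM sketch near the top of the page and assuming two trained models model_a and model_b plus a test loader:

        # Fooling rate of white-box adversarial examples against a chosen victim.
        # (A stricter evaluation would also require the clean image to be
        # classified correctly before counting a "fool".)
        import torch

        def fooling_rate(attacked, victim, loader, eps=0.03):
            fooled = total = 0
            for images, labels in loader:
                adv = fgsm(attacked, images[0], labels[0], eps)   # attack 1 example per batch
                with torch.no_grad():
                    pred = victim(adv.unsqueeze(0)).argmax(dim=1)
                fooled += int(pred.item() != labels[0].item())
                total += 1
            return fooled / total

        # Typically high for (model_a, model_a); often lower for (model_a, model_b):
        # print(fooling_rate(model_a, model_a, test_loader))
        # print(fooling_rate(model_a, model_b, test_loader))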

    • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @05:44PM (3 children)

      by Anonymous Coward on Sunday August 06 2017, @05:44PM (#549581)

      Once the cars become more common it should be a lot easier to figure out what will fool a given type of classifier.

      This will remain so as long as "training" methods do not include adversarial stuff and do not actually train the classifier on the minimum set of features that a stop sign must have (a rough sketch of the adversarial-training idea follows after this comment). Seems to me the classifiers are triggering on stuff that's common to a given set of stop signs but not necessarily related to what actually is a stop sign.

      tldr; many AI people are merely at the "Alchemy" stage and don't actually know what they are doing. They are "throwing stuff into a pot" till it seems to work well enough.

      The human laws need to be changed before we can have robot vehicles.
      1) Drivers have more responsibilities than merely driving. In some places the drivers have to ensure that minors are wearing seatbelts and similar stuff.
      2) If an AI car keeps failing badly on some edge cases and it's not provably due to an adversarial attack, who gets their driving license revoked? All cars of that revision? The manufacturer?
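
      For what "include adversarial stuff" in training might look like, here is a hedged sketch of plain adversarial training: mix examples attacked against the current model into the loss. It reuses the hypothetical model, loader, and fgsm() from the sketches above; real pipelines are considerably more involved, and this per-sample loop is slow.

          # Adversarial training sketch: learn from clean and attacked copies of each batch.
          import torch
          import torch.nn.functional as F

          opt = torch.optim.Adam(model.parameters(), lr=1e-3)

          for epoch in range(10):
              for images, labels in loader:
                  # Craft an adversarial copy of each image against the current model.
                  adv = torch.stack([fgsm(model, img, lbl)
                                     for img, lbl in zip(images, labels)])
                  opt.zero_grad()   # discard gradients accumulated while crafting adv
                  loss = 0.5 * F.cross_entropy(model(images), labels) \
                       + 0.5 * F.cross_entropy(model(adv), labels)
                  loss.backward()
                  opt.step()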

      • (Score: 2) by FakeBeldin on Monday August 07 2017, @07:42AM (2 children)

        by FakeBeldin (3360) on Monday August 07 2017, @07:42AM (#549817) Journal

        Interesting comment!

        tldr; many AI people are merely at the "Alchemy" stage and don't actually know what they are doing. They are "throwing stuff into a pot" till it seems to work well enough.

        This is part of the strength of Tesla. Their "pot" is becoming larger and larger with every mile anyone drives in any Tesla - so if any training set has actual stop signs in there, theirs ought to.

        I think that might be one way out of this conundrum: gather so much real-life driving data that any algorithm that trains itself on matching that data performs as well as humans would.
        But that doesn't work in a lab setting - you need hundreds of thousands of miles of actual driving data. Google has a few million, Tesla has more. That might be sufficient.

        • (Score: 0) by Anonymous Coward on Monday August 07 2017, @09:23AM (1 child)

          by Anonymous Coward on Monday August 07 2017, @09:23AM (#549836)

          Yep, basically in theory it should eventually work better than a human. It may fail in unexpected edge or "stupid by human standards" cases but those failures will become rarer and rarer (e.g. retrieve the crash data and retrain so it won't happen again).

          The thing is, if we stick to this approach to AI it would still be a dead end, because there really is no actual understanding by the AI. It's just a fancier form of brute-forcing.

          Even a crow will understand the world better with its walnut-sized brain consuming fractions of a watt.

          • (Score: 2) by FakeBeldin on Monday August 07 2017, @01:16PM

            by FakeBeldin (3360) on Monday August 07 2017, @01:16PM (#549912) Journal

            Well, once we've pushed machine learning algorithms to the brink, then we will start to investigate why this brink exists.
            But for now, more (training data / hidden layers / processing time / ...) still provides easily-attainable improvements.

            Basically: once we're getting closer to optimal performance of the technique, then we'll start looking into minimizing the technique.

            Can't wait :)

    • (Score: 2) by captain normal on Sunday August 06 2017, @11:07PM

      by captain normal (2205) on Sunday August 06 2017, @11:07PM (#549675)

      It is interesting. I just wonder how these algorithms would handle stuff like a stop sign obscured by shrubbery, trees etc. This is something I have been seeing a lot of lately.

      --
      When life isn't going right, go left.
  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @05:33PM (1 child)

    by Anonymous Coward on Sunday August 06 2017, @05:33PM (#549580)

    It seems to be a recurring theme in society today that we look for reasons not to do something. Yet take nearly anything today and it can be destroyed, thwarted, or worse in absurdly trivial ways.

      - You can turn cars into death traps with little more than a well-placed snip with a pair of cutters.
      - Cut tires can cause substantial damage to a car and render it inoperable (without even more damage) in just a few seconds.
      - A well-placed squirt of super glue in a lock can not only lock somebody out of their house but also require extensive work to remedy, up to and including the complete removal of the lock.

    And you can get far more catastrophic there as well. Imagine if today somebody, for the first time, suggested building electric utility poles: we'll string countless wooden poles all over the country and then connect them with extremely high-voltage lines. Those lines can kill anything that completes a circuit, and we understand that those poles/lines will come down in extreme weather (or other events), possibly starting fires, electrocuting things, and so on. If today's social zeitgeist had existed a couple hundred years ago, we likely still would not have a national electric grid.

    This attack is simply not relevant. It would be incredibly visible, incredibly simple to counteract if it did become an issue, and it also requires substantial technological sophistication on the part of the attackers. And on top of this, the ultimate effect would likely be mostly negligible. It could contribute to an accident if, for instance, a stop sign were removed, but that's a 'could' on top of an 'if'. Heck, even then - that's no different than some kid stealing the sign because reasons.

    • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @07:01PM

      by Anonymous Coward on Sunday August 06 2017, @07:01PM (#549599)

      Yes, the theme of "safety first" is a pretty popular one throughout history. You always have groups that want to push the envelope and try something new. You also have groups that worry about major changes and don't want to mess up society which has been working relatively well so far.

      No one is saying to stop self-driving tech, and with the huge inertia behind self-driving tech these studies are actually very important to keep us aware of the potential pitfalls. Personally I'm more worried about the massive shift in government opinion. The opinion on self-driving cars went from "maybe in 10-20 years" and "massive regulation / restriction" to "test it out here! Let's launch this shit YEEHAW!"

      I don't understand the rapid shift, but as with everything else where opinions radically change over a short time there is probably a boat load of money to be made. My guess is major car manufacturers will push for regulation to capture the market, then they'll corner the market on self-driving car services and it can become like the Telecom boom. Overcharge massively for transportation, implement surveillance tech, and with cell phones you now have a complete surveillance society.

      My opinion: self-driving cars should be self contained. They can utilize GPS and cell towers for location, but they should not be SENDING anything back. We've lost track of controlling our own lives and the objects we interact with. Soon we will all just be renting our lives from the ownership class, life will become even more based on wage slavery. Phew, that tangent snuck up on me!

  • (Score: 2) by patrick on Sunday August 06 2017, @11:47PM (3 children)

    by patrick (3990) on Sunday August 06 2017, @11:47PM (#549689)

    ED-209: [menacingly] Please put down your weapon. You have twenty seconds to comply.

    Dick Jones: I think you'd better do what he says, Mr. Kinney.

    [Mr. Kinney drops the pistol on the floor. ED-209 advances, growling]

    ED-209: You now have fifteen seconds to comply.

    [Mr. Kinney turns to Dick Jones, who looks nervous]

    ED-209: You are in direct violation of Penal Code 1.13, Section 9.

    [entire room of people in full panic trying to stay out of the line of fire, especially Mr. Kinney]

    ED-209: You have five seconds to comply.

    Kinney: Help...! Help me!

    ED-209: Four... three... two... one... I am now authorized to use physical force!

    [ED-209 opens fire]

    • (Score: 1, Informative) by Anonymous Coward on Monday August 07 2017, @05:17AM (2 children)

      by Anonymous Coward on Monday August 07 2017, @05:17AM (#549779)

      The scenarios aren't comparable. The example you mention is an AI failing to grasp its most fundamental and basic training. The issue here is adversarial attacks. They are intentionally using knowledge of how AI vision systems work to try to create scenarios where they fail to recognize something correctly. So for the RoboCop example it would be more like if Mr. Kinney was sitting there smugly holding an L-shaped piece of cardboard in his hand which had some near-imperceptible visual modifications made to it, intentionally designed to make it appear to be a gun to AI learning systems.

      • (Score: 1, Interesting) by Anonymous Coward on Monday August 07 2017, @09:31AM

        by Anonymous Coward on Monday August 07 2017, @09:31AM (#549839)

        They are comparable if Mr Kinney happened to wear clothing with a pattern that the AI vision thought was a weapon for some unknown reason (unknown at that time). But it'll be fixed in the next release of course. Meanwhile too bad about Mr Kinney.

        Anyway whatever it is such robots might still be less trigger happy than the average US cop. After all in the USA they sack cops who don't shoot: http://www.npr.org/2016/12/08/504718239/military-trained-police-may-be-slower-to-shoot-but-that-got-this-vet-fired [npr.org]

        p.s. You definitely are doing things wrong if your average cop is less trigger happy than a US soldier. US soldiers aren't famous for their restraint.

      • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @06:35AM

        by Anonymous Coward on Tuesday August 08 2017, @06:35AM (#550476)

        Well, maybe Kinney had nearly-invisible paint dots added to his forehead that were recognized as a gun?

  • (Score: 2) by Wootery on Tuesday August 08 2017, @10:07AM

    by Wootery (2341) on Tuesday August 08 2017, @10:07AM (#550524)

    they're looking [at] features that all stop signs share, but would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

    This seems to assume that we can intuitively understand our own visual processing machinery. We don't. Of course we don't.
