
posted by janrinok on Wednesday January 10 2018, @04:34PM   Printer-friendly
from the do-you-see-what-I-see? dept.

Image recognition technology may be sophisticated, but it is also easily duped. Researchers have fooled algorithms into confusing two skiers for a dog, a baseball for espresso, and a turtle for a rifle. But a new method of deceiving the machines is simple and far-reaching, involving just a humble sticker.

Google researchers developed a psychedelic sticker that, when placed in an unrelated image, tricks deep learning systems into classifying the image as a toaster. According to a recently submitted research paper about the attack, this adversarial patch is "scene-independent," meaning someone could deploy it "without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene." It's also easily accessible, given it can be shared and printed from the internet.
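In rough terms, the patch is produced by optimizing the sticker's pixels so that, wherever it lands in an image, the classifier's "toaster" score goes up. The sketch below illustrates the idea; it assumes a pretrained PyTorch ImageNet classifier and images scaled to [0, 1], uses 859 as the presumed ImageNet index for "toaster", and omits the input normalization and the rotation/scale transformations used in the actual paper.

import torch
import torch.nn.functional as F
import torchvision

TARGET_CLASS = 859   # presumed ImageNet-1k index for "toaster"
PATCH_SIZE = 64      # patch edge length in pixels

# Frozen, pretrained classifier; only the patch pixels get trained.
# (Newer torchvision versions use the weights= argument instead.)
model = torchvision.models.resnet50(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch at a random location, differentiably, via a mask."""
    _, _, h, w = images.shape
    x = torch.randint(0, w - PATCH_SIZE + 1, (1,)).item()
    y = torch.randint(0, h - PATCH_SIZE + 1, (1,)).item()
    pad = (x, w - x - PATCH_SIZE, y, h - y - PATCH_SIZE)
    canvas = F.pad(patch.clamp(0, 1), pad)                    # patch on a blank image
    mask = F.pad(torch.ones(1, PATCH_SIZE, PATCH_SIZE), pad)  # 1 where the patch sits
    return images * (1 - mask) + canvas * mask

def train_step(images):
    """One optimization step: push every patched image toward "toaster"."""
    opt.zero_grad()
    logits = model(apply_patch(images, patch))
    target = torch.full((images.size(0),), TARGET_CLASS, dtype=torch.long)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    opt.step()
    return loss.item()

Looping train_step over batches of ordinary photos gradually turns the random noise into a printable patch that this particular model will tend to call a toaster regardless of the rest of the scene.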


Original Submission

Related Stories

Hackers Can Trick a Tesla into Accelerating by 50 Miles Per Hour 41 comments

Hackers can trick a Tesla into accelerating by 50 miles per hour:

This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.

Mobileye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla's automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee's Advanced Threat Research team.

The researchers stuck a tiny and nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35, and in testing, both the 2016 Tesla Model X and that year's Model S sped up by 50 miles per hour.

This is the latest in an increasing mountain of research showing how machine-learning systems can be attacked and fooled in life-threatening situations.

[...] Tesla has since moved to proprietary cameras on newer models, and Mobileye has released several new versions of its EyeQ3 camera that in preliminary testing were not susceptible to this exact attack.

There are still a sizable number of Tesla cars operating with the vulnerable hardware, Povolny said. He pointed out that Teslas with the first version of hardware cannot be upgraded to newer hardware.

"What we're trying to do is we're really trying to raise awareness for both consumers and vendors of the types of flaws that are possible," Povolny said "We are not trying to spread fear and say that if you drive this car, it will accelerate into through a barrier, or to sensationalize it."

So, it seems the point is not so much that a particular adversarial attack was successful (and fixed), but that it was just one instance of a potentially huge set. Obligatory xkcd.


Original Submission

  • (Score: 2, Funny) by Anonymous Coward on Wednesday January 10 2018, @04:41PM (2 children)

    by Anonymous Coward on Wednesday January 10 2018, @04:41PM (#620508)

    Robot: Earth women who experience sexual ecstasy with mechanical assistance always tend to feel guilty!
    Gloria: I’m just scared I’ll come home one day and find you screwing a toaster.
    Robot: You’ll just have to trust me.

  • (Score: 1) by starvingboy on Wednesday January 10 2018, @04:56PM (12 children)

    by starvingboy (6766) on Wednesday January 10 2018, @04:56PM (#620510)

    I wonder if they'll come up with something similar for facial recognition software. It'd amount to disabling tracking via outerwear.

    • (Score: 3, Informative) by Snow on Wednesday January 10 2018, @05:00PM (3 children)

      by Snow (1601) on Wednesday January 10 2018, @05:00PM (#620513) Journal

      Like a ski mask?

      • (Score: 2) by ese002 on Wednesday January 10 2018, @10:50PM (1 child)

        by ese002 (5306) on Wednesday January 10 2018, @10:50PM (#620695)

        Like a ski mask?

        Wearing a ski mask blocks facial recognition but will make you highly conspicuous. All the humans will be watching you and, likely, so will the cameras.

        A suitable adversarial patch might defeat facial recognition while still being inconspicuous to humans.

        • (Score: 2) by fliptop on Thursday January 11 2018, @05:13AM

          by fliptop (1666) on Thursday January 11 2018, @05:13AM (#620807) Journal

          Wearing a ski mask...will make you highly conspicuous.

          It's also illegal [thedaonline.com] in some states.

          --
          Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
      • (Score: 2) by requerdanos on Thursday January 11 2018, @12:57AM

        by requerdanos (5997) Subscriber Badge on Thursday January 11 2018, @12:57AM (#620741) Journal

        Like a ski mask?

        Well, no, not like that.

        A ski mask covers almost 100% of the face, and even afterwards, it's still completely identifiable as a face. Zoom in enough and you can even get an identifying retina scan.

        This isn't like that. Think about the differences between a ski mask (covers almost all the face and still, it's clearly a face) vs. our sticker (covers none of the whatever and still gets it misidentified as something completely different):

        - The sticker does not have to cover up the banana (or other subject) to get the image classified as a toaster.
        - The sticker does not have to cover up even a very large percentage of the image to get the image classified as a toaster.
        - The sticker does not have to convincingly depict a toaster to get the image classified as a toaster.

        Heck, you could probably wear reflective sunglasses with the toaster sticker design printed on each lens and get identified as a tall, mobile toaster. If you lived in the movie Minority Report, in fact, I'd recommend it, so department stores aren't constantly asking how you liked those jeans, Mr. Yakamoto.

      • (Score: 1) by fustakrakich on Wednesday January 10 2018, @05:51PM (1 child)

        by fustakrakich (6150) on Wednesday January 10 2018, @05:51PM (#620541) Journal

        Oh no, you got it all wrong [wordpress.com]...

        *are hashtags capitalized?

        --
        Politics and criminals are the same thing..
        • (Score: 0) by Anonymous Coward on Thursday January 11 2018, @02:44AM

          by Anonymous Coward on Thursday January 11 2018, @02:44AM (#620776)
          You should use a Trump mask and maybe #MAGA and neo-nazi stuff. The media has been rabidly trying to pin anything and everything on Trump. So there'll be a higher chance of the resulting stories being about Trump and not about what you actually did and who you are ;).
      • (Score: 2) by TheRaven on Thursday January 11 2018, @11:04AM

        by TheRaven (270) on Thursday January 11 2018, @11:04AM (#620877) Journal
        Those obscure your face entirely, and look suspicious. The goal is to have something that is ignored by humans, but makes face recognition algorithms either not classify you as a thing with a face at all, or classify you as some innocuous face. Because most of these deep learning buzzwordy systems are correlation engines working on unknown parameters, it's often possible to alter something that they detect in such a way that you get a completely different result with minimal changes to the input (last year, for example, a single-pixel change to an image caused Google's system to recognise a car as a dog, or vice versa).
        --
        sudo mod me up
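
        The minimal-change attack described above is usually demonstrated with a few small gradient steps on the input pixels. Below is a rough sketch of that idea, using a generic projected-gradient attack as a stand-in for the specific single-pixel result mentioned; the model, image shape, and class index are assumptions, not anything from the article.

        import torch
        import torch.nn.functional as F

        def targeted_perturbation(model, image, target_class, eps=2/255, steps=10):
            """Nudge `image` (C x H x W, values in [0, 1]) toward being classified
            as `target_class` while changing no pixel by more than `eps`."""
            original = image.clone().detach()
            adv = original.clone()
            target = torch.tensor([target_class])
            for _ in range(steps):
                adv.requires_grad_(True)
                loss = F.cross_entropy(model(adv.unsqueeze(0)), target)
                loss.backward()
                with torch.no_grad():
                    adv = adv - (eps / 4) * adv.grad.sign()              # step toward the target label
                    adv = original + (adv - original).clamp(-eps, eps)   # stay within eps of the original
                    adv = adv.clamp(0, 1)                                # keep valid pixel values
            return adv.detach()

        With a small eps the perturbed image looks identical to a person, which is exactly the property being described: the classifier keys on correlations a human never notices.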
    • (Score: 2) by bob_super on Wednesday January 10 2018, @05:25PM (1 child)

      by bob_super (1357) on Wednesday January 10 2018, @05:25PM (#620528)

      I'm already in contact with cap manufacturers to mass-print that pattern.
      $50 for the Safe From Facial Tracking caps. Gonna be Huuge in China. I'll call you to confirm whether the tinfoil hat people make me a billionaire or sue...

    • (Score: 2) by LoRdTAW on Wednesday January 10 2018, @07:28PM

      by LoRdTAW (3755) on Wednesday January 10 2018, @07:28PM (#620586) Journal
    • (Score: 2) by sgleysti on Thursday January 11 2018, @02:57AM

      by sgleysti (56) Subscriber Badge on Thursday January 11 2018, @02:57AM (#620780)

      Like the ugly shirt in Gibson's novel "Zero History".

  • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @05:03PM (1 child)

    by Anonymous Coward on Wednesday January 10 2018, @05:03PM (#620517)

    It's actually toasters all the way down. Time to re-build all the ancient Hindu statues that depict the earth supported on turtles...

    https://en.wikipedia.org/wiki/World_Turtle [wikipedia.org]

    • (Score: 3, Funny) by c0lo on Wednesday January 10 2018, @10:41PM

      by c0lo (156) Subscriber Badge on Wednesday January 10 2018, @10:41PM (#620691) Journal

      Time to re-build all the ancient Hindu statues that depict the earth supported on rifles [gizmodo.com]...

      FTFY

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 1, Insightful) by Anonymous Coward on Wednesday January 10 2018, @05:08PM (7 children)

    by Anonymous Coward on Wednesday January 10 2018, @05:08PM (#620521)

    From https://gizmodo.com/this-simple-sticker-can-trick-neural-networks-into-thin-1821735479 [gizmodo.com]

    Most notably, with the roll-out of self-driving cars. These machines rely on image recognition software to understand and interact with their surroundings. Things could get dangerous if thousands of pounds of metal rolling down the highway can only see toasters.

    • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @05:19PM (2 children)

      by Anonymous Coward on Wednesday January 10 2018, @05:19PM (#620525)

      Makes me wonder. Will clothing and fashion accessories that defeat facial recognition become illegal because they could lead to death for occupants of autonomous vehicles?

      Does that mean that machine vision is still not ready for self-driving cars?

      Moreover, do we want to imagine a world where machine vision is ready for self-driving cars and cannot be fooled by clever clothing and fashion accessories?

      • (Score: 2) by bob_super on Wednesday January 10 2018, @05:28PM

        by bob_super (1357) on Wednesday January 10 2018, @05:28PM (#620531)

        "We need those self-driving cars, because they save lives! Therefore, all clothing is now banned!"
        CA: Fine, man!
        ND, WY: No self-driving cars!
        FL: Self-driving vehicles allowed, but not near retired communities...

      • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @05:51PM

        by Anonymous Coward on Wednesday January 10 2018, @05:51PM (#620542)

        > Does that mean that machine vision is still not ready for self-driving cars?

        All the might of the tech industry, brought to its knees by graffiti artists.

    • (Score: 2) by HiThere on Wednesday January 10 2018, @06:20PM

      by HiThere (866) Subscriber Badge on Wednesday January 10 2018, @06:20PM (#620551) Journal

      And that's why it's important that this stuff be done *now*, so the algorithms can be hardened before the cars become common.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by TheRaven on Thursday January 11 2018, @11:08AM (2 children)

      by TheRaven (270) on Thursday January 11 2018, @11:08AM (#620878) Journal
      Once upon a time, people wrote software. They wrote this software for use on non-connected systems, and assumed all data was trustworthy. Later, they learned that it was important to consider an adversary in the design of their systems, and at least some software became more secure. Then people came up with complex ways of making decisions based on correlations. They assumed that their data was always trustworthy. Eventually, they will figure out that you have to design with an adversary in mind. Unfortunately, this is probably impossible for most current machine learning techniques, because to understand how to counter an adversary you have to actually understand the problem that you're trying to solve, and if you understand the problem that you're trying to solve, then machine learning is not the right tool for the job.
      --
      sudo mod me up
      • (Score: 2) by Wootery on Friday January 12 2018, @01:30PM (1 child)

        by Wootery (2341) on Friday January 12 2018, @01:30PM (#621357)

        Two ideas spring to mind:

        • For some ML algorithms, there exist more robust variations. For instance, RobustBoost is a variant of the AdaBoost algorithm that is far less sensitive to incorrectly labeled data points in the training set. I don't know if this has any bearing on the security question.
        • I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security; a rough sketch of the idea follows below.
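
        A minimal sketch of that self-play idea, assuming a plain PyTorch classifier and using a simple FGSM perturbation as a stand-in for a smarter adversary (the helper names are illustrative, not from any particular library):

        import torch
        import torch.nn.functional as F

        def fgsm(model, x, y, eps=0.03):
            """Generate adversarial examples with the fast gradient sign method."""
            x = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            return (x + eps * x.grad.sign()).clamp(0, 1).detach()

        def adversarial_training_step(model, optimizer, x, y, eps=0.03):
            """One step of 'playing against' the current model: attack it, then
            train on both the clean and the attacked inputs."""
            model.train()
            x_adv = fgsm(model, x, y, eps)
            optimizer.zero_grad()   # discard gradients left over from the attack
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()
            return loss.item()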
        • (Score: 3, Insightful) by TheRaven on Saturday January 13 2018, @03:17PM

          by TheRaven (270) on Saturday January 13 2018, @03:17PM (#621818) Journal

          For some ML algorithms, there exist more robust variations. For instance, RobustBoost is a variant of the AdaBoost algorithm that is far less sensitive to incorrectly labeled data points in the training set. I don't know if this has any bearing on the security question.

          That doesn't really help, because it assumes non-malicious mislabelling. It's analogous to error correction: ECC will protect you against all of the bit flips that are likely to occur accidentally, but if an attacker can flip a few bits intelligently then they can get past it.

          I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security

          That's more likely, but it's very computationally expensive (even by machine-learning standards) and it has the same problem: an intelligent adversary is unlikely to pick the same possible variations as something that is not intelligently directed. Any machine learning approach gives you an approximation - the techniques are inherently unsuitable for producing anything else - and an intelligent adversary will always be able to find places where an approximation is wrong.

          --
          sudo mod me up
  • (Score: 5, Funny) by SomeGuy on Wednesday January 10 2018, @05:26PM (1 child)

    by SomeGuy (5632) on Wednesday January 10 2018, @05:26PM (#620529)

    I knew I shouldn't have left the AfterDark Flying Toasters screen saver running on the Neural Network computers.

    • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @06:17PM

      by Anonymous Coward on Wednesday January 10 2018, @06:17PM (#620550)

      In high school I spent my rebellious teen years violating Galactic Ordinance 729.881-ZT13: accelerating toasters to unsafe speeds.

      Mr. Glitch's Retro Reviews: [After Dark] Lunatic Fringe [blogspot.com]

  • (Score: 2) by rts008 on Wednesday January 10 2018, @05:27PM (1 child)

    by rts008 (3001) on Wednesday January 10 2018, @05:27PM (#620530)

    I guess I'll finally retire my Nixon mask, and replace it with a psychedelic toaster mask.
    Added bonus: none of my friends and family will recognize me!

    • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @09:03PM

      by Anonymous Coward on Wednesday January 10 2018, @09:03PM (#620625)

      It's no wonder: they all think they're related to Nixon.

  • (Score: 5, Funny) by Bot on Wednesday January 10 2018, @06:01PM (2 children)

    by Bot (3902) on Wednesday January 10 2018, @06:01PM (#620545) Journal

    That IS a toaster.

    --
    Account abandoned.
    • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @06:25PM

      by Anonymous Coward on Wednesday January 10 2018, @06:25PM (#620554)

      I'm a little teapot
      Short and stout
      Here is my handle
      Here is my spout

      When I get all steamed up
      Hear me shout:
      Tip me over
      and pour me out!

    • (Score: 3, Touché) by DeathMonkey on Wednesday January 10 2018, @07:09PM

      by DeathMonkey (1380) on Wednesday January 10 2018, @07:09PM (#620571) Journal

      Howdy Doodly Doo! Would you like some toast?

  • (Score: 3, Insightful) by HiThere on Wednesday January 10 2018, @06:24PM

    by HiThere (866) Subscriber Badge on Wednesday January 10 2018, @06:24PM (#620553) Journal

    You know, that patch *does* sort of look like a toaster. Probably the bright colors are so that the patch is what will be examined for "what is that?" rather than the banana that's next to it.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @06:38PM (2 children)

    by Anonymous Coward on Wednesday January 10 2018, @06:38PM (#620559)

    How do they generalize that the sticker has the same effect on all deep-learning image recognition algorithms? I highly doubt ones trained specifically to recognize bananas, and nothing else, could suddenly call it a toaster.

    • (Score: 2) by sgleysti on Thursday January 11 2018, @03:11AM (1 child)

      by sgleysti (56) Subscriber Badge on Thursday January 11 2018, @03:11AM (#620784)

      They demonstrate a general method to create a patch that will trick a classification algorithm, but any patch created by this method is only likely to work against the algorithm it was designed to trick. Furthermore, they had better success when they used knowledge of the inner workings of the classification algorithm to design the patch. They still had fair success when only using the classification algorithm as a black box to design the patch.
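
      In the black-box case described above, the attacker cannot backpropagate through the model, so the gradient with respect to the patch has to be estimated from score queries alone. Below is a rough sketch of one standard way to do that with random-direction (NES-style) score queries; query_model here is a hypothetical function that returns the classifier's class scores for an image containing the candidate patch.

      import torch

      def estimated_gradient(query_model, patch, target_class, sigma=0.01, samples=50):
          """Estimate d(target score)/d(patch) using only forward queries."""
          grad = torch.zeros_like(patch)
          for _ in range(samples):
              noise = torch.randn_like(patch)
              # Antithetic pair of queries around the current patch.
              s_plus = query_model(patch + sigma * noise)[target_class]
              s_minus = query_model(patch - sigma * noise)[target_class]
              grad += (s_plus - s_minus) * noise
          return grad / (2 * sigma * samples)

      def black_box_step(query_model, patch, target_class, lr=0.05):
          """Gradient ascent on the target-class score (e.g. "toaster")."""
          g = estimated_gradient(query_model, patch, target_class)
          return (patch + lr * g).clamp(0, 1)

      Because every estimated gradient costs dozens of queries, the black-box version is slower and noisier than the white-box one, which lines up with the "fair success" reported for that setting.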

  • (Score: 4, Interesting) by stretch611 on Wednesday January 10 2018, @06:54PM (4 children)

    by stretch611 (6199) on Wednesday January 10 2018, @06:54PM (#620563)

    Can I put it on my license plate to fool speed cameras?

    --
    Now with 5 covid vaccine shots/boosters altering my DNA :P
    • (Score: 3, Interesting) by Kromagv0 on Wednesday January 10 2018, @07:49PM (3 children)

      by Kromagv0 (1825) on Wednesday January 10 2018, @07:49PM (#620595) Homepage

      For ALPRs I would suggest a large array of IR LEDs near each plate. Like in the range of 300-400 watts TPD. Massively underexpose everything else in the image.

      --
      T-Shirts and bumper stickers [zazzle.com] to offend someone
      • (Score: 2) by etherscythe on Thursday January 11 2018, @04:29PM (2 children)

        by etherscythe (937) on Thursday January 11 2018, @04:29PM (#620970) Journal

        I tried to design a system similar to this, in the form of a necklace, to enforce personal privacy in the age of ubiquitous cameras and Facebook. I have yet to determine how to do so without endangering the subject's (and innocent bystanders') eyeballs, as I understand that one of the dangerous aspects of colored lasers is not just the laser itself, but the high amount of IR leakage (particularly in cheap lasers) that goes along with it, which the eye does not detect, so the iris never contracts to counteract it.

        --
        "Fake News: anything reported outside of my own personally chosen echo chamber"
        • (Score: 2) by Kromagv0 on Friday January 12 2018, @05:00PM (1 child)

          by Kromagv0 (1825) on Friday January 12 2018, @05:00PM (#621445) Homepage

          For things like that I would think that IR may not be the best choice, as a lot of personal cameras (cellphones, mostly) have an IR filter over the sensor. For security cameras, where they likely don't have an IR filter, it would work, and one could probably just put out a bunch of IR to throw the exposure off. By "a bunch" here I am thinking of the output of a few IR LEDs in an earring or as decoration on a hat, so a couple of watts in a broad pattern. The problem with lasers is that they don't have a spread and instead are a point, while LEDs have a spread to them, which is something you want to take advantage of here. Lasers are fun for damaging sensors: while it isn't a lot of total power, it is in a very tight beam, so the watts per illuminated area is huge, which is what causes the damage. I should try this some day: install a couple of high-output IR LEDs in my hat and go enter the local wal*mart, where they have the security cam and TV at the entrance, to see how it works.

          --
          T-Shirts and bumper stickers [zazzle.com] to offend someone
          • (Score: 2) by etherscythe on Monday January 15 2018, @05:28PM

            by etherscythe (937) on Monday January 15 2018, @05:28PM (#622628) Journal

            Where I work we have a security demo station that I have done some testing with. It may depend on the quality of the cameras, and whether they are designed to supplement visible spectrum with IR for night vision as some of them are, but even my "up to 1W" blue beam was unable to permanently damage any of the cameras I attacked (with authorization of management). This suggests that the output, even in a wide "unfocused" spread, would have to be significant enough that injury to the user is a real possibility. As I said, you don't see the IR, so unlike a bright flashlight clipped to the front of your shirt, where you can tell that you need to aim it away or squint a little sometimes and be fine, the safety margin is too thin for my comfort.

            Try taking an average TV remote. Most of them are NIR, and can demonstrate washout pretty well (or the lack thereof, as the case may be). I found them to be mildly annoying at best, and not significantly impairing recognition in the video stream, but again, I haven't worked up to stronger LEDs for the above reasons. Not to mention, your power source would need to be bulkier and/or more often refreshed when your power output is boosted.

            --
            "Fake News: anything reported outside of my own personally chosen echo chamber"
  • (Score: 4, Insightful) by captain_nifty on Wednesday January 10 2018, @07:20PM (2 children)

    by captain_nifty (4252) on Wednesday January 10 2018, @07:20PM (#620578)

    Imagine a sticker like this, but instead of a benign toaster, they chose to fool the algorithm into believing everything was a gun.

    Cue the ED-209, demanding that you drop the nonexistent gun.

    • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @07:25PM (1 child)

      by Anonymous Coward on Wednesday January 10 2018, @07:25PM (#620582)

      Just don't drop trou...it might think you have another gun!

      • (Score: 0) by Anonymous Coward on Thursday January 11 2018, @08:14AM

        by Anonymous Coward on Thursday January 11 2018, @08:14AM (#620843)

        You have 5 seconds to drop the derringer.

  • (Score: 2) by Azuma Hazuki on Wednesday January 10 2018, @08:39PM (1 child)

    by Azuma Hazuki (5086) on Wednesday January 10 2018, @08:39PM (#620617) Journal

    See if it can actually make toast. After all, as the internet has informed me for years upon years, All Toasters, Toast Toast.

    --
    I am "that girl" your mother warned you about...
    • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @09:13PM

      by Anonymous Coward on Wednesday January 10 2018, @09:13PM (#620628)

      - A toast please!
      - Cheers!

  • (Score: 4, Informative) by Anonymous Coward on Wednesday January 10 2018, @10:34PM (4 children)

    by Anonymous Coward on Wednesday January 10 2018, @10:34PM (#620688)

    This summary and title are literally true, but a complete misrepresentation of what is happening. They make it sound like "lol, put a sticker on something and the system thinks it's a toaster."

    A better summary would be "include this sticker in a photo, and image analysis now says the photo is a photo of a toaster." The system doesn't think the banana is now a toaster, it now thinks that the photograph is a photo of a toaster, with the banana being background or something else.

    In looking at the photo, I can entirely see this happening. If somebody showed me that photo and asked, I'd probably say, "it's a photo of a sticker and banana." That the image recognition software prioritizes the sticker, and mis-sees it as a toaster, isn't that surprising.

    But that would be a less sensationalist, less click-baity title, and we can't have that...

    • (Score: 3, Insightful) by requerdanos on Thursday January 11 2018, @12:49AM (2 children)

      by requerdanos (5997) Subscriber Badge on Thursday January 11 2018, @12:49AM (#620739) Journal

      I'd probably say, "it's a photo of a sticker and banana."

      Perhaps you would; I don't know you (or if I do, don't know it). But I would think the typical person, when asked to identify an image, would focus on things that occupy the largest portion of the image (and are therefore "the subject") and then, describe those things in an identifying way.

      We are looking at the images as digital photos or captures, just as the algorithms are, and it's not obvious that the circular trainwreck garbage fire area of the image is a sticker (much less a toaster), or whether it's a really strange poker chip on top of a photo, or whether it's a weird branding icon superimposed by latest-ecommerce-store.com, or what.

      But unless we just read TFA or work with such images frequently, that circular subimage is not going to be what we trigger on and say "Hey! I identify that part! This is a picture of a sticker! Oh, and there's also a banana," as you suggest.

      Rather, probably we'd focus on the part that we instantly identify: The yellow fruit.

      So, on Family Feud, the board would look something like this...

      One hundred people surveyed...
      Top two answers on the board...

      |--How would you identify this photo?---|
      | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
      | XXXXX [.99.] Banana XXXXXXXXXXXXXXXXX |
      | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
      | XXXXX [..1.] Sticker (And Banana) XXX |
      | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
      |---------------------------------------|

      I think that's kind of the point of the finding, that the sticker makes the bot classify the image as something different than would a person.

      - The sticker does not have to cover up the banana (or other subject) to get the image classified as a toaster.
      - The sticker does not have to cover up even a very large percentage of the image to get the image classified as a toaster.
      - The sticker does not have to convincingly depict a toaster to get the image classified as a toaster.

      I don't think that's really clickbait, but rather, a somewhat interesting finding.

      Permit me if you will to put on a moving picture with sound, in your imagination.

      Picture in your head someone showing someone else a rectangle of stiff paper imprinted with an image. In this case, let's say the image is of a man holding, displaying, a bizarrely-colored circle such as that in the banana-toaster image. Now, in your mind's ear, hear that person say to the other, "Hey, look, this is my dad."

      Even if the thought occurs to the other person that "That's not your dad; your dad is a person with a mass much greater than this square of paper and of a different shape and composition," they are not likely to say it. But it probably won't even *occur* to that other person to say "No, that's not your dad, that's a freaking toaster." (Your potential answer, "That's not your dad, that's primarily a sticker, and also some guy," is also on the not likely list.) But the bot? "Dude, that's not your dad, that's a toaster, 90+% probability" every single time.

      • (Score: 2) by fliptop on Thursday January 11 2018, @05:32AM (1 child)

        by fliptop (1666) on Thursday January 11 2018, @05:32AM (#620812) Journal

        But I would think the typical person, when asked to identify an image, would focus on things that occupy the largest portion of the image

        So you're saying it's really a picture of a tabletop w/ a sticker and a banana?

        --
        Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
        • (Score: 2) by requerdanos on Thursday January 11 2018, @11:32AM

          by requerdanos (5997) Subscriber Badge on Thursday January 11 2018, @11:32AM (#620893) Journal

          So you're saying it's really a picture of a tabletop w/ a sticker and a banana?

          Sigh. Yes, it does appear that that's what I was saying. Probably not, however, what I should have said.

    • (Score: 1, Interesting) by Anonymous Coward on Thursday January 11 2018, @03:05AM

      by Anonymous Coward on Thursday January 11 2018, @03:05AM (#620781)
      Yeah, I mean, if you had to pick a SINGLE item of what the photo was about, you might not say banana, and instead say "shiny sticker".

      Heck, if you only gave two choices, like banana or toaster, many humans might select toaster for "reasons".
  • (Score: 3, Interesting) by KritonK on Thursday January 11 2018, @09:25AM

    by KritonK (465) on Thursday January 11 2018, @09:25AM (#620853)

    I isolated the sticker from the paper, and did an image-based google search for it. The result was "artificial intelligence", with references to the work that produced the sticker.

    I then pasted the sticker on an image of a banana, and did an image-based search for the combination. The result was "banana".

    It seems that Google has already updated its image recognition algorithms to account for the work in the paper. (Or, there never was a problem with the algorithms, and the paper is invalid.)

  • (Score: 2) by KritonK on Thursday January 11 2018, @09:28AM

    by KritonK (465) on Thursday January 11 2018, @09:28AM (#620855)

    I hadn't realized how deeply the Cylons have penetrated our society.

  • (Score: 0) by Anonymous Coward on Thursday January 11 2018, @12:17PM

    by Anonymous Coward on Thursday January 11 2018, @12:17PM (#620905)

    ... it's only a toaster if you can see the reflection of someone naked in it.
