posted by martyb on Tuesday September 16 2014, @07:55AM
from the throw-away-your-makeup-and-forget-that-diet dept.

Darren Pauli at El Reg reports:

Chinese researchers have developed a facial recognition system that can pick faces from a crowd with 99.8 percent accuracy from 91 angles. The platform can distinguish between identical twins, unravel layers of makeup and still identify an individual if they've packed on or shed kilos.

Researcher Zhou Xi of the Chinese Academy of Science told local reporters [Google translation] the system would be built into a [mobile] app next year.

"The facial recognition system is not only accurate but also quick to recognise," Xi said.

The platform, which is in limited use in China, topped Carnegie Mellon University's global benchmark, beating the previous accuracy record of 97.6 percent.

It was trained against a database sporting 50 million Chinese faces compiled with help from the University of Illinois and the National University of Singapore.

The biometrics system comes as Australia's Immigration Minister Scott Morrison announced facial recognition Smart Gates would be installed at departure areas within the nation's airports.

The systems already in place at arrival points work by matching Aussie or Kiwi passports against a stored photo and dramatically cut down on Customs wait times, much to this correspondent's delight.

  • (Score: 3, Insightful) by nyder on Tuesday September 16 2014, @08:56AM

    by nyder (4525) on Tuesday September 16 2014, @08:56AM (#93893)

    Perfect for governments that bulk-collect data on their citizens: now they can also match people's movements through cameras (they already do it with cellphones) and have picture evidence to hold against you for the rest of your life, with 99.8% accuracy.

    So 1984 was a few decades late...

    • (Score: 3, Interesting) by E_NOENT on Tuesday September 16 2014, @10:11AM

      by E_NOENT (630) on Tuesday September 16 2014, @10:11AM (#93906) Journal

      Yes, there's that...but let's not forget about the civil/commercial sphere, either!

      Maybe:

      - Buying things with cash won't protect your anonymity anymore
      - Criminals will figure out how to steal facial identities, and lazy security personnel will "take the computer's word for it" and arrest innocent people
      - In-store cameras will "get to know you better" (no store shopping card needed!) and send you eerily personal "special offers" for products you might've browsed
      - Something something "face off" -- http://www.imdb.com/title/tt0119094/ [imdb.com]

      That's all I have for now. I'm sure we'll learn the reality soon enough.

      --
      I'm not in the business... I *am* the business.
    • (Score: 4, Funny) by q.kontinuum on Tuesday September 16 2014, @11:22AM

      by q.kontinuum (532) on Tuesday September 16 2014, @11:22AM (#93924) Journal
      Yes, but think of the children!!! And once an evil terrorist has performed his suicide attack, we can efficiently prevent him from ever boarding an airplane again!
      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 2) by carguy on Tuesday September 16 2014, @06:37PM

      by carguy (568) Subscriber Badge on Tuesday September 16 2014, @06:37PM (#94152)

      Meanwhile, license plate readers (LPR) are being used by more than just law enforcement. I picked up the October 2014 "Car & Driver" magazine at the dentist's office; it notes that towing companies are now building huge license-plate + GPS databases. The readers (cameras) are often hidden in a light bar or behind the grille. They have enough data to track the movements of many cars and figure out where you are likely to be at any time. This is happening all over the USA (except in a few states, like NH and ME, that regulate it).

      Older page, https://www.aclu.org/alpr [aclu.org]
      One of the companies that uses LPR, http://www.airrecovery.com/technology.html [airrecovery.com]

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday September 16 2014, @09:45PM

        by Anonymous Coward on Tuesday September 16 2014, @09:45PM (#94254)

        > Meanwhile, license plate readers (LPR) are being used by more than just law enforcement.

        They've been doing that for years. You can even pay $10 to look up a plate in their database. [forbes.com]

        I've been thinking about countermeasures for LPR. The obvious ones, like a Fresnel lens over the plate, are also obviously illegal. There is a product out there targeted at speedcams/red-light cams that fires a brighter flash to overwhelm the camera when it detects a speedcam flash; apparently it works great. But if the camera doesn't use a flash, it is useless.

        So I am thinking that instead of hiding the plate, it would be better to screw with the algorithms: if you put some characters on either side of the plate, using the same colors, sizes, and spacing as the ones on the plate (but not necessarily the same font; sans-serif should be enough), the LPR would not be able to pick out the actual license number from the "camouflage" around it. Thus it would end up getting a false reading.

        It would not fool a human, but these are automated systems we have to worry about. As far as I can tell there are no laws limiting what you display next to a license plate as long as the plate itself is fully visible, lit at night, in the correct location on the car and oriented correctly.

  • (Score: 2) by aristarchus on Tuesday September 16 2014, @09:06AM

    by aristarchus (2645) on Tuesday September 16 2014, @09:06AM (#93895) Journal

    Before anyone goes off saying that they all look the same, take a look at the terra-cotta army surrounding the tomb of the First Emperor. If anyone can do facial recognition, it will be the Chinese. Europeans, with all the facial hair, they all look the same. Especially the women.

    • (Score: 5, Funny) by Tanuki64 on Tuesday September 16 2014, @09:12AM

      by Tanuki64 (4712) on Tuesday September 16 2014, @09:12AM (#93896)

      Insensible clot. My wife has no facial hair. She shaves twice a day.

      • (Score: 3, Funny) by aristarchus on Tuesday September 16 2014, @09:28AM

        by aristarchus (2645) on Tuesday September 16 2014, @09:28AM (#93900) Journal

        Sorry! How could I have known? (And I thought that was you!)

      • (Score: 3, Funny) by Tanuki64 on Tuesday September 16 2014, @12:47PM

        by Tanuki64 (4712) on Tuesday September 16 2014, @12:47PM (#93963)

        Awww... come on. Whoever modded this 'informative' should step forward... so he can be modded 'funny'. :-D

        • (Score: 0) by Anonymous Coward on Tuesday September 16 2014, @02:17PM

          by Anonymous Coward on Tuesday September 16 2014, @02:17PM (#94013)

          Whoever modded this 'informative' should step forward...

          Hang on, you mean...?!?... ohh, serves me right; next time I'll ask for "photos or it didn't happen" before awarding informative points 😋

    • (Score: 0) by Anonymous Coward on Tuesday September 16 2014, @02:36PM

      by Anonymous Coward on Tuesday September 16 2014, @02:36PM (#94025)

      > Before anyone goes off saying that they all look the same,

      They do all look the same to someone who isn't used to looking at Chinese faces.
      The same goes for white people: their faces all look the same to someone who isn't used to looking at Caucasian faces.

      It isn't an issue of racism, it is an issue of people's brains not having enough practice to be able to look past the broadly common features of any ethnic group in order to see the subtle variations.

      That said, I totally disbelieve this story. Off-angle facial recognition is really hard. Even humans have a ~20% error rate comparing faces to photo IDs, and that's under near-ideal conditions. The only way I would believe this algorithm gets those kinds of accuracy numbers is if it works like Facebook's "DeepFace" software, which relies on social graphs to narrow the search space down to just a handful of candidates.

    • (Score: 2) by c0lo on Tuesday September 16 2014, @02:38PM

      by c0lo (156) Subscriber Badge on Tuesday September 16 2014, @02:38PM (#94027) Journal

      If anyone can do facial recognition, it will be the Chinese.

      True, but some Koreans [ssbkyh.com] take it a step further into the tech-art territory. Check these out:

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by Yog-Yogguth on Tuesday September 23 2014, @08:57PM

        by Yog-Yogguth (1862) Subscriber Badge on Tuesday September 23 2014, @08:57PM (#97335) Journal

        Thanks for the links, very nice and interesting!

        --
        Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
    • (Score: 2) by HiThere on Tuesday September 16 2014, @05:52PM

      by HiThere (866) Subscriber Badge on Tuesday September 16 2014, @05:52PM (#94128) Journal

      Actually, Chinese faces *are* more similar to one another than European faces are. Probably this is because the Chinese population has been homogenized over the centuries. There are still noticeable differences between the northern Chinese and the southern Chinese, and, I expect, those near the border with Tibet will be different still, but that's just a guess. This, however, pales when compared with the differences among even just Caucasian faces. The British are distinct from the Scandinavians, who are distinct from the Germans, who are distinct from the Spanish, who are distinct from the Italians. Even just within Britain you commonly get greater distinctiveness than you do in all of China.

      My belief is that this is because China has for a long time maintained a single government over a large area with densely populated clumps. But this is a hypothetical explanation for the observed reality. It's not just "familiarity allows us to observe differences" as the Chinese themselves observe the same reality.

      So if this system is 98% accurate on Chinese faces, with a bit of training it should be 99+% accurate on non-Chinese faces.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by HiThere on Tuesday September 16 2014, @05:54PM

        by HiThere (866) Subscriber Badge on Tuesday September 16 2014, @05:54PM (#94130) Journal

        Let me amend my final sentence. Change it to:
        So if this system is 99.8% accurate on Chinese faces, with a bit of training it should be 99.9+% accurate on non-Chinese faces.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 1) by pert.boioioing on Tuesday September 16 2014, @09:35AM

    by pert.boioioing (1117) on Tuesday September 16 2014, @09:35AM (#93902)

    Finally, one step closer to the CV Dazzle look [cvdazzle.com] becoming commonplace. Welcome to the future!

    • (Score: 0) by Anonymous Coward on Tuesday September 16 2014, @09:52AM

      by Anonymous Coward on Tuesday September 16 2014, @09:52AM (#93903)

      That's silly. Just go old school e.g. https://en.wikipedia.org/wiki/Burqa [wikipedia.org] https://en.wikipedia.org/wiki/Niq%C4%81b [wikipedia.org] or similar.

      • (Score: 2, Informative) by pert.boioioing on Tuesday September 16 2014, @09:59AM

        by pert.boioioing (1117) on Tuesday September 16 2014, @09:59AM (#93904)

        That's silly.

        That's the point, like a totally radical, to-the-extreme yet functional '80s new-wave style. Since I don't have any hair I'm probably stuck with something like this kind of Boy George look [imgur.com], but that's fine by me.

        And to think this fashion was *truly* ahead of its time...

      • (Score: 0) by Anonymous Coward on Tuesday September 16 2014, @11:26AM

        by Anonymous Coward on Tuesday September 16 2014, @11:26AM (#93925)

        Wasn't there a recent development in gait recognition technology?

        • (Score: 0) by Anonymous Coward on Tuesday September 16 2014, @02:40PM

          by Anonymous Coward on Tuesday September 16 2014, @02:40PM (#94029)

          > Wasn't there a recent development in gait recognition technology?

          I bet gait recognition can be totally hosed by putting a rock in your shoe.
          In ten years it will be easy enough to 3D print a doohickey that you put in your shoe which expands and contracts randomly in order to vary your gait unpredictably.

    • (Score: 2) by tangomargarine on Tuesday September 16 2014, @02:28PM

      by tangomargarine (667) on Tuesday September 16 2014, @02:28PM (#94024)

      Guy Fawkes masks. Plus, there's the bonus that I'm sure it'll irritate the hell out of the authorities. Although I'm sure they'll come up with a new asinine law that makes it illegal to wear masks in public or something.

      Anonymous gets it right and wrong a lot of the time. But that's what you get with chaos.

      Herding cats, hail Eris

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 0) by Anonymous Coward on Tuesday September 16 2014, @02:46PM

        by Anonymous Coward on Tuesday September 16 2014, @02:46PM (#94030)

        > Guy Fawkes masks.

        You don't need something so blatant. They can't run facial recognition if the software doesn't detect a face to begin with.

        Wrap-around sunglasses that are skin-colored and matte instead of shiny metallic or smoked will make you invisible to face-detection software. It won't fool a human who is manually directing the software, but it will be enough to stop any automated system, and that's going to be 99.999% of the problem.

  • (Score: 5, Informative) by MrGuy on Tuesday September 16 2014, @12:11PM

    by MrGuy (1007) on Tuesday September 16 2014, @12:11PM (#93941)

    99.8% accuracy sounds like a really accurate system. Until you realize that it's dangerously INaccurate for what people will try to use it for, like airport security.

    Counterintuitively, even with a 0.2% error rate, MOST of the people this system will flag are actually going to be false positives. In statistics this kind of false positive is a "Type I error", and failure to understand how it interacts with a tiny base rate is why people think "99.8% accuracy" means "really good" for law enforcement purposes.
    Like a medical test for a rare condition, the accuracy of a system needs to be considered in relation to the "base rate" probability of the thing you're testing for.

    Let's consider an example. The US population is about 350 million people. According to recent reports, there are 875,000 people on the terrorism watchlist, but only 9,000 of those are US citizens or permanent residents. [washingtontimes.com]

    Let's start with the lower number - 9,000 of 350,000,000 people in the US are suspected terrorists (that's actually overstating what being on the watchlist means, but it's a start). That means 0.0025% of Americans are terrorists.

    Now say every American is scanned by a camera every day.

    Of 349,991,000 innocent people, 349,291,018 will correctly be identified as innocent, and 699,982 will incorrectly be identified as suspected terrorists.
    Of 9,000 suspected terrorists, 8,982 will be correctly identified as suspected terrorists, and 18 will be incorrectly identified as innocent.

    This means there will be 708,964 people identified as suspected terrorists. Of those, only 8,982 people (about 1.3%) will actually BE terrorists.

    Roughly 98.7% of the IDENTIFIED POTENTIAL TERRORISTS will actually be innocents who are mis-identified. The OVERWHELMING MAJORITY of people identified as terrorists, even by a 99.8% accurate test, will be false positives, simply because actual terrorists are so rare.
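
    For anyone who wants to check the arithmetic, here is the same calculation as a quick Python sketch. The population, watchlist size, and accuracy figures are the round numbers I assumed above, not official statistics:

        # Base-rate sketch: how a "99.8% accurate" screen behaves on a rare class.
        # All figures are the assumed round numbers from the comment above.
        population = 350_000_000   # assumed US population
        terrorists = 9_000         # assumed watchlisted citizens/permanent residents
        accuracy = 0.998           # claimed per-person accuracy

        innocents = population - terrorists
        false_positives = innocents * (1 - accuracy)   # innocents wrongly flagged
        true_positives = terrorists * accuracy         # terrorists correctly flagged
        flagged = false_positives + true_positives

        print(f"people flagged:        {flagged:,.0f}")                  # ~708,964
        print(f"of those, innocent:    {false_positives:,.0f}")          # ~699,982
        print(f"share actually guilty: {true_positives / flagged:.1%}")  # ~1.3%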

    • (Score: 0) by Anonymous Coward on Tuesday September 16 2014, @12:19PM

      by Anonymous Coward on Tuesday September 16 2014, @12:19PM (#93946)

      Mod parent up. That's the first thing I thought when I saw "99.8 percent accuracy." Probably not good enough.

    • (Score: 2, Interesting) by Tanuki64 on Tuesday September 16 2014, @12:34PM

      by Tanuki64 (4712) on Tuesday September 16 2014, @12:34PM (#93956)

      Just a little nitpicking:

      Counterintuitively, even with a 0.2% error rate, MOST of the people this system will flag are actually going to be false positives.

      What about false negatives? Nothing against your calculations, but how do you know that 0.2% means that a person was falsely identified? Ok, ok, terrorists are rare, so the case where one is falsely NOT identified is even rarer. So maybe this case really can be neglected. ;-)

      • (Score: 4, Informative) by MrGuy on Tuesday September 16 2014, @02:18PM

        by MrGuy (1007) on Tuesday September 16 2014, @02:18PM (#94014)

        I know "0.2% means a person was falsely identified" because that's what a 99.8% accurate recognition means.

        The error of failing to detect a REAL terrorist is called "Type II error," which is ALSO a problem, but a different one. Broadly speaking, a Type I error is a "false positive" (incorrectly believing you've found what you're looking for) and a Type II error is a "false negative" (failing to detect what you're looking for).

        There are 2 reasons I focused on the false positives. First, the false negative rate is somewhat intuitive - a 99.8% accurate test will in fact identify 99.8% of potential terrorists. The false positive problem is NOT intuitive - if you arrest every identified terrorist, you'll (intuitively) get 99.8% of the terrorists, but you'll ALSO get a HUGE number of people who are NOT terrorists - the vast majority of identified terrorists...aren't. And (in my example) ALMOST EVERYONE the system stops will NOT be a terrorist.

        Second, the false positive count is the poorly understood "collateral damage" factor that people who don't understand statistics don't realize (and that nefarious people who DO understand statistics like to sweep under the rug). For example, people don't understand that if we launched missiles at every person in Afghanistan whom we were 99% sure was a terrorist, it's NOT the case that 99% of the people killed would be terrorists (even if our probabilities were accurate).
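
        If it helps to see the four cases laid out, here is a tiny sketch of the full confusion matrix with the same assumed round figures (the Type I / Type II labels follow the usual textbook convention):

            # Confusion matrix for the assumed figures above (illustration only).
            population, terrorists, accuracy = 350_000_000, 9_000, 0.998
            innocents = population - terrorists

            tp = terrorists * accuracy        # true positives: terrorists flagged
            fn = terrorists * (1 - accuracy)  # false negatives (Type II): terrorists missed
            fp = innocents * (1 - accuracy)   # false positives (Type I): innocents flagged
            tn = innocents * accuracy         # true negatives: innocents cleared

            print(f"TP={tp:,.0f}  FN={fn:,.0f}  FP={fp:,.0f}  TN={tn:,.0f}")
            # FP (~700,000) dwarfs TP (~9,000), which is the collateral-damage problem.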

        • (Score: 1) by Tanuki64 on Tuesday September 16 2014, @02:23PM

          by Tanuki64 (4712) on Tuesday September 16 2014, @02:23PM (#94020)

          Thank you for the clarification. I really should look into statistics some day.

          • (Score: 2, Insightful) by Anonymous Coward on Tuesday September 16 2014, @02:52PM

            by Anonymous Coward on Tuesday September 16 2014, @02:52PM (#94033)

            If we taught basic stats in high school (not in a dry, theoretical way, but in an applied, social-studies sort of class), this country would be so much better off. Applied stats and critical thinking: the two classes we most need and that no one teaches.

    • (Score: 3, Informative) by c0lo on Tuesday September 16 2014, @12:55PM

      by c0lo (156) Subscriber Badge on Tuesday September 16 2014, @12:55PM (#93965) Journal
      If you rely on a single algo, you are right, of course. But if you double it with a second independent check (e.g. with a human observer; btw, it seems humans have lower face recognition rates, around 97.5% [soylentnews.org]), the probability of both being wrong goes down to 0.002*0.025 = 5e-5. Additionally, in the case of disagreement between the two (a clear signal of ambiguity) you switch to an alternative mode of identification (not based on face recognition).
      With a 5e-5 combined error rate, you'll have around 17,500 false positives among the innocents, while 0.45 of a terrorist will be detected as innocent (the other 0.55 of him will still be detected correctly. Problem is, you won't know which 0.55 of that terrorist is actually the baddy).
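
      A quick sketch of that arithmetic, under the (big) assumption that the algorithm and the human checker fail independently:

          # Two independent checks: assumed 0.2% algo error and 2.5% human error.
          # Independence is an assumption; correlated failures would change the result.
          population, terrorists = 350_000_000, 9_000
          algo_err, human_err = 0.002, 0.025

          combined_err = algo_err * human_err                           # 5e-5
          false_positives = (population - terrorists) * combined_err    # ~17,500
          missed_terrorists = terrorists * combined_err                 # ~0.45

          print(f"combined error rate:      {combined_err:.0e}")
          print(f"innocents double-flagged: {false_positives:,.0f}")
          print(f"terrorists double-missed: {missed_terrorists:.2f}")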
      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by tangomargarine on Tuesday September 16 2014, @02:22PM

        by tangomargarine (667) on Tuesday September 16 2014, @02:22PM (#94017)

        the probability of the two being both wrong goes down to 0.002*0.025=5e-5.

        Only if the identification criteria for both are "randomly" assigned. If they constructed the algorithm based on human identification techniques, wouldn't there be guaranteed range overlap?

        --
        "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 2) by MrGuy on Tuesday September 16 2014, @03:18PM

        by MrGuy (1007) on Tuesday September 16 2014, @03:18PM (#94054)

        Sorta. But now you've introduced the human factor, and human nature.

        A human could with high accuracy be an independent check on any given case.

        But in my example, nearly 99% of the people the human is asked to double check are actually innocent. This is like being an airplane screener - virtually every bag you see is OK, but we're expecting the human to be alert enough to jump right on the one case in 100 where it's actually right to do so. That's a really hard job - after a while, you get numb to the "it's pretty much always the case that it's OK" nature of the job. Airport security routinely fails random checks, even with somewhat obvious simulated weapons, because people aren't machines.

        Trying to tell whether two very similar (by construction) faces are or are not the same all day, when they're almost always not, is not exactly a "high success" task.

        • (Score: 2) by c0lo on Tuesday September 16 2014, @03:40PM

          by c0lo (156) Subscriber Badge on Tuesday September 16 2014, @03:40PM (#94067) Journal
          Just use two different-technique algos (or two different humans). Even at 97.5% accuracy for each of the two, the combined error is 2.5%^2 = 6.25e-4, roughly a third of the 0.2% error rate of the Chinese algo on its own.
          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 2) by MrGuy on Tuesday September 16 2014, @03:51PM

            by MrGuy (1007) on Tuesday September 16 2014, @03:51PM (#94077)

            You're assuming independence - the probability of one algorithm making an error being independent of a second algorithm making the same error.

            That seems like a poor assumption - I don't expect a facial recognition algorithm to fail randomly - it will fail when two faces are very close in shape and structure. So the fact that the first algorithm failed would likely make it significantly more likely that the second algorithm would fail.

            I do agree that it's possible to sort out the mess - just that it's not EASY to do so.

    • (Score: 1) by keick on Tuesday September 16 2014, @01:12PM

      by keick (719) on Tuesday September 16 2014, @01:12PM (#93974)

      While your math is sound, your logic is slightly flawed. I don't think a 0.2% error rate means a 0.2% chance you're one of the 1:38,888 terrorists.

      Using your example: out of 349,991,000 people, 699,982 will be incorrectly identified as SOMEONE ELSE. You then have to consider the probability that the SOMEONE ELSE you matched against was a terrorist, which is roughly a 0.0025% chance, or only about 18 people identified as terrorists, not 699,982.

      So that makes the false positive rate roughly 5e-8... which isn't nearly as bad.
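
      Here's the same two-pass idea as a rough sketch, with the same assumed round numbers as the parent comment; whether a real deployment actually works this way is the open question:

          # Two-pass model: the system first names each person, then flags them only
          # if the (possibly wrong) name happens to be on the watchlist.
          population, terrorists, accuracy = 350_000_000, 9_000, 0.998

          misidentified = (population - terrorists) * (1 - accuracy)  # ~699,982 people
          base_rate = terrorists / population                         # ~2.6e-5
          false_flags = misidentified * base_rate                     # ~18 people

          print(f"people mis-identified: {misidentified:,.0f}")
          print(f"false terrorist flags: {false_flags:,.1f}")
          print(f"per-person flag rate:  {false_flags / population:.0e}")  # ~5e-08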

      • (Score: 1) by Tanuki64 on Tuesday September 16 2014, @01:20PM

        by Tanuki64 (4712) on Tuesday September 16 2014, @01:20PM (#93978)

        Your logic is also slightly flawed. You assume that the faces of the whole population are stored, so if there is a false positive, you are falsely identified as a random other person. It is more likely that faces are only compared to a 'most wanted' list. Being identified as one of those is always the worst case.

        • (Score: 2) by MrGuy on Tuesday September 16 2014, @02:23PM

          by MrGuy (1007) on Tuesday September 16 2014, @02:23PM (#94019)

          Right, this is the assumption I was working on. As grandparent points out, what you'll see in practice depends on how you set up the test and what an "error" means in context.

          If it's a 2-pass system - identify every person by name with 99.8% accuracy, then compare those identities to the identities of known terrorists - we'll get fewer false positives, because most misidentifications will confuse someone with another non-terrorist.

          If it's a 1-pass "are you one of the people in this set?" check, then I think my math holds.

    • (Score: 2) by HiThere on Tuesday September 16 2014, @06:13PM

      by HiThere (866) Subscriber Badge on Tuesday September 16 2014, @06:13PM (#94137) Journal

      Do you really think that's the largest error in the system? I suspect that "photos of an inappropriate person" are much more common, though there's probably no way to get a good estimate of that.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by arslan on Tuesday September 16 2014, @10:51PM

    by arslan (3462) on Tuesday September 16 2014, @10:51PM (#94292)

    I wonder how accurate this would be in South Korea, where they have a high rate of plastic surgery...