
posted by Fnord666 on Tuesday August 18 2020, @05:11PM   Printer-friendly
from the mood-ring dept.

British police to trial facial recognition system that detects your mood:

A British police force is set to trial a facial recognition system that infers people's moods by analyzing CCTV footage.

Lincolnshire Police will be able to use the system to search the footage for certain moods and facial expressions, the London Times reports. It will also allow cops to find people wearing hats and glasses, or carrying bags and umbrellas.

The force has got funding from the Home Office to test the tool in the market town of Gainsborough, but ethical concerns have delayed the pilot's launch.

A police spokesperson told the Times that all the footage will be deleted after 31 days. The force will also carry out a human rights and privacy assessment before the trial gets the green light.

[...] "At the same time as these technologies are being rolled out, large numbers of studies are showing that there is... no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks," AI Now's co-founder Prof Kate Crawford told the BBC late last year.

Could it be that it was thrown off because everyone had a stiff upper lip?


Original Submission

 
  • (Score: 5, Insightful) by requerdanos on Tuesday August 18 2020, @05:30PM (3 children)

    by requerdanos (5997) Subscriber Badge on Tuesday August 18 2020, @05:30PM (#1038427) Journal

    The force has got funding from the Home Office to test the tool... but ethical concerns have *delayed* the pilot's launch.

    (Emphasis mine.) That word should perhaps (in a more ideal world) be "prevented," not "delayed."

    The force will also carry out a human rights and privacy assessment before the trial gets the green light.

    I believe that this should ideally say "to see whether or not," instead of simply "before."

    The idea seems to be that the facial recognition application will go forward no matter what the ethical concerns, no matter the effects on human rights and privacy. In such an environment, does it matter what the ethical concerns and effects on rights and privacy are? If so, why the drive to continue no matter what? If not, then why bother looking into it (other than to partially appease the reasonable)?

  • (Score: 1, Insightful) by Anonymous Coward on Tuesday August 18 2020, @08:31PM (2 children)

    by Anonymous Coward on Tuesday August 18 2020, @08:31PM (#1038492)

    They know that ethical objections diminish with enough exposure; people will get used to most things.

    • (Score: 4, Interesting) by ledow on Wednesday August 19 2020, @11:57AM (1 child)

      by ledow (5567) on Wednesday August 19 2020, @11:57AM (#1038775) Homepage

      To be honest, there's no way that any facial recognition system is any good anyway. Especially if you're using it to judge "mood" or to spot masks and glasses.

      This stuff is literally pointless. More arrests happen on random sampling than via face rec at public events. It's literally people who happened to have something in their pocket they shouldn't have, not "Hey, look, we found Lord Lucan hiding among the crowds at Glastonbury!".

      The trials of all kinds of systems, for all kinds of UK police forces, have basically determined that it's no better at recognition than random chance, and far inferior to an informed local copper looking at a camera or - even better - watching people coming into a venue.

      If it wasn't for the fact that I expect all kinds of feature creep (like this "mood" detection rubbish), and that it's an enormous waste of money, I'd literally say let them have it. It just does nothing.

      The best facial recognition in the world cannot tell that I am me, when I stand two feet from a 4K camera aimed directly and solely at my face, at Stansted e-Passport terminals, present my official document with a computer-readable image of myself, stand stock-still for several minutes following all instructions of both computer and humans, who then spend 5 minutes fussing over it and asking me to do different things before then directing me to the human passport desks anyway - where staff wave me through because I look exactly like my passport and nothing has ever flagged for me.

      And that's been happening for YEARS. I think it allowed me through once, in nearly - what? 10 years of having that system. I've been walked to the human desks for at least 5 of those years because I gave up on it, but the staff get stroppy if you do that and have an e-Passport, so I am often forced through the e-passport channels only to then spend 10 minutes faffing about before they let me go back to the human channels again.

      This stuff just simply does not work. Or we'd have known who the people who commit terrorist acts were without ever having to follow them on camera. You'd just type in John Smith and it would give you the full history of their journey. In actual fact, what happens is we manually watch camera-months of CCTV, work out that they tagged their Oyster card at a Tube station entrance, then trace the credit card that topped up that card, and then we might know who they were / were working for.

      As a UK taxpayer, I do not like my money wasted like this, by police or immigration/customs.

      As an IT guy and programmer with an interest in proper AI, I know this stuff cannot and will not work reliably enough with anything currently called "AI", or anything born of it.

      As a UK citizen, I have no fear of the system whatsoever, because of the above, except possibly being flagged as a terrorist when I'm not, but we can resolve that if it ever happens and I can prove my identity in a trice.

      As a privacy advocate, I think this is a pointless waste of money first and foremost, but also that we just shouldn't be using such flaky systems when the money could pay for several hundred real coppers. What will happen is that all the trials will continue to fail miserably until we realise they're useless; they'll quietly die (nobody will publicly admit to wasting millions on a system that doesn't work), and then we'll say we dropped it because we don't want to hurt people's privacy, when really it's nothing to do with that.

      Hell, I can't even log into my own phone or laptop with facial recognition with any certainty (and certainly don't trust that nobody could pretend to be me and log into them), and that's got precisely ONE face to match against. When you have a million images and you're trying to match them against a database of a million people of interest, that software will have a hissy-fit and generate so many false-negatives and false-positives that nobody will bother to do anything with the information. Which is exactly what happens on the trials.
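
      To put rough numbers on that scaling, here's a back-of-the-envelope sketch (in Python; the per-comparison false-alarm rate is an illustrative assumption, not a measured figure):

          # Back-of-the-envelope: match attempts explode at surveillance scale.
          # The per-comparison false-alarm rate is an illustrative assumption.
          per_comparison_false_alarm = 1e-5

          # Phone unlock: one probe face against one enrolled template.
          print(1 * 1 * per_comparison_false_alarm)  # 1e-05 expected false alarms

          # Mass surveillance: a million probe faces, each checked against
          # a million-entry watchlist.
          print(1_000_000 * 1_000_000 * per_comparison_false_alarm)  # 10,000,000 expected false alarms

      Even if each individual comparison were very good, the sheer number of comparisons guarantees an unusable flood of false alarms.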

      • (Score: 2) by bzipitidoo on Wednesday August 19 2020, @04:10PM

        by bzipitidoo (4388) on Wednesday August 19 2020, @04:10PM (#1038867) Journal

        For decades now, law enforcement has been wanting facial recognition. Badly. They want it so much that they are entirely too credulous in accepting assurances that we have it now, or that some group can do it.

        An additional underappreciated difficulty is scaling. If you have a system that can reliably identify one person among 100, it's likely to falter when faced with a million. Even at 99.9% per-face reliability, scanning a million faces still yields 1,000 false positives.
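
        As a rough sketch of that arithmetic (the million faces and the 99.9% figure are from above; the 100 genuine targets is a hypothetical number added purely for illustration):

            # Rough sketch of the base-rate arithmetic above, in Python.
            # "targets" is a hypothetical figure added for illustration.
            population = 1_000_000   # faces scanned
            accuracy = 0.999         # 99.9% per-face reliability
            targets = 100            # hypothetical genuine matches in the crowd

            false_positives = (population - targets) * (1 - accuracy)  # innocent people flagged
            true_positives = targets * accuracy                        # real matches caught

            precision = true_positives / (true_positives + false_positives)
            print(f"false positives: {false_positives:.0f}")  # 1000
            print(f"precision:       {precision:.1%}")        # 9.1% - most flags are wrong

        In other words, even a system far better than anything trialled so far would flag roughly ten wrong people for every right one.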

        I also wonder if we're too proud of our individualism. Maybe the distinctiveness of faces is less than we fondly like to think.