
posted by hubie on Wednesday May 11 2022, @09:21PM
from the Ministry-of-Information dept.

Can European regulation rein in ill-behaving algorithms?

Until recently, it wasn't possible to say that AI had a hand in forcing a government to resign. But that's precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.

When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority's workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.
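
The reporting doesn't detail the tax authority's actual system, but the workflow it describes, score every claim and queue the high-risk ones for human review, amounts to something like the following hypothetical sketch (the scoring rule, names, and threshold are all invented for illustration):

```python
# Hypothetical sketch of the triage workflow described above. Nothing here
# reflects the Dutch tax authority's real system: the scoring rule, feature
# names, and threshold are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: int
    anomaly_count: int  # stand-in for whatever features the real model used

def risk_score(claim: Claim) -> float:
    """Toy stand-in for the self-learning model's fraud-risk output in [0, 1]."""
    return min(1.0, claim.anomaly_count / 10)

HIGH_RISK_THRESHOLD = 0.8  # invented cutoff

def triage(claims):
    """Auto-approve low-risk claims; queue high-risk ones for human review."""
    approved, human_review = [], []
    for claim in claims:
        if risk_score(claim) >= HIGH_RISK_THRESHOLD:
            human_review.append(claim)  # a civil servant is supposed to decide
        else:
            approved.append(claim)
    return approved, human_review

approved, flagged = triage([Claim(1, 2), Claim(2, 9)])
print([c.claim_id for c in approved], [c.claim_id for c in flagged])  # [1] [2]
```

In a healthy version of this workflow, the human-review queue is the safeguard. The affair shows what happens when that safeguard degenerates into rubber-stamping.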

In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.

[...] Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.

[...] As the dust settles, it's clear that the affair will do little to halt the spread of AI in governments—60 countries already have national AI initiatives. Private-sector companies no doubt see opportunity in helping the public sector. For all of them, the tale of the Dutch algorithm—deployed in an E.U. country with strong regulations, rule of law, and relatively accountable institutions—serves as a warning.

The hope is that the European Parliament's AI Act, which puts public-sector AI under tighter scrutiny, will ban some applications (like law enforcement's use of facial recognition) and flag something like the Dutch tax authority's algorithm as high-risk. Nathalie Smuha, a technology legal scholar at KU Leuven, in Belgium, summed it up:

"It's not just about making sure the AI system is ethical, legal, and robust; it's also about making sure that the public service in which the AI system [operates] is organized in a way that allows for critical reflection."

Originally spotted on The Eponymous Pickle.


Original Submission

 
  • (Score: 2) by Immerman (3985) on Thursday May 12 2022, @02:24PM (#1244401) (2 children)

    What I meant was, the AI can dig no deeper than the data it is provided, unlike a human, who can seek out more data. (Theoretically an AI could request more data, but that's not how they're normally used.)

    And I already addressed the "just don't give it the data that's illegal to consider" argument. That data is almost certain to correlate well with data it *is* given, so leaving it out is mostly irrelevant; it was mostly redundant to begin with. A human is unlikely to even notice the minor details that reveal that data, but there are no minor details to an AI, only details that do or do not correlate with the biased patterns it's being trained to replicate.
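
    A toy numerical illustration of that point (invented data, plain numpy): withhold the protected attribute entirely, and a strongly correlated "neutral" feature reproduces nearly the same flags anyway.

    ```python
    # Toy illustration (invented data): removing a protected attribute doesn't
    # help when a remaining feature is a near-duplicate of it.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    protected = rng.integers(0, 2, n)            # e.g. citizenship status (0/1)
    # A feature the model *is* given that happens to track it 90% of the time,
    # e.g. postcode or language of correspondence:
    proxy = np.where(rng.random(n) < 0.9, protected, 1 - protected)

    # Biased historical labels: the old process flagged group 1 far more often.
    flagged = (rng.random(n) < np.where(protected == 1, 0.30, 0.05)).astype(int)

    print(np.corrcoef(protected, proxy)[0, 1])   # ~0.8: proxy nearly duplicates it
    # Even with the protected column withheld, flag rates split along the proxy:
    print(flagged[proxy == 1].mean())            # ~0.28
    print(flagged[proxy == 0].mean())            # ~0.07
    ```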

  • (Score: 2) by JoeMerchant (3937) on Thursday May 12 2022, @02:43PM (#1244404) (1 child)

    The old Target stores example of a father learning his daughter is pregnant from Target marketing is priceless. I think it was based on actual purchase patterns, but it could as easily have been based on web browsing habits: the AI correctly predicted that the daughter was with child and spilled the beans with some direct-to-pregnant-women marketing that the father saw. AI is capable of these "Sherlock Holmes"-style inferences due in large part to the inhuman volume of "publicly available" information it can access.

    In theory, license plate scanners violate no privacy laws: anybody can sit anywhere they like and write down the plate numbers of cars that pass them on the public roads. But putting automatic plate readers on police cruisers and building databases of what plate was where, at what time, all over the city, for years creates an unprecedented dataset capable of all kinds of fishing-expedition abuses.
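
    A sketch of why aggregation changes the picture (hypothetical schema and data, in-memory SQLite): each individual sighting is innocuous, but one query over the table reconstructs a vehicle's movements.

    ```python
    # Hypothetical sketch: individually-public plate sightings become a
    # tracking database once aggregated. Schema and data are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE sightings (
        plate TEXT, seen_at TEXT, location TEXT)""")
    db.executemany("INSERT INTO sightings VALUES (?, ?, ?)", [
        ("AB-123-C", "2022-05-12 08:05", "Main St & 3rd"),
        ("AB-123-C", "2022-05-12 08:41", "Hospital parking"),
        ("AB-123-C", "2022-05-12 17:20", "Main St & 3rd"),
    ])

    # One query turns scattered observations into a movement profile:
    for row in db.execute(
            "SELECT seen_at, location FROM sightings "
            "WHERE plate = ? ORDER BY seen_at", ("AB-123-C",)):
        print(row)
    ```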

    What we need are new definitions of what counts as publicly available information, definitions that take into account the changes in technology since the existing ones were drafted. That's going to be a hard-fought battle, because both sides have a lot to gain or lose in the outcome.

    Meanwhile, nasty bureaucrats will be nasty bureaucrats right up to and past the limits of what's presently legally allowed.

    • (Score: 1, Interesting) by Anonymous Coward on Friday May 13 2022, @01:01AM (#1244635)

      What has always bugged me about that Target story is that you're never told how many mailings went out to people who weren't pregnant. That one story got held up as the poster child for businesses to throw lots of money into data mining, but if nine other people got the same mailing and weren't expecting, then it isn't very impressive. It is easy to remember the interesting outliers, but dangerous to treat them as the mean, precisely because they are the ones you remember.
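
      Put in made-up numbers, the base-rate point looks like this: one famous hit tells you nothing about precision unless you also count the misses.

      ```python
      # Made-up figures, purely to make the point concrete.
      mailings_sent = 10        # hypothetical "likely pregnant" mailings
      actually_pregnant = 1     # the one story everybody retells
      precision = actually_pregnant / mailings_sent
      print(f"precision = {precision:.0%}")   # 10% -- far less impressive
      ```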

      It isn't much different from the old scam where you mail out free stock tips to 1024 houses: in half of them you predict a stock (or, more appropriate these days, the price of bitcoin) will go up in value, and in the other half that it will go down. The next week you mail another tip to the 512 houses where you predicted correctly. The week after that you mail the 256 houses, and so on. Near the end, you offer your (expensive) services to the last one (or two, or four), who are convinced how good you are because every week you've sent them tips that were correct. I'm not saying the Target story was a scam (though when marketing people push stories like that, I'm usually on guard for at least a lot of exaggeration), but the Target story is so memorable that it is the one you'd remember, not the other angry fathers who yelled at the marketing department because the daughter wasn't pregnant.
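
      A short simulation makes the mechanics of that scam explicit (self-contained, no real data):

      ```python
      # Simulation of the halving scam described above: 1024 recipients, half
      # get an "up" prediction and half get "down" each week; keep only the
      # half that saw a correct tip. No forecasting skill involved anywhere.
      import random

      recipients = list(range(1024))   # households receiving the free "tips"
      week = 0
      while len(recipients) > 1:
          week += 1
          half = len(recipients) // 2
          up_group, down_group = recipients[:half], recipients[half:]
          market_went_up = random.random() < 0.5   # the market does what it does
          recipients = up_group if market_went_up else down_group
          print(f"week {week}: {len(recipients)} households with a perfect record")
      # After 10 weeks, one household has seen 10 straight correct predictions
      # purely by construction.
      ```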

      We're many years removed from that, and I can tell you that I am not terribly impressed with the items various algorithms recommend to me. (A few years back YouTube kept showing me ads for products and services to stop smoking, when neither I nor anyone in my household has ever smoked a day in our lives!)