

posted by hubie on Wednesday May 11 2022, @09:21PM   Printer-friendly
from the Ministry-of-Information dept.

Can European regulation rein in ill-behaving algorithms?

Until recently, it wasn't possible to say that AI had a hand in forcing a government to resign. But that's precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.

When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority's workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.

In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.

[...] Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.

[...] As the dust settles, it's clear that the affair will do little to halt the spread of AI in governments—60 countries already have national AI initiatives. Private-sector companies no doubt see opportunity in helping the public sector. For all of them, the tale of the Dutch algorithm—deployed in an E.U. country with strong regulations, rule of law, and relatively accountable institutions—serves as a warning.

The hope is that the European Parliament's AI Act, which puts public-sector AI under tighter scrutiny, will ban some applications (like law enforcement's use of facial recognition) and flag systems like the Dutch tax authority's algorithm as high-risk. Nathalie Smuha, a technology legal scholar at KU Leuven, in Belgium, summed it up:

"It's not just about making sure the AI system is ethical, legal, and robust; it's also about making sure that the public service in which the AI system [operates] is organized in a way that allows for critical reflection."

Originally spotted on The Eponymous Pickle.



 
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 11 2022, @11:58PM (#1244211) (1 child)

    "Nobody ever got fired for specifying IBM".

    I used to hear that a lot in my early days at corporate.

    Now, it's "Microsoft".

    More signatures and obfuscation of responsibilities.

    If the IT guy knows Linux, Corporate can't have some little non-executive peon knowing how the system works. No one below executive level should have the keys to the kingdom. Ignorance is bliss.

    And we are thus trained to take zero-days in stride.

    The business art of Planned Obsolescence. Can't keep something around that works.

    If I took the MBA approach to maintaining my van, I would be way in debt and walking by now.

    What is this? You still have SAE threads on your engine head bolts? You need Metric! (Extends hand for a shake, quickly followed by pen and papers to sign. Should I shake that hand, that quirky little smile will appear, as he knows all the problems I am gonna have if I let him so much as touch that engine. Problems that will guarantee him a very profitable stream of future revenue.)

    That's called "Marketing". The rest of us call it Planned Deception. People actually get degrees in this.

  • (Score: 2) by SomeGuy (5632) on Thursday May 12 2022, @12:52AM (#1244228)

    Well, apparently now it is "nobody ever got fired for using AI".

    With traditional properly engineered code, one can point a finger to business requirements specified by a specific person, a piece of code written by a specific person, or failure to audit the system by a specific person.

    With AI, this does not exist. It just "learns" from whatever crap is thrown at it, nobody has any idea what it is really doing, and nobody cares. So when it goes "kill all humans" or the like, it's ostensibly not the fault of any one specific person.