
posted by hubie on Wednesday May 11 2022, @09:21PM
from the Ministry-of-Information dept.

Can European regulation rein in ill-behaving algorithms?

Until recently, it wasn't possible to say that AI had a hand in forcing a government to resign. But that's precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.

When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority's workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.

In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.

[...] Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.

[...] As the dust settles, it's clear that the affair will do little to halt the spread of AI in governments—60 countries already have national AI initiatives. Private-sector companies no doubt see opportunity in helping the public sector. For all of them, the tale of the Dutch algorithm—deployed in an E.U. country with strong regulations, rule of law, and relatively accountable institutions—serves as a warning.

The hope is that the European Parliament's AI Act, which puts public-sector AI under tighter scrutiny, will ban some applications (like law enforcement's use of facial recognition) and classify something like the Dutch tax authority's algorithm as high-risk. Nathalie Smuha, a technology legal scholar at KU Leuven, in Belgium, summed it up:

"It's not just about making sure the AI system is ethical, legal, and robust; it's also about making sure that the public service in which the AI system [operates] is organized in a way that allows for critical reflection."

Originally spotted on The Eponymous Pickle.


Original Submission

 
  • (Score: 2) by JoeMerchant on Thursday May 12 2022, @11:21AM (4 children)

    by JoeMerchant (3937) on Thursday May 12 2022, @11:21AM (#1244354)

When I was 17 I had a summer job in a factory. One of the tasks they gave me was a bag with thousands of screws, made by gluing a plastic head onto a bit of aluminum threaded rod. More than 99% were crooked; my job was to pull out 50 straight ones. Stupid job, perfect for AI.

Pathologists screen millions of Pap smears for signs of cancer every year (probably millions every day, globally)... This is also an excellent job for AI to assist with, but in a very different way.

The screws went into circuit breakers that go into airplanes. If you find a straight one, you can verify it is straight within acceptable tolerance, and using it will not subject the aircraft to unacceptable risk of the screw failing.

With the Pap smears, letting a single cancerous slide slip by undetected would put someone's life at risk. Better to put 100 false positives in front of the human reviewers than to let one false negative get through.

    One could argue that the Dutch fraud screening AI should run more like the straight screw detector, but of course conservative politics would have it run more like the cancer screening tool.
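The two regimes contrasted above come down to where you place a classifier's decision threshold. A minimal sketch, with made-up risk scores and a hypothetical `flag` helper (no relation to any real screening system):

```python
# Two screening regimes on the same classifier scores:
# - "screw picking": only act when very confident (minimize false positives)
# - "cancer screening": flag anything remotely suspicious (minimize false negatives)

def flag(scores, threshold):
    """Return the indices of items whose risk score meets the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Hypothetical risk scores for ten items (1.0 = certain positive).
scores = [0.05, 0.92, 0.40, 0.10, 0.75, 0.03, 0.55, 0.88, 0.20, 0.60]

# High threshold: few flags, high confidence in each, but borderline cases slip through.
picky = flag(scores, 0.85)      # indices 1 and 7 only

# Low threshold: many flags for humans to review, but very few misses.
cautious = flag(scores, 0.30)   # indices 1, 2, 4, 6, 7, 9
```

A high threshold suits the screw task, where discarding a good screw costs nothing; a low threshold suits cancer screening, at the price of more work for the humans reviewing the flags. The Dutch system's failure mode was running with a cautious, over-flagging configuration while the human review step rubber-stamped instead of filtering.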

I would say that UBI would eliminate the entire question of fraud, making the whole endeavor irrelevant.

    --
    🌻🌻 [google.com]
    Starting Score:    1  point
    Karma-Bonus Modifier   +1  

    Total Score:   2  
  • (Score: 2) by PiMuNu on Thursday May 12 2022, @09:38PM (2 children)

    by PiMuNu (3823) on Thursday May 12 2022, @09:38PM (#1244592)

To be clear, I don't object to using computers to do things - my objection is semantic: it ain't "Artificial Intelligence", it's just heuristic pattern finding. My calculator isn't AI, my internet browser isn't AI, Google search isn't AI.

    • (Score: 2) by JoeMerchant on Friday May 13 2022, @12:23AM (1 child)

      by JoeMerchant (3937) on Friday May 13 2022, @12:23AM (#1244620)

Things like AlphaGo are "just heuristics", but they are certainly approaching a form of intelligence in limited areas, and they are orders of magnitude more complex than a calculator.

      --
      🌻🌻 [google.com]
      • (Score: 0) by Anonymous Coward on Friday May 13 2022, @01:05AM

        by Anonymous Coward on Friday May 13 2022, @01:05AM (#1244638)

I found DeepMind: The Podcast [deepmind.com] to be very interesting. It covers all of these kinds of issues, including the goal of achieving artificial general intelligence and what that might look like and mean. A great way to pass the time on my daily commute.

  • (Score: 0) by Anonymous Coward on Friday May 13 2022, @01:11AM

    by Anonymous Coward on Friday May 13 2022, @01:11AM (#1244639)

One thing about UBI that I don't understand is, won't it just cause inflation and generally raise prices? Is it the absolute value of the money that matters, or the delta between some baseline and what things cost? If you raise the baseline and costs rise by the same amount, aren't you back where you started?
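The question's framing can be put in arithmetic. With purely illustrative numbers (nothing here models a real economy), if what matters is the margin between an income floor and the cost of living, and a UBI payment were absorbed one-for-one into prices, the margin would not move:

```python
# Hypothetical numbers: an income floor and the cost of a basket of goods.
baseline_income = 1000
cost_of_living = 800
margin = baseline_income - cost_of_living   # what's left over: 200

# Worst case the question describes: prices rise by the full UBI amount.
ubi = 500
margin_after = (baseline_income + ubi) - (cost_of_living + ubi)

# The absolute numbers are larger, but the delta is unchanged.
```

Whether prices would actually rise one-for-one with a UBI is the empirical question the comment leaves open; the sketch only illustrates the delta-versus-absolute-value distinction it raises.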