posted by hubie on Wednesday May 11 2022, @09:21PM
from the Ministry-of-Information dept.

Can European regulation rein in ill-behaving algorithms?

Until recently, it wasn't possible to say that AI had a hand in forcing a government to resign. But that's precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.

When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority's workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.

In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.

[...] Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.

[...] As the dust settles, it's clear that the affair will do little to halt the spread of AI in governments—60 countries already have national AI initiatives. Private-sector companies no doubt see opportunity in helping the public sector. For all of them, the tale of the Dutch algorithm—deployed in an E.U. country with strong regulations, rule of law, and relatively accountable institutions—serves as a warning.

The hope is that the European Parliament's AI Act, which puts public-sector AI under tighter scrutiny, will ban some applications (like law enforcement's use of facial recognition) and flag something like the Dutch tax authority's algorithm as high-risk. Nathalie Smuha, a technology legal scholar at KU Leuven, in Belgium, summed it up:

"It's not just about making sure the AI system is ethical, legal, and robust; it's also about making sure that the public service in which the AI system [operates] is organized in a way that allows for critical reflection."

Originally spotted on The Eponymous Pickle.


Original Submission

 
  • (Score: 1, Touché) by Anonymous Coward on Wednesday May 11 2022, @09:45PM (20 children)

    by Anonymous Coward on Wednesday May 11 2022, @09:45PM (#1244177)

    > The model saw not being a Dutch citizen as a risk factor.

    Um... perhaps because it actually *was* a risk factor?

    • (Score: 4, Interesting) by Immerman on Wednesday May 11 2022, @10:16PM (15 children)

      by Immerman (3985) on Wednesday May 11 2022, @10:16PM (#1244187)

      Which would be fine *if* the humans it was advising were actually doing their job rather than rubber-stamping the recommendations of a glorified calculator.

      Doing so would also have revealed the bias much earlier, and let them assess if it was evaluating the risk accurately or had gone off the rails. "Heightened risk" is not "committed fraud".

      That's the risk with AI in virtually any advisory role to authority: as a rule people get lazy and would rather just take the advice than do a bunch of work. *Especially* if there are no personal consequences to rival the damage done by abusing their position. And the AI has no awareness of what it's doing, nor any concern for the law.

      • (Score: 5, Insightful) by Rosco P. Coltrane on Wednesday May 11 2022, @11:31PM (6 children)

        by Rosco P. Coltrane (4757) on Wednesday May 11 2022, @11:31PM (#1244205)

        As a rule people get lazy, and would rather just take the advice than do a bunch of work.

        You forget another factor: it's safer to follow the AI's advice than to go against it. If the human processor overturns the machine's decision, they might have to justify that decision to their boss, and if *they* get it wrong, they get reprimanded. Following the machine's incorrect decision, by contrast, is unlikely to leave a black mark on their record.

        • (Score: 5, Insightful) by Immerman on Thursday May 12 2022, @02:25AM (4 children)

          by Immerman (3985) on Thursday May 12 2022, @02:25AM (#1244265)

          An important reason to target AI recommendation hit rates below 50%, maybe well below. Make sure that agreeing with the AI is most likely to be the wrong choice, even if that means having it throw in random people it's pretty sure are actually innocent. In fact, if it can do so in a manner that's not obvious to the human making the final call, that could be a wonderful tactic for letting the AI double-check the human and raise a red flag with management.

          AI is great for picking out the thousand people out of a million that are worth a closer look, saving an immense amount of labor. But it's a long way from finding the hundred of those that are actually a problem. Recognizing that at a policy level, and ensuring that everyone knows the AI is usually wrong, and will actually go so far as to try to intentionally screw them over in front of management on a regular basis, is the best way I can think of to keep the human element actually doing their job. Nobody trusts the co-worker that's constantly trying to screw them over, no matter how good they are at their job.
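
          A minimal sketch of the decoy idea in Python (names and numbers invented; assumes the model exposes a per-claim fraud score):

            import random

            def review_batch(claims, fraud_score, n_total, decoy_fraction=0.6):
                """Mix likely-innocent decoys into the model's top picks, so
                that agreeing with every flag is usually the wrong call."""
                ranked = sorted(claims, key=fraud_score, reverse=True)
                n_decoys = int(n_total * decoy_fraction)
                flagged = ranked[:n_total - n_decoys]               # genuine high-risk picks
                decoys = random.sample(ranked[n_total:], n_decoys)  # low-risk cases
                batch = flagged + decoys
                random.shuffle(batch)    # the reviewer can't tell which is which
                return batch, decoys     # decoy list kept aside for auditing

          A reviewer who keeps confirming "fraud" on the decoys is rubber-stamping, and that's the red flag to management.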

          • (Score: 1) by pTamok on Thursday May 12 2022, @08:30AM (3 children)

            by pTamok (3042) on Thursday May 12 2022, @08:30AM (#1244334)

            What you say is how things should work.

            Unfortunately, as far as benefits are concerned, the government saves more money by cutting off the benefits of the 1000 and waiting for complaints. Of the 900 wrongfully accused of fraud, a significant proportion will not have the resources to challenge the assessment. This is how the UK Disability Benefits system works - more than half the assessments that are appealed are overturned: BBC: Half of disability benefits appeals won in tribunal court [bbc.com], and many are not appealed. In certain areas it is worse: BBC: Disability benefits court appeals won four out of five times [bbc.com].

            In addition, vast amounts of support go unclaimed, for various reasons: £16 billion remains unclaimed in means-tested benefits each year [entitledto.co.uk]; Our annual review suggests about £15 billion of benefits remain unclaimed each year [entitledto.co.uk].

            But yes, requiring AI decisions to be reviewed properly by humans would be great, so long as a mechanism is in place, perhaps even the one you describe, to ensure a proper and substantive review takes place, and it is not simply a box-ticking exercise.

            • (Score: 2) by Immerman on Thursday May 12 2022, @01:12PM (2 children)

              by Immerman (3985) on Thursday May 12 2022, @01:12PM (#1244371)

              Knowing that even 80% of the appealed assessments are overturned isn't very informative without also knowing what percentage of assessments are appealed. It could mean that the system is working correctly in 99.999% of cases, and 20% of the people who appeal are doubling down on trying to game the system. (I doubt it, but it would be consistent with such limited data.)
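
              To put made-up numbers on it (Python, purely illustrative):

                # Invented figures, only to show why the appeal rate matters.
                assessments = 1_000_000
                appeal_rate = 0.01    # 1% of decisions ever get appealed
                overturn_rate = 0.80  # 80% of those appeals succeed

                provably_wrong = assessments * appeal_rate * overturn_rate  # 8,000
                print(f"{provably_wrong / assessments:.1%} of assessments provably wrong")
                # Prints 0.8% - whether the true error rate is ~1% or ~50% depends
                # entirely on how many wrong decisions were never appealed at all.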

              The government saving money may be a motive not to fix the problem, but probably isn't the cause of the problem - after all, saving the government money typically only benefits the politicians who get to spend it on something else, not the people making the individual decisions, or even their organization (unless there's also a very perverse incentive structure in place)

              • (Score: 0) by Anonymous Coward on Thursday May 12 2022, @05:07PM (1 child)

                by Anonymous Coward on Thursday May 12 2022, @05:07PM (#1244466)

                "The government saving money may be motive not to fix the problem, but probably isn't the cause of the problem - after all saving the government money typically only benefits the politicians who get to spend it on something else, not the the people making the individual decisions, or even their organization (unless there's also a very perverse incentive structure in place)"

                Guess you haven't noticed the trend of conservative politicians using austerity as a club to destroy social services. The wealthy would rather let the average person suffer and die than pay more taxes. It's the old line: Republicans say the government doesn't work, then get elected and deliberately make things worse. DeJoy should be in jail for the destruction of the USPS.

                • (Score: 3, Informative) by Immerman on Thursday May 12 2022, @06:29PM

                  by Immerman (3985) on Thursday May 12 2022, @06:29PM (#1244483)

                  You're not really disagreeing - Republicans are among the politicians that get to spend the money on something else (in their case usually either tax cuts for their rich corporate sponsors, or domestic surveillance programs under the guise of kicking out illegal immigrants)

                  The person who is actually getting paid to decide whether this specific person qualifies for benefits under whatever rules the current politicians have in place, though? They rarely have any direct skin in the game, nor does their immediate boss, and probably not their boss's boss either. They're all just middle-men handing out government money that they have no way to touch themselves.

        • (Score: 0) by Anonymous Coward on Thursday May 12 2022, @01:33PM

          by Anonymous Coward on Thursday May 12 2022, @01:33PM (#1244380)

          If the human processor overturns the machine's decision, they might have to justify their decision to their boss. And if *they* get it wrong, they get reprimanded.

          Funny story. This is how Russia invaded Ukraine, except the "AI" was just human analysts, and they tended to get reprimanded if their reports said "this scenario would not be good for Russia". So every hypothetical scenario had to be good for Russia, and there were no problems. Then "someone" assumed the reports reflected reality, and you have the mess today.

          AI is just a computerized "yes man" if you assume it's always right ;) But the problem of training these "yes men" predates AI by millennia.

      • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 11 2022, @11:58PM (1 child)

        by Anonymous Coward on Wednesday May 11 2022, @11:58PM (#1244211)

        "Nobody ever got fired for specifying IBM".

        I used to hear that a lot in my early days at corporate.

        Now, it's "Microsoft".

        More signatures and obfuscation of responsibilities.

        If the IT guy knows Linux, Corporate can't have some little non-executive peon knowing how the system works. No one below executive level should have the keys to the kingdom. Ignorance is bliss.

        And we are thus trained to take zero-days in stride.

        The business art of Planned Obsolescence. Can't keep something around that works.

        If I took the MBA approach to maintaining my van, I would be way in debt and walking by now.

        What is this? You still have SAE threads on your engine head bolts? You need Metric! (Extends hand for a shake, quickly followed by pen and papers to sign. Should I shake that hand, that quirky little smile will appear, as he knows all the problems I am gonna have if I let him so much as touch that engine. Problems that will guarantee him a very profitable stream of future revenue.)

        That's called "Marketing". The rest of us call it Planned Deception. People actually get degrees in this.

        • (Score: 2) by SomeGuy on Thursday May 12 2022, @12:52AM

          by SomeGuy (5632) on Thursday May 12 2022, @12:52AM (#1244228)

          Well, apparently now it is "nobody ever got fired for using AI".

          With traditional, properly engineered code, one can point a finger at business requirements specified by a specific person, a piece of code written by a specific person, or a failure to audit the system by a specific person.

          With AI, none of that exists. It just "learns" from whatever crap is thrown at it, nobody has any idea what it is really doing, and nobody cares. So when it goes "kill all humans" or the like, it's ostensibly not the fault of any one specific person.

      • (Score: 2) by JoeMerchant on Thursday May 12 2022, @10:50AM (5 children)

        by JoeMerchant (3937) on Thursday May 12 2022, @10:50AM (#1244349)

        It's not so much a risk of AI as a prejudice of profiling of any kind. Everything from AI to old-school actuarial tables to cops on the beat choosing whom to pay attention to in their spare time is prejudicial profiling, and mostly illegal in any country that implements "innocent until proven guilty."

        This is also the basis of resistance to using genetic markers for any kind of profiling other than consensual, confidential medical applications. Genetic markers go far beyond skin color and sex, and while we might think we can correlate certain markers with traits like career aptitude, risk of addiction, violence, etc., pre-judging people based on what people with similar genes have done in the past has almost always been rejected as a bad way to govern society.

        --
        🌻🌻 [google.com]
        • (Score: 2) by Immerman on Thursday May 12 2022, @01:29PM (4 children)

          by Immerman (3985) on Thursday May 12 2022, @01:29PM (#1244376)

          Any sort of AI system is inherently profiling though. Heck, even human initial assessments are - you're judging new cases based on patterns (real, imagined, or malicious) you've seen in which old cases deserved closer attention.

          But an AI is incapable of digging deeper than the initial data, and incapable of disregarding data it's illegal to consider. Even if it's not provided with that specific data, the data is almost certainly reconstructable from the data it is provided. And if there was any unjustified profiling in the training data, that profiling will be silently incorporated into its assessment model. And let's be honest - the world being what it is, it's a near guarantee that there was profiling at play in the historical data it was trained on.

          • (Score: 2) by JoeMerchant on Thursday May 12 2022, @01:56PM (3 children)

            by JoeMerchant (3937) on Thursday May 12 2022, @01:56PM (#1244386)

            But an AI is incapable of digging deeper than the initial data, and incapable of disregarding data it's illegal to consider.

            False. An AI is capable of digging as deep as its creators design it to. They can provide AI with multiple layers of data and the AI can practice perfect restraint in ignoring the deeper layers until indications in the upper layers are seen which merit deeper digging. System designers (and maintainers) can absolutely restrict an AI from accessing any data that is illegal to consider simply by not providing it to the system for consideration. An AI isn't like a human that is incapable of "unseeing" illegal data. The old courtroom cheat of presenting impermissible evidence only to be objected to by the opposition, with the judge then directing the jury to "disregard the evidence just presented" doesn't work on AI, AI really will 100% disregard that which it is instructed to disregard.

            Now, are current AI practitioners implementing all of these best practices? Surely not, not all the time, and that's the basis of the current kerfuffle in AI development communities: what are the actual boundaries we should respect? Because, full access to all data, regardless of the legality or morality of using said data, will generally produce the most capable discrimination engines. That term: discriminator, is a very apt description of what AI does, it sorts photos of cats from photos of dogs, etc. It discriminates.

            Regarding prejudice: the whole of human society runs on prejudice. First impressions. Books judged by their covers, etc. Ain't nobody got time for a deeper dive on every decision they make day in and day out. This is a huge basis of how advertising / marketing works: show somebody images, sounds, smells, textures, etc. that they associate with whatever will influence them to buy a product, or service, or whatever. Face to face presentations fire all those dopamine releasing neurons and influence decisions about which product to buy. What we have discovered is: relying on prejudice as much as we would tend to without guidance tends to lead to worse outcomes overall.

            AI is a new form of prejudice, one based on cold sets of curated data. That data should be cleansed of information "irrelevant" to the decision the AI is guiding. And, as this whole discussion keeps reiterating: especially at this stage of development, AI should be guiding decisions, not making them.

            --
            🌻🌻 [google.com]
            • (Score: 2) by Immerman on Thursday May 12 2022, @02:24PM (2 children)

              by Immerman (3985) on Thursday May 12 2022, @02:24PM (#1244401)

              What I meant was, the AI can dig no deeper than the data it is provided - unlike a human who can seek out more data. (theoretically an AI could request more data - but that's not the normal way they're used)

              And I already addressed the "just don't give it the data that's illegal to consider" argument. That data is almost certain to correlate well with data it *is* given, so leaving it out is mostly irrelevant, as it was mostly redundant to begin with. A human is unlikely to even notice the minor details that reveal that data - but there are no minor details to an AI, only details that do or do not correlate with the biased patterns it's being trained to replicate.
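
              A toy demonstration of that leakage (Python with scikit-learn; the data, names, and rates are entirely invented):

                import numpy as np
                from sklearn.linear_model import LogisticRegression

                rng = np.random.default_rng(0)
                n = 10_000
                citizen = rng.integers(0, 2, n)             # protected attribute, withheld from training
                postcode = citizen + rng.normal(0, 0.3, n)  # "innocent" feature that correlates with it
                # Biased historical labels: non-citizens (0) flagged six times as often.
                flagged = (rng.random(n) < np.where(citizen == 0, 0.30, 0.05)).astype(int)

                model = LogisticRegression().fit(postcode.reshape(-1, 1), flagged)
                for c in (0, 1):
                    p = model.predict_proba(postcode[citizen == c].reshape(-1, 1))[:, 1].mean()
                    print(f"citizen={c}: mean predicted risk {p:.2f}")
                # The model never saw citizenship, yet its scores split along it anyway.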

              • (Score: 2) by JoeMerchant on Thursday May 12 2022, @02:43PM (1 child)

                by JoeMerchant (3937) on Thursday May 12 2022, @02:43PM (#1244404)

                The old Target stores example of a father learning his daughter was pregnant from Target marketing is priceless. I think it was based on actual purchase patterns, but it could as easily have been based on web browsing habits: the AI predicted, correctly, that the daughter was with child, and spilled the beans with some direct-to-pregnant-women marketing that the father saw. AI is capable of these "Sherlock Holmes" style inferences due in large part to the inhuman volume of "publicly available" information it can have access to. In theory, license plate scanners violate no privacy laws - anybody can sit anywhere they like and write down the plate numbers of cars that pass them on public roads - but putting automatic plate readers on police cruisers and developing databases tracking which plate was where at what time, all over the city, for years, is an unprecedented capability open to all kinds of fishing-expedition abuses.

                What we need are new definitions of what is publicly available information taking into account the changes in technology since the existing definitions were drafted, and that's going to be a hard fought battle because both sides have a lot to gain or lose in the outcomes.

                Meanwhile, nasty bureaucrats will be nasty bureaucrats right up to and past the limits of what's presently legally allowed.

                --
                🌻🌻 [google.com]
                • (Score: 1, Interesting) by Anonymous Coward on Friday May 13 2022, @01:01AM

                  by Anonymous Coward on Friday May 13 2022, @01:01AM (#1244635)

                  What has always bugged me about that Target story is that you're never told how many mailings went out to people who weren't pregnant. That one story got held up as the poster child for businesses to throw lots of money into data mining, but if nine other people got the same mailings and weren't expecting, it isn't very impressive. It is easy to remember the interesting outliers, but dangerous to treat them as the mean, precisely because they are the ones you remember.

                  It isn't much different than the old scam where you mail out free stock tips to 1024 houses, where in half of them you predict a stock (or, I suppose more appropriate for these days, the price of bitcoin) will go up in value, and in the other half down. Then the next week you mail out another tip to the 512 houses where you predicted correctly. Then the next week you mail out to the 256 houses, etc. By the time you get near the end, you offer your (expensive) services to the last one (or two, or four), because they're convinced how good you are: every week you've sent them tips that were correct. Not saying the Target story was a scam (though when marketing people push stories like that, I'm usually on guard for at least a lot of exaggeration), but the Target story is so memorable that it is the one you'd remember, and not the other angry fathers who yelled at the marketing department because the daughter wasn't pregnant.
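
                  The arithmetic of the scam, sketched in Python:

                    houses, week = 1024, 0
                    while houses > 1:
                        # Tell half "up" and half "down"; whichever way the price
                        # moves, half the recipients just saw a correct prediction.
                        houses //= 2
                        week += 1
                        print(f"week {week}: {houses} households have seen {week} straight hits")
                    # After week 10, one household has watched you call it right
                    # ten times running - pure selection, zero skill.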

                  We're many years removed from that, and I can tell you that I am not too terribly impressed with the items that get recommended to me by various algorithms. (A few years back YouTube kept showing me ads for products/services to stop smoking, when neither I nor anyone in my household has ever smoked a day in our lives!)

    • (Score: -1, Troll) by Anonymous Coward on Wednesday May 11 2022, @11:13PM (2 children)

      by Anonymous Coward on Wednesday May 11 2022, @11:13PM (#1244201)

      Oh look, the expected nazi commentary was the first post!? Straight to jail for the white supremacist! Maybe just reeducation camp if the incel doesn't have a history of violence?

      • (Score: 2) by ewk on Thursday May 12 2022, @07:31AM (1 child)

        by ewk (5923) on Thursday May 12 2022, @07:31AM (#1244322)

        Assuming 'non-Dutch' means 'white' seems pretty racist to me... but that's exactly what you are doing with your comment.

        --
        I don't always react, but when I do, I do it on SoylentNews
        • (Score: 2) by ewk on Thursday May 12 2022, @07:34AM

          by ewk (5923) on Thursday May 12 2022, @07:34AM (#1244323)

          F*ck... 'not white' of course... not 'white'...
          Aargh. Need more coffee.

          --
          I don't always react, but when I do, I do it on SoylentNews
    • (Score: 0) by Anonymous Coward on Thursday May 12 2022, @09:07AM

      by Anonymous Coward on Thursday May 12 2022, @09:07AM (#1244341)

      > The model saw not being a Dutch citizen as a risk factor.

      Um... perhaps because it actually *was* a risk factor?

      Point is, even if you had Dutch nationality but a "foreign"-sounding name, you could still be flagged by this system as a risk.

  • (Score: 2, Insightful) by Anonymous Coward on Wednesday May 11 2022, @09:54PM (11 children)

    by Anonymous Coward on Wednesday May 11 2022, @09:54PM (#1244180)

    civil servants rubber-stamped the fraud labels

    The "AI" is an excuse. Impunity of bureaucrats is the problem.

    • (Score: 4, Insightful) by pe1rxq on Wednesday May 11 2022, @10:00PM (3 children)

      by pe1rxq (844) on Wednesday May 11 2022, @10:00PM (#1244182) Homepage

      Exactly, this article is wrong on many levels.
      The reality is that the Dutch tax service did not need any AI at all. Regular people working as civil servants were perfectly willing to discriminate against anybody who was remotely 'foreign'. The whole organisation, top to bottom, knew this and promoted it. All because politicians wanted to be tough on fraud. The only problem was that they had no real evidence of fraud - but just being on a list was enough. Don't blame AI, blame the very real humans who thought this was acceptable.

      • (Score: 0) by Anonymous Coward on Wednesday May 11 2022, @11:03PM (1 child)

        by Anonymous Coward on Wednesday May 11 2022, @11:03PM (#1244196)

        Don't look this way. We've got lots of "concerns" over non-existent voter fraud that one party is "fixing" (in both the literal and figurative sense).

      • (Score: 2) by janrinok on Thursday May 12 2022, @07:53AM

        by janrinok (52) Subscriber Badge on Thursday May 12 2022, @07:53AM (#1244327) Journal

      ... and coming from a Dutchman (I am assuming from the username) you should know. It is the politicians, not those who have to carry out the policy, that created the problem. The latter did not help, though, by just following the AI and not questioning its output as they should have done.

    • (Score: 2) by HiThere on Thursday May 12 2022, @12:14AM (4 children)

      by HiThere (866) Subscriber Badge on Thursday May 12 2022, @12:14AM (#1244217) Journal

      The impunity of the bureaucrats does not excuse the blatant bias of the AI. There's plenty of blame to go around, if that's what you want.

      OTOH, perhaps the system was working just as those who put it together wanted. (I said "system", not program, because I meant the entire system including the bureaucrats, their supervisors, etc.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by DeVilla on Thursday May 12 2022, @12:24AM (2 children)

        by DeVilla (5354) on Thursday May 12 2022, @12:24AM (#1244219)

        Those claims passed through the gauntlet of a self-learning algorithm,

        Machine learning only learns if you tell it when it's wrong and/or when it's right. If you don't tell it when it's wrong, it'll assume it got it right. Just like a kid
        ... or a dog.

        • (Score: 2) by HiThere on Thursday May 12 2022, @03:16AM (1 child)

          by HiThere (866) Subscriber Badge on Thursday May 12 2022, @03:16AM (#1244287) Journal

          The thing is, a lot of AI algorithms are explicitly designed NOT to learn once they're out in the field. So even if the users of the system gave it feedback, that might well not have changed the results. We'd need to know a bit more about this particular example, but it might well have been designed that way.
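
          In code, the distinction is stark. A toy sketch (invented names) of the common train-then-freeze deployment pattern:

            class FraudScorer:
                def __init__(self, weights):
                    self.weights = weights          # fixed when training ends

                def score(self, features):
                    # Inference only: a weighted sum of the claim's features.
                    return sum(w * x for w, x in zip(self.weights, features))

                def feedback(self, features, was_actually_fraud):
                    # Deliberately a no-op in the field: caseworker corrections
                    # never reach the weights unless someone retrains and redeploys.
                    pass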

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 2) by JoeMerchant on Thursday May 12 2022, @02:06PM

            by JoeMerchant (3937) on Thursday May 12 2022, @02:06PM (#1244394)

            TFS says it was a learning algorithm... which implies that the data it was fed over time influenced its later outcomes.

            --
            🌻🌻 [google.com]
      • (Score: 2) by JoeMerchant on Thursday May 12 2022, @10:56AM

        by JoeMerchant (3937) on Thursday May 12 2022, @10:56AM (#1244350)

        AI itself isn't an appropriate target for blame. The people who programmed and accepted the AI for use are very much to blame. "We don't really know how it works inside" is no excuse, actually it makes the decision makers more culpable.

        --
        🌻🌻 [google.com]
    • (Score: 2) by driverless on Thursday May 12 2022, @09:23AM (1 child)

      by driverless (4770) on Thursday May 12 2022, @09:23AM (#1244342)

      the so-called kinderopvangtoeslagaffaire:

      In response, the Germans brewed up their own Rhabarberbarbarabarbarbarenbartbarbierbier affair. Upon seeing this, the French also had a go but could only come up with Merde.

      • (Score: 0) by Anonymous Coward on Thursday May 12 2022, @04:02PM

        by Anonymous Coward on Thursday May 12 2022, @04:02PM (#1244442)

        And foreign media like the long compound words so much that they pick the one that makes you wonder whether people really use them. Generally, they don't. While "kinderopvangtoeslagaffaire" is correct, it is almost always called "toeslagenaffaire" ("benefits affair").

  • (Score: 3, Informative) by Anonymous Coward on Wednesday May 11 2022, @10:56PM (2 children)

    by Anonymous Coward on Wednesday May 11 2022, @10:56PM (#1244194)

    ... called RoboDebt, except the corrupt government did not fall and the minister responsible is now Prime Minister.

    See https://en.wikipedia.org/wiki/Robodebt_scheme [wikipedia.org]

  • (Score: 4, Informative) by Rosco P. Coltrane on Wednesday May 11 2022, @11:21PM (2 children)

    by Rosco P. Coltrane (4757) on Wednesday May 11 2022, @11:21PM (#1244203)

    There's no recourse, because there's no actual human being to talk to anymore.

    This goes for everything these days:

    - if your Paypal or Facebook account is suspended...
    - If you get an automated speeding ticket...
    - If CloudFlare thinks your corner of the internet is dodgy and cuts you off from half the sites everybody else can visit...
    - If Discord thinks your IP is dodgy and serves you captcha after captcha to log in, even if you have 2FA enabled...
    - If Mastercard or Visa thinks buying gasoline then hard liquor immediately after in another country indicates your credit card was stolen and disables it...
    - If you live in Scunthorpe [wikipedia.org] and your emails never arrive...

    Good luck fighting the decision. There's no-one to call. There's no email address to send a complaint to. The services that offer a support form send you a boilerplate "Thank you for your inquiry" reply and a case number, and the case never gets processed because nobody processes the tickets. Or worse: your ticket is processed by another AI that decides to send you the FAQ concerning whatever irrelevant keyword it scanned in your message.

    There was a time when it wasn't easy to get redress but it was possible. Nowadays, it just isn't anymore. If things go wrong for you, they stay wrong forever and there's nothing you can do about it. It's really maddening.

    • (Score: 2) by JoeMerchant on Thursday May 12 2022, @11:04AM

      by JoeMerchant (3937) on Thursday May 12 2022, @11:04AM (#1244352)

      The recourse is often to open a new account and start from scratch.

      Google has marked my 20-year-old account as persona non grata with YouTube due to a copyright kerfuffle my 8-year-old got into with PBS while using my account to post videos he was making. That was 10 years ago and the YouTube ban still stands on my account - but not against my now-18-year-old son's personal account, on which he continues to post videos that should piss off PBS IP protectors even more. The system seems to have chilled out on new account bans lately. Still zero avenues for appeal on my banned account.

      --
      🌻🌻 [google.com]
    • (Score: 2) by JoeMerchant on Thursday May 12 2022, @02:09PM

      by JoeMerchant (3937) on Thursday May 12 2022, @02:09PM (#1244396)

      If you get an automated speeding ticket...

      I don't know about all jurisdictions, or even Phoenix today, but in Phoenix some years back they were issuing all kinds of automated speeding tickets and then not defending ANY of them in court. The thinking was: if they fought in court and lost, they might face a class-action claw-back of all fines collected to date, or at least be forced to stop issuing robotickets. So, if you got one, you had to go to court to fight it, but it was a slam-dunk no-fine, no-points outcome if you did. Sleazy in the extreme on the part of the Phoenix police, but that's one way to play our system and still make tons of money from the people who can't be bothered to fight in court.

      --
      🌻🌻 [google.com]
  • (Score: 5, Informative) by shrewdsheep on Thursday May 12 2022, @07:26AM (3 children)

    by shrewdsheep (5215) on Thursday May 12 2022, @07:26AM (#1244321)

    Living in the Netherlands, I can comment a bit on the affair. The whole premise of the article is false, unfortunately. The affair had been lingering for more than a year already, and the incompetence of the government and official institutions was blatantly obvious. Still, the government could have chosen not to resign, given that the affected group of people is powerless. The resignation came roughly three months before the planned elections. This was convenient, as nothing changed: the government simply became the "acting" government instead. The move was purely tactical, to remove the topic from the election. Quite successfully, it turns out, as the governing parties got reelected.

    So nothing to see here, except the usual deplorable political moves.

    • (Score: 5, Informative) by inertnet on Thursday May 12 2022, @08:13AM (1 child)

      by inertnet (4071) on Thursday May 12 2022, @08:13AM (#1244332) Journal

      You can add that there were early warnings that got ignored. From 2019 onward the warnings got louder and louder, and the government's response was to cover things up. All the while, people (indeed mostly immigrants) had everything taken away, sometimes even their children. They were completely ruined; some committed suicide, and the others will suffer for the rest of their lives, even if they eventually get compensated. The political elite is largely responsible for letting it drag on as it still does, yet they got reelected anyway.

      • (Score: 2) by JoeMerchant on Thursday May 12 2022, @11:06AM

        by JoeMerchant (3937) on Thursday May 12 2022, @11:06AM (#1244353)

        Cover things up, stall for time, let it blow up at an advantageous moment... Many successful politicians are also lawyers, and it shows.

        --
        🌻🌻 [google.com]
    • (Score: 1, Interesting) by Anonymous Coward on Thursday May 12 2022, @09:03AM

      by Anonymous Coward on Thursday May 12 2022, @09:03AM (#1244339)

      Resignation of the government came roughly three month prior to the planned elections.

      Being Dutch myself: this is quite common if you look at past "end-of-term" situations in the Netherlands, and I expect it also happens in other countries. There is a reason why it happens: three months is enough time for people to forget why the government resigned (I think there was a study finding that a political situation holds its effect for about two weeks before people stop caring about it in the voting booth). Even if people remember, you can blame it on the other parties.

  • (Score: 2) by PiMuNu on Thursday May 12 2022, @10:58AM (5 children)

    by PiMuNu (3823) on Thursday May 12 2022, @10:58AM (#1244351)

    Artificial Stupidity

    • (Score: 2) by JoeMerchant on Thursday May 12 2022, @11:21AM (4 children)

      by JoeMerchant (3937) on Thursday May 12 2022, @11:21AM (#1244354)

      When I was 17 I had a summer job in a factory. One of the things they gave me was a bag with thousands of screws, made by gluing a plastic head onto a bit of aluminum threaded rod. 99+% were crooked; my job was to pull out 50 straight ones. Stupid job, perfect for AI.

      Pathologists screen millions of pap smears for signs of cancer every year (probably every day, globally)... This is also an excellent job for AI to assist with, but in a very different way.

      The screws went into a circuit breaker that goes in an airplane. If you find a straight one, you can verify it is straight within acceptable tolerance, and using it will not subject the aircraft to unacceptable risk of screw failure.

      With the pap smears, letting a single cancerous slide slip by undetected would put someone's life at risk. Better to put 100 false positives in front of the human pathologists than to let one false negative get through.

      One could argue that the Dutch fraud screening AI should run more like the straight screw detector, but of course conservative politics would have it run more like the cancer screening tool.
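
      In threshold terms (a Python sketch with invented names; scores are the model's confidence that an item is "bad", labels the known ground truth):

        import numpy as np

        def pick_threshold(scores, labels, mode):
            """Screw-sorting wants precision: act only on near-certain hits.
            Slide-screening wants recall: flag anything remotely suspicious."""
            if mode == "precision":
                # Accept only the 50 most confident positives (the 50 straight screws).
                return np.sort(scores)[::-1][min(49, len(scores) - 1)]
            else:  # "recall"
                # Go just below the lowest-scoring known positive, so that
                # nothing cancerous slips under the cutoff.
                return scores[labels == 1].min() - 1e-9

      Same model, opposite thresholds: one minimizes wasted review effort, the other minimizes missed positives.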

      I would say that UBI would eliminate the entire question of fraud, making the whole endeavor irrelevant.

      --
      🌻🌻 [google.com]
      • (Score: 2) by PiMuNu on Thursday May 12 2022, @09:38PM (2 children)

        by PiMuNu (3823) on Thursday May 12 2022, @09:38PM (#1244592)

        To be clear, I don't object to using computers to do things - my objection is semantic; it ain't "Artificial Intelligence", it's just heuristic pattern-finding. My calculator isn't AI, my internet browser isn't AI, Google search isn't AI.

        • (Score: 2) by JoeMerchant on Friday May 13 2022, @12:23AM (1 child)

          by JoeMerchant (3937) on Friday May 13 2022, @12:23AM (#1244620)

          Things like AlphaGo are "just heuristics", but they are certainly approaching a form of intelligence in limited areas, and they are orders of magnitude more complex than a calculator.

          --
          🌻🌻 [google.com]
          • (Score: 0) by Anonymous Coward on Friday May 13 2022, @01:05AM

            by Anonymous Coward on Friday May 13 2022, @01:05AM (#1244638)

            I found DeepMind: The Podcast [deepmind.com] to be very interesting. It covers all of these kinds of issues, including the goal of achieving artificial general intelligence and what that might look like and mean. A great way to pass the time on my daily commute.

      • (Score: 0) by Anonymous Coward on Friday May 13 2022, @01:11AM

        by Anonymous Coward on Friday May 13 2022, @01:11AM (#1244639)

        One thing about UBI that I don't understand is: won't it just cause inflation and generally raise prices? Is it the absolute value of the money that matters, or the delta between some baseline and what things cost? If you raise the baseline and costs generally rise by the same amount, aren't you back where you started?
