
posted by janrinok on Monday July 21 2014, @06:02PM   Printer-friendly
from the future-to-avoid? dept.

Tech pioneers in the US are advocating a new data-based approach to governance - 'algorithmic regulation'. But if technology provides the answers to society's problems, what happens to governments?

What is Algorithmic Regulation? Well, here and here are two attempts to explain it. For example: the "smartification" of everyday life follows a familiar pattern: there's primary data - a list of what's in your smart fridge and your bin - and metadata - a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses - one recent model promises to track respiration and heart rates and how much you move during the night - and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be - to use the buzzwords of the day - "evidence-based" and "results-oriented," technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term "web 2.0") has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation - a case that deserves close scrutiny both for what it promises policy-makers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can't write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it's time to find another rule for finding a good rule - and so on. An algorithm can do this, but it's the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it's not just spam: your bank uses similar methods to spot credit-card fraud.
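The feedback loop described above - classify, collect user corrections, re-weight the rules - can be sketched in a few lines. This is a minimal illustration, not how Google's filter actually works: it assumes a naive Bayes classifier where each "mark as spam" or "not spam" click immediately updates the word statistics the filter uses.

```python
from collections import defaultdict
import math


class FeedbackSpamFilter:
    """Toy naive Bayes spam filter retrained by user feedback in real time."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.msg_counts = {"spam": 0, "ham": 0}

    def feedback(self, message, label):
        # Each user correction ("spam" or "ham") updates the model immediately -
        # this is the real-time feedback loop the article describes.
        self.msg_counts[label] += 1
        for word in message.lower().split():
            self.word_counts[label][word] += 1

    def is_spam(self, message):
        total = self.msg_counts["spam"] + self.msg_counts["ham"]
        vocab = len(self.word_counts["spam"]) + len(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # Laplace smoothing keeps unseen words from zeroing the probability.
            score = math.log((self.msg_counts[label] + 1) / (total + 2))
            n = sum(self.word_counts[label].values())
            for word in message.lower().split():
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (n + vocab + 1))
            scores[label] = score
        return scores["spam"] > scores["ham"]
```

The point of the sketch is that no rule is hard-coded: the filter's behaviour is entirely a function of accumulated user feedback, so it adapts to spam tactics its designers never anticipated - which is exactly the property O'Reilly wants to transplant into regulation.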

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, at roughly the same time as The Automated State, put it best: "Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator."

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by eapache on Monday July 21 2014, @06:25PM

    by eapache (3822) on Monday July 21 2014, @06:25PM (#71925)

    This ignores the fact that the underlying disagreements in politics are all philosophical, not practical. Nobody is against picking the best means, the problem is in choosing which ends to aim for (and which means are not acceptable because of other philosophical reasons).

  • (Score: 3, Insightful) by Rune of Doom on Monday July 21 2014, @06:57PM

    by Rune of Doom (1392) on Monday July 21 2014, @06:57PM (#71943)

    I agree that "algorithmic regulation" isn't actually a solution, but our current problems aren't only philosophical disagreements. They're underlying issues like wealth inequality, corruption, and total government capture by the 'too rich for anyone's good' class of companies and individuals. My gut reaction to "algorithmic regulation" is that, if implemented, it will be government of the too-rich, by the rich, and for the rich - but much less problematical for them than this messy pseudo-democracy we have now.

    • (Score: 1) by Delwin on Monday July 21 2014, @09:35PM

      by Delwin (4554) on Monday July 21 2014, @09:35PM (#72029)

      Those in power will fight algorithmic regulation tooth and nail because it removes the power that they currently have.

      • (Score: 2) by dry on Tuesday July 22 2014, @03:07AM

        by dry (223) on Tuesday July 22 2014, @03:07AM (#72119) Journal

        Or perhaps subvert it. "See, the algorithm says that trickle-down economics is the best system to enrich society and that only the lower classes should be taxed." Which is the problem: who decides which algorithm to use?

      • (Score: 2) by etherscythe on Tuesday July 22 2014, @11:11PM

        by etherscythe (937) on Tuesday July 22 2014, @11:11PM (#72533) Journal

        Look at the stock market and tell me that again. The rich OWN the algorithms of the system. The only change that would occur is that they would, somehow, become even more powerful, for probably less-obvious and less-articulable reasons.

        --
        "Fake News: anything reported outside of my own personally chosen echo chamber"
  • (Score: 1) by MBasial on Tuesday July 22 2014, @12:19AM

    by MBasial (1910) on Tuesday July 22 2014, @12:19AM (#72084)

    My experience is that folks want the same ends -- can you find anyone that says "more homelessness is better", "the real problem with this country is that there's not enough teen pregnancy", "things were better when crime rates were higher"? The disagreement is how to get there. Do we make being poor very uncomfortable, so people will go get jobs? Or do we provide a safety net so that people can look for their next paying gig without worrying that they'll be choosing between food and shelter at the end of the month? Sex ed or abstinence-only? Punish prostitutes or johns? Arrest dealers, arrest users, disrupt foreign manufacturing, or legalize?

    That said, I'm sure there are folks today that say they want one thing ("less teen pregnancy") but actually mean another ("more obedience to my god"). They'll have to ask for those currently-unstated desires to be included in the algorithm, if they want them considered by the machine. I bet we can all manage to agree on and achieve "less teen pregnancy" long before any particular religion/political party gets its recruiting needs incorporated into the algorithm.

    • (Score: 0) by Anonymous Coward on Tuesday July 22 2014, @12:49AM

      by Anonymous Coward on Tuesday July 22 2014, @12:49AM (#72092)

      > can you find anyone that says "more homelessness is better", "the real problem with this country is that
      > there's not enough teen pregnancy", "things were better when crime rates were higher"?

      Not exactly, but close. There are lots of people who believe that homeless people want to be homeless, that teen pregnancy is a just punishment for loose morals, and that people who live in high-crime neighborhoods are of poor character themselves and so deserve to be there.

      • (Score: 1) by MBasial on Tuesday July 22 2014, @07:17PM

        by MBasial (1910) on Tuesday July 22 2014, @07:17PM (#72418)

        Agreed, but I think the "not exactly" is really important. I think (hope?) that the strength of algorithmic government will be that those "just punishment" folks will be forced to make those desires explicit, or not have them considered. I can see almost anyone standing up and saying "I support policies that lead to less teen pregnancy", but it takes a lot more to stand up and say "I support policies that result in more teen pregnancy, because those policies support my goal of punishing sexual transgression."

        At the moment, they have the cover of abusing statistics when choosing policies (e.g. abstinence is 100% effective if you don't count the failures to abstain; well, the Pill also gains several percentage points of effectiveness if we assume humans don't do human things). I am assuming that the machine will have instructions to choose/create policies that are effective. If abstinence-only education leads to fewer teen pregnancies, then that's what the machine will choose. I feel like I've seen data that says the machine will be choosing another approach.

        I'd like to think that in the US, separation of church and state rules will exclude a lot of the "just punishment" issues from the algorithm. Some folks will try, I'm sure, but it's one thing to have agreement on a policy with a side-effect of punishing teen moms, and another to have agreement on a policy of explicit punishment. Without the explicit punishment goals, the algorithm is going to try for a comfortable life for everyone, including teen moms. In my imagining of the situation, the machine is allowed to notice whether income support for single moms or desperate poverty results in better outcomes for kids. I don't think there are a lot of people willing to ask the machine to include "The single mom's sinfulness requires that she be punished so harshly that the child has a 10% greater chance of being poor at age 30 (or whatever outcome pattern the machine sees)."

        In other words, I expect algorithmic government to get a lot of foolishness out of our rule-making process. We currently establish a lot of rules based on how we think the world should work (e.g. poverty creates incentive to work). I expect the machine will have a better handle on how the world works (e.g. poverty is more often stressful, demoralizing, and demotivating), and that it will be gloriously ignorant about how it ought to work.