

posted by janrinok on Monday July 21 2014, @06:02PM
from the future-to-avoid? dept.

Tech pioneers in the US are advocating a new data-based approach to governance - 'algorithmic regulation'. But if technology provides the answers to society's problems, what happens to governments?

What is Algorithmic Regulation? Well, here and here are two attempts to explain it. For example: the "smartification" of everyday life follows a familiar pattern: there's primary data - a list of what's in your smart fridge and your bin - and metadata - a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses - one recent model promises to track respiration and heart rates and how much you move during the night - and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be - to use the buzzwords of the day - "evidence-based" and "results-oriented," technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term "web 2.0") has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation - a case that deserves close scrutiny both for what it promises policy-makers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can't write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it's time to find another rule for finding a good rule - and so on. An algorithm can do this, but it's the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it's not just spam: your bank uses similar methods to spot credit-card fraud.
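The feedback loop the paragraph describes can be sketched as a tiny naive Bayes classifier that retrains on every "mark as spam" click. This is a simplified stand-in, not the far more elaborate systems Google or a bank actually run; the class name, method names, and training phrases are all invented for illustration:

```python
# Minimal sketch of feedback-driven spam filtering: the "rules" are not
# hand-written but re-derived from user feedback on each message.
import math
from collections import Counter

class FeedbackSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def feedback(self, text, label):
        """A user marks a message as 'spam' or 'ham'; the model updates in place."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def spam_probability(self, text):
        """Posterior probability the message is spam, with Laplace smoothing."""
        total = sum(self.msg_counts.values())
        if total == 0:
            return 0.5  # no feedback yet: no opinion
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # smoothed log prior plus per-word log likelihoods
            log_p = math.log((self.msg_counts[label] + 1) / (total + 2))
            denom = sum(self.word_counts[label].values()) + len(vocab) + 1
            for word in text.lower().split():
                log_p += math.log((self.word_counts[label][word] + 1) / denom)
            scores[label] = log_p
        # normalise the two log scores into a probability
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = FeedbackSpamFilter()
f.feedback("win free money now", "spam")
f.feedback("meeting agenda for monday", "ham")
print(f.spam_probability("free money"))  # spam-like words dominate the score
```

The point of the sketch is the shape of the loop: classification and rule-making live in the same object, so every user correction immediately shifts future decisions, which is exactly the property O'Reilly wants to generalise from spam filters to regulation.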

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: "Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator."

 
  • (Score: 0) by Anonymous Coward on Tuesday July 22 2014, @12:49AM (#72092)

    > can you find anyone that says "more homelessness is better", "the real problem with this country is that
    > there's not enough teen pregnancy", "things were better when crime rates were higher"?

    Not exactly, but close. There are lots of people who believe that homeless people want to be homeless, that teen pregnancy is a just punishment for loose morals, and that people who live in high-crime neighborhoods are of poor character themselves and so deserve to be there.

  • (Score: 1) by MBasial (1910) on Tuesday July 22 2014, @07:17PM (#72418)

    Agreed, but I think the "not exactly" is really important. I think (hope?) that the strength of algorithmic government will be that those "just punishment" folks will be forced to make those desires explicit, or not have them considered. I can see almost anyone standing up and saying "I support policies that lead to less teen pregnancy", but it takes a lot more to stand up and say "I support policies that result in more teen pregnancy, because those policies support my goal of punishing sexual transgression."

    At the moment, they have the cover of abusing statistics when choosing policies (e.g. abstinence is 100% effective if you don't count the failures to abstain; then again, the Pill also gains several percentage points of effectiveness if we assume humans don't do human things). I am assuming that the machine will be instructed to choose or create policies that are effective. If abstinence-only education leads to fewer teen pregnancies, then that's what the machine will choose. I feel like I've seen data suggesting the machine would be choosing another approach.

    I'd like to think that in the US, separation of church and state rules will exclude a lot of the "just punishment" issues from the algorithm. Some folks will try, I'm sure, but it's one thing to have agreement on a policy with a side-effect of punishing teen moms, and another to have agreement on a policy of explicit punishment. Without the explicit punishment goals, the algorithm is going to try for a comfortable life for everyone, including teen moms. In my imagining of the situation, the machine is allowed to notice whether income support for single moms or desperate poverty results in better outcomes for kids. I don't think there are a lot of people willing to ask the machine to include "The single mom's sinfulness requires that she be punished so harshly that the child has a 10% greater chance of being poor at age 30 (or whatever outcome pattern the machine sees)."

    In other words, I expect algorithmic government to get a lot of foolishness out of our rule-making process. We currently establish a lot of rules based on how we think the world should work (e.g. poverty creates incentive to work). I expect the machine will have a better handle on how the world works (e.g. poverty is more often stressful, demoralizing, and demotivating), and that it will be gloriously ignorant about how it ought to work.