
posted by janrinok on Monday July 21 2014, @06:02PM   Printer-friendly
from the future-to-avoid? dept.

Tech pioneers in the US are advocating a new data-based approach to governance - 'algorithmic regulation'. But if technology provides the answers to society's problems, what happens to governments?

What is Algorithmic Regulation? Well, here and here are two attempts to explain it. For example: the "smartification" of everyday life follows a familiar pattern: there's primary data - a list of what's in your smart fridge and your bin - and metadata - a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses - one recent model promises to track respiration and heart rates and how much you move during the night - and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be - to use the buzzwords of the day - "evidence-based" and "results-oriented," technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term "web 2.0") has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation - a case that deserves close scrutiny both for what it promises policy-makers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can't write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it's time to find another rule for finding a good rule - and so on. An algorithm can do this, but it's the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it's not just spam: your bank uses similar methods to spot credit-card fraud.
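The feedback loop described above can be sketched in a few lines. This is an illustrative reconstruction of the general technique, not Google's actual system; the class and its workings are made up for the example:

```python
from collections import defaultdict

class FeedbackSpamFilter:
    """Toy spam filter: users' 'report spam' clicks retrain it continuously."""

    def __init__(self):
        self.spam_counts = defaultdict(int)  # word -> times seen in spam
        self.ham_counts = defaultdict(int)   # word -> times seen in ham
        self.spam_total = 0
        self.ham_total = 0

    def learn(self, message, is_spam):
        """User feedback: each 'report spam' / 'not spam' click lands here."""
        counts = self.spam_counts if is_spam else self.ham_counts
        for word in message.lower().split():
            counts[word] += 1
        if is_spam:
            self.spam_total += 1
        else:
            self.ham_total += 1

    def spam_score(self, message):
        """Crude per-word likelihood ratio; above 1.0 means 'looks like spam'."""
        score = 1.0
        for word in message.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            p_spam = (self.spam_counts[word] + 1) / (self.spam_total + 2)
            p_ham = (self.ham_counts[word] + 1) / (self.ham_total + 2)
            score *= p_spam / p_ham
        return score
```

The point of the design is that no engineer enumerates spam patterns in advance: every user click calls `learn()`, so the rule set effectively rewrites itself as spammers adapt.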

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: "Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator."

 
  • (Score: 2, Insightful) by Anonymous Coward on Monday July 21 2014, @08:52PM


    I'll go ahead and take the bait.

    The thing we MOST need is a technocrat-dominated government. Because once you trend above a few million people, the answer to "What is best for society?" should come down to hard numbers, public health, and epidemiology - NOT knee jerk reactions to tearful anecdotes. I assume this is what you meant when you made the unsubstantiated claim about "not understanding the human condition."

    Put differently, if the data says saving 0.00003% of the populace would be possible given 10% of the entire society's GDP, that shit is a non-starter. Sorry. We have to progress as a species. The one and only way to get a good government is to get informed people to vote. That is precluded today thanks to media control and the objectively, verifiably proven, broken two-party system.
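The commenter's 0.00003% figure can be made concrete with a back-of-envelope calculation. The population and GDP values below are rough 2014-era US figures chosen only for illustration; they are assumptions, not numbers from the comment:

```python
# Back-of-envelope check of the comment's hypothetical, using rough
# 2014-era US figures (population ~318 million, GDP ~$17 trillion).
population = 318_000_000
gdp = 17_000_000_000_000  # dollars

lives_saved = population * 0.00003 / 100  # 0.00003 percent of the populace
cost = gdp * 0.10                         # 10% of GDP

print(f"lives saved: {lives_saved:.0f}")              # about 95 people
print(f"cost per life: ${cost / lives_saved:,.0f}")   # roughly $18 billion each
```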

    Governing from data means ensuring that the incentives are done right, and everyone on welfare brings home additional net money for every penny they earn. Instead, thanks to mathematically inept laws piling on each other, single unemployed mothers in the USA have a net gain for each additional child they have out of wedlock and receive LESS net income if they start working at any salary under $70k. That is only one example. Another is having break points in the tax code; it should be a graduated curve.
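The welfare-cliff effect the comment describes can be sketched with made-up numbers. The $15,000 benefit, the $20,000 cutoff, and the 50-cent taper below are hypothetical values for illustration, not actual US policy:

```python
def net_income_cliff(earnings):
    """Hypothetical cliff: a flat $15,000 benefit that vanishes entirely
    the moment earnings cross $20,000 (illustrative numbers only)."""
    benefit = 15_000 if earnings < 20_000 else 0
    return earnings + benefit

def net_income_graduated(earnings):
    """Graduated alternative: the same benefit tapers off at 50 cents
    per dollar earned, so every extra dollar of work still pays."""
    benefit = max(0, 15_000 - 0.5 * earnings)
    return earnings + benefit

# Under the cliff, earning past the cutoff *reduces* take-home pay:
# $34,999 net at $19,999 of earnings vs $21,000 net at $21,000.
# Under the graduated curve, net income rises with every dollar earned.
```

The graduated version is what the comment means by "incentives done right": net income is a monotonically increasing function of earnings, so there is never a point where working more makes you poorer.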

    If you are against objectively good government as I just described, then yes I'm going to call you stupid or at least uninformed. Note that I am solidly, wholly against the NSA and invasion of privacy. This entire mini-rant is basically to get you to disassociate governing using objective principles and data from the creeping tyranny of the surveillance state. They are not (necessarily) the same. We desperately need the former, and desperately need to destroy the latter. Don't toss the baby out with the bath water.

  • (Score: 0) by Anonymous Coward on Monday July 21 2014, @11:17PM


    Yes, you took the bait hook, line, and sinker.

    "What is best for society?" should come down to hard numbers, public health, and epidemiology - NOT knee jerk reactions to tearful anecdotes.

    You sound like the kind of person who believes that "the numbers don't lie."

    It has nothing to do with "tearful anecdotes." What it does have to do with is who decides which numbers matter, and how to measure them. And that is something that geeks are universally terrible at. We see it with "teaching to the test" and even institutional cheating [newyorker.com] in schools; we see police regularly downgrading or outright ignoring crimes [chicagomag.com]; we even see benchmark "optimization" on computers [anandtech.com].

    As long as people are involved in the process, the numbers will lie because what to measure and how to measure it have always been and will always be the fundamental problem.

    • (Score: 1, Interesting) by Anonymous Coward on Tuesday July 22 2014, @09:14AM


      Even if the numbers are perfectly accurate and relevant, it may turn out that you optimize for the wrong values. That already happens in human-driven politics: confusing "good for the economy" with "good for big business", for example (not to mention that "good for the economy" is not always the same as "good for the people" either; slavery was certainly good for the economy of its time).

      And even if the numbers are absolutely accurate and relevant, and the goals are all good, the optimization algorithm may still reach unacceptable results because of some rule that was forgotten. As an extreme example, the algorithm may figure out that it is best for public health if all ill people, no matter how harmless their illness, are killed and their bodies immediately burned. After all, killing the ill reduces the fraction of ill people, and burning them immediately prevents their diseases from spreading further.
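The thought experiment reduces to a few lines of code: a metric, a single allowed move, and an optimizer that dutifully drives the metric to its optimum. Everything here is a deliberately absurd toy mirroring the comment's example, not a real system:

```python
def illness_fraction(healthy, ill):
    """The only thing the regulator is told to minimize."""
    total = healthy + ill
    return ill / total if total else 0.0

def unconstrained_optimizer(healthy, ill):
    """Applies the one move it knows until the metric is optimal.
    Nobody wrote the rule that 'removing people' is not an allowed move."""
    while ill > 0:
        ill -= 1  # each "removal" strictly improves the metric
    return illness_fraction(healthy, ill)

# Starts at 10/100 = 0.1 and ends at a perfect 0.0, by horrifying means.
print(unconstrained_optimizer(90, 10))
```

By the metric it was given, the algorithm has succeeded completely; only the forgotten rule makes the outcome unacceptable, which is exactly the comment's point.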

      That's why we need humans in the loop. Not because humans are inherently better at optimizing for a goal (they probably aren't), but because when something goes horribly wrong, humans can recognize it and change their rules, while the computer will conclude that, according to the algorithm it was given, everything is going well.