
SoylentNews is people

posted by janrinok on Monday July 21 2014, @06:02PM   Printer-friendly
from the future-to-avoid? dept.

Tech pioneers in the US are advocating a new data-based approach to governance - 'algorithmic regulation'. But if technology provides the answers to society's problems, what happens to governments?

What is Algorithmic Regulation? Well, here and here are two attempts to explain it. For example: the "smartification" of everyday life follows a familiar pattern: there's primary data - a list of what's in your smart fridge and your bin - and metadata - a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses - one recent model promises to track respiration and heart rates and how much you move during the night - and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be - to use the buzzwords of the day - "evidence-based" and "results-oriented," technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term "web 2.0") has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation - a case that deserves close scrutiny both for what it promises policy-makers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can't write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it's time to find another rule for finding a good rule - and so on. An algorithm can do this, but it's the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it's not just spam: your bank uses similar methods to spot credit-card fraud.
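The feedback loop described above - users label messages, the filter relearns, new spam tactics get caught without anyone writing a new rule - can be sketched as a toy naive Bayes classifier. This is purely illustrative (no real mail provider's implementation; all class and variable names are invented here):

```python
from collections import defaultdict
import math

class FeedbackSpamFilter:
    """Toy naive Bayes filter that relearns from user feedback.

    Each "mark as spam" / "not spam" click becomes a training
    signal, so the filter adapts to tactics its designers never
    anticipated - the feedback loop the article describes.
    """

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # A user click ("spam" or "ham") updates the model in place.
        self.msg_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def spam_probability(self, text):
        # Naive Bayes with Laplace smoothing; returns P(spam | text).
        scores = {}
        total = sum(self.msg_counts.values())
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        for label in ("spam", "ham"):
            prior = (self.msg_counts[label] + 1) / (total + 2)
            score = math.log(prior)
            denom = sum(self.word_counts[label].values()) + len(vocab)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / denom)
            scores[label] = score
        # Convert log scores back to a normalized probability.
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = FeedbackSpamFilter()
f.train("win free money now", "spam")
f.train("free viagra offer now", "spam")
f.train("meeting agenda for monday", "ham")
f.train("lunch on monday", "ham")
print(f.spam_probability("free money offer"))   # high (spam-like)
print(f.spam_probability("monday meeting"))     # low (ham-like)
```

The point of the sketch is that the "rule" lives in the accumulated counts, not in code: every user correction shifts the probabilities, which is what lets such systems counter threats never envisioned by their designers.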

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, at roughly the same time as The Automated State, put it best: "Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator."

 
  • (Score: 0) by Anonymous Coward on Monday July 21 2014, @11:17PM (#72057)

    Yes, you took the bait hook, line and sinker.

    "What is best for society?" should come down to hard numbers, public health, and epidemiology - NOT knee jerk reactions to tearful anecdotes.

    You sound like the kind of person who believes that "the numbers don't lie."

    It has nothing to do with "tearful anecdotes." What it does have to do with is who decides which numbers matter, and how to measure them. And that is something that geeks are universally terrible at. We see it in "teaching to the test" and even institutional cheating [newyorker.com] in schools; we see police regularly downgrading or outright ignoring crimes [chicagomag.com]; we even see it in benchmark "optimization" on computers. [anandtech.com]

    As long as people are involved in the process, the numbers will lie because what to measure and how to measure it have always been and will always be the fundamental problem.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday July 22 2014, @09:14AM (#72211)

    Even if the numbers are perfectly accurate and relevant, it may turn out that you optimize for the wrong values. This already happens in human-driven politics - confusing "good for the economy" with "good for big business," for instance (not to mention that "good for the economy" is not always the same as "good for the people" either; slavery was certainly good for the economy of its time).

    And even if the numbers are absolutely accurate and relevant, and the goals are all good, the optimization algorithm may still reach unacceptable results because of some rule that was forgotten. As an extreme example, the algorithm may figure out that it is best for public health if all ill people, no matter how harmless their illness, are killed and their bodies immediately burned. After all, if you kill the ill, you reduce the fraction of ill people, and by burning the bodies immediately you prevent their diseases from spreading further.
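The failure mode described above - an optimizer dutifully minimizing its metric while a crucial constraint goes unwritten - can be sketched in a few lines. The metric, actions, and constraint here are all hypothetical, purely for illustration:

```python
# A toy policy optimizer: pick the action that minimizes the "illness
# fraction" metric, subject only to whatever constraints someone
# remembered to write down.

def ill_fraction(population):
    return sum(1 for p in population if p["ill"]) / len(population)

def best_action(population, actions, constraints=()):
    candidates = []
    for name, apply_action in actions.items():
        outcome = apply_action([dict(p) for p in population])  # simulate
        if all(check(population, outcome) for check in constraints):
            candidates.append((ill_fraction(outcome), name))
    return min(candidates)[1]  # lowest illness fraction wins

population = [{"ill": True}] * 10 + [{"ill": False}] * 90

actions = {
    # A sensible policy: treatment that cures half of the ill.
    "treat": lambda pop: [{"ill": False} if p["ill"] and i % 2 == 0 else p
                          for i, p in enumerate(pop)],
    # The degenerate policy: simply remove the ill from the population.
    "remove_ill": lambda pop: [p for p in pop if not p["ill"]],
}

# Without a "nobody may disappear" rule, the degenerate action scores
# a perfect 0.0 illness fraction and wins:
print(best_action(population, actions))                    # remove_ill

# Adding the forgotten constraint excludes it:
no_one_removed = lambda before, after: len(after) == len(before)
print(best_action(population, actions, [no_one_removed]))  # treat
```

The optimizer isn't malfunctioning in the first call; it is doing exactly what it was told, which is the commenter's point: the catastrophe lives in the unstated rule, not in the code.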

    That's why we need humans in the loop. Not because humans are inherently better at optimizing for a goal (they probably aren't), but because if something goes horribly wrong, humans will be able to recognize it and change the rules, while the computer will conclude that, according to the algorithm it was given, everything is going well.