Tim O'Reilly has advocated for the idea of algorithmic regulation - reducing the role of people and replacing them with automated systems in order to make government policy less biased and more efficient. But the idea has been criticized as utopianism: actual implementations are likely to make government more opaque and even less responsive to the citizens who already have the least say in the operation of society.
Now, as part of New America's annual conference, "What Drives Innovation Around the Country?", Virginia Eubanks has written an essay examining such automation in the cases of pre-crime and welfare fraud. Is it possible to automate human judgment out of the inherently human task of governance and still achieve humane results? Or are inefficiency and waste an unavoidable part of the process?
(Score: 3, Interesting) by Hartree on Sunday May 03 2015, @01:49PM
Implementing technical solutions to what are at base human problems often doesn't work out very well.
Somehow, magically, the machine system is supposed to do away with the human problems. In reality, it often amplifies them. Why? It's humans programming, running, and, critically, authorizing exceptions. Anyone who's done security and had to leave a decades-old default password exposed to the open internet because a high-level manager ordered them to knows the problem.
When you're trying to get humans out of the loop, you're usually just shrinking the loop to a smaller group of humans. And the effects of one or more of them being corrupt or incompetent become even worse.
Look at the current feedback system in politics, where the pursuit of campaign contributions and a rather fanatical base has become the way of ensuring reelection. In theory, the idea is to keep a large enough cross section of voters happy enough to keep you in office. In reality, it often degenerates into keeping happy a relatively small set of groups and institutions that either vote in blocs or have lots of money.