Submitted via IRC for Bytram
This week Samuel Arbesman, a complexity scientist and writer, will publish "Overcomplicated: Technology at the Limits of Comprehension." It's a well-developed guide for dealing with technologies that elude our full understanding. In his book, Arbesman writes that we're entering the age of entanglement, a phrase coined by Danny Hillis, "in which we are building systems that can't be grasped in their totality or held in the mind of a single person." In the case of driverless cars, machine learning systems build their own algorithms as they teach themselves, and in the process become too complex to reverse engineer.
And it's not just software that's become unknowable to individual experts, says Arbesman.
Machines like particle accelerators and Boeing airplanes have millions of individual parts and miles of internal wiring. Even a technology like the U.S. Constitution, which began as an elegantly simple operating system, has grown to include a collection of federal laws "22 million words long with 80,000 connections between one section and another."
In the face of increasing complexity, experts are ever more likely to be taken by surprise when systems behave in unpredictable and unexpected ways.
Source: http://singularityhub.com/2016/07/17/the-world-will-soon-depend-on-technology-no-one-understands/
For a collection of over three decades of these (among other things) see The Risks Digest - Forum On Risks To The Public In Computers And Related Systems. It's not so much that this is a new problem, as it is an increasingly common one as technology becomes ever more complicated.
(Score: 3, Insightful) by VLM on Wednesday July 20 2016, @08:28PM
The fact that we're building ever more complicated systems of things that work really well
Not really, no. We can trade off higher tech to patch over stuff, and play expensive games with abstraction, but there are limits.
It's a simple systems-engineering problem. It almost echoes Shannon's Law, which relates an information rate (ditto), a bandwidth (tech level?), and a power-to-noise ratio (lines of code vs. bug rate?).
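For reference, the Shannon's Law being analogized is the Shannon-Hartley capacity formula, C = B log2(1 + S/N). A minimal sketch (the function name and the example channel numbers are illustrative, not from the comment):

```python
import math

def shannon_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley channel capacity in bits per second:
    C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# A 3 kHz channel at 30 dB SNR (signal/noise power ratio of 1000):
c = shannon_capacity(3000, 1000, 1)
# c is roughly 29,902 bits/s -- no coding scheme can reliably beat this.
```

The point of the analogy: no matter how clever the encoding (or, per the comment, the abstraction), the combination of bandwidth and noise puts a hard ceiling on the achievable rate.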
There's been a lot of work done in reliability engineering.
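One standard result from that field shows why sheer part count hurts: for independent components in series (the system fails if any one part fails), reliabilities multiply. A minimal sketch, with made-up numbers for illustration:

```python
def series_reliability(component_reliability, n_components):
    """Reliability of a series system of n independent components,
    each with the same reliability r: R_system = r ** n."""
    return component_reliability ** n_components

# 1,000 parts, each 99.99% reliable over the mission,
# still give only about a 90.5% chance the whole system works.
r_system = series_reliability(0.9999, 1000)
```

Scale that to the millions of parts in an accelerator or an airliner and redundancy stops being optional.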
Going in a different direction, people who've never taken formal theory-of-computation classes tend to have really weird and, unfortunately, completely wrong intuitive ideas about simple scalability issues, halting-problem-related questions, or what boils down to Gödel's little problem.
If you're bored you can emulate a RAM-to-CPU interconnect as a telecommunications bit stream, then play all kinds of weird games WRT bit error rates, power, and noise. It turns out that even if you could build an infinite amount of infinitely fast RAM, there are Shannon's Law limits to how much fun you can have with that theoretically infinite processor. And since an ALU latch is just another transmission line, the same limits apply all the way down.