Submitted via IRC for Bytram
This week Samuel Arbesman, a complexity scientist and writer, will publish "Overcomplicated: Technology at the Limits of Comprehension." It's a well-developed guide for dealing with technologies that elude our full understanding. In his book, Arbesman writes we're entering the entanglement age, a phrase coined by Danny Hillis, "in which we are building systems that can't be grasped in their totality or held in the mind of a single person." In the case of driverless cars, machine learning systems build their own algorithms to teach themselves — and in the process become too complex to reverse engineer.
And it's not just software that's become unknowable to individual experts, says Arbesman.
Machines like particle accelerators and Boeing airplanes have millions of individual parts and miles of internal wiring. Even a technology like the U.S. legal code, which began with the Constitution as an elegantly simple operating system, has grown into a body of federal law "22 million words long with 80,000 connections between one section and another."
In the face of increasing complexity, experts are ever more likely to be taken by surprise when systems behave in unpredictable and unexpected ways.
Source: http://singularityhub.com/2016/07/17/the-world-will-soon-depend-on-technology-no-one-understands/
For a collection spanning over three decades of such failures (among other things), see The Risks Digest - Forum On Risks To The Public In Computers And Related Systems. It's not so much that this is a new problem as that it has become an increasingly common one as technology grows ever more complicated.
(Score: 2) by JNCF on Thursday July 21 2016, @03:17PM
I agree with this. I do think you can understand some systems completely at a certain level, and I was trying to discuss systems where this is not the case. My language could have been more precise. I considered adding a clarifying paragraph, but cut it for brevity.
Of course, all of our systems are emergent phenomena running on top of a universe we don't understand. Computers work well enough that we can mostly abstract away the lower levels we're running on top of. This doesn't always work: once in a while background radiation flips a bit, or a truck drives into a computer while it's operating. It is the leakiness of our abstractions that allows this, and it may always allow it with some probability. I see this as a different issue from not understanding the level you're working on, even though that's an arbitrary division made by my human brain. Fizz-buzz programs are simple enough that you can correctly model their output if you assume the lower levels work as expected, but larger programs that use more data than you can juggle in your working memory don't have this property. You can abstract that data behind interfaces, but it's still part of the system, running at the same level. I believe we're on the same page, but correct me if I'm wrong.
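To make that fizz-buzz point concrete, here is a minimal sketch in Python (my own illustration, not from the article or the comment): the whole program fits in working memory, so you can predict its complete output exactly, provided you assume the interpreter, operating system, and hardware underneath behave as expected.

    # Minimal fizz-buzz: small enough that its entire behavior can be
    # held in one person's head, assuming the lower levels (interpreter,
    # OS, hardware) work as expected.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)

A large system differs in kind, not just degree: no one can enumerate its states the way you can enumerate the 100 lines this prints, which is the gap between understanding a level completely and merely trusting the abstractions beneath it.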