
SoylentNews is people

posted by cmn32480 on Wednesday July 20 2016, @05:17PM   Printer-friendly
from the i-just-don't-get-it dept.

Submitted via IRC for Bytram

This week Samuel Arbesman, a complexity scientist and writer, will publish "Overcomplicated: Technology at the Limits of Comprehension." It's a well-developed guide for dealing with technologies that elude our full understanding. In his book, Arbesman writes we're entering the entanglement age, a phrase coined by Danny Hillis, "in which we are building systems that can't be grasped in their totality or held in the mind of a single person." In the case of driverless cars, machine learning systems build their own algorithms to teach themselves — and in the process become too complex to reverse engineer.

And it's not just software that's become unknowable to individual experts, says Arbesman.

Machines like particle accelerators and Boeing airplanes have millions of individual parts and miles of internal wiring. Even a technology like the U.S. Constitution, which began as an elegantly simple operating system, has grown to include a collection of federal laws "22 million words long with 80,000 connections between one section and another."

In the face of increasing complexity, experts are ever more likely to be taken by surprise when systems behave in unpredictable and unexpected ways.

Source: http://singularityhub.com/2016/07/17/the-world-will-soon-depend-on-technology-no-one-understands/

For a collection of over three decades of these (among other things) see The Risks Digest - Forum On Risks To The Public In Computers And Related Systems. It's not so much that this is a new problem, as it is an increasingly common one as technology becomes ever more complicated.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Wednesday July 20 2016, @06:03PM (#377404)

    "Too complicated" means "No Docs".

    A 747 with millions of parts is not too complicated, because Boeing supplies millions of pages of documentation to help understand, maintain, and repair it. If something is missing, they are there to help. If it falls from the sky, they are there to pick up the pieces, figure out how it happened, and update the machine and the docs to prevent it from happening again.

    It is we who make things "too complicated" - we waste hours of work on build systems in multiple languages when one would do, just to prove that only we can maintain it.

  • (Score: 4, Interesting) by JNCF (4317) on Wednesday July 20 2016, @06:27PM (#377423) Journal

    But even with docs, a single human can't understand it all. I think TFA is correct about this, I just don't think it's very important. Anybody who has created a system too complex for themselves to remember every detail of without looking at docs/diagrams/code is aware of this. You're necessarily working in a mental fog after a certain level of complexity, even if you can grasp every detail one at a time. I think organizations are the same way. Groups of people can tackle a problem in small pieces and stitch those pieces together. It's not important that an individual human can't work through the mental fog of a 747, because Boeing can. Nobody ever expected a single human to understand a 747, nor did we expect a single human to build an aqueduct. Some tasks have to be done by groups, or we can't have nice things.

    The point about machines designing systems too complex for any group of humans to understand is more interesting to me.

    • (Score: 0) by Anonymous Coward on Wednesday July 20 2016, @07:31PM (#377467)

      You are just quitting before you begin. "It is too hard." I get that from my kids with math problems. Break it down and get going.

      A 747 is easy to get your head around. It is a plane (it flies through the air); it is made of a tube with two sets of wings, one mainly for lift and the other for guidance. What is lift? A physics problem. And so on, and so on. It is all frames of reference.

      Think of the Skunk Works in the late 40s and 50s... the U-2 and SR-71 were built with slide rules and a handful of people. The trip to the moon, again, was built in stages and scaled up.

      It is up to each of us to do the right job, not just a job.

    • (Score: 1, Insightful) by Anonymous Coward on Wednesday July 20 2016, @07:37PM (#377471)

      Sounds like Mr. Arbesman is learning why we have APIs. As long as foo(bar) properly foos the bars I pass it, I don't need to know how it works and there's no way I could keep in my head how all the APIs I call actually work.
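
      A minimal sketch of that idea in Python (the checksum() and send_packet() names are hypothetical, invented purely for illustration): the caller treats checksum() as a black box and relies only on its stated contract.

      # Hypothetical sketch: the caller depends only on the documented contract
      # of checksum() ("bytes in, 16-bit integer out"), not on its internals.
      def checksum(data: bytes) -> int:
          """Return a 16-bit checksum of data; the algorithm is an internal detail."""
          return sum(data) % 65536

      def send_packet(payload: bytes) -> bytes:
          # The implementation of checksum() could be swapped out entirely
          # without this function needing to change.
          return payload + checksum(payload).to_bytes(2, "big")

      print(send_packet(b"hello").hex())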

    • (Score: 1, Touché) by Anonymous Coward on Thursday July 21 2016, @09:29AM (#377830)

      But even with docs, a single human can't understand it all.

      Depends on what you mean by "understand it all". Strictly speaking, it is impossible to understand even a simple fire. There are just too many ways it can burn.

      If you think "understanding" means "knowing every detail" you don't understand what understanding means. The essential part of understanding something is to know when a detail does not matter.

      • (Score: 2) by JNCF (4317) on Thursday July 21 2016, @03:17PM (#377943) Journal

        I agree with this. I do think you can understand some systems completely at a certain level, and I was trying to discuss systems where this is not the case. My language could have been more precise. I considered clarifying it, but cut a paragraph for brevity.

        Of course, all of our systems are emergent phenomena running on top of a universe we don't understand. Computers work well enough that we can mostly abstract the lower levels we're running on top of. This doesn't always work, and once in a while background radiation flips a bit or a truck drives into a computer while it's operating. It is the leakiness of our abstractions that allows this, and may always allow it with some probability. I see this as a different issue than not understanding the level you're working on, even though that's an arbitrary division made by my human brain. Fizz-buzz programs are simple enough that you can correctly model their output if we assume that lower levels work as expected, but larger programs that use more data than you can juggle in your working memory don't have this property. You can abstract this data behind interfaces, but it's still part of the system that runs at the same level. I believe we're on the same page, but correct me if I'm wrong.
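
        For reference, a complete fizz-buzz in Python (a generic sketch, not anyone's particular program) is small enough to hold in working memory, provided the layers beneath the interpreter behave as expected:

        # Prints the fizz-buzz sequence for 1..100; every behaviour of the
        # program is visible at this level, assuming the layers below behave.
        for n in range(1, 101):
            if n % 15 == 0:
                print("FizzBuzz")
            elif n % 3 == 0:
                print("Fizz")
            elif n % 5 == 0:
                print("Buzz")
            else:
                print(n)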

  • (Score: 2) by Bot (3902) on Wednesday July 20 2016, @08:28PM (#377513) Journal

    >build systems in multiple languages, when one will do
    LINGVA LATINA PRO VICTORIA

    oh sry, you meant
    JAVASCRIPT

    --
    Account abandoned.