
SoylentNews is people

posted by cmn32480 on Wednesday July 20 2016, @05:17PM   Printer-friendly
from the i-just-don't-get-it dept.

Submitted via IRC for Bytram

This week Samuel Arbesman, a complexity scientist and writer, will publish "Overcomplicated: Technology at the Limits of Comprehension." It's a well-developed guide for dealing with technologies that elude our full understanding. In his book, Arbesman writes that we're entering the entanglement age, a phrase coined by Danny Hillis, "in which we are building systems that can't be grasped in their totality or held in the mind of a single person." In the case of driverless cars, machine learning systems build their own algorithms to teach themselves, and in the process become too complex to reverse engineer.

And it's not just software that's become unknowable to individual experts, says Arbesman.

Machines like particle accelerators and Boeing airplanes have millions of individual parts and miles of internal wiring. Even a technology like the U.S. Constitution, which began as an elegantly simple operating system, has grown to include a collection of federal laws "22 million words long with 80,000 connections between one section and another."

In the face of increasing complexity, experts are ever more likely to be taken by surprise when systems behave in unpredictable and unexpected ways.

Source: http://singularityhub.com/2016/07/17/the-world-will-soon-depend-on-technology-no-one-understands/

For a collection of over three decades of these (among other things) see The Risks Digest - Forum On Risks To The Public In Computers And Related Systems. It's not so much that this is a new problem, as it is an increasingly common one as technology becomes ever more complicated.


Original Submission

 
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday July 20 2016, @06:47PM

    by Anonymous Coward on Wednesday July 20 2016, @06:47PM (#377431)

    >People keep saying that compilers are better than humans ... I'm not sure if I fully believe that ...

    Compilers obviously are much better than the people singing their praises. :) Not noticing how bad compilers really are at optimizing requires quite an advanced level of stupid.
    I can routinely speed up code anywhere from 30% to 300% just by looking at the assembly output and tweaking the C source, without resorting to writing assembly myself. That means I only stop the compiler from making its most glaringly idiotic choices, while leaving all the smaller inefficiencies alone; there remains a whole lot of cruft that could be cut with a clean assembly rewrite, if I ever needed that last bit of performance at the cost of portability.
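    [Ed. note: a minimal C sketch of the kind of source-level tweak being described. The function names are invented for illustration, and whether a given compiler actually emits overlap checks or vectorizes either loop depends on the compiler, version, and flags; the point is only that a small change visible in the assembly output can be fixed back at the C level.]

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Without restrict, the compiler must assume dst may overlap a or b,
       so the generated loop often carries runtime overlap checks or is
       not vectorized at all. */
    void add_maybe_aliased(float *dst, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }

    /* The tweak: promising no aliasing with restrict lets the compiler
       emit a straight vectorized loop.  Same C-level logic, different
       assembly -- compare the two with "-O2 -S". */
    void add_no_alias(float *restrict dst, const float *restrict a,
                      const float *restrict b, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }

    int main(void) {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, d[4];
        add_no_alias(d, a, b, 4);
        assert(d[0] == 11.0f && d[3] == 44.0f);
        add_maybe_aliased(d, a, b, 4);
        assert(d[1] == 22.0f && d[2] == 33.0f);
        return 0;
    }
    ```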

    Compilers are lame, but many coders out there are lamer. :)

  • (Score: 0) by Anonymous Coward on Wednesday July 20 2016, @06:54PM

    by Anonymous Coward on Wednesday July 20 2016, @06:54PM (#377436)

    Could you list the compilers you've found to produce inefficient code?

    Just curious, as I have no doubt that compilers can differ.

    Also, what optimization settings did you use?

    • (Score: 2) by Scruffy Beard 2 on Wednesday July 20 2016, @07:20PM

      by Scruffy Beard 2 (6030) on Wednesday July 20 2016, @07:20PM (#377458)

      Probably using the Intel compiler for AMD hardware. :P

      I suspect The GP may be alluding to unrolling loops enough to allow the CPU to execute 4 instructions per clock cycle.
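      [Ed. note: a sketch of the unrolling technique being alluded to, assuming the goal is to break the serial dependence chain so a superscalar CPU can issue several additions per cycle. Whether this beats what the compiler already does depends on the target and optimization level.]

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Straightforward sum: each addition depends on the previous one,
         so the CPU cannot overlap them. */
      long sum_simple(const int *v, size_t n) {
          long s = 0;
          for (size_t i = 0; i < n; i++) s += v[i];
          return s;
      }

      /* Unrolled 4x with independent accumulators: the four additions per
         iteration have no data dependence on each other, so a superscalar
         CPU can execute them in parallel. */
      long sum_unrolled(const int *v, size_t n) {
          long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
          size_t i = 0;
          for (; i + 4 <= n; i += 4) {
              s0 += v[i];
              s1 += v[i + 1];
              s2 += v[i + 2];
              s3 += v[i + 3];
          }
          for (; i < n; i++) s0 += v[i];  /* leftover elements */
          return s0 + s1 + s2 + s3;
      }

      int main(void) {
          int v[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
          assert(sum_simple(v, 10) == 55);
          assert(sum_unrolled(v, 10) == 55);
          return 0;
      }
      ```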

      • (Score: 2) by bzipitidoo on Wednesday July 20 2016, @07:49PM

        by bzipitidoo (4388) Subscriber Badge on Wednesday July 20 2016, @07:49PM (#377485) Journal

        A lot more than that. Just look at the assembler a compiler produces and you'll see all kinds of things. Stuff like this:

        loop:
        (bunch of assembler)
        dec r1
        bnz loop
        load r1,0

        Uh, compiler, r1 is already 0 upon exiting that loop, it was not necessary to load 0 into r1. Oh, and if you insist on zeroing out r1 anyway, why did you do it with a load command instead of xor r1,r1? Was it that you didn't want to change any flag settings? No, the next few instructions don't need any leftover flag settings, which in any case were unchanged from the way the decrement command set them.

        As to the subject of the article, compilers are NOT complicated beyond our understanding. Be careful not to extrapolate from the mere condition of being hidden and hard to view to being impossible to understand.

        • (Score: 2) by Arik on Wednesday July 20 2016, @08:44PM

          by Arik (4543) on Wednesday July 20 2016, @08:44PM (#377524) Journal
          "Uh, compiler, r1 is already 0 upon exiting that loop, it was not necessary to load 0 into r1. Oh, and if you insist on zeroing out r1 anyway, why did you do it with a load command instead of xor r1,r1? Was it that you didn't want to change any flag settings? No, the next few instructions don't need any leftover flag settings, which in any case were unchanged from the way the decrement command set them."

          Because the compiler is essentially pasting in a function, and that function is (as it should be) written to be as generic and bulletproof as possible. This is how it has to be done, and this is why even the platonic ideal compiler, with perfectly written functions and no bugs at all, can still be beaten by a human at this.*

          "As to the subject of the article, compilers are NOT complicated beyond our understanding."

          Perhaps compilers are not, but modern 'desktop' computers may well be. The software stack typically present has grown so huge and bloated, and is so messy in design and implementation, that I really don't think any single person understands any of the modern examples.

          *but note that this assumes the human has skills and is given a lot more time than the compiler will use.
          --
          If laughter is the best medicine, who are the best doctors?
          • (Score: 2) by bzipitidoo on Wednesday July 20 2016, @10:51PM

            by bzipitidoo (4388) Subscriber Badge on Wednesday July 20 2016, @10:51PM (#377604) Journal

            > The software stack typically present has grown so huge and bloated, and it's so messy in design and implementation, I really don't think any single person understands any of the modern examples.

            Oh, bull. If we are talking about formal verification to prove that the software functions correctly and has no bugs, you're right, it's far too big and complicated for that. But we're not. We have a crucial, core tool in software engineering, modularization.

            This reminds me of a debate I heard over control systems at a heating, ventilation, and air conditioning (HVAC) company. The traditional way was to have one diagram per possible configuration. When there were only a few models of thermostats, furnaces and A/C's, that was possible. But when the numbers ballooned, they began to panic. 100 thermostats, 50 different models of furnaces, and 50 different models of A/Cs meant drawing 100x50x50 = 250,000 diagrams, something they simply did not have the resources to do. Add in yet another device, for ventilation only, and that number grew to well over 1 million.

            For a while they managed by only creating diagrams on demand: they wouldn't make one for a configuration unless at least one customer actually used it. But the ultimate solution was modularity. Instead of 100x50x50 diagrams, they needed only 100+50+50 = 200 diagrams. Each thermostat, furnace, and A/C had its own diagram, and whichever ones were wanted in a complete system could be connected together.

            Years after this was in place, a new batch of engineers who didn't know about the system fell into the same mistake, and began to panic over having to create millions of diagrams. But by that time, computers had become ubiquitous, and they proposed writing programs to generate all those millions of diagrams, and hiring software engineers to handle that. Their plans didn't go far before they learned they were being silly. Quite a few red faces over that. As the famous saying from Hitchhiker's Guide to the Galaxy goes, "Don't panic".
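            [Ed. note: the arithmetic behind the anecdote can be sketched in a few lines of C. The struct and function names are invented for illustration; the point is that one composition rule over per-component definitions replaces a definition per combination.]

            ```c
            #include <assert.h>
            #include <stdio.h>

            /* One "diagram" (here, a descriptive struct) per component
               type, rather than one per full combination. */
            typedef struct { const char *model; } Thermostat;
            typedef struct { const char *model; } Furnace;
            typedef struct { const char *model; } AC;

            /* A single composition rule connects the per-component
               diagrams, so it covers every combination of models. */
            int describe_system(char *buf, size_t cap,
                                Thermostat t, Furnace f, AC a) {
                return snprintf(buf, cap, "%s -> %s + %s",
                                t.model, f.model, a.model);
            }

            int main(void) {
                char buf[64];
                Thermostat t = { "T-100" };
                Furnace f = { "F-50" };
                AC a = { "AC-7" };
                describe_system(buf, sizeof buf, t, f, a);
                assert(buf[0] == 'T');
                /* The numbers from the anecdote: */
                assert(100 * 50 * 50 == 250000); /* per-combination diagrams */
                assert(100 + 50 + 50 == 200);    /* per-component diagrams  */
                return 0;
            }
            ```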

        • (Score: 2) by sjames on Wednesday July 20 2016, @10:24PM

          by sjames (2882) on Wednesday July 20 2016, @10:24PM (#377583) Journal

          Did you forget to pass it -O3 permitting it to elide instructions?

          • (Score: 2) by bzipitidoo on Thursday July 21 2016, @12:32AM

            by bzipitidoo (4388) Subscriber Badge on Thursday July 21 2016, @12:32AM (#377645) Journal

            I admit I haven't kept up with what compilers can do. I know -O3 was added quite a few years ago. And I know register juggling has gotten very sophisticated. As I recall, optimal register assignment is an NP-hard problem, but there are few enough registers and fast enough computers now that the compiler can employ an exponential algorithm to find the optimal solution without making compilation unacceptably slow. How far optimization goes now or will go in the near future, I don't know, all the way to -O6?
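            [Ed. note: register assignment is classically modeled as coloring an interference graph, where an edge means two variables are live at once and need different registers. A minimal sketch of the "exhaustive search is fine at these sizes" point, with an invented 4-variable example; real allocators are far more elaborate.]

            ```c
            #include <assert.h>
            #include <stdbool.h>

            #define N 4  /* number of variables in the toy example */

            /* Brute-force k-coloring by backtracking: exponential in N,
               which is acceptable when N and k are small. */
            static bool colorable(const bool adj[N][N], int colors[N],
                                  int node, int k) {
                if (node == N) return true;
                for (int c = 0; c < k; c++) {
                    bool ok = true;
                    for (int j = 0; j < node; j++)
                        if (adj[node][j] && colors[j] == c) { ok = false; break; }
                    if (ok) {
                        colors[node] = c;
                        if (colorable(adj, colors, node + 1, k)) return true;
                    }
                }
                return false;
            }

            int main(void) {
                /* A cycle of interferences: v0-v1, v1-v2, v2-v3, v3-v0. */
                bool adj[N][N] = { { false } };
                adj[0][1] = adj[1][0] = true;
                adj[1][2] = adj[2][1] = true;
                adj[2][3] = adj[3][2] = true;
                adj[3][0] = adj[0][3] = true;
                int colors[N];
                assert(colorable(adj, colors, 0, 2));   /* 2 registers suffice */
                assert(!colorable(adj, colors, 0, 1));  /* 1 does not */
                return 0;
            }
            ```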

            • (Score: 2) by sjames on Thursday July 21 2016, @01:20AM

              by sjames (2882) on Thursday July 21 2016, @01:20AM (#377654) Journal

              We're up to -O6 (or perhaps more by now), but it's not standard and it may take unsafe liberties with floating point.

              • (Score: 1, Touché) by Anonymous Coward on Thursday July 21 2016, @09:16AM

                by Anonymous Coward on Thursday July 21 2016, @09:16AM (#377828)

                Optimization will not be good enough until it goes to eleven!

  • (Score: 2) by hendrikboom on Thursday July 21 2016, @05:47PM

    by hendrikboom (1125) on Thursday July 21 2016, @05:47PM (#378043) Homepage Journal

    Compilers got to be better than humans sometime in the '70s. Not that all of them are, nor even that humans can't improve a lot of fragments of the code they generate. But when you start to look at programs of thousands of lines of assembler, people get tired, and fail to work on each fragment at their peak optimization ability.

    Compilers may do worse in small pieces of code, but they don't get tired and in the long stretches they end up excelling. It's like the difference between a sprinter and a long-distance runner.

    Hand-optimizing the few bits of code that are time-critical can, of course, still improve efficiency.

    Not to mention that writing a lot of assembler can blind you to the possibility of changing data representations or algorithms, which I'm not even taking into account here.

    But I'll agree that on small pieces of code, like the ones presented in this discussion, human programmers can often outdo compilers.