
posted by chromas on Wednesday October 03 2018, @09:12PM   Printer-friendly
from the the-future-is-now,-old-man dept.

Nikita Prokopov has written a blog post detailing his disenchantment with current software development. He has been writing software for 15 years and now regards the industry’s growing lack of care for efficiency, simplicity, and excellence as a problem to be solved. He addresses the following points one by one:

  • Everything is unbearably slow
  • Everything is too large
  • Bitrot
  • Half-baked products get shipped
  • The same old problems recur again and again
  • Most code has grown too complex to refactor
  • Business is uninterested in improvement

Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by ilsa on Wednesday October 03 2018, @10:54PM (6 children)

    by ilsa (6082) Subscriber Badge on Wednesday October 03 2018, @10:54PM (#743725)

    In the old days, hardware was far more expensive than the programmers that wrote the software, so you had to make every byte count.

    Now the cost of developers is significantly higher than the cost of the hardware, which means the focus is on making developers produce as quickly as possible. Anything that makes things easier for the developer is the road taken, no matter how god-awful the product becomes as a result.

    Throw into the mix the current new developer values like 'move fast and break things' (what a f__king idiotic philosophy), etc, and you can easily see why the state of software is so crap now.

  • (Score: 3, Interesting) by inertnet on Wednesday October 03 2018, @11:29PM (3 children)

    by inertnet (4071) on Wednesday October 03 2018, @11:29PM (#743756) Journal

    That, and the fact that all those fancy coders must have the latest, fastest hardware to play with. I bet a lot of them never even realize that most of the world is using dated hardware.

    • (Score: 2) by c0lo on Thursday October 04 2018, @05:30AM (2 children)

      by c0lo (156) Subscriber Badge on Thursday October 04 2018, @05:30AM (#743872) Journal

      > That, and the fact that all those fancy coders must have the latest, fastest hardware to play with

      I'm quite a seasoned coder, and I must say I appreciate a machine that can compile a C++ monster of a solution (over 50 subprojects in C++ and some 150+ in C#) in less than 20 minutes - parallel compilation - on my dev desktop.
      The Jenkins slave machine we use for builds manages it in under 10 minutes, driving about 50 of the 64 CPU cores to a red-hot 100% (yes, we are already using unity builds [buffered.io] inside CI).

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by VLM on Thursday October 04 2018, @12:09PM (1 child)

        by VLM (445) Subscriber Badge on Thursday October 04 2018, @12:09PM (#744010)

        > in less than 20 minutes - parallel compilation - on my dev desktop.
        > The Jenkins slave machine we use for builds manages it in under 10 minutes

        Eventually it gets to the point of just having a frigging Chromebook connected to an absolute monstrosity of a build farm in VMware. Very few laptops will give you 10 fanless hours of computing with 128 gigs of RAM and 36 TB of storage. That's hard on a desktop too.

        An interesting problem I ran into is that VMware multicore performance can be very bad: the amount of time I can get 32 cores simultaneously is much less than the time I can get 8 cores each for four virtual images, so wall-clock results were way better with a large number of parallel machines.

        I'm just saying that when you can spin up a build box with one click, if you've got 50+150=200 projects and access to a corporate-sized compute cluster, you could probably get that build time down to a minute or less with 200 or so dedicated build boxes. And of course the testing infrastructure; massive parallelism works REALLY well with test infrastructure (see the sketch after this sub-thread).

        • (Score: 2) by c0lo on Thursday October 04 2018, @12:58PM

          by c0lo (156) Subscriber Badge on Thursday October 04 2018, @12:58PM (#744045) Journal

          > an absolute monstrosity of a build farm in VMware.

          VMs simply don't make the cut due to the projects' dependencies.

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
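
For readers who want to experiment with the per-project fan-out VLM describes above, here is a minimal sketch, assuming the subprojects are fully independent (which, as c0lo notes, real dependency graphs rarely are). The project list and the `make -C` build command are placeholders, not anything used in this thread.

```python
#!/usr/bin/env python3
# Minimal sketch (illustrative only): build N independent subprojects in
# parallel, one worker per CPU core. The project names and the `make -C`
# command stand in for whatever msbuild/cmake/ninja setup you actually use.
import multiprocessing
import subprocess
import time

PROJECTS = [f"proj{i:03d}" for i in range(200)]  # hypothetical 50 C++ + 150 C# projects


def build(project):
    """Build one project and return (project, exit code)."""
    proc = subprocess.run(["make", "-C", project], capture_output=True)
    return project, proc.returncode


if __name__ == "__main__":
    start = time.time()
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        results = pool.map(build, PROJECTS)
    failed = [p for p, rc in results if rc != 0]
    print(f"built {len(PROJECTS)} projects in {time.time() - start:.0f}s, {len(failed)} failed")
```

With enough workers (or enough one-click build boxes behind a queue), wall-clock time collapses toward the slowest single project, which is VLM's point; a real dependency graph instead forces the fan-out to proceed in topological stages, which is presumably what c0lo is running into.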
  • (Score: 3, Interesting) by jelizondo on Thursday October 04 2018, @12:44AM

    by jelizondo (653) Subscriber Badge on Thursday October 04 2018, @12:44AM (#743776) Journal

    Amen brother

    I started "programming" on a VIC-20 with fucking 8 KB (yes, KB) of RAM. So I learned to count bytes and strive for efficiency. Then I was off to the dumb-terminal world where, of course, every byte counted, every instruction counted, and one looked at everything with a magnifying glass to make it as lean and fast as possible.

    Some years ago, some dude who replaced me at one of my old jobs complained to one of my friends still at that company that he could not understand why some portions of the code (written about 20 years ago) were in machine code! Well, doh, because those portions were bottlenecks and I needed the fastest execution possible! And he was complaining because he couldn't read assembler.

    Get off my lawn!

  • (Score: 3, Interesting) by jmorris on Thursday October 04 2018, @05:40PM

    by jmorris (4844) on Thursday October 04 2018, @05:40PM (#744205)

    The problem is failing to recognize when programmers are more expensive than throwing hardware at a problem, and when they aren't. The example in the article was a bad one: a script a single person runs daily, and gets a correct answer from in a second or two, is not a candidate for any further optimization. A resident process on a billion battery-powered devices is. Between those poles the reward for optimization varies.

    My Fedora desktop is currently wasting 51MB of resident set for a goddamned Bluetooth applet written in Python and $deity only knows how many libraries, so it can sit idly doing nothing useful. It also needs another 42MB for the program that manages the tray icon, keeps bluetooth/obex in memory for another 7MB, talks to bluetoothd, which squats on another 5MB, and interacts with pulseaudiod and pulse/gconf-helper for another ~22MB. There is no sound currently being produced, btw. Over 100MB are currently mapped just for the Bluetooth stack.

    Considering how many Linux desktops there are, all of that is a very good candidate for an optimization project, but nobody will do it because they know that before they could complete the work, the underlying libraries will change again, the Mad Hatters will reinvent the whole middle layer of plumbing again, or perhaps rip and replace the entire desktop. Now consider that Android re-uses some of those Bluetooth libs.
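
A rough way to reproduce jmorris's tally on your own machine is to sum the VmRSS of every process whose name looks Bluetooth- or audio-related. This is only a sketch: the name patterns below are guesses, and RSS double-counts pages shared between processes, so treat the total as an upper bound.

```python
#!/usr/bin/env python3
# Sketch: tally the resident set (VmRSS) of processes whose names match a few
# Bluetooth/audio-related patterns, by reading /proc/<pid>/status on Linux.
# The patterns are assumptions; adjust them to what your distro actually runs.
import glob
import re

PATTERNS = re.compile(r'blueman|bluetooth|obex|pulseaudio|gconf-helper', re.I)

total_kb = 0
for status in glob.glob('/proc/[0-9]*/status'):
    try:
        with open(status) as f:
            text = f.read()
    except OSError:
        continue  # the process exited while we were scanning
    name = re.search(r'^Name:\s*(\S+)', text, re.M)
    rss = re.search(r'^VmRSS:\s*(\d+)\s*kB', text, re.M)
    if name and rss and PATTERNS.search(name.group(1)):
        print(f'{name.group(1):20s} {int(rss.group(1)):8d} kB')
        total_kb += int(rss.group(1))

print(f'{"total":20s} {total_kb:8d} kB')
```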
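
And to make the trade-off from the first paragraph of that comment concrete, a back-of-the-envelope sketch; every number in it is an illustrative assumption, not a figure from the article or the comment.

```python
# Toy cost/benefit model: when does spending developer time on optimization
# pay for itself in users' time? All figures below are made-up examples.

def payoff(dev_hours, dev_rate, users, runs_per_day, seconds_saved, horizon_days):
    """Return (one-off engineering cost, user-hours saved over the horizon)."""
    cost = dev_hours * dev_rate
    saved_hours = users * runs_per_day * seconds_saved * horizon_days / 3600.0
    return cost, saved_hours

# A script one person runs daily, saving 2 seconds: about 0.2 user-hours/year.
print(payoff(dev_hours=40, dev_rate=100, users=1, runs_per_day=1,
             seconds_saved=2, horizon_days=365))

# A resident process on a billion battery-powered devices: roughly 200 million
# user-hours/year for the same one-off engineering effort.
print(payoff(dev_hours=40, dev_rate=100, users=1_000_000_000, runs_per_day=1,
             seconds_saved=2, horizon_days=365))
```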