posted by janrinok on Friday May 09, @09:19PM
from the every-line-of-code-counts dept.

"We have now sunk to a depth in which restatement of the obvious is the first duty of intelligent men." (George Orwell).

Few people remember this, but back in 1999 there was a bit of an uproar in the IT community when Intel dared to introduce a unique, retrievable ID, the Processor Serial Number (PSN), in its new Pentium III CPU.

It is kinda hard to believe, but that little privacy backlash was strong enough to force Intel to withdraw the feature, starting with Tualatin-based Pentium IIIs. That withdrawal lasted until 2015, when the feature was (silently) reintroduced as the Protected Processor Identification Number (PPIN) with Intel's Ivy Bridge architecture.

So, only a good ten years ago, we still believed in privacy. Perhaps we still do, but somehow the industry has moved the needle to obligatory consent -- with no possibility of opting out -- to any and all privacy violations that can be dreamt up in Big (and Not So Big) Tech boardrooms.

Something similar is happening with software, argues Bert Hubert in a piece on IEEE Spectrum. Where once on-premise software and hardware were the rule, trying to get a request for on-prem hardware signed off nowadays is a bit like asking for a coal-fired electricity generator. Things simply *have* to be in the Magically Secure Cloud, and software needs to be developed agile, with frameworks.

The way we build and ship software these days is mostly ridiculous, he claims: apps using millions of lines of code to open a garage door, and simple programs importing 1,600 external code libraries. Software security is dire, which is a function both of the quality of the code and the sheer amount of it.

From the article:

Let me briefly go over the terrible state of software security, and then spend some time on why it is so bad. I also mention some regulatory and legislative things going on that we might use to make software quality a priority again. Finally, I talk about an actual useful piece of software I wrote as a proof of concept that one can still make minimal and simple yet modern software.


Original Submission

  • (Score: 5, Insightful) by Anonymous Coward on Friday May 09, @09:35PM (11 children)

    by Anonymous Coward on Friday May 09, @09:35PM (#1403234)

    I was recently (a few days ago) asked to update the login link on a page, and just get it working. ...

    The login link was redirecting to /#, and there was nothing that could be done about that in about a _week_. This completely baffled me, as it was a website with ..... one page. It was a landing page for a not-yet-launched domain. There would be a few links to things that worked on the back-end, which I know nothing about.

    So. The single page.

    - A react app
    - that is proxied to a back-end server
    - which runs inside node.js
    - inside some odd node http server
    - Inside next.js
    - inside "vite"(??)
    - Inside npm
    - inside node version manager

    .... and I'm looking at all of this... and about to just copy-paste the code rendered in the browser into "index.html", fix the link, and call it a day.

    W. T. F. This is absurd. It reminds me of a recent post I saw where someone was using http://127.0.0.1/zzz [127.0.0.1] to hit a node.js http server to access a local file.

    What the hell are kids-these-days even smoking?? WAY more hardcore than back when I was in high school.

    (Eventually I got it working. The link was created in React as <Link to="..."> which directs requests through the React router, which wasn't defined, whereas I needed it to direct to a path on the local domain, not inside this react app: I needed to use the tag <a href="...">. Just......... wtf. . .)
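
    For anyone who hits the same thing, here is a minimal sketch of the difference (hypothetical component names and path, and it assumes the router in question was react-router-dom):

      import React from "react";
      import { Link } from "react-router-dom";

      // Client-side navigation: resolved against the React router's route table.
      // With no matching route defined, it effectively goes nowhere.
      export const InAppLogin = () => <Link to="/login">Log in</Link>;

      // Plain anchor: an ordinary browser navigation to a path on the same domain,
      // bypassing the React router entirely.
      export const PlainLogin = () => <a href="/login">Log in</a>;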

    • (Score: 1, Informative) by Anonymous Coward on Friday May 09, @10:03PM

      by Anonymous Coward on Friday May 09, @10:03PM (#1403236)

      While I too hate the bloat, the thing you describe isn't really all that new or novel. It's a lot more layered now, but you could do the same back in the day with just some cascading style sheets and simple JS: put the entire site basically into one file, then have parts of the page displayed by turning things on and off in the CSS. It was quite simple.
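
      A minimal sketch of that old-school single-file approach, with hypothetical section IDs and markup, using plain DOM calls (TypeScript):

        // One HTML file with several <section> elements; the CSS hides everything
        // except the section carrying the "active" class, e.g.:
        //   section { display: none }   section.active { display: block }
        function showSection(id: string): void {
          document.querySelectorAll<HTMLElement>("section").forEach((s) => {
            s.classList.toggle("active", s.id === id);
          });
        }

        // Nav links marked up as <a href="#about" data-section="about"> switch the view.
        document.querySelectorAll<HTMLAnchorElement>("a[data-section]").forEach((a) => {
          a.addEventListener("click", () => showSection(a.dataset.section ?? ""));
        });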

    • (Score: 2, Interesting) by Anonymous Coward on Friday May 09, @11:05PM

      by Anonymous Coward on Friday May 09, @11:05PM (#1403243)

      Was it all generated by Wix or some other similar pile of crap?

    • (Score: 4, Informative) by JoeMerchant on Friday May 09, @11:59PM (5 children)

      by JoeMerchant (3937) on Friday May 09, @11:59PM (#1403246)

      What we are facing is an update treadmill. Dire CVE warnings pile up by the hundreds: severe, CRITICAL, and the way to address them? Update. And the way to update your Ubuntu without pushing a new OS to the field every 5 years? Pay up for a subscription, and then deal with all that mess of license keys and contracts and ongoing relationships: https://arstechnica.com/gadgets/2025/05/broadcom-sends-cease-and-desist-letters-to-subscription-less-vmware-users/ [arstechnica.com]

      Why do we even need an OS at all in our product?

      Well... our customers want to take our final reports and submit them to their networked electronic records systems, so we need a TCP/IP stack and all that entails. We use those poison API libraries that expect "standard" video and touch screen interfaces, so that's far easier to provide with an OS than with "bare metal." And some of our customers demand virus scanners - try explaining to a VA acquisitions committee how your system can't get a virus because it doesn't have an OS, just try...

      Now that we're on the OS bandwagon, we need perpetual, and frequent, updates. Time was we could field a system for 20 years and it was just fine, same as it ever was, but no: the standard today is that we have to have a plan and demonstrated ability to execute "timely response to reported vulnerabilities" - basically: pushing updates. Around the globe, so that means via internet connection - did we need internet connection before? Not really, but now... we've got to have that rapid threat response so we can update when we have vulnerabilities introduced via: the mandatory update mechanism.

      And it's the most reliable paycheck I have had in my entire life.

      --
      🌻🌻🌻 [google.com]
      • (Score: 5, Interesting) by Anonymous Coward on Saturday May 10, @01:36AM (3 children)

        by Anonymous Coward on Saturday May 10, @01:36AM (#1403258)

        Now that we're on the OS bandwagon, we need perpetual, and frequent, updates. Time was we could field a system for 20 years and it was just fine, same as it ever was, but no: the standard today is that we have to have a plan and demonstrated ability to execute "timely response to reported vulnerabilities" - basically: pushing updates.

        The updating part isn't such a huge problem.

        The big problem is the updates are often broken and/or break some backward compatibility. Otherwise we could just easily automate most stuff and do "update all".

        But no, we have to deploy the updates to test environments, hopefully succeed in testing "what needs to be tested". Then only deploy to canary nodes in production to check for probs in production. Then if OK deploy to all nodes.

        As for "what needs to be tested" the more features etc your enterprise software has the more there is to be tested. Only noobs spout platitudes about "test coverage" and assume you can test everything and every case. If there are 100+ different combinations you can't test all the combinations - it'll be worse than brute forcing AES128 because each test takes magnitudes longer than one cycle of AES128.

        The other big problem is you are often not forced to just update, but every few years you are forced to upgrade to a partly but significantly incompatible version. That's like building a skyscraper on a foundation that only lasts a few years. Before you're done migrating the skyscraper to foundation 2.0 you already need to start migrating it to foundation 3.0. Then 4.0 etc.

        We're basically trying to build the future civilization on foundations that don't last. Think of the amount of wasted time and resources. Most of the changes causing incompatibilities aren't necessary. The OS stuff should be boring reliable concrete foundation stuff, not trying to show you ads and trying to add more AI and telemetry. RedHat and others aren't much better - their goal is similar to Microsoft - they don't want an OS that lasts for decades - they need customers to keep paying more and migrating.

        I think more enterprises or even countries should get together and pick a preferably "Free" platform and make it the stable foundation for future decades if not centuries; warts, ugliness[1] and all. Because what they need is a boring stable affordable foundation to build and run THEIR STUFF on.

        Otherwise the future will be like the paraphrased Japanese saying: "Those who use mainframes are stupid. Those who don't use mainframes are also stupid". We'd have no non-stupid options.

        [1] Here's an example of a system that has been working for millions of years, that didn't simply break backward compatibility even though stuff was ugly or even "wrong": https://en.wikipedia.org/wiki/Recurrent_laryngeal_nerve [wikipedia.org]

        • (Score: 2) by JoeMerchant on Saturday May 10, @01:30PM (2 children)

          by JoeMerchant (3937) on Saturday May 10, @01:30PM (#1403290)

          >The big problem is the updates are often broken and/or break some backward compatibility. Otherwise we could just easily automate most stuff and do "update all".

          Our plan is to setup an automated system that pulls the most recent updates, validates that they don't cause (obvious) problems with our system, run a human smoke test, and if that's all good then update our mirror of the repository to match the one just validated. Systems in the field update from the repository we control. That's the plan. Projected launch date? Was 2021, currently? 2027. Ask again in 2026 for more accurate forecasting.

          > noobs spout platitudes about "test coverage" and assume you can test everything and every case.

          Oh, FFS - we have "code coverage metric tools" that guarantee - nothing. You could get 100% coverage of every branch case as measured by those tools and still completely miss the bugs - we currently shoot for 70% line coverage which usually works out as about 20% branch coverage. The only "good thing" about attempting to meet those metrics is that it gives you more time working with the code so you do find bugs, and you can tell management "oh, we need this time to get our unit test coverage metric up to par..." Correlation between unit test coverage and actual bugs ferreted out? Weak, at best.
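
          A toy illustration of the point, with a hypothetical function and test rather than anything from a real suite: full line and branch coverage, bug untouched.

            // Hypothetical spec: free shipping only for totals strictly over $50.
            // Bug: ">= 50" quietly gives exactly-$50.00 orders free shipping too.
            function shippingCost(total: number): number {
              return total >= 50 ? 0 : 5;
            }

            // These two asserts hit every line and every branch -- 100% by the metric --
            // and never notice the boundary bug at exactly 50.
            console.assert(shippingCost(100) === 0);
            console.assert(shippingCost(10) === 5);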

          > That's like building a skyscraper on a foundation that only lasts a few years.

          Yup, which is why bare metal is so attractive, until the metal you're running on goes end of life and you're on the hook to rewrite all the hardware level interfaces.

          >Before you're done migrating the skyscraper to foundation 2.0 you already need to start migrating it to foundation 3.0. Then 4.0 etc.

          In the early 90s DOS would release a new "95% backward compatible" version every few months, I think there was a year in there with 3 breaking version updates. 95% backward compatible means: you're having to find which 5% of your code broke and fixing that. I called it the M$ treadmill.

          > trying to build the future civilization on foundations that don't last.

          It's the subscription model. Most reliable paycheck I've had in my life.

          >We'd have no non-stupid options.

          True, but life without income is a more-stupid option. The over-arching social structure is borked, and has been getting more-so for 50 years. Probably longer, that's just my personal experience.

          --
          🌻🌻🌻 [google.com]
          • (Score: 2) by driverless on Saturday May 10, @06:16PM (1 child)

            by driverless (4770) on Saturday May 10, @06:16PM (#1403322)

            Yup, which is why bare metal is so attractive, until the metal you're running on goes end of life and you're on the hook to rewrite all the hardware level interfaces.

            That depends on whether you're using the latest flashy thing that turned up or planned for long-support-life hardware. There's stuff out there with lifetimes planned ahead in decades, not necessarily the same hardware components each time but functionally equivalent ones. Just two days ago I was discussing with someone replacing 15-year-old hardware with something newer, and wondering why it was necessary - turned out there was no pressing need, just a few nice features on the newer stuff. It'll probably be in production in five years or so, so a total lifetime of 20-odd years for the same hardware and software.

            • (Score: 2) by JoeMerchant on Sunday May 11, @02:59AM

              by JoeMerchant (3937) on Sunday May 11, @02:59AM (#1403362)

              Yeah, we try to source only long life components. Our current system is still on a Skylake i7 6th gen, and we should be able to stay with that for 10+ years to come, but things like the WiFi card and Camera card are going away on us, and of course pandemic supply chain shocks were a lot of fun... we did a full redesign of a system to get it in more readily available parts, got that working, and then the old parts came available again.

              --
              🌻🌻🌻 [google.com]
      • (Score: 3, Informative) by driverless on Saturday May 10, @05:33PM

        by driverless (4770) on Saturday May 10, @05:33PM (#1403317)

        Another thing we're facing, if the app is standards-based, is a red-queen problem where IETF committees composed largely of professional meeting-goers are churning out pointless updates and revisions to things far faster than anyone except the mega-organisations sponsoring the stuff can keep up with, and certainly no-one, not even the mega-organisations, can implement securely, because there's no time before the next rearrangement of deckchairs is announced.

    • (Score: 1, Troll) by VLM on Saturday May 10, @02:46PM

      by VLM (445) Subscriber Badge on Saturday May 10, @02:46PM (#1403299)

      W. T. F. This is absurd.

      I have a serious answer for you, although nobody is going to like it.

      The problem is passing the buck. I can already tell you that was written by a team of competitive ivy league grads from Stanford.

      Imagine yourself working on their website. What happens is you got yourself a problem, add another layer of abstraction, now someone else has two problems trying to fix it later. But programmer number one made his personal number on metrics graph look more gooder so all is well with the world even though for no apparent reason everything is worse.

      This does NOT happen on single person codebases where someone "owns" the code, and generally happens a lot less on FOSS.

      Aside from inside company competition between individuals this also happens between companies, like outsourcing companies. Security is a lot stricter now but "in the old days" you could go to www.cnn.com or whatever and "view source" and read comments from different outsourcing companies. What happens is outsourcing company #7 gets the new contract "WTF is this shite" wraps it all up in yet another layer of abstraction, move on until it's too slow to use or collapses under its own weight or sales BSes the customer into a complete rewrite.

      To some extent this is a computer science problem. Everyone in aerospace engineering has heard about "simplicate and add lightness" as good aviation design. Well, maybe Boeing abandoned that with predictable results, maybe not. I suspect recent plane crashes and failures and "incidents" are accompanied by everyone involved having the most stellar individual personal review metrics and ALL graphs only go up and to the right even if the plane's altitude doesn't LOL.

      I miss the days of "view source" and reading comments on rando sometimes very big corporate websites. It gave the web a little more character.

    • (Score: 0) by Anonymous Coward on Saturday May 10, @03:43PM

      by Anonymous Coward on Saturday May 10, @03:43PM (#1403308)

      So. The single page.

      - A react app
      - that is proxied to a back-end server
      - which runs inside node.js
      - inside some odd node http server
      - Inside next.js
      - inside "vite"(??)
      - Inside npm
      - inside node version manager

      so, let's break this down a little..

      1. which "inside node.js inside http server inside next.js inside vite?" -- you have 4 servers running??? that's the WTF here.
      2. react is a frontend framework -- that's the only part that runs in the browser.
      3. npm is installer -- if you include this you have to include "inside RPM inside a *distro* inside a linux kernel"... or maybe also "inside kubernetes inside a container ..."
      4. nvm is installer for node. You can do the same with RPMs or just download a binary...

      Most of the time this is generated for a container that you can then update and deploy somewhere. It's actually simpler to maintain the machine like that.

      Also, when people say it takes 1,400 npm modules to build a website ... no, it's 1,400 npm modules to generate some JavaScript and run webpack or similar. The actual generated code is like 10kB or 100kB, depending on the site. And for that I blame companies like Google or Facebook that created these frameworks and piled all these deps in there. Even npm's own sources are kind of insane, as they "modularized" everything. This modularization makes it more difficult to fix things.

      The link was created in React as <Link to="..."> which directs requests through the React router, which wasn't defined, whereas I needed it to direct to a path on the local domain, not inside this react app: I needed to use the tag <a href="...">. Just......... wtf. . .)

      Yes, the WTF is using <Link to> where you want <a>

    • (Score: 2) by corey on Saturday May 10, @11:38PM

      by corey (2202) on Saturday May 10, @11:38PM (#1403343)

      100%

      This is why I got out of web development (LAMP back in the day).

      Even C/C++ is starting to escape me, with all the fancy UI tools (Eclipse/the Microsoft one). For me, it’s vim the C file, then gcc -o file file.c. Python is a bit mad as well. I used to write Makefiles as well.

      I think all this obfuscation is what drove me to hardware engineering, which I do for a living now.

  • (Score: 5, Interesting) by krishnoid on Friday May 09, @10:09PM (2 children)

    by krishnoid (1156) on Friday May 09, @10:09PM (#1403237)

    I keep reposting this link [youtu.be] to the audiobook with sound effects, but 1984's fanciful rewriting of truth and deletion of history [pbs.org] seems to be showing up in a ham-fisted enough way that you can explain it to a five-year-old.

    When Janrinok resigned [soylentnews.org], it made me think about whether I was overly paranoid to request journal citations for science articles in the light of some media and population embracing the concepts of "fake news" and "alternative facts" vs. freedom of the press and the value of journalism. I was thus extremely grateful (and continue to be) to the editors that citations are provided with the stories.

    Citations have been historically presented to provide full disclosure and disavow intentions of plagiarism. It only occurred to me last week that providing them as a "chain of evidence" to show that something wasn't just made up, shows that we've "sunk to a depth" in which "restatement of" the existence of and reference to a document, solely for the reason of it being an original source, isn't something to take for granted.

    • (Score: 3, Interesting) by krishnoid on Friday May 09, @10:13PM

      by krishnoid (1156) on Friday May 09, @10:13PM (#1403238)

      When Janrinok resigned, it made me think ...

      Wait a second .... :-|

      I did wonder why/when AI hasn't yet been repurposed towards identifying security holes and shrinking code bases. Less profit in it, I guess.

    • (Score: 4, Insightful) by JoeMerchant on Saturday May 10, @12:07AM

      by JoeMerchant (3937) on Saturday May 10, @12:07AM (#1403248)

      > overly paranoid to request journal citations for science articles in the light of some media and population embracing the concepts of "fake news" and "alternative facts" vs. freedom of the press and the value of journalism. I was thus extremely grateful (and continue to be) to the editors that citations are provided with the stories.

      Not just citations, links to articles that you can actually read - authors who you can vet for influences. The information is "out there" but we continue to hide it in the name of privacy, because "we never shared that depth of information before."

      No, before we traveled in small packs, even global collaborators would meet face to face at annual conferences. You knew who you were dealing with at least a little bit because you went to lunch with the guy a few years back.

      Today? Energized phosphors on a screen, as quickly erased or revised as they were arranged in the first place. Who's that author? Google says he has 568 publications, but what does _that_ really mean?

      Real children of tomorrow should have real identities, cryptographically secured with public blockchain irrefutability that they are who they say they are, when they sign something you can be assured it really was signed by them. Keys will be lost, and the secure recovery of identity will be recorded in public blockchains with clear statement of when a particular key was lost and no longer to be trusted. If someone wants to wander into a bar as an anonymous stranger new to town, they're free to do that and that right should be protected. When they whip out a journalist's blog and start signing first hand reporting of what they are seeing in town - that gets a signature, or gets heavily discounted as anonymous entertainment rather than information with veracity.

      --
      🌻🌻🌻 [google.com]
  • (Score: 3, Insightful) by pdfernhout on Saturday May 10, @12:03AM (6 children)

    by pdfernhout (5984) on Saturday May 10, @12:03AM (#1403247) Homepage

    ... with 1K of RAM (a big step up from breadboarding ICs and LEDs and switches in the 1970s), the article resonates with me. :-)

    See also:
    https://en.wikipedia.org/wiki/Software_bloat#Security_Risk [wikipedia.org]
    "Software bloat may induce more vulnerabilities due to raise of difficulty in managing a large number of code and dependencies. Furthermore, it may make software developer difficulty to understand the code they ship, increasing the difficulty for spot and fix vulnerabilities."

    Yeah, to just see people add npm modules with lots of dependencies to a project to get some small feature...

    Related humor:
    http://vanilla-js.com/ [vanilla-js.com]
    "Vanilla JS is a fast, lightweight, cross-platform framework for building incredible, powerful JavaScript applications..."

    --
    The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
    • (Score: 4, Insightful) by JoeMerchant on Saturday May 10, @12:12AM (2 children)

      by JoeMerchant (3937) on Saturday May 10, @12:12AM (#1403249)

      My first was an Atari 800 with 16K of RAM and a BASIC interpreter.

      After I got that puppy souped up to 48K of RAM, I remained convinced for at least eight years afterward that it could do almost anything the big 640K PCs were doing. The PCs were already starting to run bloatware that could have been implemented more efficiently and run on the 8 bit processors.

      Today, I am a fan of 64 bit addressing, but only because big RAM is so cheap. For now at least, I believe 18 billion-billion bytes are enough for anyone's personal computer.

      --
      🌻🌻🌻 [google.com]
      • (Score: 1, Touché) by Anonymous Coward on Sunday May 11, @05:23AM

        by Anonymous Coward on Sunday May 11, @05:23AM (#1403373)

        > I believe 18 billion-billion bytes are enough for anyone's personal computer.

        One day you will be mocked for this. You can't possibly have ads lasered directly into your brain stem with less than 20 billion billion bytes.

      • (Score: 3, Insightful) by sjames on Wednesday May 14, @10:54AM

        by sjames (2882) on Wednesday May 14, @10:54AM (#1403758) Journal

        I had a mix of the Apple][ at school and a c64 at home. That early experience serves me well when I need to work on μcontrollers now.

        I do enjoy the large address space of 64 bit on desktops, but the idea of cargo-culting a bunch of frameworks and NPMs on the fly gives me the creeps from a security standpoint. And you just know the developers of software like that never once vetted it for security, much less doing so continuously as version numbers get bumped.

        I have an old coffee mug with various IT wisdom printed on it (in a green bar design no less) and one of those sayings rings more true today than ever before:

        If builders built buildings the way programmers write programs, civilization would be destroyed by the first woodpecker to come along.

    • (Score: 4, Insightful) by pdfernhout on Saturday May 10, @12:38AM

      by pdfernhout (5984) on Saturday May 10, @12:38AM (#1403251) Homepage

      Other systems to think about from a simplicity point-of-view include Forth, Squeak Smalltalk (and friends), Lisp, and Scheme.

      A related talk by Rich Hickey (creator of Clojure, a Lisp for the JVM):
      "Simple Made Easy"
      https://www.infoq.com/presentations/Simple-Made-Easy/ [infoq.com]
      "Rich Hickey emphasizes simplicity’s virtues over easiness’, showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."

      Squeak Smalltalk in particular I think hits a sweet spot of just enough complexity to make a very ergonomic usable language while still being simple enough for one person to understand and being easy to put on bare "metal" computing platforms without an OS. Frankly, I remain torn between working in it as a self-contained system that could reach out to other systems versus working on single-page applications for the web browser in JavaScript/TypeScript. (Yes, I know you can run Squeak in the browser too, but the experience tends to be sub-optimal.)

      QNX was also amazing (but unfortunately proprietary, unlike Squeak) as a simple system with some well-thought-out concepts that did a lot, especially with networking (you could, back in the 1980s, easily access resources across a network in a transparent way):
      https://en.wikipedia.org/wiki/QNX [wikipedia.org]
      "Gordon Bell and Dan Dodge, both students at the University of Waterloo in 1980, took a course in real-time operating systems, in which the students constructed a basic real-time microkernel and user programs. Both were convinced there was a commercial need for such a system, and moved to the high-tech planned community Kanata, Ontario, to start Quantum Software Systems that year. In 1982, the first version of QUNIX was released for the Intel 8088 CPU. In 1984, Quantum Software Systems renamed QUNIX to QNX (Quantum's Network eXecutive) in an effort to avoid any trademark infringement challenges. One of the first widespread uses of the QNX real-time OS (RTOS) was in the nonembedded world when it was selected as the operating system for the Ontario education system's own computer design, the Unisys ICON. Over the years QNX was used mostly for larger projects, as its 44k kernel was too large to fit inside the one-chip computers of the era. The system garnered a reputation for reliability and became used in running machinery in many industrial applications.
            In the late-1980s, Quantum realized that the market was rapidly moving towards the Portable Operating System Interface (POSIX) model and decided to rewrite the kernel to be much more compatible at a low level. The result was QNX 4. During this time Patrick Hayden, while working as an intern, along with Robin Burgener (a full-time employee at the time), developed a new windowing system. This patented[2] concept was developed into the embeddable graphical user interface (GUI) named the QNX Photon microGUI. QNX also provided a version of the X Window System.
              To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk for the 386 PC. ..."

      That example -- one 1.44 floppy to surf (and serve) the web:
      https://archive.org/details/qnx303 [archive.org]

      I was slow to move into Linux in the 1990s because a monolithic kernel (and the rest of the UNIX trappings) seemed so obsolete already compared to QNX and so on (even though I had been using UNIX since the early 1980s). Of course, there is value in popular and FOSS standards, so Linux has come a long way (along with GNU etc). And I like that Linux is customizable. But it is still huge.

      Sad to say this as a software developer, but the world might be a better place if 99.9%+ of software were to be deleted (by consensus, outside of special archive libraries) -- and we all only used the best remaining stuff with a few common flexible (but not "flabby") standards for data exchange -- like Lisp S-expressions? :-) The problem is we can't seem to agree what to keep and what to delete. :-)

      Choice is great, but it also can be anxiety-provoking:
      https://duckduckgo.com/?q=tyranny+of+choice&ia=web [duckduckgo.com]

      https://www.cambridge.org/core/journals/judgment-and-decision-making/article/tyranny-of-choice-a-crosscultural-investigation-of-maximizingsatisficing-effects-on-wellbeing/AD385DA6994D60D1948C4385437D8717 [cambridge.org]
      "The present research investigated the relationship between individual differences in maximizing versus satisficing (i.e., seeking to make the single best choice, rather than a choice that is merely good enough) and well-being, in interaction with the society in which an individual lives. Data from three distinct cultural groups (adults), drawn respectively from the U.S. (N=307), Western Europe (N=263), and China (N=218), were analyzed. The results showed that, in societies where choice is abundant (i.e., U.S. and Western Europe), maximizers reported less well-being than satisficers, and this difference was mediated by experienced regret. However, in the non-western society (China), maximizing was unrelated to well-being. Although in China maximizing was associated with more experiences of regret, regret had no substantial relationship to well-being. These patterns also emerged for the individual facets of the maximizing scale, although with a notable difference between the U.S. and Europe for the High Standards facet. It is argued that, in societies where abundant individual choice is highly valued and considered the ultimate route to personal happiness, maximizers’ dissatisfaction and regret over imperfect choices is a detrimental factor in well-being, whereas it is a much less crucial determinant of well-being in societies that place less emphasis on choice as the way to happiness."

      https://thesicknessuntodeath.com/the-tyranny-of-choice-how-kierkegaards-theory-of-despair-explains-modern-decision-paralysis/ [thesicknessuntodeath.com]
      "In our modern world, choice is celebrated as a freedom, but it can also become a sort of tyranny—a phenomenon that Søren Kierkegaard, the 19th-century philosopher, might recognise well. Kierkegaard’s theories, particularly his thoughts on despair, offer profound insights into why today’s vast array of choices can leave us feeling more paralysed than empowered."

      https://longevity.stanford.edu/the-tyranny-of-choice/ [stanford.edu]
      "We presume that more choices allows us to get exactly what we want, making us happier. While there is no doubt that some choice is better than none, more may quickly become too much. Drawbacks include: ..."

      As much as I have grown to like JavaScript/TypeScript (the "good parts"), and as much as I like the huge variety of npm modules to do interesting stuff, that choice of npm modules can be bewildering (and even risky). I so much wish JavaScript had a bit more "batteries included" like Python so I did not have to continually evaluate the risk of trusting arbitrary third parties for common functions in small npm modules (instead of them just being in core JavaScript). We may get there eventually though. As some people have suggested, JavaScript is better than we deserve (considering its history and all the fighting over control of the browser).
      https://en.wikipedia.org/wiki/Browser_wars [wikipedia.org]
      https://duckduckgo.com/?q=the+browser+wars&ia=web [duckduckgo.com]

      Squeak Smalltalk has its own issues: every time you save an image you have essentially forked the Squeak project. :-) And further, Smalltalk encourages a style of coding where you modify base classes (e.g., adding methods to String for convenience in creating objects based on strings), which has all sorts of maintenance issues.

      All these decades of software development and people still seem to be struggling with so many basic things that are incomplete or badly designed along with, on the other hand, a proliferation of needless complexity.

      That said, diversity and evolution of populations of designs over time is important too. So "no silver bullet". But something to reflect on...

      --
      The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
    • (Score: 1, Insightful) by Anonymous Coward on Saturday May 10, @01:51AM (1 child)

      by Anonymous Coward on Saturday May 10, @01:51AM (#1403259)

      I'll add two more reasons code bloat contributes to vulnerabilities:

      - bloat is usually caused by pushing and rushing features into the code, and getting the code out the door fast. Unrefined code.
      - the bigger the code is, the more person-hours it takes to comb through it looking for problems. Good development tools (and AI) certainly help, but it still takes more time and effort.

      • (Score: 0) by Anonymous Coward on Sunday May 11, @05:27AM

        by Anonymous Coward on Sunday May 11, @05:27AM (#1403374)

        Easy - outsource to India or China.

  • (Score: 4, Insightful) by Thexalon on Saturday May 10, @01:23AM (10 children)

    by Thexalon (636) on Saturday May 10, @01:23AM (#1403257)

    The way we build and ship software these days is mostly ridiculous, he claims: apps using millions of lines of code to open a garage door, and simple programs importing 1,600 external code libraries.

    One phenomenon I've noticed over the years is a problem of unnecessary transitive dependencies. Your code relies on library A, which relies on libraries B, C, and D, which rely on libraries E-P, etc down an ever-growing tree of doom. But the fact is that your code probably doesn't rely on the entirety of library A, just a portion of it, and the portion you're actually using only relies on libraries B and C but not D, so really D and all of its dependencies can be dropped. And likewise, the portion of libraries B and C that A rely on don't really use the whole of those either, so libraries H, I, J, and K could also go away along with all of their dependencies. And so on. Odds are pretty good that a lot of those 1600 external code libraries are completely dead code which could be safely dropped, but because the default of every dependency system is to include anything that might be needed, and there's no easy way to identify which are unnecessary, and hey, disk space and network transfer of client code is cheap, so might as well keep them just to be on the safe side. (Of course, each one brings its own set of security risks, but that's Somebody Else's Problem, right?)

    So given substantial time and resources, I'd be very interested in writing something that goes through and figures out what's really needed and what isn't for each entry point into a library, and then using that to create a dependency tree trimmer. My suspicion is that with something like that in use, deployments could get a lot smaller.
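
    A very rough first-pass sketch of the idea (assuming a Node/TypeScript project, naive regex-based import scanning, and a hypothetical ./src/index.ts entry point; dynamic imports, re-exports, and per-symbol analysis are all left out):

      import { readFileSync } from "node:fs";
      import { dirname, resolve } from "node:path";

      // Naive scan for static imports/requires; a real tool would use a proper parser.
      const importRe = /from\s+["']([^"']+)["']|require\(\s*["']([^"']+)["']\s*\)/g;

      // Walk only the files reachable from an entry point and collect the external
      // package names that are actually imported along the way.
      function reachablePackages(entry: string, seen = new Set<string>(), pkgs = new Set<string>()): Set<string> {
        if (seen.has(entry)) return pkgs;
        seen.add(entry);
        const src = readFileSync(entry, "utf8");
        for (const m of src.matchAll(importRe)) {
          const spec = m[1] ?? m[2];
          if (!spec) continue;
          if (spec.startsWith(".")) {
            // Local file: recurse (naively assumes plain .ts files, no index resolution).
            reachablePackages(resolve(dirname(entry), spec) + ".ts", seen, pkgs);
          } else {
            // External package: keep the package name ("lodash/merge" -> "lodash").
            pkgs.add(spec.startsWith("@") ? spec.split("/").slice(0, 2).join("/") : spec.split("/")[0]);
          }
        }
        return pkgs;
      }

      // Anything declared in package.json but never reached is a candidate for trimming.
      const declared = Object.keys(JSON.parse(readFileSync("package.json", "utf8")).dependencies ?? {});
      const used = reachablePackages("./src/index.ts");
      console.log("possibly unused:", declared.filter((d) => !used.has(d)));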

    --
    "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
    • (Score: 0) by Anonymous Coward on Saturday May 10, @01:58AM

      by Anonymous Coward on Saturday May 10, @01:58AM (#1403260)

      Agreed. I'm also wondering, though I'm not sure, whether development tools could tell the developer how many libraries get pulled in by the call (function / method) the programmer wants to use.

      That, and/or the dev tools could traverse the libraries and pull in just the wanted function's code (source or object) up front.

    • (Score: 0) by Anonymous Coward on Saturday May 10, @02:00AM

      by Anonymous Coward on Saturday May 10, @02:00AM (#1403261)

      Sorry, I said what you said. Mind is on something else...

    • (Score: 0) by Anonymous Coward on Saturday May 10, @03:43AM

      by Anonymous Coward on Saturday May 10, @03:43AM (#1403266)

      > apps using millions of lines of code to open a garage door

      My ancient garage door opener (the kind with jumpers to set the code) is more secure than any of these new-fangled ones. Simple: the motor that opens the door is on the same circuit as the garage light. We only turn on the light when we want to open the door (the light switch is inside the house) and leave it on when we go out for short local trips. If we are away for any length of time, that light is off and the radio receiver & motor have no power.

    • (Score: 4, Interesting) by istartedi on Saturday May 10, @05:27AM (1 child)

      by istartedi (123) on Saturday May 10, @05:27AM (#1403269) Journal

      It sounds like a thankless task and a moving target, especially on the web. Consider the domains you have to allow when running a script blocker. I used to be able to check off a few reasonable ones, and leave obvious garbage disabled because I knew a site would work without it. More recently I've noticed that not only is the list of domains longer, it also leads to an expanding tree when you enable all of them, and this can go on for two or more levels - I don't really know how much further, because it always makes me ask "Do I really need to make that site work?". A lot of times, the answer is "No".

      Good ol' Soylent. Everything is coming from soylent.org. Sigh. The good ol' days.

      --
      Appended to the end of comments you post. Max: 120 chars.
      • (Score: 0) by Anonymous Coward on Sunday May 11, @05:30AM

        by Anonymous Coward on Sunday May 11, @05:30AM (#1403375)

        You trust those goons?! That's the only one I block.

    • (Score: 2) by Ken_g6 on Saturday May 10, @06:54AM (2 children)

      by Ken_g6 (3706) on Saturday May 10, @06:54AM (#1403272)

      That sounds rather like the Halting Problem. You never know what set of conditions might lead to a library being needed. Though you could run code through a profiler and find what libraries definitely are needed, which might narrow down your list a little.

      I also found something for Python [github.com], but all it does is find tiny Python libraries you could inline instead. It helps you trim the leaves, but not the branches.

      • (Score: 2) by Thexalon on Saturday May 10, @10:43AM (1 child)

        by Thexalon (636) on Saturday May 10, @10:43AM (#1403284)

        You never know what set of conditions might lead to a library being needed.

        Yes you do: You need to somehow somewhere reference a symbol from that library. No symbol, no usage.

        My very rough starting point for how I might actually pull this off:
        1. We have branch coverage tools already.
        2. We read the top-level entry point code and generate inputs that hit every branch, taking note of every function call in the process.
        3. We recursively repeat that process for every function that's called that's within the library being tested: Generate input variations to the original function call that hit every branch.
        4. We also collect a list of functions being called from external libraries. We can do this by following the imports.

        As an added bonus, you could also use something like this to list out your unit test cases.

        Do I think it would be easy? Heck no. Especially not with code that uses something implicitly rather than explicitly. But there's a lot of distance between "easy" and "impossible".

        --
        "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
        • (Score: 2) by driverless on Saturday May 10, @06:21PM

          by driverless (4770) on Saturday May 10, @06:21PM (#1403324)

          Do I think it would be easy? Heck no.

          In fact it's impossible. Think about what it would take to test all the error paths, which often make up a significant percentage of the code.

    • (Score: 2) by JoeMerchant on Saturday May 10, @01:35PM

      by JoeMerchant (3937) on Saturday May 10, @01:35PM (#1403292)

      Your system would be efficient, but nobody is incentivized to provide your standardized modularization for later trimming.

      They can't be bothered to figure out what breaks when they remove this or that from their library, it's hard enough to make it sort of work as a whole system, now you're going to try to fragment it into many pieces and run all the permutations of what can be safely removed in which cases?

      Great theory. Great in practice, if you got everybody to sign up to do the extra work. Not going to happen in anything resembling today's FOSS community.

      --
      🌻🌻🌻 [google.com]
    • (Score: 2) by VLM on Saturday May 10, @02:53PM

      by VLM (445) Subscriber Badge on Saturday May 10, @02:53PM (#1403300)

      You need immutable deployments for your "compiler optimizer for interpreters"; otherwise the change from ver 1.2.3 to 1.2.3.1 might suddenly add actual use of "D" to "A", and, being an interpreter, you won't find out until that branch of code crashes at runtime.

      Yeah I like the developer experience of interpreters too, but there's something to be said for a nice (unfortunately slow) optimizing compiler/linker system. Nobody wants the dev experience of writing web apps in compiled C, but it does have its theoretical advantages...

  • (Score: 2) by turgid on Saturday May 10, @09:41AM (4 children)

    by turgid (4318) Subscriber Badge on Saturday May 10, @09:41AM (#1403281) Journal

    trying to get a request for on-prem hardware signed off nowadays is a bit like asking for a coal-fired electricity generator

    Tell me about it. We can't have Software Engineers taking responsibility for their own success and getting more work done!

    Things simply *have* to be in the Magically Secure Cloud

    Which is expensive, slow and not particularly secure. Plus it's on the other end of the Internet. What are they doing with my data? Do I trust them? Why?

    and software needs to be developed agile, with frameworks.

    That's not actually what "agile" means, and yet: frameworks, platforms, containers (Docker), Best Practice, Python, poetry, VSCode/Electron, WSL...

    apps using millions of lines of code to open a garage door

    Quite. "But it's faster. If you wrote it yourself it would need debugging." Except when I write a working, tested piece of C to solve the problem while you're still getting your pipenv or Visual Studio working.

    Finally, I talk about an actual useful piece of software I wrote as a proof of concept that one can still make minimal and simple yet modern software.

    Good for you. I'm still coding away in C, bash and make myself, with my own home-grown tools. Meanwhile, C++ becomes ever more complex and my brain can't keep up. I've only got the brain the Good Lord gave me and it ain't getting any better.

    • (Score: 3, Insightful) by JoeMerchant on Saturday May 10, @01:37PM (3 children)

      by JoeMerchant (3937) on Saturday May 10, @01:37PM (#1403294)

      > while you're still getting your pipenv or Visual Studio working.

      That's one of my evaluation parameters for any solution: how long does it take to set up the tool chain? Can I reproduce this tool chain in the future?

      Using those metrics, one big hammer (like the Qt libraries) makes everything look like a nail.

      --
      🌻🌻🌻 [google.com]
      • (Score: 2) by VLM on Saturday May 10, @03:06PM (2 children)

        by VLM (445) Subscriber Badge on Saturday May 10, @03:06PM (#1403301)

        how long does it take to setup the tool chain? Can I reproduce this tool chain in the future?

        Look into development containers in Docker. The whole thing. It's kind of fun. You can get code studio or emacs or vi and lint and everything else in a container. You can do a lot in a Dockerfile, if you want.

        For some projects I've worked on, I can pull not just the software version 1.2.3 but I can also pull the exact binary version of the development system I used for version 1.2.3.

        It's old tech to distribute ready to run binaries in Docker. Medium age tech to write your own dockerfile to import a specific immutable ubuntu and then install specific versions of build-essential and gnu make, etc, then compile an immutable reproducible binary in the previous line. The next step is why not start with the previous build-only CICD container, and create a development environment by installing a specific version of vim, lint, openssh with keys, maybe an rdesktop if you like RDP or kasm if you like https access to X, etc.

        • (Score: 2) by JoeMerchant on Saturday May 10, @03:43PM

          by JoeMerchant (3937) on Saturday May 10, @03:43PM (#1403309)

          Yeah, I set up a Docker image for a product as a poor man's immutable system. Instead of running a traditional container, we spin up a new one from the image at every boot - thus it never gets corrupted, which the regular system did fairly often during development. In theory that shouldn't happen in the field, but after you've seen a dozen developers screw it up and not know what happened, it seems like tempting fate to push that solution out to thousands of field installs.

          Other than that, I mostly just use vanilla Docker images for things like Home Assistant and Frigate (which I'm playing with this morning) - as well as Music Assistant, the Mosquitto server, and friends.

          >why not start with the previous build-only CICD container

          Security. Do that too visibly and some security wonk is going to come around and whine about how out of date everything is in your container, how many CODE RED LEVEL 10 CVSS CRITICAL ACTIVELY EXPLOITED vulnerabilities you have - 99-100% of which could never be actually exploited in your system, but evaluating actual risk isn't his/her job, stoking fear in management is his/her job.

          I dodged a bullet with docker-compose, kept our system 100% in docker-ce, when their license shift happened we started getting lawyer e-mail about how we had to enumerate our license count and give a project code for charging the licenses to... I think I explained docker-ce three times before they quit threatening me (because "docker" showed in our SBOM.)

          --
          🌻🌻🌻 [google.com]
        • (Score: 3, Insightful) by turgid on Saturday May 10, @04:37PM

          by turgid (4318) Subscriber Badge on Saturday May 10, @04:37PM (#1403312) Journal

          Why would you use Docker when you could set up a proper VM using something like VirtualBox? How hard is it to install the right compiler, editor and libraries these days? If you script the installation of the development environment, everyone can have it when and wherever they want. I did that once for a bunch of CentOS machines a few years ago. It was modular. You could install everything (partition disks, install OS, install all the binary RPMs, install from binary tarballs, download source tarballs, configure && make && make install) or just the last part.
