"We have now sunk to a depth in which restatement of the obvious is the first duty of intelligent men." (George Orwell).
Few people remember this, but back in 1999 there was a bit of an uproar in the IT community when Intel dared to introduce a unique, retrievable ID, the Processor Serial Number (PSN), in its new Pentium III CPU.
It is kinda hard to believe, but that little privacy backlash was strong enough to force Intel to withdraw the feature, starting with Tualatin-based Pentium IIIs. That withdrawal lasted until around 2013, when the ID was (silently) reintroduced as the Protected Processor Identification Number (PPIN) with Intel's Ivy Bridge server processors.
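(For the technically curious: on Linux the PPIN is just another model-specific register, so a minimal sketch of reading it might look like the following. This assumes an Intel server part that implements PPIN, the `msr` kernel module loaded, root privileges, and that firmware has left the feature enabled via its control register; the MSR addresses match the ones the Linux kernel uses, but treat the rest as illustrative, not as anything from the article.)

    # Minimal sketch (not from the article): read the PPIN on Linux.
    # Assumes `modprobe msr` has been run and we are running as root.
    import struct

    MSR_PPIN = 0x4F  # address per Linux's msr-index.h; 0x4E is PPIN_CTL

    # The msr driver maps the file offset to the MSR address, so a
    # seek-and-read of 8 bytes performs an RDMSR on CPU 0. The read
    # fails with EIO if PPIN_CTL has the feature locked off.
    with open("/dev/cpu/0/msr", "rb") as f:
        f.seek(MSR_PPIN)
        ppin, = struct.unpack("<Q", f.read(8))

    print(f"PPIN: {ppin:#018x}")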
So, only a good 25 years ago we still believed in privacy. Now we still do, perhaps, but somehow the industry has moved the needle to obligatory consent -- with no opt-out possibility [scss.tcd.ie] -- to any and all privacy violations that can be dreamt up in Big (and Not So Big) Tech boardrooms.
Something similar is happening with software, argues Bert Hubert in a piece for IEEE Spectrum. Where once on-premises software and hardware were the rule, trying to get a request for on-prem hardware signed off nowadays is a bit like asking for a coal-fired electricity generator. Things simply *have* to be in the Magically Secure Cloud, and software needs to be developed agile, with frameworks.
The way we build and ship software these days is mostly ridiculous, he claims: apps use millions of lines of code to open a garage door, and simple programs import 1,600 external code libraries [github.com]. Software security is dire, a function of both the quality of the code and the sheer amount of it; a rough way to put a number on your own project's share of that bloat is sketched below.
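(As a crude illustration -- my sketch, not the article's method -- here is one way to count the packages a build actually pulls in, assuming a Rust project with a Cargo.lock and Python 3.11+ for the stdlib TOML parser; the same idea works against package-lock.json or go.sum.)

    # Rough sketch (not from the article): count the transitive
    # dependencies pinned in a Rust project's Cargo.lock.
    import tomllib  # in the standard library since Python 3.11

    with open("Cargo.lock", "rb") as f:
        lock = tomllib.load(f)

    # Every [[package]] entry is one crate the build pulls in
    # (the count includes the project's own crates as well).
    print(f"{len(lock.get('package', []))} pinned packages")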
Let me briefly go over the terrible state of software security, and then spend some time on why it is so bad. I also mention some regulatory and legislative things going on that we might use to make software quality a priority again. Finally, I talk about an actually useful piece of software I wrote as a proof of concept that one can still make minimal and simple yet modern software. [ieee.org]