In which ESR pontificates on the future while reflecting on the past.
I was thinking a couple of days ago about the new wave of systems languages now challenging C for its place at the top of the systems-programming heap – Go and Rust, in particular. I reached a startling realization – I have 35 years of experience in C. I write C code pretty much every week, but I can no longer remember when I last started a new project in C!
...
I started to program just a few years before the explosive spread of C swamped assembler and pretty much every other compiled language out of mainstream existence. I'd put that transition between about 1982 and 1985. Before that, there were multiple compiled languages vying for a working programmer's attention, with no clear leader among them; after, most of the minor ones were simply wiped out. The majors (FORTRAN, Pascal, COBOL) were either confined to legacy code, retreated to single-platform fortresses, or simply ran on inertia under increasing pressure from C around the edges of their domains.

Then it stayed that way for nearly thirty years. Yes, there was motion in applications programming: Java, Perl, Python, and various less successful contenders. Early on these affected what I did very little, in large part because their runtime overhead was too high for practicality on the hardware of the time. Then, of course, there was the lock-in effect of C's success; to link to any of the vast mass of pre-existing C you had to write new code in C (several scripting languages tried to break that barrier, but only Python would have significant success at it).
This is one to RTFA rather than summarize. Don't worry, this isn't just ESR writing about how great ESR is.
(Score: 2) by RamiK on Saturday November 11 2017, @12:26AM (5 children)
Similar things have been said about C by assembly programmers. The fact of the matter is, the C memory model is reaching its EOL as bounded pointers or capabilities (and pause-less garbage collection) are being baked into the hardware. C will be dragged along kicking and screaming through the C++ type system. But the performance advantage just won't be there once the new languages get their compilers up and running. Go especially puts all its memory-addressing machinery within the compiler's domain or under the "unsafe" package, so it's highly probable to be the language of choice for user-lands on new hardware.
compiling...
(Score: 0) by Anonymous Coward on Saturday November 11 2017, @03:20AM (2 children)
Not only does C++ provide suitably low-level access to hardware, but it also provides a fairly strict type system and robust abstraction facilities.
What with modern syntactic sugar and a push to include most of the kitchen sink in the standard library, why isn't C++ the potential heir?
(Score: 3, Informative) by RamiK on Saturday November 11 2017, @12:41PM (1 child)
C/C++ is compatible with modern hardware in the same way Intel's x86 MOV instruction is Turing-complete or CRISPR is suitable for carpentry. It's technically true. But oh God...
But let's put aside whatever any Google search for "why C++ is bad" can yield and give some concrete examples of why C/C++ is starting to collect some rust that won't be easily removed:
1. Read through https://www.cl.cam.ac.uk/~dc552/papers/asplos15-memory-safe-c.pdf [cam.ac.uk] carefully. Other approaches to bounded pointers, like NV-RISC, similarly can't really work around these issues. Only Mill's belt machines are claimed to solve the harder problems without significant user-land rewrites (and indeed they're using C++), though the compilers still need a lot of work, and running Linux on them will be an abysmal waste even if you can get the performance. Regardless, if Java taught us anything, it's that many software developers, companies, and governments will go to extraordinary lengths to avoid vendor lock-in. Either way, since Go (re)moves the memory-addressing details into the unsafe package and the compiler, it should stay portable across different fat-pointer solutions even at the kernel level, let alone the user-land, where C/C++ definitely won't.
2. Look up how C++11 atomics tied x86 and C++ threading together (one example: https://stackoverflow.com/questions/29922747/does-the-mov-x86-instruction-implement-a-c11-memory-order-release-atomic-store [stackoverflow.com] ). This has already led to rewrites for ARM. I can't even imagine how they're going to sort through this mess for newer machines.
3. Hardware-assisted garbage collection: there have been a few papers since around 2007, all proposing different mechanisms to get pause-less, cheap, and concurrent GC through the hardware. One recent and less invasive approach is https://people.eecs.berkeley.edu/~maas/papers/maas-asbd16-hwgc.pdf [berkeley.edu] . Whichever you choose, manually freeing memory will become less and less relevant when even the kernel ends up using GC.
Anyhow, there are better examples and better-articulated papers covering these points, but I think it's clear how C/C++ is not ideal for the task where modern languages are. Again, I believe C/C++ can be twisted onto any hardware you'd like. But once you're dealing with Google coming up with their own capability-based design, Microsoft coming out with another one, Intel with a third machine, and Apple with a fourth... Well, you start needing languages that move the low-level memory details into the compiler and let you write code you can actually execute on different platforms without rewrites or reading through book-sized specs detailing undefined behavior, at least in the user-land.
Overall, I'm not saying people won't run plenty of C/C++ on the newer platforms for years to come. But there are real hardware reasons, going beyond wishful thinking and language features, that make me believe ESR is correct in saying C/C++ is losing its appeal in systems development the same way C++ lost its place to Java in business-oriented application development, even if I don't have exact figures to prove it.
(Score: 0) by Anonymous Coward on Saturday November 11 2017, @03:41PM
I'll definitely look into those points further.
However, the C++ community really prides itself on providing a language and a set of libraries that offer both useful degrees of abstraction and every last cycle of performance. While old code might need a rewrite, I bet that C++ will provide the tools necessary to handle any such new hardware features with ease, and it will be just like having a brand-new language, yet one that builds atop an existing one.
(Score: 0) by Anonymous Coward on Saturday November 11 2017, @10:56AM (1 child)
Do remember some history, please. A grand announcement of a shiny new silver bullet comes every week; it getting "baked into the hardware" "any minute now, we swear!!!" comes several times a year; and any of it ending up anywhere but in the murky waters of Lethe happens maybe twice a decade at best.
What new capabilities have we gotten in the hardware in this last decade that actually affected programming practices? Virtualization. That's it.
And that is 1 (one) real thing out of how many advertising spiels?
(Score: 2) by RamiK on Saturday November 11 2017, @01:36PM
By "capabilities" I was referring to https://en.wikipedia.org/wiki/Capability-based_addressing [wikipedia.org]
ARM going with multi-stage branch prediction and Jazelle enabled Android's Java app development. Quite different programming practices when you're not dealing with pointers and have a GC.
Nvidia and AMD switching from VLIWs to RISC brought GPU compute to HPC, which is a big segment of servers. A very different workflow, with profiling and unit testing taking priority.
Virtualization on servers was/is huge. Whole business models and software markets came and went/stayed over it. It's not "just" anything by any means when you have distro developers bisecting commits and running a system clone on qemu to find which user-land change halted boot to X.
DRM moved game development from general-purpose machines to consoles. Well, the Cell architecture came and went, so it's not quite the last decade... But the way game engines became the target, rather than each game developer making their own engine even for AAA games, is a huge change in practice that was only possible because the hardware forced it.
You can't even say the desktops went unaffected, since they clearly lost market share to everything from smartphones to tablets to streamers over power usage, to the point that Microsoft is sweating and Intel has been busy failing at Atom and switching to a value-adding scheme ( https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/baumann-hotos17.pdf [microsoft.com] ) as it rapidly loses market share.
Does it matter how many? The point is that there are gradual changes altering practices all the time. More unit testing. More scripting to glue C/C++ instead of writing C. More GUIs in browsers instead of native, which also means more server-client designs even for locally run apps... And a lot fewer pointers.