
posted by Fnord666 on Friday November 10 2017, @06:23PM   Printer-friendly
from the C,-C-Rust,-C-Rust-Go,-Go-Rust-Go! dept.

In which ESR pontificates on the future while reflecting on the past.

I was thinking a couple of days ago about the new wave of systems languages now challenging C for its place at the top of the systems-programming heap – Go and Rust, in particular. I reached a startling realization – I have 35 years of experience in C. I write C code pretty much every week, but I can no longer remember when I last started a new project in C!
...
I started to program just a few years before the explosive spread of C swamped assembler and pretty much every other compiled language out of mainstream existence. I'd put that transition between about 1982 and 1985. Before that, there were multiple compiled languages vying for a working programmer's attention, with no clear leader among them; after, most of the minor ones were simply wiped out. The majors (FORTRAN, Pascal, COBOL) were either confined to legacy code, retreated to single-platform fortresses, or simply ran on inertia under increasing pressure from C around the edges of their domains.

Then it stayed that way for nearly thirty years. Yes, there was motion in applications programming; Java, Perl, Python, and various less successful contenders. Early on these affected what I did very little, in large part because their runtime overhead was too high for practicality on the hardware of the time. Then, of course, there was the lock-in effect of C's success; to link to any of the vast mass of pre-existing C you had to write new code in C (several scripting languages tried to break that barrier, but only Python would have significant success at it).

This is one to RTFA rather than summarize. Don't worry, this isn't just ESR writing about how great ESR is.



Related Stories

Half of Curl's Security Vulnerabilities Due to C Mistakes 83 comments

curl developer Daniel Stenberg has gone through his project's security problems and calculated that 51 out of curl's 98 security vulnerabilities have been C mistakes. The total number of bugs in the database is about 6.6k, meaning that not quite 1.5% have been security flaws.

Let me also already now say that if you check out the curl security section, you will find very detailed descriptions of all vulnerabilities. Using those, you can draw your own conclusions and also easily write your own blog posts on this topic!

This post is not meant as a discussion around how we can rewrite C code into other languages to avoid these problems. This is an introspection of the C related vulnerabilities in curl. curl will not be rewritten but will continue to support backends written in other languages.

It seems hard to draw hard or definite conclusions based on the CVEs and C mistakes in curl's history due to the relatively small amounts to analyze. I'm not convinced this is data enough to actually spot real trends, but might be mostly random coincidences.

After the stats and methodology, he goes into more detail about the nature of the 51 bugs and the areas in the program (and library) where they occur. In general, the problems sort out into buffer overreads, buffer overflows, use after frees, double frees, and NULL mistakes.

Previously:
(2020) curl up 2020 and Other Conferences Go Online Only
(2019) Google to Reimplement Curl in Libcrurl
(2018) Daniel Stenberg, Author of cURL and libcurl, Denied US Visit Again
(2018) Twenty Years of cURL on March 20, 2018
(2018) Reducing Year 2038 Problems in curl
(2017) Eric Raymond: "The long goodbye to C"



  • (Score: 3, Insightful) by bob_super on Friday November 10 2017, @06:32PM (2 children)

    by bob_super (1357) on Friday November 10 2017, @06:32PM (#595260)

    Oh wait, it's: "if it's weird in ways that everyone understands and is used to living with", then why fragment to try to fix it?

    • (Score: 2) by JoeMerchant on Friday November 10 2017, @07:31PM

      by JoeMerchant (3937) on Friday November 10 2017, @07:31PM (#595291)

      From "Surely You're Joking, Mr. Feynman!" [northwestern.edu]:

      I thought my symbols were just as good, if not better, than the regular symbols--it doesn't make any difference what symbols you use--but I discovered later that it does make a difference. Once when I was explaining something to another kid in high school, without thinking I started to make these symbols, and he said, "What the hell are those?" I realized then that if I'm going to talk to anybody else, I'll have to use the standard symbols, so I eventually gave up my own symbols.

      --
      🌻🌻 [google.com]
    • (Score: 1) by khallow on Friday November 10 2017, @08:05PM

      by khallow (3766) Subscriber Badge on Friday November 10 2017, @08:05PM (#595308) Journal

      then why fragment to try to fix it?

      The real cost to language fragmentation is code migration and learning the quirks of the language. Both are one-time costs and the former can be heavily automated.

  • (Score: 0) by Anonymous Coward on Friday November 10 2017, @06:58PM (19 children)

    by Anonymous Coward on Friday November 10 2017, @06:58PM (#595274)

    Then, of course, there was the lock-in effect of C's success; to link to any of the vast mass of pre-existing C you had to write new code in C

    and now you can use C++, so why use C? C is only nice if you want to make libraries where ABI stability is paramount. Otherwise you can use other things, even JavaScript (a.k.a. Node.js).

    Rust? Go? Go doesn't even have dynamic linking support. So how is it going to help me leverage my OS security support when everything is statically linked in? And just the other year Scala and Erlang were supposed to be the "it" languages. Oh well...

    • (Score: 1, Informative) by Anonymous Coward on Friday November 10 2017, @07:18PM

      by Anonymous Coward on Friday November 10 2017, @07:18PM (#595284)

      Go doesn't even make dynamic linking support. So how is it going to help me leverage my OS security support when everything is linked in?

      It is not going to help there. You'll have to return to the bad old days before shared libraries where a security fix involved recompiling everything that depended on the library, and reinstalling it all.

    • (Score: 2) by pvanhoof on Friday November 10 2017, @07:27PM (14 children)

      by pvanhoof (4638) on Friday November 10 2017, @07:27PM (#595289) Homepage

      "C is only nice if you want to make libraries where ABI stability is paramount. "

      ABI stability is possible with C++, but it's indeed very hard. It's also hard with C, though: you have to add padding members and spare space to your structs so that you can add members later, do the same in C++ structs and classes, understand data alignment, and so on.
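      A minimal sketch of that padding idiom (struct and field names hypothetical):

```c
#include <stdint.h>

/* Public struct in a library header. The reserved tail lets later
 * versions add members without changing sizeof(struct widget),
 * so binaries built against v1 keep working with a newer library. */
struct widget {
    uint32_t id;
    uint32_t flags;
    /* v2 added 'color' here, consuming one reserved slot: */
    uint32_t color;
    uint32_t reserved[5];   /* was reserved[6] in v1; size unchanged */
};
```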

      Yet ABI stability is super important for popular libraries. Think of GLib and Qt, among others. We have the semver.org rules, but the semver.org rules are hard to uphold for an ABI.

      A language, compiler, or standard, for C or C++, that makes this easier could displace C.

      We have D as an interesting contender. There is Vala, which generates GLib C code. One could claim that Qt, with moc and C++ and QMetaData, provides some sort of an ABI.

      ...

      • (Score: 4, Interesting) by bzipitidoo on Friday November 10 2017, @09:21PM (12 children)

        by bzipitidoo (4388) on Friday November 10 2017, @09:21PM (#595353) Journal

        Libraries, and the way they are accessed, are the problem. Without the header file (written in C/C++, of course, which makes C by far the easiest language from which to use these library functions), there's no way to be sure of the details of the parameters to pass. That information is not part of the library file that contains the compiled functions. It should have been.

        Over the decades, we've built up a huge code base in these C library file formats, and it'll take a lot of work to phase in a better, more informative and language independent library file format. I can't see Linux, X, and Firefox all being rewritten in some other programming language any time soon.

        • (Score: 0) by Anonymous Coward on Friday November 10 2017, @10:01PM

          by Anonymous Coward on Friday November 10 2017, @10:01PM (#595369)

          They'll just be replaced entirely.

        • (Score: 0) by Anonymous Coward on Friday November 10 2017, @10:27PM (10 children)

          by Anonymous Coward on Friday November 10 2017, @10:27PM (#595380)

          Including "details of the parameters to pass" in the compiled library is of little use.

          Suppose a struct layout changes. Now what? Are you proposing to parse something, then dynamically convert between the layout used by the library and the layout used by the program? This is a performance killer; at that point you may as well be writing in an interpreted language.

          Suppose a field goes missing in the new library, but the program depended on that field. The typical behavior for an interpreted language would be to run for a while, modifying your data files, and then suddenly crash with an exception. People expect better of compiled programs, and anyway the whole point of a compiled program is to quickly and directly access the data. Type checking is supposed to happen at compile time, not run time, both for performance and for crash avoidance.

          At best, we could ask that the program loader (ld.so runtime linker) refuse to run when there is any mismatch at all. This only requires a hash. Simply hash the data structures and function prototypes, include this hash in both the library and program, and compare the hashes for equality at startup. Mismatches prevent running the program.
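          A sketch of that hashing idea, using FNV-1a over the interface text (the interface string and names are hypothetical; a real implementation would hash the post-preprocessing declarations and embed the result in both the library and the program):

```c
#include <stdint.h>

/* 64-bit FNV-1a over a string of declarations. Both sides embed
 * iface_hash(IFACE); a loader would refuse to run on mismatch. */
static uint64_t iface_hash(const char *s) {
    uint64_t h = 14695981039346656037ULL;   /* FNV-1a offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;              /* FNV-1a prime */
    }
    return h;
}

#define IFACE "int frob(struct thing *t, unsigned long n);"
```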

          Of course, that only covers things in a crude sense. Library behavior may change without any change to the structs or function prototypes. Consider a library function that implicitly opens a database connection. In a newer version of the library, there is a separate call that must be made to do this. Consider a library with locking. In a new version, the lock ordering changes, causing usage of the old locking to create a deadlock. Consider a library that returns a pointer to a struct. In a new version, the allocation may change (static, malloc, new, mmap...) in a way that causes old library users to crash or run out of memory.

          • (Score: 4, Interesting) by Grishnakh on Friday November 10 2017, @11:04PM (1 child)

            by Grishnakh (2831) on Friday November 10 2017, @11:04PM (#595393)

            The typical behavior for an interpreted language would be to run for a while, modifying your data files, and then suddenly crash with an exception. People expect better of compiled programs, and anyway the whole point of a compiled program is to quickly and directly access the data. Type checking is supposed to happen at compile time, not run time, both for performance and for crash avoidance.

            I disagree about people expecting better. Everyone these days loves Python, which exemplifies exactly the unexpected crashing behavior you mention, so I think people now are quite happy with software that crashes regularly with undecipherable exceptions. They're also demonstrably happy with software that's very slow: witness the widespread use of Electron apps. If people actually cared about performance, they wouldn't be using or developing those, or all the Python apps that are all the rage now.

            This is a performance killer; at that point you may as well be writing in an interpreted language.

            We might as well anyway, since performance isn't a priority any more.

            • (Score: -1, Troll) by Anonymous Coward on Friday November 10 2017, @11:17PM

              by Anonymous Coward on Friday November 10 2017, @11:17PM (#595399)

              That's how you indicate that you are quoting someone else.

          • (Score: 2) by bzipitidoo on Saturday November 11 2017, @01:39AM (6 children)

            by bzipitidoo (4388) on Saturday November 11 2017, @01:39AM (#595435) Journal

            Dynamically convert? Type checking at run time? Of course not. The info about the parameters should have been in an easier-to-parse format than C source code. That info is not that complicated. Ambiguous items, such as the order of members in a structure, should have been specified exactly. As it is, to parse those C header files you need pretty much an entire C compiler; you can't get by with even a subset of C. They love using macros in header files. It's SOP to bracket the entire header in an include-guard macro to prevent duplicate definitions in case the header is included more than once. Even when the redefinitions are exactly the same, because they're from the same header file, you still have to wrap it in a macro. It's a rotten way to handle that problem, but that's how the C libraries do it and we've been stuck with that method for decades.

            Then there's the problem of duplicate function names in different libraries, which for years was handled by an unwritten rule that library writers should prefix all their function names with the name of the library. The namespace extensions to C address this issue. Of course, the oldest libraries, such as stdlib, stdio, and math, do not follow that custom. They were created before name collision became a serious issue.

            Another mistake was the handling of variable numbers of parameters, the so-called variadic functions. The core language doesn't do that, but it was wanted anyway, so the designers cobbled on an ugly extension so it could be done. They used it immediately in the venerable stdio functions, printf and friends.
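            The extension in question is <stdarg.h>; a minimal example of the machinery:

```c
#include <stdarg.h>

/* Variadic sum: a count followed by that many ints. The callee has
 * no way to check the count or the types; it must trust the caller,
 * which is exactly the fragility being complained about. */
static int sum(int count, ...) {
    va_list ap;
    int total = 0;
    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
```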

            Most languages dump the problem of calling C library functions on the language users. Trying to call between languages, like, calling a C routine from a FORTRAN or Perl program, has always been way too hard to do. Some compensate by trying to provide their own native libraries for everything. Java is an extreme example of that. There are also utilities to convert Pascal and FORTRAN source code to C. Last time I tried to link a Perl 5 program to a C library, I gave up on that approach. I tried the direct approach, then SWIG. SWIG is one of those code generators that spews out an astonishing amount of source code. Like, 100K of source code, just to make less than 6 library calls?? I instead made a wrapper in C with the library functions compiled into it, to receive and send parameter data and results through a socket. The Perl program called on the OS to launch the C program in a separate process.

            I understand Python and Perl 6 have conceded on this and can easily call C library functions, have the means to do so built into the language, relieving programmers of that burden.

            • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @06:15AM (5 children)

              by Anonymous Coward on Saturday November 11 2017, @06:15AM (#595511)

              Usually when people complain that a binary library does not describe the data structure layout or parameters of functions, the issue is compatibility between library versions.

              I now see that you seem to be complaining that you'd rather not parse C code at all. This is a tough issue for me to be sympathetic to, since I love C. More C please! I'll try though...

              The info you want is in fact provided within every normal compiled library.

              For platforms like Linux, using the ELF object format and a SysV ABI, the libraries can be in *.so form (shared) or in *.a form (not shared). Either way, the files installed by all normal systems will contain DWARF3 debug information. There are libraries that can parse this; your favorite language will almost certainly have bindings for one. There are tools to dump out DWARF3 data in a human-readable form.

              On the Windows platform, normal libraries are in some sort of PE-COFF format. You get *.dll files for runtime use, *.lib files for linking both shared and static binaries, and *.pdb files for debug info. The info you want is in the *.pdb files at least; it might be elsewhere too. Again, there are libraries and tools to deal with *.pdb files.

              Your language interpreter probably ought to provide built-in functionality to handle this stuff, making it possible for you to simply call into a normal library.

              • (Score: 2) by bzipitidoo on Saturday November 11 2017, @12:50PM (4 children)

                by bzipitidoo (4388) on Saturday November 11 2017, @12:50PM (#595567) Journal

                I had forgotten about DWARF. Checking, I read DWARF5 was released this year.

                Yes, for library info, I would prefer a small subset of C, or, since C perhaps isn't the best tool for that job, another simpler language entirely, like what this DWARF sounds like. Why should programmers using other languages also have to use C? Spoils the point of using a "better" language than C if you still have to use C. Why make all other languages somewhat less than complete by not being able to call library functions? Alternatively, why are library functions not more universal, easily called from any program? Language designers dropped the ball on this matter. One of the heaviest workarounds of this issue is to launch separate processes, rather like a shell script does when it hooks together a bunch of utilities with pipes.

                It's sweet that you love C, but don't let that blind you to its many shortcomings. To wit, if C is so great, why do we have Makefiles? Why are those a language of their own, rather than more C?

                A shortcoming of most programming languages is their abysmal handling of declarations of complicated data structures, giving rise to abominations such as XML, and the slightly better YAML (a superset of JSON). C++ has stepped up a little on this matter, thanks to "aggregate initialization" added in C++11 and improved in C++14. But it's still crappy. It's not just C, it's most programming languages. Even Python stinks at this.

                I'll give an example. Suppose you want in your data the 2 and 1 letter chemical element abbreviations. You could do an array of strings: {"H", "He", "Li", "Be", "B", ... } Think that looks pretty good? It doesn't! Why couldn't it be initialized like this: "H He Li Be B" ? In JavaScript, it can be done that way by appending .split(" ") to that string. Not ideal, but much better than having to flood the source code with dozens of quote marks and commas, all because programming languages don't do decent data serialization natively, with their own syntax, nooo, they choke on their own dogfood and push programmers to do it programmatically. Got to call a function, maybe even a YAML library function, passing it a string or even a file name.

                Then they screw up the programmatic way by casting everything to variable status, can't let it stay constant. Why is it that in C, something like #define SIZE 100 was preferred to const int SIZE=100, for array sizes?
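                For comparison, the closest C gets to that .split(" ") one-liner is a helper like this (hypothetical function, illustrating the point that C needs a call and in-place mutation where literal syntax would do):

```c
#include <string.h>
#include <stddef.h>

/* Tokenize a writable string in place on spaces, filling 'out' with
 * pointers into it. Note the string must be mutable: strtok writes
 * NUL bytes over the separators. */
static size_t split_spaces(char *s, const char *out[], size_t max) {
    size_t n = 0;
    for (char *tok = strtok(s, " "); tok && n < max; tok = strtok(NULL, " "))
        out[n++] = tok;
    return n;
}
```

Usage: given `char buf[] = "H He Li Be B";` and `const char *elems[8];`, the call `split_spaces(buf, elems, 8)` yields 5 tokens, with `elems[1]` pointing at "He".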

                C is also notorious for its awful pointer syntax. One of the big selling points of Java was dumping those asterisks, bragging that there aren't any pointers in Java. Might as well program in Perl if you have to have asterisks in front of most of your variable names. C++ added the ampersand syntax to function parameters, and that helps some. How about the thrilling syntax for function pointers? And then, function name mangling! They had to support polymorphism somehow, but ouch is that ugly. That sort of thing makes connecting to a library a nightmare.

                • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @08:31PM (3 children)

                  by Anonymous Coward on Saturday November 11 2017, @08:31PM (#595722)

                  People who prefer non-C being forced to do a bit of C is nothing compared to the trouble they cause for people using C and every other language. Making use of non-C code across languages is a complete disaster. Your suffering is insufficient punishment for the suffering you inflict upon others. Example: my C program might like to call code written in Java, and some other code written in Python. This is nearly impossible. Even something much less crazy, like C++, is really difficult. It is you who is causing trouble.

                  The need for multiple languages is reasonable. We use "make" to avoid fussing with details, but that greatly limits control over what exactly is happening. I write processor emulators, sometimes with JIT and/or a hypervisor. I also write stuff resembling boot loaders and OS kernels. The amount of control I need to do this stuff is extreme; sometimes C is too high-level. It's hard to imagine a language that I could use for this work that wouldn't be awful for a build system.

                  That said, there was a time I did write part of a build system in C. The "make" language isn't very good at handling symlinks. There are 3 timestamps on the symlink, 3 timestamps on the target, and of course the content of each. The "make" program wouldn't look at the right stuff. Builds could be inaccurate and take 8 hours, or they could be accurate and take 30 hours. Switching to C cut the build times down to a few hours.

                  C99 added decent initializers for structs and arrays. That's 18 years ago, or about 5 for Visual Studio. It sounds like you want more though; you want to make up a language on the fly. That is sort of possible in LISP and Scheme (still with quotation limitations) but it isn't all that reasonable of a request. You're making code hard to parse by humans when you do this; your non-standard little language is a source of confusion. You're also creating more of that problem you complained about in C header files: you can't make a tool to parse things without implementing almost the whole language. If you really want a mini-language, you can of course have it: gperf, bison or yacc, flex or lex, midl or pidl for idl, snmp MIB stuff, corba stuff, sunrpc stuff, custom makefile hacks with sed and awk... but of course the cost is that your code is no longer trivial to parse and no longer trivial for all to understand. Your example with JavaScript's .split(" ") suffers from this problem; if I were to try to parse that in a generic way then I'd need to implement all of JavaScript and actually run the program, which hopefully would halt!

                  Normally it is best to put a literal 100 in the array definition, then use the sizeof operator (total divided by first member) to get the array size where needed.
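                  That idiom, for the record, looks like this:

```c
static int samples[100];

/* Element count derived from the definition itself: total size
 * divided by the size of the first element. Change the 100 in one
 * place and every use of NELEMS follows. */
#define NELEMS(a) (sizeof (a) / sizeof (a)[0])
```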

                  Putting "const" in any language is hugely problematic. Consider the strstr function. It may return something that is const, or not, depending on what it is passed. There is no limit to how complicated this can get. Imagine a function that normally returns a pointer to a string that is passed into it. If the string content is "Whew!", though, a pointer to a constant string "Yow!" is instead returned. There is just no reasonable way to express the conditions upon which the function would return a const value. Oh, let's make it even worse. The string "Whew!" is actually configurable.
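                  strstr makes the problem concrete: its haystack parameter is const-qualified going in, but the return type is plain char *, because C has no way to say "const if and only if the argument was":

```c
#include <string.h>

/* char *strstr(const char *haystack, const char *needle);
 * The const qualification is silently laundered away by the
 * return type; the caller gets a writable pointer into memory
 * it promised not to write. */
static char *find_hay(const char *hay) {
    return strstr(hay, "hay");   /* compiles without a warning */
}
```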

                  C pointer syntax is almost good. People have trouble because the language will sometimes implicitly grant you an "&" operator; for example, the "&" is optional when taking the address of a function or an array. This screws up people's understanding of the language. It would also have been nice to have the "->" operator defined to dereference as many pointers as needed, for example 0 or 4, rather than always exactly 1. That actually looks compatible; the "->" operator could be made more capable in a future version of the language.

                  The syntax C++ added for function parameters is awful. Maybe you save a few keystrokes, but the result is non-obvious code. It isn't obvious at all points that the variable is really implemented as a pointer. It isn't obvious at the call site that something passed as a parameter could be modified.

                  They did not have to support polymorphism. That too is a disaster. It is no longer obvious what code is even being called. An important part of maintaining software is keeping things understandable, particularly if you don't want hidden slowness to creep in all over the place.

                  The code I write at work is in plain C. Linux itself is in C. This works fine.

                  • (Score: 2) by bzipitidoo on Sunday November 12 2017, @07:16AM (2 children)

                    by bzipitidoo (4388) on Sunday November 12 2017, @07:16AM (#595854) Journal

                    Just a quick note. I'll reply again when I have more time.

                    The JavaScript example with .split is a hackish workaround to achieve data serialization. It ought to be possible to do that natively. It takes such a tiny amount of additional syntax to support the parsing of a literal into a hierarchical data structure, it's just sad that there's such poor support for that. Many languages have a "dump" function to print out complicated data structures, why not at least an "undump" or "dedump" function to do the reverse?

                    As another example, why can't we have a class-- or, let's stick to C and talk struct instead of class-- for points in 2D (or 3D or more) and be able to assign values to those points with the following: struct Point p1=6,-13; No, it has to be p1.x=6; p1.y=-13; Or we might make a function set(&p1,6,-13), but that's ugly.

                    • (Score: 0) by Anonymous Coward on Sunday November 12 2017, @03:53PM (1 child)

                      by Anonymous Coward on Sunday November 12 2017, @03:53PM (#595915)

                      // Modern C syntax to initialize, not naming the members:
                      struct Point p1 = (struct Point){6,-13};

                      // Modern C syntax to initialize, naming members:
                      struct Point p1 = (struct Point){.x=6,.y=-13};

                      Naming the members lets you skip ones that don't matter; they will be zeroed. If done everywhere, it lets you reorder the members in a header file without having to fix all the initializers. This is valuable because it lets you lay out the struct so that members used often or together share the same cache lines. Performance is better when the "hot" members occupy just a few of the CPU's cache lines.

                      There is similar syntax for arrays too, and it all works together recursively in two different styles. You can do stuff like [17].foo.bar[6].baz = 42 to initialize or you can repeat the cast-like part of the syntax for each level. You can also freely mix it with the no-name style, in which case you can just add curly brackets where desired.
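                      A small self-contained example of those designators, mixing the array and member forms (type names made up for illustration):

```c
struct point { int x, y; };
struct shape { struct point corner[4]; int filled; };

/* C99 designated initializers: array designators ([i]) and member
 * designators (.name) nest and mix freely; anything not named,
 * like corner[1] and corner[3] here, is zero-initialized. */
static struct shape sq = {
    .corner[0] = { .x = 0, .y = 0 },
    .corner[2] = { .x = 5, .y = 5 },
    .filled = 1,
};
```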

                      • (Score: 2) by bzipitidoo on Tuesday November 14 2017, @03:23AM

                        by bzipitidoo (4388) on Tuesday November 14 2017, @03:23AM (#596641) Journal

                        > sometimes C is too high-level

                        Yeah, I don't like having to hope that code optimization makes up for C's lacks there. For instance, a combined multiply-and-add operation has become popular, because the add can be done for free as part of the work done to perform a multiplication. There's no way I know of to call for that in C source code. Or how about a comparison where one of 3 branches is taken depending on whether the result was less than, equal, or greater? C can't explicitly code that either. Then there's the whole world of parallel and distributed computing. Semaphores? Atomic test-and-set? Nope. But maybe that's unfair to C, as it was never meant to handle that, and it wasn't until the 486 that Intel's lame x86 architecture finally became less lame by adding it.

                        > Normally it is best to put a literal 100 in the array definition

                        No, I disagree. A literal 100 is fine for a small program, with only a few hundred lines of code in one file, no separate header file. But even that is a judgment call. For bigger projects, you should at least use a #define. The problem is that you may end up using the same value for 2 unrelated things, and the bigger the program is, the more likely that happens. If you need to change one of those literal quantities without changing the others, you have to search through the source code and check each one to determine if it is the correct one.

                        > but it isn't all that reasonable of a request. You're making code hard to parse by humans when you do this; your non-standard little language is a source of confusion. You're also creating more of that problem you complained about in C header files: you can't make a tool to parse things without implementing almost the whole language.

                        Not at all. Trees are quite easy to handle, for both people and computers. All that's needed is to reserve 2 symbols to serve as open bracket and close bracket. Probably also want a comma-- LISP shows how dense it can get with just brackets. For human readability, it would also be good to have some sort of whitespace. No need for a complete language to parse that.

                        > // Modern C syntax to initialize

                        Those are nice improvements. But if it was possible to easily express a tree in a literal...

          • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @10:43AM

            by Anonymous Coward on Saturday November 11 2017, @10:43AM (#595549)

            Suppose a struct layout changes. Now what? Are you proposing to parse something, then dynamically convert between the layout used by the library and the layout used by the program?

            A number of Microsoft APIs use versioning of structs. Given their market share vs. yours, you must reflect long and hard before presuming to know better. ;)
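            That Microsoft pattern is usually a size field as the first member; a sketch with made-up names:

```c
#include <stddef.h>

/* The caller records the struct size it compiled against, so the
 * library can tell which fields exist in the caller's layout. */
struct options {
    size_t cb;        /* caller sets this to sizeof(struct options) */
    int    verbosity; /* v1 field */
    int    timeout;   /* added in v2 */
};

static int get_timeout(const struct options *o) {
    /* A v1 caller's struct ends before 'timeout'; use a default. */
    if (o->cb >= offsetof(struct options, timeout) + sizeof o->timeout)
        return o->timeout;
    return 30;
}
```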

      • (Score: 0) by Anonymous Coward on Friday November 10 2017, @09:40PM

        by Anonymous Coward on Friday November 10 2017, @09:40PM (#595356)

        ABI stability is possible with C++. But it's indeed very hard.

        It's so hard that even C++ language implementers routinely fail to achieve it in their standard libraries. I agree that it is possible, however. The best way to have binary stability with a C++ library is to give it a pure C-compatible interface and not expose any C++ features across the library boundary. Also avoid relying on any C++ features that cause binary compatibility problems in practice, such as C++ exceptions.
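        A sketch of such a pure-C facade (names hypothetical): an opaque handle keeps C++ types, classes, and exceptions from crossing the library boundary, and the extern "C" block suppresses name mangling when the header is consumed by C++.

```c
/* mylib.h -- C-callable facade for a library implemented in C++. */
#include <stdlib.h>

#ifdef __cplusplus
extern "C" {
#endif

typedef struct mylib_ctx mylib_ctx;          /* opaque to callers */

mylib_ctx *mylib_create(void);
int        mylib_process(mylib_ctx *c, const char *input);
void       mylib_destroy(mylib_ctx *c);

#ifdef __cplusplus
}  /* extern "C" */
#endif

/* Inside the library these would be implemented in C++, catching any
 * exceptions and translating them to error codes. A trivial C stub: */
struct mylib_ctx { int calls; };
mylib_ctx *mylib_create(void) { return calloc(1, sizeof(mylib_ctx)); }
int mylib_process(mylib_ctx *c, const char *input) {
    if (!c || !input) return -1;             /* errors as codes, not throws */
    return ++c->calls;                       /* stand-in for real work */
}
void mylib_destroy(mylib_ctx *c) { free(c); }
```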

        It's also hard with C, though. Having to add padding members and space to your structs so that you can add members later on.

        The only "hard" parts are identifying when the ABI actually changed (automated tools can help with this), and then possibly the ongoing maintenance that may be associated with early bad design decisions. This is rarely even particularly difficult, though obviously doing it is more work than not doing it. The C language itself does not define any particular binary interface but in practice every major platform has a de facto standard interface that is documented and implemented by every major toolchain targeting that platform.

        Most library authors and users consider "ABI stability" to mean just "binary compatible with previous versions". Specifically, if I update a library to a newer version, then any program using that library and depending only on its documented interface should continue working without being recompiled.

        Spending some time up front to reduce future maintenance (like planning your interfaces to accommodate future expansion) may add to the programmer's work in the short term but usually reduces work in the long run, and even if you don't do that and your interfaces are all horrible you can still almost always keep them stable without too much effort -- in the worst case, this might mean adding a new function instead of changing an old one.
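Another way to sidestep the padding game entirely is the opaque-handle idiom: never expose the struct layout, only accessor functions. A sketch with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

/* Public header exposes only an opaque handle -- the struct layout lives in
 * the library's .c file, so it can grow new fields in later versions without
 * breaking the ABI seen by callers. */
typedef struct session session_t;

/* Private definition (normally hidden in the .c file). */
struct session {
    int id;
    int retries;  /* fields can be added or reordered freely later on */
};

session_t *session_new(int id) {
    session_t *s = calloc(1, sizeof *s);
    if (s) s->id = id;
    return s;
}

int session_id(const session_t *s) { return s->id; }  /* accessor, not layout */

void session_free(session_t *s) { free(s); }
```

The cost is a function call per field access and heap allocation of every object, which is why some libraries mix this with the size-field trick instead.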

    • (Score: 3, Interesting) by The Mighty Buzzard on Friday November 10 2017, @09:07PM

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Friday November 10 2017, @09:07PM (#595350) Homepage Journal

      Rust's still in its infancy but it can already produce and make use of shared libs. It's in no way ready to produce a usable kernel (or, probably, libraries that need a stable ABI; I haven't checked), but you can produce anything else you like and end up with a much less bug-prone binary while still having fairly compact binaries/libs. The real downside is that most of its OSS libraries are even less developed than the language itself, if they exist at all.

      For a C coder who never, ever, ever writes or uses vulnerable code, Rust's pretty useless. I haven't met one of those yet though.

      --
      My rights don't end where your fear begins.
    • (Score: 2) by isj on Friday November 10 2017, @09:09PM (1 child)

      by isj (5249) on Friday November 10 2017, @09:09PM (#595352) Homepage

      and now you can use C++, so why use C?

      I do that whenever I have a choice. But one of the semi-embedded projects I work on is based on a vendor framework that generates C code and wrappers. No extern "C" guard exists in any header file, and the framework has ownership of main(). So I have to stick with C.

      Even if I restarted that project I would use C because the framework and code generation is worth it.
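For anyone stuck consuming such guard-less C headers from C++, the usual idiom looks like this (hypothetical header and function names, stubbed so the sketch is self-contained):

```c
#include <assert.h>

/* vendor_api.h -- the guard every C header should carry so C++ callers can
 * link against the C-compiled symbols without C++ name mangling. */
#ifdef __cplusplus
extern "C" {
#endif

int vendor_frob(int x);   /* hypothetical vendor entry point */

#ifdef __cplusplus
}
#endif

/* When the vendor header lacks the guard, the C++ caller can supply it
 * around the include instead:
 *
 *     extern "C" {
 *     #include "vendor_api.h"
 *     }
 */

/* Stub implementation standing in for the vendor's C library. */
int vendor_frob(int x) { return x + 1; }
```

Neither trick helps when the framework also owns main() and the build system, which is the part that really forces staying in C.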

      • (Score: -1, Redundant) by Anonymous Coward on Friday November 10 2017, @11:19PM

        by Anonymous Coward on Friday November 10 2017, @11:19PM (#595401)

        and now you can use C++, so why use C?

        See here [soylentnews.org]

  • (Score: 5, Insightful) by crafoo on Friday November 10 2017, @07:07PM (19 children)

    by crafoo (6639) on Friday November 10 2017, @07:07PM (#595278)

    Ooh ho ho! Another shill proclaiming the end of C because Rust or (hahah) Go is here.
    Go - a Google monstrosity pet language
    Rust - a containment zone for people too retarded for C/C++

    Oh he's a self-proclaimed systems programmer. Let's see: "relentless cycling of Moore’s Law had driven the cost of compute cycles cheap enough to make the runtime overhead of a language like Perl a non-issue".

    Sure sounds like it to me! Maybe at one time. Not now. This is not the world transitioning system code away from C. This is one man transitioning away from Systems programming and, in his narcissistic fog, assuming his reference frame is The One True reference frame for the world.

    • (Score: 5, Insightful) by bob_super on Friday November 10 2017, @07:39PM (6 children)

      by bob_super (1357) on Friday November 10 2017, @07:39PM (#595296)

      More to the point:
      > "relentless cycling of Moore’s Law had driven the cost of compute cycles cheap enough to make the runtime overhead of a language like Perl a non-issue".

      My battery doesn't like the way you think.
      My server's customers, always bitching about more throughput and less latency, do not like the way you think.

      Sure, for an AC-connected personal computing device doing human-scale stuff, go ahead and use the easier language. The rest of us have to deal with Intel/AMD no longer bumping clock speeds up.

      • (Score: 3, Interesting) by Grishnakh on Friday November 10 2017, @11:07PM (4 children)

        by Grishnakh (2831) on Friday November 10 2017, @11:07PM (#595395)

        Totally wrong.

        Python and Electron are all the rage now, so it is clear that performance is no longer important to either developers or users. If your server's customers cared about throughput and latency, they'd be demanding that everything be written in C++ or similar. Are they? Didn't think so. Tell them to shut up about those things until they're ready to change.

        • (Score: 1, Insightful) by Anonymous Coward on Friday November 10 2017, @11:22PM

          by Anonymous Coward on Friday November 10 2017, @11:22PM (#595402)

          The bean counters see only the need to get to market first.

          They'll gnash their teeth in frustration when it comes time to scale; that's when they'll demand the performance of an industrial language such as C++, which is what everybody will agree should have been the initial implementation.

        • (Score: 2) by bob_super on Friday November 10 2017, @11:28PM (2 children)

          by bob_super (1357) on Friday November 10 2017, @11:28PM (#595407)

          > If your server's customers cared about throughput and latency, they'd be demanding
          > that everything be written in C++ or similar. Are they? Didn't think so.

          It's cute how you're telling yourself a nice story that matches the conclusion you want to reach. Which soundtrack did you put in the background?

          Some of us have to design against specs, because our customers have system requirements to meet in order to make money with our products. The actual language doesn't matter, as long as the specs are met, on time and on budget. Can we meet the spec with the overhead, and can we find all the supporting code in $fashionable_language? Will we be able to provide the ten years of support? How long will it take to figure those questions out?

          • (Score: 1, Informative) by Anonymous Coward on Friday November 10 2017, @11:57PM (1 child)

            by Anonymous Coward on Friday November 10 2017, @11:57PM (#595409)

            To indicate that you are quoting someone, use the "<blockquote>…</blockquote>" construct.

            You know what? Maybe this is why all software sucks ass; you people just don't give a fuck about doing things "correctly".

      • (Score: 2) by forkazoo on Saturday November 11 2017, @12:51AM

        by forkazoo (2561) on Saturday November 11 2017, @12:51AM (#595422)

        90% of the performance problem on that hypothetical laggy server comes from something like 1-2% of the code it needs to run.

        Sure, some part of it needs to be native code, but the nightly cron job that cleans up /tmp at 3:00 am doesn't need to be hand-tuned assembly. A shell script will work just as well, and doing it in the easiest way possible will free up developer resources to focus on that 1-2% of the code on the box that is actually going to cause you trouble.

    • (Score: 0) by Anonymous Coward on Friday November 10 2017, @10:11PM

      by Anonymous Coward on Friday November 10 2017, @10:11PM (#595372)

      Rust - a containment zone for people too retarded for C/C++

      Most people love to be contained. They LOVE it.

    • (Score: 2) by The Mighty Buzzard on Friday November 10 2017, @10:47PM (3 children)

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Friday November 10 2017, @10:47PM (#595388) Homepage Journal

      Rust - a containment zone for people too retarded for C/C++

      So, 99% of C coders then? Roger. Really, you should be held civilly liable for financial damages for the second C bug you write if there's a viable alternative that would have prevented it.

      --
      My rights don't end where your fear begins.
      • (Score: 1, Insightful) by Anonymous Coward on Friday November 10 2017, @11:03PM (2 children)

        by Anonymous Coward on Friday November 10 2017, @11:03PM (#595392)

        Want a vendor to be liable? Put that in the contract.

        You have the power to create law; it's called "contract negotiation".

        • (Score: 2) by The Mighty Buzzard on Saturday November 11 2017, @02:10AM (1 child)

          by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Saturday November 11 2017, @02:10AM (#595445) Homepage Journal

          Who said anything about vendors? I think the coders should. It'd keep chuckleheads who can't write safe C from putting it on their resume.

          --
          My rights don't end where your fear begins.
          • (Score: 0, Troll) by Anonymous Coward on Saturday November 11 2017, @03:15AM

            by Anonymous Coward on Saturday November 11 2017, @03:15AM (#595471)

            You have nothing worthwhile to say.

    • (Score: 2) by RamiK on Saturday November 11 2017, @12:26AM (5 children)

      by RamiK (1813) on Saturday November 11 2017, @12:26AM (#595415)

      Similar things have been said about C by assembly programmers. Fact of the matter is, the C memory model is reaching its EOL as bounded pointers or capabilities (and stop-less garbage collection) are being baked into the hardware. C will be dragged along kicking and screaming through the C++ type system. But the performance advantage just won't be there once the new languages get their compilers up and running. Go especially puts all its memory-addressing stuff within the compiler's domain or under the "unsafe" package, so it's highly likely to be the language of choice for user-lands on new hardware.

      --
      compiling...
      • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @03:20AM (2 children)

        by Anonymous Coward on Saturday November 11 2017, @03:20AM (#595473)

        Not only does C++ provide suitably low-level access to hardware, but it also provides a fairly strict type system and robust abstraction facilities.

        What with modern syntactic sugar and a push to include in the standard library everything but the kitchen sink, why isn't C++ the potential heir?

        • (Score: 3, Informative) by RamiK on Saturday November 11 2017, @12:41PM (1 child)

          by RamiK (1813) on Saturday November 11 2017, @12:41PM (#595564)

          C/C++ is compatible with modern hardware in the same way Intel's x86 MOV instruction is Turing complete or CRISPR is suitable for carpentry. It's technically true. But oh God...

          But let's put aside what any Google search for "why C++ is bad" can yield and give some concrete examples of why C/C++ is starting to build some rust that won't be easily removed:

          1. Read through https://www.cl.cam.ac.uk/~dc552/papers/asplos15-memory-safe-c.pdf [cam.ac.uk] carefully. Other approaches to bounded pointers, like NV-RISC, similarly can't really work around these issues. Only Mill's belt machines are claimed to solve the harder problems without significant user-land rewrites (and indeed they're using C++), though the compilers still need a lot of work and using Linux on them would be an abysmal waste even if you can get the performance. Regardless, if Java taught us anything, it's that many software developers, companies and governments will go to extraordinary lengths to avoid vendor lock-in. Either way, since Go (re)moves the memory-addressing details to the unsafe package and the compiler, it should stay portable across different fat-pointer solutions even at the kernel level, let alone in the user-land, where C/C++ definitely won't.

          2. Look up how C++11 atomics tied x86 and C++ threading together (one example: https://stackoverflow.com/questions/29922747/does-the-mov-x86-instruction-implement-a-c11-memory-order-release-atomic-store [stackoverflow.com] ). This already led to rewrites for ARM. I can't even imagine how they're going to sort through this mess for newer machines.

          3. Hardware-assisted Garbage Collection: There have been a few papers since around 2007, all proposing different mechanisms to get stop-less, cheap, concurrent GC through the hardware. One recent and less invasive approach is https://people.eecs.berkeley.edu/~maas/papers/maas-asbd16-hwgc.pdf [berkeley.edu] . Whichever you choose, manually freeing memory will become less and less relevant when even the kernel ends up using GC.

          Anyhow, there are better examples and better-articulated papers covering these points, but I think it's clear how C/C++ is not ideal for the task where modern languages are. Again, I believe C/C++ can be twisted onto any hardware you'd like. But once you're dealing with Google coming up with their own capability-based design, Microsoft coming out with another one, Intel with a third machine and Apple with a fourth... Well, you start needing languages that move the low-level memory details to the compiler and let you write code you can actually execute on different platforms without rewrites or reading through book-sized specs detailing undefined behavior, at least in the user-land.

          Overall, I'm not saying people won't run plenty of C/C++ on the newer platforms for years to come. But there are real hardware reasons, going beyond wishful thinking and language features, that make me believe ESR is correct in saying C++ is losing its appeal for systems development, the same way it lost its place to Java in business-oriented application development, even if I don't have exact figures to prove this.
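For reference, point 2 can be made concrete with C11's stdatomic, which shares the C++11 memory model: the release store and acquire load below are exactly the pair that compiles to plain MOVs on x86 but needs real barrier work on ARM (a sketch, hypothetical names):

```c
#include <assert.h>
#include <stdatomic.h>

/* Producer/consumer handoff: the release store "publishes" the payload; an
 * acquire load that sees the flag is guaranteed to see the payload too.  On
 * x86 both map to ordinary MOV instructions; on ARM they need barriers or
 * ldar/stlr, hence the rewrites mentioned above. */
static int payload;
static atomic_int ready = 0;

void producer(void) {
    payload = 42;                                            /* plain write */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* publish */
}

int consumer(void) {
    if (atomic_load_explicit(&ready, memory_order_acquire))  /* observe */
        return payload;   /* ordered after the flag: reads 42, not garbage */
    return -1;            /* not published yet */
}
```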

          --
          compiling...
          • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @03:41PM

            by Anonymous Coward on Saturday November 11 2017, @03:41PM (#595604)

            I'll definitely look into those points further.

            However, the C++ community really prides itself on providing a language and a set of libraries that can both provide useful degrees of abstraction and squeeze out every last cycle for performance; while old code might need a rewrite, I bet that C++ will provide the tools necessary to handle any such new hardware features with ease, and it will be just like having a brand new language, yet one that builds atop an existing one.

      • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @10:56AM (1 child)

        by Anonymous Coward on Saturday November 11 2017, @10:56AM (#595551)

        as bounded-pointers or capabilities (and stop-less garbage collection) are being baked into the hardware.

        Do remember some history, please. A grand announcement of a shiny new silver bullet comes every week; it getting "baked into the hardware" "any minute now, we swear!!!" comes several times a year; any of it getting anywhere but into the murky waters of Lethe happens maybe twice a decade at best.
        What new capabilities have we gotten in the hardware in this last decade that actually affected programming practices? Virtualization. That's it.
        And that is 1 (one) real thing out of how many advertising spiels?

        • (Score: 2) by RamiK on Saturday November 11 2017, @01:36PM

          by RamiK (1813) on Saturday November 11 2017, @01:36PM (#595573)

          Capabilities was referring to https://en.wikipedia.org/wiki/Capability-based_addressing [wikipedia.org]

          What new capabilities we've got in the hardware in this last decade...

          ARM going with multi-stage branch prediction and Jazelle enabled Android's Java app development. Quite different programming practices when you're not dealing with pointers and have a GC.

          Nvidia and AMD switching from VLIW to RISC brought GPU compute to HPC, which is a big segment of servers. Very different workflow, with profiling and unit testing taking priority.

          Virtualization on servers was/is huge. Whole business models and software markets came and went/stayed over it. It's not "just" by any means when you have distro developers bisecting commits and running a system clone on qemu to find which user-land change halted boot to X.

          DRM moved game development from general purpose to consoles. Well, the Cell architecture came and went so it's not quite the last decade... But the way game engines became the target, rather than each game developer making their own engine even for AAA games, is a huge change in practice that was only possible due to the hardware forcing it.

          You can't even say the desktops went unaffected since they clearly lost market share to everything from smartphones to tablets to streamers over power usage to the point Microsoft is sweating and Intel has been busy failing Atom and switching to a value-adding scheme ( https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/baumann-hotos17.pdf [microsoft.com] ) as they're rapidly losing market share.

          many advertising spiels?

          Does it matter how many? The point is that there are gradual changes that change practices all the time. More unit testing. More scripting to glue C/C++ instead of writing C. More GUIs in browsers over native, which also means more server-client designs even for locally run apps... And a lot fewer pointers.

          --
          compiling...
    • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @10:29AM

      by Anonymous Coward on Saturday November 11 2017, @10:29AM (#595546)

      > This is one man [...], in his narcissistic fog, assuming his reference frame is The One True reference frame for the world.

      He should run for President! :D

  • (Score: 5, Insightful) by KiloByte on Friday November 10 2017, @07:08PM (18 children)

    by KiloByte (375) on Friday November 10 2017, @07:08PM (#595281)

    So we get the death of C announced every year or so per tech news site. But please tell me, what language is used to write the kernel, most of those fancy new programming languages' interpreters, and most serious code? Yeah, code that is used to directly earn money (business logic, shit webpages, etc) tends to be written in a higher-level language, but all the foundation is in C.

    Remember the big hype of Go or Swift? They're moribund now, with Rust being hyped up this year (for extra irony, Rust predates Swift, its popularity spike merely happened later). Then in a couple of years there'll be something else. Meanwhile, C is going strong.

    The last project I started was in C++, just out of laziness because of STL for a single data structure. The other guy immediately went replacing that with proper C, so there'll be no entrenched C++isms later.

    Unless you're writing a webpage or some accounts receivable poo, there are only two kinds of languages: prototyping/glue and system/library. For the latter, there's no real choice other than C.

    --
    Ceterum censeo systemd esse delendam.
    • (Score: 2) by RS3 on Friday November 10 2017, @07:24PM (7 children)

      by RS3 (6367) on Friday November 10 2017, @07:24PM (#595287)

      At least one other guy agrees with you and me: http://harmful.cat-v.org/software/c++/linus [cat-v.org]

      Recently I was helping a friend with an Arduino project he dreamed up. I hadn't touched Arduino before, but have done a fair bit of assembler here and there on an assortment of microprocessors. I was annoyed that the code was C++. He, like me, thinks of a processor as doing actions (verbs) applied to things (data, ports, etc.). I tried to explain the object model concept to him, but it was difficult because I was trying to sell something I don't believe in, and it fundamentally wasn't making sense to either of us. I had to explain that C++ is being used because so many programmers are (only) doing OOP. It would have been nice if they had given us a C, or some other procedural, programming environment. I'm sure they exist, I just haven't bothered to look.

      • (Score: 2, Insightful) by Ethanol-fueled on Friday November 10 2017, @08:21PM (4 children)

        by Ethanol-fueled (2792) on Friday November 10 2017, @08:21PM (#595321) Homepage

        C++ is used for Arduino because Arduino is for babies and assembler makes babies cry.

        99% of Arduino code is essentially dumbed-down C/C++ anyway. When was the last time you saw any pointers in any hobbyist-written Arduino code?

        • (Score: 4, Funny) by RS3 on Friday November 10 2017, @08:28PM

          by RS3 (6367) on Friday November 10 2017, @08:28PM (#595329)

          C++ is used for Arduino because Arduino is for babies and assembler makes babies cry.

          Very funny

          When was the last time you saw any pointers in any hobbyist-written Arduino code?

          You mean intentional ones?

        • (Score: 2) by forkazoo on Saturday November 11 2017, @12:53AM (2 children)

          by forkazoo (2561) on Saturday November 11 2017, @12:53AM (#595423)

          When was the last time you saw any pointers in any hobbyist-written Arduino code?

          If a microcontroller environment is extremely memory constrained, it may be impractical to have a sensible malloc implementation and a normal free store. It may not be useful to do anything that requires bare pointers in that kind of environment, especially if you just need to blink an LED.
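Right -- the usual move on such targets is to size everything statically up front, so there is no heap at all. A minimal fixed-slot pool, as a sketch (hypothetical sizes and names):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Fixed-size slot pool: all memory is reserved at link time, so there is no
 * malloc, no fragmentation, and exhaustion is a compile-time sizing bug
 * rather than a runtime leak. */
#define POOL_SLOTS 4

static int  slots[POOL_SLOTS];
static bool in_use[POOL_SLOTS];

int *slot_alloc(void) {
    for (size_t i = 0; i < POOL_SLOTS; i++) {
        if (!in_use[i]) {
            in_use[i] = true;
            return &slots[i];
        }
    }
    return NULL;  /* pool exhausted */
}

void slot_free(int *p) {
    in_use[p - slots] = false;  /* pointer arithmetic recovers the index */
}
```

With everything statically sized like this, bare pointers rarely need to leave the implementation of the pool itself, which is much of why hobbyist sketches never show them.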

          • (Score: 1) by Ethanol-fueled on Saturday November 11 2017, @01:04AM

            by Ethanol-fueled (2792) on Saturday November 11 2017, @01:04AM (#595427) Homepage

            Bare pointers in baby's first Arduino code are most often associated with timers and other mechanisms to have non-blocking clumps of logic. They are most certainly a good idea but abstracted away in libraries that babies don't bother to read.

            As far as Arduino hobbyists go, you can tell an O.G. Nigga from a baby because the O.G. Niggas use long instead of int.

          • (Score: 2) by crafoo on Saturday November 11 2017, @01:51AM

            by crafoo (6639) on Saturday November 11 2017, @01:51AM (#595440)

            indirect addressing on a micro-controller? you think this is uncommon? I've changed my mind. No place is safe from the javashit and python cancer.

      • (Score: 1, Touché) by Anonymous Coward on Friday November 10 2017, @09:03PM (1 child)

        by Anonymous Coward on Friday November 10 2017, @09:03PM (#595346)

        C++ (and Python) are perfectly able to do procedural programming without those silly objects.

        The nice thing is, when objects start to make sense for your code, you have those as well.

        • (Score: 2) by HiThere on Saturday November 11 2017, @02:00AM

          by HiThere (866) Subscriber Badge on Saturday November 11 2017, @02:00AM (#595442) Journal

          Sorry, Python's better than Java about it, but you can't do much in Python without invoking objects. Almost all the built-in libraries are quite heavily object oriented.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 4, Informative) by darkfeline on Friday November 10 2017, @07:50PM (2 children)

      by darkfeline (1030) on Friday November 10 2017, @07:50PM (#595300) Homepage

      Remember the big hype of Go or Swift? They're moribund now

      You're very misinformed.

      Go is used a lot. Hell, it's been listed at 15-20 on TIOBE and Github for years now. Just because it's not used as a systems language doesn't mean it's dead. By that logic, all programming languages except C are dead, which is a simply stupid conclusion.

      I don't know why you mentioned Swift, but considering that it's a language exclusively for the Apple ecosystem (yeah, in theory you can compile it for Linux, but no one does that), the only language you can really compare it against is Objective-C, where it is doing pretty well.

      there are only two kinds of languages

      There are only two kinds of people: those that incorrectly oversimplify things into two categories, and those who realize that reality is a bit more complex than that.

      --
      Join the SDF Public Access UNIX System today!
      • (Score: 4, Insightful) by aristarchus on Friday November 10 2017, @08:22PM

        by aristarchus (2645) on Friday November 10 2017, @08:22PM (#595322) Journal

        There are only two kinds of people: those that incorrectly oversimplify things into two categories, and those who realize that reality is a bit more complex than that.

        Wrong! And needlessly complex and obfuscatory. Those that incorrectly oversimplify things, and those who correctly oversimplify things, these are the correct two kinds of people. Those who claim that reality is a bit more complex belong to the former.

      • (Score: 2) by KiloByte on Friday November 10 2017, @09:04PM

        by KiloByte (375) on Friday November 10 2017, @09:04PM (#595348)

        Note that I explicitly excluded business logic, websites and so on. It depends on what type of program you're writing. Heck, most of my coding time is spent doing Perl! Use the right tool for the job.

        there are only two kinds of languages

        There are only two kinds of people: those that incorrectly oversimplify things into two categories, and those who realize that reality is a bit more complex than that.

        And those who quote out of context to ignore the third kind I mentioned. But yeah, include the obligatory joke about off-by-one errors. :)

        --
        Ceterum censeo systemd esse delendam.
    • (Score: 5, Informative) by Thexalon on Friday November 10 2017, @07:51PM

      by Thexalon (636) on Friday November 10 2017, @07:51PM (#595302)

      Also, to write those higher-level things properly, sometimes you need to be able to dive into C. A couple of examples:

      - I worked on some fairly high-level business code at a Fortune 1000 company for nearly 5 years. Most of it was in Python, which worked just fine. But we had to interface with a 3rd party that didn't have a Python library but did have a C library. So we took advantage of Python's C interface capabilities [python.org], wrote a wrapper around that C library in C, and were in business. And yes, sometimes we had to go mucking around in that C wrapper as the 3rd party's C library evolved.

      - I worked on a PHP-based website for a while, and one of the key behind-the-scenes processes had a bug where it was periodically seg-faulting, and eventually the server would be out of actively running interpreter processes. With nothing being reported at the PHP level, I had to go digging through with some stracing and reading interpreter code until we located what the interpreter was doing that caused the problem (specifically, pointers to non-scalar default arguments that were modified over the course of a function were persisting between calls to that function, so all of a sudden we'd get pointers off to nowhere in particular).

      - Occasionally when compiling system software from C source, I've come across situations where I needed to create a patch to make the thing work with an unusual distro setup, which I couldn't have done had I not been familiar with C (and yes, I send the patch upstream when it makes sense).
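The first bullet's shim layer typically looks like a thin C file that flattens the vendor API into something trivially bindable from Python (via the C API or ctypes). A sketch, with the vendor calls stubbed out as hypotheticals:

```c
#include <assert.h>

/* Hypothetical vendor API, stubbed here so the sketch is self-contained.
 * In the real shim these would come from the vendor's header and library. */
static int vendor_connect(const char *host) { (void)host; return 1; }
static int vendor_query(int conn, int key)  { return conn * 100 + key; }

/* The one flat entry point exposed to Python: no structs, no callbacks,
 * just scalars and strings, which keeps the binding layer trivial. */
int shim_lookup(const char *host, int key) {
    int conn = vendor_connect(host);
    if (!conn)
        return -1;  /* connection failure surfaces as a sentinel */
    return vendor_query(conn, key);
}
```

Keeping the Python-facing surface this narrow is what makes the ongoing "mucking around" manageable when the vendor library evolves: only the shim changes, not the Python call sites.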

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by choose another one on Friday November 10 2017, @10:42PM (5 children)

      by choose another one (515) Subscriber Badge on Friday November 10 2017, @10:42PM (#595386)

      So we get the death of C announced every year or so per tech news site. But please tell me, what language is used to write the kernel, most of those fancy new programming languages' interpreters, and most serious code? Yeah, code that is used to directly earn money (business logic, shit webpages, etc) tends to be written in a higher-level language, but all the foundation is in C.

      I think the problem is that ESR is measuring life and death by the creation of entirely new projects - realistically, how many _new_ major kernels or system library projects are being written in C these days? I can't think of any, except possibly systemd, which probably doesn't count because it shouldn't be written ever in anything, or something like that... The real question is: does it matter? I'd say no. It is merely a sign of the maturity of what we have got; if you want a kernel you use an existing one, same for system libraries.

      What ESR is saying is like saying concrete is dead because no one is creating or researching new types of concrete foundation - everyone just picks a concrete foundation type that suits the building and uses that. The reality is that there is still a heck of a lot of concrete being used in foundations, and a heck of a lot of work available in doing it, but it may not be particularly interesting for some people because it is (largely) a _solved problem_. Same goes with programming foundations.

      Is concrete dead? No.
      Is C dead? ditto.

      • (Score: 2) by Grishnakh on Friday November 10 2017, @11:27PM (4 children)

        by Grishnakh (2831) on Friday November 10 2017, @11:27PM (#595406)

        Your point there is good, but I think you're incorrect about concrete. We wouldn't know much about it here because there are likely no civil engineers on this site, but there's a good amount of research still being done on improving concrete.

        If I were to try to think of other examples of things that are basically solved problems and there's really nothing new going on, just reimplementations of what's already known, I'd guess 1) laser printers (those haven't changed at all in 10-15 years now, except the engines having more memory and faster CPUs so they rasterize pages faster), 2) automobile suspensions (these haven't changed significantly in 15-20 years, and in some ways have gotten a little worse/cheaper), 3) automobile brakes (20 years, some high-end cars have carbon brakes but race cars had those a couple decades ago), and 4) speakers (they're all just paper/plastic cones and voice coils like 50-100 years ago, and frequently plastic or metal dome tweeters; there were some different technologies explored a couple decades ago like electrostatic speakers but they never caught on).

        Of course, there's also technologies which have gotten significantly worse over time: 1) computer keyboards (they continue to get worse and worse, with the latest being "island keyboards" on laptops), and 2) desktop computer UIs (they peaked around 2005-2009, and have gotten horrifically bad since 2010).

        • (Score: 2) by crafoo on Saturday November 11 2017, @01:55AM (1 child)

          by crafoo (6639) on Saturday November 11 2017, @01:55AM (#595441)

          I'm an optimist. I think the UI thing will turn. Just like a wheel, all that was old is new again. I'm looking forward to the invention of the menu bar and boxes around buttons.

          • (Score: 2) by Grishnakh on Saturday November 11 2017, @02:40AM

            by Grishnakh (2831) on Saturday November 11 2017, @02:40AM (#595456)

            It'll take at least a decade; we have to wait for a new crop of young people to rise up and displace the current 25-30yo idiots who are pushing this shit.

            I'm looking forward to the "invention" of 3D-looking buttons.

        • (Score: 2) by choose another one on Saturday November 11 2017, @02:09PM (1 child)

          by choose another one (515) Subscriber Badge on Saturday November 11 2017, @02:09PM (#595579)

          Your point there is good, but I think you're incorrect about concrete.

          I know it's not a perfect analogy, but I still think it is a good one. Sure, there is work still being done on improving concrete itself, but it is tinkering. If you take, say, skyscrapers, the basic foundation design is concrete piles and a concrete mat/raft - Chicago has been building on concrete piles for over a century, and some sets of piles have even been reused for new buildings. Sears/Willis (1970) sits on a bunch of concrete piles and a mat, so does Burj Khalifa (c. 2004), so does the BT Tower in London (50 yrs old) and the Shard (2009) on the other side of the river. Skyscraper foundations were a problem before 1900 (big problems in Chicago), but for at least the last 50 yrs - solved problem.

          Automobile brakes? Yes, buuuttt... hybrids and EVs have brought re-gen braking, an addition rather than replacement but nonetheless arguably a major change. The others - yep.

          Computer keyboards getting worse - nah. My kids may argue over which colour Cherry keyswitches they want, but the IBM Model M will always win simply because it is so much heavier and therefore shuts kids up so much faster when you hit them over the head with it :-) What is getting worse is the standard of keyboard provided with modern consumer computer kit - but that is just because most modern consumers don't actually need or want (to pay for) a decent keyboard, so it's been value-engineered out.

          As to UI, yes it's gone backwards, and not just on the desktop. I don't think we will see 3D buttons on desktop again, because real live buttons no longer look or work that way. I can think of stacks of examples, too many to list, but I think we must have reached the tipping point where touch screen UIs are cheaper to design and build than physical moving buttons, because touchscreen-for-the-sake-of-it UI is all around us. UI discovery has moved from "what does this button do" to "where the **** do I prod the featureless bit of glass/plastic to get something to happen". UI design will now move on through gesture control to voice control, because we have to fix the mess we've made of physical UI somehow.

          Voice control will get messed up too once we start making stuff too "smart" - refer to Douglas Adams' predictions from decades ago for this, he was actually a great observer (and predictor) of technology and UI. I give it maybe ten years before Alexa has a mode that analyses your tone of voice and will only open the door if you ask it nicely, it'll be a fair few more years before opening the door requires an argument and a threat to reprogram the computer with an axe (or counting, or maybe quoting C code at it!), but it'll happen.

          • (Score: 2) by Grishnakh on Monday November 13 2017, @01:31AM

            by Grishnakh (2831) on Monday November 13 2017, @01:31AM (#596031)

            Automobile brakes? Yes, buuuttt... hybrids and EVs have brought re-gen braking, an addition rather than replacement but nonetheless arguably a major change.

            I disagree; it's an addition like you said, not a change at all to the actual mechanics of the friction brakes. They're the same; they're just not actuated quite the same. This is splitting hairs perhaps.

            Computer keyboards getting worse - nah. My kids may argue over which colour cherry keyswitches they want, but the IBM Model M will always win simply because it is so much heavier

            The Model M is only still made by one tiny company for enthusiasts (and even there, it's generally said to not be quite as good as the originals). Back in the "old days", those keyboards used to be ubiquitous and standard, and others were somewhat similar even if not quite as good (such as the old Dell Quietkey).

            What is getting worse is the standard of keyboard provided with modern consumer computer kit

            And see, this is the problem: you can only use a dedicated keyboard with a desktop computer or a docking station or if you plug it into a USB port on your laptop. You won't be taking it with you, and these days, most PC users have laptops, not desktops, so we're pretty much stuck with whatever craptastic keyboard they build in. For a while, it wasn't so bad: the Thinkpad keyboards were generally considered the very best, and the Dell Latitude keyboards a close runner-up, while keyboards on cheaper laptops were generally crap. Not any more; the keyboards on both Thinkpads and Latitudes have gone down the tubes, with the adoption of the "island" keyboard scheme.

            I don't think we will see 3D buttons on desktop again, because real live buttons no longer look or work that way.

            We still have real live buttons on some things, but they are fading. Even in cars we still have knobs and buttons, though a few shitty brands have tried to eliminate them, and there's no agreement across the industry on this at all. Touchscreens are dangerous in cars if intended to control major functions while driving (they're fine for displaying info, and for accessing rarely-used settings).

  • (Score: 0) by Anonymous Coward on Friday November 10 2017, @09:58PM

    by Anonymous Coward on Friday November 10 2017, @09:58PM (#595366)

    To be a prolific writer, you should never proofreed your work.

  • (Score: 1, Interesting) by Anonymous Coward on Friday November 10 2017, @10:19PM (2 children)

    by Anonymous Coward on Friday November 10 2017, @10:19PM (#595376)

    https://www.tiobe.com/tiobe-index/ [tiobe.com]

    On that chart, the big mover is Go, which is tanking after being the "winner" last year - so that peaked! And Rust isn't even on the main list; it's well down at the bottom.

    Top languages are clearly Java and C and C++. And Javascript is gaining, thanks to NodeJS. Of course, there's the .NET language family on Windows, but that's about it. Go and Rust are nowhere near anything. And Python is probably going to be displaced by NodeJS in many tasks as the push for full stack developers continues.

    • (Score: 2) by tibman on Saturday November 11 2017, @12:00AM

      by tibman (134) Subscriber Badge on Saturday November 11 2017, @12:00AM (#595410)

      Um, the biggest increase of any language in the top 20 is +0.69%. That's like noise. Error margins. Go only dropped -0.57%. The only two drops worth mentioning in the top 20 are Java with a whopping -6.37% and C with a small -1.46%.

      Looking at the "TIOBE Programming Community Index" graph, basically every top 20 language is in decline or barely holding.

      --
      SN won't survive on lurkers alone. Write comments.
    • (Score: 0) by Anonymous Coward on Saturday November 11 2017, @04:08AM

      by Anonymous Coward on Saturday November 11 2017, @04:08AM (#595488)

      The company I currently work at has well over 5k Python devs. They are going to nodejs in a big way: 0 new projects in Python, all new ones in nodejs/angular, with some smallish tomcat/java packages to support. Python is not going away but it has been relegated to support only. We joke about using Go or Rust. But it will never happen.
