In spite of my status and obvious bias as co-creator of D, I'll do my best to answer candidly; I follow Go and Rust, and I also definitely know where D's dirty laundry is. I'd encourage people with similar positions in the Rust and Go communities to share their honest opinion as well. So here goes.
First off, C++ needs to be somewhere in the question. Whether it's to be replaced alongside C, or to be one of the candidates that's supposed to replace C, the C++ language is a key part of the equation. It's the closest language to C and the obvious step up from it. Given C++'s age, I'll assume in the following that the question also puts C++ alongside C as a target for replacement.
Each language has a number of fundamental advantages (I call them "10x advantages" because they are qualitatively in a different league compared to at least certain baselines) and a number of challenges. The future of these languages, and their success in supplanting C, depends on how they can use their 10x advantages strategically, and how they overcome their challenges.
[Another way to look at this is to ask "What is wrong with C?" and then assess how well these languages solve those problems. -Ed.]
(Score: 2, Insightful) by Anonymous Coward on Friday November 13 2015, @10:56AM
Another way to look at this is to ask "What is wrong with C?"
Not much. I can pretty much recall zero criticism of C that is about the language itself and not the standard library.
(Score: 1, Interesting) by Anonymous Coward on Friday November 13 2015, @11:34AM
The main problem with the language itself is that it's not C++. How am I going to program without all those obscure features, needless bloat, and terrible syntax?
(Score: 1, Touché) by Anonymous Coward on Friday November 13 2015, @01:25PM
You can just switch over to the original syntax of Objective-C.
Not enough @'s in your code? Objective-C can help. Now with NSEverything!
(Score: 2, Insightful) by Anonymous Coward on Friday November 13 2015, @01:11PM
Here's one: The handling of signed and unsigned types.
Consider the following code:
On the other hand, consider
That's just bad. The very minimum they should have done is to make type promotion preserve the signedness of a type (so unsigned short always gets promoted to unsigned int, never to signed int). But even then, the signed/unsigned semantics would remain error-prone. It simply is not designed well.
(Score: 2, Interesting) by JoeMerchant on Friday November 13 2015, @01:26PM
Can't we just banish unsigned int from all code? If you need the extra range (and how often does that happen, really?), bump up to the next larger integer type - it might even make your code faster to execute, and unless you're doing satellite comms, those extra bits aren't costing anybody anything close to your development time spent wrangling signed/unsigned issues.
Just because "you're sure" that your variable will never be < 0 is NOT a reason to type it unsigned.
🌻🌻 [google.com]
(Score: 3, Insightful) by pe1rxq on Friday November 13 2015, @02:14PM
I assume you never use bitwise operations or access real hardware with C?
Doing those with signed integers or with extra-large registers opens up a completely new can of worms.
There is a very good reason for the existence of unsigned integers.
Btw: Both example programs are a WTF on their own. If you write programs like this in the real world (and not just as examples on this site), I have some advice for you:
Don't bother learning the integer promotion rules; you have worse problems anyway. Please use a language like Rust or Go. That way you will:
- Make my life easier, as I won't have to worry about encountering your C code in the future.
- Prove that Rust and Go are also not idiot-proof.
(Score: 2) by JoeMerchant on Friday November 13 2015, @09:52PM
I'll grant you hardware registers as a special case; if the top bit is ever 1, that's a lot easier to deal with as unsigned.
But.... how often have you run into a slew of compiler warnings when working with legacy code that just went unsigned-happy with quantities that are getting counted, or used unsigned int as a for loop control variable?
(Score: 3, Informative) by TheRaven on Friday November 13 2015, @02:21PM
sudo mod me up
(Score: 1) by Francis on Friday November 13 2015, @08:35PM
In some applications you don't want to waste the memory. Why use 2 bytes for something where you know you'll never actually use half the possible values?
I don't think C is the answer to every problem, but it's ridiculous to suggest that there aren't benefits to using it. It can be a serious PITA at times, so I doubt very much that there's a group of people twirling their mustaches insisting that C be kept alive to confound the white man.
(Score: 2) by Subsentient on Saturday November 14 2015, @01:03AM
Or you can just not be an idiot, and not do implicit signed-unsigned conversions. I make extensive use of unsigned int in my code. It causes *far* fewer issues than you think. It also helps to know the integer promotions and conversion rules as specified in the C standard, which I do.
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 2) by JoeMerchant on Saturday November 14 2015, @05:16AM
I refer mainly to disparate legacy code that I'm attempting to graft together, so often I have been left fighting with signed vs unsigned in places where it was totally irrelevant to make it unsigned in the first place. Just a personal frustration, type-casting is usually easy enough, but sometimes C++ can be a pain about it.
(Score: 3, Informative) by Grishnakh on Saturday November 14 2015, @03:14AM
No, we can't banish it, because it's absolutely necessary. You obviously don't work with any actual hardware if you never use unsigned integers.
If I'm reading some data over a serial line from some device and the fields are specified as unsigned integers, how exactly do I handle that if the programming language doesn't support unsigned integers? With C, it's easy.
What C does suck at, in its original design, is all the types: int, short, long, etc., in both signed and unsigned versions. But C99's stdint.h defines uint8_t, uint16_t, int8_t, etc. This should have been the standard all along, and they should banish the old things like "short" and "unsigned char". Chars should only be used for actual characters, and for everything else there should just be ints, which the compiler figures out automatically based on the architecture (and then the fixed-width types I mentioned above are used when you need to specify it explicitly because of the data you're working with).
(Score: 0) by Anonymous Coward on Friday November 13 2015, @04:15PM
GCC warns you. VS does not. Have not tried it but I suspect clang does as well.
Yes, those warnings are trying to tell you something...
(Score: 2) by meisterister on Friday November 13 2015, @11:25PM
I will give you the fact that the behavior is unintuitive for most use cases, but when you consider what actually happens to make that comparison, it makes sense.
-1 is equivalent to all bits being high, and in an unsigned comparison it is definitely greater than 42. Perhaps situations like this should prompt a warning to be generated by the compiler?
(May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
(Score: 2) by bzipitidoo on Saturday November 14 2015, @12:33AM
A few I don't like arise from some confusing and inconsistent syntax. Normally, the scope of a variable is confined to the block containing the definition-- between the matching curly braces, nice and simple, right? Except when the variable is declared in the condition section of an if statement or the definition part of a loop. Or in the start of a function body. In fact, it was a bug in old versions of gcc (back in version 2) that
for (int i=0; i<99; i++) { i; } if (i<99) puts("exited loop early!");
actually compiled, with "i" not being declared outside the loop. Was annoying to have to rewrite code that originally depended on that bug, and which generated hundreds of compiler errors when they fixed the bug.
And how about an exception to the idea of lvalues? Shouldn't code like "++i=i*2;" be illegal? Especially since "i=++i*2;" does the same thing?
Then there's the newbie gotcha "if (a=b)".
(Score: 3, Insightful) by tonyPick on Friday November 13 2015, @01:31PM
The underhanded C competition [underhanded-c.org] has quite a few things I'd count as language issues...
(Score: 2) by HiThere on Friday November 13 2015, @07:58PM
For my uses the main thing wrong with C is its handling of Unicode. (glib makes a good stab at that, but that's not standard C.)
My secondary criticism is the difficulty with handling multiple simultaneous threads of execution with message passing between them. (I don't want them to be deaf and mute until they die.) The easiest way to do this that I can see is to set up separate TCP servers for each thread. YUCK!
Both of these are things that I can see reasons for C not to handle, so I'm not really complaining about C's handling of them. In C++, however, it's unforgivable.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.