Eric S. Raymond, author of "The Cathedral and the Bazaar", blogs via Ibiblio
I wanted to like Rust. I really did. I've been investigating it for months, from the outside, as a C replacement with stronger correctness guarantees that we could use for NTPsec [a hardened implementation of Network Time Protocol].
[...] I was evaluating it in contrast with Go, which I learned in order to evaluate as a C replacement a couple of weeks back.
[...] In practice, I found Rust painful to the point of unusability. The learning curve was far worse than I expected; it took me four days of struggling with inadequate documentation to write 67 lines of wrapper code for [a simple IRC] server.
Even things that should be dirt-simple, like string concatenation, are unreasonably difficult. The language demands a huge amount of fussy, obscure ritual before you can get anything done.
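To make the complaint concrete, here is a minimal Rust sketch of the concatenation "ritual" the post is grumbling about: the `+` operator consumes its left operand and requires a `&str` on the right, so naive C-style reuse of both strings is a compile error (the variable names are ours, purely for illustration):

```rust
fn main() {
    let a = String::from("foo");
    let b = String::from("bar");

    // `+` requires String on the left and &str on the right;
    // it *moves* `a`, so `a` cannot be used after this line.
    let c = a + &b;
    assert_eq!(c, "foobar");

    // The non-consuming alternative is format!, which only borrows:
    let d = format!("{}{}", c, b);
    assert_eq!(d, "foobarbar");
}
```

Whether this ownership ceremony is "fussy ritual" or a correctness feature is exactly what the thread is arguing about.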
The contrast with Go is extreme. Four days into exploring Go, I had mastered most of the language, had a working program and tests, and was adding features to taste.
Have you tried using Rust, Go or any other language that might replace C in the future? What are your experiences?
(Score: 0) by Anonymous Coward on Monday January 16 2017, @09:09PM
If I had known about Scala, I would never have invented Go.
(Score: 2) by HiThere on Tuesday January 17 2017, @01:08AM
Scala suffers from its dependency on Java, which pushes 16-bit characters on it. There's just really *no* excuse for UTF-16. UTF-8, yes. UTF-32, sometimes. UTF-16? The only time that's a good idea is when interfacing with some idiot program that insists on UTF-16 input. It's a seriously bad choice.
(It's also true that I'm not really taken with Scala in general, but the character set is the killer for my uses.)
JavaScript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 0) by Anonymous Coward on Tuesday January 17 2017, @02:12AM
UTF-16 is what Windows library functions use. So if your software targets Windows then UTF-16 is not a bad encoding choice for your strings in memory.
It's also what the ICU library uses so if your software uses ICU then UTF-16 is not a bad encoding choice for your strings in memory.
Unless your application extensively uses code points above U+FFFF, UTF-16 will use about half as much memory as UTF-32. So if you're considering UTF-32 for any reason, UTF-16 is probably a better encoding choice for your strings in memory.
However, if you use a lot of functions that take 0-terminated C strings (typical on Unix systems) then UTF-16 may be a bad choice for your strings in memory.
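Both halves of this comment can be checked in a few lines of Rust (a small sketch, not part of the original thread): for text confined to the Basic Multilingual Plane, UTF-16 costs exactly half of UTF-32; but the UTF-16 byte stream for plain ASCII is full of zero bytes, which is why it breaks 0-terminated C string APIs:

```rust
fn main() {
    let s = "Hello"; // BMP-only text

    // One 16-bit unit per BMP character: half the size of UTF-32.
    let utf16_bytes = s.encode_utf16().count() * 2;
    let utf32_bytes = s.chars().count() * 4;
    assert_eq!(utf16_bytes, 10);
    assert_eq!(utf32_bytes, 20);

    // UTF-8 ASCII contains no zero bytes, so it survives C string APIs.
    assert!(!s.as_bytes().contains(&0u8));

    // In UTF-16, every ASCII character has a zero high byte, so the
    // byte stream embeds NULs that would truncate a C string.
    let bytes: Vec<u8> = s
        .encode_utf16()
        .flat_map(|u| u.to_le_bytes())
        .collect();
    assert!(bytes.contains(&0u8));
}
```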
(Score: 0) by Anonymous Coward on Tuesday January 17 2017, @04:46AM
(Score: 0) by Anonymous Coward on Tuesday January 17 2017, @08:53AM
UTF-16 is a bad choice because it inherits the bad sides of both UTF-8 and UTF-32 at the same time.
It has the variable-length complexity of UTF-8's encoding scheme (surrogate pairs instead of multi-byte sequences), while taking twice as much space as the ISO-8859-X series for Western text.
If you pick UTF-8, you only get the compression scheme. If you pick UTF-32[1], you only get the size.
[1] Should be UCS-4, IMHO. Currently it's the same thing, but next time they run out of space and move to a hypothetical UCS-8, UTF-32 will be redefined as a variable-length scheme, just like UTF-16 was when UCS-2 ran out of space.
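The "both bad sides at once" claim is easy to demonstrate (a small Rust sketch, not from the thread): BMP characters take one UTF-16 unit, but anything above U+FFFF needs a surrogate pair, so UTF-16 is variable-length just like UTF-8 while still doubling the cost of ASCII:

```rust
fn main() {
    // '€' (U+20AC) is in the BMP: one UTF-16 unit, three UTF-8 bytes.
    assert_eq!("€".encode_utf16().count(), 1);
    assert_eq!("€".len(), 3); // str::len counts UTF-8 bytes

    // '😀' (U+1F600) is outside the BMP: UTF-16 needs a surrogate
    // pair (two 16-bit units), so it is variable-length after all.
    let units: Vec<u16> = "😀".encode_utf16().collect();
    assert_eq!(units, [0xD83D, 0xDE00]);
    assert_eq!("😀".len(), 4); // four UTF-8 bytes

    // Meanwhile plain ASCII doubles in size versus UTF-8/ISO-8859:
    assert_eq!("abc".encode_utf16().count() * 2, 6);
    assert_eq!("abc".len(), 3);
}
```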
(Score: 0) by Anonymous Coward on Tuesday January 17 2017, @05:30PM
I guess if you are writing Chinese texts, you may see the space argument a bit differently. Most Chinese characters are in the two-byte range of UTF-16, but none are in the two-byte range of UTF-8.
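This is easy to verify in a few lines of Rust (a sketch added for illustration): common CJK ideographs sit in the BMP, so each costs two bytes in UTF-16 but three bytes in UTF-8, which flips the space argument for Chinese text:

```rust
fn main() {
    let zh = "中文"; // two common CJK ideographs, both in the BMP

    // UTF-16: one 16-bit unit per character → 4 bytes total.
    assert_eq!(zh.encode_utf16().count() * 2, 4);

    // UTF-8: three bytes per character → 6 bytes total.
    assert_eq!(zh.len(), 6);
}
```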