posted by janrinok on Monday January 16 2017, @08:42PM
from the comparing-tools dept.

Eric S. Raymond, author of "The Cathedral and the Bazaar", blogs via Ibiblio:

I wanted to like Rust. I really did. I've been investigating it for months, from the outside, as a C replacement with stronger correctness guarantees that we could use for NTPsec [a hardened implementation of Network Time Protocol].

[...] I was evaluating it in contrast with Go, which I learned in order to evaluate as a C replacement a couple of weeks back.

[...] In practice, I found Rust painful to the point of unusability. The learning curve was far worse than I expected; it took me those four days of struggling with inadequate documentation to write 67 lines of wrapper code for [a simple IRC] server.

Even things that should be dirt-simple, like string concatenation, are unreasonably difficult. The language demands a huge amount of fussy, obscure ritual before you can get anything done.

The contrast with Go is extreme. Four days into exploring Go, I had mastered most of the language, had a working program and tests, and was adding features to taste.
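
For readers who haven't touched Rust, here is a minimal sketch (ours, not Raymond's) of the string-concatenation friction he describes; the commented-out lines are the ones the compiler rejects:

    fn main() {
        let a = String::from("NTP");
        let b = String::from("sec");

        // let c = a + b;      // does not compile: `+` wants String + &str
        let c = a + &b;        // compiles, but `a` is moved (consumed) here
        // println!("{}", a);  // does not compile: `a` was moved by the `+`

        let d = format!("{} {}", c, b); // format! only borrows its arguments
        println!("{}", d);              // prints "NTPsec sec"
    }

Whether that counts as "fussy, obscure ritual" or as the price of memory safety without a garbage collector is, presumably, the question.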

Have you tried using Rust, Go or any other language that might replace C in the future? What are your experiences?


  • (Score: 2) by DannyB on Monday January 16 2017, @09:40PM


    Some people may have a different philosophy, but . . . I view the compiler as the first line of testing. If it fails the compiler, then no need to move on to the next level of testing.

    Or look at it another way: I would rather have as many proofs of good code quality done by the compiler as possible before we ever start testing. Testing is great, but it doesn't make the guarantees that the compiler can. Testing obviously catches higher-level problems that the compiler doesn't: problems where the compiler emits code that does exactly what the source code says, but the source code itself is wrong. That is what testing is good for. I just want to move as many problems as possible down to the level where they are caught by the compiler. The simplest example is type checking: if I multiply a string by a date, that is a type error, and I would rather catch it at compile time than at run time.
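
    For illustration, a hypothetical Rust sketch of that trivial example; it is deliberately broken, and the point is that it never gets past the compiler:

        // This does NOT compile; the compiler rejects the nonsense
        // before any test ever runs.
        fn main() {
            let month = String::from("January");
            let day: u32 = 16;
            let nonsense = month * day; // error: cannot multiply `String` by `u32`
        }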

    I don't know Rust, and it MAY BE unnecessarily complicated. I looked at it about a year ago. It seemed, IIRC, that a lot of it was about memory management discipline, which is a good thing if you're in an environment without GC. It would be good to eliminate memory management problems at compile time, to the extent possible. Back in the '80s, as program complexity exploded, especially with the advent of GUIs, the biggest classes of bugs were: not releasing memory, releasing it more than once, or accessing it (through the pointer) after it had been released. GC is one way to fix this. But what about lower-level code where you can't have GC?
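
    That discipline is roughly what Rust's ownership rules enforce. A sketch (illustrative, not from the comment) of how those classes of bugs become compile-time errors:

        fn main() {
            let buffer = String::from("packet data");
            let owner = buffer;        // ownership moves to `owner`

            // println!("{}", buffer); // does not compile: use after move
            drop(owner);               // the memory is freed exactly once, here
            // drop(owner);            // does not compile: would be a double free
        }

    Leaks are handled the same way: when the single owner goes out of scope, the compiler inserts the release automatically.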

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
  • (Score: 4, Informative) by turgid on Monday January 16 2017, @09:52PM


    Test Driven Development isn't "testing" per se, it's writing code a statement at a time to pass a particular test. So you decide one condition at a time what your code must do, implement the test for it, write the line of code to make the test pass, and refactor. Rinse and repeat.

    You are 100% correct that the compiler should detect as many errors as possible. For quite some time now gcc has been very good at that, and I always compile with -Werror and fix failures as and when I introduce them.

    Static analysis is very helpful as well. At one place I worked we had some C and C++ payware static analysis tools that found many subtle bugs. As you can imagine, the C++ one was very verbose. We had a trial license of Coverity too for a while, and that was superb. However, a license costs as much as a small house.

    Some people I've spoken to have had success with splint for FOSS C development.

    Automated regression tests should also be used. If you write your code to be scriptable, with a good CLI, you can do a lot using plain old shell scripts. For one project I wrote a very rudimentary "scripting language" with sscanf() and was able to test my code unattended over nights and weekends on prototype hardware.

    But to reiterate, TDD is writing a test case first and then implementing the code to pass that test case. It takes a small amount of time to learn and is strange at first, but it's highly addictive and you become much more productive and a better coder very quickly.
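
    To make the cycle concrete, a minimal sketch of one red-green-refactor round (illustrative Rust; the function name is made up, and the test runs under cargo test):

        // Step 1 (red): write the test first and watch it fail.
        #[cfg(test)]
        mod tests {
            use super::*;

            #[test]
            fn joins_two_words_with_a_space() {
                assert_eq!(join_words("hello", "world"), "hello world");
            }
        }

        // Step 2 (green): write just enough code to make the test pass.
        pub fn join_words(a: &str, b: &str) -> String {
            format!("{} {}", a, b)
        }

        // Step 3 (refactor): tidy up with the passing test as a safety net,
        // then repeat the cycle for the next behaviour you need.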

    • (Score: 0) by Anonymous Coward on Tuesday January 17 2017, @04:15AM

      "Test Driven Development isn't 'testing' per se, it's writing code a statement at a time to pass a particular test. So you decide one condition at a time what your code must do, implement the test for it, write the line of code to make the test pass, and refactor. Rinse and repeat."

      Sounds like a stupid way of writing extra lines of code.

      Regression tests are fine - you don't want old bugs to resurface.

      A programmer should realize that every line of code added multiplies the number of ways a program can go wrong, so if you're going to test "everything" it's going to take a lot longer (often exponentially longer). But if you're only going to test each statement by itself, it's absurd.

      If it's a simple, obvious statement and you still get it wrong, you are likely to get the test wrong too (e.g. an off-by-one error in the test as well ;)).
      • (Score: 2) by turgid on Tuesday January 17 2017, @08:08AM


        It's an extra level of defense. You actually write less code (YAGNI) and your logic is tested more thoroughly. It also gives you the confidence to change your code later since your unit tests pick up regressions instantly. This is "agile."

      • (Score: 0) by Anonymous Coward on Wednesday January 18 2017, @10:04AM


        You write and run the test first as a way of debugging your test. If the test passes before you have written the code that should make it pass, then you know your test is faulty. It is surprising how many followers of TDD (test driven development) don't know this.
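
        A minimal sketch of that first "red" run (hypothetical Rust; parse_port is a made-up name):

            // A stub that cannot yet succeed; the test below MUST fail against it.
            pub fn parse_port(s: &str) -> Option<u16> {
                let _ = s;
                None // not implemented yet
            }

            #[cfg(test)]
            mod tests {
                use super::*;

                #[test]
                fn parses_a_valid_port() {
                    // Run this BEFORE implementing parse_port. If it passes
                    // now, the test itself is broken and needs debugging first.
                    assert_eq!(parse_port("8080"), Some(8080));
                }
            }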

    • (Score: 0) by Anonymous Coward on Tuesday January 17 2017, @08:55AM


      That requires having the complete spec from the start; otherwise you will be rewriting every test whenever you find out that you need an extra parameter or a different return value.

      • (Score: 3, Insightful) by Scruffy Beard 2 on Tuesday January 17 2017, @04:33PM


        That is actually a good thing.

        If you are not forced to re-write the test every time you make a change, how much code coverage are the tests going to have?