
posted by martyb on Monday June 26 2017, @11:08PM
from the He's-checking-his-list,-he's-checking-it-twice... dept.

Unreal Engine continues to develop as new code is added and previously written code is changed. What is the inevitable consequence of ongoing development in a project? The emergence of new bugs in the code, which a programmer wants to identify as early as possible. One way to reduce the number of errors is to use a static analyzer like PVS-Studio. Moreover, the analyzer is not only evolving but also constantly learning to look for new error patterns, some of which we will discuss in this article. If you care about code quality, this article is for you.
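
To give a flavour of the error patterns involved, consider a hypothetical copy-paste slip of the kind such analyzers are built to flag (the snippet is illustrative, not taken from Unreal Engine):

    // Copy-paste bug: the second comparison repeats 'x' instead of testing 'y'.
    struct Point { int x, y; };

    bool samePoint(const Point &a, const Point &b) {
        return a.x == b.x && a.x == b.x;  // analyzer: identical sub-expressions
        // intended: return a.x == b.x && a.y == b.y;
    }

A human reviewer tends to read straight past repeated expressions like this; a pattern-matching analyzer flags them mechanically.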

[I debated running this story as it was specific to Unreal Engine and PVS-Studio. Stepping back and looking at the larger picture of static code analysis, there seems to be plenty of room for discussion. What, if any, static code analyzers have you used? How helpful were they? Were they effective in finding [obscure] bugs? How much time did running the analysis consume? Was it an automated part of your build process? How many false positives did you run into compared to actual bugs? From an entirely different perspective, is it easier to find coding errors in compiled code or interpreted code? --martyb]


Original Submission

  • (Score: 3, Interesting) by Anonymous Coward on Monday June 26 2017, @11:35PM (#531640) (1 child)

    Now for the other side of the story, when lives matter:

    NASA picking a computer [nasa.gov] and coding for it. [fastcompany.com]

    In the end, it is not about the number of lines you churn out. It is the safety of knowing that a graceful failure is planned.

    • (Score: 2) by Wootery (2341) on Tuesday June 27 2017, @11:57AM (#531868)

      That second link was a really great read - thanks.

  • (Score: 3, Informative) by isj (5249) on Tuesday June 27 2017, @03:06AM (#531728) (4 children)

    I'm using Flexelint on a regular basis, as well as Coverity. Both have their strengths and weaknesses. I also had a trial of PVS-Studio and it was OK. The more tools the better.
    Judging from an earlier discussion (https://soylentnews.org/article.pl?sid=17/05/10/0357233), it seems to me that few developers know of static analysis.

    With regard to specific tools:
    GCC: getting better, but doesn't warn about unsavoury constructs that may be fine in kernel code but are very dubious in user-land code. Warning suppression is poor compared to some of the other tools. (A toy example of a construct the warnings do catch follows this list.)
    Flexelint: quite a lot of setup and false positives, but its laser-precision suppression rocks. It warns about constructs that are technically OK but dubious, and it can be adapted to some of the weird language extensions offered by embedded compilers.
    Coverity: good value-tracking. Suppression is on a case-by-case basis and can get tedious.
    PVS-Studio: not bad. Found 64-bit errors that the other tools didn't, but didn't find some errors the other tools did.
    Conclusion: more tools are better.
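
    As promised above, here's a toy example (file name and code made up) of a dubious construct that the compiler warnings do catch:

        // example.cpp - build with warnings turned up:
        //   g++ -Wall -Wextra example.cpp
        #include <iostream>

        int main() {
            int x = 0;
            if (x = 5)              // -Wall: suggest parentheses around
                std::cout << x;     // assignment used as truth value;
            return 0;               // '==' was almost certainly intended
        }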

    With regard to compiled versus interpreted: Doesn't matter. It seems to me that what matters is statically typed versus dynamically typed. Statically typed languages are easier for static analysers to handle.

    • (Score: 2) by driverless (4770) on Wednesday June 28 2017, @10:45AM (#532364) (3 children)

      PVS-Studio is nice, but probably has the second-most annoying licensing model of any software analysis tool, after BoundsChecker. Admittedly anything is second-worst after BoundsChecker, whose licensing mechanism was designed by Satan to torment souls in Hell, but still...

      Of the free tools, clang's analyzer is probably the best, and PREfast is the best if you go through and annotate all your code (a sketch of what those annotations look like follows at the end of this comment). Of the non-free ones I tend to prefer Coverity, although Fortify is also nice. Fortify has far more FPs than Coverity, though, since Coverity makes elimination of FPs a major priority. Klocwork I would rate third, although it's not far behind Fortify.

      One that I haven't tried is Goanna Studio, because they make it kinda painful to play with.

      Oh: the downside of Coverity, Fortify, and Klocwork is that you can't afford them unless you're a major corporation. PVS and Goanna are expensive but affordable; the other three are out of reach for almost everyone except corps with deep pockets.
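
      For flavour, here is a minimal sketch of the sort of SAL annotation PREfast consumes (the function and its names are made up for illustration):

        // Hypothetical function annotated with SAL (sal.h ships with MSVC).
        #include <sal.h>
        #include <cstddef>

        // _In_reads_bytes_(len) promises that 'buf' points to at least
        // 'len' readable bytes, so PREfast can check callers against it.
        unsigned checksum(_In_reads_bytes_(len) const unsigned char *buf,
                          std::size_t len) {
            unsigned sum = 0;
            for (std::size_t i = 0; i < len; ++i)
                sum += buf[i];
            return sum;
        }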

      • (Score: 2) by isj (5249) on Wednesday June 28 2017, @05:31PM (#532533) (2 children)

        It is interesting that you mention Fortify. I looked into it years ago but got the impression that the raw reports were being filtered for false positives by a group of merry Indians before you received the final report. Perhaps that was just for their semi-hosted solution.

        • (Score: 2) by driverless (4770) on Thursday June 29 2017, @03:17AM (#532786) (1 child)

          Fortify has many levels of reporting, so you can decide how much detail you want - which is basically deciding what level of FPs you're prepared to tolerate. The problem, when I was looking at it, was that to get a good level of detail you had to put up with huge numbers of FPs. Coverity at the time said they put... either 70 or 90%, I can't remember which, of their effort into dealing with FPs. That's what gave them the edge. Fortify just reported everything and let you wind the level up and down.

          • (Score: 2) by isj (5249) on Thursday June 29 2017, @03:40PM (#532977)

            It sounds like the warning suppression possibilities in Fortify are/were insufficient.

            What I particularly like about Flexelint is that I can turn up the warning level (usually to 3) and then suppress warnings wholesale with high precision. For example, warning 1788 is "... local instance ... only used by its constructor and destructor", which normally indicates a forgotten and superfluous local variable. Except when it is deliberate. I can tell Flexelint about that with:

            -esym(1788,boost::lock_guard)

            and then all 1788 warnings about local instances of type boost::lock_guard go away. And only those.
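
            For concreteness, a minimal sketch (mine, not from the warning's documentation) of the deliberate pattern behind 1788:

                // RAII guard that is "only used by its constructor and
                // destructor" - exactly what 1788 describes, but deliberate.
                #include <boost/thread/lock_guard.hpp>
                #include <boost/thread/mutex.hpp>

                boost::mutex m;
                int counter = 0;

                void increment() {
                    boost::lock_guard<boost::mutex> guard(m); // would trigger 1788
                    ++counter;                                // without the -esym
                }                                             // suppression above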

  • (Score: 2) by Wootery (2341) on Tuesday June 27 2017, @11:37AM (#531856)

    I worked on a large embedded-ish C/C++ codebase. We used Coverity. All in all I was impressed.

    I don't think they'd used Coverity from the start -- it found some suspicious old code, which was indeed buggy.

    Requiring that your code mustn't set off any Coverity warnings is probably a good rule. It's like a hard-to-define coding standard, but one which probably does improve software quality.

  • (Score: 2) by DutchUncle (5370) on Tuesday June 27 2017, @03:30PM (#531947)

    While the story may be "specific to Unreal Engine and PVS-Studio", the concept is not, and posting this as a "business case study" might help convince the managers who still can't understand why they should pay for such a product (yes, we still have some).

    This development team is using static analysis for games; I can tell you that there are life-safety products running right now that haven't been checked as thoroughly.

    We did get people to try CodeSonar and, aside from being very slow, it generated an overwhelming number of false positives. The effort to winnow the wheat from the chaff was a PR disaster, outweighing the value of a handful of very useful finds. Then the IT department managed to mess up the licensing for a few months, and the experiment was considered a failure.

  • (Score: 3, Insightful) by DutchUncle (5370) on Tuesday June 27 2017, @05:50PM (#532042) (1 child)

    "Welcome to Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average."

    In direct contrast to an oft-stated issue in the linked articles from PVS-Studio: I think I'm an above-average programmer (or, maybe, architect). I also turn on all of the compiler warnings, and I run a syntax checker, and I run static analysis, because I know I make typing errors while getting those great ideas down into the keyboard. I do not feel that this indicates any weakness on my part, any more than using high-level languages instead of coding in assembler. The suggestion that programmers are too vain to use power tools seems silly. Even better, I'd rather have the impersonal computer-power-backed analyzer point out something questionable *before* the collegial code review.

    • (Score: 2) by driverless (4770) on Wednesday June 28 2017, @10:56AM (#532366)

      The suggestion that programmers are too vain to use power tools seems silly.

      Actually it's way too accurate. You're very much the exception to the rule. Most developers know they're way above average, and don't need code analysis tools or even compiler warnings.

      A few years ago I had a discussion with a very well-known OSS software guy who said that he wrote great code and didn't need checking tools. I ran some representative samples of his code through gcc -Wall and... cppcheck, I think. Some of it wouldn't even compile with a newer version of gcc that enforced proper checks on things like function parameters, it was that bad. It worked by accident, not by design.
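
      For illustration, a made-up C fragment in that spirit - the call "works" only because nothing ever checks the argument against the definition, and newer gcc refuses to compile the implicit declaration at all:

        /* old.c - hypothetical pre-C99 style; newer gcc makes the implicit
         * declaration of frob() a hard error instead of a warning. */
        #include <stdio.h>

        int main(void) {
            frob(3.14);        /* no prototype in scope: the double argument
                                  is never checked against the definition */
            return 0;
        }

        int frob(n) int n;     /* K&R-style definition expecting an int */
        {
            printf("%d\n", n);
            return n;
        }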
