
posted by martyb on Monday October 23 2017, @06:48AM   Printer-friendly
from the MY-code-is-perfect! dept.

I am really astonished by the capabilities of static code analysis. The tool surprised me the other day by turning out to be smarter and more attentive than I am. I have found that I must be careful when working with static analysis tools: code flagged by the analyzer often looks fine, and I'm tempted to dismiss the warning as a false positive and move on. I fell into this trap and failed to spot bugs, even though I am one of the PVS-Studio developers.

So, appreciate and use static code analyzers! They will help save you time and nerve cells.

[Ed note: I debated running this story as there was an element of self-promotion (aka Bin Spam), but the submitter has been with the site for a while and has posted informative comments. Besides, I know there have been far too many times when I've seen a compiler complain about some section of my code and I'm thinking there is nothing wrong with it — and then I, finally, see my mistake. Anyone have samples of code where you just knew the compiler or static analyzer was wrong, only to find out otherwise? --martyb]


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1) by Andrey_Karpov on Monday October 23 2017, @09:07AM (8 children)

    by Andrey_Karpov (6589) on Monday October 23 2017, @09:07AM (#586241) Homepage

    As it has already been said, sometimes developers don't understand, what a compiler wants to tell them. I wanted to touch upon another close topic. Sometimes developers blame compilers in their errors :). I wrote a note related to this issue. The compiler is to blame for everything - https://www.viva64.com/en/b/0161/ [viva64.com]

  • (Score: 4, Interesting) by bzipitidoo on Monday October 23 2017, @10:25AM (7 children)

    by bzipitidoo (4388) on Monday October 23 2017, @10:25AM (#586255) Journal

    And sometimes, the compiler really is to blame. I haven't forgotten the last time I used Borland C++ 4.5. That terrible compiler had really bad bugs. It screwed up segmented memory: if your program needed more than 64K of data, Borland C++ would reuse the same 64K memory segment! I caught it doing this while trying to figure out why my program was not working. I put watches on everything and saw a totally unrelated array element change on the same line where a loop variable was incremented. I compiled that program, unchanged, with gcc, and the resulting binary worked flawlessly. Turbo C++ 2.0 was even worse. It could not handle "x<<=1;"; I had to write "x = x<<1;" to work around another compiler bug. When it tried to compile the former, it would give up and display a view of the incorrect assembler code it had generated.
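
    To give an idea of the shape of it, here is a rough reconstruction (names and sizes are made up, not the original code): two arrays whose combined size exceeds 64K, built for a 16-bit segmented memory model. On a correct compiler this is uninteresting; under the bug described above, writes through one array could land in the other.

        static long data[10000];    /* 40,000 bytes */
        static long more[10000];    /* another 40,000 bytes, pushing past 64K of data in total */

        void fill(void)
        {
            for (int i = 0; i < 10000; i++)
                more[i] = i;        /* under the segment bug, stores like this could clobber data[] */
        }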

    gcc wasn't without flaws of its own. I saw a program that declared a variable inside a loop and then used that variable after the loop: "for (int i=0; i<n; i++) { ... } i=x;", and it worked up through about gcc 2.7. Then the gcc maintainers tightened the rules, and version 2.95 had a fit over that code. Well, 2.7 shouldn't have allowed it either, but it did, and the programmer ran with it.
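
    Roughly the pattern that broke, as a sketch (the function and names are made up, not the actual program):

        // Under the old pre-standard scoping, 'i' behaved as if declared just
        // before the for statement, so gcc 2.7 accepted this. Under the ISO
        // rule the variable's scope ends with the loop.
        int find_first_negative(const int *a, int n)
        {
            for (int i = 0; i < n; i++) {
                if (a[i] < 0)
                    break;
            }
            return i;   // gcc 2.95 and later reject this: 'i' is out of scope here
        }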

    • (Score: 2) by FatPhil on Monday October 23 2017, @11:15AM

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Monday October 23 2017, @11:15AM (#586271) Homepage
      Compilers didn't "tighten the rules"; they simply adopted support for a more modern version of the standard. Don't blame them for first supporting an early version, then supporting a later version. I think the change you're talking about was around the CFront 2.0 level; there were lots of changes around that time.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 1, Informative) by Anonymous Coward on Monday October 23 2017, @03:01PM

      by Anonymous Coward on Monday October 23 2017, @03:01PM (#586355)

      And sometimes, the compiler really is to blame.

      Indeed. I once got mysteriously wrong results from my code. I couldn't explain it until I found out that output of a long double simply didn't work correctly; the data was calculated right but printed wrong. Casting to double before output solved the issue.

      And I was using std::cout, so it wasn't a case of me using the wrong format specifier in printf.
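
      The workaround looked roughly like this (a sketch, not the original code):

          #include <iostream>

          int main()
          {
              long double x = 1.0L / 3.0L;
              // On the affected compiler, streaming the long double directly
              // printed the wrong value; narrowing to double first gave the
              // expected digits.
              std::cout << x << '\n';                        // came out wrong
              std::cout << static_cast<double>(x) << '\n';   // cast as a workaround
              return 0;
          }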

    • (Score: 0) by Anonymous Coward on Monday October 23 2017, @09:26PM (1 child)

      by Anonymous Coward on Monday October 23 2017, @09:26PM (#586584)

      You realize you are talking about a compiler from 1994, from a company that does not make compilers anymore, right?

      and it worked up through about gcc 2.7

      The language specification changed. The compilers were keeping up. Most of the magazines of the time made a big deal about what the changes were. That was one of them.

      For your particular example, it was not clear from the specification whether the declaration was scoped inside the loop or outside. Some compilers did it one way and some the other (I had the joy of moving between a couple that interpreted it differently). The guys who wrote the original specification brought what they intended into the ISO group meetings. It broke a few bits of my code too. It was an easy fix: move the declaration outside of the loop initializer. Same effect, and you were being clear that you wanted the scope of that variable to extend beyond the for loop, which is what your code was doing and what you intended. The compiler does not interpret your intent. It interprets what you tell it.
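
      Something like this, with made-up names, just to show the shape of the fix:

          // The index is declared outside the for initializer, so its scope
          // explicitly extends past the loop under either reading of the rules.
          int find_first_negative(const int *a, int n)
          {
              int i;
              for (i = 0; i < n; i++) {
                  if (a[i] < 0)
                      break;
              }
              return i;   // fine: 'i' lives in the enclosing scope
          }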

      • (Score: 2, Troll) by aristarchus on Tuesday October 24 2017, @05:24AM

        by aristarchus (2645) on Tuesday October 24 2017, @05:24AM (#586727) Journal

        You realize you are talking about a compiler from 1994 from a company that does not make compilers anymore right?

        No, I did not realize this. Should I have? Could you have included indications of the irrelevance of your experience in your original post? Back in the early days of computing, we had to carry our own calculi, or pebbles, and an abacus, of course. And we had to both compile and test our algorithms by hand! Computer scientists these days! What a bunch of wooshes!

    • (Score: 2) by JoeMerchant on Monday October 23 2017, @09:46PM (2 children)

      by JoeMerchant (3937) on Monday October 23 2017, @09:46PM (#586602)

      So, you were trying to use Borland C++ in the early 1990s? Couldn't you tell that was a mistake? I remember evaluating it in 1991 and concluding that it was unsuitable for any application more complex than "Hello world, how are you?". We spent the next several years developing our apps in straight C and didn't revisit C++ until Windows 95 came out (in 1996).

      --
      🌻🌻 [google.com]
      • (Score: 2) by bzipitidoo on Monday October 23 2017, @11:06PM (1 child)

        by bzipitidoo (4388) on Monday October 23 2017, @11:06PM (#586632) Journal

        Why, no, I couldn't tell right away. The school I graduated from had a very poor CS program. In 1990, they were still stuck on Pascal, and Turbo Pascal (from Borland, of course) was their compiler of choice. All this was on PCs running MS-DOS, and Windows 3 was pretty much a useless curiosity, more for lusers in business school who couldn't handle the command line and insisted on bogging their computers down with GUI versions of office apps. They didn't teach or use C, let alone C++; I had to learn C on my own. And since Turbo Pascal seemed to be an excellent compiler and environment, why should I have thought that excellence would not extend to their C/C++ compilers? My only exposure to C was a class on OSes that used Minix, as Linux didn't yet exist at that time. The professor didn't spend any time teaching C; he seemed to expect everyone to be able to dive right into the Minix source code.

        • (Score: 2) by JoeMerchant on Tuesday October 24 2017, @02:59AM

          by JoeMerchant (3937) on Tuesday October 24 2017, @02:59AM (#586699)

          I guess I got lucky: I tried to do a few simple things in C++, and it became clear in less than a week that the compiler wasn't doing what the book said it should.

          Your school experience sounds like typical late 80s. In my Computer Engineering program, we had a Pascal class, then later a Compilers class where we were supposed to write a full assembler in C, with no C class as a prerequisite. Further, the project was supposed to be done on a mainframe that was always down, so I did it in Turbo C on IBM PCs in various other labs and at a friend's house, thinking I would port it to the mainframe when it started working - which it never did. The prof tried to give me a ration of (*#%: "porting is not a trivial exercise, the assignment was to do it on the mainframe" - to which I responded, "Did a single student successfully code anything on that mainframe? It was only working for about 4 hours this semester, total."

          We spent the early 90s coding in C on DOS with the PharLap 32-bit extender, using the Menuet GUI library - which we eventually purchased the source code to and ported to compile in the PharLap environment as well. Finally, around the fall of '96, we started porting the C project to C++, running Borland's API - whatever they called it. Around 2000, someone bought the code, and the first thing they did was hire a team to port it to Visual Studio. It cost them about a quarter million to do the port, but the investors were happier that way since they could hire Visual Studio programmers more readily.

          --
          🌻🌻 [google.com]