
posted by martyb on Saturday September 10 2016, @01:13PM
from the some-assembly-required dept.

Dan Luu demonstrates that, even when optimizing, compilers often produce code far slower than the straightforward assembly any assembly programmer could write by hand: Hand coded assembly beats intrinsics in speed and simplicity:

Every once in a while, I hear how intrinsics have improved enough that it's safe to use them for high performance code. That would be nice. The promise of intrinsics is that you can write optimized code by calling out to functions (intrinsics) that correspond to particular assembly instructions. Since intrinsics act like normal functions, they can be cross-platform. And since your compiler has access to more computational power than your brain, as well as a detailed model of every CPU, the compiler should be able to do a better job of micro-optimizations. Despite decade-old claims that intrinsics can make your life easier, it never seems to work out.
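For readers who haven't used them: an intrinsic looks like an ordinary function call that the compiler is expected to lower to a specific instruction. A minimal sketch (an illustration for this summary, not code from the article) using the SSE intrinsic _mm_add_ps, which corresponds to the addps instruction:

    #include <stddef.h>
    #include <xmmintrin.h>   /* SSE intrinsics: __m128, _mm_add_ps, ... */

    /* Add two float arrays four elements at a time.
       _mm_add_ps should compile down to a single addps instruction;
       n is assumed to be a multiple of 4 to keep the sketch short. */
    void add_floats(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);            /* unaligned 4-float load */
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(dst + i, _mm_add_ps(va, vb)); /* addps, then store */
        }
    }

The question the article raises is whether the compiler actually turns such calls into the tight code you would expect.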

The last time I tried intrinsics was around 2007; for more on why they were hopeless then, see this exploration by the author of VirtualDub. I gave them another shot recently, and while they've improved, they're still not worth the effort. The problem is that intrinsics are so unreliable that you have to manually check the result on every platform and every compiler you expect your code to be run on, and then tweak the intrinsics until you get a reasonable result. That's more work than just writing the assembly by hand. If you don't check the results by hand, it's easy to get bad results.

For example, as of this writing, the first two Google hits for popcnt benchmark (and 2 out of the top 3 Bing hits) claim that Intel's hardware popcnt instruction is slower than a software implementation that counts the number of bits set in a buffer via a table lookup using the SSSE3 pshufb instruction. This turns out to be untrue, but it must not be obvious, or this claim wouldn't be so persistent. Let's see why someone might have come to the conclusion that the popcnt instruction is slow if they coded up a solution using intrinsics.
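The loop at issue is roughly the one below: a sketch (not the article's exact code) that counts the set bits in a buffer through the popcnt intrinsic _mm_popcnt_u64. The article's argument is that compilers can generate surprisingly poor code around exactly this kind of loop, which is what made the hardware instruction look slow in those benchmarks.

    #include <stddef.h>
    #include <stdint.h>
    #include <nmmintrin.h>   /* _mm_popcnt_u64; build with -msse4.2 or -mpopcnt */

    /* Count all set bits in a buffer using the hardware popcnt instruction
       via its intrinsic. Ideally each call compiles to a single popcnt. */
    uint64_t buffer_popcount(const uint64_t *buf, size_t len)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < len; i++)
            total += (uint64_t)_mm_popcnt_u64(buf[i]);
        return total;
    }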

In my own experience, I have yet to find an optimizing compiler that generates code as fast or as compact as what I can write by hand.

Dan Luu's entire website is a treasure trove of education for experienced and novice coders alike. I look forward to studying the whole thing. His refreshingly simple HTML 1.0 design is obviously intended to educate, and supports my assertion that the true experts all have austere websites.


Original Submission

 
  • (Score: 2) by TheRaven (270) on Sunday September 11 2016, @09:49AM (#400248) Journal

    A clever human could reserve a specific register for functionality that is used on a global scope while a compiler could never even fathom such an optimization because it cannot fathom at all!

    A human that writes a compiler can though, and that's one of the optimisations that happens when you do interprocedural register allocation. You generally don't, because it's very processor (and RAM) intensive, but there are lots of things that compilers do now that were invented 20 or more years ago and considered infeasible. It's also worth noting that a register-register move is as cheap as a NOP on any processor that does register renaming, so the wins from this are a lot smaller than you'd think. A few years ago, I got a big speedup in some code by undoing this optimisation: depriving the register allocator of a register globally cost a lot more than occasionally having to move a value between registers or spill it to the stack.
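
    For what it's worth, the specific trick in the quote is something you can ask GCC for directly: its global register variable extension pins a variable to a named register for the whole program. A minimal sketch for x86-64 (illustrative only, and usually not a win, for exactly the reason above):

    /* GCC extension: dedicate the call-saved register r15 to one hot variable
       for the entire program. Every function then sees the value in r15 with
       no loads or stores -- but the register allocator loses r15 everywhere. */
    register unsigned long hot_counter asm("r15");

    void count_event(void)
    {
        hot_counter++;   /* compiles to roughly: inc %r15 */
    }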

    The kinds of optimisations that humans are good at are not these mechanical transformations that can be inferred from the code, they're things that can only be done by understanding the purpose of the code. Microoptimisations may give you a factor of 2-3 speedup, but they won't change the complexity class of the code. Going from an n^2 algorithm to an n log(n) algorithm in a hot path will do far more than either a clever compiler or clever assembly optimisations will ever do.
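
    As a concrete illustration of the complexity-class point (a sketch for illustration, not from the article or the parent): checking an array for duplicates. Tuning the inner loop of the quadratic version only buys a constant factor; sorting first changes the growth rate, which is what matters once n gets large.

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* O(n^2): compare every pair. */
    bool has_duplicate_quadratic(const int *v, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (v[i] == v[j])
                    return true;
        return false;
    }

    /* O(n log n): sort a copy, then scan adjacent elements. */
    bool has_duplicate_sorted(const int *v, size_t n)
    {
        int *tmp = malloc(n * sizeof *tmp);
        if (!tmp)
            return has_duplicate_quadratic(v, n);   /* fall back on allocation failure */
        memcpy(tmp, v, n * sizeof *tmp);
        qsort(tmp, n, sizeof *tmp, cmp_int);
        bool dup = false;
        for (size_t i = 1; i < n && !dup; i++)
            dup = (tmp[i] == tmp[i - 1]);
        free(tmp);
        return dup;
    }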

    This is why high-level languages tend to do better in the real world than in microbenchmarks. A C or C++ programmer may be able to implement the same algorithm and have it run 5 times faster than someone using a higher-level language, but in the same time the high-level language programmer can try half a dozen algorithms and pick the one that performs best on the data. An inefficient implementation of a good algorithm almost always beats a good implementation of a bad one.

    --
    sudo mod me up