"I've been writing C for quite some time, but I never followed good conventions I'm afraid, and I never payed much attention to the optimization tricks of the higher C programmers. Sure, I use const when I can, I use the pointer methods for manual string copying, I even use register for all the good that does with modern compilers, but now, I'm trying to write a C-string handling library for personal use, but I need speed, and I really don't want to use inline ASM. So, I am wondering, what would other Soylenters do to write efficient, pure, standards-compliant C?"
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" - Donald Knuth.
Write some example/load-testing applications that will use your library heavily, and keep track of which operations slow the programs down in practice. This might be because of how they're written, or because of how often they're called. A fairly well-written operation that you'll need to call millions of times needs more optimisation than a poorly written operation that's used rarely.
This is doubly true if it's for personal use, because you probably already know which operations you'll use heavily and which you'll need only occasionally.
> "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" - Donald Knuth.
This correlates with some other helpful axioms, though:
* Don't repeat yourself
* The best optimization is a more efficient algorithm
* Don't do more work than necessary, etc.
So don't spend 90% of your time trying to squeeze out that extra 10% performance, but DO think about efficiency from the beginning of your design.
I like to refer people to the parable about Shlemiel the Painter [fogcreek.com] to emphasize why good planning is necessary BEFORE you start to code!
Sorry, going to have to part with conventional wisdom here. The best reason for profiling without intending to optimize is just so that you know what your code is actually doing, and how it performs under load. Testing and measurement are part of the process, and that includes the binary end-product, not just source code.
Also, responsiveness is always important...always. You can never have too much optimization. Every millisecond counts. As Steve Jobs pointed out to Larry Kenyon [folklore.org], if you shave 10 seconds off the boot time and multiply that times 5 million users, "that's 50 million seconds, every single day. Over a year, that's probably dozens of lifetimes. So if you make it boot ten seconds faster, you've saved a dozen lives!"
However, you CAN waste too much time optimizing for diminishing returns. I agree with Michael Abrash as he puts it in his Black Book:
"Before we can create high-performance code, we must understand what high performance is. The objective (not always attained) in creating high-performance software is to make the software able to carry out its appointed tasks so rapidly that it responds instantaneously, as far as the user is concerned. In other words, high-performance code should ideally run so fast that any further improvement in the code would be pointless. (emphasis added)
"Notice that the above definition most emphatically does not say anything about making the software as fast as possible. It also does not say anything about using assembly language, or an optimizing compiler, or, for that matter, a compiler at all. It also doesn't say anything about how the code was designed and written. What it does say is that high-performance code shouldn't get in the user's way--and that's all."