This post is a repost due to the loss of my original blog location
What's something everybody wants for their application, but very few people have the time to deliver? Performance. Let's face it: in most software projects, performance requirements are relegated to the very end of the project, when everyone knows they won't have the time to address them. In one sense this is a good thing, as one of my biggest bugbears is premature optimisation.
Premature optimization is the root of all evil.
- Hoare's Dictum, Sir Tony Hoare
When should I optimise?
Now keep in mind that by denigrating premature optimisation I am not saying that you should never think of performance when writing an application. Of course you should, especially when looking at your design. Sir Tony's quote is all too often taken out of context, as Randall Hyde argues. Good performance is most effectively obtained by thinking very carefully about your design up-front.
However, when you have the choice between an easy-to-write, slow algorithm and a difficult, fast one, chances are that you should write the easy one. This is not always true, of course. For example, you may know that the algorithm sits on a critical path in your application, in which case you definitely should go for speed. Far too few people keep these trade-offs in mind; most go for one extreme or the other, optimising code that will never be a bottleneck, or ignoring code that will. I often mark code that I know is slow with a // PERF: comment, so that I can go back to it later and improve it. The nice thing about this approach is that you can move the slow code into your unit tests to ensure that its results match those of your optimised code, since all too often bugs are introduced during optimisation.
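The slow-reference-as-test-oracle idea can be sketched in a few lines. This is an illustrative example in Python (the function names and the prefix-sum task are invented for the sketch, not from the original post): the naive version is kept around purely so a unit test can check the optimised version against it.

```python
# Hypothetical example: a naive implementation tagged with a PERF comment
# is kept and reused as a test oracle for the optimised replacement.

def prefix_sums_slow(values):
    # PERF: O(n^2) - recomputes every prefix from scratch.
    return [sum(values[:i + 1]) for i in range(len(values))]

def prefix_sums_fast(values):
    # Optimised O(n) version: carry a running total instead.
    total, out = 0, []
    for v in values:
        total += v
        out.append(total)
    return out

def test_fast_matches_slow_reference():
    # The slow version defines correct behaviour; the fast one must agree.
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert prefix_sums_fast(data) == prefix_sums_slow(data)
```

If the optimisation ever introduces a bug, the comparison against the slow reference catches it immediately.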
The type of application you are writing matters a lot as well. Obviously, if you are developing a software product, you will want much tighter performance requirements from most of your code than if you're writing a bog-standard enterprise app. The reason? Well, in my experience almost all custom enterprise applications are IO-bound, and spend an awful lot of time waiting for the user or for results from the database. In such an application, your database design and tweaks will likely make far more impact than anything else you may do.
That said, what happens when, at the end of your project, you find that your code is too slow to deliver to the customer? Well, apparently in one talk Rico Mariani laid out the Ten Commandments of Performance.
Scott Kirkwood has some interesting arguments and counter arguments to premature optimisation:
[...] Now back to premature optimization. I think what they really want to say is that "Unnecessary optimization makes code that is unmanageable, buggy and late" and there's more:
- When a program has performance problems the programmer always knows which part of the code is slow...and is always wrong.
- Only through profiling do you really see where the performance issue is.
- You can waste a lot of time doing optimization that doesn't matter.
- Optimization can often make the code more obscure, and hard to maintain.
- Spending more time on optimization means you are spending less time on other things (like correctness and testing).

Well, that's the theory. And here are some of my counter-arguments:

- If a developer really enjoys what he is doing it won't take any "extra" time. In other words, taking time to optimize doesn't necessarily steal time from testing; more likely it steals time from surfing the web.
- Every time a developer looks at the code for something to optimize, he's looking at the code! He understands (groks) it better and may fix more bugs.
- Encouraging developers to leave in code that they know is embarrassingly slow makes them a little less proud of their code, a little less enthusiastic about finding and fixing their bugs.
- Products have failed because a review mentioned that they took twice as long to load a document as the competitor's product (even though it was 2 seconds instead of 1).
- When you put the code in production and it's too slow, you may be able to fix it by profiling and optimizing, but then again, you may not - you may have to redesign it.
So, crack out your profiler (I'm a big fan of ANTS Profiler from Red Gate), measure, find your bottlenecks, and optimise them. If you follow my approach of marking code you know to be slow, you might be surprised how rarely your estimation of what constitutes a performance bottleneck is correct.
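The measure-first workflow looks much the same in any language. As a minimal sketch, here is Python's built-in cProfile standing in for a commercial tool like ANTS Profiler (the `slow_concat` function is a made-up suspect, not from the original post):

```python
# Illustrative sketch: profile first, then optimise what the data shows.
import cProfile
import io
import pstats

def slow_concat(n):
    # A suspected bottleneck: repeated string concatenation in a loop.
    s = ""
    for i in range(n):
        s += str(i)
    return s

profiler = cProfile.Profile()
profiler.enable()
slow_concat(10_000)
profiler.disable()

# Dump the most expensive calls - let the measurements, not intuition,
# point at the real bottleneck.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Only once the profile confirms where the time actually goes is it worth spending effort on an optimisation.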
When should I stop optimising?
Obviously, this is a big question to ask. Usually, you will get a few low-hanging fruit, a couple of optimisations that give you large performance benefits. After that, however, it will become more and more difficult to find good high-value optimisations. At this point most people stop optimising and ship the code. However, what if that's still not good enough? Well, let's think about faking performance.
John Maeda says
Often times, the perception of waiting less is just as effective as the actual fact of waiting less. For instance, an owner of a Porsche achieves the thrill of directness between translation of a slight tap on the acceleration pedal, to be manifest as an immediate burst of speed. Yet in any normal rush hour situation, a Porsche doesn't go any faster than a Hyundai. The Porsche owner, however, still derives pleasure from his or her perception that they are getting to work faster in a quantitatively faster machine. The visual and tactile semantics of the Porsche's cockpit all support the qualitative illusion that the driver is going faster than when he or she is sitting inside a Hyundai.
[...] The premise was when a user was presented with a task that required time for the computer to crunch on something, when a progress bar was shown, the user would perceive that the computer took less time to process versus having been shown no progress bar at all.
So, one of the easiest ways to fake performance is to slap in a BackgroundWorker component, put your expensive code in there, and report progress via a progress bar. Since you are adding not only an extra thread but also more UI updates, there is no doubt whatsoever that your code is less efficient, yet the user will perceive it as more efficient.
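The pattern itself is framework-agnostic. Here is a minimal sketch in Python, with a worker thread and a queue standing in for BackgroundWorker and its progress-reporting event (all names here are invented for the sketch):

```python
# Sketch of the BackgroundWorker pattern: run expensive work on a worker
# thread and report progress through a queue, which the UI thread would
# drain to update a progress bar.
import queue
import threading
import time

def expensive_work(progress: "queue.Queue[int]") -> None:
    for step in range(1, 11):
        time.sleep(0.01)          # stand-in for real work
        progress.put(step * 10)   # report percent complete

progress: "queue.Queue[int]" = queue.Queue()
worker = threading.Thread(target=expensive_work, args=(progress,))
worker.start()

# In a real app the UI thread polls the queue and moves the progress bar;
# here we simply wait for the worker and collect the updates.
worker.join()
updates = []
while not progress.empty():
    updates.append(progress.get())
print(updates[-1])  # → 100
```

The extra thread and the progress messages add overhead, exactly as described above, but the steady feedback is what makes the wait feel shorter.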
Now, if even that's not enough, another approach is obviously to offload the processing to another machine. This is even better than the progress bar if the user does not need the results of the calculation right away: they specify what they want, press the start button, and can immediately begin working on something else. By offloading the processing, perhaps to a multiprocessor server, you gain a massive improvement in the users' perception of the speed of your application, as well as an improvement in the running time. The cost, obviously, is the work required to implement the handover, plus the hardware cost of the server.
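The key property of the handover is that submission returns immediately while the heavy work happens elsewhere. As a rough sketch (with a local thread pool standing in for the remote server, and `heavy_calculation` an invented placeholder job):

```python
# Sketch of the fire-and-forget handover: submit the job, get control
# back at once, and collect the result later if it is ever needed.
import time
from concurrent.futures import ThreadPoolExecutor

def heavy_calculation(n: int) -> int:
    time.sleep(0.05)  # stand-in for a long-running server-side job
    return n * n

executor = ThreadPoolExecutor(max_workers=2)

start = time.monotonic()
future = executor.submit(heavy_calculation, 12)  # returns immediately
elapsed = time.monotonic() - start

# The user carries on working while the job runs; submission itself
# took almost no time at all.
assert elapsed < 0.05
print(future.result())  # → 144
```

In a real deployment the executor would be replaced by a message queue or RPC call to the server, but the user-facing behaviour is the same: press start, keep working.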
Now, I am not advocating no optimisations at all, but I am trying to get across that sometimes these "faking it" approaches are easier and cheaper than extensive performance tweaking. Needless to say, sometimes, even with massive optimisation, you still need massive offloading capabilities. Just look at SETI@Home as an example.
So, keep performance very much in mind when designing your software, keep performance trade-offs in mind when writing it, keep the difficulty and impact of optimisations in mind when profiling, and keep faking it in mind when polishing your application.
Jeff Atwood has a nice post on how changes to the File Copy progress bar made users see the copy as less efficient, even when it was in fact more accurate.