Performance Matters

Performance matters to users, both practically and emotionally. Note that the performance the user perceives may not be the performance that is obvious or easy to measure -- in a GUI, for example, the user may perceive the button response time rather than the total task completion time.

Systems always have a performance requirement. It may be fairly low and easy to meet -- then the performance risk is low and we scarcely need to worry about it. On the other hand, it may be high -- close to the capacity of the hardware, of the substrate software we're building on, or on the edge of what we know how to program. In that case, the performance risk is high and should be addressed in tandem with the functionality risk.

Often we need lots of functionality at "reasonable" performance. Then the functionality risk is dominant, and we are wise to focus on a well-factored expression of functionality. If we do have a performance problem, we have predicted that it will be minor and that our system can be tuned for performance as needed.

In other cases, we are closer to the edge on performance and hence the performance risk must play a larger role in our planning. In these cases, performance requirements must be on the table so they can be explicitly balanced with functionality decisions.

Performance requirements affect the shape of the system. Therefore, when there is the possibility of a performance risk, do an early ArchitecturalSpike to validate your assumptions.

When you have a performance problem (hopefully early in the project), profile and measure first, OptimizeLater, and RefactorForPerformance? in preference to bumming microseconds. A Shell sort in Java will beat a bubble sort in assembly almost every time. Sometimes you even want a quicksort -- unless your input data is almost ordered, in which case a naive quicksort degrades badly. You should profile to find out.
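
To put rough numbers on that claim, here is a sketch (mine, not from the discussion above) that races a textbook bubble sort against a Shell sort in Java. The input size is an arbitrary choice, and System.nanoTime is a crude stand-in for a real profiler:

 // Hedged sketch: race the two algorithms on the same data so the
 // algorithmic gap (roughly O(n^1.3) vs O(n^2)) can be measured
 // rather than argued about.
 import java.util.Random;

 public class SortRace {
     static void bubbleSort(int[] a) {
         for (int i = a.length - 1; i > 0; i--)
             for (int j = 0; j < i; j++)
                 if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
     }

     static void shellSort(int[] a) {
         for (int gap = a.length / 2; gap > 0; gap /= 2)
             for (int i = gap; i < a.length; i++) {
                 int v = a[i], j = i;
                 while (j >= gap && a[j - gap] > v) { a[j] = a[j - gap]; j -= gap; }
                 a[j] = v;
             }
     }

     public static void main(String[] args) {
         int[] data = new Random(42).ints(20000, 0, 1000000).toArray();
         int[] copy = data.clone();

         long t0 = System.nanoTime();
         bubbleSort(data);
         long t1 = System.nanoTime();
         shellSort(copy);
         long t2 = System.nanoTime();

         System.out.printf("bubble: %d ms, shell: %d ms%n",
                 (t1 - t0) / 1000000, (t2 - t1) / 1000000);
     }
 }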


It makes me sad when my customers schedule load testing for the week before roll-out. Sometimes we get lucky, and we find performance problems that have a quick fix (like undersized or misconfigured routers). But what can we do if we find an architectural problem at that late date? Sigh. -- RogerHayes


Some teams make their developers optimize constantly, everywhere. This can lead to poorly-factored, obfuscated, unmaintainable code that is still slow. Instead of optimizing where it counts (at the bottlenecks), the team has wasted energy optimizing useless code that isn't frequently executed. Refactor, profile, then OptimizeLater. Well factored code is much easier to optimize. Refactoring also removes a lot of the cruft that slows you down.
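
As a crude illustration of "profile before you optimize" (a sketch added here, with made-up phase names -- a real profiler is still the better tool), even simple timers around each phase will usually point at the bottleneck:

 // Hedged sketch: time each phase of the work and let the numbers,
 // not intuition, say where optimization would count. The phases and
 // their sleep() durations are hypothetical stand-ins for real work.
 public class WhereTheTimeGoes {
     static void parse()   { sleep(30); }    // stand-in work
     static void compute() { sleep(400); }   // the actual bottleneck
     static void render()  { sleep(20); }    // stand-in work

     static void sleep(long ms) {
         try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
     }

     public static void main(String[] args) {
         long t0 = System.nanoTime(); parse();
         long t1 = System.nanoTime(); compute();
         long t2 = System.nanoTime(); render();
         long t3 = System.nanoTime();
         long total = t3 - t0;
         System.out.printf("parse %d%%, compute %d%%, render %d%%%n",
                 100 * (t1 - t0) / total, 100 * (t2 - t1) / total,
                 100 * (t3 - t2) / total);
     }
 }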

Do you write databases entirely in assembly? Or do you instead improve the data structures and algorithms to be more efficient? Does making the database administrative GUI code superfast really matter? Do you bother optimizing the mouse event loop?

If I expect that the database administrator will be using the GUI much, then you can bet that I would make sure the GUI is optimized enough to feel right. And I have already spent quite a lot of time optimizing mouse event loops. Why? Because having the program respond promptly to the mouse is more important to most users than whether the program actually processes its data a few seconds faster. However, optimization should be left until later, unless your program is so slow that you can't test it in a reasonable amount of time.
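
To illustrate with Java Swing (my choice of toolkit, not anything the comment above specifies): acknowledge the click immediately on the event dispatch thread and push the slow work onto a background SwingWorker, so the mouse stays responsive even while the program grinds:

 // Hedged sketch: the button gives instant feedback because the slow
 // work runs on a SwingWorker thread, not the event dispatch thread.
 // The 3-second sleep stands in for a hypothetical slow operation.
 import javax.swing.*;

 public class ResponsiveButton {
     public static void main(String[] args) {
         SwingUtilities.invokeLater(() -> {
             JFrame frame = new JFrame("demo");
             JButton button = new JButton("Process");
             JLabel status = new JLabel("idle");

             button.addActionListener(e -> {
                 status.setText("working...");   // immediate response
                 new SwingWorker<Void, Void>() {
                     @Override protected Void doInBackground() throws Exception {
                         Thread.sleep(3000);     // stand-in for real work
                         return null;
                     }
                     @Override protected void done() { status.setText("done"); }
                 }.execute();
             });

             frame.add(button, java.awt.BorderLayout.CENTER);
             frame.add(status, java.awt.BorderLayout.SOUTH);
             frame.setSize(240, 120);
             frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
             frame.setVisible(true);
         });
     }
 }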

If performance is critical, or it is apparent that it is poor, you should have profiled and optimized earlier, as you completed each feature or integration. BigBangOptimizing? is the same as BigBangTesting. ContinuousIntegration helps identify where the slowdowns come from. -- SunirShah
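
A cheap way to get that from ContinuousIntegration -- a sketch assuming JUnit 4, with a made-up operation and an arbitrary budget -- is a timing assertion that runs with every build, so the build that introduces a slowdown is the one that fails:

 // Hedged sketch: a coarse performance budget checked on every
 // integration. The 2-second budget and importCustomers() are
 // hypothetical; the point is that a slowdown fails the build that
 // introduced it, not the load test the week before roll-out.
 import org.junit.Test;

 public class ImportPerformanceTest {
     @Test(timeout = 2000)   // milliseconds; fail if the budget is blown
     public void importStaysWithinBudget() {
         importCustomers(10000);   // hypothetical operation under test
     }

     private void importCustomers(int n) {
         // ... the real work, elided here ...
     }
 }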


I write mostly numerical codes for problems where there is a huge bottleneck -- solving the final linear system Ax=b. With that in mind, I always write my code to be readable, maintainable and mutable, and leave optimizations for some later day. If I want my code to be faster, I know where to look: Ax=b.
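
In that spirit, here is a sketch (mine, not the author's actual code) of keeping the hot spot behind a small seam: a readable Gaussian elimination that can later be swapped for a tuned or library solver, once profiling confirms Ax=b is where the time goes:

 // Hedged sketch: a readable-first dense solver for Ax = b behind an
 // interface, so an optimized implementation can replace it without
 // touching the rest of the code. No handling of singular systems.
 interface LinearSolver {
     double[] solve(double[][] a, double[] b);
 }

 class NaiveGaussianSolver implements LinearSolver {
     public double[] solve(double[][] a, double[] b) {
         int n = b.length;
         // Work on copies so callers keep their data.
         double[][] m = new double[n][];
         for (int i = 0; i < n; i++) m[i] = a[i].clone();
         double[] x = b.clone();

         for (int k = 0; k < n; k++) {
             // Partial pivoting: bring the largest remaining entry up.
             int pivot = k;
             for (int i = k + 1; i < n; i++)
                 if (Math.abs(m[i][k]) > Math.abs(m[pivot][k])) pivot = i;
             double[] rowTmp = m[k]; m[k] = m[pivot]; m[pivot] = rowTmp;
             double tmp = x[k]; x[k] = x[pivot]; x[pivot] = tmp;

             // Eliminate below the pivot.
             for (int i = k + 1; i < n; i++) {
                 double f = m[i][k] / m[k][k];
                 for (int j = k; j < n; j++) m[i][j] -= f * m[k][j];
                 x[i] -= f * x[k];
             }
         }
         // Back substitution.
         for (int i = n - 1; i >= 0; i--) {
             for (int j = i + 1; j < n; j++) x[i] -= m[i][j] * x[j];
             x[i] /= m[i][i];
         }
         return x;
     }
 }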


I am reminded of a conversation I had once with my old boss, DaveDodson. We had both moved on since working together at Purdue, he to Boeing, I to Tektronix. Anyway, with great excitement I tried to explain my then recent work with Smalltalk. "With Smalltalk," I said, "I can write in a line or two things that would take pages and pages to write in Fortran."

He was impressed, of course, but went on to explain, "I'm doing the exact opposite. I'm taking code that I could write in a couple of lines of Fortran and writing it in pages and pages of Cray XMP assembly code."

Dave made the XMP worth owning. Every tenth of a percent he shaved would pay his salary. He went to the limit. I'd done that before myself. I kinda miss it. -- WardCunningham

I always try to judge efficiency from a more global perspective by thinking in WallTime?. If I'm only going to run my simulator 100 times and shaving 1 hour per run takes me three weeks of programming time, then it just ain't worth it: saving an hour on each of 100 runs buys back 100 hours, against roughly 120 hours of programming time. Then there's always the issue of maintenance time, etc. I'm not saying that optimizing isn't worth it, just that you always have to step back and see the forest for the trees. --PatNotz


I've set up a website with lots of performance tuning info and links at http://members.optusnet.com.au/speedup. --BruceCropley?

See also FuturistProgramming, MakeItFastBreaksMakeItRight, AsPossible


CategoryOptimization

