As Fast As Necessary

The question I have is: suppose you need to make a system that must perform an operation O in M milliseconds. If it doesn't, you simply do not have a usable system. How does this affect the way that you design? Should it? Or should you proceed optimistically and ignore the constraint? -- MichaelFeathers


You can't ignore a constraint like that; it is part of the requirements. You said it yourself: you do not have a usable system. There are many examples of this kind of system, most of them in communications. In the system I am developing right now, performance is tied for number one (with reliability) as a main goal of the system. Our users will only tolerate latency up to a point; beyond that, they switch products! -- DavidHooker


This is all becoming much ado about nothing. We never said you CAN ignore a requirement like this. The question comes down to -- how do you get there? Obviously, you have to build something that approximates a first guess at how to accomplish the thing you need (i.e., how would I do operation O?). This is what Kent & Ken mean by make it WORK. The next thing to do is to review your approximation -- does it accomplish the thing in a sensible way? Is the code understandable? Is it extensible in a reasonable way for reasonable purposes? This is how you make it RIGHT. Finally, you ask: can it do it in M milliseconds? This corresponds to making it FAST.

I think that this is a fundamental misunderstanding based on differing assumptions. For you to specify something down to the level of "do operation O in M milliseconds", in my opinion you must have already completed nearly the entire analysis and design of your system. The "make it work" and "make it right" parts are long past at that point. In the world that Kent and I live in, this sort of requirement usually doesn't surface until you have outlined "this is what I can do and this is how I can do it". On the other hand, sometimes it does -- for instance in C3, there were some hard requirements on how long it took to do payroll. But you still must walk before you can run. Determining how to do something is key to understanding how to do it well, and "well" means different things to different people.

KyleBrown


It seems to me that you have two somewhat conflicting goals: you want to know early if the system can't possibly be fast enough, and you want the system to be fast enough when it ships. The goals conflict because a constant concern with performance will undoubtedly yield a system that doesn't perform as well as it could.

LazyOptimization encourages an early BackOfTheEnvelopeCalculation or PerformanceExperiment to allay fears of being unable to reach the PerformanceGoal. After that, I sure don't know any better way to get performance than only implementing what I need, refactoring the hell out of it, and then tuning based on real profiles. --KentBeck
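
A PerformanceExperiment along these lines needn't be elaborate. The Python sketch below is one minimal version: it times a stand-in operation against a made-up 50-millisecond budget and reports the worst case. The operation, the budget, and the sample count are all placeholders for illustration, not anything specified above.

  import time

  BUDGET_MS = 50        # made-up goal: "operation O" must finish in 50 ms
  SAMPLES = 100

  def operation_o():
      # stand-in for the real operation; swap in the first-guess implementation
      sum(i * i for i in range(100_000))

  worst_ms = 0.0
  for _ in range(SAMPLES):
      start = time.perf_counter()
      operation_o()
      worst_ms = max(worst_ms, (time.perf_counter() - start) * 1000.0)

  print("worst of %d runs: %.2f ms (budget %d ms)" % (SAMPLES, worst_ms, BUDGET_MS))
  if worst_ms > BUDGET_MS:
      print("the fear is justified -- rethink the approach before building on it")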


Okay, this is what I was getting at when I said "We came to the conclusion that you have to do make it work, make it right, make it fast in many tight iterations" way back at the beginning of MakeItFastBreaksMakeItRight. Many of the people I talked to in the workshops used 'work, right, fast' in a waterfall-type way (you spend a long time making it work, you spend a long time making it right, you spend a long time making it fast and come up with something quite wrong ... and then the system is shipped). Maybe this isn't what Kent meant when he said 'work, right, fast', but it is certainly how these people were doing it.

I'm not involved with systems that are specified to the 'operation O in M milliseconds' level, but I am involved with a system where the user sometimes says 'this just isn't fast enough'. By doing tight iterations of 'work, right, fast' we have a better chance of finding out what the user's performance constraints are and of designing a system that is as right as possible, in the code management sense, whilst meeting those requirements. --PaulDyson


In our environment, if it takes more than 1/60th of a second, it's too slow. Your common 8-year-old Nintendo player is simply too demanding and rigorous a user to let you get away with anything but blindingly fast code. As a result, we operate on three principles:

  1. You probably have a pretty good idea up front of where the slow parts of the system are going to be. Use good design patterns to isolate these sections of code so that they can be improved without trashing your entire design (see the sketch after this list).
  2. You probably don't have a very good idea of where the slow parts of your code are. Never waste time optimizing unless you know for sure that you are spending a significant fraction of time in a particular section of code.
  3. Spend more time thinking of a better way to accomplish your goal than optimizing your current solution.
--JeromeKaraganis
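
One way to read principle 1 is: put a narrow seam around the code you suspect will be slow, so a faster implementation can be dropped in later without touching the callers. The Python sketch below illustrates the idea; the renderer and sprite names are invented for illustration, not taken from any system described here.

  class Sprite:
      def draw(self):
          pass                      # stand-in for a per-sprite draw call

  class Renderer:
      """Narrow seam around the code suspected to be slow."""
      def draw_frame(self, sprites):
          raise NotImplementedError

  class SimpleRenderer(Renderer):
      # First guess: make it work, make it right.
      def draw_frame(self, sprites):
          for sprite in sprites:
              sprite.draw()

  class BatchedRenderer(Renderer):
      # Drop-in replacement, written only after profiling shows draw_frame is hot.
      def draw_frame(self, sprites):
          self._blit_batch(list(sprites))

      def _blit_batch(self, batch):
          pass                      # stand-in for one batched call into the graphics layer

  def game_loop(renderer, sprites):
      # The loop depends only on Renderer, so the slow part can be swapped
      # without trashing the rest of the design.
      renderer.draw_frame(sprites)

  game_loop(SimpleRenderer(), [Sprite() for _ in range(3)])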


In such situations I believe in continuous benchmarking, so that you know how your systems behave. How fast is a function call, a million iterations, a file open/read/write/close, a database query on a 100/10,000/million-record table, a view, an image scan, a feature scan, a line draw on the screen ... whatever.

If you do enough of these - and get a feeling for benchmarking - you can predict the time behaviour of your systems and know in advance whether or not you are on the safe side. If you are safe, then any work on performance is wasted. But if there is a risk, then make sure your design is performance-optimized from the start, because otherwise you will be in for a hell of a trip. No "work, right, fast" will help you then. -- HelmutLeitner
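
A continuous-benchmarking harness of this kind can be kept very small. The Python sketch below times a few of the primitive operations mentioned above; the specific operations and counts are arbitrary, and the point is only to keep numbers like these current as the system evolves.

  import os
  import tempfile
  import time

  def bench(name, fn):
      start = time.perf_counter()
      fn()
      print("%-30s %8.3f ms" % (name, (time.perf_counter() - start) * 1000.0))

  def noop():
      pass

  def ten_thousand_calls():
      for _ in range(10_000):
          noop()

  def million_iterations():
      total = 0
      for i in range(1_000_000):
          total += i

  def file_roundtrip():
      fd, path = tempfile.mkstemp()
      os.close(fd)
      with open(path, "w") as f:
          f.write("x" * 4096)
      with open(path) as f:
          f.read()
      os.remove(path)

  bench("10,000 function calls", ten_thousand_calls)
  bench("a million iterations", million_iterations)
  bench("file open/write/read/close", file_roundtrip)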


See also: IdontHaveToBeTheFastestJustFasterThanYou, AsPossible

