Make It Fast Breaks Make It Right

See also MakeItFastAndRight


If you are iterating "make it work, make it right, make it fast" then I think in early iterations you need to avoid PrematureOptimization and instead try to DesignForPerformance (so you don't end up with UniformlySlowCode). "Make it fast" should mean "refactor your elephant into a potential cheetah". OptimizeLater. Only when things are pretty stable and you understand the specific performance bottlenecks should you do the kind of optimizations that interfere with "making it right". ProfileBeforeOptimizing.
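To make ProfileBeforeOptimizing concrete, here is a minimal sketch (in Java, with invented class and method names - an illustration, not code from any project discussed here): measure where the time actually goes before deciding which part of a correct design to disturb.

  // Hypothetical sketch: time the candidate hot spots before touching either one.
  import java.util.ArrayList;
  import java.util.List;

  public class ProfileFirst {
      static long parse(List<String> rows) {
          long n = 0;
          for (String row : rows) n += row.split(",").length;     // suspected hot spot A
          return n;
      }

      static long format(long n) {
          StringBuilder sb = new StringBuilder();
          for (long i = 0; i < n; i++) sb.append(i).append(',');  // suspected hot spot B
          return sb.length();
      }

      public static void main(String[] args) {
          List<String> rows = new ArrayList<>();
          for (int i = 0; i < 100_000; i++) rows.add("a,b,c,d");

          long t0 = System.nanoTime();
          long parsed = parse(rows);
          long t1 = System.nanoTime();
          long formatted = format(parsed);
          long t2 = System.nanoTime();

          // Optimize whichever section the numbers say dominates,
          // not the one that merely looks slow.
          System.out.printf("parse: %d ms, format: %d ms (%d)%n",
                  (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, formatted);
      }
  }

A real profiler gives the same answer with less ceremony; the point is only that the measurement comes before the optimization.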


I ran a couple of workshops at a recent conference, about CORBA, distribution and service-based vs. entity-based architectures. One of the issues that cropped up (independently) in both these workshops was the problem of performance and how performance should be an explicit consideration right from the start. This breaks the 'Make it work, make it right, make it fast' maxim that has influenced a lot of OT developers (including me), but the problem for many of us in the workshop was that the 'make it fast' stage undid a lot of the effort put into the 'make it right' stage (one project passed entire database tables around to get performance, another used the Smalltalk BOSSing mechanism to file objects out of and into different address spaces). We came to the conclusion that you have to do 'make it work, make it right, make it fast' in many tight iterations so that you aren't wholly compromising your design for performance, but you're also not compromising performance for an overly-elegant design. -- PaulDyson


Yeah, OT does seem to bias towards structures that are resilient in the face of functional requirements change, but not nonfunctional requirements change. I tend to think that it has to do with the fact that nonfunctional requirements are not often dealt with at the same level or at the same time as static structure during design. -- MichaelFeathers


"Make it work, make it right, make it fast..." I think the difficulty is that the requirements were not understood at the outset: The code's not working *or* right if it won't handle the throughput that will be required in production.

Because of our deployment environment on my current project (a 2400 baud WAN), we've had to pay close attention to this issue from the very beginning. It has positively influenced our overall design/coding experience. -- AnthonyLander


Yeah, I agree. Sometimes people forget that making it fast is an up-front requirement for some systems. I've seen people designing and coding it "right", but it was so slow no one wanted to use it. You gotta give at least some thought to performance. For many other systems, however, giving too much consideration to performance bogs down the development effort unnecessarily. It all depends on the requirements. -- DavidHooker


I think you have to get it within an order of magnitude at the outset. You're not doing performance tuning if it's that large - you're redesigning algorithms. -- AnthonyLander


I disagree. I've seen order of magnitude improvements come through redesign as a part of performance tuning. Redesigning algorithms IS a big part of performance tuning -- it's finding what parts are important, and focusing all of your efforts there, vs. leaving the parts that can be done the "easy" but not necessarily efficient way. Quite often I'll start my students off representing something as a linear search, knowing full well that some of those collections will need to be re-done with a dictionary (hashed lookup) or with a sorted collection and a binary search. Getting them to find the right business abstractions early is more important than getting down to the bits and bytes level of what collection representation you need to use. -- KyleBrown

P.S. A great statement of this is in the LazyOptimization paper that KenAuer and KentBeck wrote for PatternLanguagesOfProgramDesign 2. There are some terrific rules of thumb about performance tuning, as well as some good war stories about the process.
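As a hedged illustration of the linear-search-first approach KyleBrown describes (the CustomerDirectory example and its names are invented for this sketch): the first representation is the easy one, and the hashed lookup is swapped in behind the same interface only once the profile shows the lookup matters.

  import java.util.ArrayList;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Set;

  // First pass: the "easy" representation - a list and a linear scan.
  class CustomerDirectory {
      private final List<String> names = new ArrayList<>();

      void add(String name) { names.add(name); }

      boolean contains(String name) {
          for (String n : names) {     // O(n) per lookup - fine until the profile says otherwise
              if (n.equals(name)) return true;
          }
          return false;
      }
  }

  // Later pass, behind the same interface: a hashed lookup, adopted only after
  // measurement showed contains() dominating. Callers do not change.
  class FastCustomerDirectory {
      private final Set<String> names = new HashSet<>();

      void add(String name) { names.add(name); }

      boolean contains(String name) { return names.contains(name); }   // O(1) expected
  }

Because the business abstraction (a directory you can add to and query) was right from the start, the representation change stays local.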


I think this is certainly true in a single-processor environment, but it is less clear to me whether the same principles apply in a distributed environment. In such an environment the remote message sends dominate the performance, and so it is the decomposition and physical distribution of your objects that really matter; the efficiency of your algorithms is of secondary importance. -- PaulDyson
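A minimal sketch of that trade-off (in Java, with invented interface and type names): when remote message sends dominate, the shape of the interface matters more than the algorithms behind it. A fine-grained interface costs one network round trip per attribute; a coarse-grained one moves the same data in a single call.

  // Chatty, fine-grained remote interface: three round trips per order displayed.
  interface OrderServiceFineGrained {
      String customerName(long orderId);
      int itemCount(long orderId);
      double total(long orderId);
  }

  // Coarse-grained alternative: one round trip returning everything the client needs.
  final class OrderSummary {
      final String customerName;
      final int itemCount;
      final double total;

      OrderSummary(String customerName, int itemCount, double total) {
          this.customerName = customerName;
          this.itemCount = itemCount;
          this.total = total;
      }
  }

  interface OrderServiceCoarseGrained {
      OrderSummary summarize(long orderId);   // the whole summary crosses the wire once
  }

This is the same kind of trade Kent describes below as ObjectDenormalization?: some of the fine-grained object model is given up at the distribution boundary to avoid paying the remote-call cost on every message send.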


Paul's solution seems to me to be exactly the same as the well-known, well-understood, and widely abused DatabaseDenormalization? problem. In both cases you have one model with superior long term characteristics being replaced with another with superior short term characteristics.

The best solution to this kind of problem is to solve it at another level. If you had an object distribution mechanism that efficiently moved an object and its state, then you could code clean and still be fast enough. If you had 10X the network bandwidth, then you could code clean and still be fast enough. If you could reduce the performance requirements, then you could code clean and already be fast enough.

The second best solution to this kind of problem is to code clean, understand the architecture, and then muck the system up in as limited a way as possible to achieve the desired performance. Paul - how long did it take you to convert StateObject back into flags and conditionals?

About an hour

The problem with this approach comes not with the approach itself, but with our reaction to it. We have all been in the situation where we didn't have the performance we needed and we couldn't fix the problem. Some people will destroy any number of projects with premature optimization to avoid being in the situation again.

Unwinding one application of StateObject is not an admission of failure. It is not that you shouldn't have used StateObject in the first place. While you were learning about the options for encoding changing logic, you used a tool (StateObject) that helped you learn fast. When you were learning about the performance of distributed systems, you used a tool (ObjectDenormalization?) that helped you learn fast. It was still the right thing to do to use StateObject first.

It would only be wrong to apply StateObject if it never worked in a distributed system. Paul- do you have other uses of StateObject that stuck? Even then, if you had lots of dynamic logic to learn about, and if the cost of denormalizing was low enough, it would still make economic sense to use it and then unwind it.

Yes - I always look to use StateObject first and then undo as necessary

-- KentBeck, LazyOptimization rules!

One thing though - performance and memory/database efficiency are perhaps the most obvious symptoms of the effects of preferring class management over object management (see ClassManagementVersusObjectManagement, where the StateObject example came from), but they are not the only ones. I often see people wielding GoF like a weapon and applying patterns to everything in sight. They produce designs that they feel must be good (it's got lots of patterns in, right?), but they're catering for changes that may never happen and their design is overly-complex in the object management sense (see DesignShield). This doesn't sit well with PiecemealGrowth, which requires that we build things stronger than we perceive will be necessary, but not that we give a family home foundations capable of supporting a car park.

Italics interspersed by PaulDyson
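For readers without the pattern to hand, here is a hedged Java sketch of the two forms being traded between above (the Connection example and its names are invented): the StateObject form keeps each behaviour in its own class, and unwinding it for performance collapses the states back into a flag and conditionals.

  // StateObject form: behaviour lives in small state classes.
  interface ConnectionState {
      ConnectionState send(String message);
  }

  class Open implements ConnectionState {
      public ConnectionState send(String message) {
          System.out.println("sent: " + message);
          return this;
      }
  }

  class Closed implements ConnectionState {
      public ConnectionState send(String message) {
          throw new IllegalStateException("connection closed");
      }
  }

  class Connection {
      private ConnectionState state = new Open();
      void send(String message) { state = state.send(message); }
      void close() { state = new Closed(); }
  }

  // Unwound form: one flag and inline conditionals - cheaper to serialize and move
  // between address spaces, but the dynamic logic is now smeared through the methods.
  class FlatConnection {
      private boolean open = true;

      void send(String message) {
          if (!open) throw new IllegalStateException("connection closed");
          System.out.println("sent: " + message);
      }

      void close() { open = false; }
  }

The conversion is mechanical in either direction, which is part of why doing it late, and only where measurement demands it, costs so little.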


"This breaks the 'Make it work, make it right, might it fast' maxim" No, it doesn't. You wanted a system that performed and was as well structured as possible. You followed the maxim. Now you have the system. You can't think of a cheaper way to get there. So it worked.

Nobody ever said, "make it right and then keep it like that forever." You put what you know into your design. When you know more, you put in more. Design is always changing - because the requirements change, because you learn, because you need performance.

-- KentBeck


I have an insight about this. Performance is usually secondary to functionality. We are more apt to say "I need a square root function, how fast is it?" than to say "I have a 50 microsecond hole in my system, what can I place in it?" For this reason, it makes sense to go for functionality first, since you know that you will need it.

I disagree; performance is usually just as important as functionality. Most software is real-time, i.e. the time taken to deliver results is just as important as the correctness of those results. Consider a payroll system that is run once a week to calculate employees' wages - it's useless if it takes longer than a week to run.

Imagine holding the speed or memory usage of a system constant and slowly adding functionality. It is next to impossible. Holding the functionality of a system constant and optimizing speed and memory usage is much more practical because we work in terms of functionality. Nonfunctional qualities are emergent properties of things with functionality.

-- MichaelFeathers

Good point. I guess counter-examples would be things like SetiAtHome, encryption/prime factoring programs, elaborate screen savers, and nulling free memory pages, where one thinks, "My machine has a lot of spare cycles when I'm not at my desk; what can I use them for?". -- DaveHarris


Two observations:

For instance, lately I broke up some complex window invalidating code into a kind of "Composite Visitor". I broke the basic "redraw" operations down into atomic elements and then snapped them together into a collection class with the same interface. I passed the entire collection into the layout algorithm, which just called methods on the redraw interface; these were delegated to the components. Immediately, I had a simple and powerful way of tweaking the algorithm just by readjusting the components I threw into the collection.

-- SunirShah
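A minimal Java sketch of the arrangement described above (the redraw operation names are invented): atomic operations and a group of them share one interface, so the layout algorithm calls the same methods whether it is handed one operation or a whole collection.

  import java.util.ArrayList;
  import java.util.List;

  // The shared interface the layout algorithm is written against.
  interface RedrawOp {
      void redraw();
  }

  // Atomic operations.
  class InvalidateBackground implements RedrawOp {
      public void redraw() { System.out.println("invalidate background"); }
  }

  class RepaintBorder implements RedrawOp {
      public void redraw() { System.out.println("repaint border"); }
  }

  // The composite: a collection of operations behind the same interface, so the
  // algorithm is tweaked just by changing what goes into the collection.
  class RedrawGroup implements RedrawOp {
      private final List<RedrawOp> ops = new ArrayList<>();
      RedrawGroup add(RedrawOp op) { ops.add(op); return this; }
      public void redraw() { for (RedrawOp op : ops) op.redraw(); }
  }

  class Layout {
      static void run(RedrawOp redraw) {
          // ... layout work ...
          redraw.redraw();   // doesn't know or care whether this is one op or many
      }

      public static void main(String[] args) {
          run(new RedrawGroup().add(new InvalidateBackground()).add(new RepaintBorder()));
      }
  }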

Well observed. Optimization (looking for performance beyond the initial performance requirement), like the addition of any new requirement, tends to deteriorate the design integrity, which in turn reduces maintainability. Therefore the more you optimize against design integrity (i.e. making it fast breaks making it right), the fewer opportunities you'll have to optimize later. So PaulDyson was right in concluding that you should always take small steps. Optimization needs refactoring. -- ChristopheThibaut

"Make it fast" should be a requirement, in the complete sense. Define the needed speed, or performance, or whatever explicitly at the outset and work to meet it. You may need to change your requirement upward or downward as time goes on, but you avoid arbitrary tweaking in the name or "performance."


Discussion moved to AsFastAsNecessary


See IncompatibleGoals

