What are the goals of a 'good' software design?
I would suggest three: Function (what the software does), Form (how the code is structured), and Fitness (how well it performs - response time, throughput, resource consumption under real load). In practice, most discussion of "good design" dwells almost exclusively on Form.
This over-focus on Form leads to a destructive habit of thought: the idea that it is impossible to design for performance, that we must always design the Function and the Form first, and ONLY then look at Fitness. I would argue, however, that this approach has dire consequences. Fitness has tremendous impact on Form, and even on the software's ability to Function.
The contrary argument is that we cannot know which elements of the software will be bottlenecks, and that we must profile in order to discover them. This is an abuse of the PrematureOptimization argument: optimization is about eliminating bottlenecks, but designing for a particular fitness shapes the function and form of the software from the outset. You can't optimize an elephant into a cheetah.
Areas that must be designed for up front, and cannot be refactored in later without tremendous effort, include:
* concurrency and locking models (optimistic vs. pessimistic, and the specific behaviour of the target database)
* distribution boundaries and message granularity
* transaction and database-access strategy (how much work is pushed into the SQL engine)
None of this necessarily produces code that is more complicated or more "optimized". It is about promoting what would be arbitrary design choices in a single-process software system into key design choices for a multi-process, multi-user software system. For example, minimizing latency in a distributed system is often achieved by sending fewer, larger messages rather than making several round trips. The impact on code complexity is negligible, but the principle chosen will pervade the entire system, making a later refactoring to the more performant approach an expensive chore.
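To make the message-granularity point concrete, here is a minimal sketch (the interfaces and names are invented for illustration, not taken from any real system). The chatty design costs one network round trip per attribute; the coarse-grained design moves the same data in a single, larger message:

  // Chatty remote interface: one network round trip per attribute.
  interface ChattyCustomerService {
      String getName(long customerId);     // round trip 1
      String getAddress(long customerId);  // round trip 2
      long   getBalance(long customerId);  // round trip 3
  }

  // Coarse-grained interface: the same data in one larger message.
  interface CoarseCustomerService {
      CustomerSummary getSummary(long customerId);  // one round trip
  }

  class CustomerSummary {
      String name;
      String address;
      long balance;
  }

Neither version is more "complicated" than the other; the cost lies in how many callers bake the chatty style into their code before anyone measures the latency.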
Designing for performance is a key part of successfully delivering software, and should not be ignored. I worry that it is ignored, all too often, in many object-oriented systems.
See InexperienceGeneratesFailure
Honest question: what is it about our programming systems that causes these decisions to "pervade the entire system"? Could we improve our language/framework/whatever to the point where the decision is just in one place? I think that our inability to optimize an elephant into a cheetah is a failure of our programming systems. ;) -- AdamSpitz
This has less to do with technology constraints in programming systems and more to do with the creative endeavour of design itself. I'm looking at this from an AgileMethods view: there's only so much you can anticipate, and only so much indirection you can build, before you actually have to *do* something and make a limiting decision about the concurrency and distribution principles for this software, based primarily on today's business priority. I believe there are still major classes of decisions that have the traditional cost-of-change curve problems. Perhaps this could be viewed as a failure to abstract these "cross-cutting concerns" satisfactorily (though AspectOrientedProgramming may try). -- StuCharlton
Right. That's how I'm viewing it. :) If you're out in the real world trying to build something, the mindset is, "When you come across a decision that you can't factor, think harder about it, and make sure to consider performance." But if you live in the fantasy world that I live in, the mindset is, "When you come across a decision that you can't factor, fix your universe so that you can factor it." So I'm trying to figure out what kinds of unfactorable decisions caused you to create this page. :) -- AdamSpitz
I'm not sure I agree with the distinction between "performance/fitness" and "functionality". For one thing, the definition of performance given here differs from the traditional definition - which relates to things such as throughput, user response time, speed, memory consumption, operation under stress, and the like. Depending on the application, these things can either be inconsequential (a "hello, world" program) or critical (a real-time system).
The definition of performance given above seems to be a rehash of "does it work". If my application breaks on an Oracle database and produces incorrect results, then I don't have a performance problem. I have defective software - failure to conform to requirements, which is a functional problem.
Of course, there is a key issue saving the argument - for many applications, a specific level of performance is a bona-fide requirement. If I'm writing an engine controller, I had better be able to respond to a certain sensor reading within x microseconds; if my software takes x+1, it has failed. If I'm writing an RDBMS, throughput is a key BannerSpec? and an important requirement; a slow database engine will not be well-received. If I'm writing a distributed enterprise application, the deployment environment must be considered. What suffices for an office with twenty PCs and a single server won't work in a worldwide corporate network with thousands of servers, tens (or hundreds) of thousands of clients, millions of customers logging on to the website, and potentially unreliable and slow links between sites. Conversely, the solution for the megacorporation is just as inappropriate for the small office.
The author is indeed correct - one cannot optimize an elephant into a cheetah. However, portions of his argument appear to be a strawman; I don't see anyone advocating the design of elephants with the notion that, if necessary, a profile/refactor loop will magically optimize them into cheetahs. (Or the reverse - while a cheetah may be fast, it's useless for hauling stuff around. Many systems need to be elephants but don't need to be cheetahs.) The PrematureOptimization problem that people warn about is trying to design a cheetah when a turtle will suffice - squeezing every last cycle out of a command parser for an interactive program, for example. Trying to design a turtle when an elephant is called for is a completely different problem - and one which is indeed difficult to fix.
To sum it up - the author is onto a very important point. I just don't think it should be called DesignForPerformance, as the word performance has lots of baggage associated with it. HorsesForCourses is one good name; ConsiderTheTargetSystem? might be another.
-- ScottJohnson
In my definition of performance, I really did mean response time, speed, memory consumption, and resource contention. My point is that required response times, memory constraints, throughput, and so on pervade the whole system; they are not a clearly demarcated "feature". For example, the locking models of Oracle are night-and-day different from Sybase's, and you will not scale past 20 users unless you take into account Oracle's very specific SnapshotIsolation and MultiversionConcurrencyControl approach to locking and concurrency management.
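A minimal JDBC sketch of the kind of difference this forces (table, columns, and values are invented for illustration). Under Oracle's MultiversionConcurrencyControl, readers never block writers, so a naive read-then-write can silently lose updates; one common fix is to re-check the value you read in the WHERE clause:

  import java.sql.Connection;
  import java.sql.PreparedStatement;

  class OptimisticDebit {
      // Returns false if another session changed the row since we read it,
      // in which case the caller must re-read and retry.
      boolean debit(Connection conn, long accountId,
                    long oldBalance, long amount) throws Exception {
          PreparedStatement ps = conn.prepareStatement(
              "UPDATE account SET balance = ? WHERE id = ? AND balance = ?");
          ps.setLong(1, oldBalance - amount);
          ps.setLong(2, accountId);
          ps.setLong(3, oldBalance);  // detects a concurrent change under MVCC
          return ps.executeUpdate() == 1;
      }
  }

On a pessimistic, lock-based engine the same read might simply have blocked. Code written against one model and moved to the other doesn't fail loudly; it just behaves differently under concurrent load - which is exactly why this cannot be retrofitted cheaply.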
As you say, fitness could be viewed as a special case of function, in that "I need this software to work for 1000 users". But that's in terms of requirements. In terms of design, performance is not something you can pin down to a describable use case that can be implemented in a module. It's something that pervades the entire system.
Perhaps my argument is a strawman. I worried that it was when I wrote it, but I'm not so sure. I see too many software failures due to absolute disregard for performance when designing the system (in an evolutionary manner).
"Performance doesn't matter, throw more hardware at it" is a major cultural trend I see everywhere I train and consult. To a certain extent it's pure economics and it's acceptable. I see it somewhat as a counter-reaction to the decades of "performance-oriented computing" we had in the 1980's and 1990's with C and C++. But I think the counter-reaction has gone to an unacceptable level of foolishness and leads to expensive software failures.
I see AgileMethods advocates who walk a fine line in suggesting that "performance doesn't matter". I know of systems written by some agile gurus (in Smalltalk and Java) that eventually hit a performance wall and required tremendous re-engineering effort to make them perform. Certainly they were well-meaning when they did the original job, and I am not suggesting they were blatantly "anti-performance". But I wonder whether their work ever truly included basic performance analysis techniques such as queuing models.
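For what it's worth, the sort of queuing analysis I mean is back-of-envelope, not rocket science. A sketch using the standard M/M/1 result - mean response time = service time / (1 - utilization) - with invented numbers:

  class QueueSketch {
      public static void main(String[] args) {
          double serviceTime = 0.050;   // 50 ms of server work per request
          double arrivalRate = 15.0;    // requests per second
          double utilization = arrivalRate * serviceTime;          // 0.75
          double responseTime = serviceTime / (1.0 - utilization); // 0.200 s
          System.out.printf("utilization %.0f%%, mean response %.3f s%n",
                            utilization * 100, responseTime);
      }
  }

At 75% utilization the mean response time is already four times the bare service time, and it grows without bound as utilization approaches 100% - the "performance wall" in miniature.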
That's a very interesting assertion. What references cite specific examples of agile projects that "hit a performance wall" and what the effort was to resolve performance requirements? -- StevenNewton
No references, just anecdotes from people who worked on those systems. Generally the problems were: systems that don't let go of object references (defeating disk garbage collection), too many "do little but delegate" classes that cause problems with concurrent updates, and SQL-based systems that don't actually use SQL, preferring a lot of iteration in the domain model. The solutions were ugly. (Update: I believe the book DomainDrivenDesign refers to one of the systems I'm alluding to here.)
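The last of those - iterating in the domain model instead of using SQL - looks something like this sketch (table and column names invented):

  import java.sql.Connection;
  import java.sql.ResultSet;

  class OrderTotals {
      // Anti-pattern: ship every row to the client, loop in the domain model.
      long totalSlow(Connection conn) throws Exception {
          long total = 0;
          ResultSet rs = conn.createStatement()
                             .executeQuery("SELECT amount FROM orders");
          while (rs.next()) total += rs.getLong(1);  // N rows over the wire
          return total;
      }

      // Let the database do the aggregation: one row comes back.
      long totalFast(Connection conn) throws Exception {
          ResultSet rs = conn.createStatement()
                             .executeQuery("SELECT SUM(amount) FROM orders");
          rs.next();
          return rs.getLong(1);
      }
  }

The two versions are functionally identical on a ten-row development table, which is exactly why the slow one survives development and only falls over in production.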
Not only are the above assertions a true characterization of many real-life situations that a lot of people have witnessed, they have also come to be advocated in books such as PatternsOfEnterpriseApplicationArchitecture, J2EE Architecture Patterns, and quite a few others (there's a whole litany of ObjectRelationalMapping tools out there, chief among them EJB, that make a virtue of this nonsense). It's yet one more aspect of ObjectRelationalPsychologicalMismatch. Until we get the entire workforce to accept that a minimal education in basic ComputerScience cannot be replaced by ad-hoc pattern voodoo, we'll see many more such manifestations.
I see J2EE advocates positing that you can and should write your code in a database-independent manner and avoid stored procedures at all costs, implicitly suggesting that "performance doesn't matter". I see "distribution by skill set", with literally hundreds of application servers popping up inside organizations, physically distributed only because of an inability to distinguish between a logical and a physical SeparationOfConcerns. These hundreds of application servers lead to systems that do not scale, that require tremendous investments in hardware and tremendous re-engineering when they eventually fall over completely.
On the other hand, I see pseudo-expert architects point to a software system running on a single machine, say "it will not scale", and proceed to rewrite it in J2EE, only to require 5x the hardware to reach the same level of performance. But "it scales now", because we can add more hardware - even though the original system had resources to spare and could have scaled onto a cluster if you tried, just not a J2EE cluster. This is classic "marketecture": designing for the glossy ad instead of the needs of the system.
All of these observations have a variety of cultural reasons behind them, and the solution, I think, is to DesignForPerformance.
I'm not sure I agree with the distinction between "performance/fitness" and "functionality". ... The definition for performance given above seems to be a rehash of "does it work".
Form, Fitness and Function are measurable criteria for us to judge our system by - the kind of criteria that can be used as requirements. You seem to be arguing that Fitness is a kind of Function. The reason it's important to treat them separately is that Function is implemented directly by the code we write, but Fitness (performance) is a derived (emergent?) property of the system as a whole. You can directly control some of the factors in the performance equation (e.g. by changing algorithms or optimizing your code), but you cannot 'bolt on' Fitness to your code the way you can 'bolt on' a new piece of Function. If you have specific Fitness requirements, you will need to consciously consider them throughout the development cycle to make sure you don't build a lumbering Elephant of a system that has no hope of becoming Fit enough for your purposes. Even if you don't have specific Fitness requirements, it's worth keeping an eye on Fitness to make sure you are building a potential Cheetah that you can OptimizeLater (rather than a lumbering Elephant).