You can have it right, or you can have it now, but you can't have it right now. -- Will Rogers??
One way (argued in MakeItFastBreaksMakeItRight) to get fast and right is to redefine right to include efficiency. There are other ways to negotiate between abstraction and efficiency.
For instance, you can capture the efficiency requirements in your abstractions. My favorite example of this is the StandardTemplateLibrary. It captures efficiency requirements in its hierarchy of iterators. For example, there are iterators that move only one way, bidirectional iterators, and random access iterators. Algorithms like search need only the one-way iterators, while sort requires random access iterators. Using the C++ type system, the library constrains the general sort algorithm to be used only where containers support constant time random access.
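To make that concrete, here is a minimal C++ sketch (my addition, not part of the original remarks): std::find is satisfied with any input iterator, so it compiles for both std::vector and std::list, while std::sort demands random access iterators and is therefore rejected at compile time for std::list. The efficiency requirement is enforced by the type system rather than documented in prose.

  #include <algorithm>
  #include <list>
  #include <vector>

  int main() {
      std::vector<int> v = {3, 1, 2};
      std::list<int>   l = {3, 1, 2};

      // find needs only an input iterator, so it compiles for both containers
      auto inVector = std::find(v.begin(), v.end(), 2);
      auto inList   = std::find(l.begin(), l.end(), 2);
      (void)inVector; (void)inList;

      // sort requires random access iterators: fine for vector...
      std::sort(v.begin(), v.end());

      // ...but a compile-time error for list, whose iterators are only bidirectional:
      // std::sort(l.begin(), l.end());   // would not compile
      l.sort();                           // list supplies its own member sort instead
      return 0;
  }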
Another method encapsulates the leakage of knowledge needed for efficiency. For example, if one object needs fast access to another's data structure, set up a mediator between them that localizes the exposure of implementation details. Then you know that if you change implementations, only the mediator will require alterations. See [1] for a pattern system about such tradeoffs.
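As a hedged illustration of such a mediator (the Ledger, LedgerAccessor, and Report names are hypothetical, not from any particular library): only the accessor is allowed to see the ledger's internal representation, so a change of representation touches one class.

  #include <vector>

  class Ledger {
      std::vector<double> entries_;    // implementation detail
      friend class LedgerAccessor;     // only the mediator may look inside
  public:
      void add(double amount) { entries_.push_back(amount); }
  };

  // The mediator localizes the exposure: if Ledger changes its representation,
  // only LedgerAccessor needs to change.
  class LedgerAccessor {
      const Ledger& ledger_;
  public:
      explicit LedgerAccessor(const Ledger& l) : ledger_(l) {}
      double total() const {
          double sum = 0;
          for (double e : ledger_.entries_) sum += e;   // direct, fast access
          return sum;
      }
  };

  class Report {
  public:
      double grandTotal(const Ledger& l) const {
          return LedgerAccessor(l).total();   // Report never touches entries_ itself
      }
  };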
The most common way employs the compiler to unabstract abstractions, removing their cost at compile time. You can give the user some control over this through pragmas and inlining hints. In some cases you can also "hand compile" the abstraction away yourself.
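A sketch under GCC-specific assumptions (the optimize pragma and the inline hint are real GCC/C++ features, but whether they pay off in a given program is a measurement question, not a guarantee):

  // An abstraction the compiler can usually "unabstract".
  #pragma GCC optimize ("O2")      // ask GCC to optimize this translation unit harder

  inline int square(int x) { return x * x; }    // abstraction: a named operation

  int sumOfSquares(const int* xs, int n) {
      int sum = 0;
      for (int i = 0; i < n; ++i)
          sum += square(xs[i]);    // compiler typically inlines this to xs[i] * xs[i]
      return sum;
  }

  // "Hand compiling" the same abstraction away: the programmer does the inlining,
  // trading clarity for control over the generated code.
  int sumOfSquaresByHand(const int* xs, int n) {
      int sum = 0;
      for (int i = 0; i < n; ++i)
          sum += xs[i] * xs[i];
      return sum;
  }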
-- AamodSane
I must respectfully disagree with putting efficiency requirements into most specifications. Developers have a very poor record of knowing what will or will not be efficient. Users even less so. I wouldn't be concerned about requirements like "average screen response = 0.5 seconds", but I would be very concerned about a requirement like "this search must be O(logN) and take less than 1 ms".
The ExtremeWay is to get the system correct, and correctly factored, then measure it and optimize what needs it, leaving the rest alone. -- RonJeffries
The remarks above were for the case where it must be both right and fast. Most of the time, you go for right before fast. Attempting to guess where you need optimization is probably inching towards purgatory, and the next step is hell: premature optimization is the root of all evil (DonaldKnuth).
That said, see MakeItFastBreaksMakeItRight. -- AamodSane
It is interesting to consider why we want to get it right and then optimize. Every system releases pressure in the easiest way available. The U.S. political system, for instance, releases pressure through massive debt because it is a far easier course than confronting constituents. In software systems, pressure is released through logical complexity. Software tolerates complexity far more easily than it tolerates physical resource limitation (memory/processor time).
The ease with which complexity is tolerated up front in software creates a different risk dynamic than in fields with more physical constraints. Getting it right becomes harder. People ask for things in software that they would not dare ask of a physical system.
Getting it right and then optimizing works for most systems. But I wonder whether it is advisable in embedded and real-time systems. A logically correct system that blows all physical constraints is only useful if there were doubts that the system was logically feasible in the first place. And then, once feasibility is proven, if major architectural modifications must be made to meet the physical specs, you are really in a hole.
There is no doubt that we want to pretend that physical limitations are immaterial (excuse the pun) for software, but that is not always the case. To me, it seems far better to acknowledge the risk and deal with it up front. -- MichaelFeathers
Michael's right that software tolerates complexity far more easily than it tolerates physical limitations. But what about the people who have to write and maintain it? Physical limitations are the most obvious boundaries (which is why I originally referred to them in MakeItRightBreaksMakeItFast? and ClassManagementVersusObjectManagement) but complex software is a people issue too.
Given some problem of a given complexity, you can't produce a solution that reduces the complexity - only one that partitions it in different ways. Given a complex problem, we could split the solution up into three classes, but each class would be complex. We could also split it up into 30 classes, each of which is very simple, but we now have a complex arrangement of classes. I prefer to go for a balance - I don't always favour CompositionInsteadOfInheritance if going that way produces many new classes in order to make a few lines of code simpler. Maybe it's just me, but I would rather have 10 classes that have some complexity in the code management area (e.g. they use isNil rather than a separate StateObject, or embed algorithms rather than have a separate Strategy object) than 3 or 30.
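A minimal sketch of the tradeoff Paul describes, translated into C++ (his examples read as Smalltalk idioms such as isNil and StateObject); all class names here are hypothetical. The first version keeps the nil check and the behaviour in one class; the second introduces a separate state object so the conditional disappears, at the cost of three more classes.

  #include <string>
  #include <utility>

  // One-class option: a null check (the C++ analogue of isNil) and the
  // behaviour embedded directly in the class.
  class Account {
      const std::string* owner_ = nullptr;   // may be unset
  public:
      void setOwner(const std::string* o) { owner_ = o; }
      std::string ownerName() const {
          return owner_ ? *owner_ : "<unowned>";   // inline nil-handling
      }
  };

  // More-classes option: a separate state object stands in for the missing
  // owner, so the account class never tests for nil at all.
  struct Owner {
      virtual ~Owner() = default;
      virtual std::string name() const = 0;
  };
  struct NamedOwner : Owner {
      std::string n;
      explicit NamedOwner(std::string s) : n(std::move(s)) {}
      std::string name() const override { return n; }
  };
  struct NoOwner : Owner {
      std::string name() const override { return "<unowned>"; }
  };

  class AccountWithState {
      const Owner* owner_;
  public:
      explicit AccountWithState(const Owner& o) : owner_(&o) {}
      std::string ownerName() const { return owner_->name(); }   // no conditional
  };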
I'm not much (or, indeed, anything) of a psychologist, but I think it has something to do with the brain's ability to hold 7 +/- 2 chunks of info. If you need to understand 5 separate classes in order to figure out one conceptual entity then you're close to capacity ... and it's really important to understand how this concept relates to the others around it. If you need to understand 5 separate concepts in order to understand 1 class, then you have the same problem. One concept per class is probably restrictive (because I do believe in using StateObjects? where I can and where it is appropriate) but I don't think we want to move too far away from this in order to facilitate future (perhaps unrequired) flexibility.
-- PaulDyson