Comments On Principle Six

Plan Ahead for Reuse

--KentBeck: This principle ignores the most powerful force in SoftwareDevelopment- risk. As a consultant, 80% of the time my job involves taking out premature abstraction so we can get the damned thing out the door. Looking ahead to reuse adds some extra effort to the development, but it adds loads of risk. Good project management is about risk management first, and effort management second.

I'll suggest a counter-principle- Program for Today, for Tomorrow May Never Come.

One problem I tend to see when people talk about managing risk is that they tend to focus on short-term risk/benefit at the expense of long-term risk/benefit. While I agree that the short term should weigh more heavily, it shouldn't be the exclusive concern. --TylerMac

--DavidHooker: I agree, to some extent, but should we sacrifice good software to "getting it out the door"? I know it happens all of the time - I have to fight this almost every day. To me, one element of risk is the quality level of the software. Good project management balances this risk with the others in an effort to get the "best possible" out the door, while the Architect must try to ensure that the system can continue to grow to its potential after the release.

--DaveSmith: Planning for reuse before planning for use is a tempting, but dangerous goal displacement. Until you've had experience using a thing, pre-judging how you'll reuse it can preclude the type of organic growth between an artifact and its clients that's the basis for experience-based reusability.

Abstraction joined Optimization on my "what not to do prematurely" list after I saw the kinds of knots people tied themselves in trying to "get it right" the first time.


--AlistairCockburn: <taking out premature abstraction so we can get the damned thing out the door> I'll try on for size that what is happening is this: We see some really lovely piece of software, designed by someone who simply saw the abstractions, then programmed them up to get the thing out the door. We look at what this person did and say, "She/he did abstractions! That's what all the rest of us should do!" Or, "She/he just 'got it out the door'! That's what we should all do!" Or, "She/he managed to get reuse! That's what we should all do!"

My guess this morning is that what that person saw cannot be taught late in life. Some beautiful abstractions really save some software - giving simplicity, ease of maintenance, reuse. Of the people I have dealt with over the last few years (same type as Kent), few would EVER come up with those beautiful abstractions, even if you let them reprogram the beast as many times as they wanted to.

So "Get it out the door" is a business necessity, and a fine number of companies go bankrupt because they never do get it out the door. And "finding lovely abstractions" takes time, which most hasty project don't allow time for. And finding lovely abstractions can also take the full forever, in very many programming teams. You should always have a day or two to argue about the abstractions, in any project. But if you don't see them, then you better program up what you have, and wait for inspiration or insight to hit you whenever it might. Because it might not.


I think that good practices early in the process can raise the likelihood that reuse will happen later. For example, a good DomainAnalysis can find the design secrets that need to be hidden (the long-term stable aspects of the architecture) and the variabilities that must be tweaked.

In many domains, the past predicts the future, and the insightful domain practitioner can learn from the past. Of course, the past doesn't predict the future perfectly, and you can't plan that reuse will happen. But you can raise the probabilities.

My opinion is that most domains aren't yet mature enough that I'd make reuse planning one of my top seven priorities. But there are exceptions. We've had several recent successes in (fairly narrow) telecom domains at Lucent. Planning does help, if you understand the business well, and if you have a good vision of how the planning will help. (And much of such planning is business planning, not design per se, which I think is lost on most folks who do software for a living.)

Maybe this just means that PlanningForReuse? is a synonym for NotDoingDumbThings?, or not NotPlanningForReuse?. I guess I'm in the minority here, but I do feel that you can plan for reuse, and I've seen it work in practice to increase the chances of "getting lucky."

-- JimCoplien


What he said.

Having re-read the principle I wrote (about 2 years ago), I think I was trying to say three things. First, like Jim said, planning for reuse of whatever artifacts you may generate can increase the likelihood that they can and will be reused. Second, we must look for opportunities to reuse artifacts that already exist. And third, that reuse possibilities must somehow be adequately communicated.

I don't think I was proposing that "reuse" becomes an overriding goal, or something that takes a lot of time and effort; more that it should be one of the forces considered while making those thousands of decisions that are part of our daily tasks as software architects/designers/developers.

Of course, this force must be balanced with the many other forces involved ("getting it out the door" being one). Sometimes we have the luxury of placing greater weight on PlanningForReuse?, sometimes we don't. Those are some of the tradeoffs present in this business. Maybe there's a pattern here that could help balance these forces...

--DavidHooker

I don't know about a pattern per se, but I certainly know about limiting the scope of planning for reuse. Embedded systems work is self-limiting in this respect anyway, since one piece of hardware may be wildly different from another in both operation and substrate. Of course I look for patterns to replicate: user interface, motion/process control, real-world feedback, etc. Often, though, the client says, "Look. This thing is going to be a widget. Just a widget, only a widget, always a widget, never anything other than a widget. Build me a widget, and I mean like right now."

I have to respect that. Even when the project is to build a (sic) platform for other products there is a limitation on the projected use of the platform. "This new widget is going to be a pump controller. That's all. It might be a syringe pump. It might be a peristaltic pump. It might be a high-volume pump. It might pump blood. It might pump saline. It might pump plasma. The only thing we know for sure is that it's gonna be a pump."

This tells me that I need to plan for reuse of my software system in the context of being a pump, not a general motion controller or a radar direction finder or a navigational aid or a fire control center. The damn thing is a pump. With this limitation in mind I can ask questions about how pumps might be configured, how they might be used, and what kind of flexibility I need to design into my system. But when that's all done I end up with the design for a pump, and that's what I build. -- MartySchrader (crossing arms, sitting back, very self-satisfied)

''This seems to me like a perfect example of DoTheSimplestThingThatCouldPossiblyWork leading to small, well-factored, reusable code. It's direct and to the point, and does its one thing well, and that makes it reusable. -- CalebWakeman ''
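
Not from Marty's actual system, but here is a minimal Python sketch of what that kind of scoping might look like: the dosing logic is the long-term stable part, while the pump mechanism is the variability deliberately left open. All names here (PumpController, PumpMechanism, and so on) are hypothetical.

 from abc import ABC, abstractmethod

 class PumpMechanism(ABC):
     """Design secret: how fluid actually moves. Hidden behind one method."""
     @abstractmethod
     def deliver(self, volume_ml: float) -> None: ...

 class SyringeMechanism(PumpMechanism):
     def deliver(self, volume_ml: float) -> None:
         print(f"syringe: advance plunger for {volume_ml} ml")

 class PeristalticMechanism(PumpMechanism):
     def deliver(self, volume_ml: float) -> None:
         print(f"peristaltic: step rollers for {volume_ml} ml")

 class PumpController:
     """Stable across all pumps: dosing and rate checks. Never a radar."""
     def __init__(self, mechanism: PumpMechanism, max_rate_ml_per_s: float):
         self.mechanism = mechanism
         self.max_rate = max_rate_ml_per_s

     def infuse(self, volume_ml: float, rate_ml_per_s: float) -> None:
         if rate_ml_per_s > self.max_rate:
             raise ValueError("rate exceeds configured limit")
         self.mechanism.deliver(volume_ml)

 # The same controller drives either mechanism - but it is only ever a pump.
 PumpController(SyringeMechanism(), max_rate_ml_per_s=2.0).infuse(5.0, 1.0)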


Would a better Principle 6 of the SevenPrinciplesOfSoftwareDevelopment be:

Code for use; refactor for reuse?

Here's why.

When I program, I usually first write a chunk of code designed to satisfy a test case or set of requirements. Then I do some testing. Next, I refactor that chunk and do some retesting. Then I work on the next chunk, designed to satisfy a different test case or set of requirements. This procedure is iterated until a nice little section of the entire program is finished.

As the design evolves, some chunks become more closely coupled with other chunks. This isn't good, since small changes in highly coupled chunks can introduce unexpected bugs into other chunks. So if I suspect that one chunk already contains a solution to another chunk's problem, I first try quick (sometimes bad) techniques to test that assumption - e.g. CopyAndPasteProgramming. Next, I refactor the already-working chunk into a more general solution, and then retest against the additional requirements (i.e. the general requirements versus the specific case's requirements). Finally, I refactor the chunk that used the 'bad' techniques, and retest.
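
Here is a minimal Python sketch of that sequence (the names are hypothetical, not taken from any project above): prove the assumption with a pasted copy, then fold the duplication into one more general function and retest both cases.

 # Step 1: two chunks; the second is CopyAndPasteProgramming from the first,
 # a quick (bad) way to test whether one chunk solves the other's problem.
 def report_invoice_total(amounts):
     return f"total: {sum(amounts):.2f}"

 def report_refund_total(amounts):  # pasted copy of the chunk above
     return f"total: {sum(amounts):.2f}"

 # Step 2: refactor the already-working chunk into a more general solution...
 def report_total(label, amounts):
     return f"{label}: {sum(amounts):.2f}"

 # ...and re-express the specific cases in terms of it, removing the paste.
 def report_invoice_total_v2(amounts):
     return report_total("total", amounts)

 # Step 3: retest against both the specific and the general requirements.
 assert report_invoice_total([1.0, 2.5]) == report_invoice_total_v2([1.0, 2.5])
 assert report_total("refund", [3.0]) == "refund: 3.00"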

This seems to work well for me. A specific solution evolves into a more general one. Instead of having to work out ahead of time which abstractions to define for a particular piece of code, they emerge from other chunks as the requirements are encoded into the source. This helps me avoid DeadEnds.


I've seen cases of YagNi both helping and hurting projects. Generally, experience and domain knowledge are the only way to strike the best balance and make a good guess. I wouldn't want to say "when in doubt, always use YagNi". It depends on the type of project and the politics of the organization. -t

