Xp Design Conversation

The following is taken from an e-mail conversation from mid-April through mid-May, 2001.

Summary: Responsibility-driven design is good OOD, and TestsAreResponsibilities. UnitTests, as well as the code, are design artifacts.

RonJeffries' first message is the best "story" I've seen of how "design happens" in XP (or test-driven development).

I've cleaned up the conversation in minor ways; mostly, I've elided folks quoting each other.

PaulChisholm:

The following left me completely nonplussed ("put at a loss as to what to think, say, or do"):

http://www.joelonsoftware.com/news/fog0000000128.html

http://www.joelonsoftware.com/news/fog0000000123.html

I don't know which is worse: "pair programming, which is all anybody remembers about extreme programming," or the suggestion that, when programming in pairs, "you can't stop and concentrate, so it's probably not so useful for longer programming tasks."

(later)

"... it's very effective when you have a long list of small bug fixes you want to sprint through, because you can reach incredible velocity. Typos and small bugs get caught right away."

If you're programming without designing, and the kind of programming you're doing doesn't really require design, you're good.

"... you can't stop and concentrate, so it's probably not so useful for longer programming tasks."

If you're programming without designing, and the kind of programming you're doing *does* require design, you're in trouble.

It's not at all clear exactly when or how XP'ers do design. Think of Joel's piece as a call to explain this better?

P.S.: I mentioned this to a co-worker of mine. He said, "Extreme programming? Oh, yeah, that's where programmers work side by side, right?" (*sigh*)

KentBeck:

Do we need some stories? "We had just implemented this test case, and making it work required that we duplicate code from A to B, so after we got it working we found C which both A and B could delegate to, then we noticed that C could take on some of the other work duplicated in A and B that we hadn't noticed before. We did this before checking in."
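
[Editorial sketch, not part of the original messages: Kent's story in schematic form, with placeholder names. The duplicated logic moves into C, and A and B delegate to it.]

class C
  # The logic that used to be duplicated in A and B now lives here.
  def shared_work(input)
    input.strip.downcase
  end
end

class A
  def work(input)
    C.new.shared_work(input)   # delegate instead of duplicating
  end
end

class B
  def work(input)
    C.new.shared_work(input)   # likewise
  end
end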

Are you saying you don't see when or how, or that we haven't communicated to others?

RalphJohnson:

That story is only part of the design. Kent, I like your phrase "Design is so important that we do it all the time". You have to design a little before you can make estimates of the time it will take to implement a story. You have to design a little before you can write tests. You design a little when you are refactoring.

The problem with stories is that each only tells a little bit about where design occurs. Design occurs everywhere.

PaulChisholm:

In that story [Kent's above], I clearly see coding and refactoring. I don't see anything that's recognizably "design" (which isn't to say it isn't there).

[In response to Ralph quoting Kent:] "We don't know who discovered water, but we're certain it wasn't a fish."

I count about two centuries (in man-years) of software design experience among the [members of the conversation]. If I asked you not to imagine a pink elephant with blue stripes, you might manage it; but if you were developing software, you'd find it easier not to breathe than not to design.

So far as I can tell, Joel's kids managed to do pair programming without doing design. If they were following XP, is there something they did wrong, or forgot to do?

Is there any chance that XP projects do design *despite* XP, rather than because of it?

P.S.: Were the blue stripes horizontal or vertical?

RalphJohnson:

Refactoring is a way to design. XP elevates it, and suppresses other ways. It still uses those other ways, but depends much less on them than other approaches to making software do.

Perhaps you define design in such a way as to rule out refactoring. Then XP has little design, and Kent talks about it as though it had no design. Kent exaggerates to make a point. But the point is that refactoring IS design. I agree with the point.

WardCunningham:

Here is the deal. You get to make the code as good as you can. You will always be able to make the code as good as it can be. You can take as long as you need to do this. It is built into the schedule. As you learn more, you are allowed, even expected, to add that to the code.

But there is a catch. You aren't allowed to write code to solve problems you don't yet have. If you start solving problems that you might have, then you lose the ability to estimate, because it becomes a contest to see who can think up the most hypothetical problems.

This do you/don't you design business is a bunch of bunk. If we get caught up in semantic arguments about what is/isn't design then we will quickly forget what we do know: it takes effort, but not unbounded effort, to make a clear program.

PaulChisholm:

[In response to Ward:] Agreed. I've got a story below that follows that advice but still raises the question.

[I bring up the whole is/isn't design thing] largely because, rightly or wrongly, it's one of the most common criticisms of XP. If it's wrong, I'd love there to be a good crisp rebuttal.

Here's an example. Say we've got the following existing Perl program:

0;

(I chose Perl because an empty program is very compact :-), and we want a Perl program that does Markov chain random text generation (see chapter 3 of Kernighan and Pike's THE PRACTICE OF PROGRAMMING, or the second edition of PROGRAMMING PEARLS, and note the huge variety of data structures used for the different solutions). We've got user stories describing the input and output. (We accept a value for the random number generator's seed, so our runs are reproducible. I've been thinking about this.)
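
[Editorial sketch, for readers without Kernighan and Pike handy: a minimal order-one Markov generator in Ruby (chosen for consistency with Ron's example below; Paul's story uses Perl). The names and details are assumptions, not the conversation's.]

srand(17)                       # the accepted seed; makes runs reproducible

words = "the cat sat on the mat".split
successors = Hash.new { |h, k| h[k] = [] }
words.each_cons(2) { |word, nxt| successors[word] << nxt }

output = [words.first]
20.times do
  nexts = successors[output.last]
  break if nexts.empty?
  output << nexts.sample        # pick a random successor of the last word
end
puts output.join(' ')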

Maybe we decide our first step is to read all the input, one word at a time, into a list. My partner and I write a unit test case, test it, blink at the red light, think about twenty seconds, write about five lines, test it again, and rejoice at the green light. So far, so good. Even Joel's kids could handle it.
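
[Editorial sketch of what that first test and its "about five lines" might look like, again in Ruby rather than Perl, with hypothetical names; a string stands in for the real input stream.]

require 'test/unit'

class InputTest < Test::Unit::TestCase
  def test_reads_words_one_at_a_time
    assert_equal(['the', 'quick', 'brown', 'fox'],
                 read_words('the quick brown fox'))
  end

  # Split the input on whitespace, one word at a time.
  def read_words(text)
    text.split
  end
end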

Now we want to put all those words into ... an object, a data structure, a whatever ... so that a few minutes later, we can also search the data structure for the next word, given an input phrase. (Searching a list of words for a phrase will be complicated, at least in Perl, and we want to do something simple. The right data structure will help.)

This is not a problem we don't have; it's the basis for the next unit test that will go green. We don't worry that my data structure will be perfect (we can always change it), but we need *something* for that next green light.

As with any method, some thinking will happen. Okay, that doesn't differentiate XP. Good sanity check.

As is the case with many methods, something may or may not be sketched on a whiteboard, but nothing will remain as an artifact (except perhaps some comments in the code, once we've written some). Some would argue that means we're not doing design; refutation of this lame criticism is left as an exercise for the reader.

We're not refactoring, because we're adding functionality. We're not writing something we ain't gonna need. We're happy to do the simplest thing that might possibly work. (We might create an empty data structure, and postpone the "real" data structure until the third unit test; but there's still some unit test that requires something. What do we do for that one?) We're following all the XP practices, but we're about to do something before we write code or unit tests.

That's an example of what I call design.

Pretty clearly, nothing in XP has prevented me from doing that. Okay, one concern dealt with.

Here's my remaining concern: In the story above, a pair of XP'ers are about to do something that's not talked about in most discussions of XP. There's some hand waving ("we're always designing") that, in my (I hope never anything but humble) opinion, muddies the water.

Here's my challenge. With this or some similar story, show what XP'ers would do. If it's something that's trivial for the gurus in this discussion, pick a harder problem, because it's not trivial to everyone interested in XP, and there needs to be an example of how to do non-trivial stuff.

And call it whatever you like.

RonJeffries:

I wish you had picked an example from less deep in my bag of tricks, so I could remember better what Markov chains are and what I might want to do with them.

I'm going to start with a test that asks a Chain object for all the successors to a word. (I have this vague idea that I'm interested in the probability of transfer from one word to the next.)

This code is uncompiled and untested. It is a sketch only.

def testSuccessors
  chain = Chain.new(['pet', 'dog', 'child', 'david', 'pet', 'cat'])
  pets = chain.successors('pet')
  assert(pets.include?('dog'))
  assert(pets.include?('cat'))
  assert_equal(2, pets.size)
end

In this test, I "designed" the idea that there is a Chain object, that one way of creating it is with a list of words, that it can answer the successors to a word as some kind of collection, and tested that it got the right number and values for successors to 'pet'. Might have been too big a bite for some people. Some might have just created the Chain object. I'm confident that I can create an object. Some might have initialized it with an empty array and checked successors of 'foo' to be empty. I might also, but I didn't. Must have been feeling confident.
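
[Editorial sketch of the smaller bite Ron mentions, not part of his walkthrough; his walkthrough continues from testSuccessors above.]

def testEmptyChain
  chain = Chain.new([])
  assert_equal(0, chain.successors('foo').size)
end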

Compile testSuccessors. Won't compile, there's no Chain class. Define Chain class:

class Chain
end

Now it compiles and doesn't run. Chain initialize hasn't been written.

class Chain
  def initialize(wordArray)
    @words = wordArray
  end
end

Now it doesn't run because successors isn't understood.

def successors(string)
  successors = []
  @words.each_with_index do |item, index|
    if item == string
      successors << @words[index + 1]
    end
  end
  successors
end

Now it runs.

Then I write a test to look for something that isn't there. That test will get an array out of bounds I think, because I need to create a successor for the last word, or otherwise deal with the index+1 problem. I might put an if statement in the code for index == size. Don't know yet.
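
[Editorial sketch of that next test and one possible guard. Note that in Ruby, @words[index + 1] past the end of the array yields nil rather than an out-of-bounds error, so the symptom would be a nil in the result; the index check below handles the last word either way.]

def testLastWordHasNoSuccessor
  chain = Chain.new(['pet', 'dog'])
  assert_equal(0, chain.successors('dog').size)
end

def successors(string)
  successors = []
  @words.each_with_index do |item, index|
    if item == string && index + 1 < @words.size
      successors << @words[index + 1]
    end
  end
  successors
end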

And so on ...

Does this help at all?

PaulChisholm:

The point of the exercise was a problem that required a conscious (design) effort. (All the recipients of this message are at the "unconscious competent" level when it comes to software development in general, and design in particular. XP needs to be taught to people who aren't at that level.)

This, and the rest of Ron's story, is exactly what I was asking for.

>Does this help at all?

Completely. Thanks!

Maybe I'm missing something, but I think Ron's story shows something about design and XP that I don't remember seeing [...] anywhere: what to do when you have user-visible functionality that requires some deep thinking (call it "design") before coding. (Another way to say the same thing: there are objects in the implementation domain, i.e., the software, that are not in the problem domain, i.e., the real world.) You can ask Ron if you may use his story, or come up with your own.

Thanks to everyone for listening, and to all those who replied (and helped me understand what I was really asking).

(later)

Object-oriented design has always been responsibility driven. "What does this class do? What does this method do?" The answers are usually couched in terms of responsibilities.

If OOD is part of XP, I presume the responsibilities are associated with unit tests. Without XP, finding the responsibilities is about as hard as finding the objects. With XP, the responsibilities are found in the unit tests. Agree, disagree? Somebody want to say this somewhere?

In all the XP (and xUnit) examples I've seen, a new unit test leads to changes in one method, or the addition of one method, or the addition of one class. (It was even the case in Ron's example in this thread.) Is that a rule? Is that a goal, occasionally met but not always? If not, could someone somewhere (Kent, Ron) give a more complicated example?

Took me a while to figure what questions I was really asking. Thanks for everyone's patience in getting here.

RalphJohnson:

This is insightful. OOD is NOT always responsibility driven; there are lots of books that make it data driven or even procedure driven. But the first book on OOD I read that made me say "that is right!" was the one by Wirfs-Brock et al. I agree that OOD should be responsibility driven.

Note that Kent and Ward are from the same Tektronix culture that produced the book on responsibility-driven design. One of Kent's rules about using CRC cards is that the only way you can criticise a design is by producing an example (a test) that breaks it, or else by simplifying it. So there are a lot of similarities between all these things.

RonJeffries:

Well, it seems that a test that required zero changes would be a bit too small, so the single change test is probably the finest grain that's practical.

It also seems that a test could be too large. If our first test was "talk to the program and be unable to tell whether it is a human or a program", it could be a long time before we got it running.

So we're now concerned with the shape of the curve showing effectiveness as a function of size. It is reasonable -- but not obviously correct -- to assume that the curve has maximum effectiveness somewhere in the middle, but probably toward the small end.

If, as in the olden days when I used to walk three miles through six feet of snow to get to the window to put in my card decks, you only get one shot at the computer per day, then tiny tests are probably going to be too small: you'll only make tiny progress per day.

But larger tests increase the chance of error, and the amount of rework is (at a guess) half the original work you do, so if turnaround is fast enough, tiny steps are probably just fine.

In my experience, in a really dynamic system like Smalltalk or Lisp or Python or Ruby, little tiny steps like those in my example are just fine. In a system where running a test took two minutes instead of two seconds, then I'd do more work before submitting my test. And I'd make more mistakes, the more work I did.

So, having played with the question and the idea for a while, I reach a conclusion in another dimension: using computers and tools with the fastest possible turnaround on a test is incredibly important, because slow test turnaround becomes a dominant force in how big a step I'll take, while the dominant force SHOULD be how big a step I can take while keeping a very low chance of error.

For me, I really do find that working at the scale I showed in my example is just right. If I go for ten or twenty minutes without running a test, it is almost certain that I'll have made a mistake in that time period, and almost certain that I'll have to enter the debugger rather than just "know" what I did wrong. That slows me down a lot.

KentBeck:

Re: tests are responsibilities. Very nicely put. If this is true, then when we do the kind of design that moves responsibilities between objects, we should see changes in the tests, and we do. When we do the kind of design that reorganizes implementation without changing responsibilities, we should not see changes in the tests, and we don't.
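
[Editorial sketch using the Chain example above, my construction rather than Kent's: extracting a private helper reorganizes the implementation without moving a responsibility, so testSuccessors doesn't change. Moving successors onto some new object would move a responsibility, and the test would have to follow it.]

class Chain
  def successors(string)
    matching_indices(string).map { |i| @words[i + 1] }
  end

  private

  # Indices whose word matches and which still have a word after them.
  def matching_indices(string)
    (0...@words.size - 1).select { |i| @words[i] == string }
  end
end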

Re: each new test causes one change. As Ron so eloquently put it, you can choose to write itsy-bitsy-teeny-weeny tests like this, or you can write bigger tests. There is a crossover point based on cycle time and probability of failure.

The example that comes to mind where a small test causes a big change is when you already have a test case for input of size one, and input of size two encourages you to use Composite. One little test more causes you to create a class or two. If you also start using double dispatch, you'll create three more methods as well (I can work a real example if you care).
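
[Since Kent only offers to work the example, here is an editorial guess at its shape, with hypothetical names and without the double dispatch: the size-one test is satisfied by a single object; the size-two test pushes a Composite into existence.]

class Expense
  def initialize(amount)
    @amount = amount
  end

  def total
    @amount
  end
end

# The Composite the second test forces you to create.
class ExpenseGroup
  def initialize(expenses)
    @expenses = expenses
  end

  def total
    @expenses.inject(0) { |sum, e| sum + e.total }
  end
end

def testTotalOfOne
  assert_equal(5, Expense.new(5).total)
end

def testTotalOfTwo
  group = ExpenseGroup.new([Expense.new(5), Expense.new(7)])
  assert_equal(12, group.total)
end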



