Test First Is Declarative Programming

Poor-man's declarative programming, that is.

Suppose a team has just done a release. They have a body of "unit tests", a body of acceptance tests, and a body of application code.

We can imagine giving the two bodies of tests to another team and having them write new application code that satisfies them. This new code would be very, very similar to the old, but probably not identical. Give the tests to yet another team and, again, they will write similar but not identical code. I can even imagine using the number of tests passed as a fitness measure for a genetic algorithm and getting the code written automatically (tho' perhaps very slowly).
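For concreteness, here is a toy sketch of what that fitness measure might look like in JUnit-era Java. The TestPassFitness class is invented purely for illustration, and the hard part (generating and mutating candidate implementations) is hand-waved entirely.

 import junit.framework.TestResult;
 import junit.framework.TestSuite;

 // Toy sketch: the fitness of a candidate implementation is simply the
 // number of tests that pass when the suite is run against it.
 public class TestPassFitness {
     public int fitness(TestSuite suiteCompiledAgainstCandidate) {
         TestResult result = new TestResult();
         suiteCompiledAgainstCandidate.run(result);
         return result.runCount() - result.failureCount() - result.errorCount();
     }
 }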

So, in some sense, the unit and functional tests are the thing that comes closest to being uniquely the application. They could potentially be satisfied by any one of many different bodies of application code.

Now, one of the nice things about declarative languages is that they usually come with very dynamic development environments, much more dynamic than Java developers, or even Smalltalkers, are used to: environments where it is very easy and fast to try ideas out. But the lesson to learn when using TestFirst is that smaller increments go quicker, and that you should press the button as often as you can. Again, I can imagine that, in the limit, writing small tests and pressing the button almost all the time gives rise to something approximating the responsiveness of a good declarative environment.

Hence: test-first is a poor-man's declarative programming.

Does anyone else feel that this is a reasonable model? Using it, do you gain any insight into how you write your XP unit tests? -- KeithBraithwaite


Do you mean the tests are the design?

I can't speak to functional tests, having written very few of them. But for unit tests as I know them, I think the model breaks down. I don't write tests that cover enough cases to serve as a design. I'm not sure it's possible for someone to do that. If it is possible for someone to do that, I'm pretty sure it's impossible for me to do that.

My unit tests would be horrible for this model. I don't test enough cases for a genetic algorithm to do a good job (the genetic algorithm would be as likely to produce code that passes only the cases I tested as code that actually works). I don't test enough cases that another programmer could write the code from them. I only test enough to make me feel good, and I do that by shamelessly using my knowledge of how the code works internally to make my job of testing easier.

Here's an example:

Suppose I have an object with an accessor that uses half of that object's state to calculate a value, and the calculation is not trivial. There's a fair chance that I'll create another object just to do that computation, and test it to smithereens, it being easier to test the computation in isolation than as part of a larger object. Then I'll have the accessor use the computation object. Knowing that I tested the computation object to smithereens, I am unlikely to test the accessor that uses it anywhere near as thoroughly. Instead, I'll just do one or two tests to convince myself that the right parts of the object's state are being passed to the computation object. Definitely not black-box testing, and a bit fragile if the design (the code) shifts around too much.
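A minimal sketch of that shape, with every class and method name made up for illustration:

 // The non-trivial computation is pulled out into its own object so it can
 // be tested to smithereens in isolation.
 class DiscountCalculator {
     public double discountFor(double orderTotal, int itemCount) {
         // stand-in for the non-trivial calculation
         double discount = (orderTotal > 100.0) ? 0.10 : 0.0;
         if (itemCount > 10) {
             discount += 0.05;
         }
         return discount;
     }
 }

 class Order {
     // Only some of this state feeds the calculation.
     private final String customer;
     private final double total;
     private final int itemCount;
     private final DiscountCalculator calculator = new DiscountCalculator();

     public Order(String customer, double total, int itemCount) {
         this.customer = customer;
         this.total = total;
         this.itemCount = itemCount;
     }

     // The accessor only delegates, so one or two tests showing that the
     // right parts of the Order's state reach the calculator are enough.
     public double discount() {
         return calculator.discountFor(total, itemCount);
     }
 }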

There's a cost to the way I test: when I refactor objects, I have to be aware of these short-cuts I took when writing the tests.

The benefit is that I am able to write the tests in finite time.

Anyhow, because my tests are so focused on exactly what makes me feel good today and never mind tomorrow's changes, I don't think they'd be sufficient to someone (or something) else wanting to re-implement the same objects. -- WayneConrad

(A few minutes later): I'm thinking more about the "test first" part of the question. I re-read what I wrote and wondered how it is possible that I claim to do test-first but end up testing to internals the way I do. I do test first. Really. But the moment the test I'm writing reveals that the object is hard to test, I change the design (the example I gave above is something that tests drive me towards pretty often). I often don't even get two lines into a test before I abandon the attempt and go change the design to make the test possible. Then I go back and do the test. Then I make the test pass. So the "first" part is, for me, a kinda-sorta big-fat-stinking lie. But only kinda-sorta. I do start the test first. But I finish it somewhere in the middle. -- WayneConrad

Maybe you're trying to do too much design, but your testing self is ever so gently reminding you that you can get away with less....

I'm pretty sure that's what is happening.

Last night as I coded I paid more attention to what happens when I really, really try hard to do test first, and I noticed something interesting. When I only kinda-sorta do test-first, I'll write a test for a method, sometimes the whole test, then write the method, then make it pass. What I tried doing differently was something I learned (but hadn't been practicing) from XP Immersion: I wrote the minimum test, then the minimum code necessary to make it pass, then added the minimum new thing to the test, then the minimum code necessary to make it pass, and so on. For example, instead of writing a test that created an object, did something to it, and made sure its state was correct, I wrote a test that created an object. It failed. Then I wrote a constructor that ignored its parameters. It passed. Then I added an assert against the object's state. It failed. Then I stubbed out the accessors. It passed. Then I copied the test and changed all the parameters. It failed. Then I made the constructor store its parameters and made the accessors actually use them. It passed.

This is nothing remarkable, I think. But what I noticed is that when I test in this fashion, adding a small thing to the test and then stubbing things out to make the tests pass, I naturally have to add more cases to the test to show that the stubbed-out methods are wrong. This leads to better tests. I think tests written in this manner would have a better (but still not great) chance of producing useful code when they got thrown over the wall. -- WayneConrad
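For what it's worth, here is a hypothetical rendering of that sequence in JUnit-style Java. The Account class and both tests are invented for illustration; the intermediate stubs (the constructor that ignored its parameters, the accessor that hard-coded a value) are described in comments, and only the final code is shown.

 import junit.framework.TestCase;

 public class AccountTest extends TestCase {
     // First pass: just construct the object (fails until a constructor
     // exists). Then assert against the state; an accessor stubbed to
     // return a hard-coded 100 is enough to pass this test.
     public void testOpeningBalance() {
         Account account = new Account("Wayne", 100);
         assertEquals(100, account.balance());
     }

     // The copied test with different parameters. The hard-coded stub now
     // fails, forcing the constructor to really store its parameters and
     // the accessor to really use them.
     public void testDifferentOpeningBalance() {
         Account account = new Account("Keith", 250);
         assertEquals(250, account.balance());
     }
 }

 // The class the tests drive out, in its final (non-stubbed) form.
 class Account {
     private final String owner;
     private final int balance;

     public Account(String owner, int balance) {
         this.owner = owner;
         this.balance = balance;
     }

     public int balance() {
         return balance;
     }
 }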

One potential problem (depending on how one thinks) with adding tests like this is losing the flow of coding. When I write code there is a certain flow to producing it, and stopping to write a test breaks that flow like any other interruption: it takes a certain amount of time to get back into what you were doing. -- ChrisRule

Have you found that flow actually is lost when testing first? What I've found is that writing more, smaller tests, more "firstly", does indeed interrupt the flow of typing program-statements-whose-only-purpose-is-to-be-built-into-the-releasable, but it greatly enhances the flow of thinking about and solving the problem. I might even say that it does the first by doing the second. Anything that makes developers think more about the problem and less about the solution than they typically do is good, I submit.


No, the tests are the specification - as in what, not how.

Discussing this issue at the ExtremeTuesdayClub last night, TimMackinnon had an interesting insight. When coding test-first, the first tests you write sketch out the essentials of the behaviour you want to achieve; these might be the declarative, designing part. Eventually the thing settles down, at which point you add the tests for boundary conditions and other more conventional issues; these are purely testing.

I guess the moral of this page is that good unit tests concentrate on intention rather than implementation. -- SteveFreeman
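As an illustration of that distinction (the Stack class and tests below are invented, not taken from the discussion above), the first test sketches the intended behaviour and the second, added once the design has settled, merely guards a boundary condition.

 import junit.framework.TestCase;

 public class StackTest extends TestCase {
     // Written first: sketches the essential behaviour wanted, the
     // declarative, designing part.
     public void testPushedItemComesBackOnPop() {
         Stack stack = new Stack();
         stack.push("element");
         assertEquals("element", stack.pop());
     }

     // Added later, once the design has settled: a boundary condition,
     // which is purely testing.
     public void testPopOnEmptyStackFails() {
         try {
             new Stack().pop();
             fail("expected an empty stack to refuse to pop");
         } catch (IllegalStateException expected) {
             // the behaviour we wanted
         }
     }
 }

 // A minimal Stack (not java.util.Stack) to make the sketch self-contained.
 class Stack {
     private final java.util.LinkedList items = new java.util.LinkedList();

     public void push(Object item) {
         items.addFirst(item);
     }

     public Object pop() {
         if (items.isEmpty()) {
             throw new IllegalStateException("pop on empty stack");
         }
         return items.removeFirst();
     }
 }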


Maybe we need to flesh out the DesignByIntention? page.


WayneConrad said:

I don't write tests that cover enough cases to serve as a design

Sure you do. You write tests that cover as much of the design as you've worked out so far.

Perhaps what I should have said was, "I don't write enough test cases to serve as a blueprint."

The idea of throwing the tests over the wall and having someone or something else recreate the software assumes that it is possible for anything other than the software to adequately specify the software. I have a hunch that it's not possible (or at least not affordable) for most domains. This is what I was getting at when I smart-mouthed, "Do you mean that the tests are the design?" -- WayneConrad


Testing seems like such a simple idea, but I think most of us have been doing it wrong for so long that it's hard to think clearly about it now.

In ScientificMethod, you test hypotheses by running controlled experiments. A hypothesis could be almost any statement about matter or energy. In software, the only hypothesis we ever really test is the one that says the software conforms to its specification. It's not possible to test software without a specification. Sure, you can write tests and run them. But how do you know whether the results were the ones expected? You can automate the execution of the code, but you can't automate the decision of what's right and what's wrong. That's what I mean by specification, by the way, not some heavy document.

By "wrong" testing, what I mean is that we leave the specification until the very last moments, and then we apply ad hoc judgment to decide what's good and what's bad. If someone hands you software and says "please test this", what is your first response? Is it to run the program, or is it to ask "Sure, but what is it supposed to do?" Most of us assume we know what it's supposed to do, or that we will infer it soon enough after playing with the system.

Writing tests is the process of deciding what's valuable, while testing is the process of deciding whether expected value has been delivered. Writing tests is the best way I know of debugging expectations. How many out there write tests without changing their expectations of the system as the tests reveal silly assumptions and oversights?

-- WaldenMathews

This is very interesting and insightful, Walden. But isn't it becoming clear now that writing XP-style unit tests is an activity unrelated to QA testing?

Your word "unrelated" is too strong. Writing tests with the intention of running them is what keeps the car on track, and running them is testing. My point is that they are related in a very interesting way that has eluded us in this industry for the most part. It's interesting to me that by thinking hard about testing, we can discover valid software process. There's one morsel of XP genius exposed. Declaration is specification, and both lie hidden within our mysterious test-centric methods. I hope that what we're finding is a relationship, not a dichotomy.

-- WaldenMathews


Tests are the FingerPointingAtTheMoon. -- ErikMeade


Moved from UnitInUnitTestIsntTheUnitYouAreThinkingOf?

Keith, I'd like to test your thesis that XP unit tests are instances of declarative programming. Could we please have an operational definition of "declarative programming", some discussion of critical criteria that distinguish it from things that are not declarative programming, and then have a look at some tests to see if they fit?

If I understand correctly, a test describes a solution more than it describes a problem, but perhaps it really describes neither. I'm not sure if I'm on track with that observation.

-- WaldenMathews

