Is Xp Unit Testing Best

Is the ExtremeProgramming form of UnitTesting the best way to do your testing, or are there better ways?

(Realize that to "do ExtremeProgramming UnitTests properly," you have to adopt all the practices of ExtremeProgramming.)

Actually, there are very few details in XP UnitTesting, and they're all kind of butt simple and obvious.

So, all the sarcasm about XP fanaticism aside, if you have a better way of testing, use it. XP just says you have to have UnitTests and offers some hints on how to write them.

Recent XP usage is to do TestFirstProgramming, which does seem to be a really good way to do things. Try it; you might like it.
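For the curious, here's a minimal sketch of the test-first rhythm in JUnit-style Java (the Counter class and its methods are hypothetical, purely for illustration):

 import junit.framework.TestCase;

 public class CounterTest extends TestCase {
     // Step 1: write the test before the code; it fails (won't even
     // compile) because Counter doesn't exist yet.
     public void testIncrementRaisesCountByOne() {
         Counter counter = new Counter();
         counter.increment();
         assertEquals(1, counter.count());
     }
 }

Step 2 is to write just enough Counter to make the test pass, then refactor and repeat with the next test.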


Why?

See IfXpIsntWorkingYoureNotDoingXp.

Alternate answer: If you're not using the ExtremeProgramming form of UnitTesting, then you're not doing ExtremeProgramming; you're doing something else. You may succeed or fail, but you won't be doing ExtremeProgramming.


[Discussion from UnitTestsReconsidered. Unsigned comments are assumed to be from SunirShah.]

On this site, anything that isn't XP must be defended, just as on Slashdot anything that isn't Linux (or BSD) must be defended. But more importantly, it could be that XP's practices are the better way, in which case I'd like to adopt them. Or, forgetting XP, there could be a better way that everyone should be aware of.

My actual position is that testing should use a variety of methods, whichever is most efficient for the case at hand. BusinessValueFirst, not what's written in the ThreeRingBinder. On the other hand, consider the WorseIsBetter stance that many companies take, including NetScape to its great initial success. Often bugs are acceptable if you can beat your competitor to market. This, too, is a real-life concern that a professional can't ignore.

That being said, NetScape's BigBallOfMud shot them in the foot later. Ok, so it's complicated. That's life.

You mentioned that one of the reasons for not UnitTesting is that when the classes change, you end up having to change all the tests. In my opinion, however, this is a good thing about UnitTesting. It pushes you to pay more attention to the design of your public interfaces while allowing you to continually refactor their implementation (and the protected interfaces). If you design the public interfaces haphazardly, you pay by having to rewrite your UnitTests. After all, if one's public interfaces are changing that often, then something is very wrong: the provider is either being inconsiderate or not spending enough time on the design. -- RobertDiFalco

The fallacy here is assuming it's possible to get the design right the first time you code it. I'm a big fan of not getting the design right the first time, but of jumping in and getting a feel for the solution space before I stabilize. Testing is deadly here. Moreover, any practice that makes you pay -- that punishes you -- for improving the design is wrong. It has to be. You should always be encouraged to make things better and discouraged from making things worse. Anything else pushes the system in the wrong direction, towards worse.

How can you use the actual production code that is a client of your code if you haven't written your code yet? -- anon.

Obviously the client code is written first. I implement just enough so that the client code compiles, and then I keep writing until the functionality is complete. This is iterative, client-driven design. It's much like TestFirstDesign, except that instead of using UnitTests (which have no real callers), it keeps the design to the minimal necessary set. I've seen how UnitTest-driven design encourages the creation of complete, stand-alone units that could almost be shipped on their own! This is bad. Never implement more than you need, no matter how cool it would be to add a Visitor to a Sequence collection.
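To make that concrete, here's a hedged sketch of the client-driven approach (all names are hypothetical). The production client is written first, and the unit grows only as far as the client demands:

 // The real calling code, written before Formatter exists.
 public class ReportPrinter {
     private final Formatter formatter = new Formatter();

     public void print(String title, double total) {
         System.out.println(formatter.format(title, total));
     }
 }

 // Implemented only far enough for the client above to compile and
 // run; no Visitor, no extra accessors, nothing the client doesn't call.
 public class Formatter {
     public String format(String title, double total) {
         return title + ": " + total;
     }
 }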

By the way, I don't strictly follow this, but it's the general heuristic I follow. I don't strictly follow anything, strictly speaking.

Okay, Sunir, let's take it from the top. You write a string class; how do you decide it's okay to give to the rest of your team to use? How do you know your code does any more than compile? After all, you say you only test things in applications and never in units. -- RobertDiFalco

Oh, that's easy. The last time I wrote a string class was just for the fun of it, so I had no callers. I wrote a UnitTest. It's true; I'm not trying to be smarmy.

Note that this isn't as much of a problem if you believe that MethodsShouldBePublic. The problem is unit code written specifically for the unit tests, which in turn must itself be tested. Encapsulation is evil.


Discussion

I found CodeUnitTestFirst to be different from whatever I was doing before, and different from programming with a lot of UnitTests. In that light, it seems fair to ask whether someone who writes CodeUnitTestFirstIsWeak? has some experience with CodeUnitTestFirst. If he has, and it wasn't favorable, I'd like to hear about that rather than the discussion above, which sounds speculative and supportive of some other approach rather than reflecting directly on the practice in question. Does that seem inappropriate or unfair? -- RonJeffries

I hope I've explained my point of view fairly. [...] Well, I've at least defined what I consider to be UnitTests, so that should help. -- SunirShah

[...which would be the StandardDefinitionOfUnitTest, if I understand this right. -- JeffGrigg]


[A (somewhat biased) summary of SunirShah's XP comments on XpVsStandardDefinitionOfUnitTest -- focusing on the "IfXpIsntWorkingYoureNotDoingXp" issue:]

Problem: XP is marketed as the "One True Solution" -- it's an all-or-nothing affair; you do all of XP, or you're doomed. For example: Refactoring is good, but you don't need UnitTests to do it. (See CoreXpDependencies)

This is a very controversial statement. MartinFowler, for example, thinks you need UnitTests to refactor, and he probably knows something about the subject. A real-life example of a refactoring that was carried out without unit tests, while maintaining confidence that the code behaves the same, and expending less effort to do so than using unit tests would have, would be helpful here.

Don't you remember your university assignments? Who writes unit tests for those things? (Well, I do, so maybe I invalidate my own argument.) Yet most people can get them working in less than a day. Small scopes. This is not necessarily recommended practice if you have a customer willing to sue you, but an indication that doom and despair won't befall you without UnitTests... if you're "smart" enough. -- SunirShah

Not to mention the somewhat obvious observation that a suite of AcceptanceTests gives you much more confidence that the code behaves the same. -- GeoffBache

SunirShah's experience is that UnitTesting is often disastrous -- he's seen people stuck in mud give up and throw out all automated regression testing. ...which makes the XP experience, getting TestInfected with UnitTests, very interesting to Sunir -- "you love what everyone else hates with a passion, for some definition of everyone."

Sunir advocates using a variety of testing methods, complexity reduction, eyeballing, and good faith -- for instance, relying on AcceptanceTests and doing UnitTests when necessary. For him this approach works, while an extreme application of UnitTests has not.

Don't forget: if it's not automated, it's broken. If your entire system doesn't have automated quality control, you're hosed. UnitTests are part of that solution; stress and load testing is another part. Eyeballing only works with aggressive eyes, and some domains can only be verified with eyes, like graphics and GUIs. Unsurprisingly, the GUI always contains the most bugs. Guess which domains I used to work in -- that's why I'm so adamant that you can't rely on UnitTesting alone. -- SunirShah

More recently I've been working in domains where the online debugger is difficult to use and porting is the norm, not the exception. ConfigurationManagement without regression tests is difficult, let alone developing code (no bugs!). Consequently, I've been UnitTesting more actively. However, the test code is expensive to write and is often wrong; of course, the code that most needs the tests has the tests that are most wrong. Nonetheless, for the most normal parts of the system, the tests are cheaper than stepping through the code or learning and predicting all the possible configurations under which it would run (as would happen in code reviews). Then again, as you may have guessed, the unit tests were bad because we don't do PairProgramming -- i.e., no aggressive code reviewing. Also, consistently poor tests are a flagrant sign of a lack of knowledge about the problem on my part, and consequently of poor overall design. But that's why there are BugsInTheTests. -- SunirShah

OK; when are UnitTests necessary? XP's answer is "So often that you shouldn't even waste time thinking about it -- just write them."

XP's answer is really ingenious. Write one test at a time, one implemented functional point at a time. That way you're guaranteed not to screw up. -- SunirShah


I tend to rely on what I will call functional tests (as something different from either UnitTests or AcceptanceTests), as this tends to keep the focus on what the user wanted. You can have all of your UnitTests pass and still not have the program perform the desired functionality. You can do test-first design equally well with functional tests as with unit tests: just view your higher-level classes as the test environment for your lower-level functions. Maintaining a full suite of UnitTests seems too high a price to pay and would seem to inhibit refactoring (due to the effort required to modify all of the affected UnitTests). -- WayneMack
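For example, something like the following JUnit-style sketch (the Invoice and LineItem classes are hypothetical) tests the behaviour the user asked for through the higher-level class, exercising the lower-level code along the way:

 import junit.framework.TestCase;

 public class InvoiceFunctionalTest extends TestCase {
     // One functional test at the Invoice level implicitly exercises
     // the lower-level LineItem, discount, and totalling code.
     public void testDiscountedTotal() {
         Invoice invoice = new Invoice();
         invoice.addLine("widget", 2, 10.00);
         invoice.applyDiscount(0.10);
         assertEquals(18.00, invoice.total(), 0.001);
     }
 }

If Invoice's internals are refactored -- LineItem split, merged, or renamed -- this test doesn't change, which is the point being made above.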


I would tend to agree here - as someone who does all of XP bar the UnitTests and seems to be getting on just fine. This was largely because I was doing automated regression testing with AcceptanceTests before I heard of XP, followed the advice to solve your worst problem first, and found that testing never became a problem. The pragmatist in me said to not solve problems that don't exist.

Here are, very briefly, some thoughts to trigger discussion, based on what people say about why you can't do XP without unit tests. (Please mentally note that "I" refers to "my team and I" throughout - I don't do this alone!)

"AcceptanceTests are too coarse-grained and miss too many bugs". This seems an obvious fallacy - if the AcceptanceTests are missing the bugs, they are wrong. They should catch lots more bugs, because there are less "cracks" for the bugs to hide in. They should also catch bugs the customer actually cares about.

I don't think it's a fallacy. It's very hard to be sure that your AcceptanceTests exercise all the boundary conditions deep inside the code. Do you have an AcceptanceTest case that exposes the rare divide-by-zero error deep in the program? How about the one bug that pops up when the disk fills up, but only when it fills up during one particular write statement?
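The kind of test that catches such a case is necessarily a fine-grained one; a sketch, with a hypothetical Statistics class:

 import junit.framework.TestCase;

 public class StatisticsTest extends TestCase {
     // A boundary condition buried deep in the code: average() divides
     // by the sample count, which can be zero. An end-to-end test may
     // never drive the system into this state.
     public void testAverageOfNoSamplesIsDefined() {
         Statistics stats = new Statistics();
         assertEquals(0.0, stats.average(), 0.0);
     }
 }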

"Although they catch many bugs, it takes too long to run them to do it often enough". There should be no reason why all AcceptanceTests should be slow. By being prepared to sacrifice even a tiny amount of the EndToEndPrinciple for just some of your AcceptanceTests, it should be easy to generate fast AcceptanceTests. This may not always work, there will always be someone whose domain is fiendishly hard to AcceptanceTest quickly. That shouldn't stop the rest of us.

"Although you can do this, it takes too long to get from an AcceptanceTest failure to fixing a bug". Again, this may be true sometimes, but I think it is true much less often than people say. You can always see what area of the system the bug is in, often to quite a detailed level, by looking at where behaviour first diverged. Comparison of detailed diagnostics can then often lead fast to a problem solution. Of course, you need to have detailed diagnostics, but I think this is valuable anyway.

"By leaving out the UnitTests, you miss the benefits of TestFirstDesign". This is true in a strict sense, but it is very possible (as we do) to do EvolutionaryDesign and Refactoring with just AcceptanceTests. In my view, this is where much of the power of XP design paradigm comes from. In any case, most proponents of TestFirstDesign compare it with BigDesignUpFront rather than EvolutionaryDesign with Refactoring - if anyone has any experience/empirical evidence of TestFirstDesign compared to other evolutionary approaches, I'd be very interested to see it.

"They're different, man, AcceptanceTests are to give the Customer confidence, while UnitTests are to give the Developer confidence". As a developer, I get my confidence from the customer's confidence. These should be the same thing. Confidence in my own design (or internal quality) can only come as I evolve it - it's not as if the presence of UnitTests proves that you have a good design.

"AcceptanceTests are too hard to automate". They shouldn't be. We wrote our own framework to automate them that has gradually evolved to be application-independent. I hope to write a paper about it soon.

Hope that gives some food for thought -- GeoffBache


I've also experienced problems with AutomatedRegressionTesting, when applied in isolation: Over the long term, conventional (non-XP) projects tend to oscillate between griping about how hard it is to maintain the tests (so they let them get way out of date and then discard them as no longer useful), and griping that the application is too full of bugs and too costly to test by hand (so they have to rebuild the tests from scratch). Very expensive and painful. But that's the way that lots of conventional (non-XP) projects really work out there in the real world. Blame the PointyHairedBosses, and uninformed/inexperienced project leads. -- JeffGrigg

The issue with UnitTests is that they are too fine-grained. One individual functional test can replace multiple individual unit tests. The FunctionalTest is unaffected by refactoring, while the UnitTests must be added, deleted, or modified to reflect the refactoring. If I assume that I am only making incremental changes, whether refactoring or adding functionality, then the UnitTest does not provide any more problem isolation than the functional test does. The issue is not whether to provide regression testing, but whether regression testing needs to be at the class level. -- WayneMack

A FunctionalTest will cover one particular use of each unit collaborating in the functionality. Diligent application of OnceAndOnlyOnce will result in unit functionality being called upon for several uses. From these two hypotheses, can we predict whether defects might go undetected if we have only FunctionalTests and no UnitTests? On the other hand: if the situation is such that an existing FunctionalTest covers EverythingThatCouldPossiblyBreak?, is there a decision we can make about whether to add one or more UnitTests?

See CompleteCoverageIsExpensive.

Hmmm.... a FunctionalTest that covers EverythingThatCouldPossiblyBreak? is going to have a lot of repeated parts. If we apply OnceAndOnlyOnce to it, it's going to have a lot of reusable parts, which get called repeatedly. Each reusable part will focus on one or (sometimes) a few classes. We could group the parts according to which class they test. We could even call each grouping a "UnitTest suite" for the class it tests....
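In JUnit terms, the groupings would look something like this (class names hypothetical):

 import junit.framework.Test;
 import junit.framework.TestSuite;

 public class AllTests {
     public static Test suite() {
         // Each shared, reusable part of the big FunctionalTest, grouped
         // by the class it focuses on -- indistinguishable from UnitTest
         // suites for those classes.
         TestSuite suite = new TestSuite("Per-class test groupings");
         suite.addTestSuite(InvoiceTest.class);
         suite.addTestSuite(LineItemTest.class);
         return suite;
     }
 }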

a FunctionalTest that covers EverythingThatCouldPossiblyBreak? is going to have a lot of repeated parts

Why?

Diligent application of YouArentGonnaNeedIt will result in class functionality that is used only in support of system functionality. Refactoring creates dozens of support methods used to generate a single system-level function. Why not have a single FunctionalTest replace the dozens of UnitTests? In the end, isn't EverythingThatCouldPossiblyBreak? dependent upon the system-level usage and requirements?


See IsUnitTestingExtreme


CategoryTesting

