Never Write A Line Of Code Without A Failing Test

"Never write a line of code without a failing test." --KentBeck

Once you have a failing test, make it pass with the simplest implementation you can think of. Then think of a test that forces you to make the implementation a little smarter. Keep repeating until you can't think of any more tests, taking care not to take steps so big that things get confusing.

Do not make the test pass with the stupidest implementation; try the simplest one. Simple does not mean stupid!
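
A minimal sketch of one turn of that loop, assuming Java and JUnit; the Adder class and the test names are invented here, not taken from this page. The first test fails until Adder exists; "return 4;" would be the stupidest way to pass it, so the second test is what forces the simple, honest implementation.

 // AdderTest.java -- written first, watched fail, then made to pass.
 import org.junit.Test;
 import static org.junit.Assert.assertEquals;

 public class AdderTest {
     @Test
     public void addsTwoAndTwo() {
         assertEquals(4, new Adder().add(2, 2));
     }

     // The follow-up test that forces the implementation to get a little smarter.
     @Test
     public void addsThreeAndFive() {
         assertEquals(8, new Adder().add(3, 5));
     }
 }

 // Adder.java -- the simplest implementation that passes both tests.
 public class Adder {
     public int add(int a, int b) {
         return a + b;
     }
 }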

Every time you see "All tests passed", you may switch from adding and modifying features to refactoring. Now DoSimpleThings means you must do whatever it takes - no matter how complex - to make the program simpler and more elegant.

If you follow this rule, even when you think it's silly, you'll be amazed at how much more thorough your test suites become. (In fact, after you see the number of UnitTests you generate, you'll probably be amazed at just how many different states even simple code can be in.)

The two different definitions of simple here are what ensure every test has been seen failing, seen passing, and run again after refactoring. This is the heart of the DynamicAttractor that converges us on bug-free, well-designed code.


''(Note: This is standard ExtremeProgramming TestFirst stuff. However, I searched for "failing" and "never" on this Wiki, and couldn't find the KentBeck quote above... so I figured I'd add this.)''

The quote is on CodeUnitTestFirst. Use http://www.google.com -> Advanced to search this site.


We have a green bar, so we start refactoring. We realize we need to use ExtractMethod. Do we first create another TestCase for this new method? Or do we not add new tests while refactoring? -- DavidPlumpton

Refactoring should never change the behavior of code, so it shouldn't usually be necessary to add new tests unless you're doing a particularly hairy refactoring; rerunning the old test will suffice. Heck, in some cases, the 'refactoring' you do can be moving the implementation from the test into its new home in the production code (i.e., the 'faking it' method of implementing a test case). -- WilliamUnderwood
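
One hedged reading of that 'faking it' remark, sketched in Java and JUnit with an invented Price example: the expected value was first worked out inside the test while withTax() faked a constant, and the refactoring step moves that calculation into its new home in the production class, with the unchanged green bar as the safety net.

 // PriceTest.java -- the test does not change during the refactoring.
 import org.junit.Test;
 import static org.junit.Assert.assertEquals;

 public class PriceTest {
     @Test
     public void addsTenPercentTax() {
         // The expected value was originally worked out right here in the test:
         // int expected = 100 + 100 / 10;
         assertEquals(110, new Price(100).withTax());
     }
 }

 // Price.java -- before the refactoring, withTax() simply faked "return 110;".
 // Afterwards the real calculation has moved into its new home, and the same
 // green bar shows the behavior is unchanged.
 public class Price {
     private final int amount;

     public Price(int amount) { this.amount = amount; }

     public int withTax() { return amount + amount / 10; }
 }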

Adding a test for the result of ExtractMethod - whether before or after the actual refactoring - should be very easy. If the method is trivial, or many things call it, let it slide.

Big hairy refactorings border on feature changes. So try DeprecationRefactor. That means write a new version from scratch, then incrementally replace the old one with the new.
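
One possible shape for such a DeprecationRefactor, sketched in Java; the Report class and its method names are invented for illustration, not taken from this page.

 // Report.java -- the rewrite lives alongside the original. Callers and their
 // tests migrate to renderV2() a few at a time, and the deprecated forwarder
 // is deleted once nothing depends on it.
 public class Report {
     @Deprecated
     public String render() {
         return renderV2();
     }

     public String renderV2() {
         return "rebuilt from scratch under its own tests";
     }
 }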


Unless you're refactoring

Please note there is a key difference between writing code (initial development) and rewriting code (refactoring).

Writing code is changing the operation of the code and a failing test is necessary to define the desired operation. Refactoring is maintaining the operation of the code while improving the structure. Here it is necessary to maintain non-failing tests. In practical situations, refactoring code may also require refactoring tests, and this needs to be done in a controlled manner to ensure code operations are maintained.

One cannot refactor the code and make ad hoc changes to the tests if one wishes to maintain confidence in the result.

The MoveMethod refactoring is a fairly common and simple example of having to MaintainAsUsual the UnitTests as you refactor. Let's call the method being moved Coordinate(), the class it currently lives in Compass, and the class we want it to live in Chart. It is currently being tested with the method testCoordinate() in the TestCompass class. Additionally, TestChart contains some UnitTests for the Chart class. (Arrangements may be different depending on how you like to organize your tests, but hopefully this is followable.)
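
For the classes named above, a rough sketch of the halfway point, assuming Java; the page supplies the names, but the bodies and return types below are guesses.

 // Chart.java -- Coordinate() has moved here, and TestChart gains a test for it.
 public class Chart {
     public String Coordinate() {
         return "12N 34E";   // placeholder body; the real logic moves over verbatim
     }
 }

 // Compass.java -- temporarily delegates so that TestCompass.testCoordinate()
 // stays green until callers point at Chart; then the delegation (and the
 // now-redundant test) can be removed.
 public class Compass {
     private final Chart chart = new Chart();

     public String Coordinate() {
         return chart.Coordinate();
     }
 }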

A pretty safe order of actions is to:

Sometimes, when we're not listening to our GreatHabits, we get away with:


"Never write a line of code without a failing test" sounds like pure dogma (or religion, eek). This is straw man dogma that can easily be burnt down. Especially when writing small simple tools that you need for something quick, often just writing the program down and getting it done right is better than testing, testing, testing, and more testing. It would be impractical to test each line of code you write down, starting off with the greeting print() statement in your program.

Let's say you are building a console program that greets you with "hello" and then grabs a website and saves it to disk. Do you test the hello greeting, and why? Print("hello") does not need to be tested even though it may be the first line of code you write. You assume Print("hello") will write to the console because it always has done so, much as the sun rises every morning and we assume it will rise tomorrow. Worrying about whether the sun will rise is a waste of human time because, thankfully, we can count on it rising, and we can count on Print("hello") writing to the console unless there is some bizarre console out-of-memory error that is not a practical concern (theoretically it could be, but in practice a console out-of-memory error never happens). Even if a console error did occur, how would you test for it with unit tests? You would run the program and see whether it writes to the console; you wouldn't write a unit test...
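
To make that split concrete, a hedged sketch in Java (the framing is mine, not the commenter's): the greeting is left untested, while the part with real failure modes is given a seam that a test could exercise with a known InputStream and a temporary file.

 // SiteGrabber.java
 public class SiteGrabber {
     public static void main(String[] args) throws Exception {
         System.out.println("hello");   // not worth a unit test
         new SiteGrabber().save(
             new java.net.URL("http://example.com").openStream(),
             java.nio.file.Paths.get("site.html"));
     }

     // This is what a unit test would target -- fetching and saving -- because
     // it is where things can actually go wrong.
     void save(java.io.InputStream in, java.nio.file.Path target)
             throws java.io.IOException {
         java.nio.file.Files.copy(in, target,
                 java.nio.file.StandardCopyOption.REPLACE_EXISTING);
     }
 }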

Furthermore, there is no natural limit to the number of tests; you could keep writing them forever. You could test millions of different things, but there are no guidelines about what to prioritize - it's an art to figure out what should be tested and what shouldn't. Obviously Print("hello") doesn't need to be tested, along with millions of other things in the program. Testing everything is practically impossible (theoretically conceivable, given unbounded CPU power and CPU time, but not in reality).

Concur. Although I have a particularly ardent dislike of testing in general when compared to formal inspection, the point you make about creating a zillion tests holds true. Testing is supposed to encapsulate the wisdom of the test creator in a definitive statement about the outcome of a particular operation. That wisdom should encompass both the positive, expected results and the potential negative, unintended consequences, should it not? Then for which negatives do we test? How many? How deep? Yada, yada, yada.

This is the main drawback to the reliance on testing: that tests can be inconclusive, provide inadequate coverage, and don't really address the need for accuracy and completeness that formal inspections provide. Oh, well. I'm sure the "agile" foamers will jump on that statement with both feet, but that's what Wiki is for, eh?


See: ThingsYouShouldNeverDo

CategoryTesting

