The notion that we can design software in short, quick iterations that consist of:
Yes; very interesting.
Maybe we're trying to get at something like "CodeUnitTestFirst is not a testing technique, it's a design technique."
By doing things in this order...
The objective of testing is to find bugs. That's not what CodeUnitTestFirst is trying to do. -- JeffGrigg (...an ignorant 3rd party. ;-)
Why would anyone want to find bugs? Bugs are bad. I've always thought the saying above was flat wrong. The purpose of testing is to be as confident as you want to be, when you stop, that the important bugs are probably out. -- RonJeffries
Let me suggest that the purpose of testing is to show correctness. Even in traditional system testing, we don't stop testing when we find bugs; we continue testing until we do not find bugs.
Bravo! This is a good way of expressing the importance of writing test cases first.
Somehow it makes me think of the case in the philosophy of science where, once upon a time, theories were assumed to be true until someone came along and knocked them down. The better approach is to be explicit from the outset about the circumstances that would falsify the theory.
One of the key benefits of TestFirstDesign in process terms is the fast and concrete feedback between coding and proving. This is what is missing in other processes. The elaborated TestFirstDesign process is:
Fans of WilliamEdwardsDeming will probably recognize TestFirstDesign as the PDSA cycle.
I've also found that trying to write UnitTests on existing non-XP projects, to provide a scaffolding for change, roots out dependencies, instancing problems, etc. I can only believe that if each class is developed first in a test harness where all the dependencies are upfront, things go better. It has been my experience on the smallish things I've done, following the XP example. -- MichaelFeathers
If UnitTests aren't really for testing, what are they for? A few possibilities:
One of the interesting things about TestFirstDesign is that it provides parallel testing, coding, and designing. These are no longer serialized activities. Breaking the problem down into incremental steps makes it easier to figure out what to test, how to code the solution, and how to refactor the solution into the code.
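To make the incremental stepping concrete, here is a minimal sketch in JavaUnit (JUnit) style; the Stack class and its methods are made up for illustration, not taken from anyone's project. The test states the next small piece of behaviour, and just enough code is then written to make it pass before the next stone is placed.

 import junit.framework.TestCase;

 // The tests state the next small pieces of behaviour before the code exists.
 public class StackTest extends TestCase {
     public void testNewStackIsEmpty() {
         Stack stack = new Stack();
         assertTrue(stack.isEmpty());
     }
     public void testPushMakesStackNotEmpty() {
         Stack stack = new Stack();
         stack.push("x");
         assertFalse(stack.isEmpty());
     }
 }

 // Just enough code to make those tests pass; the next test drives the next step.
 class Stack {
     private final java.util.List items = new java.util.ArrayList();
     public boolean isEmpty() { return items.isEmpty(); }
     public void push(Object item) { items.add(item); }
 }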
I'm not up on a strict definition of "black box testing", but I think it could extend to probing something whose interface is not obvious or known. I'm not sure what code Kent is looking at when he writes the "next" test, but if it's not the code for the thing about to be written (which it can't be), then the example sounds more like black box than white box.
But really, isn't the idea of writing the test just a very neat way of getting totally concrete about the interface of the thing you're about to write? That total concreteness is what makes me think "code test first" is about specification. When I write a test before I've coded the unit, I'm focused on the interface, because that's all I know. When I write a unit test after the code is written, I'll also throw in cases to try to break loop boundaries, etc., things that depend on how I implemented it. Maybe that's the difference Keith is referring to.
It's really important to stay aware of the fact that writing a test is not the same as testing. The advice is "code tests first", not "test the unit before writing it". Eh? -- WaldenMathews
No! Test First Design does say test before writing code. If the test passes, don't write any code. And, yes, this actually happens in a significant minority of cases. Sometimes, the big difficult issue that you are not sure how to solve isn't really there.
In the beginning I would glimpse a need and just start coding the method. When I got stuck (what am I trying to do exactly?) I would write the javadoc comment for the method, often spending longer on writing this than on the method. Then I started writing the javadoc first. Now I write the test first. Philosophical issues apart (sorry), I now get better things done faster and I am enjoying my job more.
TestFirstDesign focuses one on the incremental steps in a process that produces code. One stops attempting to leap the river in one go and uses stepping stones instead. This practice requires patience, because each stone requires careful selection and placement, and thus goes against the spirit of the age. But by focusing on the quality of the process a quality end product should be inevitable, unavoidable.
-- MilesDaffin
The curious nature of XP UnitTests seems bound up in the way they are used. Let's look at that for a while.
To me, it seems an XP UnitTest answers different questions at different times:
Nicely put, Falk. When, on XpVsStandardDefinitionOfUnitTest I wrote
I'd add another bullet
I am not sure I really care what the initial author or any of the follow on authors wanted the code to do. I am really only concerned that it continue to meet past criteria, as appropriate, and that I can adapt it to meet current criteria.
So-called "standard" unit tests test a unit of code, usually against some standard or specification. I don't think that's what Xp unit tests do. I think they test your thinking about what the code needs to do next. If I remember correctly, it was WardCunningham that said somewhere "test-first is not a testing technique", or similar. Test-first is a design technique. I don't believe XP style unit tests can be understood separate from the test-first approach. Thinking about this some more, it seems as if whomever said that might have been better to say "test-first is not a code testing technique". It's a thinking testing technique.
When I write a public void testWhatever() method for JavaUnit, I'm expressing a design idea. The code I subsequently write to make that test pass is itself a test case for that design idea. That's the kind of testing Xp unit tests really do. It may well be this inversion that causes the confusion between XP folk and traditional testers. XP unit tests written first are very different beasts from traditional tests written afterwards because they serve very different, almost opposite functions.
And maybe this is why traditional coverage type ideas don't fit very well with XP tests. By focusing on
If Jester can (in the notation above) change the test (that is, the application code) and still have the "test" pass, then the test doesn't properly exercise the "test". That is, the real test doesn't properly test the design idea.
Does this make any sense to anyone else, or should I go have a little lie down? -- KeithBraithwaite
It's my understanding that JesTer, and other MutationTesting tools, change the application code, not the RegressionTesting code. They do "reverse the role" of testing: UnitTests test the application code; MutationTesting tools test the UnitTests. They test the effectiveness of the UnitTests by making an arbitrary change to the application code (which, one assumes would change it from a "correct" to a "has a bug" state), and then run the automated RegressionTesting code, to see if the tests can detect the change. If the tests can't, then maybe the tests aren't "good enough." -- JeffGrigg
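As a rough sketch of the idea (the Discount class and the particular mutation are invented for illustration, not actual Jester output): the tool makes an arbitrary small change to the application code, re-runs the tests, and if nothing fails the change "survives", suggesting the tests did not pin that behaviour down.

 import junit.framework.TestCase;

 // Application code as written. A mutation tool might flip '>' to '>=' here,
 // then re-run the tests to see whether any of them notice the change.
 class Discount {
     public double rateFor(int quantity) {
         if (quantity > 10) { return 0.05; }
         return 0.0;
     }
 }

 // Neither test exercises a quantity of 10 or 11, so the mutated code would still
 // pass; the mutant survives, and the tests are shown to be imprecise.
 public class DiscountTest extends TestCase {
     public void testLargeOrderGetsDiscount() {
         assertEquals(0.05, new Discount().rateFor(100), 0.0001);
     }
     public void testSmallOrderGetsNoDiscount() {
         assertEquals(0.0, new Discount().rateFor(1), 0.0001);
     }
 }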
Tools like Jester indeed do not change the "regression testing" code. But that regression testing code wasn't written as a test, it only became a test over time. I think test-first leads to more than reversing the role of testing, it reverses what is being tested. My closing paragraphs above are not as clear as they could be, but are, I believe, unambiguous.
I think it's wrong to talk about the application code being correct vs. buggy, or the unit tests being not good enough. The idea that is developing in my mind is that what we now call, in XP, a unit test (above, and here, "test") should always be accurate (if the developer that wrote it is competent), but it might not be very precise. An imprecise "test" will be satisfied by many different bodies of application code. Mutation tools show you how imprecise your "tests" are. Here are a couple of questions for folk who've done more computer science than I have: is it possible to write "tests" that only admit of one possible solution? In a sensible timescale for real-world examples?
No, there is always more than one way to skin a given cat. For example, I will take your function and replace it with an equivalent UniversalTuringMachine. There. Another solution. -- IanKjos
It's better than that. MichaelFeathers took the tests for the ObjectMentorBowlingGame and wrote a new implementation under them that isn't formally equivalent. It has a different set of abstractions from the problem domain, rather than just a different implementation of the same ones. -- KeithBraithwaite
No doubt he got it right. But the fact that the two implementations produce the same results on the existing body of unit tests does not prove that they would give the same results on all possible inputs. Saying that test-first design is not "testing" helped remind us that "testing" is still needed, especially in safety-critical applications. -- Chris Morris
The code in the "test" takes on a much greater significance than the scripts that a traditional testing tool might use.
I find it significant that we have SmalltalkUnit, JavaUnit, CppUnit, and all the others, each written in, and running tests written in, a particular language. It feels very natural to write tests for a Java application in Java.
My introduction to RelentlessTesting was years before I'd heard of XP, in a C++ shop. We wrote a lot of tests, and we ran them all many times a day. But we wrote them after the fact, and we wrote them in a home-grown scripting language. That caused a clear separation in our minds. As we wanted to do more and more sophisticated tests, the scripting language became more and more sophisticated, of course. Why didn't we just use C++ in the first place? Because (I believe) we thought of C++ as a programming language, and what we were doing was testing. If I were doing that work today, I'd use CppUnit. And I'd write the tests first. And I wouldn't have that distinction in mind, programming vs. testing.
So, what are the "tests", if we weaken the distinction between writing them and writing the application? They are declarations. What we're doing, when using JavaUnit or its ilk, is declarative programming. We, or rather, the people that pay us, aren't bright enough to fully embrace declarative programming, but deep down inside, we want it. So, one way or another we fake it up. Test first and the odd "tests" that it produces are the latest way we've found.
JavaUnit makes Java into a declarative programming language, which is great. Sadly, it doesn't make our Java environments into declarative programming environments, which is why we still end up doing far too much legwork when it comes to writing the application. -- KeithBraithwaite
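To show what that declarative reading looks like (Money here is a made-up example, not anything from this page): each test method declares a property the application must satisfy and says nothing about how it is achieved; the class underneath is simply one body of code that happens to satisfy the declarations.

 import junit.framework.TestCase;

 // Each test method declares a property the application must satisfy;
 // nothing in the declarations says how Money is implemented.
 public class MoneyTest extends TestCase {
     public void testSimpleAddition() {
         assertEquals(Money.dollars(10), Money.dollars(5).plus(Money.dollars(5)));
     }
     public void testAdditionIsCommutative() {
         assertEquals(Money.dollars(2).plus(Money.dollars(3)),
                      Money.dollars(3).plus(Money.dollars(2)));
     }
 }

 // One possible implementation that satisfies the declarations above.
 class Money {
     private final int amount;
     private Money(int amount) { this.amount = amount; }
     public static Money dollars(int amount) { return new Money(amount); }
     public Money plus(Money other) { return new Money(amount + other.amount); }
     public boolean equals(Object o) {
         return (o instanceof Money) && ((Money) o).amount == amount;
     }
     public int hashCode() { return amount; }
 }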
I think you're onto something; else why the violent trouble communicating? A related problem is white-box vs. black-box tests. Test-first tests aren't black box, because I'm looking at the existing source code when I write the next test, but they aren't white box because I'm not looking at the source code I'm testing (I haven't written it yet). I never thought reversing the "proper" order of activities would cause such trouble... -- KentBeck
It sounds to me like much of the purpose of the UnitTest is to create a definition of "A Functional Unit" first and then to fulfill that function by adding code second.
I have found that writing UnitTests forces me to clearly define for myself what the heck a given class is doing. Objects are supposed to have one clearly defined function. If I have difficulty writing a test, then usually I am looking at a bad smell in my design: "Speculative Generality" or "Large Class" [Refactoring - Fowler]. -- Todd Edman
What can we learn if we compare XP UnitTests with other techniques?
I've often compared UnitTests with DesignByContract. To me the UnitTest is a way of exploring (i.e. designing) and then formalizing the contract for a method. It then, as described above, becomes a driver to test the contract during development and finally a test to ensure the contract remains implemented correctly as the code evolves. -- DougClinton?
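One way to picture that comparison (the Account class is hypothetical): the tests pin down the contract of withdraw() much as a precondition and a postcondition would, first as an exploration while designing the interface, and later as a regression check that the contract is still honoured.

 import junit.framework.TestCase;

 // The tests act as a contract for withdraw():
 // postcondition - the balance is reduced by the amount withdrawn;
 // precondition  - you may not withdraw more than the current balance.
 public class AccountTest extends TestCase {
     public void testWithdrawReducesBalance() {
         Account account = new Account(100);
         account.withdraw(30);
         assertEquals(70, account.getBalance());
     }
     public void testCannotWithdrawMoreThanBalance() {
         Account account = new Account(100);
         try {
             account.withdraw(150);
             fail("expected an IllegalArgumentException");
         } catch (IllegalArgumentException expected) {
             assertEquals(100, account.getBalance()); // balance is unchanged
         }
     }
 }

 class Account {
     private int balance;
     public Account(int balance) { this.balance = balance; }
     public int getBalance() { return balance; }
     public void withdraw(int amount) {
         if (amount > balance) { throw new IllegalArgumentException("insufficient funds"); }
         balance -= amount;
     }
 }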
See also TestFirstModeling
RelentlessTesting and TestFirst seem very well aligned with DbC to me. This idea seems to get a lot of people's hackles up. The reason why might be something to do with the confusion between "formal" and "bureaucratic" mentioned on IndexCards. Like the cards, test-first "tests" are highly formal, without being a drag. -- KeithBraithwaite
In some respects, TestFirstDesign is essentially an iterative kind of FormalSpecification where the specification language (the language used to write test assertions) and the implementation language are the same. -- JasonGorman
BruceEckel also talks about DesignByContract in the context of XP-style unit tests, and further asserts that unit tests in general are a method for extending the compiler. In effect, you are allowing the computer to check the semantics of your code, as well as the syntax.
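A small illustration of that point (the converter is a made-up example): the type checker happily accepts a formula with the constants transposed, but a unit test checks the meaning as well as the syntax.

 import junit.framework.TestCase;

 class TemperatureConverter {
     public static int celsiusToFahrenheit(int c) {
         return c * 9 / 5 + 32;
     }
 }

 // A semantic check the compiler cannot make: a wrong-but-compiling variant,
 // say "c * 5 / 9 + 32", would pass the type checker yet fail this test.
 public class TemperatureConverterTest extends TestCase {
     public void testBoilingAndFreezingPoints() {
         assertEquals(212, TemperatureConverter.celsiusToFahrenheit(100));
         assertEquals(32, TemperatureConverter.celsiusToFahrenheit(0));
     }
 }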
I hope this is not too far off the mark here, but... I have been doing some thinking about the differences between UseCases and UserStories, and some of my thinking may apply to TestFirst.
When you look at an artifact in a process, don't think of it as a static thing; be aware of its active role in the process lifecycle.
In XP, the act of writing a test first records a functional requirement, but doing so also declares the interface to the subject of the test; making declarations before the subject exists is an act of design. Whereas, as has been said already, running a test validates that the system conforms to that requirement. I'm not sure there is anything fundamentally different between functional tests and unit tests.
It also occurred to me that there were some similarities between what we are doing when we use MutationTesting tools (like Jester) and writing UseCases, as we are expanding our requirements by identifying additional scenarios for test.
-- RachelDavies
Keith's issue of definition is an important one, because in my experience (as a manager, not as a programmer), the XP notions of unit testing are the most difficult aspects to describe to non-XP-practising developers. Any good developer tends to get quite offended at the notion that they are not testing effectively, and when you try to describe XP unit testing, they comment, "But of course every good programmer does that!". I had precisely this situation about four days ago, where a conversation about XP broke down into an argument over testing practices (partly because there was also a language barrier involved, but even so...).
So, my question is: is the kind of testing we are discussing here an XP practice? And if so, what is the thing that makes it different from any good programming practice? Perhaps that "thing" is the definition we are looking for.
-- SallyMoss
Unit testing as a kind of testing is nothing new, although many programmers may prefer not to put the effort into writing such tests where unit testing is not valued by their development team and is not part of their development process.
What is different in XP is when such tests are written: before the code which satisfies the test. Also, in XP, other ways of documenting and checking the consistency of design and requirements are absent.
That "thing" is what we are using unit tests for in our development process: when writing a test first we are specifying or designing the interface to the unit under test and later when executing unit tests in a regression test suite we are verifying no code is broken by code changes applied incrementally.
Developers may find that they like to design the interface to their unit of code in other ways than by writing tests which call upon that interface, they may prefer graphical diagrams using UML. Developers may not see the worth in full unit test coverage where there are functional tests that test the same code. Developers may prefer traditional code coverage tools to check that all branches of their code are exercised.
-- RachelDavies
I will concur with the point raised above by SallyMoss. I have run into some hostile responses while trying to enforce test first design. As a supervisor, I am concerned with problems that continue to occur in our software. Sometimes, we have fixed a problem in 3 or 4 separate releases. Other times, we have functions moved between screens and the function no longer works. Finally, we have new features which fail basic final test requirements. I am convinced of the value of test first design, but my development staff feel that the above is the normal software development environment and do not feel a need to change (other than to keep the boss off their backs). -- WayneMack
Here is another point of view... Unit tests are a set of sensors on your code that let you know that you are changing only one aspect at a time. -- MichaelFeathers
I agree with SallyMoss's comment and WayneMack's comment that there is a lot of resistance, and that people often feel, "if it's not broke, don't fix it." The problem is that when I write a new method, I want to know if it's broken NOW, not in a week when someone does the functional testing. And honestly, that sort of burden falls more on the project manager than on the developers, so they really don't have as much motivation to change.
So the question is, "How do you get them to change?" Just as change in a software project can be difficult, introducing change into the complex world of project management can be difficult as well. So if you believe in this approach for your software, maybe you could apply the approach to management? This is what I did on one project: I introduced small incremental changes in the software development process. First, I started to address the features in the way that XP recommends (pushing cards around). Then I began having developer meetings to watch our progress and do estimates. Suddenly, as velocity became important to developers, and the system already had some change momentum behind it, it became much easier to push the unit testing. If your test doesn't turn green, your feature isn't finished. If someone else breaks your test, you don't get the blame.
Putting all the onus on the developers to change is going to create resistance. Putting it all on the management is going to cause problems. You need to make small incremental changes, so that the people involved can begin to see the benefits. Once that trust is built, and you begin working like a team on this new approach, the developers will encourage each other to write tests. Look at your situation and make the changes to the system that are easiest first, that bring the least resistance. Depending on your environment, which pieces those are may change. Some changes may not work at first and have to be withdrawn. Just like any complex system, evaluate (test), adjust (incremental change), test, adjust, test, adjust... drive the management just like you drive the software. -- Todd Edman
IEEE Software will publish a special issue on Test-Driven Development in July 2007. For more information, see IeeeSoftwareSpecialIssueOnTestDrivenDevelopment
Question: why only WriteJustOneTest? I have found switching back and forth between testing and coding requires a different state of mind. It would be easier to write many tests at once, and then make them pass one at a time by writing the code. AnalysisParalysis is avoided by having a set of clearly defined behaviors in mind. Is that the only reason?
Never mind, it seems I am doing TestFirstProgramming.
See Also: TestDrivenProgramming, TestDrivenDevelopment, CodeUnitTestFirst, FrameworkForIntegratedTest, RenamingUnitTests, TestDrivenAnalysisAndDesign, TestDrivenDesignPhaseShift, IeeeSoftwareSpecialIssueOnTestDrivenDevelopment