Rule: NeverWriteaLineOfCodeWithoutaFailingTest.
The interface & output of an object are more important than its implementation, so write the former first, in terms of code trying to exercise it.
See CodeUnitTestFirstExample, TestFirstDesign, TestDrivenProgramming, TestDrivenDevelopment, WriteJustOneTest, OneUnitTestAtaTime
Here is a really good way to develop new functionality:
When you get this right, development turns into a very pleasant cycle of testing, seeing a simple thing to fix, fixing it, testing, getting positive feedback all the way.
Guaranteed flow. And you go so fast!
Try it, you'll like it.
-- RonJeffries
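To make the cycle concrete, here is a minimal sketch in Java with JUnit (the Money/MoneyTest names are hypothetical, echoing the classic JUnit example). Write the test first; it fails (indeed, it won't even compile), then write just enough code to make it pass:

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  // Step 1: write the test. It won't even compile yet -- Money doesn't exist.
  public class MoneyTest {
      @Test
      public void addsAmountsInTheSameCurrency() {
          Money five = new Money(5, "USD");
          Money seven = five.add(new Money(2, "USD"));
          assertEquals(7, seven.amount());
      }
  }

  // Step 2: write just enough code to make the test pass, then run it again.
  class Money {
      private final int amount;
      private final String currency;

      Money(int amount, String currency) {
          this.amount = amount;
          this.currency = currency;
      }

      Money add(Money other) {
          return new Money(this.amount + other.amount, currency);
      }

      int amount() {
          return amount;
      }
  }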
Doesn't step 1 imply AcceptanceTestsFirst? -- JohnWhitlock?
The way I work, yes. I typically write a CucumberFramework story and watch that fail before writing a UnitTest for my implementation. --MarnenLaibowKoser, 14 April 2011
When you run a test, if it works, check the test and all your changes into your version control system (with a meaningful comment). Then you're done. -- DaveWhipp
No. These tests and increments are too small. Do the checkin after you've done a few hours of work (or less if you have a very well automated checkin process that's quick and easy). -- unsigned
Why? Small checkins make it easier to just pick the changes you actually wanted when merging. --MarnenLaibowKoser, 14 April 2011
I do as Dave does. Waiting a few hours to check in just increases the chance of a conflict, or that I will lose a few hours of work rather than ten minutes' worth. So I like to check in all the time, and would suggest you do the same, unless your checkin process is slow and hard (in which case my advice is that you had better make it quick and easy). -- ErikMeade
I am not sure of all the advantages of checking in every change, but one is that you can count lines-of-code (LOC) changes (added, modified, deleted, etc.). This counting could be automated by the version control system. A change log, like ChangeLog, can help in this case. -- AlexVicente?
Do you ever write all the UnitTests for a class before starting to implement the class? Do you ever have a MasterProgrammer write the UnitTests and a GruntProgrammer change the system to satisfy them? Do you ever use UnitTests as specifications?
I'm expecting the answer "No". The very notion of a GruntProgrammer is probably antithetical to XP, that division of labour being superseded by PairProgramming. I thought I'd ask, to check, and to get the answers explicit and on the record. I'm currently coming at this from the direction of DesignByContract and assertions. -- DaveHarris
We generally don't do those things. Since we practice YouArentGonnaNeedIt, we couldn't in general write all the UnitTests for a class. We do sometimes write a batch of UnitTests for some piece of functionality we're working on, especially if they are all variations on a theme. For example: check SUB with one day less than the max, at the max, one day more.
We might well invite a MasterProgrammer to sit with us for some hard bit. In general we strive to have everyone cross-trained on everything. Even so there are areas of model or infrastructure expertise. Personally I believe that there are some short-term advantages in specialization, and I'm sorely tempted to use specialization in times of stress. But sorting out what is going wrong with our process and getting it back on track seems to work better.
But some people seem better at design, at seeing consequences faster, while others are better at painstaking programming. In theory, specialization would allow each to do his best. Does ChryslerComprehensiveCompensation take it as an axiom that PairProgramming is always better? -- AamodSane
No, they took it as a hypothesis, tested it, and found that the more they paired, the faster they went and the fewer problems they caused. I showed them how to do it. Taking it to such ridiculous extremes was their idea. And it works ... -- KentBeck
CodeUnitTestFirst. Why don't people like testing? Well, the traditional way of testing is tough to take. You write what seems to be perfectly sensible code, then you write a test and the test tells you that you failed. No one wants to hear that.
Let's turn it around. Write the test first; run it. Of course it fails: you haven't written the code under test yet. Start writing code; keep testing. Soon, the test will tell you that you've succeeded!
Nice, Michael! That really captures what TestDrivenProgramming feels like. It really is a blast to do! -- KielHodges
Michael, let me get this straight. You write a test you know will fail, since you haven't written any code for it to pass, and then you run it anyway. Here's a hint: you don't need to run that test. You already know the outcome, so running it proves nothing except that you prefer wasting time. You could turn it into a religious ritual, but it would serve no purpose other than a religious one. Software programming has certainly turned into quackery. Sad.
In a static language, the compiler/type-checker does some of the job of UnitTests. Thus instead of running the tests, you attempt a compile (or a link). Now the compile fails because the method doesn't exist yet. Never mind - continue as for step 3. -- DaveHarris
That was very much my experience with ModulaThree (http://www.research.digital.com/SRC/modula-3/html/home.html) which had the nicest type system I've come across. You sometimes have to work a bit to get the types in your program lined up, but once you do the code usually just works. -- SteveFreeman
Another strong advantage of writing UnitTests as early as appropriate is that you can use UnitTestAsDocumentation - if they're written properly, they provide a great example for people who have to use or work on the code later. -- MartinPool
It struck me last night while painting a wall: CodeUnitTestFirst is the programming equivalent of using masking tape and drop cloths. On the surface it doesn't look like you're making "progress", but the safety it gives you lets you work faster and safer. -- RobCrawford
Umm... isn't that a negative example? In XP you start by painting, drip some, and then fix the problem. If you start with drop cloths, finish painting, and then find them all clean, you have violated YAGNI. Your example seems to justify PlanningAhead?, which is the right thing to do when you're painting a house, because your requirements are fixed. -- NissimHadar
Another example is a technique used for drawing life-like pictures of just about anything. The idea is to focus your awareness on the Negative Spaces of your subject. By focusing on the negative, you trick your mind into not making assumptions about your subject, and your mind becomes aware of the true shape of your subject. See DrawingOnTheRightSideOfTheBrain by Betty Edwards.
For a beginning artist, the results of this technique are shockingly effective. And this analogy of life-like drawing is so fitting to ExtremeProgramming -- I think a comparison of the before-and-after drawings of someone using this technique would make a persuasive book cover for ExtremeProgramming. For a better and fuller description of this technique, read any book by Betty Edwards on the subject of drawing. -- StephanBranczyk
How can you say that a test is the same as negative space? A test represents a positive intent.
"Negative space" in the context of drawing doesn't map to "failure conditions" in programming. If you have a nameable object (like a book) leaning against another nameable object (like a wall), negative space means instead of drawing the book (positive space) and the wall (positive space), you draw the area between them (negative space) first. The result is you get a boundary for your book and boundary for your wall before you start drawing them. The reason is that when we have a name for something, it's usually attached to a generic model in our heads, but that generic model isn't a very good representation, because it's so vague. Just a sketch, really, even though we give it a lot more credit. By starting with an unnamed (and hence, unstored) shape, we have to focus on each angle, edge and proportion for this particular instance, because we don't "know" what it's "supposed" to look like. When you then draw the positive shapes, you can't slip into automatic mode, because they won't fit the edges you've already defined. It really is a ground-breaking technique.
In programming, the UseCases you start with (and implement as UnitTests) are the negative space, in the sense that they're not your code, they're the stuff your code needs to fit against. Your code is the positive space, but if it doesn't match every nook and cranny of the UseCases, the code's not right. When you start with the code, you might be tempted to think, "I know what a file reader class should do, I'll just slap one together!" and end up missing some bits that are required by this particular project. Working from the UseCase (implemented as UnitTests) inward, you define the edges and ensure that the code you eventually write matches this project's requirements precisely.
Another advantage of coding tests first is that tests exert a beneficial influence on the design. The tests themselves become additional users of your class interface, which ameliorates the problem of "you can't reuse code before you've at least used the code once" - because you start, right off the bat, with at least one extra user of each class.
Concrete example: you need to test code which (usually) takes input from an external device. But the device is not present during testing. So, in order to code UnitTests, you are forced to create an interface for your code to talk to: one implemented for the real device, and another implemented for the "fake" testing device. Having done that, you're in a much better position to further implement the interface for multiple other devices.
-- PieterNagel
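A sketch of the above, assuming a hypothetical temperature sensor: the test forces an interface into existence, with a "fake" implementation for testing and, later, another implementation wrapping the real hardware:

  import org.junit.Test;
  import static org.junit.Assert.assertTrue;

  // The interface the test forced into existence.
  interface TemperatureSensor {
      double readCelsius();
  }

  // The "fake" device used by the unit test; a second implementation
  // would wrap the real hardware.
  class FakeSensor implements TemperatureSensor {
      private final double value;
      FakeSensor(double value) { this.value = value; }
      public double readCelsius() { return value; }
  }

  // The code under test talks only to the interface.
  class FrostAlarm {
      private final TemperatureSensor sensor;
      FrostAlarm(TemperatureSensor sensor) { this.sensor = sensor; }
      boolean isFrostWarning() { return sensor.readCelsius() <= 0.0; }
  }

  public class FrostAlarmTest {
      @Test
      public void warnsAtOrBelowFreezing() {
          assertTrue(new FrostAlarm(new FakeSensor(-3.5)).isFrostWarning());
      }
  }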
How do you adapt UnitTesting in the more realistic scenario, where you are adding functionality to an existing product? Do you unit-test old code retroactively, or just new units? The advantage of the first approach is that you may catch errors that were previously overlooked. The advantage of the latter is that you save time and focus on where errors are likely to occur. -- CayteLindner
Before you change a method, you have to write a test for what you think it does. Either it runs, and then you're sure that you aren't breaking anything, or it doesn't work, and you learn something about what the code does.
(Unless the design is a mess and the function has side effects you don't detect and don't capture in the unit test, in which case you have a false sense of confidence in your unit test.)
Writing "retroactive" tests on demand reduces the amount of up-front investment (you won't ever get permission to spend two months writing tests). It also makes sure that the most rapidly changing code is tested soonest.
New code gets tested as above.
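For instance (a hypothetical sketch): before changing a legacy formatting method, pin down what you believe it currently does. If the test passes, you are safe; if it fails, you have learned what the code really does:

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class SlugifyCharacterizationTest {
      // Pin down the behaviour you believe the legacy method has *before*
      // touching it. A failure here teaches you what the code really does.
      @Test
      public void lowercasesAndReplacesSpaces() {
          assertEquals("hello-world", LegacyFormatter.slugify("Hello World"));
      }

      @Test
      public void leavesHyphensAlone() {
          assertEquals("a-b", LegacyFormatter.slugify("a-b"));
      }
  }

  // Stand-in for the existing legacy code (hypothetical).
  class LegacyFormatter {
      static String slugify(String s) {
          return s.trim().toLowerCase().replace(' ', '-');
      }
  }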
What is a good way to organize UnitTests for a non-trivial project? Does one usually write a whole slew of tiny test programs, or are they usually amalgamated into a "test app"? -- MaciejKalisiak
Lots of small UnitTests are generally added to larger TestSuites.
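In JUnit, for example, the amalgamation can be a suite class that aggregates the small tests (class names here are hypothetical; they echo the sketches above):

  import org.junit.runner.RunWith;
  import org.junit.runners.Suite;

  // One project-wide (or package-wide) suite built from many tiny test classes.
  @RunWith(Suite.class)
  @Suite.SuiteClasses({
      MoneyTest.class,
      FrostAlarmTest.class,
      SlugifyCharacterizationTest.class
  })
  public class AllTests {
  }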
Questions:
I just did a toy case of this - a Perl CGI that produces a 30-day, free-trial JeraWorks license certificate. While developing, I ran the CGI in offline mode, and just piped the output to the input of a shell-based Java program. Programming consisted of modifying the Java code to use the new feature (like including Product Name in the license cookie), and then modifying the Perl code to generate the new feature that the Java code was testing for. Development was fast and fluid. Deployment was very smooth for a CGI. The only semi-major defect I found was that I hadn't uncommented the line "use CGI qw(carp);", which was disabled for offline testing. -- JohnBrewer
What about MicroSoft ActiveServerPages (ASP), which have embedded script (code) that generates the HTML?
Oh, ick. ASP is all about embedding the code in the web page. You'll want to push everything even slightly complicated into external objects called from your ASP script code. Then you can UnitTest those external objects. -- JohnBrewer
Not anymore in ASP+
I agree. I program mainly in Perl and put all of the real stuff into objects. Lately I've been writing UnitTests for everything (after I code the functionality; I'll try writing them beforehand) and it makes everything much nicer. You do have to test the HTML output (for browser compatibility, etc.) but at least you know that the core of the app already works. -- DaveTauzell
Has anyone used XP with ProceduralLanguages? How did you adapt it? -- JimHart
People have been doing so much TestFirst that there has been a growing interest in TestDrivenProgramming, and claims that there may be something to TestFirstDesign. -- ErikMeade
Is writing only OneUnitTestAtaTime a requirement?
The object is to write, test, and resolve one line of a unit test at a time. This minimizes the number of changes before you check for mistakes, thus minimizing the number of places the mistake could be.
When you CodeUnitTestFirst (manually), you could let a machine (automatically) implement classes against the test until the test is satisfied. This way extreme testers can move the complexity of developing some stuff to developing some tests for the stuff. I think the complexity is the same. That is why I DontWriteUnitTests?. -- AlbrechtScheidig
How do you know your program works?
I do not know (but it sells). How do you know your test works? -- AlbrechtScheidig
We know our tests work because they fail before we write the code, and pass afterwards. At this point, we know that our code works. -- LaurentBossavit
You could also write a wrong test that fails before you write the code and passes afterwards. At this point, you have a wrong test and matching wrong code. How do you avoid errors when you write test code? Do you write test code for the test earlier? And what about that meta-test code? Etc.
See BugsInTheTests. In a nutshell: it could happen... but then, you could also get run over by a car tomorrow. In both cases, you don't worry about it all the time - you just look both ways before you cross the street, and you avoid writing complex test code.
Another thing to consider is that while you could, in principle, automate the generation of code that passes the test, that would be unlikely to yield valuable results. Rather, coding test-first lets you focus, all the time, on two sides of a single question: what do I need the code to do (the test), and how does the code do it. Both aspects involve conceptual reasoning that could not be automated. -- lb
The test verifies the code and the code verifies the test. You have written two different implementations of the same problem, and the likelihood of having made the same mistake in both is very low. When a test fails, you really do not know whether the test or the code is at fault; however, since the test is usually much simpler than the code, the fault is most probably in the code. -- anon
Yes, it is not very likely that you mistype in both the test and the code in a way that the test passes, but: what if you are conceptually wrong? You write a wrong test and, very likely, matching wrong code. What if you are conceptually right? You write the right test and, very likely, matching right code. So why not automate the generation of an implementation of the interface expressed by the test? Because if I do it manually, I have twice the time to think about the concept? -- as
I don't think you can "automate" the writing of code that passes tests - it's an AI-complete problem. Yes, having to think about "what I want the code to do" and then having to think once more about "how does the code do it" is what protects you against conceptual errors. But you have to do both anyway; no program will do either kind of thinking for you. -- lb
The only way to be "conceptually" wrong is miscommunication with your users concerning the requirements. The only real way to resolve this (in any development approach) is to get the software into the hands of the users. CodeUnitTestFirst ensures that the program works as intended, it does not verify whether the intentions were correct.
As for having an automated way of generating code from tests, let me know when you find one. Until then, however, I am stuck having to write my own code.
There is Genetic Programming, which does exactly that. But you would need a vast number of test cases... -- GerrietBacker?
I have found an algorithm for automated programming, but I cannot express it...
Let me summarize it: UnitTests are highly recommended for developers who tend to concentrate on implementation and technical details (what percentage would that be?). UnitTests require you to think about specifications, interfaces, external views, "what I want the code to do". CodeUnitTestFirst is a rule that says: "Think first about what the code should do before you think about how the code can do it." An excellent side effect is that the output (UnitTests) provides test mechanisms that can be executed by the machine and are independent of implementation changes. -- as
A thought that I'm playing with is the task of writing "UnitTests" after the units themselves (classes) have been written.
My current thinking is that this task is nearly impossible. Once the code is written, the sorts of tests I can write against it tend to be tests which describe the behavior of the implementation, not tests describing the needs which the implementation was addressing.
Consequently, I find that when I CodeUnitTestLater?, I end up with tests which don't help me at all when I try to refactor. It's like I've surrounded the code with a thick, hard crust of tests that actually inhibit refactoring, rather than supporting it.
Not surprisingly, I also notice that the character of these after-the-fact "UnitTests" resembles the character of acceptance tests much more closely than TestFirst style UnitTests. -- EricHerman
Perhaps a useful tactic would be to code the UnitTests specifically for the purpose of helping you refactor. For instance, I recently found myself - pairing with a junior coder - working on an awkward method gleaned from an untested JSP page that pried into private details of other classes; we wrote an "outer" test to pin down the behaviour, then started extracting bits of the code into other methods, writing one or two UnitTests for each extracted method. Pretty soon we had three or so classes and an interface, with a more balanced design, tests that didn't look like mere afterthoughts, and two significant bugs fixed. We did have some inelegant code still lying around but... "it's a GreenBar, let's go home".
Even after-the-fact UnitTests help if they are written with the attitude of TestFirst style tests.
I find it curious given the above that many people have expended time and effort writing tools to generate test cases from existing classes (I do realize this may be useful in a legacy situation). You can find examples at http://www.junit.org/. Look under the IDE section, and there's more under the extensions section. No one seems to have attempted CodeGeneration in the other direction.
While there is some comment further up that writing (working) code from tests is an 'AI-complete problem', it is surely not that hard to generate method signatures by examining test methods. I doubt generating code to pass tests is what people need from a tool anyway - generating code fragments is the time-saving step, and generating code to fail tests is a little easier: just throw a RuntimeException, or the equivalent in your LanguageOfChoice. I love EclipseIde's little red X cue-balls that, with a couple of clicks, add classes, fields, and methods to the code being tested by my unit tests. Still have to do the method bodies by hand, of course.
By not that hard (since doubt was expressed above) what I mean is this: work so that your test class is named the same as the tested class up to a 'wart' (e.g., Money and MoneyTest). An intelligent editor like emacs can then figure out the intended class name by removing the wart. Next, search for variable declarations. Next, look for method calls on those variables which have the same type as the tested class. Determine the method signature required by examining the variables or constants passed in (remember, we already know the variable types). Finally, open the class under test, check to see if the required methods exist, and insert them (possibly interactively) if they don't. A complete language parser is not required for any of this; the mechanics are there in any language-aware editor in order to fontify, indent, and navigate code.
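A rough Java sketch of the first and last steps, assuming the Money/MoneyTest convention: strip the 'Test' wart to recover the class under test, then use reflection to check whether a method the test calls already exists. (A real tool would also parse the test source to collect the method names; that part is omitted here.)

  import java.util.Arrays;

  public class MissingMethodFinder {
      // Strip the naming "wart" to recover the class under test.
      static String classUnderTest(String testClassName) {
          return testClassName.endsWith("Test")
                  ? testClassName.substring(0, testClassName.length() - 4)
                  : testClassName;
      }

      // Does the class under test already declare a method with this name?
      static boolean hasMethod(String testClassName, String methodName)
              throws ClassNotFoundException {
          Class<?> cut = Class.forName(classUnderTest(testClassName));
          return Arrays.stream(cut.getDeclaredMethods())
                  .anyMatch(m -> m.getName().equals(methodName));
      }

      public static void main(String[] args) throws Exception {
          // An editor would collect "add" from the calls in MoneyTest's source.
          // Assumes a Money class is on the classpath in the default package.
          System.out.println(hasMethod("MoneyTest", "add"));
      }
  }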
It will not generally be practical to produce an exhaustive set of unit tests for a given function. It would just take too long. Take for example a unit test for a routine which implements y=sin(x). You can feed it various samples and possibly test some boundary cases, but you will not be able to test all values of input/output. Thus, the unit tests underspecify the function and could not be used to generate the function's behaviour other than at the points that are tested.
y=sin(x).
Here is the complete test: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - ... + (-1)^(n-1)*x^(2n-1)/(2n-1)! + ... go on until you run out of precision.
Definitions: x^n means x raised to the nth power; n! means n factorial, i.e. 1 * 2 * ... * n.
"...you will not be able to test all values of [x]." The infinite sum above provides a mathematical verification of one value. On a machine with 64-bit floating-point math, you will wait a long time for 2^64 iterations of the algorithm. How does your machine respond to special floating-point values like Infinity or NotaNumber? How does your machine respond to mathematical functions when tested at discontinuities?
In safety-critical or life-critical applications, it may not be sufficient to verify code with more code. It is possible (though unlikely) that the UnitUnderTest? implements the same algorithm you have devised for the UnitTest. In any case, to satisfy certification criteria you may need to manually verify that your UnitTest produces a correct result to compare with the UnitUnderTest?. Manual verification cannot be performed exhaustively within any reasonable scope of time.
If only all functions were periodic, had bounded derivatives, and could be decomposed into monotonic pieces! There are three ways to comprehensively verify a function: formal methods, a comprehensive list of inputs and outputs, and a trusted function to compare against. Depending on the accuracy requirements, it may be quite easy to provide a trusted function that is easy to prove correct but is unsuitable as an implementation. For instance, if sin(x) is required to give a value within d of the "correct" answer, you can create a lookup table implementation of sin(x) that is proven to be accurate within d/3, then verify that sin(x) gives answers within 2d/3 of the table. Since sine is so well-behaved, it is easy to produce such a table, and it will be small for reasonable values of d. Another way that a relaxed standard of accuracy simplifies the problem is that it may be possible to restrict the number of inputs. If you have a formally verified 64 bit -> 16 bit converter, you can compose it with an exhaustively tested 16 bit -> 64 bit sin function to get a 64 bit -> 64 bit sin function of known accuracy.
Producing a verified 64 bit -> 64 bit sin function that is correct for almost all of its bits of precision is much harder, but even this task can be cut down considerably by producing a formally verified modulus function, verifying sin1 on [0, 2pi), and implementing sin(x) as sin1(mod(x, 2pi)).
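A sketch of the table approach under the stated assumptions; for illustration the table is built from Math.sin, though in a real certification effort it would be independently verified (e.g. from the Taylor series above), and Math.sin also stands in for the implementation under test:

  import org.junit.Test;
  import static org.junit.Assert.assertTrue;

  public class SinTableVerificationTest {
      static final double D = 1e-3;          // required accuracy of sin under test
      static final int TABLE_SIZE = 16384;   // chosen so nearest-entry error < D/3
      static final double[] TABLE = new double[TABLE_SIZE + 1];

      static {
          // In a real certification effort this table would be independently
          // verified; Math.sin is a stand-in here, purely for illustration.
          for (int i = 0; i <= TABLE_SIZE; i++) {
              TABLE[i] = Math.sin(2 * Math.PI * i / TABLE_SIZE);
          }
      }

      // Trusted-but-slow reference: nearest table entry after range reduction.
      static double tableSin(double x) {
          double r = x % (2 * Math.PI);
          if (r < 0) r += 2 * Math.PI;
          int i = (int) Math.round(r / (2 * Math.PI) * TABLE_SIZE);
          return TABLE[i];
      }

      @Test
      public void implementationStaysWithinToleranceOfTable() {
          for (double x = -10.0; x <= 10.0; x += 0.0137) {
              // Math.sin again stands in for the implementation under test.
              assertTrue(Math.abs(Math.sin(x) - tableSin(x)) <= 2 * D / 3);
          }
      }
  }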
Perhaps these methods seem too heavyweight to qualify as UnitTests, but I imagine that in the situation you describe, changes to the sin function would be undertaken with great care and deliberation, and these methods would be used much like UnitTests are -- to decide when code is ready for integration, to check whether changes broke the code, etc.
Is it necessary to write UnitTests that don't fail the first time they are run? See UnitTestsThatDontBreak.
For an interesting take on CodeUnitTestFirst, see CodeUnitTestThird.
I want to know how to convert legacy code to ExtremeProgramming. It would seem that what is needed is to add tests to the legacy code so that you can do the refactoring. Does anybody know how to do this? -- JonGrover
Moved from UnitInUnitTestIsntTheUnitYouAreThinkingOf?
Interesting fact-ette about this 'radical' CodeUnitTestFirst philosophy: it may not be so new after all. My mother started her career as a PL1 and COBOL programmer way back when computers were larger than my house. We were chatting recently about XP, and she mentioned that when she started, all developers were required to write a unit test plan before they went near any code. The test plan was then used as the basis for the code. I'm guessing that compile and debug times were so much greater that they simply couldn't afford to spend ages fixing silly mistakes. Perhaps it's one of those universally good ideas that was simply forgotten about for a while, probably during the excitement of seeing the first computer that was small enough to fit on your desk...
Those who do not learn from history are doomed to repeat it. Guess we programmers prefer thinking about the future to studying history.
-- DarrenHobbs
That is interesting. Could you get your mother to expand a little on what was in a "unit test plan"?
It's a curious thing about our industry: not only do we not learn from our mistakes, we also don't learn from our successes. -- KeithBraithwaite
Unit Test First programming in PHP
I tried the above the other day, just for the heck of it ... It's actually far smoother than you'd expect. Here are the steps I went through:
-- DannyCook?
Am I missing the point somewhere? Unit testing always sounds like a good idea to me, until I come to actually start using it, at which point I get bogged down in problems. Allow me a real-life example: I am currently writing a database application, and I have a function which simply adds a record to a table, using some details passed in as parameters. Very simple, but certainly possible to fail, so it needs a test. How do I test it? It seems that a test would need to locate the record which has just been added, then read it and check that the correct information was written to it. This is actually more complicated than the original function, and more prone to errors, which surely defeats the object! What should I be doing? -- Chris Sandow
An approach is to save the record that was added, do a get, and then compare the records. Depending on your system this can be made generic. -- AnonymousDonor
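A sketch of that round trip, assuming a hypothetical addPerson function and an in-memory JDBC database (H2 here) so the test needs no external server:

  import java.sql.*;
  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class AddRecordTest {
      // Assumes an in-memory H2 database on the classpath; any JDBC
      // database would do. Table and method names are hypothetical.
      @Test
      public void addedRecordCanBeReadBack() throws SQLException {
          try (Connection c = DriverManager.getConnection("jdbc:h2:mem:test")) {
              c.createStatement().execute(
                  "CREATE TABLE person (email VARCHAR(100), name VARCHAR(100))");

              addPerson(c, "ada@example.com", "Ada");   // the function under test

              PreparedStatement q = c.prepareStatement(
                  "SELECT name FROM person WHERE email = ?");
              q.setString(1, "ada@example.com");
              ResultSet rs = q.executeQuery();
              rs.next();
              assertEquals("Ada", rs.getString("name"));
          }
      }

      static void addPerson(Connection c, String email, String name)
              throws SQLException {
          PreparedStatement ins = c.prepareStatement(
              "INSERT INTO person (email, name) VALUES (?, ?)");
          ins.setString(1, email);
          ins.setString(2, name);
          ins.executeUpdate();
      }
  }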
What about testing classes which produce graphical output, which cannot be tested by simply comparing every pixel with a test image? For example, one might create a class which has only one publicly accessible function, named "drawStuff()". drawStuff() uses OpenGL to draw some stuff. One cannot simply compare the pixels, because the graphical output will most likely vary between OpenGL implementations and will depend on the hardware. How do you write a unit test first for something like that? What if the output is very complex, making it very hard to have a test image made prior to the test? -- tomek
Well, you can still guarantee a few things about the function: you know that under such-and-such conditions, it will produce an image, and will not throw an exception or some fatal error. Depending on your application, you may also be able to say that it should throw an error under certain other conditions and check that it does. You can also check some basic things about the image: it should have a size in a certain range (a 1x1 image is probably an error), it shouldn't be all one color, the image produced by input A should be different from the one produced by input B, etc.
Then of course you add to this basic unit test as bug reports come in - as soon as a bug is reproducible, write a unit test that fails because of the bug. Then fix the bug and re-run the test.
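A sketch of those basic checks, assuming a hypothetical drawStuff() that renders into an offscreen java.awt.image.BufferedImage rather than straight to the screen:

  import java.awt.Color;
  import java.awt.Graphics;
  import java.awt.image.BufferedImage;
  import org.junit.Test;
  import static org.junit.Assert.*;

  public class DrawStuffTest {
      // Hypothetical stand-in for the renderer: it draws into an offscreen
      // image, so the test never touches a real OpenGL context.
      static BufferedImage drawStuff(int seed) {
          BufferedImage img = new BufferedImage(64, 48, BufferedImage.TYPE_INT_RGB);
          Graphics g = img.getGraphics();
          g.setColor(new Color((seed * 40) % 255, 80, 120));
          g.fillRect(10, 10, 20, 20);
          return img;
      }

      @Test
      public void producesAPlausibleImage() {
          BufferedImage img = drawStuff(1);
          assertTrue("too small", img.getWidth() > 1 && img.getHeight() > 1);
          assertFalse("all one colour", allOneColor(img));
          // Different inputs should produce different output.
          assertFalse(sameImage(drawStuff(1), drawStuff(2)));
      }

      static boolean allOneColor(BufferedImage img) {
          int first = img.getRGB(0, 0);
          for (int y = 0; y < img.getHeight(); y++)
              for (int x = 0; x < img.getWidth(); x++)
                  if (img.getRGB(x, y) != first) return false;
          return true;
      }

      static boolean sameImage(BufferedImage a, BufferedImage b) {
          if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight())
              return false;
          for (int y = 0; y < a.getHeight(); y++)
              for (int x = 0; x < a.getWidth(); x++)
                  if (a.getRGB(x, y) != b.getRGB(x, y)) return false;
          return true;
      }
  }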
If you are in a situation in which defining the expected output seems equally difficult as actually producing it, maybe you are trying to test at the wrong level of abstraction (too much at once). Quite probably, your complex function can be split into multiple helpers, the output of which is easier to check. Take one further step and you will find yourself dealing with multiple objects. If these components all produce correct output given their inputs, and if the master object (formerly the complex function) composes them together correctly, then the whole test passes. Note that the correctness of the master (putting the components together appropriately) should be testable even without their concrete implementations, by relying on MockObjects. The only concern is that by making your code testable in this manner (following CodeUnitTestFirst) you might sacrifice the necessary performance. I guess you can always optimize (and obfuscate) your perfectly correct code later on, though.
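A sketch of testing the "master" object with hand-rolled MockObjects, assuming hypothetical Layout and Painter components: the stubs stamp their input, so the test can verify that the master composes them in the right order without any concrete implementation:

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  interface Layout { String arrange(String input); }
  interface Painter { String paint(String arranged); }

  // The "master" object: composes the two components.
  class Renderer {
      private final Layout layout;
      private final Painter painter;
      Renderer(Layout layout, Painter painter) {
          this.layout = layout;
          this.painter = painter;
      }
      String render(String input) { return painter.paint(layout.arrange(input)); }
  }

  public class RendererTest {
      @Test
      public void composesLayoutThenPainter() {
          // Stub components stamp their input, making the composition visible.
          Layout layout = input -> "arranged(" + input + ")";
          Painter painter = arranged -> "painted(" + arranged + ")";
          assertEquals("painted(arranged(x))",
                       new Renderer(layout, painter).render("x"));
      }
  }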
CodeUnitTestFirst is (almost?) always technically possible, but it is difficult. Don't believe those who tell you it's "sooo easy". No, it takes extra time and consideration, and it takes practice to get there. Sometimes the time and consideration are better spent on other activities - my favorite example is learning/exploring/experimenting with code. Does XP advocate programming "spike solutions" with the CodeUnitTestFirst approach? I would argue that if you know that you are going to throw away your tests in the next hour (together with your experimental code), don't write them. Writing tests slows you down in the short term. Admit it.
Everything I've read says don't write unit tests for spikes or prototypes. Maybe that was just KentBeck. Anyway you can probably just use the PairProgramming rule that says only do it with production code.
I think one should look at why he is doing the spike or prototype. If one is merely trying to learn about a particular development environment, language, etc., then tests are not necessary. In other words, don't write tests for "Hello World" programs. If, however, one is trying to prove or disprove something, write the test. The test is an explicit definition of what one is trying to prove, and it is invaluable when explaining to others the result of the spike. Everyone will know exactly what was verified and can reproduce the result.
Coding TestFirst is like climbing with protection. You wedge various pieces of gear into the face of the rock and have your partner belay a rope attached to you, so that when you fall off, as you frequently will, you won't waste your time being injured.
A SpikeSolution is like free-climbing. No gear, no one to belay you, just you and the naked face of the rock. A really skilled climber can just zoom up the mountain, almost like skiing in reverse. But pride comes before ...
Something I want to try: after I write a Spike which does the job, or some piece of it, I want to refactor it into a set of tests, kind of like the FakeIt method of writing unit tests in reverse. Not sure how it'll turn out, anyone else ever try anything like this?
Actually, my usual development method (StopUsingTheWordMethodology) is like this. Except that I usually don't end up automating the tests (at least not very well); I just keep a set of things to test in mind, and run my "suite" with every recompile. This works well with games because of the "I know (in)correct behaviour when I see it" problem, but you end up with tests that are more RegressionTest than UnitTest in nature.
I do things like this because, frankly, trying to write the tests first feels tedious to me - it slows me down artificially, because of psychological factors (i.e. the first test, as well as the process of refactoring towards a solution for the first few tests, seems to insult my intelligence, because I've already figured out how to generalize things. I'd like to start a new page about that, but I'm not sure what to call it...).
-- KarlKnechtel
I'm wondering about TestFirstAndFunctionalProgrammingSynergy, and if it is worth keeping in mind while writing tests and refactoring. -- DavidPlumpton
See TestDrivenDevelopmentForAggregateMethods
I think there may be something slightly naive about the idea that you could write a unit test for a non-trivial method before you write the code for it. I think this comes from the assumption that you know enough to write a unit test once you have the signature, or even the complete "contract" to its callers, of the method under test. In order to write a unit test you must also have a clear specification of all the objects and their methods which your particular method under test uses and how it uses them. In other words, you need to know the "calling out" contracts as well as the "calling in" one. Then your unit tests can use MockObjects? to simulate the call-outs and to check they are correctly used.
In my opinion, this extra "specification" of a method is probably best documented simply by writing the code for the method (see DesignFromTheInsideOut), and that actually ends up with you writing the code for a method before its unit test. -- RichardDevelyn
(I took this comment out of the main body of my text)
Solution: Don't write non-trivial methods.
Any method which uses another object is non-trivial in the context of the discussion above. It isn't possible to avoid these. -- RichardDevelyn
I can't see how to apply this to programs that produce complex PDF reports. I work for a document-processing company. We take data files from our customers, and print bills, statements, etc. to PDF. If we were doing plain text reports, we could create a test output file by hand and use diff to check the program-generated output. I actually tried creating a PDF mock-up by printing from Word to Acrobat, but even though the PDF looks a lot like the one generated out of my code, the files are totally different. -- KevinKleinfelter
I did something similar but wrote a system to generate word documents from data using the Aspose component. I found it was impossible to test the actual content of the document, as there is no way to get to it - but I could write tests for the framework that built the document. I also wrote tests for the gateway classes that retrieved the data for the document. Not perfect I guess, but there was no easy way to navigate through the document to verify the data. I suppose you could use mock objects and verify that all calls are being correctly made, then at least you would have some confidence in the interactions, if not the state. -- Gareth Jones
How should we go about ChoosingTheFirstTest?