Xp Test Faq

From XpFaq ...

How do you test a UI? (GuiTesting)

You write the insides, the UnitTests, first. Anything you change gets instrumented with new unit tests that keep coverage complete. I really need everyone to realize you write the "business layer" objects first, _then_ the database and its archives, and then the UI. Leave the UI for last. Run it from an icon on the desktop that says "runMyProject".

You are on site to join a UI to an algorithm. The algorithm is the hard, original part that you are getting paid for. The UI will use a widget library finished by someone else; it's not original. Start with the algorithm.

Only after your logical module is sound do you tackle adding a UI. You need to add the ability to reach the most interesting variables found deep inside said logic box, preferably via a log file. Then you trace them on a screen that the users can bind their RealityVariables to.
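
For instance, here is a minimal sketch, with an invented PremiumCalculator and made-up rules, of driving such a business-layer object with a UnitTest before any database or UI exists. The UI added later only has to display what the tests already pin down.

 import junit.framework.TestCase;

 // Hypothetical business-layer object: no UI, no database, just logic.
 class PremiumCalculator {
     private int driverAge;
     private double baseRate;

     public void setDriverAge(int age) { driverAge = age; }
     public void setBaseRate(double rate) { baseRate = rate; }

     // The "interesting variable" is reachable through an ordinary method,
     // so the tests now, and the UI later, can both get at it.
     public double premium() {
         return driverAge < 25 ? baseRate * 1.5 : baseRate;
     }
 }

 public class PremiumCalculatorTest extends TestCase {
     public void testYoungDriverSurcharge() {
         PremiumCalculator calc = new PremiumCalculator();
         calc.setBaseRate(500.00);
         calc.setDriverAge(19);
         assertEquals(750.00, calc.premium(), 0.01);
     }
 }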

Noted.

For each one, draw its card first, but only if the OnsiteCustomer agrees you should. To do TestDrivenDesign you must always write test code before you write the code it tests. Research it. When you "get" it, you will achieve XP.

Test the text before it crosses from the logic unit to the UI.

Add that to the text tests.
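
A sketch of what such a text test might look like, assuming a hypothetical StatusReporter in the logic layer; the UI will later do nothing but drop the same string into a label.

 import junit.framework.TestCase;

 // Hypothetical logic-layer object that produces the text the UI will show.
 class StatusReporter {
     private int daysOverdue;
     public void setDaysOverdue(int days) { daysOverdue = days; }
     public String statusText() { return daysOverdue + " days overdue"; }
 }

 public class StatusTextTest extends TestCase {
     public void testOverdueMessageText() {
         StatusReporter reporter = new StatusReporter();
         reporter.setDaysOverdue(3);
         // The string is pinned down here, before any widget ever sees it.
         assertEquals("3 days overdue", reporter.statusText());
     }
 }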

Then visually determine the same by looking at them. How are you going to keep the nascent UI out of your face here, yet last in the "phases"?

By instrumenting the control that should have gone in after that one.

By now any remaining suture bugs (bugs at the seam between the logic and the UI) will have provoked more UnitTests. You will probably have thought of 3 to 7 times as much testing.

CodeUnitTestFirst.

Then visually determine if... they don't bother to instrument at this level...

Yes, including using test-first on the primordial UI controls too.
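
A minimal sketch of test-first on such a control, assuming a hypothetical StatusPanel that wraps a JLabel; no window needs to appear for the test to run.

 import javax.swing.JLabel;
 import junit.framework.TestCase;

 // Hypothetical primordial UI control: a panel-like object owning one JLabel.
 class StatusPanel {
     private final JLabel label = new JLabel();
     public void showStatus(String text) { label.setText(text); }
     public JLabel getLabel() { return label; }
 }

 public class StatusPanelTest extends TestCase {
     public void testLabelShowsStatusText() {
         StatusPanel panel = new StatusPanel();
         panel.showStatus("3 days overdue");
         // No frame is opened; the control is tested through its objects.
         assertEquals("3 days overdue", panel.getLabel().getText());
     }
 }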


Do you *literally* code the tests, or do you just pseudocode them? Because, e.g., the specific names of the code to be tested haven't been defined yet.

You really code them.

Then you try to compile them, and feign surprise when the things they target don't exist yet. This implies you are using one button on your IDE to verify you have really finished coding the tests.

Repeat, fixing anything the compiler asks for, until all errors are gone. Then repeat against all assertion failures. Then repeat against the entire test suite.
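
Here is a sketch of that cycle with an invented Account class; the comments record how each step went.

 import junit.framework.TestCase;

 // Step 1: the test, written first. When first compiled, Account did not
 // exist, so the compiler complained about every line that mentions it.
 public class AccountTest extends TestCase {
     public void testDepositIncreasesBalance() {
         Account account = new Account();
         account.deposit(100);
         assertEquals(100, account.getBalance());
     }
 }

 // Step 2: the emptiest Account that satisfied the compiler (deposit() did
 // nothing, getBalance() returned 0), which made the assertion fail.
 // Step 3: fill in the behaviour below, and the bar goes green.
 class Account {
     private int balance;
     public void deposit(int amount) { balance += amount; }
     public int getBalance() { return balance; }
 }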

At each point you are testing as close as possible to the point of change, thence the next larger region, thence the next larger territory. Recall that, traditionally, most new bugs appear where code was most recently changed. XP exploits this bias to conquer it.

At each point you use a tool, either the compiler or your testing code, to determine when to move to the next level, and not brain cells.

Brain cells are a much more precious resource - use them wisely.

-- PhlIp

PhlIp, on the XpMailingList, also wrote the snappy description below:


How do you write acceptance tests? How are they different from unit tests?

See AcceptanceTestExamples. From what I understand, AcceptanceTests are derived from UserStories, and are thus comparatively high-level. UnitTests are derived from EngineeringTasks, and are thus comparatively low-level. Both should be automated (AutomatedTesting) if at all possible.
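
A rough sketch of the contrast, with invented classes: the UnitTest pins down one EngineeringTask-sized behaviour of one object, while the AcceptanceTest walks through a whole UserStory in the customer's vocabulary.

 import junit.framework.TestCase;

 // Invented production code, deliberately trivial.
 class TaxTable {
     public double rateFor(String band) {
         return "standard".equals(band) ? 0.20 : 0.0;
     }
 }

 class Invoice {
     private double balance;
     public Invoice(double amount) { balance = amount; }
     public void receivePayment(double amount) { balance -= amount; }
     public String status() { return balance <= 0.0 ? "paid in full" : "outstanding"; }
 }

 // A UnitTest pins down one small behaviour of one object...
 class TaxTableTest extends TestCase {
     public void testStandardRate() {
         assertEquals(0.20, new TaxTable().rateFor("standard"), 0.001);
     }
 }

 // ...while an AcceptanceTest walks through the customer's story end to end.
 public class PayAnInvoiceStoryTest extends TestCase {
     public void testCustomerPaysInvoiceInFull() {
         Invoice invoice = new Invoice(100.00);
         invoice.receivePayment(60.00);
         invoice.receivePayment(40.00);
         assertEquals("paid in full", invoice.status());
     }
 }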


How would one go about testing while prototyping? In my current (non-XP) project, I cannot know beforehand if what I create is accepted the way I imagined it or whether it needs to be radically changed after it has been presented to the customer. (As soon as they see it, they can decide whether they like it or not. Discussions without a working prototype tend to be unproductive in this research project.) Should I write a lot of tests for code that will probably be thrown away anyway? If I do, I am wasting resources and lowering my own motivation. If I don't, the chances are that I will move on and the code will continue to exist without tests.

I guess that it would be reasonable to write tests after a piece of code has been "accepted" -- it has been evaluated by the customer and has not changed substantially for some time. However, XP insists on the test-first-code-later approach and "test everything that can possibly fail" and "a user story is only finished when all the tests run 100%" etc. A more general and tricky question would be: is eager testing always good? Are there strong economic reasons to skip testing altogether for certain types of code or in certain stages of development? I believe there are, and that the decision should be derived from a comparison of effort and profit (rather than by following some "extreme rule of thumb"). Unfortunately, while the cost of testing is clearly measurable, the gains are not. Oh well, I guess the usual answers apply. "Use your own professional judgment" and "there is no silver bullet". Which is a profound way of stating "I don't know". --JPL


Moved from UnitTest

Two Questions:

1) If UnitTest running time gets too long, how does one approach trimming and optimizing the tests such that no coverage is lost? My understanding of unit test writing is from CodeComplete, which discusses writing the tests from both control and data points of view.

2) Do UnitTest cases get refactored as well as the production code?

 -- RonGarcia

1) My experience is that very long running tests and those that require manual verification (GuruChecksOutput) are run less often than the other tests. The ExtremeProgramming advocates say that their tests run so fast that they can always run all of them at a whim. Others, like MicrosoftCorporation, split the tests into groups, running slow and expensive batches at night or over weekends.
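
One way to arrange such a split in JavaUnit, sketched with trivial placeholder tests: a suite the pair runs at a whim, and a batch left for the nightly build.

 import junit.framework.Test;
 import junit.framework.TestCase;
 import junit.framework.TestSuite;

 // Trivial placeholder tests standing in for the real fast and slow ones.
 class QuickArithmeticTest extends TestCase {
     public void testAddition() { assertEquals(4, 2 + 2); }
 }

 class SlowReportTest extends TestCase {
     public void testNightlyReport() throws Exception {
         Thread.sleep(50); // stands in for a long database or report run
         assertTrue(true);
     }
 }

 public class FastTests {
     public static Test suite() {
         TestSuite suite = new TestSuite("Run these at a whim");
         suite.addTestSuite(QuickArithmeticTest.class);
         return suite;
     }
 }

 class NightlyTests {
     public static Test suite() {
         TestSuite suite = new TestSuite("Run these overnight or on weekends");
         suite.addTestSuite(SlowReportTest.class);
         return suite;
     }
 }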

2) Yes, it makes sense to refactor JavaUnit (for example) code from time to time to make it better meet your needs. But you should avoid redesigning the testing library and your production code at the same time: Ordinary humans, like us, can really only do one thing at a time. ;-> -- JeffGrigg

UnitTests need refactoring to improve their documentation value and to keep them prepared for changes that result from changes in production code. Note that there is a distinct set of bad smells, and an additional set of test-specific refactorings involved. See RefactoringTestCode. -- LeonMoonen
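
As a small example of one such test-specific refactoring, reusing the invented PremiumCalculator from the sketch near the top of this page: duplicated fixture construction in every test method is a classic test smell, and hoisting it into setUp() is the matching cleanup.

 import junit.framework.TestCase;

 // Before the refactoring, both test methods repeated the same two lines of
 // construction; now the shared fixture lives in setUp().
 public class PremiumCalculatorFixtureTest extends TestCase {
     private PremiumCalculator calc;

     protected void setUp() {
         calc = new PremiumCalculator();
         calc.setBaseRate(500.00);
     }

     public void testYoungDriverSurcharge() {
         calc.setDriverAge(19);
         assertEquals(750.00, calc.premium(), 0.01);
     }

     public void testMatureDriverPaysBaseRate() {
         calc.setDriverAge(40);
         assertEquals(500.00, calc.premium(), 0.01);
     }
 }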

Are there any specific examples of these distinct CodeSmellsInUnitTestCode?


Question: If you code unit tests up front, how can you code them if you do not know the interface of the thing you're testing yet? -- Serge Beaumont <mailto:beaumose@iquip.nl>

You make it up as you go along, which is what you would be doing anyway. At least, it's what I would be doing anyway. --AlastairBridgewater
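
A tiny sketch of "making it up as you go along", with an invented ShoppingCart: the test is where its interface gets named, and the class is written afterwards to match.

 import junit.framework.TestCase;

 // The interface of the invented ShoppingCart is named right here, in the
 // test, by writing the calls you wish you had.
 public class ShoppingCartTest extends TestCase {
     public void testTotalOfTwoItems() {
         ShoppingCart cart = new ShoppingCart();
         cart.add("apple", 0.40);
         cart.add("bread", 1.10);
         assertEquals(1.50, cart.total(), 0.001);
     }
 }

 // Written second, to match the interface the test just made up.
 class ShoppingCart {
     private double total;
     public void add(String item, double price) { total += price; }
     public double total() { return total; }
 }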

Question: How much overhead is there in moving unit tests when you refactor between classes? Is it a drag or does it not happen much? -- TomAyerst

It happens occasionally, but it only adds 20% to the effort of refactoring when it does. --KentBeck

Recall that one must first refactor the UnitTests so they expect the functionality in its new class, and only then move the functionality itself. Such a UnitTest, like any XP UnitTest, pulls one in the correct direction. It is not make-work added after a fix effort. --PhlIp
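
Sketched with invented names: suppose date formatting is moving from a ReportWriter into a new DateLabel class. The test is edited to call the new home first, and the failing test then pulls the production code after it.

 import junit.framework.TestCase;

 // Invented example: date formatting has moved out of a ReportWriter into
 // this new DateLabel class, because the test asked for the new home first.
 class DateLabel {
     public String format(int year, int month, int day) {
         return year + "-" + pad(month) + "-" + pad(day);
     }
     private String pad(int n) { return n < 10 ? "0" + n : "" + n; }
 }

 public class DateLabelTest extends TestCase {
     public void testIsoFormatting() {
         // This line originally read: new ReportWriter().formatDate(2008, 11, 17)
         // Changing it first made the test fail, which pulled the method over.
         assertEquals("2008-11-17", new DateLabel().format(2008, 11, 17));
     }
 }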


Question: How should unspecified edge conditions be dealt with?

For example, a method that parses a line of CSV, given a blank string as input. It's an edge case, so we would usually want to add a test for it right away, but there's not yet any basis for whether this should parse as 0 fields or as 1 blank field. Should I...
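
Whatever the OnsiteCustomer decides, the decision ends up recorded in a test. A sketch with an invented CsvLineParser, arbitrarily asserting that a blank line means one empty field; the opposite decision would simply flip the assertions.

 import junit.framework.TestCase;

 // Invented CsvLineParser; the choice of "one empty field" is arbitrary
 // here and belongs to the OnsiteCustomer.
 class CsvLineParser {
     public String[] parse(String line) {
         return line.split(",", -1); // the -1 keeps trailing empty fields
     }
 }

 public class CsvLineParserTest extends TestCase {
     public void testBlankLineParsesAsOneEmptyField() {
         String[] fields = new CsvLineParser().parse("");
         assertEquals(1, fields.length);
         assertEquals("", fields[0]);
     }
 }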


See also UnUnitTestableUnits and MockObject.


CategoryExtremeProgramming CategoryFaq

