Un Unit Testable Units

Are there UnUnitTestableUnits?

This question is asked in the context of someone who has already bought into the XP method. The question here is not 'are UnitTests a good thing to have', it's 'is this unit UnitTestable, and if so, how'. The current summary opinion of the respondents is that the first scenario below is UnitTestable and should have supporting UnitTests. Please feel free to add other scenarios if you want to muddy the water a little.

See BeltsAndBraces, UntestableUnits.


Scenario 1

Consider cSerialPort, a class encapsulating an O/S handle and simplifying the interface to a serial port by narrowing it tightly around what the client code actually needs from a serial port. This class has a skinny interface and each implemented method is a call or two into the O/S's API.
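
For concreteness, the narrow interface might look something like the sketch below (in Java rather than the scenario's C++, with made-up method names - the original code is only described, not shown):

  // A skinny wrapper: just what the client code needs, nothing more.
  // Each method would be one or two calls into the O/S serial API.
  interface SerialPort {
      void   send(byte[] data);
      byte[] receive(int maxBytes);
      void   close();
  }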

It is tempting to ignore this simple unit and not write a UnitTest for it. These are the rationales:

* Each method is just a call or two into the O/S API, so there is hardly any logic of our own to break.
* Really exercising a serial port requires special hardware (at minimum a loopback cable) that not every development machine has.

Here are the considered replies:

Contributors: MichaelHill, EricUlevik, DaveBerkeley, and DonWells.

All code that may contain bugs should be tested. The conversion of the C++ interface into OS API calls may be buggy. An easy way of testing this is to create a text log file of the generated API calls. This also has the benefit of removing dependencies on hardware from this unit test. -- EricUlevik
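
A minimal sketch of Eric's suggestion (hypothetical names throughout): put the O/S calls behind a narrow interface, and let the unit test substitute an implementation that records the calls instead of touching hardware.

  // The wrapper talks to the O/S only through this interface.
  interface SerialOs {
      void open(String port);
      void write(byte[] data);
  }

  // Test double: records each call so the test can assert on the log
  // (or write it to a text file, as Eric suggests).
  class LoggingSerialOs implements SerialOs {
      final java.util.List<String> log = new java.util.ArrayList<String>();
      public void open(String port)  { log.add("open " + port); }
      public void write(byte[] data) { log.add("write " + data.length + " bytes"); }
  }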

I think that one should be pragmatic in testing. There is value in having a unit test that relies on special hardware (I assume this is something like a loopback cable), though one would not include it as part of the automatically run set. One would have to have a plan to run it periodically. Depending upon the amount of work in the wrapper class, one might consider writing a stub for the OS calls. This may have to be specially compiled with an #if preprocessor directive (other solutions probably also exist). Finally, one absolutely must write a stub for this class to be used by higher level tests. The code in question is not untestable in the absolute sense, it just may not be cost effective to set up for an automatically run test as part of each developer's build cycle. This is really a risk issue and probably should be decided in conjunction with your team and not based on wiki opinions. Just do what makes sense. -- WayneMack
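
Wayne's last point - a stub of the class itself for higher-level tests - might look like this sketch, written against the hypothetical SerialPort interface above:

  // Fake serial port for client-code tests: no O/S, no hardware.
  // It simply hands back whatever was last sent, like a loopback cable.
  class FakeSerialPort implements SerialPort {
      private byte[] buffer = new byte[0];
      public void   send(byte[] data)     { buffer = (byte[]) data.clone(); }
      public byte[] receive(int maxBytes) { return buffer; }
      public void   close()               { }
  }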


JeffGrigg often does unit testing with real hardware, rather than writing the stubs. Running the tests is slower, and requires the presence of the real hardware, but it's generally easier to do and gives greater confidence that the overall system will work right.

Example: There was lots of testing on the Hubble telescope, but no end-to-end test.

For the serial port, you can start by wiring a "loopback cable" that physically connects transmit to receive pins, and similar connections for status pins. Later, if performance or hardware availability become an issue, consider writing testing stubs, so you can test most of the system, even when the special hardware is absent.
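
A sketch of such a loopback test (OsSerialPort is an assumed real implementation of the SerialPort interface sketched in the scenario above):

  // With TX physically wired to RX, whatever we send must come back.
  public void testLoopback() {
      SerialPort port = new OsSerialPort("COM1");  // the real O/S port
      byte[] sent = "hello".getBytes();
      port.send(sent);
      assertTrue(java.util.Arrays.equals(sent, port.receive(sent.length)));
      port.close();
  }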

Contributors: JeffGrigg


I recently wrote another OO wrapper for OS functionality: this time an FTP wrapper. To test it, I point it at an FTP server I've set up to run on my desktop PC (i.e. IP address 127.0.0.1). Testing with the real OS interface helps test my understanding of the OS interface. -- JeffGrigg
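
A sketch of that kind of test, using nothing but standard sockets inside a JUnit TestCase (it assumes an FTP server really is listening on 127.0.0.1):

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.Socket;

  // Connect to the local FTP server and check its greeting banner.
  public void testLocalFtpServerGreets() throws Exception {
      Socket socket = new Socket("127.0.0.1", 21);
      try {
          BufferedReader in = new BufferedReader(
              new InputStreamReader(socket.getInputStream()));
          assertTrue(in.readLine().startsWith("220"));  // 220 = service ready
      } finally {
          socket.close();
      }
  }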


Another approach is to have the unit test follow this algorithm:

  if (isSpecialHardwareConnected())   // detects the special hardware
  {
      return runCSerialUnitTest();    // either SUCCESS or FAIL
  }
  else
  {
      // No hardware: say so, but report SUCCESS so 'make test' stays green.
      cerr << "I didn't really do runCSerialUnitTest()" << endl;
      return SUCCESS;
  }
The advantage is that any monkey who comes along can type 'make test', and when all the tests succeed, they are happy. It avoids needless hunting around for the test that "failed".

The disadvantage is that someone has to remember to connect the hardware once in a while and run the tests - along with all the other special tests that each have their own peculiarities. Otherwise you have a piece of software that gradually deteriorates.

I've considered a similar approach for a cross-platform test.

-- NickBishop



Thanks for the great discussion, guys! Meanwhile, back at the ranch, I've wasted more time wondering whether I should bother writing the UnitTest than I would have had I simply implemented the UnitTest. I'll do it first thing in the morning. -- MichaelHill

Where there is a will, there is a way to test. -- DonWells


Sorry if this is a stupid question, but are there any general guidelines for creating UnitTests for real-time and multi-threaded programs? We are having a mental block creating UnitTests for our TCP/IP socket and SNA interfaces, aside from creating some sort of stub TCP/IP or SNA subsystem. Any assistance would be extremely valuable -- DavidHurt

Why can't you create the stubs and see if that works? I do the equivalent thing all the time for databases. You may find you have to change your design around considerably to make this possible. However, when I make my designs testable, I'm always much happier with the resulting designs, even if I wouldn't have designed it that way if I hadn't been testing. Perhaps some day I'll design for testability by habit. Someday. -- KentBeck

Stubs are good. Running your client and the server on one machine makes testing much easier. Writing required scaffolding code will help your design.

For real time and multi-threading software, I've always tested and profiled the entire machine. After the usual unit tests and code inspections, of course.

-- EricUlevik

How about spawning a server thread and a client thread and having them talk to one another? In our XpLeiden project one of the students made separate executables. He had a simple server program which he started from the test executable with a spawnlp call. If you want to do this, you'll have to have the server executable ready. In Visual Studio the easiest way to do this would be by making the project for your server code a dependency of the project for your test code. But separate threads are probably easier. -- MartijnMeijering
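
A sketch of the threads-in-one-process variant (JUnit, standard sockets): the test spawns a tiny echo server on a background thread, and the client side talks to it over loopback.

  import java.io.*;
  import java.net.*;

  public void testClientTalksToServerThread() throws Exception {
      final ServerSocket server = new ServerSocket(0);  // any free port
      Thread echo = new Thread() {
          public void run() {
              try {
                  Socket s = server.accept();
                  s.getOutputStream().write(s.getInputStream().read());
                  s.close();
              } catch (IOException ignored) { }
          }
      };
      echo.start();
      Socket client = new Socket("127.0.0.1", server.getLocalPort());
      client.getOutputStream().write(42);
      assertEquals(42, client.getInputStream().read());
      client.close();
      echo.join();
      server.close();
  }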


Scenario 2

Consider a system which is not multi-threaded, yet has more than one thing to do at the same time. Consequently, a "timer" class is written. When an object's "do some work" routine is called, it is given a timer telling it how long it can work before it must return. The object sometimes runs into situations where it is waiting for something to happen, but it can't return if it still has time, so it calls the timer class's "wait n milliseconds" function.

I'm not saying that the "timer" unit can't be unit tested, but I am at a loss as to how I might do so. -- DonaldMcLean

First, wait(0). Then wait(n). Check the time to see if it waited.

Then wait(-1) and see if you want to change the interface. -- EricUlevik
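
In test form, Eric's first check might look like this sketch (Timer and waitMillis are stand-in names for the scenario's class):

  public void testWaitActuallyWaits() {
      Timer timer = new Timer();
      long before = System.currentTimeMillis();
      timer.waitMillis(50);
      long elapsed = System.currentTimeMillis() - before;
      assertTrue("only waited " + elapsed + " ms", elapsed >= 50);
  }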

I've written a few timers that fire events when they have counted down or whatever - I think that is what you want? Here is a method from a test for a timer which simply counts down and notifies its listener when it reaches zero. This test checks that this actually happens:

  long countdown = 500; //milliseconds
  Timer timer = new Timer(countdown, timerListener);
  timer.start();
  pause(700); //method on the unit test class
  assert("Timer finished", timerListener.wasNotified());
So the timer is constructed with the length of time to count down and a listener to notify when it has done so. It is started and then the test waits for 700 ms (longer than the countdown time). Then I assert that the listener was notified. There are lots of other tests to do (like pausing the timer and resuming).

Is this what you meant or have I completely misunderstood? :)

-- ChanningWalton


The only things that can't be properly UnitTested are:

-- NickBishop

or more pessimistically:


I recently stumbled across XP while looking for information on JUnit. Unfortunately my first attempt at adopting UnitTests is proving difficult.

My current project involves meta-programming in Java. The first task is to generate 1258 classes and another 1258 interfaces from a description file. I wrote a parser using JavaCompilerCompiler (javacc) and a class generator (I ended up doing two scratch versions - I think XP calls them SpikeSolutions - and ended up using the VisitorPattern). It was at this point I went looking for JUnit.

I am currently trying to produce a simpler description file to allow me to write some simple UnitTests for the code generator, but I have no idea how to go about testing the code that has been created, especially if I am to follow the XP principle of WriteTheTestsFirst.

Any advice will be most welcome. -- AndraeMuys


You want to test that the compiler generates proper code for legal input, that it properly handles bad input, and that the generated code runs within whatever runtime framework you provide. You can drive the compiler with a list of well-known input files and compare the output to expected results (in files, perhaps). You can also take a well-known input, compile it, and then run it in the runtime framework, comparing the generated code's output with expected results. You may also want to drive the runtime with code not generated by the compiler, so that you can tell when you have messed up the front end versus the back end.

It is easier to generate the well-known outputs from the compiler, verify that they are correct, and then store them away. I would interpret "write the test case first" to mean that before adding a feature to the compiler you write the test that will use a well-known input and compare it to an as-yet-uncreated oracular output. The test fails until you create, verify, and save the oracular output.
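
A sketch of such an oracular-output test (Generator and the file names are hypothetical):

  import java.io.File;
  import java.nio.file.Files;

  // Run the generator on a well-known input and compare its output
  // with the previously verified, saved oracular output.
  public void testGeneratorMatchesOracle() throws Exception {
      String actual   = Generator.generate(new File("tests/known-input.desc"));
      String expected = new String(
          Files.readAllBytes(new File("tests/expected-output.java").toPath()));
      assertEquals(expected, actual);
  }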

Compilers can be stateful beasts and require lots of test cases.

-- DavidItkin

Also note that, if you include the tools.jar file from the JDK, you can invoke the javac compiler class directly and give it stuff to compile. This lets you verify that your generated code compiles. If you pull out a dynamic class-loader of some sort, you can then load up the generated class that you just compiled and run some more tests over it. -- RobertWatkins
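
A sketch of Robert's idea (the paths and the class name Foo are made up; tools.jar must be on the classpath for com.sun.tools.javac.Main to resolve):

  // Compile a generated source file, then load and instantiate the result.
  public void testGeneratedCodeCompilesAndLoads() throws Exception {
      int status = com.sun.tools.javac.Main.compile(
          new String[] { "-d", "build", "generated/Foo.java" });
      assertEquals("javac reported errors", 0, status);

      java.net.URLClassLoader loader = new java.net.URLClassLoader(
          new java.net.URL[] { new java.io.File("build").toURI().toURL() });
      Class foo = loader.loadClass("Foo");
      assertNotNull(foo.newInstance());
  }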


Can JUnit be used to unit test a Java class with mostly private functions? -- PatrickParker

I tend to avoid private methods in preference for default package protection, specifically so I can test those methods by adding the TestCases to the class's package. If this is unacceptable, java.lang.reflect permits the suppression of access controls on methods, so you could use that. -- AndraeMuys
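
A sketch of the reflection route (Widget and computeInternal are made-up names):

  import java.lang.reflect.Method;

  // Suppress access checks so the test can invoke a private method.
  public void testPrivateMethodViaReflection() throws Exception {
      Widget widget = new Widget();
      Method m = Widget.class.getDeclaredMethod("computeInternal",
                                                new Class[] { int.class });
      m.setAccessible(true);
      Object result = m.invoke(widget, new Object[] { new Integer(7) });
      assertEquals(new Integer(14), result);  // assumes it doubles its argument
  }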

There's a big discussion on egroups about this: http://www.egroups.com/group/extremeprogramming (Whoever wrote this, please be more careful next time that you don't attach someone else's attribution to what you write.)


Scenario 3

A real experience of the inapplicability of XP (from WhenIsXpNotAppropriate) was with a shared local queueing component which had been thoroughly unit tested and which failed during a system stress test involving highly concurrent usage of the component combined with a shortage of virtual memory. As each bug was fixed, the stress test just got further and another bug popped up. Eventually, the underlying cause became apparent, a data structure was replaced, and the stress test found no more problems in the component. In retrospect, it would not have been practicable to code unit tests to find this class of bugs. Admittedly we were not using XP, but unless the project had been using XP from scratch, how could XP have worked for this component?

Why would XP have done any worse in this situation? If you feel your current approach was appropriate given the problems you describe, why would XP be any less appropriate?

It is not economically viable, in the above environment, to run a full stress test prior to integrating each set of changes. Effectively, the refactoring frequency has to be decreased to match the ability of testing to give confidence that changes haven't broken something. Maybe the philosophy is similar, but the rate of change is completely different. If you have to integrate changes before you get feedback, you have to apply measures other than unit testing to be sure you won't break other people's unit tests.


I could make the argument that a shared queue is something that takes a set of UnitTests that are inherently multithreaded. In that case, the "stress test" is nothing more than a proper UnitTest in that regard. In general, I recommend that people do three types of testing for server-side (multiuser) systems before they ever let them see daylight:

Perhaps our definition of UnitTest is wrong, or perhaps we just need to make a distinction between appropriate testing strategies for server-side code and client-side code.

-- KyleBrown
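
A sketch of what such an inherently multithreaded UnitTest might look like, with java.util.concurrent's queue standing in for the shared queueing component:

  import java.util.concurrent.ConcurrentLinkedQueue;
  import java.util.concurrent.atomic.AtomicInteger;

  // Several threads hammer one queue; afterwards, every element that
  // went in must have come out exactly once - nothing lost, nothing extra.
  public void testQueueUnderConcurrentUse() throws Exception {
      final ConcurrentLinkedQueue queue = new ConcurrentLinkedQueue();
      final AtomicInteger dequeued = new AtomicInteger();
      Thread[] workers = new Thread[10];
      for (int t = 0; t < workers.length; t++) {
          workers[t] = new Thread() {
              public void run() {
                  for (int i = 0; i < 1000; i++) {
                      queue.add(new Integer(i));
                      if (queue.poll() != null) dequeued.incrementAndGet();
                  }
              }
          };
          workers[t].start();
      }
      for (int t = 0; t < workers.length; t++) workers[t].join();
      while (queue.poll() != null) dequeued.incrementAndGet();  // drain leftovers
      assertEquals(10 * 1000, dequeued.get());
  }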


It might be possible to unit test a component under stress, but often it is the interactions between multiple components that break under stress. In the shared queueing example, the queueing component wasn't expecting storage deallocation calls to result in a thread switch (the environment was not fully re-entrant - control was lost only on certain calls to the system).

For example, it is often when making calls between components that locking strategies succeed or fail. If they are too strong, you can get deadlock. If they are too weak, you can get races leading to unpredictable results.

So, in practice, I think you have to run stress tests on subsystems or whole systems. There are probably economic limits too, as stress tests tend to consume large amounts of CPU and other resources.

-- GlynNormington


I think you're being deceived by the word "Unit", Glyn. All along we've assumed that the kind of testing done in XP does not ONLY test components in isolation, but sets of components together... -- RonJeffries

I see. Would the unit be something akin to a (OO) subsystem, then? It clearly needs to be a logical grouping of components around which some boundary can be drawn for the purpose of testing. -- GlynNormington


... you're being deceived by the word "Unit" ... the kind of testing done in XP does not ONLY test components in isolation, but sets of components together...

I would also admit to being confused as to what a UnitTest implies in XP. Most of the discussions seem to be about testing each class/object in isolation. The other extreme says any test a developer writes is a UnitTest. Anyone care to expand on what an XP UnitTest actually is?

In XP there are two kinds of tests, called UnitTest and AcceptanceTest. The AcceptanceTests are specified and belong to the Customer. The UnitTests belong to the programmers. All of the UnitTests are required to run perfectly in the integrated system every time any developer releases any code. Whenever a defect is detected downstream (i.e., in AcceptanceTest or in actual usage), a UnitTest must be written to detect that defect. Then the defect is fixed. It follows that not all UnitTests test things in isolation. The point of the tests, as MichaelHill points out, is to provide mobility, the ability to change the code as needed. As such, the tests need to execute quickly, and to provide confidence. Sometimes isolation is the best way to do this. Sometimes it is not. Either way, XP teams have to do it.

Yes. For instance, I build UnitTests for my EJB systems at the level of the individual EntityBean, and at the level of the Factory and Value Objects, and at the level of the SessionBean (acting as a Facade) and then often at higher levels of client software beyond that... I've actually got a couple of good examples in the source code that ships with my new book... -- KyleBrown

See UnitTestsReconsidered for a discussion of UnitTesting that got sideswiped by the XP freight train. If you read enough, you'll see the difference between the two.

