Writing Testable Classes

Often the design of a class needs to be changed so that you can write good TestCases against it. Test classes are designed to test units of functionality, and I follow two practices to make those units testable.

My development standard is to write my TestClasses in the same package as the ClassUnderTest. This allows me to take advantage of variables and methods that are 'package' visible. I then use package-level variables to indicate whether a function executed in the desired manner. I can then write my TestClass to call my MethodUnderTest and then interrogate the package-level variable to ensure that the correct behavior happened.
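For example (only a sketch; the class, field, and method names are invented, and JUnit is assumed), the ClassUnderTest might record what it did in a package-visible field, and the TestClass, sitting in the same package, reads it back:

  import junit.framework.TestCase;

  public class OrderProcessor {
      // Package visible so a TestCase in the same package can interrogate it.
      boolean notificationSent = false;

      public void process(String orderId) {
          // ... the real work would happen here ...
          notificationSent = true;
      }
  }

  // Same package, typically in its own file.
  class OrderProcessorTest extends TestCase {
      public void testProcessSendsNotification() {
          OrderProcessor processor = new OrderProcessor();
          processor.process("order-42");
          assertTrue(processor.notificationSent);
      }
  }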

My second practice is to decompose my functions into testable methods. This is nothing more than GoodProgrammingPractice, but it allows me to test the units accurately. By having more decomposed (more granular) methods, I can focus on exactly the units that need to be tested.

For example, I might have a function on a Person called determineTaxPayment. If I am trying to test that I can calculate the right numbers with the right precision, I would decompose this method into at least two methods: one called getTaxRate, which would call my local IRS Tax bean and ask it for my rate, and a separate method called calculateTaxPayment. Since getTaxRate does little more than delegate to a Tax bean of some sort (which I assume is well tested), I am not too worried about testing it, so I make it private. I am, however, very interested in testing the calculation, so I make calculateTaxPayment package visible and test it with my TestCase.
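A sketch of what that decomposition might look like (TaxRateBean stands in for whatever the real Tax bean is, and everything beyond the three method names above is invented):

  // Stand-in for the real Tax bean.
  interface TaxRateBean {
      double getRateFor(Person person);
  }

  public class Person {
      private TaxRateBean taxBean;
      private double income;

      public Person(TaxRateBean taxBean, double income) {
          this.taxBean = taxBean;
          this.income = income;
      }

      // Public entry point: just wires the two pieces together.
      public double determineTaxPayment() {
          return calculateTaxPayment(getTaxRate());
      }

      // Delegates to the (presumably well tested) Tax bean, so keep it private.
      private double getTaxRate() {
          return taxBean.getRateFor(this);
      }

      // The calculation I actually care about; package visible so a
      // TestCase in the same package can call it directly.
      double calculateTaxPayment(double taxRate) {
          return income * taxRate;
      }
  }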

If I were to test the method determineTaxPayment directly and the Tax bean started to behave poorly, my TestCase would start to show Failures. It's not the calculation unit that is in error, so it should not fail when a coupled component starts to fail. In other words, because the Person-specific functionality is in its own method, you can test just that functionality and stub out the Tax bean by simply passing in the value it would have returned.
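A TestCase for the calculation alone might then be as simple as this (same package as Person, JUnit assumed, numbers made up):

  import junit.framework.TestCase;

  public class PersonTest extends TestCase {
      public void testCalculateTaxPayment() {
          // No Tax bean needed: pass in the rate it would have returned and
          // test only Person's own arithmetic.
          Person person = new Person(null, 50000.00);
          assertEquals(14000.00, person.calculateTaxPayment(0.28), 0.001);
      }
  }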

Interestingly, I often find it hard to convince everybody that this type of programming is good. The most common argument is that I am breaking encapsulation by making my variables and methods package level. Obviously my answer is that writing good TestCases is more important than protecting other classes in your package from seeing your data and calling your methods.

I am interested in hearing from the group on what you all think.

-- MikePorter


This is a case where C++'s friend idea is very useful. Your test classes can access internals, but no one else can. Personally, I don't like including any test code with the classes themselves. -- AnonymousCoward


I've been putting a lot of my automated tests inside their subject classes. Usually I follow each method that I'm testing with the block of code that tests it. It's simple, convenient, and nicely sidesteps the whole private vs. public issue. -- CurtisBartley
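In Java that layout might look something like the following (only a sketch; plain assert statements are used instead of a test framework so the tests can live inside the class itself):

  import java.util.ArrayList;
  import java.util.List;

  public class Stack {
      private final List<Object> items = new ArrayList<Object>();

      public void push(Object item) {
          items.add(item);
      }

      // The test block sits right below the method it exercises.
      static void testPush() {
          Stack stack = new Stack();
          stack.push("a");
          assert stack.items.size() == 1;
      }

      public Object pop() {
          return items.remove(items.size() - 1);
      }

      static void testPop() {
          Stack stack = new Stack();
          stack.push("a");
          assert "a".equals(stack.pop());
      }

      // Run the embedded tests with assertions enabled: java -ea Stack
      public static void main(String[] args) {
          testPush();
          testPop();
          System.out.println("embedded tests passed");
      }
  }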


Another good reason to separate out stuff-that-this-class-does from queries-to-other-classes is to make your tests more resilient in the face of changes to the other classes. A war story should make things clear:

There we were, deep in the jungles of a PHP web application (this isn't terribly fair to real jungles, which typically have bug counts of under a billion). We needed a way to hack through the brush quickly and effectively - so we wrote some primitive UnitTests, which matched up with our primitive Units: entire pages.

The technique was simple: open a page as a URL (passing in the parameters as GET parameters). Remember what was returned. If it ever changes, that UnitTest fails. After the first few, they were very simple to write.
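Roughly the idea, sketched in Java rather than PHP (the URL, the saved file name, and the test class are all invented for illustration):

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.io.Reader;
  import java.net.URL;
  import junit.framework.TestCase;

  public class ListOrdersPageTest extends TestCase {
      public void testPageMatchesSavedOutput() throws IOException {
          // Fetch the page just as a browser would, GET parameters included.
          String actual = slurp(new InputStreamReader(
                  new URL("http://localhost/listOrders.php?customer=42").openStream()));
          // Compare against the output saved when the test was first written.
          String expected = slurp(new FileReader("expected/listOrders.html"));
          assertEquals(expected, actual);
      }

      private static String slurp(Reader source) throws IOException {
          BufferedReader reader = new BufferedReader(source);
          StringBuilder text = new StringBuilder();
          for (String line = reader.readLine(); line != null; line = reader.readLine()) {
              text.append(line).append('\n');
          }
          reader.close();
          return text.toString();
      }
  }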

The problem came when we started messing with a header file, which most of the pages included verbatim. Any change to the header file would break all dozen or so (page) UnitTests. It was easy to inspect any single failure and update the test, but doing so a dozen times every time we tweaked the header was a drag. We oscillated between wasting lots of time updating the tests and taking lots of risk by letting them slide.

The solution was to make each UnitTest actually test its own unit - and nothing more (most especially not the header). For each page, we:

  1. Put the part between the header include and the footer include into its own generating function, which we called from where it used to be. We passed in the HTTP POST and GET parameters (and any globals) as normal parameters.
  2. Tested using the existing whole-page UnitTests, fixing any deviation.
  3. Changed each UnitTest to call the generating function, instead of requesting the whole page through the HTTP server. Since we weren't changing the code, and each generating function had been tested against the initial whole-page UnitTests, we knew the initial behavior was the desired behavior, and saved it to compare later runs against. (A sketch of what such a test might look like follows this list.)
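A sketch of one of the new tests (again in Java rather than PHP; the page class, method, and markup are invented):

  import junit.framework.TestCase;

  // Stand-in for the extracted generating function: the part of the page
  // that used to sit between the header include and the footer include.
  class ListOrdersPage {
      static String generateBody(String customerId) {
          return "<table><tr><td>" + customerId + "</td></tr></table>";
      }
  }

  public class ListOrdersBodyTest extends TestCase {
      public void testBodyIgnoresHeaderChanges() {
          // The former GET parameter arrives as an ordinary argument, and the
          // header and footer are never rendered, so changes to them cannot
          // break this test.
          assertEquals("<table><tr><td>customer-42</td></tr></table>",
                       ListOrdersPage.generateBody("customer-42"));
      }
  }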

So our initial, primitive UnitTests helped us move to newer, better-decomposed UnitTests (and, of course, better-decomposed production code).

I really, really, really wish I'd started doing this sooner. Right after defining the set of UnitTests for each page would have been the right time. Right after the first time the header changed would have been the obvious time. Right after the third time the header changed was really, really dumb.

The moral: be fanatical about separating out what your class is doing itself from what it gets from other classes. This lets you write UnitTests that don't have to change when the other classes change. And that saves you from an awful lot of boring grunt work.


Doesn't the usual distinction between black box and white box testing apply here? Mike, with his tests that require access to package-private variables, is clearly talking about white box testing. Such tests probably should reside in the package because they are part of the implementation and vulnerable to implementation changes.

A black box unit test, on the other hand, is logically outside the implementation; it tests the contract of the unit. Hypothetically, black box tests are more stable than white box tests because contracts are more stable than implementations. I don't mean to imply that therefore black box tests are "better". When lots of refactoring or just plain rework is going on, both kinds of tests take a hit.

White box tests shouldn't be used instead of black box tests. Some tests should test the contract independent of the implementation. At the same time, white box testing makes it easier to test "units" of finer granularity, to grind down to the method level, so one doesn't get in the situation of writing reams of code before any of it can be tested. It is often the case that a class has only one or two public methods but dozens of private methods. If you could only test the contract, you'd have to write a lot of code before doing any testing.

I know of at least one use case where white box tests seem indispensable. A parser may need only one public method, parse(), which accepts an input stream and returns a parse tree. If you try to test only at the contract level and do a good job of it, your test inputs and outputs grow ever larger in size and multiply combinatorially. You are guaranteed to wear yourself out before testing every possible combination, and the tiny subset of all possible inputs you do test gives little confidence you won't find a test case tomorrow that doesn't work. On the other hand, a top-down parser is often composed of small methods representing each language rule. If you test from "leaf" methods upward, you can have good confidence you are covering all possible paths in each method. You don't hit the combinatorial wall because tests of lower-level methods don't have to be repeated in higher-level tests. The black box contract tests, therefore, are free to use mostly realistic inputs and simple (too long, too short) end cases.
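For instance (a toy arithmetic grammar, invented purely for illustration, but the same shape as a hand-written top-down parser), the rule methods can be made package visible so each gets its own tests, from the leaves up, while parse() remains the public contract:

  public class ExpressionParser {
      private final String input;
      private int position = 0;

      public ExpressionParser(String input) {
          this.input = input;
      }

      // The contract: the only public method, covered by black box tests.
      public int parse() {
          return parseExpression();
      }

      // expression := term { '+' term } -- package visible, testable on its own.
      int parseExpression() {
          int value = parseTerm();
          while (position < input.length() && input.charAt(position) == '+') {
              position++;
              value += parseTerm();
          }
          return value;
      }

      // term := digit { digit } -- a "leaf" rule, tested first.
      int parseTerm() {
          int start = position;
          while (position < input.length() && Character.isDigit(input.charAt(position))) {
              position++;
          }
          return Integer.parseInt(input.substring(start, position));
      }
  }

A TestCase can then cover all the paths through parseTerm with a handful of tiny inputs, test parseExpression on the assumption that parseTerm already works, and exercise parse() separately at the contract level.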

-- Bob Foster

