How should you go about retrofitting UnitTests to legacy code? Is it better to try to add blanket tests to the whole system, or just piecemeal, as you come into contact with parts of the LegacyCode?
What do you do if visual inspection betrays the presence of bugs in a particular piece of legacy code? Obviously, you should fix them. However, what if you don't know if other pieces of the code rely on these "features"? Should you add unit tests that actually check for these bugs?
High-level approaches and specific techniques follow...
One unofficial XPer's response:
Don't be afraid to write UnitTests that test larger parts of the system, similar to AcceptanceTests. At least when you get started, it's better than no tests at all.
You can always add unit tests for the low level functions. Sometimes it seems pointless, because most of the code and most of the bugs are in the big hairy mess-of-code classes. But it's a place to start. And once you've started, it's a lot easier to put more and more things under test over time. -- JeffGrigg (a war-scarred veteran)
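For instance, a first test for a low-level function can be almost embarrassingly small. A minimal JUnit sketch, where DateRange is an invented stand-in for whatever small legacy utility class you pick first:

 import junit.framework.TestCase;

 // DateRange is an invented stand-in for some small legacy utility class.
 public class DateRangeTest extends TestCase {
     public void testLengthInDaysIsInclusive() {
         DateRange range = new DateRange("2001-01-01", "2001-01-03");
         // Pin down the current behaviour, whatever it is, so later
         // refactoring can be checked against it.
         assertEquals(3, range.lengthInDays());
     }
 }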
Building the function-level unit tests may even help you determine some of the initial refactoring. After all, if I have 6 functions that have identical unit tests (assuming the test suite for each function is complete and thorough), I most likely have a pattern running in the background. The key is to determine whether the way this pattern is implemented violates the OnceAndOnlyOnce rule. -- WyattMatthews
For bringing old code into maintenance, I know of only two choices for the areas you need to begin changing: write the tests, painfully, and know whether your maintenance worked; or don't write the tests, and be uncertain. Comprehensive AcceptanceTests can help, but will be slower and less immediate. Sorry. -- RonJeffries
When I first heard about some of these practices, I wanted to apply them but I was in the same boat: large code base, no UnitTests. It is easy to get frustrated and think that it is all or nothing... that you either go back, refactor and test the whole thing or you are doomed. The fact is, by starting testing, even late, you are better off than many other people who haven't. One step at a time... As Ron says, just do it as you need it in maintenance. I've discovered that going back and attempting to write UnitTests for legacy code is a good impetus for refactoring. True enough, it is tough sometimes to get enough of a handle on something to start testing it.
I had one case where I had some logic in a GUI COM control which really should have been in a domain class. Unit testing a GUI COM control is pointless, so I ended up doing ArchitecturalSubstitution. I duplicated the logic in a new class by writing UnitTests first, and I kept compiling the new class into the application also, calling the methods on the new object and the control in parallel and verifying that they did the same thing. When I saw, through interaction with the live app, that the new class was doing what I wanted, I dropped the calls to the control and deleted the methods, leaving the calls to the new class. I now had a new class with tests. At each point in the process, I asked myself: "What is the smallest step that I can take towards the goal that keeps everything working?" One could say that the legacy code was the spec for the new code. Of course, now that the tests exist, I can refactor the hell out of it fearlessly.
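A rough sketch of that parallel-run step; all of the names here are invented for illustration, and the real legacy piece was a GUI COM control rather than a plain Java interface. The point is only the shape: call both, report any disagreement, keep trusting the old answer until the new class has proven itself.

 // Pure sketch: run old and new side by side and report any disagreement.
 interface TotalSource {
     double computeTotal(String orderId);
 }

 public class ParallelTotals implements TotalSource {
     private final TotalSource legacyControl;  // wraps the old GUI control
     private final TotalSource newCalculator;  // the new, test-first domain class

     public ParallelTotals(TotalSource legacyControl, TotalSource newCalculator) {
         this.legacyControl = legacyControl;
         this.newCalculator = newCalculator;
     }

     public double computeTotal(String orderId) {
         double legacy = legacyControl.computeTotal(orderId);
         double fresh = newCalculator.computeTotal(orderId);
         if (legacy != fresh) {
             // Surface the disagreement but keep the live application working.
             System.err.println("Mismatch for " + orderId
                     + ": legacy=" + legacy + " new=" + fresh);
         }
         return legacy; // trust the old answer until the new class has proven itself
     }
 }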
Recently I also tried to introduce UnitTests in my current project, and I encountered something like a "ChickenAndEggProblem". Basically, most of the code was contained in three big C++ classes, each consisting of several thousand LOC. That wasn't testable at all, so I had to ReFactor. But, on the other hand, it was hard to ReFactor without having a set of UnitTests to tell me that I hadn't broken anything.
What I finally did was perform most of the refactorings without UnitTests, then, once the code started looking testable, support all further refactorings with tests.
Any other ideas on what would be better to do in such a situation? -- DmitryJemerov
Here is one solution that has worked for me. Initially, when you don't have unit tests, only do the trivial refactorings which are unlikely to break your code. For example: rename classes and methods; break large classes into multiple smaller classes with fewer methods; break large methods into smaller methods by just splitting them up mindlessly. Don't do anything too clever that might get you into trouble. Run a few quick AcceptanceTests after every refactoring to make sure you didn't break anything. -- AsimJalis
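"Splitting them up mindlessly" really can be that mindless: cut the method at its obvious seams, give the pieces names, and change nothing else. A tiny invented before/after sketch:

 // Before: one method doing three things in a row.
 int invoiceTotal(int[] lineAmounts, int discountPercent) {
     int sum = 0;
     for (int i = 0; i < lineAmounts.length; i++) {
         sum += lineAmounts[i];
     }
     int discount = sum * discountPercent / 100;
     return sum - discount;
 }

 // After: the same logic, cut at the seams and given names.
 int invoiceTotal(int[] lineAmounts, int discountPercent) {
     int sum = sumOf(lineAmounts);
     return sum - discountOn(sum, discountPercent);
 }

 private int sumOf(int[] amounts) {
     int sum = 0;
     for (int i = 0; i < amounts.length; i++) {
         sum += amounts[i];
     }
     return sum;
 }

 private int discountOn(int sum, int discountPercent) {
     return sum * discountPercent / 100;
 }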
I also wanted to say something about coming upon a huge system with no UnitTests written. This describes the VcapsProject exactly. I will tell you what I told Ford. "Unit test suites evolve over time. You constantly add tests for things you could never imagine going wrong. If you do not start adding UnitTests today then one year from now you will still not have a good unit test suite." By adding unit tests for 40% of the VcapsProject we reduced the bug reports by 40%. Coincidence? Just add unit tests whenever you touch part of the old system; nurture them and they will grow on their own. Prove there is a better way and rewrite the old system when you get management support. -- DonWells
Moved from UnitTest
I am currently trying to integrate UnitTests into my favorite training application, a chess program. I find it very difficult to tease the "Units" apart well enough to test them separately. Especially when trying to test chess pieces for legal moves, I find myself needing to use the board configuration as part of the scaffold. Does anyone have any good advice, links or experience on how to tease units apart for testing?
Also, in an application with as many legal situations as chess, I find myself wanting to test a lot of "useless cases": for each piece in a given position, I list the moves that should be legal, then try moving the piece to every square on the board and compare the results against that list. Is this overkill? Is it impractical? How can I do effective testing when there are as many legal board configurations as there are in chess? -- JohannesBrodwall
I would take it as a sign that how the pieces move should be decoupled from what they move on: The board configs would make for good high level tests, but the tests for how a piece can move should probably be teased apart from the tests which keep pieces on the board and off of each other, etc.
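One possible shape for that decoupling, sketched with invented names: let each piece answer "which squares could I reach from here on an empty board?" as pure geometry, and keep occupancy, blocking, and captures in a separate Board object. The geometry part then becomes trivially unit-testable:

 import java.util.ArrayList;
 import java.util.List;

 // Pure movement geometry: no board, no other pieces involved.
 class Bishop {
     // Squares a bishop could reach from (file, rank) on an empty 8x8 board.
     List<int[]> reachableFrom(int file, int rank) {
         List<int[]> squares = new ArrayList<int[]>();
         int[][] directions = { { 1, 1 }, { 1, -1 }, { -1, 1 }, { -1, -1 } };
         for (int d = 0; d < directions.length; d++) {
             int f = file + directions[d][0];
             int r = rank + directions[d][1];
             while (f >= 0 && f < 8 && r >= 0 && r < 8) {
                 squares.add(new int[] { f, r });
                 f += directions[d][0];
                 r += directions[d][1];
             }
         }
         return squares;
     }
 }

 class BishopGeometryTest extends junit.framework.TestCase {
     public void testBishopInCornerReachesSevenSquares() {
         assertEquals(7, new Bishop().reachableFrom(0, 0).size());
     }
 }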
The methodology for writing tests for existing code may be slightly different from CodeUnitTestFirst. BrianMarick's book TheCraftOfSoftwareTesting uses a test catalog to produce small numbers of high-quality tests for a particular specification. In this case, your existing code is your specification. Brian's catalog lists the common boundary tests for various constructs, such as parameter types, conditionals, and loops. The danger of using the code vs. a real spec is not catching defects of omission, but I think it is at least a good starting point.
Applied to your example, it might reduce your Bishop tests to checking that: 1. it searches a diagonal, 2. it stops before the top/side edge of the board, 3. it stops before one of your pieces, 4. it stops on an enemy piece (and is registered as a capture). Also, in my chess programming experience, it would be acceptable to treat the board state like a global input parameter for your other routines, since it is used so ubiquitously. Part of your TestFixture would be some methods for setting the board configuration to a known state before running each test. -- IanOsgood (I'm currently researching TestingFrameworks and working up the nerve to suggest adding a TestSuite to our current mature product.)
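A sketch of that kind of fixture in JUnit, with an invented Board API (place() and isLegal() are stand-ins for whatever the chess program already provides); setUp() resets the shared board state so every test starts from a known position:

 import junit.framework.TestCase;

 // Board, place() and isLegal() are invented names standing in for
 // whatever the chess program already provides.
 public class BishopMoveTest extends TestCase {
     private Board board;

     protected void setUp() {
         board = new Board();                    // empty board before every test
         board.place("white bishop", "c1");      // known starting position
     }

     public void testBishopStopsBeforeOwnPiece() {
         board.place("white pawn", "d2");
         assertFalse(board.isLegal("c1", "e3")); // blocked by own pawn
     }

     public void testBishopCapturesEnemyPiece() {
         board.place("black pawn", "e3");
         assertTrue(board.isLegal("c1", "e3"));  // capture on e3
     }
 }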
Try using a mechanism like the CyclomaticComplexityMetric to determine a sufficient level of testing for a method.
Ahhhhh!!! -- RobertDiFalco (running like hell from CCM's)
CCM-style test theory uses a crippling assumption, which XP blows away. This theory assumes the tester is not empowered to change the tested code. XP theory states that if metrics reveal your code is too complex to test, you SIMPLIFY THE CODE FIRST. -- PCP
I am not sure why you state that CCM makes any assumption about the role of tester vs developer. I find that it meshes quite nicely with the concept of refactoring code and provides some theory about why refactoring improves code. CCM provides a fairly objective measure of whether code complexity is reduced and is a good way to show the benefit of refactoring to doubters. -- WayneMack
One of the problems that I come across time and again when trying to unit test and/or refactor legacy code is the code itself. Take the example of a Struts Action class, which will probably have links to external services such as persistence. In this case (persistence), that means not only having a web container, but also a database which isn't going to change for the lifetime of the tests. You've probably also got a problem with having to set up a load of test data just to get the tests up and running.
One of the previous contributors said that 'you only change things like function names etc.', but this doesn't get rid of the fundamental problem of having to have those services in place before you can do proper refactoring.
Does anyone have any thoughts on how to get round this? -- MatthewFarwell
Spring makes it easier. If the Struts Action relies on Spring to get its db info (connections, DAOs, whatever), the test can inject MockObjects via Spring. -- EricHodges
Which is good if you're using Spring. Coming back to this after a bit more thought and experience, I think you should change some (hopefully small) bit of code to allow insertion of MockObjects at the point where it communicates with an outside system. For instance, create a DatabaseHelper.getConnection() method, change the legacy code to use it, and then insert a MockObject in your unit tests. So, if you have an Action class which kicks off two or three queries to a database, you can at least compare the SQL called against what you expected, check that it's being called in the same order, etc. It's not very fine-grained, I know. Has anyone else had experience of doing this sort of thing? -- MatthewFarwell
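Here is a rough sketch of that seam. DatabaseHelper, RecordingDatabaseHelper, and OrderAction are all invented names, and a real version would hand out JDBC connections rather than just executing SQL strings; the point is only the shape: one place the legacy code goes through, and a hand-rolled MockObject the test can swap in to record the SQL.

 import java.util.ArrayList;
 import java.util.List;

 // The seam: the legacy code is changed to route its SQL through this class.
 class DatabaseHelper {
     private static DatabaseHelper instance = new DatabaseHelper();

     static DatabaseHelper getInstance() { return instance; }
     static void setInstance(DatabaseHelper helper) { instance = helper; } // test hook

     void execute(String sql) {
         // the real implementation would obtain a connection and run the statement
     }
 }

 // Hand-rolled MockObject used only by the tests.
 class RecordingDatabaseHelper extends DatabaseHelper {
     final List<String> statements = new ArrayList<String>();
     void execute(String sql) { statements.add(sql); }
 }

 // Invented legacy-style Action, trimmed to the part that talks to the database.
 class OrderAction {
     void save(String orderId) {
         DatabaseHelper db = DatabaseHelper.getInstance();
         db.execute("INSERT INTO ORDERS (ID) VALUES (" + orderId + ")");
         db.execute("UPDATE AUDIT SET LAST_ORDER = " + orderId);
     }
 }

 class OrderActionTest extends junit.framework.TestCase {
     public void testSavingAnOrderIssuesTheExpectedSql() {
         RecordingDatabaseHelper recorder = new RecordingDatabaseHelper();
         DatabaseHelper.setInstance(recorder);

         new OrderAction().save("42");

         assertEquals(2, recorder.statements.size());
         assertTrue(recorder.statements.get(0).startsWith("INSERT INTO ORDERS"));
     }
 }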
See UnitTestingLegacyCodeExample, UnitTest