Redundancy Is Inertia

Do you like where you're going? If so, redundancy is your friend. If not, redundancy is your enemy.

Ways inertia is bad: a change to a general behavior must be repeated at every redundant statement of it, and missing one introduces an inconsistency.

Ways inertia is good: when the current behavior is correct, redundant statements resist accidental change, and a disagreement between them signals an error.


Complaint: "Redundancy is Inertia" seems to be assumed here, without any critical defense.

I believe the whole concept to be subtly fallacious. If there is inertia in code, it must be defined by the cost of change (akin to physical inertia determining how much effort it takes to stop or redirect an object in motion). Redundancy, then, can only be a cause of inertia when, as a consequence of the redundancy, changes must be duplicated across the code - e.g. for the modification of a general behavior. In these cases, OnceAndOnlyOnce will reduce inertia. However, in those cases where changing the behavior requires specialization, OnceAndOnlyOnce will increase inertia, as it requires breaking apart abstractions and refactorings that have already been performed in order to acquire a finer grain of control.
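To make the distinction concrete, here is a minimal Java sketch (all class names hypothetical, not from the original discussion) of both directions - duplication that must be changed in step, and a shared abstraction that must be broken apart to specialize:

 // The same discount rule stated twice: changing the general rule
 // means editing both classes in step - redundancy as inertia.
 class InvoiceReport {
     double discounted(double price) { return price * 0.90; }
 }
 class OrderTotal {
     double discounted(double price) { return price * 0.90; }
 }

 // After OnceAndOnlyOnce, one edit changes the general behavior:
 class Discount {
     static double apply(double price) { return price * 0.90; }
 }
 // ...but if invoices alone later need a special rate, the shared
 // abstraction must be broken apart again to regain that control.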


Some discussion originally on OnceAndOnlyOnce that makes sense in this light:

OnceAndOnlyOnce sounds like a nice principle, but taken at face value it would lead to untested code. Even XP advocates stating each fact at least twice, preferably three times. (See also RequirementsAndDesign)

The three places where a fact is stated are the program code, the unit tests, and the functional tests.

Since there will usually be some slack between the functional tests and the unit tests, the reality is that some facts are stated many more times. I can usually manage OnceAndOnlyOnce in program code, but I find it very hard to avoid duplication between tests. It's very tempting to say "Well, I think I've tested X elsewhere, but it can't hurt to assert it again". This results in one error causing multiple test failures, which some people consider a bad thing (personally it doesn't bother me too much).
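For instance, in a hypothetical JUnit sketch (the list stands in for real production code), re-asserting a fact already covered elsewhere means a single defect fails both tests:

 import static org.junit.Assert.assertEquals;
 import java.util.ArrayList;
 import java.util.List;
 import org.junit.Test;

 public class DuplicatedAssertionTest {
     @Test public void addIncreasesSize() {
         List<String> items = new ArrayList<>();
         items.add("apple");
         assertEquals(1, items.size());
     }

     @Test public void removeDecreasesSize() {
         List<String> items = new ArrayList<>();
         items.add("apple");
         assertEquals(1, items.size());   // "can't hurt to assert it again"
         items.remove("apple");
         assertEquals(0, items.size());
     }
 }

If add ever misbehaved, both tests would fail, even though only one fact is broken.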

If you have to put each fact in at least 2 places, then you can be creative about where those 2 places are. "Code + test" is one solution; "Code + DesignByContract" is another. Between these, you get "code + (test | static typing)". Using declarative systems and AI, it might be possible to reduce the need for the "code" fact. Fundamentally, though, if you want reliability, you must state each fact in more than one representation.
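As a sketch of the "Code + DesignByContract" pairing, plain Java assertions (enabled with the -ea flag) can state a fact a second time as a postcondition:

 // The fact "a square is never negative" is stated once in the
 // computation and once as a contract; each checks the other.
 double square(double x) {
     double result = x * x;
     assert result >= 0 : "postcondition violated: square(" + x + ")";
     return result;
 }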

In hardware design, we have an extra consideration: simulating the design takes a long time. So we don't want a test to fail due to errors in the test. So it is common to develop the tests on a software model (with each fact in 3 places) to get confidence in the tests, and only then commit to the thousands of hours of CPU time needed for the simulations.

-- DaveWhipp

This is a good point, Dave. I have another rule I use when information must be duplicated: when you must duplicate information, make sure you will automatically detect if the duplicated information falls out of sync. Tests do this implicitly, and by definition. There are other common cases where the duplicated information is not self-testing, though. Assertions are a classic tool for handling this kind of problem.
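A minimal Java sketch of that rule (the class is hypothetical): a deliberately redundant counter is made self-checking with an assertion, so drift is detected the moment it happens:

 import java.util.ArrayList;
 import java.util.List;

 class Inventory {
     private final List<String> items = new ArrayList<>();
     private int count = 0;   // redundant copy of items.size()

     void add(String item) {
         items.add(item);
         count++;
         // If the duplicated information ever falls out of sync,
         // this assertion reports it immediately (run with -ea).
         assert count == items.size() : "redundant count out of sync";
     }
 }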

-- CurtisBartley

This point is valid only if the code is perfect. Otherwise the two types of test add information to the overall system, and hence are not duplicating knowledge.

Also, to be fair, what you're critiquing here is not really OAOO (which is about code refactoring), but DontRepeatYourself, which is wider ranging. -- DavidThomas


The OAOO principle in XP refers specifically to the program. The program should express each idea once and only once - there should be no duplicate code.

Further, comparing the code:

 NUMBER square(NUMBER x) {
     return x * x;
 }

with the test:

 assertEquals(4, square(2));
 assertEquals(9, square(3));
 assertEquals(4, square(-2));
 assertEquals(9, square(-3));

we see no duplication of fact (though the test could be optimized OAOO-wise). As for duplication of tests of specific things, there's no inherent objection, except, of course, that you have to find and change all the places when a change is needed. -- RonJeffries

OnceAndOnlyOnce applies ever-so-much-more-so when it comes to the automated tests themselves. Automated tests by their very nature have a high degree of similarity from one test to the next, as in Ron's example above. A good way to doom a testing regime to failure is to not require refactoring of the test code. I have seen this nearly kill several automated test efforts.
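As a sketch of such refactoring (not necessarily the form used in practice), Ron's four asserts above share a single shape that a data-driven loop can state once:

 // Each input/expected pair appears once; the assertion shape
 // appears once; adding a case is a one-line change.
 int[][] cases = { {2, 4}, {3, 9}, {-2, 4}, {-3, 9} };
 for (int[] c : cases) {
     assertEquals(c[1], square(c[0]));
 }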

In one case, a testing effort would have produced our company's largest software product ever without some rude intervention on my part. After doing a little math I was able to project the size of the testware by project's end, and it really opened some eyes. They started refactoring, and reduced the size of each test on average by an order of magnitude! -- EricRunquist

