Journalling Pattern

The Journalling pattern is a testing pattern (See TestingPatterns and CategoryTesting). It is described in DesignPatternsForDistributedObjects.

Record a sequence of events, and ensure that the expected events occurred and that they occurred in a valid sequence.

I have used this pattern in at least two ways. In the first, I was testing some code which invoked objects through a particular interface that had open, close, flush and doStuff methods. If my code was working correctly, it would call open, then doStuff and flush zero or more times, then close. I wrote an implementation of the interface which recorded which methods had been called in an ordered list. After the test scenario was completed, I was able to check that the order of the method calls conformed to the acceptable grammar. (This case was easy, but I can imagine that in more complex situations I would like some support from the test framework to ensure that the call sequence conformed to a more complicated grammar, e.g. an XML parser invoking a SaxDriver.)
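Here is a minimal sketch of that first use, in Java. Only the open, doStuff, flush and close methods come from the description above; the Sink interface name, the JournallingSink class and the conformsToGrammar check are illustrative, not from any particular framework.

 import java.util.ArrayList;
 import java.util.List;

 interface Sink {
     void open();
     void doStuff();
     void flush();
     void close();
 }

 class JournallingSink implements Sink {
     final List<String> journal = new ArrayList<String>();

     public void open()    { journal.add("open"); }
     public void doStuff() { journal.add("doStuff"); }
     public void flush()   { journal.add("flush"); }
     public void close()   { journal.add("close"); }

     // Check the recorded calls against the grammar: open (doStuff | flush)* close
     boolean conformsToGrammar() {
         if (journal.size() < 2) return false;
         if (!"open".equals(journal.get(0))) return false;
         if (!"close".equals(journal.get(journal.size() - 1))) return false;
         for (int i = 1; i < journal.size() - 1; i++) {
             String call = journal.get(i);
             if (!call.equals("doStuff") && !call.equals("flush")) return false;
         }
         return true;
     }
 }

In a test, a JournallingSink is handed to the code under test in place of the real implementation, and the test asserts conformsToGrammar() after the scenario has run.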

The second use of this pattern is in testing units which generate events, e.g. a publish and subscribe service. In this case, the events are arbitrary objects, and I wasn't particularly concerned about the order they arrived in. I implemented a subscriber which recorded events sent to it, and at the end of the test scenario, checked that the list contained equivalent elements to those that were published.
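A similar sketch for the second use, again in Java and again with illustrative names: the Subscriber interface and its onEvent method stand in for whatever the publish and subscribe service actually requires, and the order-insensitive check relies on the events' equals() methods.

 import java.util.ArrayList;
 import java.util.List;

 interface Subscriber {
     void onEvent(Object event);
 }

 class RecordingSubscriber implements Subscriber {
     final List<Object> received = new ArrayList<Object>();

     public void onEvent(Object event) { received.add(event); }

     // Order-insensitive check: every published event arrived exactly once,
     // with nothing missing and nothing extra.
     boolean receivedExactly(List<Object> published) {
         List<Object> remaining = new ArrayList<Object>(received);
         for (Object event : published) {
             if (!remaining.remove(event)) return false;
         }
         return remaining.isEmpty();
     }
 }

At the end of the test scenario, the test asserts receivedExactly(publishedEvents).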


Discussion about defining correct output with a grammar has moved to VerifyOutputWithGrammar.


There's a small pattern language called PatternsForLoggingDiagnosticMessages. It seems to cover this ground rather well. (See PatternLanguagesOfProgramDesign 3, Part4, chapter 16, page 277.)


When I HaveThisPattern, I usually DoTheSimplestThingThatCouldPossiblyWork, which is to "diff" the output with the output of the previous successful run. (That is, I use the Unix "diff" utility, which displays the differences between two files. On Windows, see "windiff", which comes with VisualCeePlusPlus.) Sometimes I have to "tweak" the input or output files to minimize meaningless differences.

When this approach stops working, I go back to the source code and put the checks there: if method calls must be made in a particular sequence, then there is a lot to be said for introducing a state machine and checking for correct usage in the object itself. I often do state checking in objects with CeeLanguage/CeePlusPlus "assert()" statements, which validate in DEBUG builds with no run-time overhead in RELEASE builds. I often use...

   enum { Constructed, Open, ..., Closed } eState;
in DEBUG versions of C++ programs. Catch errors as early as possible - like during the offending call.

-- JeffGrigg
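A minimal sketch of the same state-checking idea, here in Java rather than C++: Java assert statements are only checked when assertions are enabled (e.g. with the -ea flag), which roughly mirrors the DEBUG/RELEASE distinction above. The class and state names are illustrative.

 class JournalledFile {
     private enum State { CONSTRUCTED, OPEN, CLOSED }
     private State state = State.CONSTRUCTED;

     void open() {
         assert state == State.CONSTRUCTED : "open() called twice or after close()";
         state = State.OPEN;
     }

     void doStuff() {
         assert state == State.OPEN : "doStuff() called before open() or after close()";
         // ... real work here ...
     }

     void close() {
         assert state == State.OPEN : "close() called before open() or called twice";
         state = State.CLOSED;
     }
 }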


Actually comparing the files byte for byte is often even simpler. In Smalltalk, for example, you might do:

 self should: 
  [self outputString = (File named: 'savedResult.txt') contentsOfEntireFile]
-- RonJeffries


This pattern is a balance between intrusive testing, where you change the source, and non-intrusive testing, where you don't. Making an object record a journal is kind-of minimally intrusive. Once you've got the journal, you can analyse it in various ways.

Dave, I have not yet used it where the object being tested recorded the journal. As I am using it to check the output of objects, I record the journal in the object to which the output is sent. DesignForTestability suggests that you should be able to change that object easily. I have always made the journalling object part of the unit test, not part of the implementation. -- JF

It's a common pattern with profilers. At least in the C++ world, profilers often write out a journal while the program is running, containing information about calls and times, and then post-process it to produce totals and counts, maybe even graphs and dynamic call hierarchies. So this is not just a TestingPattern. -- DaveHarris


The more common use of the term "journaling" is to make an audit trail that can be used to reconstruct the state of the system. While you can use this for testing, it is more general than that. This pattern gives a narrow meaning to a term that is already widely used in the software community. -- RalphJohnson

Hmmm, can you think of a better name? That's the trouble with the English language, too few words :-).

Journalling for Testing, perhaps. Pattern names don't have to be one word long.

KentBeck's TestDrivenDevelopment has a similar TestingPattern called LogString.


I think you need to be careful with many of the comments on this page - in my experience, all of the tests we wrote that compared against known outputs were extremely brittle when changes took place, and took a fair amount of time to fix. We had many comparisons to known HTML strings for servlets, and we found these were a nuisance to fix when output requirements changed. The original intent of this page says "expected events occurred" and "they occurred in a valid sequence" - I think the author who mentions the SaxDriver gives a very good example that's worth remembering.

For a while I've been trying to get us to use "Smart Results": rather than returning strings or writing directly to streams, the code writes something specific to a result object, e.g. result.writeIntro(someText); result.writeTotal(aTotal). Some colleagues recently pointed out that this fits nicely with the LawOfDemeter, as you pass in the result object and the code writes its known values to it. This makes testing very easy, as you can pass in a test version of the result object that conforms to the same interface but logs the order of the calls, or makes sure that specific things get written out. You also avoid having to parse or scan your output, which can be another nasty little chore. Sometimes you do have a specific output to match, and for this maybe a diff is appropriate - but remember the SaxExample? -- TimMackinnon
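A minimal sketch of the "Smart Results" idea in Java; the Result interface, its method names, and the SummaryReport class are illustrative, not taken from any particular library.

 import java.util.ArrayList;
 import java.util.List;

 interface Result {
     void writeIntro(String text);
     void writeTotal(int total);
 }

 // Test double: conforms to the same interface but journals the calls.
 class JournallingResult implements Result {
     final List<String> journal = new ArrayList<String>();
     public void writeIntro(String text) { journal.add("writeIntro(" + text + ")"); }
     public void writeTotal(int total)   { journal.add("writeTotal(" + total + ")"); }
 }

 // The code under test writes to whatever Result it is given.
 class SummaryReport {
     void render(Result result, int total) {
         result.writeIntro("Summary");
         result.writeTotal(total);
     }
 }

The test passes in a JournallingResult and checks its journal directly, so there is no rendered output to parse or scan.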


See also: UseTracing

