Test Fixture

Behavior reused by tests.

For example, because RubyOnRails invests modular (ModelViewController) behavior in its database records, quick fixture systems that assemble those records into a light, fast test database can propel tests. (See also CommitRollbackDatabaseAutoTests)
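
A minimal RubyOnRails-flavored sketch of that idea, with assumed names (the users fixture file, its alice entry, and the login column are hypothetical):

  class UserTest < ActiveSupport::TestCase
    fixtures :users                 # load test/fixtures/users.yml into the test database

    def test_finds_fixture_record
      assert_equal "alice", users(:alice).login   # users(:alice) is the loaded fixture row
    end
  end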


One example is the TestSuite itself:

A class whose primary purpose is to host one or more TestMethods and their related data.

Sets up and tears down scenarios for TestCases to test.
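
A minimal RubyLanguage sketch of that shape, assuming Test::Unit and a hypothetical Source class like the tokenizer example further down:

  require 'test/unit'

  class SourceTest < Test::Unit::TestCase
    def setup                        # build the scenario before each test method
      @source = Source.new("a b\n d")
    end

    def teardown                     # tear it down again afterwards
      @source = nil
    end

    def test_pulls_first_token
      assert_equal("a", @source.pull_next_token)
    end
  end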

In electronics testing, a test fixture is a device or setup that holds the unit under test while a test case is run against it. Once the circuit board or component is loaded into the test fixture, tests are run, either automatically or manually.


UncleBob (Robert C. Martin) wrote (probably in FitNesse's test cases):

 checkWord(true , "dot separated",       "WidgetRoot?.ChildPage?");
 checkWord(true , "three level",         "GrandPa?.FatheR.SoN"  );
 checkWord(true , "Dot at start",        ".RootPage?.ChildPage?" );
 checkWord(false, "Lower case at start", "lowerCaseAtStart");
Finally, but perhaps most important: What exactly did you mean by "dot separated"? Perhaps there is a definition somewhere outside the tests? -- AG (the usual "I document more than thou" argument)

I suspect the answer is in checkWord. -- MichaelFeathers

checkWord() isn't the function under test; it's a TestFixture. (I'm using the term at a smaller scale than Fit and FitNesse use it.)

Start with this, using off-the-shelf *Unit abilities (switching to RubyLanguage):

  verified = parseWord("WidgetRoot?.ChildPage?")
  if false == verified then
    puts "WidgetRoot?.ChildPage?"
    puts "dot separated text failed in parseWord, and returned " + verified.to_s()
    assert(false)
  end
We'll need to type that more than twice, to test all kinds of words, and whether our parser likes or dislikes them.

So we roll it into a Test Fixture, which is "behavior re-used by a test". The parts that vary are the input string, the polarity (true or false), and the diagnostic text element "dot separated text".
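
In RubyLanguage that fixture might look like this (a minimal sketch; check_word and parseWord are assumed names):

  # Behavior re-used by the tests: the varying parts become parameters.
  def check_word(expected, diagnostic, input)
    verified = parseWord(input)
    if verified != expected
      puts input
      puts diagnostic + " failed in parseWord, and returned " + verified.to_s()
      assert(false)
    end
  end

  def test_word_grammar
    check_word(true,  "dot separated",       "WidgetRoot?.ChildPage?")
    check_word(false, "Lower case at start", "lowerCaseAtStart")
  end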

The Truth about parseWord()'s dot affinity lies within its source, in the lower-level tests that forced its source to exist, and in these tests. Documents may indicate these facts, but they are not the Truth.

Such document-worthy facts exist nearly everywhere. Some, on any project, will also be in documents. But any project contains too many small assumptions for documenting all of them to automatically be worthwhile. No project documents every stinkin' one, and no document will ever "read my mind and answer any question I have". Code will.

The best we should hope for is that the assumptions which cause friction get more tests, more e-mails, more Wiki traffic, etc. And more diagnostic text elements in our test fixtures.

The best CustomerAcceptanceTest fixture is one that lets customers play "what if", so the program documents its own behavior, accurately.

-- PhlIp
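
One way to read that "what if" idea: give customers an editable list of inputs and echo back what the program actually does with each one. A minimal sketch (WHAT_IF_WORDS and report_what_if are hypothetical names; parseWord is the function under test from above):

  WHAT_IF_WORDS = [
    "WidgetRoot?.ChildPage?",
    "lowerCaseAtStart",
  ]

  def report_what_if
    WHAT_IF_WORDS.each do |word|
      verdict = parseWord(word) ? "accepted" : "rejected"
      puts word + " -> " + verdict      # the program documents its own behavior
    end
  end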


Here's another TestFixture example. Suppose our tokenizer must skip over comments. So one test looks like this:

    TEST_(TestCase, elideComments)
    {
        Source aSource("a b\n //c\n d");
        string token = aSource.pullNextToken();  CPPUNIT_ASSERT_EQUAL("a", token);
        token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("b", token);
     // token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("c", token);
        token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("d", token);
        token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("", token);  // EOF!

        Source bSource("a b\n//c \n d");
        token = bSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("a", token);
        token = bSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("b", token);
     // token = bSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("c", token);
        token = bSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("d", token);
        token = bSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("", token);  // EOF!

        Source cSource("a b\n // c \"neither\" \n d");
        token = cSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("a", token);
        token = cSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("b", token);
     // token = cSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("c", token);
        token = cSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("d", token);
        token = cSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("", token);  // EOF!
    }
That breaks the style rule "don't do too much at the same time", as well as OnceAndOnlyOnce. Refactor:

    struct TestTokens : TestCase
    {
        void test_a_b_d(string input) {
            Source aSource(input);
            string token = aSource.pullNextToken();  CPPUNIT_ASSERT_EQUAL("a", token);
            token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("b", token);
         // token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("c", token);
            token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("d", token);
            token = aSource.pullNextToken();         CPPUNIT_ASSERT_EQUAL("", token);  // EOF!
        }
    };

    TEST_(TestTokens, elideComments)
    {
        test_a_b_d("a b\n //c\n d");
        test_a_b_d("a b\n//c \n d");
        test_a_b_d("a b\n // c \"neither\" \n d");
    }
Now the test inputs are different, but the assertions are the same. The inputs are covering the boundary conditions -- comment at end of line, comment at beginning of line, etc.

But the assertions don't couple to eliding comments. They only claim that any 'c' must be missing from the token stream. And that permits us to reuse the fixture with any token stream that must contain a, b, and d. This test shows the tokenizer also skips preprocessor statements:

    TEST_(TestTokens?, elidePreprocessorStatements)
    {
        test_a_b_d("a b\n #c\n  d");
        test_a_b_d("a b\n#c \n  d");
        test_a_b_d("a b\n # c \"neither\" \n  d");
        test_a_b_d("a b\n #\n  d");
        test_a_b_d("a b\n#\n  d");
    }


See TestResource


CategoryTesting

