Parameterized Test
How do we reduce Test Code Duplication when we have the same test logic in many tests?
We pass the information needed to do fixture setup and result verification to a utility method that implements the entire test lifecycle.
Sketch Parameterized Test embedded from Parameterized Test.gif
Testing can be very repetitious not only because we must run the same test over and over again but also because many of the tests are only slightly different. For example, we might want to run essentially the same test with slightly different system inputs and verify that the actual output varies accordingly. Each of these tests would consist of the exact same steps. While having this many tests is excellent for ensuring good code coverage, it is not so good for test maintainability. Any change made to the algorithm of one of these tests must be propagated to all the similar tests.
A Parameterized Test is a way to reuse all the test logic in many Test Methods (page X).
How It Works
The solution, of course, is to factor out all the commonality into a utility method. When the logic which is factored out includes all parts of the entire Four-Phase Test (page X) life-cycle (fixture setup, exercise SUT, result verification and fixture teardown), we call the resulting utility method a Parameterized Test. This gives us the best coverage with the least test code to maintain and makes it very easy to add additional tests as they are needed.
A number of Test Methods each consist of a single call to a Test Utility Method (page X) that includes all parts of the entire Four-Phase Test life-cycle (fixture setup, exercise SUT, result verification and fixture teardown). The Test Methods pass in as parameters any information that the Parameterized Test requires to run and which varies from test to test.
A test that would otherwise require a series of complex steps can be reduced to a single line of code if the right utility method is available to call. As we detect similarities between our tests, we can factor out the commonalities into utility methods that take only what differs from test to test as their arguments.
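To make this concrete, here is a minimal sketch in JUnit 3 style. It assumes a hypothetical Calculator SUT (similar to the one used in the Loop-Driven Test examples below); the class and method names are illustrative only. Each Test Method is a single call that passes in the inputs and expected result that vary from test to test; the Parameterized Test does everything else.

import junit.framework.TestCase;

public class CalculatorParameterizedTest extends TestCase {
   public void testAddSmallNumbers()    { verifyCalculation( 1,  2,   3); }
   public void testAddLargerNumbers()   { verifyCalculation(40, 60, 100); }
   public void testAddNegativeNumbers() { verifyCalculation(-2, -3,  -5); }

   // the Parameterized Test: one utility method containing the whole life-cycle
   private void verifyCalculation(int a, int b, int expectedSum) {
      Calculator sut = new Calculator();   // fixture setup
      int actual = sut.calculate(a, b);    // exercise SUT
      assertEquals(expectedSum, actual);   // result verification
      // no explicit teardown needed; the fixture is garbage-collected
   }
}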
When To Use It
We can use a Parameterized Test whenever we have Test Code Duplication (page X) caused by several tests that implement the same test algorithm but with slightly different data. The data that differs becomes the arguments passed to the Parameterized Test and the logic is encapsulated by it. Parameterized Test also helps avoid Obscure Test (page X); by reducing the number of times the same logic is repeated it can make the Testcase Class (page X) much more compact. Parameterized Test is also a good stepping stone to a Data-Driven Test (page X); the Parameterized Test's name maps to the verb or "action word" and the parameters are the attributes.
If our extracted utility method doesn't do any fixture setup, it is called a Verification Method (see Custom Assertion on page X); if it doesn't exercise the system under test (SUT) either, it is a Custom Assertion.
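The following sketch (again JUnit 3 style, with the same hypothetical Calculator SUT and illustrative method names) shows the three levels of this distinction: the Custom Assertion only verifies, the Verification Method exercises the SUT and verifies, and the Parameterized Test performs the whole life-cycle.

import junit.framework.TestCase;

public class SumTest extends TestCase {
   // Custom Assertion: result verification only; the caller has already exercised the SUT
   private void assertSumEquals(int expectedSum, int actualSum) {
      assertEquals("sum", expectedSum, actualSum);
   }

   // Verification Method: exercises the SUT and verifies, but the caller sets up the fixture
   private void verifySum(Calculator sut, int a, int b, int expectedSum) {
      int actualSum = sut.calculate(a, b);      // exercise SUT
      assertSumEquals(expectedSum, actualSum);  // verify result
   }

   // Parameterized Test: the whole life-cycle - setup, exercise, verify (and teardown if needed)
   private void testSum(int a, int b, int expectedSum) {
      Calculator sut = new Calculator();        // fixture setup
      verifySum(sut, a, b, expectedSum);        // exercise SUT and verify result
   }

   // an example Test Method using the Parameterized Test:
   public void testAddition() { testSum(1, 2, 3); }
}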
Implementation Notes
We need to make sure we give the Parameterized Test an Intent Revealing Name[SBPP] so that readers of the test will understand what it is doing. The name should imply that it includes the whole life-cycle to avoid any confusion. One convention is to start or end the name with "test"; the presence of parameters conveys the fact that the test is parameterized. Most members of the xUnit family that implement Test Discovery (page X) will only create Testcase Objects (page X) for "no arg" methods that start with "test", so this shouldn't prevent us from starting our Parameterized Test names with "test". At least one member of the xUnit family implements Parameterized Test at the Test Automation Framework (page X) level: MbUnit. Extensions are becoming available for other members of the family, with DDSteps for JUnit being the first one I have encountered; I expect more to appear in the near future.
Testing zealots would advocate writing a Self-Checking Test (see Goals of Test Automation on page X) to verify the Parameterized Test. The benefit of doing so is obvious: increased confidence in our tests. In most cases it isn't that hard to do, although it is a bit harder than writing unit tests for a Custom Assertion because of the interaction with the SUT; we will likely need to mock out the SUT so that we can control what it returns.
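Here is a rough sketch of what such a self-test could look like, again using the hypothetical Calculator SUT. It assumes Calculator can be subclassed and that the Parameterized Test obtains the SUT from an instance field, precisely so the self-test can substitute a Test Stub that returns a wrong answer; we then confirm that the Parameterized Test raises an assertion failure.

import junit.framework.AssertionFailedError;
import junit.framework.TestCase;

public class ParameterizedTestSelfTest extends TestCase {
   private Calculator sut = new Calculator();     // real SUT by default

   // the Parameterized Test being verified:
   private void verifyCalculation(int a, int b, int expectedSum) {
      int actual = sut.calculate(a, b);           // exercise SUT
      assertEquals("sum", expectedSum, actual);   // verify result
   }

   public void testVerifyCalculation_passesOnCorrectSum() {
      verifyCalculation(1, 2, 3);                 // real Calculator should return 3
   }

   public void testVerifyCalculation_detectsWrongSum() {
      sut = new Calculator() {                    // Test Stub forces a wrong answer
         public int calculate(int a, int b) { return -1; }
      };
      try {
         verifyCalculation(1, 2, 3);
         fail("verifyCalculation should have failed on the wrong sum");
      } catch (AssertionFailedError expected) {
         // the Parameterized Test correctly detected the incorrect result
      }
   }
}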
Variation: Tabular Test
Several early reviewers wrote to me about a variation they use regularly: the Tabular Test. The essence of this is the same as doing a Parameterized Test except that the entire table of values is in a single Test Method. Unfortunately, this makes the test an Eager Test (see Assertion Roulette on page X) because it verifies many test conditions. This isn't a problem when all the tests are passing but it does lead to a lack of Defect Localization (see Goals of Test Automation) when one of the rows fails.
Another potential problem is that "row tests" may depend on each other either on purpose or by accident because they are running on the same Testcase Object; see Incremental Tabular Test.
Despite there being a number of potential issues, it can be a very effective way to test. At least one member of the xUnit family implements Tabular Test at the framework level: MbUnit provides an attribute [RowTest] to indicate that a test is a Parameterized Test and another attribute [Row(x,y,...)] to specify the parameters to be passed to it. Perhaps it will be ported to other members of the family? (Hint, Hint!)
Variation: Incremental Tabular Test
This is the variant of Tabular Test in which we deliberately build on the fixture left over by the previous rows of the test. It is a deliberate form of Interacting Tests (see Erratic Test on page X) called Chained Tests (page X), except that all the tests are within the same Test Method. The steps within the Test Method act somewhat like the steps of a "DoFixture" in Fit but without individual reporting of failed steps (because xUnit typically terminates the Test Method on the first failed assertion).
Variation: Loop-Driven Test
When we want to test with all the values in a list or range, we can call the Parameterized Test from within a loop that iterates over all the values in the list or range. By nesting loops within loops, we can verify the behavior of the SUT with combinations of input values. The main requirement for this type of test is that we can either enumerate the expected result for each input value (or combination) or use a Calculated Value (see Derived Value on page X) without introducing Production Logic in Test (see Conditional Test Logic on page X). A Loop-Driven Test has many of the issues of a Tabular Test because we are hiding many tests inside a single Test Method (and therefore a single Testcase Object).
Motivating Example
The following is an example of some of the runit (Ruby Unit) tests from the web site publishing infrastructure I built in Ruby while writing this book. All the Simple Success Tests (see Test Method) for my cross-referencing tags went through the same sequence of steps: defining the input XML and the expected HTML, mocking out the output file, setting up the handler for the XML, extracting the resulting HTML, and comparing it with the expected HTML.
def test_extref
   # setup
   sourceXml = "<extref id='abc'/>"
   expectedHtml = "<a href='abc.html'>abc</a>"
   mockFile = MockFile.new
   @handler = setupHandler(sourceXml, mockFile)
   # execute
   @handler.printBodyContents
   # verify
   assert_equals_html( expectedHtml, mockFile.output,
                       "extref: html output")
end

def testTestterm_normal
   sourceXml = "<testterm id='abc'/>"
   expectedHtml = "<a href='abc.html'>abc</a>"
   mockFile = MockFile.new
   @handler = setupHandler(sourceXml, mockFile)
   @handler.printBodyContents
   assert_equals_html( expectedHtml, mockFile.output,
                       "testterm: html output")
end

def testTestterm_plural
   sourceXml = "<testterms id='abc'/>"
   expectedHtml = "<a href='abc.html'>abcs</a>"
   mockFile = MockFile.new
   @handler = setupHandler(sourceXml, mockFile)
   @handler.printBodyContents
   assert_equals_html( expectedHtml, mockFile.output,
                       "testterms: html output")
end
Example TestCodeDuplicationRuby embedded from Ruby/CrossrefHandlerTest.rb
Even though we have already factored out much of the common logic into the setupHandler method, there is still some Test Code Duplication. Since I had at least twenty tests that followed this same pattern (with lots more on the way), I felt it was worth making these tests really easy to write.
Refactoring Notes
Refactoring to Parameterized Test is a lot like refactoring to Custom Assertion. The main difference is that the code to which we apply the Extract Method[Fowler] refactoring also includes the call(s) to the SUT made during the exercise SUT phase of the test. Because these tests are pretty much identical once the fixture and expected results are defined, everything else can be extracted into the Parameterized Test.
Example: Parameterized Test
In the following tests, we have reduced each test to initializing two variables and calling a utility method that does all the real work. This utility method is a Parameterized Test.
def test_extref
   sourceXml = "<extref id='abc' />"
   expectedHtml = "<a href='abc.html'>abc</a>"
   generateAndVerifyHtml(sourceXml, expectedHtml, "<extref>")
end

def test_testterm_normal
   sourceXml = "<testterm id='abc'/>"
   expectedHtml = "<a href='abc.html'>abc</a>"
   generateAndVerifyHtml(sourceXml, expectedHtml, "<testterm>")
end

def test_testterm_plural
   sourceXml = "<testterms id='abc'/>"
   expectedHtml = "<a href='abc.html'>abcs</a>"
   generateAndVerifyHtml(sourceXml, expectedHtml, "<plural>")
end
Example ParamterizedTestUsage embedded from Ruby/CrossrefHandlerTest.rb
The succinctness of these tests is made possible by defining the Parameterized Test as follows:
def generateAndVerifyHtml( sourceXml, expectedHtml, message, &block)
   mockFile = MockFile.new
   sourceXml.delete!("\t")
   @handler = setupHandler(sourceXml, mockFile )
   block.call unless block == nil
   @handler.printBodyContents
   actual_html = mockFile.output
   assert_equal_html( expectedHtml, actual_html,
                      message + "html output")
   actual_html
end
Example ParamterizedTestMethod embedded from Ruby/HandlerTest.rb
What distinguishes this Parameterized Test from a Verification Method is that it contains the first three phases of the Four-Phase Test (setup, exercise and verify), whereas the latter only does the exercise and verify phases. Note that our tests did not need the teardown phase because we are using Garbage-Collected Teardown (page X).
Example: Independent Tabular Test
Here's an example of the same tests coded as a single Independent Tabular Test:
def test_a_href_Generation
   row( "extref"   , "abc", "abc.html", "abc" )
   row( "testterm" , 'abc', "abc.html", "abc" )
   row( "testterms", 'abc', "abc.html", "abcs")
end

def row( tag, id, expected_href_id, expected_a_contents)
   sourceXml = "<" + tag + " id='" + id + "'/>"
   expectedHtml = "<a href='" + expected_href_id + "'>" +
                  expected_a_contents + "</a>"
   msg = "<" + tag + "> "
   generateAndVerifyHtml( sourceXml, expectedHtml, msg)
end
Example SimpleTabularTest embedded from Ruby/CrossrefHandlerTest.rb
Isn't this a nice compact representation of the various test conditions? I simply did an Inline Temp[Fowler] refactoring on the local variables sourceXml and expectedHtml in the argument list of generateAndVerifyHtml and "munged" the various Test Methods together into one. Most of the work was something we won't have to do in real life: squeezing the table down to fit within the page width limit for this book. That forced me to abridge the text in each row and rebuild the source XML and the expected HTML within the row method. I chose the name row to better align with the MbUnit example below, but it really could have been called anything.
Unfortunately, from the Test Runner's (page X) perspective, this is a single test, unlike the earlier examples. Because the tests are all within the same Test Method, a failure in any row other than the last will cause a loss of information. In this example we won't have to worry about Interacting Tests because generateAndVerifyHtml builds a new test fixture each time it is called, but we do have to be aware of that possibility.
Example: Incremental Tabular Test
Because a Tabular Test is defined in a single Test Method, it will run on a single Testcase Object. This opens up the possibility of building up a series of actions. Here's an example provided by Clint Shank on his blog:
public class TabularTest extends TestCase {
   private Order order = new Order();
   private static final double tolerance = 0.001;

   public void testGetTotal() {
      assertEquals("initial", 0.00, order.getTotal(), tolerance);
      testAddItemAndGetTotal("first",  1, 3.00,  3.00);
      testAddItemAndGetTotal("second", 3, 5.00, 18.00);
      // etc.
   }

   private void testAddItemAndGetTotal(
         String msg, int lineItemQuantity,
         double lineItemPrice, double expectedTotal) {
      // setup
      LineItem item = new LineItem( lineItemQuantity, lineItemPrice);
      // exercise SUT
      order.addItem(item);
      // verify total
      assertEquals(msg, expectedTotal, order.getTotal(), tolerance);
   }
}
Example IncrementalTabularTest embedded from java/com/xunitpatterns/misc/TabularTest.java
Note how each row of the Incremental Tabular Test builds on what was already done by the previous row.
Example: Tabular Test with Framework Support (MbUnit)
Here's an example from the MbUnit documentation that shows how to use the [RowTest] attribute to indicate that a test is a Parameterized Test and another attribute [Row(x,y,...)] to specify the parameters to be passed to it.
[RowTest()]
[Row(1,2,3)]
[Row(2,3,5)]
[Row(3,4,8)]
[Row(4,5,9)]
public void tAdd(Int32 x, Int32 y, Int32 expectedSum)
{
   Int32 Sum;
   Sum = this.Subject.Add(x,y);
   Assert.AreEqual(expectedSum, Sum);
}
Example ParameterizedMbUnitRowTest embedded from CSharp/MbUnitExamples/ParameterizedMbUnitRowTest.cs
Except for the syntactic sugar of the [Row(x,y,...)] attributes, this sure looks similar to the previous example but it doesn't suffer from the loss of Defect Localization because each row is considered a separate test. It would be pretty simple to convert the previous example to this format using "find and replace" in a text editor.
Example: Loop-Driven Test (Enumerated Values)
Here is an example of a test that uses a loop to exercise the SUT with various sets of input values.
public void testMultipleValueSets() {
   // Setup Fixture:
   Calculator sut = new Calculator();
   TestValues[] testValues = {
         new TestValues(1,2,3),
         new TestValues(2,3,5),
         new TestValues(3,4,8), // special case!
         new TestValues(4,5,9) };
   for (int i = 0; i < testValues.length; i++) {
      TestValues values = testValues[i];
      // Exercise SUT:
      int actual = sut.calculate( values.a, values.b);
      // Verify result:
      assertEquals(message(i), values.expectedSum, actual);
   }
}

private String message(int i) {
   return "Row " + String.valueOf(i);
}
Example LoopingTest embedded from java/com/xunitpatterns/misc/LoopingTest.java
In this case we enumerated the expected value for each set of test inputs. This avoids Production Logic in Test.
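The TestValues holder class used above is not shown; a minimal sketch of what it is assumed to look like is simply three fields populated by the constructor:

// assumed definition of the TestValues holder (nested in the Testcase Class)
private static class TestValues {
   final int a;
   final int b;
   final int expectedSum;
   TestValues(int a, int b, int expectedSum) {
      this.a = a;
      this.b = b;
      this.expectedSum = expectedSum;
   }
}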
Example: Loop-Driven Test (Calculated Values)
This next example is a bit more complex:
public void testCombinationsOfInputValues() {
   // Setup Fixture:
   Calculator sut = new Calculator();
   int expected; // TBD inside loops
   for (int i = 0; i < 10; i++) {
      for (int j = 0; j < 10; j++) {
         // Exercise SUT:
         int actual = sut.calculate( i, j );
         // Verify result:
         if (i==3 & j==4) // special case
            expected = 8;
         else
            expected = i+j;
         assertEquals(message(i,j), expected, actual);
      }
   }
}

private String message(int i, int j) {
   return "Cell( " + String.valueOf(i) + "," + String.valueOf(j) + ")";
}
Example ProductionLogicInTest embedded from java/com/xunitpatterns/misc/LoopingTest.java
Unfortunately, it suffers from Production Logic in Test.
Further Reading
See the documentation for MbUnit for more information on the [RowTest] and [Row()] attributes. Likewise, see http://www.ddsteps.org for a description of the DDSteps extension for JUnit; while its name suggests a tool that supports Data-Driven Testing, the examples given are Parameterized Tests. More arguments for Tabular Test can be found on Clint Shank's blog at http://clintshank.javadevelopersjournal.com/tabulartests.htm.