Minimal Fixture
The book has now been published and the content of this chapter has likely changed substantially.
Please see page 302 of xUnit Test Patterns for the latest information.
Also known as: Minimal Test Fixture
What fixture strategy should we use?
Use the smallest and simplest fixture possible for each test.
[Sketch: Minimal Fixture]
Every test needs some kind of test fixture. A key part of understanding a test is understanding the test fixture and how it influences the expected outcome of the test. Tests are much easier to understand if the fixture is small and simple.
Why We Do This
Minimal Fixture is important for achieving Tests as Documentation (see Goals of Test Automation on page X) and for avoiding Slow Tests (page X). A test that uses a Minimal Fixture will always be easier to understand than one whose fixture contains unnecessary or irrelevant information. This is true whether we are using a Fresh Fixture (page X) or a Shared Fixture (page X), although the effort to build a Minimal Fixture is typically higher with a Shared Fixture. Defining the Minimal Fixture is much easier for a Fresh Fixture because it need only serve a single test.
We design a fixture that includes only those objects that are absolutely necessary to express the behavior that the test verifies. Another way to phrase this is "If the object is not important to the test, it is important not to have it in the fixture."
To build a Minimal Fixture, ruthlessly remove anything from the fixture that does not help the test communicate how the SUT should behave.
There are two forms of "minimization" that can be considered.
- We can eliminate objects entirely. That is, don't even build the objects as part of the fixture. If the object isn't necessary to prove something about how the system under test (SUT) behaves, just don't include it at all.
- We can hide unnecessary attributes of the object when they don't contribute to the understanding of the expected behavior.
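These two forms of minimization can be sketched as follows. This is a minimal illustration using an invented `Invoice`/`Customer` domain, not code from the book; the class names and attributes are assumptions made purely for the example.

```python
# Hypothetical domain objects, invented only to illustrate the two minimizations.
class Customer:
    def __init__(self, name="a customer", address="123 Any Street"):
        self.name = name
        self.address = address

class Invoice:
    def __init__(self, customer):
        self.customer = customer
        self.line_items = []

    def add_line_item(self, product, quantity):
        self.line_items.append((product, quantity))

# Bloated fixture: builds objects and attributes the assertion never uses.
def test_new_invoice_has_no_line_items_bloated():
    customer = Customer(name="John Doe", address="1234 Main St")  # irrelevant detail
    shipping_address = "999 Elm St"  # object never used by the test: eliminate it
    invoice = Invoice(customer)
    assert invoice.line_items == []

# Minimal fixture: only what the verified behavior needs; the constructor
# defaults hide the "don't care" attributes of the customer.
def test_new_invoice_has_no_line_items():
    invoice = Invoice(Customer())
    assert invoice.line_items == []
```

The second test says, at a glance, exactly what matters: a freshly created invoice has no line items.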
A simple way to find out whether an object is necessary as part of the fixture is to remove it. If the test fails as a result, it was probably necessary in some way. However, it may have only been necessary as an argument to some method we are not interested in or as an attribute of an object that is never used. Including these kinds of objects as part of fixture setup definitely contributes to Obscure Test (page X). We can eliminate them either by hiding them (see below) or by eliminating the need for them by passing in Dummy Objects (page X) or by using Entity Chain Snipping (see Test Stub on page X). But if the object is actually accessed by the SUT as it is executing the logic under test, we may have no choice but to include it as part of the test fixture.
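For example, a Dummy Object can satisfy a parameter the SUT never actually uses in the tested path. The sketch below uses an invented `format_greeting` function as the SUT; it is an assumption for illustration, not an API from the book.

```python
class DummyLogger:
    """Dummy Object: fills the parameter list but should never be exercised.
    Raising on any attribute access proves the test doesn't depend on it."""
    def __getattr__(self, name):
        raise AssertionError("Dummy should never be used")

def format_greeting(name, logger):
    # Invented SUT: the logger is only touched on the empty-name path.
    if not name:
        logger.warn("empty name")
        return "Hello, stranger!"
    return f"Hello, {name}!"

def test_greeting_with_valid_name_ignores_logger():
    # No need to construct a real logger; the happy path never touches it.
    assert format_greeting("Alice", DummyLogger()) == "Hello, Alice!"
```

If the test passed a fully configured real logger instead, a reader might wrongly assume logging influences the expected outcome.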
Having determined that it is necessary for the execution of the test, we must now ask ourselves whether it is helpful in understanding the test. If we were to initialize it "off-stage", would that make it harder to understand the test? Would it lead to Obscure Test by acting as a Mystery Guest (see Obscure Test)? If so, we want to keep it visible. Boundary values are a good example of a case where we do want to keep the objects and attributes visible.
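A sketch of the boundary-value case, using an invented `is_adult` SUT and a hypothetical legal-age rule (both are assumptions for illustration):

```python
LEGAL_AGE = 18  # hypothetical business rule, for illustration only

def is_adult(age):
    # Invented SUT with a boundary at LEGAL_AGE.
    return age >= LEGAL_AGE

# Mystery Guest: the significant value is hidden off-stage in a helper,
# so the reader cannot tell why this test passes without digging.
def make_boundary_age():
    return 18

def test_boundary_obscured():
    assert is_adult(make_boundary_age())

# Boundary values kept visible: the literals 17 and 18 ARE the point.
def test_age_17_is_not_adult():
    assert not is_adult(17)

def test_age_18_is_adult():
    assert is_adult(18)
```

The last two tests document the boundary directly; hiding 17 and 18 behind helpers would trade away exactly the information the reader needs.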
If we have established that the object or attribute isn't necessary for understanding the test, we should make every effort to eliminate it from the Test Method (page X) (but not necessarily from the test fixture). Creation Methods (page X) are a common way of achieving this goal. We can hide the attributes of objects that don't affect the outcome of the test but which are needed for constructing the object by using Creation Methods to fill in all the "don't care" attributes with meaningful default values. We can also hide the creation of necessary depended-on objects within the Creation Methods. A good example of this occurs when writing tests that require badly formed objects as input (for testing a SUT with invalid inputs). In this case we don't want to confuse the issue by showing all the valid attributes of the object being passed to the SUT, of which there could be many; we want to focus on the invalid attribute. We can do this by using the One Bad Attribute (see Derived Value on page X) pattern to build malformed objects with a minimum of code: we call a Creation Method to construct a valid object and then replace a single attribute with the invalid value that we want to verify the SUT will handle correctly.
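A minimal sketch of a Creation Method combined with One Bad Attribute; the customer attributes and the `validate_customer` SUT are invented for the example, not taken from the book:

```python
def create_valid_customer(**overrides):
    """Creation Method (hypothetical attributes): supplies meaningful
    defaults for every "don't care" attribute of a valid customer."""
    attrs = dict(name="Valid Name", email="valid@example.com", age=30)
    attrs.update(overrides)
    return attrs

def validate_customer(customer):
    # Invented SUT: rejects customers with a malformed email address.
    return "@" in customer["email"]

# One Bad Attribute: start from a known-valid object and override just the
# field under test, so the single invalid value stands out in the test.
def test_rejects_customer_with_bad_email():
    bad_customer = create_valid_customer(email="not-an-email")
    assert validate_customer(bad_customer) is False
```

The test shows only the one attribute that matters; the Creation Method keeps the rest of the (valid but irrelevant) object off-stage.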
Copyright © 2003-2008 Gerard Meszaros all rights reserved