
Behavior Verification

The book has now been published and the content of this chapter has likely changed substantially.
Please see page 468 of xUnit Test Patterns for the latest information.
Also known as: Interaction Testing

How do we make tests self-checking when there is no state to verify?

We capture the indirect outputs of the SUT as they occur and compare them to the expected behavior.

Sketch: Behavior Verification (from Behavior Verification.gif)

A Self-Checking Test (see Goals of Test Automation on page X) must verify that the expected outcome has occurred without manual intervention by whoever is running the test. But what do we mean by "expected outcome"? The system under test (SUT) may or may not be "stateful"; if it is stateful, it may or may not be expected to end up in a different state after it has been exercised. The SUT may also be expected to invoke methods on other objects or components.

Behavior Verification involves verifying the indirect outputs of the SUT as it is being exercised.

How It Works

Each test specifies not only how the client of the SUT interacts with it during the exercise SUT phase of the test but also how the SUT should interact with the components on which it depends. This ensures that the SUT really is behaving as specified rather than just ending up in the correct post-exercise state.

Behavior Verification almost always involves interacting with or replacing a depended-on component (DOC) that the SUT interacts with at run time. The line between Behavior Verification and State Verification (page X) can get a bit blurry when the SUT stores its state in the DOC because both are layer-crossing tests. We can distinguish the two cases by whether we are verifying the post-test state in the DOC (State Verification) or verifying the method calls made by the SUT on the DOC (Behavior Verification).
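
To make the distinction concrete, here is a minimal, hypothetical sketch; the FlightFacade, InMemoryFlightStore and FlightStoreSpy names are illustrative placeholders rather than classes from the book's examples. Both tests cross the same layer; they differ only in what they verify.

   // State Verification: inspect what the DOC contains after exercising the SUT.
   public void testAddFlight_stateVerification() {
      InMemoryFlightStore store = new InMemoryFlightStore(); // a real (or Fake) DOC
      FlightFacade sut = new FlightFacade(store);
      sut.addFlight("LX317");
      assertTrue("flight stored", store.contains("LX317"));
   }

   // Behavior Verification: inspect the calls the SUT made on the DOC.
   public void testAddFlight_behaviorVerification() {
      FlightStoreSpy spy = new FlightStoreSpy();             // a Test Spy standing in for the DOC
      FlightFacade sut = new FlightFacade(spy);
      sut.addFlight("LX317");
      assertEquals("calls to save", 1, spy.getSaveCallCount());
      assertEquals("flight number saved", "LX317", spy.getLastSavedFlightNumber());
   }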

When To Use It

Behavior Verification is primarily a technique for unit tests and component tests. We can use Behavior Verification any time the SUT calls methods on other objects or components; we must use Behavior Verification whenever the expected outputs of the SUT are transient and cannot be determined simply by looking at the post-exercise state of the SUT or the DOC. This forces us to monitor these indirect outputs as they occur.

A common application of Behavior Verification is when we are writing our code "outside-in". This approach, often called need-driven development, involves writing the client code before we write the DOC. This is a good way to find out exactly what the interface provided by the DOC needs to be based on real, concrete examples rather than on speculation. The main objection to this approach is that we need to use a lot of Test Doubles (page X) to write these tests. That could result in Fragile Tests (page X) because each test knows so much about how the SUT is implemented. Since tests specify the behavior of the SUT in terms of its interactions with the DOC, a change in implementation of the SUT could break a lot of tests. This Overspecified Software (see Fragile Test) could lead to High Test Maintenance Cost (page X).

The jury is still out on whether this is a better approach than State Verification. In most cases it is clear that State Verification is necessary; in some cases it is also clear that Behavior Verification is necessary. What has yet to be determined is whether Behavior Verification is necessary in all cases.

Implementation Notes

Before we exercise the SUT by invoking the methods of interest, we ensure that we have a way of observing its behavior. In some cases this is made possible by the mechanisms used by the SUT to interact with the components surrounding it; when this is not the case we must install some sort of Test Double to monitor its indirect outputs.

There are two fundamentally different ways to implement Behavior Verification, each with its own proponents. The Mock Object (page X) community has been very vocal about the use of "mocks" as an Expected Behavior Specification, so it is the more commonly used approach, but it is not the only way of doing Behavior Verification.

Variation: Procedural Behavior Verification

We capture the method calls made by the SUT as it executes and later get access to them from within the Test Method (page X). Then we use Equality Assertions (see Assertion Method on page X) to compare them with the expected results.

The most common way of trapping the indirect outputs of the SUT is to install a Test Spy (page X) in place of the DOC during the fixture setup phase of the Four-Phase Test (page X). During the result verification phase of the test we then ask the spy how it was used by the SUT during the exercise SUT phase. The alternative is to ask the real DOC how it was used. This is not always possible but, when it is, it avoids the need to use a Test Double and reduces the degree to which we have Overspecified Software.
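
Using the flight-logging example that appears later on this page, asking the real DOC might look something like the following sketch. The InMemoryAuditLog class and its getMostRecentEntry/LogEntry query API are hypothetical, introduced here only to illustrate the idea.

   public void testRemoveFlightLogging_realLog() throws Exception {
      // fixture setup
      FlightDto expectedFlightDto = createAnUnregFlight();
      FlightManagementFacade facade = new FlightManagementFacadeImpl();
      InMemoryAuditLog realLog = new InMemoryAuditLog();  // a real, queryable DOC (hypothetical)
      facade.setAuditLog(realLog);
      // exercise
      facade.removeFlight(expectedFlightDto.getFlightNumber());
      // verify by interrogating the real DOC instead of a Test Spy
      LogEntry entry = realLog.getMostRecentEntry();      // hypothetical query method
      assertEquals("action code", Helper.REMOVE_FLIGHT_ACTION_CODE, entry.getActionCode());
      assertEquals("detail", expectedFlightDto.getFlightNumber(), entry.getDetail());
   }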

We can reduce the amount of code in the Test Method (and avoid Test Code Duplication (page X)) by defining Expected Objects (see State Verification) for the arguments of method calls or by delegating the verification of them to Custom Assertions (page X).
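
For the flight-logging tests shown later on this page, such a Custom Assertion might look like this sketch. It is a hypothetical helper method that assumes the AuditLogSpy accessors used in the Procedural Behavior Verification example below.

   // Custom Assertion that hides the individual Equality Assertions against the spy
   private void assertFlightRemovalLogged(AuditLogSpy logSpy, FlightDto expectedFlightDto) {
      assertEquals("number of calls", 1, logSpy.getNumberOfCalls());
      assertEquals("action code", Helper.REMOVE_FLIGHT_ACTION_CODE, logSpy.getActionCode());
      assertEquals("date", helper.getTodaysDateWithoutTime(), logSpy.getDate());
      assertEquals("user", Helper.TEST_USER_NAME, logSpy.getUser());
      assertEquals("detail", expectedFlightDto.getFlightNumber(), logSpy.getDetail());
   }

The result verification phase of the test then shrinks to a single, intention-revealing call: assertFlightRemovalLogged(logSpy, expectedFlightDto);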

Variation: Expected Behavior Specification


Also known as: Expected Behavior

Expected Behavior Specification is a different way of doing Behavior Verification. Instead of waiting until after the fact to verify the indirect outputs of the SUT using a sequence of assertions, we load the Expected Behavior Specification into a Mock Object and let it verify that the method calls are correct as they are received.

We can use an Expected Behavior Specification when we know exactly what should happen ahead of time and we want to remove all the Procedural Behavior Verification from the Test Method. This tends to make the test shorter (assuming we are using a compact representation of the expected behavior) and can be used to cause the test to fail on the first deviation from the expected behavior if we so choose.

One distinct advantage of using Mock Objects is that Test Double generation tools are available for many members of the xUnit family. They make implementing Expected Behavior Specification very easy because we don't need to hand-code a Test Double for each set of tests.

Motivating Example

The following test is not a Self-Checking Test because it does not verify that the expected outcome has actually occurred; it contains no calls to Assertion Methods, nor does it set up any expectations on a Mock Object. Because we are testing the logging functionality of the SUT, the state we are interested in is actually stored in the logger rather than within the SUT itself. The writer of this test hasn't found a way to access the state we are trying to verify.

   public void testRemoveFlightLogging_NSC() throws Exception {
      // setup
      FlightDto expectedFlightDto = createARegisteredFlight();
      FlightManagementFacade facade = new FlightManagementFacadeImpl();
      // exercise
      facade.removeFlight(expectedFlightDto.getFlightNumber());
      // verify
      // have not found a way to verify the outcome yet
      //  - Log should contain a record of the Flight removal
   }
Example NonSelfCheckingTest embedded from java/com/clrstream/ex8/test/FlightManagementFacadeTest.java

To verify the outcome, whoever is running the tests will have to access the database and the log console and compare what was output to what should have been output.

One way to make the test Self-Checking is to enhance it with an Expected State Specification (see State Verification) of the SUT as follows:

   public void testRemoveFlightLogging_ESS() throws Exception {
      // fixture setup
      FlightDto expectedFlightDto = createAnUnregFlight();
      FlightManagementFacadeImplTI facade = new FlightManagementFacadeImplTI();
      // exercise
      facade.removeFlight(expectedFlightDto.getFlightNumber());
      // verify
      assertFalse("flight still exists after being removed",
                  facade.flightExists( expectedFlightDto.getFlightNumber()));
   }
Example ExpectedStateSpecificationSUT embedded from java/com/clrstream/ex8/test/FlightManagementFacadeTestSolution.java

Unfortunately, this test does not verify the logging function of the SUT in any way, and this illustrates one of the reasons why Behavior Verification came about: some functionality of the SUT is not visible within the end state of the SUT itself. It can only be seen if we intercept the behavior at an internal observation point between the SUT and the DOC, or if we express the behavior in terms of state changes on the objects with which the SUT interacts as it executes.

Refactoring Notes

When starting with the first example we aren't really refactoring because we are adding verification logic to make the tests behave differently. There are, however, several refactoring cases that are worth discussing.

To refactor from State Verification to Behavior Verification we must do a Replace Dependency with Test Double (page X) refactoring to gain visibility of the indirect outputs of the SUT via a Test Spy or Mock Object.
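
For the flight example used below, the result of that refactoring is a seam through which the logger can be substituted. Here is a minimal sketch assuming setter injection; the logMessage signature is inferred from the tests on this page, and the ProductionAuditLog class, flight number type, and the helper and userName fields are illustrative assumptions.

   import java.math.BigDecimal;
   import java.util.Date;

   // The DOC is accessed through an interface so that a Test Double can stand in for it.
   public interface AuditLog {
      void logMessage(Date date, String user, String actionCode, Object detail);
   }

   public class FlightManagementFacadeImpl implements FlightManagementFacade {
      private AuditLog auditLog = new ProductionAuditLog(); // hypothetical default logger
      private String userName = Helper.TEST_USER_NAME;      // placeholder for the current user
      private Helper helper = new Helper();                 // placeholder date/constants helper

      // the seam the tests use to install a Test Spy or Mock Object in place of the real logger
      public void setAuditLog(AuditLog auditLog) {
         this.auditLog = auditLog;
      }

      public void removeFlight(BigDecimal flightNumber) throws Exception {
         // ... remove the flight from persistent storage ...
         auditLog.logMessage(helper.getTodaysDateWithoutTime(), userName,
                             Helper.REMOVE_FLIGHT_ACTION_CODE, flightNumber);
      }
   }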

To refactor from Expected Behavior Specification to Procedural Behavior Verification, we install a Test Spy in place of the Mock Object and, after exercising the SUT, we write assertions that compare the values returned by the spy with the expected values that were previously used to configure the Mock Object.

To refactor from Procedural Behavior Verification to Expected Behavior Specification, we configure a Mock Object with the expected values taken from the assertions on the values returned by the Test Spy, and we install the Mock Object in place of the spy.

Example: Procedural Behavior Verification

The following test verifies the logging behavior of the flight-removal functionality using Procedural Behavior Verification to check the indirect outputs of the SUT. That is, it uses a Test Spy to capture the indirect outputs and then verifies they are correct by making calls to Assertion Methods inline within the test.

   public void testRemoveFlightLogging_recordingTestStub() throws Exception {
      // fixture setup
      FlightDto expectedFlightDto = createAnUnregFlight();
      FlightManagementFacade facade = new FlightManagementFacadeImpl();
      //    Test Double setup
      AuditLogSpy logSpy = new AuditLogSpy();
      facade.setAuditLog(logSpy);
      // exercise
      facade.removeFlight(expectedFlightDto.getFlightNumber());
      // verify
      assertEquals("number of calls", 1, logSpy.getNumberOfCalls());
      assertEquals("action code", Helper.REMOVE_FLIGHT_ACTION_CODE,
                   logSpy.getActionCode());
      assertEquals("date", helper.getTodaysDateWithoutTime(), logSpy.getDate());
      assertEquals("user", Helper.TEST_USER_NAME, logSpy.getUser());
      assertEquals("detail", expectedFlightDto.getFlightNumber(),
                   logSpy.getDetail());
   }
Example ProceduralBehaviorVerification embedded from java/com/clrstream/ex8/test/FlightManagementFacadeTestSolution.java
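
The AuditLogSpy itself is not shown on this page; a hand-coded version consistent with the accessors used above might look something like this sketch. The AuditLog interface and logMessage signature are inferred from the JMock example below.

   import java.util.Date;

   // A hand-coded Test Spy that records the most recent logMessage call for later inspection.
   public class AuditLogSpy implements AuditLog {
      private int numberOfCalls = 0;
      private Date date;
      private String user;
      private String actionCode;
      private Object detail;

      public void logMessage(Date date, String user, String actionCode, Object detail) {
         numberOfCalls++;
         this.date = date;
         this.user = user;
         this.actionCode = actionCode;
         this.detail = detail;
      }

      // accessors used during the result verification phase of the test
      public int getNumberOfCalls() { return numberOfCalls; }
      public Date getDate()         { return date; }
      public String getUser()       { return user; }
      public String getActionCode() { return actionCode; }
      public Object getDetail()     { return detail; }
   }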

Example: Expected Behavior Specification

In this version of the test, we use the JMock framework to define the expected behavior of the SUT. The expects method call on mockLog configures it with the Expected Behavior Specification (specifically, the expected log message).

   public void testRemoveFlight_JMock() throws Exception {
      // fixture setup
      FlightDto expectedFlightDto = createAnonRegFlight();
      FlightManagementFacade facade = new FlightManagementFacadeImpl();
      // mock configuration
      Mock mockLog = mock(AuditLog.class);
      mockLog.expects(once()).method("logMessage")
               .with(eq(helper.getTodaysDateWithoutTime()), eq(Helper.TEST_USER_NAME),
                     eq(Helper.REMOVE_FLIGHT_ACTION_CODE),
                     eq(expectedFlightDto.getFlightNumber()));
      // mock installation
      facade.setAuditLog((AuditLog) mockLog.proxy());
      // exercise
      facade.removeFlight(expectedFlightDto.getFlightNumber());
      // verify
      // verify() method called automatically by JMock
   }
Example ExpectedBehavior embedded from java/com/clrstream/ex8/test/FlightManagementFacadeTestSolution.java

