
A Roadmap to Effective Test Automation

The book has now been published and the content of this chapter has likely changed substantially.

About This Chapter

The previous chapter on Testing With Databases introduced a set of patterns specific to testing applications that have a database. Those patterns built on the techniques described in the chapters on Test Automation Strategy, Using Test Doubles and Persistent Fixture Management.

We don't become experts in test automation overnight. The skills take time to develop, and so does learning the various tools and patterns at our disposal. This chapter provides something of a roadmap for how to learn the patterns and acquire the skills. In it, I introduce the concept of "Test Automation Maturity", loosely based on the SEI's Capability Maturity Model (CMM).

Test Automation Difficulty

Some kinds of tests are harder to write than others. This is partly because the techniques involved are more advanced and partly because they are less well known and the supporting tools are less readily available. The following common kinds of tests are listed in approximate order of difficulty, from easiest to most difficult:

  1. Simple entity objects (Domain Model [PEAA])
    • Simple business classes with no dependencies
    • Complex business classes with dependencies
  2. Stateless service objects
  3. Stateful service objects
  4. "Hard to test" code
    • User interface logic exposed via Humble Dialog (see Humble Object on page X).
    • Database logic
    • Multi-threaded software
  5. Object-oriented legacy software (software built without any tests)
  6. Non-object oriented legacy software

As we move down this list, the software gets harder and harder to test. The irony is that many teams "get their feet wet" by trying to retrofit tests onto an existing application. This puts them in one of the last two categories in this list, which is where the most experience is required. Unfortunately, many teams fail to test their legacy software successfully, and that failure may prejudice them against trying automated testing, with or without test-driven development. If you find yourself trying to learn test automation by retrofitting tests onto legacy software, I have two pieces of advice for you: First, hire someone who has done it before to help you through. Second, read Michael Feathers' excellent book [WEwLC]; he covers many techniques specifically applicable to retrofitting tests.

Roadmap to Highly Maintainable Automated Tests

Given that some kinds of tests are much harder to write than others, it makes sense to focus on learning how to write the easier tests first before moving on to the more difficult kinds. When teaching my automated testing course to developers, I teach the techniques in the following sequence. This roadmap is based on Maslow's Hierarchy of Needs [HoN], which says that we strive to meet the higher-level needs only once we have satisfied the lower-level ones.

  1. Exercise the happy path code
  2. Verify direct outputs of happy path
  3. Verify Alternate Paths
    • Vary SUT method arguments
    • Vary pre-test state of SUT
    • Control Indirect Inputs of SUT via Test Stub (page X)
  4. Verify indirect outputs
  5. Optimize Execution & Maintainability
    • Make the tests run faster
    • Make tests easy to understand and maintain
    • Design the SUT for testability
    • Reduce Risk of Missed Bugs

This ordering of needs isn't meant to imply that this is the order in which we might think about implementing any specific test. (Although it can also be used that way, I find it better to always write the assertions first and work back from there.) Rather, it is likely to be the order in which a project team might reasonably expect to learn about the techniques of test automation.

Let us look at each of these points in more detail:

Exercise the Happy Path Code

To run the happy path through the SUT we must automate one Simple Success Test (see Test Method on page X) as a simple round-trip test through the SUT's API. To get this test to pass we might simply hard-code some of the logic in the SUT, especially where it would otherwise call other components to retrieve information it needs to make the decisions that drive it down the happy path. Before exercising the SUT we need to set up the test fixture by initializing the SUT to its pre-test state. As long as the SUT executes without raising any errors, we consider the test to have passed; at this level of maturity we don't check the actual results against the expected results.
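
Here is a minimal sketch of what such a test might look like in JUnit. The Invoice, Customer and Product classes and the addItemQuantity method are hypothetical stand-ins for whatever your SUT happens to be, not sample code prescribed by this book:

   import org.junit.Test;

   public class InvoiceHappyPathTest {

      @Test
      public void testAddItemQuantity_happyPath() {
         // Fixture setup: put the (hypothetical) SUT into its pre-test state
         Customer customer = new Customer("Acme Corp");
         Invoice invoice = new Invoice(customer);
         Product product = new Product("Widget", 19.99);

         // Exercise the SUT. At this level of maturity the test "passes"
         // as long as no exception is raised; there are no assertions yet.
         invoice.addItemQuantity(product, 5);
      }
   }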

Verify Direct Outputs of the Happy Path

Once we have the happy path executing successfully, we can add result verification logic to turn this into a Self-Checking Test (see Goals of Test Automation on page X). This involves adding calls to Assertion Methods to compare the expected results with what actually occurred. We can easily do this for any objects or values returned to the test by the SUT (e.g. "return values", "out parameters", etc.). We can also call other methods on the SUT or use public fields to access the post-test state of the SUT; we can then call Assertion Methods on these values as well.
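
Continuing the hypothetical Invoice example, the earlier test becomes a Self-Checking Test once we assert on the SUT's post-test state (the getLineItems, getQuantity and getExtendedCost accessors are assumed for the sake of the sketch):

   import static org.junit.Assert.assertEquals;

   import java.util.List;

   import org.junit.Test;

   public class InvoiceDirectOutputTest {

      @Test
      public void testAddItemQuantity_oneLineItemAdded() {
         Invoice invoice = new Invoice(new Customer("Acme Corp"));
         Product product = new Product("Widget", 19.99);

         invoice.addItemQuantity(product, 5);

         // Result verification: inspect the post-test state of the SUT
         List<LineItem> items = invoice.getLineItems();
         assertEquals("number of line items", 1, items.size());
         LineItem actual = items.get(0);
         assertEquals("quantity", 5, actual.getQuantity());
         assertEquals("extended cost", 99.95, actual.getExtendedCost(), 0.001);
      }
   }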

Verify Alternate Paths

At this point the happy path through the code is reasonably well tested. The alternate paths through the code are still Untested Code (see Production Bugs on page X), so the next step is to write tests for the alternate paths (whether we have already written them or are striving to automate the tests that would drive us to implement them). The question to ask here is "What causes the alternate paths to be exercised?" The most common causes are:

  1. different values of the arguments passed to the SUT's methods,
  2. a different pre-test state of the SUT itself, and
  3. different responses from the components the SUT depends on (its indirect inputs).

The first case can be tested by varying the arguments our tests pass to the SUT methods we are exercising. The second case involves initializing the SUT with a different starting state. Neither of these requires any "rocket science". The third case is where things get interesting.
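
A sketch of the first two cases, again using the hypothetical Invoice class; the business rules shown here (a zero quantity is ignored, a closed invoice rejects new items) are assumptions made purely for illustration:

   import static org.junit.Assert.assertEquals;

   import org.junit.Test;

   public class InvoiceAlternatePathTest {

      // Case 1: vary the arguments passed to the SUT's method
      @Test
      public void testAddItemQuantity_zeroQuantity_noLineItemAdded() {
         Invoice invoice = new Invoice(new Customer("Acme Corp"));

         invoice.addItemQuantity(new Product("Widget", 19.99), 0);

         assertEquals("number of line items", 0, invoice.getLineItems().size());
      }

      // Case 2: vary the pre-test state of the SUT
      @Test
      public void testAddItemQuantity_closedInvoice_lineItemRejected() {
         Invoice invoice = new Invoice(new Customer("Acme Corp"));
         invoice.close();   // put the SUT into its "closed" state before exercising it

         invoice.addItemQuantity(new Product("Widget", 19.99), 5);

         assertEquals("number of line items", 0, invoice.getLineItems().size());
      }
   }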

Controlling Indirect Inputs

Because it is the responses from other components that are supposed to cause the SUT to exercise the alternate paths through the code, we need to get control over these indirect inputs. We can do this using a Test Stub that returns the value that should drive the SUT into the desired code path; as part of fixture setup we must force the SUT to use the stub instead of the real component. The Test Stub can be built in two different ways: a Hard-Coded Test Stub (see Test Stub) contains hand-written code that returns the specific values, while a Configurable Test Stub (see Test Stub) is configured by the test to return the desired values.
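
As an illustration, suppose the hypothetical Invoice looks up its tax rate through a TaxRateProvider component. A Hard-Coded Test Stub lets the test dictate that indirect input; the TaxRateProvider interface, the setTaxRateProvider setter and the getGrandTotal method are all assumed here for the sake of the sketch:

   import static org.junit.Assert.assertEquals;

   import org.junit.Test;

   public class InvoiceTaxStubTest {

      // A Hard-Coded Test Stub: always answers with a known tax rate,
      // no matter what the real (possibly remote) component would say.
      static class FixedRateTaxProviderStub implements TaxRateProvider {
         public double getRate(String jurisdiction) {
            return 0.10;   // the canned indirect input that drives the desired path
         }
      }

      @Test
      public void testGetGrandTotal_tenPercentTax() {
         Invoice invoice = new Invoice(new Customer("Acme Corp"));
         // Fixture setup: force the SUT to use the stub instead of the real component
         invoice.setTaxRateProvider(new FixedRateTaxProviderStub());
         invoice.addItemQuantity(new Product("Widget", 100.00), 1);

         assertEquals("grand total", 110.00, invoice.getGrandTotal(), 0.001);
      }
   }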

Many of these alternate paths result in "successful" outputs from the SUT; these tests are considered Simple Success Tests and we use a style of Test Stub called a "Responder". Some of these paths may be expected to raise errors or exceptions; these are considered Expected Exception Tests (see Test Method) and we use a style of stub called a Saboteur (see Test Stub).
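
A Saboteur version of the same stub drives the SUT down an error-handling path; the domain-specific InvoiceNotReadyException shown here is, again, just an assumed example:

   import org.junit.Test;

   public class InvoiceSaboteurTest {

      // A Saboteur: a Test Stub that raises an error instead of answering,
      // forcing the SUT to exercise its error-handling logic.
      static class BrokenTaxProviderSaboteur implements TaxRateProvider {
         public double getRate(String jurisdiction) {
            throw new RuntimeException("tax service unavailable");
         }
      }

      // An Expected Exception Test: it passes only if the SUT translates the
      // failure into the (assumed) domain-specific exception.
      @Test(expected = InvoiceNotReadyException.class)
      public void testGetGrandTotal_taxServiceDown_throwsDomainException() {
         Invoice invoice = new Invoice(new Customer("Acme Corp"));
         invoice.setTaxRateProvider(new BrokenTaxProviderSaboteur());
         invoice.addItemQuantity(new Product("Widget", 100.00), 1);

         invoice.getGrandTotal();
      }
   }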

Making Tests Repeatable and Robust

The act of replacing a real depended-on component (DOC) with a Test Stub has a very desirable side effect: it makes our tests more robust and repeatable. (See Robust Test (see Goals of Test Automation) and Repeatable Test (see Goals of Test Automation) for a more detailed description.) This is because we have replaced a possibly non-deterministic component with one that is completely deterministic and under the test's control. This is a good example of the Isolate the SUT principle (see Principles of Test Automation on page X).

Verify Indirect Output Behavior

Thus far we have focused on getting control of the indirect inputs of the SUT and verifying its easily visible direct outputs by inspecting the post-test state of the SUT. This style of result verification is known as State Verification (page X). There are times, however, when we cannot verify that the SUT has behaved correctly simply by looking at the post-test state. In these cases we may still have some Untested Requirements (see Production Bugs) that can only be verified by doing Behavior Verification (page X).

We can build on what we already know how to do by using one of the close relatives of the Test Stub to intercept the outgoing method calls from our SUT. Test Spies "remember" how they are called so that the test can later retrieve the usage information and use Assertion Method calls to compare it to the expected usage. Mock Objects can be loaded with expectations during fixture setup, which they compare with the actual calls as they occur while the SUT is being exercised.
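
A hand-rolled Test Spy might look like the following sketch, which assumes the hypothetical Invoice reports changes through an AuditLog collaborator (the interface, the setAuditLog setter and the message format are all assumptions):

   import static org.junit.Assert.assertEquals;

   import java.util.ArrayList;
   import java.util.List;

   import org.junit.Test;

   public class InvoiceAuditSpyTest {

      // A hand-rolled Test Spy: records every call it receives so the test
      // can examine the SUT's indirect outputs after the exercise phase.
      static class AuditLogSpy implements AuditLog {
         final List<String> messages = new ArrayList<String>();
         public void log(String message) {
            messages.add(message);
         }
      }

      @Test
      public void testAddItemQuantity_writesOneAuditRecord() {
         AuditLogSpy auditSpy = new AuditLogSpy();
         Invoice invoice = new Invoice(new Customer("Acme Corp"));
         invoice.setAuditLog(auditSpy);

         invoice.addItemQuantity(new Product("Widget", 19.99), 5);

         // Behavior Verification: check how the SUT used its collaborator
         assertEquals("number of audit records", 1, auditSpy.messages.size());
         assertEquals("Added 5 x Widget", auditSpy.messages.get(0));
      }
   }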

Optimize Test Execution and Maintenance

At this point we should have automated tests for all the paths through our code. We may, however, have less than optimal tests.

Make the tests run faster

Slow Tests is often the first behavior smell we need to address. We can make tests run faster by reusing the test fixture across many tests (some form of Shared Fixture (page X)), but this typically introduces a host of other problems. Replacing a DOC with a Fake Object (page X) that is functionally equivalent but executes much faster is almost always a better solution.
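
For example, a database-backed repository might be replaced by an in-memory Fake Object like the one sketched below; the CustomerRepository interface and the StatementGenerator class that uses it are assumed purely for illustration:

   import static org.junit.Assert.assertTrue;

   import java.util.HashMap;
   import java.util.Map;

   import org.junit.Test;

   public class StatementGeneratorFakeDatabaseTest {

      // A Fake Object: a lightweight but fully functional in-memory replacement
      // for the database-backed repository, used purely to make the tests fast.
      static class InMemoryCustomerRepository implements CustomerRepository {
         private final Map<String, Customer> customersByName = new HashMap<String, Customer>();
         public void save(Customer customer) {
            customersByName.put(customer.getName(), customer);
         }
         public Customer findByName(String name) {
            return customersByName.get(name);
         }
      }

      @Test
      public void testGenerateStatement_includesCustomerName() {
         // Fixture setup against the fake avoids any database round trips
         InMemoryCustomerRepository fakeRepository = new InMemoryCustomerRepository();
         fakeRepository.save(new Customer("Acme Corp"));
         StatementGenerator sut = new StatementGenerator(fakeRepository);

         String statement = sut.generateStatementFor("Acme Corp");

         assertTrue("statement mentions the customer", statement.contains("Acme Corp"));
      }
   }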

Make tests easy to understand and maintain

We can make Obscure Tests easier to understand and remove a lot of Test Code Duplication by refactoring our Test Methods to call Test Utility Methods that contain any frequently used logic instead of doing everything inline. Creation Methods (page X) and Custom Assertions (page X) are two of the most common examples. Finder Methods (see Test Utility Method) and Parameterized Tests (page X) are two others.
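
The following sketch shows both kinds of Test Utility Method applied to the hypothetical Invoice example; the helper names are illustrative, not prescribed:

   import static org.junit.Assert.assertEquals;

   import java.util.List;

   import org.junit.Test;

   public class InvoiceTestWithUtilityMethods {

      @Test
      public void testAddItemQuantity_quantityFive() {
         Invoice invoice = createAnonymousInvoice();                 // Creation Method

         invoice.addItemQuantity(new Product("Widget", 19.99), 5);

         assertContainsExactlyOneLineItem(invoice, 5, 99.95);        // Custom Assertion
      }

      // Creation Method: hides the details of building a valid Invoice that are
      // irrelevant to the purpose of the test.
      private Invoice createAnonymousInvoice() {
         return new Invoice(new Customer("Anonymous Customer"));
      }

      // Custom Assertion: gives the multi-step verification an intent-revealing
      // name and removes the duplication from every test that needs it.
      private void assertContainsExactlyOneLineItem(Invoice invoice,
                                                    int expectedQuantity,
                                                    double expectedExtendedCost) {
         List<LineItem> items = invoice.getLineItems();
         assertEquals("number of line items", 1, items.size());
         assertEquals("quantity", expectedQuantity, items.get(0).getQuantity());
         assertEquals("extended cost", expectedExtendedCost,
                      items.get(0).getExtendedCost(), 0.001);
      }
   }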

If our Testcase Classes (page X) are getting too big to understand, we can reorganize them around fixtures or features. We can also communicate intent better by using a systematic way of naming Testcase Classes and Test Methods that exposes the test conditions we are verifying in them.

Reduce Risk of Missed Bugs

If we are having problems with Buggy Tests or Production Bugs, we can reduce the risk of false negatives (tests that pass when they shouldn't) by encapsulating complex test logic in Test Utility Methods with intent-revealing names. We should verify the behavior of any non-trivial Test Utility Methods using Test Utility Tests (see Test Utility Method).
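
For instance, if the custom assertion from the earlier sketch were pulled out into a shared helper class (called InvoiceAssertions here purely for illustration), a Test Utility Test could confirm that it really does fail when it should:

   import org.junit.Test;

   public class InvoiceAssertionsTest {

      // A Test Utility Test: proves the non-trivial Custom Assertion fails on
      // bad input, so it cannot silently hide bugs in the tests that rely on it.
      @Test(expected = AssertionError.class)
      public void testAssertContainsExactlyOneLineItem_failsForEmptyInvoice() {
         Invoice emptyInvoice = new Invoice(new Customer("Anonymous Customer"));

         InvoiceAssertions.assertContainsExactlyOneLineItem(emptyInvoice, 5, 99.95);
      }
   }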

What's Next?

This concludes the narratives part of this book. In these narratives I have given you an overview of the goals, principles, philosophies, patterns, smells and coding idioms related to writing effective automated tests. The next section of this book contains detailed descriptions of each of the patterns and smells, complete with code samples.


