
A Brief Tour

The book has now been published and the content of this chapter has likely changed substantially.

About This Chapter

There are a lot of principles, patterns and smells in this book and even more patterns that I did not have room for. Do you need to learn them all? Do you need to use them all? Probably not! In this chapter I provide an abbreviated introduction to the bulk of the material in the entire book. You can use it as a quick tour of the material before diving into particular patterns or smells of interest. You can also use it as a warm-up before reading the more detailed narrative chapters.

The Simplest Test Automation Strategy That Could Possibly Work

There is a simple test automation strategy that will work for many, many projects. In this section I describe this minimal test strategy. The principles, patterns and smells referenced in this chapter are the core patterns that will serve us well in the long run. If we learn to apply them well we will probably be successful in our test automation endeavors. If we find that we really cannot make it work on our project using these patterns, we can fall back to the alternative patterns listed in the full descriptions of these patterns and in the other narratives.

I have laid out this simple strategy in five parts:

Development Process

First things first. When do we write our tests? Writing tests before we write our software has several benefits. It gives us an agreed upon definition of what success looks like. (If our customer says they cannot define the tests before we have built the software, we have every reason to be worried!)

When doing new software development, we strive to do storytest-driven development by first automating a suite of customer tests that verify the functionality provided by the application. To ensure all our software is tested, we augment these tests with a suite of unit tests that verify all code paths or, at a minimum, all the code paths that are not covered by the customer tests. We can use code coverage tools to find out what code is not being exercised and retrofit unit tests for the untested code. (We will likely find fewer Missing Unit Tests (see Production Bugs on page X) if we are practicing test-driven development than if we are doing "test last", but there is still value in running the coverage tools with TDD.)

We organize the unit tests and customer tests in separate test suites so that we can run just the unit tests or just the customer tests. The unit tests should always pass before we check them in; this is what we mean by the phrase "keep the bar green". We can ensure they are run frequently by including them in the Smoke Tests[SCM] run as part of the Integration Build[SCM]. Many of the customer tests will fail until the corresponding functionality is built, but it is still useful to run the passing customer tests as part of the integration build, provided this does not slow the build down too much. If it does, we can leave them out of the check-in build and run them only every night.
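
A rough sketch of this split in JUnit might look like the following; the test classes listed in the suite are hypothetical:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Fast, in-memory unit tests: run on every check-in to "keep the bar green".
// A parallel AllCustomerTests suite would list the (slower) customer tests
// and be run only in the integration or nightly build.
@RunWith(Suite.class)
@Suite.SuiteClasses({ FlightManagementTest.class, PricingPolicyTest.class })
public class AllUnitTests {}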

We can ensure our software is testable by doing test-driven development. That is, we write the unit tests before we write the code and we use the tests to help us define the design. This will help concentrate all the business logic that needs verification in well-defined objects that can be tested independently of the database. We should also have unit tests for the data access layer and the database but we try to keep the dependency on the database to a minimum in the unit tests for the business logic.
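
A minimal test-first sketch might look like this; PricingPolicy, Money and CustomerType are hypothetical domain classes, and the point is that the test is written before the production code and pushes the pricing rule into a plain object that never touches the database:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PricingPolicyTest {
    // Written before PricingPolicy exists: the failing test defines the
    // behavior we want, then we write just enough code to make it pass.
    @Test
    public void frequentFlyersGetTenPercentOff() {
        PricingPolicy policy = new PricingPolicy();   // plain object, no database
        Money price = policy.priceFor(Money.dollars(200), CustomerType.FREQUENT_FLYER);
        assertEquals(Money.dollars(180), price);
    }
}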

Customer Tests

We define our customer tests to capture the essence of what the customer wants the system to do. Enumerating the tests before we do development is an important step whether or not we actually automate them because it helps the development team understand what the customer really wants; they define what success looks like. We can automate the tests using Scripted Tests (page X) or Data-Driven Tests (page X) depending on who is preparing the tests; customers can take part in test automation if we use Data-Driven Tests. On rare occasions we might even use Recorded Tests (page X) for regression testing an existing application while we refactor it to improve its testability but we usually discard these tests once we have other tests that cover the functionality because Recorded Tests tend to be Fragile Tests (page X).
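
As a rough illustration of the Data-Driven style, the test logic below is written once and interpreted over a table of cases; in practice the rows would more likely come from a spreadsheet or FIT table maintained by the customer, and FareCalculator is a hypothetical system under test:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FareCalculationDataDrivenTest {
    // Each row is one test case: origin, destination, expected fare in dollars.
    private final String[][] fareCases = {
        { "YYC", "YVR", "200" },
        { "YYC", "YUL", "450" },
        { "YVR", "YUL", "500" },
    };

    @Test
    public void faresMatchThePublishedFareTable() {
        FareCalculator calculator = new FareCalculator();
        for (String[] row : fareCases) {
            assertEquals("fare " + row[0] + "-" + row[1],
                    Integer.parseInt(row[2]),
                    calculator.fareFor(row[0], row[1]));
        }
    }
}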

We strive to make our customer tests representative of how the system is really used; this often conflicts with keeping the tests short, since long tests are often Obscure Tests (page X) and tend not to provide very good Defect Localization (see Goals of Test Automation on page X) when they fail partway through. We can also use well-written Tests as Documentation (see Goals of Test Automation) of how the system is supposed to work. To keep the tests simple and easy to understand, we bypass the user interface by doing Subcutaneous Testing (see Layer Test on page X) against one or more Service Facades[CJ2EEP] that encapsulate all the business logic behind a simple interface that is also used by the presentation layer.
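
A Subcutaneous Test might look something like this sketch, in which the test drives a hypothetical BookingFacade (and FlightNumber) directly rather than going through any screens:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BookFlightCustomerTest {
    @Test
    public void bookingAFlightAddsAReservation() {
        // Subcutaneous Test: exercise the same Service Facade the
        // presentation layer uses, with no GUI or HTTP plumbing involved.
        BookingFacade facade = new BookingFacade();
        FlightNumber flight = facade.scheduleFlight("YYC", "YVR");
        facade.bookFlight("Jane Passenger", flight);
        assertEquals(1, facade.reservationCountFor(flight));
    }
}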

Every test needs a starting point and we ensure that each test sets up this starting point, known as the test fixture, each time the test is run. This Fresh Fixture (page X) helps us avoid Interacting Tests (see Erratic Test on page X) by ensuring that tests do not depend on anything they did not set up themselves. We avoid using a Shared Fixture (page X) (unless it is an Immutable Shared Fixture (see Shared Fixture)) to avoid starting down the slippery slope to Erratic Tests.
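
In a Fresh Fixture each Test Method builds the objects it needs itself, as in this sketch (Flight is a hypothetical domain class):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class FlightStateTest {
    // Fresh Fixture: every test constructs its own Flight, so no test can
    // be affected by anything left behind by another test.
    @Test
    public void newFlightStartsOutUnscheduled() {
        Flight flight = new Flight("YYC", "YVR");   // built inside the test
        assertTrue(flight.isUnscheduled());
    }

    @Test
    public void schedulingAFlightMakesItBookable() {
        Flight flight = new Flight("YYC", "YVR");   // a brand-new, private fixture
        flight.schedule();
        assertTrue(flight.isBookable());
    }
}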

If our application normally interacts with other applications, we may need to isolate it from any applications that are not available in our development environment by using some form of Test Double (page X) for the objects that act as the interface to the other application(s). If the tests run too slowly because of database access or other slow components, we can replace them with functionally equivalent Fake Objects (page X) to speed up our tests so developers will run them more regularly. If at all possible, we avoid using Chained Tests (page X) as these are just the test smell Interacting Tests in disguise.
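
A Fake Object for a persistence component can be as simple as this sketch: an in-memory implementation of a hypothetical FlightRepository interface that behaves like the real database-backed version but runs entirely in memory:

import java.util.HashMap;
import java.util.Map;

// Hypothetical interface the business logic uses to reach persistent storage.
interface FlightRepository {
    void save(Flight flight);
    Flight findByNumber(String flightNumber);
}

// Fake Object: functionally equivalent to the database-backed implementation
// but fast enough that developers will run the tests constantly.
class InMemoryFlightRepository implements FlightRepository {
    private final Map<String, Flight> flights = new HashMap<>();

    public void save(Flight flight) {
        flights.put(flight.getNumber(), flight);
    }

    public Flight findByNumber(String flightNumber) {
        return flights.get(flightNumber);
    }
}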

Unit Tests

To make our unit tests effective, we need to make sure that each one is a Fully Automated Test (see Goals of Test Automation) that does a round-trip test against a class through its public interface. We can strive for Defect Localization by ensuring that each test is a Single Condition Test (see Principles of Test Automation on page X) that exercises a single method or object in a single scenario. We should also write our tests so that each part of the Four-Phase Test (page X) is easily recognizable so that we can use the Tests as Documentation.
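
As a rough sketch, a unit test with the four phases called out explicitly might look like this (Schedule and Flight are hypothetical domain classes):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class RemoveFlightTest {
    @Test
    public void removingTheOnlyFlightEmptiesTheSchedule() {
        // Phase 1 -- Setup: build the Minimal Fixture this test needs.
        Schedule schedule = new Schedule();
        Flight flight = new Flight("YYC", "YVR");
        schedule.add(flight);

        // Phase 2 -- Exercise: invoke the behavior being verified.
        schedule.remove(flight);

        // Phase 3 -- Verify: check the expected outcome.
        assertEquals(0, schedule.flightCount());

        // Phase 4 -- Teardown: nothing to do here; the in-memory fixture
        // is reclaimed by Garbage-Collected Teardown.
    }
}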

We use a Fresh Fixture strategy so that we do not have to worry about Interacting Tests or fixture teardown. We can start off by creating a Testcase Class for each class we are testing (Testcase Class per Class (page X)) with each test being a separate Test Method on that class. Each Test Method can use Delegated Setup (page X) to build a Minimal Fixture (page X) that makes the tests easily understood by calling well-named Creation Methods (page X) to build the objects we need in each test fixture.
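
A sketch of Delegated Setup: the Test Method reads as a short story because a well-named Creation Method hides the details of building a valid Flight (again a hypothetical domain class):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FlightBookingTest {
    @Test
    public void bookingASeatReducesAvailability() {
        // Delegated Setup: the Creation Method keeps the fixture minimal
        // and hides details that are irrelevant to this test.
        Flight flight = createFlightWithAvailableSeats(2);
        flight.book("Jane Passenger");
        assertEquals(1, flight.availableSeats());
    }

    // Creation Method: one well-named place that knows how to build
    // a valid Flight for these tests.
    private Flight createFlightWithAvailableSeats(int seatCount) {
        Flight flight = new Flight("YYC", "YVR");
        flight.setCapacity(seatCount);
        return flight;
    }
}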

To make the tests self-checking (Self-Checking Test (see Goals of Test Automation)), we express the expected outcome of each test as one or more Expected Objects (see State Verification on page X) and compare them with the actual objects returned by the system under test (SUT) using the built-in Equality Assertions (see Assertion Method on page X) or Custom Assertions (page X) that implement our own test-specific equality. If several tests are expected to result in the same outcome, we can factor out the verification logic into an outcome-describing Verification Method (see Custom Assertion) so this can be easily recognized by the test reader.
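
The following sketch shows an Expected Object being compared with the SUT's result via a Custom Assertion; Invoice, InvoiceGenerator, LineItem, Money, Booking and Flight are all hypothetical domain classes:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceGenerationTest {
    @Test
    public void invoiceContainsOneLineItemPerBooking() {
        Invoice actual = new InvoiceGenerator().generateFor(oneBookedFlight());

        // Expected Object: describe the whole expected outcome as an object...
        LineItem expected = new LineItem("YYC-YVR", Money.dollars(200));

        // ...and verify it with a Custom Assertion / Verification Method.
        assertContainsExactlyOneLineItem(actual, expected);
    }

    // Custom Assertion implementing our test-specific notion of equality.
    private void assertContainsExactlyOneLineItem(Invoice invoice, LineItem expected) {
        assertEquals(1, invoice.lineItemCount());
        assertEquals(expected, invoice.lineItemAt(0));
    }

    // Creation Method for the fixture.
    private Booking oneBookedFlight() {
        return new Booking(new Flight("YYC", "YVR"), "Jane Passenger");
    }
}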

If we have Untested Code (see Production Bugs on page X) because we cannot find a way to cause the path through the code to be executed, we can use a Test Stub (page X) to gain control of the indirect inputs of the SUT. If there are Untested Requirements (see Production Bugs) because not all the behavior of the system is observable via its public interface, we can use a Mock Object (page X) to intercept and verify the indirect outputs of the SUT.
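
As a sketch, both kinds of Test Double can be hand-coded against small interfaces; here a Test Stub forces the hard-to-reach indirect input (the time of day) while a simple recording double (strictly speaking a Test Spy; a true Mock Object would verify the expectation itself) captures the indirect output for the test to check. TimeProvider, MailSender and BookingService are all hypothetical:

import static org.junit.Assert.assertNotNull;
import org.junit.Test;

// Hypothetical dependency interfaces the SUT talks to:
interface TimeProvider { int currentHour(); }
interface MailSender  { void send(String to, String subject); }

public class LateBookingNotificationTest {
    @Test
    public void bookingAfterMidnightNotifiesTheDutyAgent() {
        // Test Stub: controls the indirect input we cannot easily reach.
        TimeProvider midnightClock = () -> 1;                     // 1 a.m.

        // Recording double: captures the indirect output (the sent mail).
        final String[] recipient = new String[1];
        MailSender recordingMailer = (to, subject) -> recipient[0] = to;

        BookingService service = new BookingService(midnightClock, recordingMailer);
        service.book("Jane Passenger", "YYC-YVR");

        assertNotNull("a notification should have been sent", recipient[0]);
    }
}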

Design for Testability

Automated testing is much simpler if we adopt a Layered Architecture[DDD,PEAA,WWW]. As a minimum we should separate our "business logic" from the database and the user interface so that we can test it easily using either Subcutaneous Tests or Service Layer Tests (see Layer Test). We can minimize any dependence on a Database Sandbox (page X) by doing most (if not all) of our testing using in-memory objects only. This lets the runtime environment implement Garbage-Collected Teardown (page X) for us automatically, and we will not find ourselves writing complex, error-prone teardown logic (a sure source of Resource Leakage (see Erratic Test)). It also helps avoid Slow Tests (page X) by reducing disk I/O, which is much slower than memory manipulation.

If we are building a GUI, we should try to keep the complex GUI logic out of the visual classes. Using a Humble Dialog (see Humble Object on page X) that delegates all decision-making to non-visual classes allows us to write unit tests for the GUI logic such as enabling/disabling buttons without having to instantiate the graphical objects or the framework on which they depend.
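
A sketch of this separation: the enabling/disabling decision lives in a plain, non-visual class that the humble dialog merely delegates to, so the unit test never instantiates a widget. BookingFormLogic is a hypothetical name:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Non-visual class holding the GUI decision logic; the actual dialog
// (the Humble Dialog) only forwards events to it and reads its state.
class BookingFormLogic {
    private String passengerName = "";
    private String flightNumber = "";

    void passengerNameChanged(String value) { passengerName = value; }
    void flightNumberChanged(String value)  { flightNumber = value; }

    boolean isOkButtonEnabled() {
        return !passengerName.isEmpty() && !flightNumber.isEmpty();
    }
}

public class BookingFormLogicTest {
    @Test
    public void okButtonStaysDisabledUntilBothFieldsAreFilledIn() {
        BookingFormLogic logic = new BookingFormLogic();
        assertFalse(logic.isOkButtonEnabled());

        logic.passengerNameChanged("Jane Passenger");
        assertFalse(logic.isOkButtonEnabled());

        logic.flightNumberChanged("UA 123");
        assertTrue(logic.isOkButtonEnabled());
    }
}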

If the application is complex enough or if we are expected to build components that will be reused by other projects, we can augment the unit tests with component tests that verify the behavior of each component in isolation. We will probably need to use Test Doubles to replace any components our component depends on. We can make it possible to install the Test Doubles at run time by using either Dependency Injection (page X), Dependency Lookup (page X) or a Subclassed Singleton (see Test-Specific Subclass on page X).
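
A sketch of constructor-based Dependency Injection, reusing the hypothetical FlightRepository interface from the Fake Object sketch above; because the collaborator is passed in rather than hard-wired, a test can hand in a Test Double while production code supplies the real implementation:

// Production class: the collaborator is injected rather than constructed inside.
class FlightBookingService {
    private final FlightRepository repository;

    // Constructor injection: production wiring passes the real, database-backed
    // repository; a test passes a Fake, Stub or Mock instead.
    FlightBookingService(FlightRepository repository) {
        this.repository = repository;
    }

    void registerFlight(Flight flight) {
        repository.save(flight);
    }
}

A test would then build the object under test with new FlightBookingService(new InMemoryFlightRepository()), while the production configuration supplies the real repository.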

Test Organization

If we end up with too many Test Methods on our Testcase Class, we can consider splitting the class either by what methods or features the tests are verifying or based on their fixture needs. These patterns are called Testcase Class per Feature (page X) and Testcase Class per Fixture (page X) respectively. Testcase Class per Fixture allows us to move all the fixture setup code into the setUp method, an approach I call Implicit Setup (page X). We add each resulting Testcase Class to a Test Suite Object (page X) that is in turn added to the Test Suite Object for the containing package or name space resulting in a Suite of Suites (see Test Suite Object). This allows us to run all the tests or just a subset relevant to the area of the software in which we are working.
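
A sketch of Testcase Class per Fixture with Implicit Setup: every test in the class starts from the same "fully booked flight" fixture, so the setup code moves into setUp() instead of being repeated in each Test Method (Flight is again a hypothetical domain class):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import org.junit.Before;
import org.junit.Test;

public class FullyBookedFlightTest {
    private Flight flight;

    // Implicit Setup: one shared fixture-building method for the whole class.
    @Before
    public void setUp() {
        flight = new Flight("YYC", "YVR");
        flight.setCapacity(1);
        flight.book("Jane Passenger");
    }

    @Test
    public void hasNoAvailableSeats() {
        assertEquals(0, flight.availableSeats());
    }

    @Test
    public void rejectsFurtherBookings() {
        assertFalse(flight.tryBook("Joe Standby"));
    }
}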

What's Next?

This whirlwind tour of the most important goals, principles, patterns and smells is just an introduction. The remaining narrative sections give a more detailed overview of each area I barely touched on here. If you have already spotted some patterns or smells you want to learn more about, you can certainly proceed directly to the detailed descriptions in the reference section. Otherwise, the next step is to proceed to the subsequent narratives that provide an overview of these patterns and the alternatives to them in a bit more detail. First up is the Test Smells narrative in which I describe some common "test smells" that motivate much of the refactoring we do on our tests.




Copyright © 2003-2008 Gerard Meszaros all rights reserved
