
Test Logic in Production


The code that is put into production contains logic that should be exercised only during tests.

The system under test (SUT) may contain logic that cannot be run in a test environment. Tests may require that the SUT behave in specific ways to allow full test coverage.

Symptoms

There is logic in the SUT that exists only to support testing. This logic may be "extra stuff" the tests need in order to access the internal state of the application for fixture setup or result verification purposes. It may also be changes to the logic of the system that take effect when it detects that it is being tested.

Impact

We would prefer not to end up with Test Logic in Production because it makes the SUT more complex and opens the door to additional kinds of bugs that we'd just as soon not have. A system that behaves one way in the test lab and an entirely different way in production is a recipe for disaster!

Causes

Cause: Test Hook

Conditional logic within the SUT determines whether the "real" code or test-specific logic is run.

Symptoms

Since this is a code smell, there may be no behavioral symptoms at all; alternatively, something may go wrong in production. We may see snippets of code in the SUT that look something like this:

if (testing) {
   return hardCodedCannedData;   // test-specific logic
} else {
   // the real logic ...
   return gatheredData;
}

Impact

Code that was not designed to work in production, and that has not been verified to work properly in the production environment, could accidentally be run in production and cause serious problems.

The maiden flight of the Ariane 5 rocket blew up 37 seconds after takeoff because a piece of code that was needed only while the rocket was on the ground was left running for the first 40 seconds of flight. That code tried to assign a 64-bit number representing the sideways velocity of the rocket to a 16-bit field, which convinced the rocket's navigation computer that it was going the wrong way. (See the sidebar Ariane (page X) for more details.) While we believe the test code should never be hit in production, do we really want to take this kind of chance?

Root Cause

In some cases, the Test Logic in Production is introduced to make the behavior of the SUT more deterministic by returning known (hard-coded) values. In other cases, Test Logic in Production may have been introduced to avoid executing code that cannot be run in a test environment. Unfortunately, this can result in that code not being executed in the production environment if something is misconfigured.

In some cases, tests may require that the SUT execute additional code that would otherwise be executed by a depended-on component. For example, code run from a trigger in a database won't be run if the database is replaced by a Fake Database (see Fake Object on page X); therefore, the test needs to ensure that the equivalent logic is executed from somewhere within the SUT.

Possible Solution

Instead of adding test logic to the production code directly, we can move it into a substitutable dependency. Code that should be run only in production can be put into a Strategy[GOF] object that is installed by default and replaced by a Null Object[PLOPD3] in our tests. Code that should be run only during tests can be put into a Strategy[GOF] object that is configured as a Null Object by default; when we want the SUT to execute the extra code during testing, we replace the Strategy object with a test-specific version. We need to ensure that we have a Constructor Test (see Test Method on page X) that verifies that any variables holding references to Strategy objects are initialized correctly when not overridden by the test.
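
As a rough sketch (the class and method names here are illustrative, not taken from the book's examples), the production-only behavior lives behind a Strategy interface that the SUT installs by default; a test can replace it with a Null Object so the production-only code is never run:

interface MonitoringStrategy {
   void recordEvent(String event);
}

class ProductionMonitoringStrategy implements MonitoringStrategy {
   public void recordEvent(String event) {
      // the real production-only behavior, e.g. calling the monitoring infrastructure
   }
}

class NullMonitoringStrategy implements MonitoringStrategy {
   public void recordEvent(String event) {
      // do nothing; installed by tests that must not touch production infrastructure
   }
}

class OrderProcessor {
   private MonitoringStrategy monitoring = new ProductionMonitoringStrategy();

   MonitoringStrategy getMonitoringStrategy() {
      return monitoring;              // lets a Constructor Test verify the default
   }

   void setMonitoringStrategy(MonitoringStrategy strategy) {
      monitoring = strategy;          // tests substitute a Null Object here
   }

   void processOrder(String orderId) {
      // ... the real order-processing logic ...
      monitoring.recordEvent("processed " + orderId);
   }
}

The corresponding Constructor Test would simply create an OrderProcessor and assert that getMonitoringStrategy() returns a ProductionMonitoringStrategy, so a misconfigured default cannot slip into production unnoticed.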

It may also be possible to override specific methods of the SUT in a Test-Specific Subclass (page X) if the production logic we want to circumvent is well localized in overridable methods. This is enabled by Self-Calls[WWW].
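
Here is a minimal sketch of that approach, using hypothetical names: the production class delegates the hard-to-test step to a protected method via a Self-Call, and the Test-Specific Subclass overrides only that method.

class InvoiceSender {
   public void send(String invoiceText) {
      String body = formatBody(invoiceText);
      transmit(body);                 // Self-Call to an overridable method
   }

   protected String formatBody(String invoiceText) {
      return "Invoice: " + invoiceText;
   }

   protected void transmit(String body) {
      // production-only logic, e.g. connecting to the mail gateway
   }
}

class InvoiceSenderForTest extends InvoiceSender {
   String lastBody;

   protected void transmit(String body) {
      lastBody = body;                // record the call instead of really transmitting
   }
}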

Cause: For Tests Only

Code exists in the SUT strictly for use by tests.

Symptoms

Some of the methods of the SUT are only used by tests. Some of the attributes are public when they really should be private.

Impact

Software that is added to the SUT For Tests Only makes the SUT more complex. It can confuse potential clients of the software's interface by introducing additional methods that are not intended to be used by any code other than the tests. This code may have been tested only in very specific circumstances and might not work in the typical usage patterns used by real clients.

Root Cause

The test automater may need to add methods to a class to expose information needed by the test, or may add methods that provide more control over initialization (such as for the installation of a Test Double (page X)). Test-driven development will cause these additional methods to be developed even though they aren't really needed by clients. When retrofitting tests onto legacy code, the test automater may need access to information or functionality that is not already exposed.

For Tests Only can also be caused by a SUT that is used asymmetrically in real life. Automated tests (especially a round trip test) typically use software in a more symmetric fashion and hence may need methods that the real clients do not need.

Possible Solution

We can provide tests with access to private information by creating a Test-Specific Subclass of the SUT which has methods to expose the needed attributes or initialization logic. The test needs to be able to create instances of the subclass instead of the SUT class for this to be possible.
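
A hedged sketch, assuming the attribute is declared protected (rather than private) so the subclass can reach it; all names here are illustrative:

class FlightBooking {
   protected String status = "PENDING";
   // ... production behavior that updates status ...
}

// Lives with the test code, not in the production build.
class FlightBookingForTest extends FlightBooking {
   public String getStatus() {
      return status;                  // exposes state for result verification
   }

   public void setStatusForTest(String newStatus) {
      status = newStatus;             // back-door fixture setup
   }
}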

If for some reason the extra methods cannot be moved to a Test-Specific Subclass, they should be clearly labeled For Tests Only. This can be done by adopting a naming convention such as starting the names with "FTO_".

Cause: Test Dependency in Production

Production executables depend on test executables.

Symptoms

We cannot build only the production code; some test code must be included in the build for the production code to compile. Or, we might notice that we cannot run the production code if the test executables are not present.

Impact

Even if there is no test code in production modules, problems can arise if any of the production modules depend on test modules. At minimum, this makes the executable larger even if none of the test code is actually used in production scenarios. It also opens the door to test code being executed accidentally during production.

Root Cause

Test Dependency in Production is usually caused by a lack of attention to inter-module dependencies but it may also be caused by a built-in self test requiring access to test automation infrastructure such as Test Utility Methods (page X) or the Test Automation Framework (page X) to report test results.

Possible Solution

We must manage our dependencies carefully to ensure that no production code depends on test code even for innocuous things like type definitions.

Anything required by both test and production code should live in a production module or class that is accessible to both.
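
For example (a sketch with hypothetical names), an enumeration needed by both the tests and the production code belongs in the production source tree, not in a Test Utility class:

// Defined in the production source tree:
public enum FlightState {
   SCHEDULED, BOARDING, DEPARTED, CANCELLED
}

// Test code may depend on it freely, e.g.
//    assertEquals(FlightState.CANCELLED, flight.getState());
// but nothing in the production build refers to any test class.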

Cause: Equality Pollution

Test-specific equality is implemented in the equals method of the SUT.

Symptoms

Equality Pollution can be hard to spot once it has occurred; the telltale sign is that the SUT itself doesn't actually need the equals method to be implemented. In other cases there may be behavioral symptoms, such as tests starting to fail when the equals method is modified to support the specific needs of another test, or when the definition of equals is changed within the SUT as part of a new feature or user story.

Impact

We may end up writing unnecessary equals methods simply to satisfy tests. Or we may change the definition of equals such that it no longer satisfies the business requirements.

Equality Pollution may make it difficult to introduce the equals logic prescribed by some new requirement if it already exists to support test-specific equality for another test.

Root Cause

Equality Pollution is caused by a lack of awareness of the concept of test-specific equality. Some early versions of dynamic Mock Object (page X) generation tools forced us to use the SUT's definition of equals, which led to Equality Pollution.

Possible Solution

When a test requires test-specific equality we should use a Custom Assertion (page X) instead of modifying the equals method just so that we can use a built-in Equality Assertion (see Assertion Method on page X).
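
A sketch of such a Custom Assertion, assuming a hypothetical Flight class with the obvious getters; the comparison logic stays in the test code rather than in an equals method on the SUT:

static void assertFlightsEqual(Flight expected, Flight actual) {
   assertEquals("origin",        expected.getOrigin(),       actual.getOrigin());
   assertEquals("destination",   expected.getDestination(),  actual.getDestination());
   assertEquals("flight number", expected.getFlightNumber(), actual.getFlightNumber());
}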

When using dynamic Mock Object generation tools we can make sure that we use a Comparator[WWW] rather than relying on the equals method supplied by the SUT. We can also implement the equals method on a Test-Specific Subclass of an Expected Object (see State Verification on page X) to avoid adding it to a production class directly.
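
The latter approach might look like this sketch (the Flight constructor and getters are assumptions for illustration): the test-specific equals lives on a subclass that is used only as the Expected Object, so the production class never acquires an equals method it doesn't need.

class FlightWithEquality extends Flight {
   FlightWithEquality(String origin, String destination) {
      super(origin, destination);
   }

   public boolean equals(Object other) {
      if (!(other instanceof Flight)) return false;
      Flight otherFlight = (Flight) other;
      return getOrigin().equals(otherFlight.getOrigin())
            && getDestination().equals(otherFlight.getDestination());
   }

   public int hashCode() {
      return getOrigin().hashCode();  // keep hashCode consistent with equals
   }
}

// The Expected Object passed to the Equality Assertion is an instance of the subclass:
//    assertEquals(new FlightWithEquality("YYC", "YVR"), actualFlight);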

Further Reading

For Tests Only and Equality Pollution were first introduced in a paper at XP2001 called "Refactoring Test Code" [RTC].
