
Slow Tests

The book has now been published and the content of this chapter has likely changed substantially.
Please see page 253 of xUnit Test Patterns for the latest information.

The tests take too long to run.

Symptoms

The tests take long enough to run that developers don't run them every time they make a change to the system under test (SUT). Instead, they wait until the next coffee break or other interruption before running them. Or, whenever they do run the tests, they walk around and chat with other team members (or play Doom, or surf the Internet, or ...).

Impact

Slow Tests obviously have a direct cost because they reduce the productivity of the person running the test. When we are test-driving the code, we waste precious seconds every time we run our tests; when it is time to run all the tests before we commit our changes, the wait is even longer. Slow Tests also have many indirect costs.

A common reaction to Slow Tests is to immediately go for a Shared Fixture (page X) but this almost always results in other problems including Erratic Tests (page X). A better solution is to use a Fake Object (page X) to replace slow components (such as the database) with faster ones. However, if all else fails and we must use some kind of Shared Fixture, we should try making it immutable if at all possible.

Trouble-Shooting Advice

Slow Tests can be caused either by the way the SUT is built and tested or by the way the tests are designed. Sometimes the problem is completely evident just from watching the green bar grow as we run the tests. There may be obvious pauses in the execution; we may see explicit delays coded in a Test Method (page X). If, however, the cause is not obvious, we can try running different subsets of tests (or sub-suites) to see which ones run quickly and which ones take a long time to run.

A profiling tool can come in handy to see where we are spending all the time, but xUnit gives us a simple means to build our own mini profiler: we can edit the setUp and tearDown methods of our Testcase Superclass (page X) and write out the start/end time or duration into a log file along with the name of the Testcase Class (page X) and Test Method. Then we import this file into a spreadsheet, sort by duration and voila, we have found the culprits. The tests with the longest execution times are the ones where it will be most worthwhile focusing our efforts.
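
Here's a rough sketch of what such a home-grown mini profiler might look like as a Testcase Superclass in JUnit 3. The TimedTestCase name, the CSV format and the test-timings.csv file name are just illustrative choices, not part of the pattern:

import junit.framework.TestCase;
import java.io.FileWriter;
import java.io.IOException;

public abstract class TimedTestCase extends TestCase {
   private long startTime;

   protected void setUp() throws Exception {
      super.setUp();
      startTime = System.currentTimeMillis();   // remember when this test started
   }

   protected void tearDown() throws Exception {
      long duration = System.currentTimeMillis() - startTime;
      // one CSV row per test: class name, test method name, duration in milliseconds
      log(getClass().getName() + "," + getName() + "," + duration);
      super.tearDown();
   }

   private void log(String line) throws IOException {
      FileWriter writer = new FileWriter("test-timings.csv", true);   // append mode
      try {
         writer.write(line + "\n");
      } finally {
         writer.close();
      }
   }
}

Any Testcase Class that extends this superclass gets its timings logged automatically; the spreadsheet does the rest.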

Causes

The specific cause of the Slow Tests could be either in the way we built the SUT or in how we coded the tests themselves. Sometimes, the way the SUT was built forces us to write our tests in a way that makes them slow. This is particularly a problem with legacy code or code that was built "test last".

Cause: Slow Component Usage

A component of the SUT has high latency.

Root Cause

The single most common cause of Slow Tests is interacting with a database in many of the tests. Tests that have to write to a database to set up the fixture and read a database to verify the outcome (a form of Back Door Manipulation (page X)) take about 50 times longer to run than the same test running against in-memory data structures. This is a special case of the more general problem of using slow components.

Possible Solution

We can make our tests run much faster by replacing the slow components with a Test Double (page X) that provides near-instantaneous responses. When the slow component is the database, the use of a Fake Database (see Fake Object) can make the tests run on average fifty times faster! See the sidebar Faster Tests Without Shared Fixtures (page X) for other ways to skin this cat.
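
As a rough sketch of what this substitution can look like, here is a hand-rolled in-memory Fake Object standing in for a database-backed repository. The CustomerRepository interface and the class names are purely illustrative:

import java.util.HashMap;
import java.util.Map;

// The interface through which the SUT reaches its persistence layer.
interface CustomerRepository {
   void save(String id, String customerData);
   String findById(String id);
}

// Fake Object: a functional but lightweight in-memory stand-in for the
// real database-backed implementation; no connections, no disk I/O.
class InMemoryCustomerRepository implements CustomerRepository {
   private final Map<String, String> customers = new HashMap<String, String>();

   public void save(String id, String customerData) {
      customers.put(id, customerData);
   }

   public String findById(String id) {
      return customers.get(id);
   }
}

Because both the tests and the SUT talk to the interface, swapping in the fake is just a matter of passing it to the SUT instead of the real repository.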

Cause: General Fixture

Symptoms

Tests are consistently slow because each test builds the same fixture.

Root Cause

Each test is constructing a large General Fixture each time a Fresh Fixture (page X) is built. Because a General Fixture contains many more objects than a Minimal Fixture (page X), it naturally takes longer to construct. Since Fresh Fixture involves setting up a brand-new instance of the fixture for each Testcase Object (page X), multiply "longer" by the number of tests to get an idea of the magnitude of the slowdown!

Possible Solution

The first inclination is often to implement the General Fixture as a Shared Fixture to avoid rebuilding it for each test but unless we can make this Shared Fixture immutable, this is likely to lead to Erratic Tests and so should be avoided. A better solution is to reduce the amount of fixture being set up by each test.
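
For example, a test that only verifies line-item counting needs only an invoice, not the customers, product catalogue and tax tables a General Fixture typically drags in. Here's a minimal sketch; the tiny Invoice class exists only to keep the example self-contained:

import junit.framework.TestCase;
import java.util.ArrayList;
import java.util.List;

// Stand-in domain class, included only to make the sketch compile.
class Invoice {
   private final List<String> lineItems = new ArrayList<String>();
   void addLineItem(String description) { lineItems.add(description); }
   int getLineItemCount() { return lineItems.size(); }
}

public class InvoiceTest extends TestCase {
   public void testAddLineItem_oneItem() {
      // Minimal Fixture: build only the one object this test verifies.
      Invoice invoice = new Invoice();
      // Exercise:
      invoice.addLineItem("widget");
      // Verify:
      assertEquals(1, invoice.getLineItemCount());
   }
}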

Cause: Asynchronous Test

Symptoms

A few tests take inordinately long to run; those tests contain explicit delays.

Root Cause

Delays included within a Test Method will slow down test execution considerably. They may be necessary when the software we are testing spawns threads or processes (Asynchronous Code (see Hard-to-Test Code on page X)) and the test needs to wait for them to launch and run before verifying whatever side effects they were expected to have. Because of the variability in how long it takes for processes (etc.) to be started, the test usually needs to include quite a long delay "just in case" to ensure it passes consistently. Here's an example of a test with delays:

import junit.framework.TestCase;

public class RequestHandlerThreadTest extends TestCase {
   private static final int TWO_SECONDS = 2000;

   public void testWasInitialized_Async() throws InterruptedException {
      // Setup:
      RequestHandlerThread sut = new RequestHandlerThread();
      // Exercise:
      sut.start();
      // Verify:
      Thread.sleep(TWO_SECONDS);
      assertTrue(sut.initializedSuccessfully());
   }

   public void testHandleOneRequest_Async() throws InterruptedException {
      // Setup:
      RequestHandlerThread sut = new RequestHandlerThread();
      sut.start();
      // Exercise:
      enqueRequest(makeSimpleRequest());
      // Verify:
      Thread.sleep(TWO_SECONDS);
      assertEquals(1, sut.getNumberOfRequestsCompleted());
      assertResponseEquals(makeSimpleResponse(), getResponse());
   }
}
Example SlowAsynchronousTests embedded from java/com/xunitpatterns/dft/RequestHandlerThreadTest.java

Impact

A two second delay may not seem like a big deal. But consider what happens when we have a dozen such tests. It would take us almost half a minute to run these tests. Contrast this with normal tests; we can run several hundred of those each second.

Possible Solution

The best way to address this is to avoid the asynchronicity in tests by testing the logic synchronously. This may require us to do an Extract Testable Component (page X) refactoring to implement a Humble Executable (see Humble Object on page X).
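
Here's a rough sketch of where that refactoring leads: the request-handling logic is pulled out into a plain class that can be exercised synchronously, while RequestHandlerThread shrinks to a Humble Executable that merely dequeues requests and delegates to it. The RequestHandler class and its handleRequest method are hypothetical, not taken from the book's example code:

import junit.framework.TestCase;

// Extracted Testable Component holding the logic that used to live
// inside the thread's run loop.
class RequestHandler {
   String handleRequest(String request) {
      // ... the real request processing would go here ...
      return "response to " + request;
   }
}

public class RequestHandlerTest extends TestCase {
   public void testHandleOneRequest_Sync() {
      RequestHandler sut = new RequestHandler();
      // Exercise the logic directly; no thread, no queue, no Thread.sleep().
      String response = sut.handleRequest("simple request");
      // Verify:
      assertEquals("response to simple request", response);
   }
}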

Cause: Too Many Tests

Symptoms

There are so many tests that they are bound to take a long time to run regardless of how fast they execute.

Root Cause

The obvious cause is having so many tests. This could be because we have such a large system that there really are that many tests or it could be because we have too much overlap between tests.

The less obvious cause is that we are running too many of the tests too frequently!

Possible Solution

We don't have to run all the tests all the time! The key is to ensure that all the tests get run regularly. If the entire suite is taking too long to run, consider creating a Subset Suite (see Named Test Suite on page X) with a suitable cross-section of tests and run this before every commit. The rest of the tests can be run regularly, but less often, by scheduling them to run overnight or at some other convenient time. Some people call this a "build pipeline". For more on this and other ideas see the sidebar Faster Tests Without Shared Fixtures.
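
Here's a minimal sketch of such a Subset Suite in JUnit 3; the suite name and the test classes it pulls in are illustrative:

import junit.framework.Test;
import junit.framework.TestSuite;

// Subset Suite run before every commit; the slow database and
// end-to-end suites are left to the overnight build.
public class CommitTests {
   public static Test suite() {
      TestSuite suite = new TestSuite("Pre-commit subset");
      suite.addTestSuite(InvoiceTest.class);
      suite.addTestSuite(RequestHandlerTest.class);
      // Deliberately omitted: the slow database-backed and end-to-end tests.
      return suite;
   }
}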

If the system is large, it is good to break it into a number of fairly independent subsystems or components. This allows teams working on each component to work independently and to run only the tests for their own component. Some of those tests should act as proxies for how the other components would use this component; they will need to be kept up to date if the interface contract is changed. Hmmm, Tests as Documentation (see Goals of Test Automation on page X); I like it! There would also be some end-to-end tests that exercise all the components together (likely a form of story tests) but these don't need to be included in the pre-commit suite.


