Faster Tests Without Shared Fixtures
The common first reaction to Slow Tests (page X) is to switch to a Shared Fixture (page X) approach, but several other solutions are available. This sidebar relates experiences from several projects.
Fake Database
On one of our early XP projects, we wrote a lot of tests that accessed the database. At first we used a Shared Fixture, but when we encountered Interacting Tests (see Erratic Test on page X) and later Test Run Wars (see Erratic Test), we changed to a Fresh Fixture (page X) approach. Because the tests needed a fair bit of reference data, they took a long time to run; we found that, on average, for every read or write the system under test (SUT) did to or from the database, the test did several more. It was taking 15 minutes to run the full suite of several hundred tests, and this greatly impeded our ability to integrate quickly and often.
We were using a data access layer to keep the SQL out of our code. We discovered that this layer allowed us to replace the real database with a Fake Database (see Fake Object on page X) that was functionally equivalent. We started by using simple Hashtables to store objects against their keys, which allowed us to run many of our simpler tests "in memory" rather than against the database. That alone bought us a significant drop in test execution time.
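A minimal sketch of the idea, with illustrative names rather than our actual interfaces: the data access layer exposes an interface to the SUT, and the Fake Database is simply a second implementation of that interface backed by an in-memory map.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical domain object; the real ones were of course richer.
    class Person {
        final String name;
        Person(String name) { this.name = name; }
    }

    // The contract the data access layer exposes to the SUT.
    interface PersonDao {
        void save(String key, Person person);
        Person findByKey(String key);
    }

    // The Fake Database: the same contract, backed by a map instead of
    // SQL, so simple tests never touch the real database at all.
    class InMemoryPersonDao implements PersonDao {
        private final Map<String, Person> rows = new HashMap<String, Person>();

        public void save(String key, Person person) {
            rows.put(key, person);
        }

        public Person findByKey(String key) {
            return rows.get(key);
        }
    }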
Our persistence framework supported an object query interface. We were able to build an interpreter of the object queries that ran against our Hashtable database implementation, and that allowed us to get the majority of our tests working entirely in memory. On average, our tests ran about 50 times faster in memory than with the database; for example, a test suite that took 10 minutes to run with the database took 10 seconds in memory.
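As a rough sketch of what such an interpreter amounts to (an assumption about the shape of the idea, not our framework's actual query API): the query's criteria are evaluated one object at a time against the fake database's rows, where the real framework would have generated SQL instead.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Predicate;

    // A much-simplified "object query": a conjunction of predicates
    // interpreted against in-memory rows rather than translated to SQL.
    class InMemoryQuery<T> {
        private final List<Predicate<T>> criteria = new ArrayList<Predicate<T>>();

        InMemoryQuery<T> where(Predicate<T> criterion) {
            criteria.add(criterion);
            return this;
        }

        // Scan the fake database's rows, keeping objects that satisfy
        // every criterion.
        List<T> runAgainst(Map<?, T> table) {
            List<T> matches = new ArrayList<T>();
            for (T candidate : table.values()) {
                boolean allMatch = true;
                for (Predicate<T> criterion : criteria) {
                    if (!criterion.test(candidate)) {
                        allMatch = false;
                        break;
                    }
                }
                if (allMatch) {
                    matches.add(candidate);
                }
            }
            return matches;
        }
    }

A test could then run something like new InMemoryQuery<Person>().where(p -> "Smith".equals(p.name)).runAgainst(rows) where the production code path would have issued a SELECT.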
This approach was so successful that we have reused the same testing infrastructure on many of our subsequent projects. The faked-out persistence framework also means we don't have to bother building a "real database" until our object model stabilizes, which can be several months into the project.
Incremental Speedups using Various Techniques
Ted O'Grady and Joseph King are agile team leads on a large eXtreme Programming project (50+ developers, subject matter experts, and testers). Like many teams building a database-centric application, they were suffering from Slow Tests. They related these experiences to me in e-mails and conversations. As of late 2005, their check-in test suite ran in under eight minutes, compared with about eight hours for a full test run against the database. That is a pretty impressive speed difference. Here is their story:
Currently we have about 6700 tests that we run on a regular basis. We've tried a few things to speed up the tests, and our approach has evolved over time:
- In January 2004, we were running our tests directly against a database via Toplink.
- In June 2004, we modified the application so we could run tests against an in-memory, in-process Java database (HSQL); a minimal sketch of this kind of setup appears after this list. This cut the time to run the tests in half.
- In August 2004, we created a test-only framework that allowed Toplink to work without a database at all. That cut the time to run all the tests by a factor of 10.
- In July 2005, we built a shared "check-in" test execution server that allowed us to run tests remotely. This didn't save any time at first but it has proven to be quite useful nonetheless.
- In July 2005, we also started using a clustering framework that allowed us to run tests distributed across a network. This cut the time to run the tests in half.
- In August 2005, we removed the GUI and Master Data (reference data CRUD) tests from the check-in suite and ran them only from Cruise Control. This cut the time to run by approximately 15-20%.
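To make the June 2004 step concrete: with HSQL, pointing tests at an in-memory, in-process database is mostly a matter of the JDBC URL. The sketch below is an assumption about how such a setup might look; the class name and schema are invented, not the project's actual configuration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Illustrative setup for an in-memory, in-process HSQL database.
    public class InMemoryDatabaseSetup {
        public static Connection openTestDatabase() throws Exception {
            // "mem:" keeps the whole database in process memory; it
            // vanishes when the JVM (or the last connection) goes away.
            Class.forName("org.hsqldb.jdbcDriver");
            Connection conn =
                DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
            // Invented schema purely for illustration; the real
            // project's schema was much larger.
            Statement stmt = conn.createStatement();
            stmt.execute(
                "CREATE TABLE person (id INTEGER PRIMARY KEY, name VARCHAR(100))");
            stmt.close();
            return conn;
        }
    }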
Since May 2004, we have also had Cruise Control run all the tests against the database at regular intervals. The time Cruise Control takes to complete has grown with the number of tests, from an hour to nearly eight hours now.
Whenever the running time has crossed a threshold that (a) prevents developers from running the tests frequently while developing or (b) creates long check-in queues as people wait for the token to check in, we have adapted by experimenting with new techniques. As a rule, we try to keep the running of the tests under 5 minutes, with anything over 8 minutes being a trigger to try something new.
We have thus far resisted the temptation to run only a subset of the tests and instead focused on ways to speed up running all of them. Although, as you can see, we have begun removing tests from the set developers must run continuously (e.g., the 'Master Data' and 'GUI' test suites are no longer required for check-in, as Cruise Control runs them and they cover areas that change infrequently).
Two of the most interesting solutions recently (aside from the in-memory framework) are the test server and the clustering framework.
The test server (named the 'check-in' box here) is actually quite useful and has proven to be reliable and robust. We bought an Opteron box that is roughly twice as fast as the development boxes (really, the fastest box we could find). The server has an account set up for each development machine in the pit. Using the Unix tool 'rsync', the Eclipse workspace is synchronized with the user's corresponding server account filesystem. A series of shell scripts then recreates the database on the server for the remote account and runs all the development tests. When the tests have completed, a list of per-test timings is dumped to the console, along with a MyTestSuite.java class containing all the test failures, which the developer can run locally to fix any tests that have broken. The biggest advantage of the remote server is that it makes running a large number of tests feel fast again, because the developer can continue working while waiting for the test server's results to come back.
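The generated MyTestSuite.java is essentially just a JUnit suite that bundles the failed tests so the developer can re-run exactly those locally. A hypothetical example of what such a generated file might look like (the two failing test class names are invented stand-ins for whatever TestCase subclasses actually failed on the server):

    import junit.framework.Test;
    import junit.framework.TestSuite;

    // Hypothetical generated suite: one addTestSuite() line per test
    // class that failed during the check-in run.
    public class MyTestSuite {
        public static Test suite() {
            TestSuite suite = new TestSuite("Failures from the check-in run");
            suite.addTestSuite(CustomerPersistenceTest.class);
            suite.addTestSuite(OrderQueryTest.class);
            return suite;
        }
    }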
The clustering framework (based on Condor) was quite fast, but it had the defect that it had to ship the entire workspace (11 MB) to every node on the network (20 of them), which had a significant cost, especially when a dozen pairs were using it. In comparison, the test server uses 'rsync', which copies only the files that are new or different in the developer's workspace. The clustering framework also proved to be less reliable than the server solution, frequently not returning any status for the test run, and some tests would not run reliably on it. Since it gave us roughly the same performance as the check-in test server, we have put this solution on the back burner.
Further Reading
A more detailed description of the first experience can be found in this paper: http://FasterTestsPaper.gerardmeszaros.com