Production Bugs
We find too many bugs during formal test or in production.
Symptoms
We have put all this effort into writing automated tests and yet the number of bugs showing up in formal (a.k.a. system) testing or production is too high.
Impact
It takes longer to troubleshoot and fix bugs found in formal test than those found during development, and even longer for bugs found in production. This may force us to delay shipping the product or putting the application into production to allow time for bug fixes and retesting. The time and effort translate directly into monetary cost and consume resources that could otherwise be spent adding functionality to the product or building other products. The delay may also hurt the credibility of the organization in the eyes of its customers. Poor quality has an indirect cost as well because it lowers the value of the product or service we supply.
Causes
Bugs get through to production for one of several reasons. They can be caused by Infrequently Run Tests or by Untested Code; the latter can be caused by Missing Unit Tests or Lost Tests.
By "enough tests", I'm not referring to the count but rather the test coverage. Changes to Untested Code are more likely to result in Production Bugs because there are no automated tests to tell the developer when they have introduced a problem. Untested Requirements aren't being verified every time the tests are run. So we don't know for sure that it is working. Both of these are related to Developers Not Writing Tests (page X).
Cause: Infrequently Run Tests
Symptoms
We hear that our developers aren't running the tests very often. When we ask some questions, we discover that running the tests takes too long (Slow Tests (page X)) or produces too many extraneous failures (Buggy Tests (page X)).

We are seeing test failures in the daily Integration Build [SCM]. When we dig deeper, we find that developers often commit their code without running the tests on their own machines.
Root Cause
Once they have seen the benefits of working with the safety net of automated tests, most developers will keep running them unless something gets in the way. The most common impediments are Slow Tests that slow down integration and Unrepeatable Tests (see Erratic Test on page X) that force developers to restart their test environment or do Manual Intervention (page X) before running the tests.
Possible Solution
If the root cause is Unrepeatable Tests, we can try switching to a Fresh Fixture (page X) strategy to make the tests more deterministic. If the cause is Slow Tests, we will have to put more effort into speeding up the test run.
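A Fresh Fixture is often just a matter of constructing everything each test needs inside setUp() instead of reusing shared objects. The following sketch illustrates the shape of that strategy; the InMemoryFlightRepository and FlightBookingService classes are hypothetical stand-ins, not part of the examples in this chapter.

import junit.framework.TestCase;

public class FlightBookingTest extends TestCase {
   // Hypothetical collaborators; the point is that both are rebuilt per test.
   private InMemoryFlightRepository repository;
   private FlightBookingService service;

   protected void setUp() throws Exception {
      // Fresh Fixture: every Test Method gets a brand-new fixture, so no
      // state leaks between tests and each test is repeatable in isolation.
      repository = new InMemoryFlightRepository();
      service = new FlightBookingService(repository);
   }

   public void testBookFlight_addsOneReservation() {
      service.bookFlight("AC101");
      assertEquals(1, repository.reservationCount());
   }
}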
Cause: Lost Test
Symptoms
The number of tests being executed in a test suite has dropped (or has not increased as much as expected). We may notice this directly if we are paying attention to test counts, or we may find a bug that should have been caught by a test we know exists, only to discover upon poking around that the test has been disabled.
Root Cause
Lost Tests can be caused by either a Test Method (page X) or a Testcase Class (page X) that has been disabled or has never been added to the AllTests Suite (see Named Test Suite on page X).
Tests can be accidentally left out of a test suite (i.e. never run) by:
- forgetting to add the [Test] attribute to the Test Method or using a method name that doesn't match the naming convention used by Test Discovery (page X);
- forgetting to add a call to suite.addTest to add the Test Method to the Test Suite Object (page X) when automating tests in a Test Automation Framework (page X) that only supports Test Enumeration (page X);
- forgetting to call the Test Method explicitly in the Test Suite Procedure (see Test Suite Object) in procedural-language variations of xUnit;
- forgetting to add the test suite to the Suite of Suites (see Test Suite Object) or forgetting to add the [TestFixture] attribute to the Testcase Class.
Tests that used to be run may have been disabled by:
- renaming the Test Method so that it no longer matches the pattern that causes Test Discovery to include it in the test suite (e.g. a method name starting with "test...");
- adding an [Ignore] attribute in variants of xUnit that use method attributes to indicate Test Methods;
- commenting out (or deleting) the code that explicitly adds the test (or its suite) to the enclosing suite.
Typically this occurs when a test is failing and someone disables it to avoid having to wade through the failing tests when running other tests, although it may also happen accidentally.
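With JUnit-style Test Enumeration, for example, a single commented-out line is all it takes to turn an entire Testcase Class into a Lost Test. The suite below is a hypothetical illustration; the class names are borrowed from examples later in this chapter.

import junit.framework.Test;
import junit.framework.TestSuite;

// Hypothetical AllTests suite built by Test Enumeration: every Testcase Class
// must be added by hand, so a commented-out (or forgotten) line silently
// becomes a Lost Test with no failure to draw attention to it.
public class AllTests {
   public static Test suite() {
      TestSuite suite = new TestSuite("All tests");
      suite.addTestSuite(FlightManagementFacadeTest.class);
      // suite.addTestSuite(TimeDisplayTest.class);  // disabled while failing -- now a Lost Test!
      return suite;
   }
}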
Possible Solution
There are a number of ways to avoid introducing Lost Tests.
We can use a Single Test Suite (see Named Test Suite) to run a single Test Method instead of disabling the failing or slow test. We can use the Test Tree Explorer (see Test Runner on page X) to drill down and run a single test from within a test suite. Both of these techniques are made difficult by Chained Tests (page X) -- a deliberate form of Interacting Tests (see Erratic Test) -- so this is just one more reason to avoid them.
If our variant of xUnit supports it, we can use the provided mechanism for ignoring a test (e.g. NUnit lets us put the [Ignore] attribute on a Test Method to keep it from being run). The Test Runner will typically report the number of tests not being run so that we don't forget to re-enable them. We can also configure our continuous integration tool to fail the build if the number of ignored tests rises above a certain threshold.
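In Java, JUnit 4 offers a comparable mechanism: marking a Test Method with @Ignore keeps it out of the run while the Test Runner still counts and reports it as ignored. A minimal sketch, with a hypothetical test method:

import org.junit.Ignore;
import org.junit.Test;

public class IgnoreExampleTest {
   // The runner skips this method but reports it as ignored, so it remains
   // visible instead of silently becoming a Lost Test.
   @Ignore("Disabled until the flight-removal logging is fixed")
   @Test
   public void removedFlightIsLogged() {
      // test body omitted for brevity
   }
}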
We can compare the number of tests we have after check-in with the number that existed in the code branch immediately before we started integration. We simply verify that it has increased by the number of tests we have added.
We can implement or take advantage of Test Discovery if our programming language supports reflection.
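JUnit itself does this at the Test Method level: the TestSuite(Class) constructor reflects over a Testcase Class and adds every public, no-argument method whose name starts with "test", so a newly written Test Method cannot be forgotten. A minimal sketch reworking the AllTests suite from the earlier example:

import junit.framework.Test;
import junit.framework.TestSuite;

public class AllTests {
   public static Test suite() {
      TestSuite suite = new TestSuite("Discovered tests");
      // Method-level Test Discovery: each TestSuite(Class) call finds the
      // test methods by reflection; only the classes are enumerated by hand.
      suite.addTest(new TestSuite(FlightManagementFacadeTest.class));
      suite.addTest(new TestSuite(TimeDisplayTest.class));  // hypothetical class
      return suite;
   }
}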
We can use a different strategy for finding the tests to run in the Integration Build. Some build tools (such as Ant) let us find all files that match a name pattern (such as ending in "Test"). We won't lose entire test suites if we use this capability to pick up all the tests.
Cause: Missing Unit Test
Symptoms
All the unit tests pass but a customer test is still failing. At some point, the customer test was made to pass but no unit tests were written to verify the behavior of the individual classes. Then, a subsequent code change modified the behavior of one of the classes and that broke the functionality.
Root Cause
Missing Unit Tests often happen when a team focuses on writing customer tests but fails to do test-driven development using unit tests. The team gets enough functionality built to pass the customer tests, but a subsequent refactoring breaks it. Unit tests would likely have prevented the code change from making it into the Integration Build.
Missing Unit Tests can also happen during test-driven development when someone gets ahead of themselves and writes some code without having a failing test to guide them.
Possible Solution
The trite answer is to write more unit tests but this is easier said than done and isn't always effective. Doing true test-driven development is the best way to avoid having Missing Unit Tests without writing unnecessary tests merely to get the test count up.
Cause: Untested Code
Symptoms
We may just "know" that some piece of code in the system under test (SUT) is not being exercised by any tests. This may be because we have never seen it execute or we may have used code coverage tools to prove it beyond a doubt. In the following example, how can we test that when the timeProvider throws an exception it is handled correctly?
public String getCurrentTimeAsHtmlFragment() throws TimeProviderEx {
   Calendar currentTime;
   try {
      currentTime = getTimeProvider().getTime();
   } catch (Exception e) {
      return e.getMessage();
   }
   // etc.

Example UntestedCode embedded from java/com/clrstream/ex7/TimeDisplay.java
Root Cause
The most common cause of Untested Code is that the SUT has code paths that react to particular ways that a depended-on component behaves and we haven't found a way to exercise those paths. Typically, the depended-on component is being called synchronously and either returns certain values or throws exceptions. During normal testing, only a subset of the possible equivalence classes of indirect inputs are actually encountered.
Another common cause is an incomplete test suite resulting from an incomplete characterization of the functionality exposed via the SUT's interface.
Possible Solution
If the Untested Code is caused by an inability to control the indirect inputs of the SUT, the most common solution is to use a Test Stub (page X) to feed the various kinds of indirect inputs into the SUT to cover all the code paths. Otherwise, it may be sufficient to configure the depended-on component to cause it to return the various indirect inputs required to fully test the SUT.
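For the TimeDisplay example above, a hand-built Test Stub might look like the sketch below. It assumes that TimeProvider is an interface with a getTime() method returning a Calendar, and that TimeDisplay exposes a setTimeProvider() setter for injecting the stub; both are assumptions made purely for illustration.

import java.util.Calendar;
import junit.framework.TestCase;

public class TimeDisplayExceptionTest extends TestCase {

   // Hand-built Test Stub: feeds the "provider fails" indirect input into the
   // SUT. An unchecked exception is used to avoid guessing at the constructors
   // of TimeProviderEx.
   static class ExceptionThrowingTimeProviderStub implements TimeProvider {
      public Calendar getTime() {
         throw new RuntimeException("Sample TimeProvider failure");
      }
   }

   public void testGetCurrentTimeAsHtmlFragment_providerThrows() throws Exception {
      // Setup: install the stub in place of the real time provider
      // (assumes a setTimeProvider() setter on TimeDisplay).
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(new ExceptionThrowingTimeProviderStub());
      // Exercise: the catch block should turn the exception into the returned text
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify: the otherwise-untested error-handling path was exercised
      assertEquals("Sample TimeProvider failure", result);
   }
}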
Cause: Untested Requirement
Symptoms
We may just "know" that some piece of functionality is not being tested. Or, we may be trying to test a piece of software but cannot see any visible functionality that can be tested via the public interface of the software. All the tests we have written do pass.
When doing test-driven development we know we need to add some code to handle a requirement but we cannot find a way to express the need for code to log the action in a Fully Automated Test (see Goals of Test Automation on page X) such as this:
public void testRemoveFlight() throws Exception {
   // setup
   FlightDto expectedFlightDto = createARegisteredFlight();
   FlightManagementFacade facade = new FlightManagementFacadeImpl();
   // exercise
   facade.removeFlight(expectedFlightDto.getFlightNumber());
   // verify
   assertFalse("flight should not exist after being removed",
               facade.flightExists(expectedFlightDto.getFlightNumber()));
}

Example UntestedRequirementTest embedded from java/com/clrstream/ex8/test/FlightManagementFacadeTest.java
Note that this test does not verify that the correct logging action has been performed; it will pass whether or not the logging was implemented correctly, or at all. Here is the code that the test exercises, complete with an indirect output of the SUT that has not been implemented correctly.
public void removeFlight(BigDecimal flightNumber) throws FlightBookingException {
   System.out.println(" removeFlight(" + flightNumber + ")");
   dataAccess.removeFlight(flightNumber);
   logMessage("CreateFlight", flightNumber);   // Bug!
}

Example UntestedRequirement embedded from java/com/clrstream/ex8/FlightManagementFacadeImpl.java
If we plan to depend on the information captured by logMessage when maintaining the application in production, how can we ensure that it is correct? Clearly, it is desirable to have automated tests for this functionality.
Impact
Part of the required behavior of the SUT could be accidentally disabled without causing any tests to fail. Buggy software could be delivered to the customer. The fear of introducing bugs could discourage ruthless refactoring or the deletion of code suspected to be unneeded (dead code).
Root Cause
The most common cause of Untested Requirements is that the SUT has behavior that is not visible through its public interface. It may have expected "side effects" that cannot be observed directly by the test (such as writing out a file or record, or calling a method on another object or component). We call these side effects indirect outputs.
When the SUT is an entire application, the Untested Requirement may be a result of not having a full suite of customer tests that verify all aspects of the visible behavior of the SUT.
Possible Solution
If the problem is missing customer tests, we need to write at least enough customer tests to ensure that all components are integrated properly. This may require improving the design-for-testability of the application by separating the presentation layer from the business logic layer.
When we have indirect outputs that we need to verify, we can do Behavior Verification (page X) through the use of Mock Objects (page X). Testing of indirect outputs is covered in the Using Test Doubles narrative.
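For the logging requirement above, a hand-coded Mock Object might look like the sketch below. It assumes that FlightManagementFacadeImpl routes its logMessage() call through a replaceable AuditLog collaborator that can be installed via a setAuditLog() setter; the AuditLog interface, the setter, and the method signature are all assumptions made for illustration.

import java.math.BigDecimal;
import junit.framework.Assert;
import junit.framework.TestCase;

public class FlightRemovalLoggingTest extends TestCase {

   // Hand-built Mock Object for the assumed AuditLog collaborator: it records
   // the indirect output (the log call) so the test can verify it afterwards.
   static class AuditLogMock implements AuditLog {
      private String loggedAction;
      private BigDecimal loggedFlightNumber;

      public void logMessage(String action, BigDecimal flightNumber) {
         this.loggedAction = action;
         this.loggedFlightNumber = flightNumber;
      }

      void verifyRemovalWasLogged(BigDecimal expectedFlightNumber) {
         Assert.assertEquals("logged action", "RemoveFlight", loggedAction);
         Assert.assertEquals("logged flight number", expectedFlightNumber, loggedFlightNumber);
      }
   }

   public void testRemoveFlight_logsTheRemoval() throws Exception {
      // setup: createARegisteredFlight() is the same Creation Method used by
      // testRemoveFlight() above; the mock replaces the real audit log.
      FlightDto expectedFlightDto = createARegisteredFlight();
      FlightManagementFacadeImpl facade = new FlightManagementFacadeImpl();
      AuditLogMock auditLog = new AuditLogMock();
      facade.setAuditLog(auditLog);   // assumed injection point
      // exercise
      facade.removeFlight(expectedFlightDto.getFlightNumber());
      // verify the indirect output; this would catch the "CreateFlight" bug above
      auditLog.verifyRemovalWasLogged(expectedFlightDto.getFlightNumber());
   }
}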
Cause: Neverfail Test
Symptoms
We may just "know" that some piece of functionality is not working but the tests for that functionality are passing nonetheless. When doing test-driven development we have added a test for functionality we have not yet written but we cannot get it to fail.
Impact
If a test won't fail even when the code to implement the functionality doesn't exist, how useful is it for Defect Localization (see Goals of Test Automation)? (Not very!)
Root Cause
This can be caused by improperly coded assertions, such as writing assertTrue(aVariable, true) instead of assertEquals(aVariable, true); in the first form the literal true is what actually gets asserted, so the assertion can never fail regardless of what aVariable holds. Another cause is more sinister:
When we have asynchronous tests, failures thrown in the other thread or process may not be seen or reported by the Test Runner.
Possible Solution
We can implement cross-thread failure detection mechanisms to ensure that asynchronous tests do indeed fail, but a better solution is to refactor the code to support a Humble Executable (see Humble Object on page X).
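A minimal sketch of such a cross-thread failure detection mechanism is shown below; the worker's task is left as a placeholder. Any Throwable raised on the worker thread is captured and re-thrown on the test thread so that the Test Runner actually sees the failure.

import junit.framework.TestCase;

public class AsynchronousFailureTest extends TestCase {

   public void testWorkDoneOnAnotherThread() throws Throwable {
      // Holder for any failure raised on the worker thread
      final Throwable[] workerFailure = new Throwable[1];

      Thread worker = new Thread(new Runnable() {
         public void run() {
            try {
               // ... exercise the asynchronous SUT and make assertions here ...
            } catch (Throwable t) {
               // Capture the failure instead of letting the thread swallow it
               workerFailure[0] = t;
            }
         }
      });
      worker.start();
      worker.join();

      // Re-throw on the test thread; without this step the failure would be
      // lost and the test would become a Neverfail Test.
      if (workerFailure[0] != null) {
         throw workerFailure[0];
      }
   }
}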