Introduction
Testing is an important part of all software development, yet one that is so often overlooked or skimped on. Why is this? Perhaps it's because testing software is not considered exciting, or perhaps because it's not trivial. If we're honest with ourselves, it's impossible to write a perfect set of tests: the only way of knowing a test works is when it shows that the software doesn't. Passing all the tests does not mean the software is perfect; it may just mean your tests aren't good enough.
As the title suggests, I'm going to look solely at unit testing, as it's currently something I'm focused on, having adopted Extreme Programming (XP) [XP]. So what is unit testing? To us it's testing individual software units to prove that they work as specified. In our case, tests may well form part of that specification, as XP advocates Test First programming, whereby we write our tests before writing the code, and we stop writing the code when all the tests pass. As such, tests are both a way of ensuring quality and a way of measuring the status of our development. It's always worth remembering (and reminding the boss) that the sooner problems are found, the cheaper they are to fix. Tests formalise the testing process, moving it from just running the debugger and seeing what happens to a structured, repeatable process. To encourage regular running of the tests, they should be quick and easy to use; in other words, fully automated. This will help you to get unit testing accepted as part of your personal and company development culture.
Prerequisites
Ok, so you see a benefit to unit testing, or at least I assume you do, otherwise you'd probably have stopped reading by now. What do we need to begin developing effective unit tests? Happily not much; consider the following example:
bool isZero(long value)
{
    return value == 0;
}
Which we can test with the following code:
#include <iostream>

int main()
{
    if ((isZero(0)) && (!isZero(1)))
    {
        std::cout << "Passed." << std::endl;
    }
    else
    {
        std::cout << "Failed." << std::endl;
    }
}
However, not all code is as simple to test, nor do we want to have to repeat the basic features that all tests share. The ability to report the success or failure of a test is the most obvious feature we'll need again. We'll also probably want to test the function with more than two inputs, and a massive conditional statement is not clean, maintainable or readable. It would be very nice to know which part of the test failed, and which resulted in an error. It would be useful if we were able to run the tests on application start-up, choosing to run all the tests or a specified subset. Finally, unlike many C++ testing frameworks, we won't assume that the user will only ever test classes.
Building our Framework
Let's start by changing the code to meet our first requirement: easy testing with multiple inputs. First we'll add a function to determine whether the test succeeded, and to record the result:
void Test(bool eval, bool& result)
{
    if (result)
    {
        result = eval;
    }
}
I've chosen to implement it this way so that a pass after a failure will not overwrite the failure; in other words, if result is already false we don't lose the failure when the next test passes. Note that this means result must start out as true. We can then change main to the following:
int main()
{
    long const max_test = 100;
    bool result = true; // starts as passed; Test() can only ever clear it
    Test(isZero(0), result);
    for (int i = 1; i < max_test; ++i)
    {
        Test(!isZero(i), result);
    }
    if (result)
    {
        std::cout << "Passed." << std::endl;
    }
    else
    {
        std::cout << "Failed." << std::endl;
    }
}
This allows us to test the function for potentially all possible positive values of a long (if we changed max_test to std::numeric_limits<long>::max()). Ok, so we're now able to run multiple tests, but should one fail it would be very helpful to know which part failed. So how could we do that? Well, we could stop on the first failure, but we probably don't want to stop the entire unit test, so it's time to break our test down a little and practise some of our procedural/object-oriented design. We can start by changing the result to a structure as follows:
struct test_result
{
    bool passed;
    unsigned long line;
    std::string file;
};
We will then change the test function to set these additional values:
void Test(bool eval, const char* file, unsigned long line, test_result& result)
{
    result.passed = eval;
    if (!result.passed)
    {
        result.file = file;
        result.line = line;
    }
}
We'll also need to call the function differently (I've changed it to only print out failures, to reduce the clutter, and introduced a failed test) so our final program becomes:
#include <iostream>
#include <string>

// . . . code . . .

int main()
{
    long const max_test = 100;
    test_result results[max_test];
    Test(!isZero(0), __FILE__, __LINE__, results[0]); // fails: 0 is zero!
    for (int i = 1; i < max_test; ++i)
    {
        Test(!isZero(i), __FILE__, __LINE__, results[i]);
    }
    for (int i = 0; i < max_test; ++i)
    {
        if (!results[i].passed)
        {
            std::cout << "Test failed in file " << results[i].file
                      << " on line " << results[i].line << std::endl;
        }
    }
}
Ok, so far so good. However, it's rather tedious to have to add __FILE__, __LINE__ to each call, and it's not terribly pretty either, so I'm going to pull them out and use a macro (don't look so horrified) to save us the effort. We'll call the macro ASSERT_TEST(), just because that's a common naming style in testing frameworks. We'll define it like so:
#define ASSERT_TEST(condition, result) \
    Test(condition, __FILE__, __LINE__, result)
However, having decided to use a macro we can now use a bit of magic to get the actual code that failed and print that as part of our diagnostics, so here's the new macro, with the test_result structure changed to accommodate the new information and the Test() function renamed to assertImpl() :
#include <iostream>
#include <string>

struct test_result
{
    bool passed;
    unsigned long line;
    std::string file;
    std::string code;
};

bool isZero(long value)
{
    return value == 0;
}

void assertImpl(bool eval, const char* code, const char* file,
                unsigned long line, test_result& result)
{
    result.passed = eval;
    if (!result.passed)
    {
        result.file = file;
        result.line = line;
        result.code = code;
    }
}

#define ASSERT_TEST(condition, result) \
    assertImpl(condition, #condition, __FILE__, __LINE__, result)

int main()
{
    long const max_test = 100;
    test_result results[max_test];
    ASSERT_TEST(!isZero(0), results[0]); // fails: 0 is zero!
    for (int i = 1; i < max_test; ++i)
    {
        ASSERT_TEST(!isZero(i), results[i]);
    }
    for (int i = 0; i < max_test; ++i)
    {
        if (!results[i].passed)
        {
            std::cout << "Test " << results[i].code
                      << " failed in file " << results[i].file
                      << " on line " << results[i].line << std::endl;
        }
    }
}
Having used our little bit of macro magic (for those who feel it's voodoo, read [KandR88] page 90), it's time to start thinking about how we can scale this up.
Refactoring into Classes
At this stage it is worth reviewing our requirements for a C++ testing framework:
- We need to know if a test failed.
- If a test failed we want the code for the test, the name of the file it is in, and the line it is on.
- We want to be able to test multiple conditions in each test; failure of any single condition is a failure for the test.
- A failed test should not stop the rest of the tests from running.
- We want a report on the test results, after all tests have run.
- We may need to be able to set up some data before a test, and destroy it afterwards.
- We should cope with exceptions.
- The testing framework must be easy to use; as part of this we will implement our code in the test namespace.
We might also want the following information at some stage in the future:
- Duration of each test.
- Free/used memory before and after the test.
- Logging of results to file.
Ok, let's take these one at a time, starting with the easiest: the test result. We've already solved this the easy way with our test_result structure. So we'll dive right in and do the whole lot:
class TestResultCollection
{
public:
    void error(const std::string& err);
    void fail(const std::string& code, const char* file, size_t line);
    unsigned long failedCount() const;
    void reportError(std::ostream& out) const;
    void reportFailures(std::ostream& out) const;

private:
    class TestResult
    {
    public:
        explicit TestResult(const std::string& code, const char* file,
                            size_t line);
        void report(std::ostream& out) const;

    private:
        std::string code_;
        std::string file_;
        unsigned long line_;
    };

    typedef std::list<TestResult> results_collection;
    typedef results_collection::iterator iterator;
    typedef results_collection::const_iterator const_iterator;

    results_collection results_;
    std::string error_;
};
To save space I'm not going to detail each function here, as a full description is available in the code [CppUTF].
As each test class may evaluate multiple expressions, we're going to need to store more than one result for each test, so we use the class TestResultCollection to provide the interface to a test's results. It in turn stores a TestResult for each failure or error (we don't record passes; a pass is determined by the absence of a failure).
An error is defined as an exception that escapes the test code, or a failure of the setUp() or tearDown() methods, which are explained later. A failure is an expression that evaluates to false inside an IS_TRUE(), or one that evaluates to true inside an IS_FALSE(); IS_TRUE and IS_FALSE are also explained later.
Next we need a base class for tests, which will define our common interface to each test:
class Testable
{
public:
    Testable(const std::string& name);
    virtual ~Testable() = 0;
    virtual bool setUp();
    virtual void run();
    virtual bool tearDown();
    virtual std::string name();
};
The constructor requires a name argument; this name will be returned by the member function name() and should be a human-friendly name for the test class, as it is only used for reporting.
The four member functions are virtual so that each test class can override them to perform the appropriate actions. setUp() should be used to prepare any data required for the test, the body of the tests should be in run(), and tearDown() should tidy up any resources allocated in setUp(). The function name() returns the name provided to the constructor. Each function has a default implementation provided.
The class also contains the protected member test_out_ , intended to allow tests to write out a stream of data during the test. Note however that it is implemented via a std::ostringstream , and as such is not printed to the screen immediately.
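As a rough sketch (the exact declaration is in the framework source [CppUTF]), the member might simply be an std::ostringstream, whose contents the framework pulls out when the report is produced:

#include <sstream>

class Testable
{
    // ... public interface as above ...

protected:
    std::ostringstream test_out_; // tests write to this; the framework reads
                                  // test_out_.str() when producing the report
};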
As we only ever want one instance of the TestCollection, it is implemented as a Singleton [Singleton]. This has the additional benefit of allowing each test to register itself with the collection through the public static method TestCollection::Instance(). The Testable constructor is implemented like so:
Testable::Testable(const std::string& name)
  : name_(name)
{
    TestCollection::Instance().addTest(this);
}
Now any class deriving from Testable is automatically registered with the testing framework. The great benefit of this is that it allows us to add a test without changing any of the code already in the test build; all we need to do is add the new test's implementation file to our build list (or makefile). I believe this is important, as I've several times seen code go untested because someone forgot to add the call to the test suite to the test driver code.
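For completeness, a minimal sketch of the singleton accessor might look like the following (the real implementation, including addTest(), is in the downloadable source [CppUTF]):

TestCollection& TestCollection::Instance()
{
    // Meyers singleton: constructed on first use, destroyed at program exit
    static TestCollection instance;
    return instance;
}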
The header testable.h also contains the following macros:
#define IS_TRUE(exp) test::isTrue(exp, #exp, __FILE__, __LINE__)
#define IS_FALSE(exp) test::isFalse(exp, #exp, __FILE__, __LINE__)
which call the following helper functions, ensuring we capture the line of code being tested, and the details of the file and line where the test can be found:
void isTrue(bool val, const char* code, const char* file, size_t line);
void isFalse(bool val, const char* code, const char* file, size_t line);
These functions evaluate the result of the test, and ask the test collection to log a failure if they are not true or false, respectively.
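The bodies of these helpers are straightforward. Here is a sketch; note that the fail() member on TestCollection, routing the failure to the result collection of the test currently being run, is my assumption here rather than the framework's exact interface (see [CppUTF] for the real routing):

void isTrue(bool val, const char* code, const char* file, size_t line)
{
    if (!val)
    {
        // record the failing expression against the currently running test
        TestCollection::Instance().fail(code, file, line);
    }
}

void isFalse(bool val, const char* code, const char* file, size_t line)
{
    if (val) // a value of true is the failure case here
    {
        TestCollection::Instance().fail(code, file, line);
    }
}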
Our tests are then gathered up into the main body of the framework: the class TestCollection. The heart of the testing framework is the function TestCollection::run(), which looks like this:
void TestCollection::run()
{
    const iterator end = tests_.end();
    run_number_ = 0;
    for (iterator current = tests_.begin();
         current != end;
         ++current, ++run_number_)
    {
        try
        {
            test::Testable& test = *current->first;
            test::TestResultCollection& test_result = *current->second;
            if (test.setUp())
            {
                try
                {
                    test.run();
                }
                catch (...)
                {
                    error("Error occurred while running test");
                    test.tearDown();
                    continue;
                }
                if (test_result.failedCount() == 0)
                {
                    ++pass_count_;
                }
                else
                {
                    ++fail_count_;
                }
                if (!test.tearDown())
                {
                    error("Error occurred while tearing down the test");
                }
            }
            else
            {
                error("Setup failed");
            }
        }
        catch (std::exception& e)
        {
            error(e.what());
        }
        catch (...)
        {
            error("Unexpected error");
        }
    }
}
The main purpose of the function is to iterate through the list of tests, calling setUp(), run() and tearDown() for each test in turn, catching any exceptions thrown, ensuring that we only run tests that have been successfully set up, and recording tests that have failed to tear themselves down properly.
The final thing the test framework does is call TestCollection::report() which iterates through the TestResultCollection , reporting any passed tests, or failed tests along with any associated failures or errors.
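In outline, report() does little more than walk the same collection of test/result pairs used by run(). The following is only a sketch (the exact stream handling and layout in the real framework differ, see [CppUTF]):

void TestCollection::report()
{
    std::cout << "Ran " << tests_.size() << " test(s)\n"
              << "Passes: " << pass_count_
              << " Failures: " << fail_count_ << std::endl;
    for (iterator current = tests_.begin(); current != tests_.end(); ++current)
    {
        std::cout << "Test: " << current->first->name() << std::endl;
        if (current->second->failedCount() == 0)
        {
            std::cout << "No failures." << std::endl;
        }
        else
        {
            current->second->reportFailures(std::cout);
        }
        current->second->reportError(std::cout);
    }
}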
Reading the Code
The actual code supplied on my website contains a few more comments than are displayed in this article, and is documented with Doxygen [Doxy], so you can generate HTML, RTF, LaTeX, or man pages from it.
Using the Testing Framework
To demonstrate the use of the unit testing framework, I've chosen to use a modified version of the calculator Bjarne Stroustrup presents in The C++ Programming Language. I've changed it slightly, turning it into a class and making it perhaps a little more reusable and testable.
The Calculator is intended to compute the result of simple formulae, possibly reusing the result of an earlier expression to calculate a more complex one, such as: (a + b) * (c - 0.147) + d
Calculator is a simple class defined as follows:
class Calculator
{
public:
    Calculator();
    double evaluate(const std::string expression);

    // ... private members ...
};
A nice simple interface for us to test! So we'll create a CalculatorTest.cpp file and we can begin writing our test:
#include "Testable.h" #include "Calculator.h"
We need the definition of Testable , as we're going to inherit from it, and we need the definition of Calculator , as that's what we're going to test. Next we need to define our unit test class:
class CalculatorTest : public test::Testable
{
public:
    explicit CalculatorTest(const std::string& name)
      : Testable(name)
    {
    }

    // ... rest of class follows ...
Remember that we need to pass the human-readable form of our test name down to the base class. Next we'll want to provide the code to set up and tear down any classes or data we'll need for the test; in this case I've decided to dynamically allocate the Calculator here:
virtual bool setUp()
{
    try
    {
        calc_ = new Calculator;
    }
    catch (std::bad_alloc& e)
    {
        std::cerr << "Error setting up test: " << e.what() << std::endl;
        return false;
    }
    return true;
}

virtual bool tearDown()
{
    delete calc_;
    return true;
}
The tearDown() method assumes delete calc_; will always succeed. Last but not least, we need to implement the run() method:
virtual void run()
{
    testBasics();
    testVariables();
    testCompound();
}
in which I've chosen to break my tests down into related groups and run each group in turn. So let's look at the simple tests in testBasics():
void testBasics()
{
    double result = calc_->evaluate("1 + 1");
    IS_TRUE(equal_double(2.0, result));

    result = calc_->evaluate("1 + 1");
    IS_TRUE(equal_double(2.0, result));

    result = calc_->evaluate("3 - 1");
    IS_TRUE(equal_double(2.0, result));

    result = calc_->evaluate("1 * 2");
    IS_TRUE(equal_double(2.0, result));

    result = calc_->evaluate("6 / 3");
    IS_TRUE(equal_double(2.0, result));
}
Now we've got to the meat of it. We know calc_ must be valid for the framework to have called run() on our test class, so we can start using it, and what simpler test for a calculator than 1 + 1? So we create a double called result to store the result of the evaluate() call:
double result = calc_->evaluate("1 + 1");
Then we use the IS_TRUE() macro to compare the result to 2, our expected answer:
IS_TRUE(equal_double(2.0, result));
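equal_double() is simply a small helper for comparing floating point values, since comparing doubles with == is unreliable. One possible implementation (the tolerance here is an arbitrary choice of mine, not something mandated by the framework) is:

#include <cmath>

// true if the two doubles are within a small tolerance of each other
bool equal_double(double expected, double actual)
{
    const double epsilon = 0.000001;
    return std::fabs(expected - actual) < epsilon;
}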
Finally, after our class declaration, we need to create a single instance of the class, which we can do like so:
static CalculatorTest the_test("Calculator Test");
You may prefer to place it in an anonymous namespace, as well as, or instead of, making it static. We can then run the testing framework and hopefully we'll see something like this:
C++ Testing Framework v1.0
Copyright 2001 Crickett Software Limited
The latest version can be downloaded from http://www.crickett.co.uk
- - - - - - - - - - - - - - - - - - - - - - -
Ran 1 test(s)
Passes: 1 Failures: 0 Errors: 0
- - - - - - - - - - - - - - - - - - - - - - -
Test: Calculator Test
Output:
No failures.
No errors.
- - - - - - - - - - - - - - - - - - - - - - -
If we deliberately introduce a failure, say:
IS_TRUE(equal_double(7.0, calc_->evaluate("1 + 1")));
Then the framework will produce the following results:
C++ Testing Framework v1.0
Copyright 2001 Crickett Software Limited
The latest version can be downloaded from http://www.crickett.co.uk
- - - - - - - - - - - - - - - - - - - - - - -
Ran 1 test(s)
Passes: 0 Failures: 1 Errors: 0
- - - - - - - - - - - - - - - - - - - - - - -
Test: Calculator Test
Output:
Failures:
Failed IS_TRUE(equal_double(7.0, calc_->evaluate("1+1"))) at line 68 in file d:\dev\c++\testingframework\calculatortest.cpp
No errors.
- - - - - - - - - - - - - - - - - - - - - - -
This gives us the location (file and line number) of the failed test, and the actual code for the test that failed. I suggest that you, dear reader, experiment with the supplied examples, perhaps undertaking the following exercises:
- Extend the tests; there is scope for more comprehensive testing.
- Extend Calculator to support function calls, for example sin, cos, and tan. Also write suitable tests for each.
- Add another class to the project, and a suitable test class.
Logging
Sometimes it's just not that easy to test results directly, or we might simply want to log some text during a test, so the framework allows the test to log data. This is provided by the protected member variable test_out_; all you need to do to record text during the test is treat it exactly as you would std::cout. The text is then reported in the Output section of the test's report.
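For example, inside run() or any of the test methods you might write something like this (the expression is purely illustrative):

test_out_ << "evaluating 1 + 1, got " << calc_->evaluate("1 + 1") << std::endl;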
Taking It Further
This article has introduced the design and development of the C++ unit testing framework we use at Crickett Software. It's continually evolving as we need it to, and the latest version will normally be available from our website.
In a future article we will look at how to use unit testing to improve the quality of your code, by catching problems earlier and automating as much testing as possible. I'll also look at test first design. Any questions on the article are most welcome; send them to <john@crickett.co.uk>.
Thanks To
Jon Jagger (http://www.jaggersoft.com) for feedback on an earlier testing framework and some suggestions that got me started on this incarnation of a C++ unit testing framework. Mark Radford (http://www.twonine.co.uk) for comments on both the framework and this article. Paul Grenyer (http://www.paulgrenyer.co.uk) for comments on several drafts.
Other Interest
JUnit - a Java unit testing framework - http://www.junit.org , covered recently in CVu by Alan Griffiths
XP Unit Testing Frameworks - http://www.xprogramming.com/software.htm
XP Unit Testing - http://www.xprogramming.com/xpmag/expUniTestsat100.htm
References
[XP] Extreme Programming - http://www.xprogramming.com
[KandR88] The C Programming Language by Kernighan & Ritchie, 1988.
[CppUTF] C++ Unit Testing Framework - available from http://www.crickett.co.uk
[Singleton] Singleton - http://rampages.onramp.net/~huston/dp/singleton.html
[Doxy] Doxygen, an open source C++ documenting tool, similar to JavaDoc - http://www.doxygen.org