Assume failure by default. Chris Oldwood considers various fail cases.
Two roads diverged in a wood, and I –
I took the one less travelled by,
And that has made all the difference.
~ Robert Frost
Despite being written in the early part of the 20th century, I often wonder if Robert Frost’s famous poem might actually have been about programming. Unless you’re writing a trivial piece of code, every function has a happy path and a number of potential error paths. If you’re the optimistic kind of programmer, you’ll likely take the well-trodden path, focus on the happy outcome, and hope that no awkward scenarios turn up. This path is exemplified in the original version of that classic first program which displays “hello, world” on the console:
main()
{
    printf("hello, world\n");
}
A standards-conforming version of this classic C program requires the main function to be declared with an int return type to remind us that we need to inform the invoker of any problems, but luckily we get to remain optimistic as we can elide any return value (only for a function named main) and happily accept the default – 0. Consequently, irrespective of whether or not the printf statement actually works, we’re going to tell the caller that everything was hunky-dory.
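By way of contrast, here is a minimal sketch (mine, not part of the original program) of what a less optimistic version might look like, where the return value of printf is actually checked and any failure reported to the invoker:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // printf returns a negative value if an output error occurred.
    if (printf("hello, world\n") < 0)
    {
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}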
The classic C version relies on a spot of ‘legal sleight-of-hand’ to allow you to put the program’s return value out of mind, whereas C# and Java needed to find another way to let you ignore it, so they allow you to declare main without a return type at all:
public class Program
{
    public static void Main(...)
    {
        // print "Hello World!"
    }
}
Of course, these languages use exceptions internally to signal errors so it doesn’t matter, right? Well, earlier versions of Java would return 0 if an uncaught exception propagated through main, so you can’t always rely on the language runtime to act in your best interests [Wilson10]. Even with .NET you can experience some very negative exit codes when things go south, which will make a mockery of that tried-and-tested approach to batch-file error handling everyone grew up with:
IF ERRORLEVEL 1
Unless you know that Main can also return an exit code, I don’t think it should be that surprising that people have resorted to silencing those pesky errors with an all-encompassing try/catch block:
public static void Main(...)
{
    try
    {
        // Lots of cool application logic.
    }
    catch
    {
        // Write message to stderr.
    }
}
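A less cavalier variant (a sketch of my own, not a prescribed pattern) declares Main with an int return type so that the failure is at least reported to the invoker rather than silently swallowed:

using System;

public class Program
{
    public static int Main(string[] args)
    {
        try
        {
            // Lots of cool application logic.
            return 0;
        }
        catch (Exception e)
        {
            // Write the message to stderr and surface a non-zero exit code.
            Console.Error.WriteLine(e.Message);
            return 1;
        }
    }
}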
I wonder if this pattern is more common than even I’ve experienced, as PowerShell has taken the unconventional approach of treating any output on the standard error stream as a sign that a process has failed in some way. This naturally causes a whole different class of errors on the other side of the fence – a disease that could be considered worse than ‘the cure’.
Back in the world of C and C++ we can be proactive and acknowledge our opportunity to fail, but are we still being overly optimistic if we start out by assuming success?
int main(int argc, char **argv)
{
    int result = EXIT_SUCCESS;

    // Lots of cool application logic.

    return result;
}
It’s generally accepted that small, focused functions are preferable to long, rambling ones, but it’s still not that uncommon to need to write some non-trivial conditional logic. When that logic changes over time (in the absence of decent test coverage), what are the chances of a false positive? When it comes to handling error paths, I’d posit that it’s categorically not zero.
The trouble with error paths is that they are frequently less travelled and therefore far less tested. A bug in handling errors where the flow of control is not correctly diverted can lead to other failures later on, which are then harder to diagnose as you’ll be working on the assumption that some earlier step completed successfully. In contrast, a false negative should cause the software to fail faster, which may be easier to diagnose and fix. To wit, assume failure by default:
int result = EXIT_FAILURE;
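As a sketch of how the pattern plays out (my example, with hypothetical helper functions standing in for the application logic), the result only flips to success once every step on the happy path has completed, so a mishandled error path surfaces as a failing exit code rather than a silent false positive:

#include <stdlib.h>

// Hypothetical steps standing in for the 'cool application logic';
// each returns non-zero on success.
static int parse_arguments(int argc, char **argv)
{
    (void)argv;
    return argc >= 1; // placeholder check
}

static int do_the_work(void)
{
    return 1; // placeholder: pretend the work succeeded
}

int main(int argc, char **argv)
{
    int result = EXIT_FAILURE; // assume failure by default

    if (parse_arguments(argc, argv) && do_the_work())
    {
        result = EXIT_SUCCESS; // only claim success on the happy path
    }

    return result;
}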
The term ‘defensive programming’ was well intentioned, and requires an acknowledgement of failure for robust code to be written, but it has also been used to cover a multitude of sins – counter-intuitively making our lives harder, not easier. It stems from a time when development cycles were long, releases were infrequent, and patching was expensive. In a modern software delivery process, Mean Time to Recovery is often valued more highly than Mean Time to Failure.
Another area where I find an overly optimistic viewpoint is with test frameworks. Take this simple test, which does nothing:
[Test]
public void doing_nothing_is_better_than_being_busy_doing_nothing()
{
}
Plato once said that an empty vessel makes the loudest sound, and yet a test which makes no assertions is usually silent on the matter. Every test framework I’ve encountered makes the optimistic assumption that as long as your code doesn’t blow up, then it’s correct, irrespective of whether or not you’ve even made any attempt to assert that the correct behaviour has occurred. This is awkward because forgetting to finish writing the test (it happens more often than you might think) is indistinguishable from a passing test.
When practising TDD, the first step is to write a failing test. This is not some form of training wheels to help you get used to the process; it’s fundamental in helping you ensure that what you end up with is working code and test coverage for the future. Failing by default brings clarity around what it means to succeed, or in modern agile parlance – what is the definition of ‘done’?
In those very rare cases where the outcome cannot be expressed as anything more than the absence of some specific operation occurring, there are always the following constructs to make it clear to the reader that you didn’t just forget to finish writing the test:
Assert.Pass();
Assert.That(..., Throws.Nothing);
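For example (my illustration, assuming NUnit and a hypothetical fixture name), the earlier do-nothing test could at least state its intent explicitly:

using NUnit.Framework;

[TestFixture]
public class DoingNothingTests
{
    [Test]
    public void doing_nothing_is_better_than_being_busy_doing_nothing()
    {
        // State the intent explicitly rather than relying on the absence of an exception.
        Assert.Pass("nothing observable should happen, and that's the point");
    }
}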
The few mocking frameworks which I’ve had the displeasure to use have a similarly misguided level of optimism when it comes to writing tests – they try really hard to hide dependencies and just make your code work, i.e. they adopt the classic ‘defensive programming’ approach which I mentioned earlier. It’s misguided because exposing your dependencies is a key part of showing the reader which interactions the code might rely on. If this task is onerous, then that’s probably a good sign that you need to do some refactoring!
I’m being overly harsh on Hello World; it’s a program intended for educational purposes, not a shining example of 100% error-free code (whatever that means). I’m sure a kitten dies every time an author writes ‘error handling elided for simplicity’, but maybe that’s an unavoidable cost of trying to present a new concept in the simplest possible terms. However, when it comes to matters of correctness, perhaps we need to take the difficult path if we are going to provide the most benefit in the longer term.
Reference
[Wilson10] Matthew Wilson (2010) ‘Quality Matters #6: Exceptions for Practically-Unrecoverable Conditions’ in Overload #99, October 2010, available at: https://accu.org/index.php/journals/1706
Chris Oldwood is a freelance programmer who started out as a bedroom coder in the 80s writing assembler on 8-bit micros. These days it’s enterprise grade technology from plush corporate offices and the lounge below his bedroom. With no Godmanchester duck race to commentate on this year, he’s been even more easily distracted by messages.