There are many aspects of Software Quality. Matthew Wilson introduces us to some of the concepts.
It's been a few years since I stopped my column-writing (for C/C++ Users Journal and Dr. Dobb's), and I've rather gotten out of the habit of disciplined writing, much to the chagrin of my long-suffering Addison-Wesley editor. (Sorry, Peter, but this stuff will help with the coming books, I promise.) Anyway, I plan to get back into the swing and hope to provide material that will cause some of you to ponder further as you read your issue of Overload every second month, and I'd like to thank Ric Parkin for giving me the opportunity to foist my opinions on a captive audience once again.
So, what's the deal with 'Quality Matters'? Well, as the cunning linguists among you may have realised, the title is a pun, which appeals to me greatly (and no less so to Ric). But the overloading is apposite. In one sense it means that quality is important. Which it is. The other is that what is to be discussed will be issues of quality.
But what is quality? For sure, when you drive a nice car, stay in a nice hotel, eat a nice meal, watch a nice film (or movie, if you will), read a nice book, or have a nice conversation, you can feel the quality. But defining it is exceedingly hard; it takes more than just a subjective 'nice'.
In all my programming-related activities - author, consultant, programmer, trainer - quality is crucial, and having spent enough years doing these things I am able to sniff out quality (and its absence). I am comfortable going into any client's development team and poking around the codebase with confidence, invariably producing useful analyses of what I find. But when asked to pontificate about 'software quality' absent the context of a codebase or design documents, I find myself coming up a little dry. At such times I wonder at the abilities of those who have, or at least appear to have, a software dictionary in their heads.
My eldest son and I have recently taken up fencing - the one where you attempt to disembowel your opponent with a sword, rather than the one where you hit lumps of wood into the ground with a hammer (although that would also be fun, I think) - and it has brought into clear focus just how much we rely on training for a great many things. I've been cycling for more than 20 years, and the act of moving about on a bike in response to where I want to go seems entirely subconscious. At least I hope my subconscious is handling it, because I know I'm not! Similarly, I've been programming professionally for 15+ years (and unprofessionally for 25+ if we count Vic-20s, ZX-81s, BASIC and 6502 assembler), and by now a huge proportion of what I do is also subconscious. It's only when training other programmers, or attempting to write books and articles, that I get to glimpse some of the other 90% of the iceberg of software lore that has been accreted into my programming super-ego.
So, in part, this column will be a journey about codifying what my conscious mind has forgotten, in the hope of stringing together a cogent philosophy. Time will tell ...
Being a practical sort of chap - I'd rather write a software component than write a software component specification - the prognostications in this column will all be based around practical issues, usually, I predict, around some block of code that's offended or inspired me. I intend to examine successful (and some unsuccessful) software libraries and applications, and rip them apart to look at what has been done well, and what could be changed to improve them. I also plan to demonstrate how intrinsic and diagnostic improvements can be applied without damaging or detracting from the existing functionality, robustness or efficiency.
Despite these promises of greasy rags and dirty finger nails, we will need a theoretical framework on which to base the analyses. To that end I'm going to start the journey by identifying three groups of software quality related subjects, which reflect the method I bring to bear in my consulting work.
Most/all of the subjects mentioned in this introductory instalment will be given further treatment in later instalments.
A nomenclature for software quality
There are a multitude of possible ways of slicing and dicing the software quality landscape, and a multitude of software quality metrics offered by different thinkers on the subject. There are terms such as adaptability, cohesiveness, consistency, correctness, coupling, efficiency, flexibility, maintainability, modularity, portability, reliability, reusability, robustness, security, testability, transparency, understandability, and on and on it goes. What do they all mean? Are they all useful?
I couldn't hope to distil all these different ideas down into a single set, and I won't pretend to try. What I'm going to talk about in this column are aspects of software quality that I understand and utilise in my consultancy, training and my own software development activities. They break down, more or less neatly, into three groups:
- Intrinsic characteristics
- (Removable) diagnostic measures
- Applied assurance measures
In this instalment I'll flesh out the definitions of the first group, since they'll occupy much of my interest in the next few instalments. I'll also offer brief discussions of the second and third groups now, and go into more detail about the individual items in later instalments when they're relevant (and when there's the space to give them adequate treatment).
Intrinsic software quality characteristics
To be of any use, software quality characteristics have to be definable, even when the definitions involve relativism and subjectivity. To this end, I spent a deal of effort when writing my second book - Extended STL, volume 1 [ XSTLv1 ] - defining the following intrinsic characteristics:
- Correctness/reliability/robustness
- Efficiency
- Discoverability and transparency
- Modularity
- Expressiveness
- Flexibility
- Portability
Each of these characteristics is innate to a given software entity. Regardless of whether its authors or users know or care about such characteristics, and regardless of whether anyone takes the trouble to measure/assess it in respect of them, every component/library/sub-system has a level of robustness, efficiency, discoverability and transparency, etc. that can be reasoned about.
For everyone who has not managed to get further than the prologue of Extended STL, volume 1 , I'll offer definitions of these again now. For those who have, you will probably benefit from reading them, as I've refined some ideas in the last couple of years.
Correctness, reliability and robustness: first pass
Forgetting for the moment all the other issues about how fast it runs, whether it can be easily used/re-used/changed, the contexts it can be used in, and so forth, the sine qua non for any piece of software is that it must function according to the expectations of its stakeholders.
Battle-hardened software developers will (hopefully) have bristled at the vagueness of that last phrase 'function according to the expectations of its stakeholders'. But I am being deliberately vague because I believe that this area of software quality is poorly defined, and I hope to come to a better definition than any I've found so far.
Three terms are commonly used when it comes to discussing the expected (or unexpected) behaviour of software: correctness , reliability and robustness . The first of these has an unequivocal definition:
Correctness is the degree to which a software entity's behaviour matches its specification.
Cunningly, the definition is able to avoid equivocation by passing off to the definition of 'specification'. And that's no small thing, to be sure. I am going to skip discussion of what form(s) specifications might take until the next article, for reasons that will become clear then.
I'm also going to skip out on discussing the issues of robustness and reliability , because there is a lot of equivocation on their definitions in the literature - I'm thinking mainly of McConnell [ CC ] and Meyer [ OOSC ] here, but they're not alone - and the only sense I can make of them is when dealing with the specification question.
I will, however, leave you with something to ponder, which will inform the deliberations of the next instalment: I call it the Bet-Your-Life? Test (see sidebar).
The Bet-Your-Life? Test
Assume a perfect operating environment of unfailing hardware and perfect implementations of all layers of software abstraction below the ones at which the following software entities are written. Would you bet your life on them being able to be written to 'function according to the expectations of its stakeholders'? (That these are all C is a reflection of the first C in ACCU and also of my need to keep the listings as small as possible. The choice of language is largely, though not completely, irrelevant, since we have already stipulated that the underlying layers of software abstraction are perfectly implemented.)

// 1. A boolean inversion: return == !b
bool invert(bool b);

// 2. A string comparison
int strcmp(char const* lhs, char const* rhs);

// 3. A base-64 conversion [B64_ENCODE]
size_t b64_encode(
  void const* src,  size_t srcSize
, char*       dest, size_t destLen
);

// 4. A recursive file-system search
//    [RECLS_SEARCH]
RECLS_API Recls_Search(
  char const* searchRoot
, char const* pattern
, int         flags
, hrecls_t*   phSrch
);

Well, I'd bet my life, and those of my wife and sons, on my being able to implement (1) perfectly. I'd expect all of you to be comfortable to make a similar compact with your most precious lives. Conversely, I can tell you that I definitely would not ever be prepared to bet anything of grave importance on the implementation of (4). This is despite my having used my implementation of it, seemingly entirely successfully, probably tens of thousands of times over the last several years.

Beyond those two definitive positions, I'm somewhat up in the air on the other two. Since I'm a programmer, I'm instinctively driven to believe that I could produce perfect implementations of both (2) and (3). And I have, in fact, implemented both before, numerous times in the case of strcmp(). And as far as I am aware, both are perfect. But I still wouldn't bet my life on it.

Obviously, the interesting part of this thought experiment is why I hold those different positions, and the criteria I have considered in forming them. The key is in understanding the difference between correctness and robustness and/or reliability, and between contract specification and testing, all of which will be discussed in the next instalment. In the meanwhile, I'd be keen to hear from readers on their positions.
Discoverability and transparency
Discoverability and transparency are a pair of software quality characteristics that pertain to the somewhat nebulous concept of how 'well-written' a software entity might be. They are defined as follows [ XSTLv1 ]:
Discoverability is how easy it is to understand a component in order to be able to use it.
Transparency is how easy it is to understand a component in order to be able to change it.
I believe that it's self-evident that these two are hugely significant in the success of software libraries. Particularly so with discoverability, since people have very low tolerance for discomfort in the early stages of adoption of a software library. (This is one of the reasons why I write so many C++ libraries: I cannot discover the interface of many existing ones.)
The two characteristics have significant impact on the (uselessly vague and overly general, in my opinion) notion of maintainability . If something is hard to change, then the changing of it is going to be (i) unwillingly undertaken and (ii) of high risk. Furthermore, if something is hard to use, then its users will be poorly qualified to guide its evolution.
Consequently, I find it mildly astonishing how little concern these two quality characteristics attract in commercial developments. In my opinion, discoverability and transparency have some of the biggest cost impacts on software projects, and we should aim to maximise them as far as is possible. The problem with that intent, however, is that, unlike correctness, discoverability and transparency are subjective and non-quantifiable. But there is hope, in the application of the idiom: 'when in Rome ...'
Efficiency
Efficiency is the degree to which a software entity executes using the minimum amount of resources. The resources we commonly think of are processing time and memory, but other resources, such as database connections and file handles, can be equally important, depending on application domain.
Efficiency may be primarily concerned with:
- Whether the algorithm chosen to implement the entity's functionality is implemented to use the minimum amount of resources, and
- Whether another algorithm may fulfil the entity's functionality using fewer resources (see the sketch below)
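To make those two concerns concrete, here is a minimal sketch of my own (not drawn from any of the libraries discussed in this column): the two functions below provide identical functionality over a sorted std::vector, but the second chooses an algorithm that consumes far fewer cycles for large collections.

#include <algorithm>
#include <cstdio>
#include <vector>

// Same functionality, two algorithms: a linear scan is O(n) ...
bool contains_linear(std::vector<int> const& v, int value)
{
  return v.end() != std::find(v.begin(), v.end(), value);
}

// ... whereas, because the vector is sorted, a binary search does the same
// job in O(log n): "another algorithm fulfils the entity's functionality
// using fewer resources".
bool contains_sorted(std::vector<int> const& v, int value)
{
  return std::binary_search(v.begin(), v.end(), value);
}

int main()
{
  std::vector<int> v;
  for(int i = 0; i != 1000000; ++i)
  {
    v.push_back(2 * i); // sorted by construction
  }

  // Note also the implementation-level concern: both functions take the
  // vector by reference-to-const, avoiding a pointless copy.
  std::printf("%d %d\n", contains_linear(v, 999998), contains_sorted(v, 999998));
  return 0;
}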
There are other factors that can influence efficiency, including:
- The compilation environment used to translate and optimise the code
- The execution environment used to execute the code (e.g. choosing one JVM over another, single vs. multi-core hardware)
Doubtless we've all heard of Hoare/Knuth's ' premature optimisation is the root of all evil ' quotation. The problem is, this has been misunderstood and seized upon by a generation of feckless and witless programmers who don't care about their craft and should instead be spending their days giving someone else's profession a bad name, egged on by commercially-driven mega-companies with giant frameworks and rich consulting services to be foisted on under-informed clients.
I've never worked on a commercial software project where performance was not important, and, frankly, I can't think of a serious software application where it would not be. The actual full quote is 'we should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil', which makes a whole lot more sense. It's not suggesting that programmers blithely ignore performance; rather, it's an exhortation to focus on the most significant performance issues first, and not to 'sweat the small stuff'.
As Herb Sutter has postulated [ FREE-LUNCH ], we have run out of the performance free-lunch, and it's time to start tightening our belts.
Expressiveness
Expressiveness is 'how much of a given task can be achieved clearly in as few statements as possible' [ XSTLv1 ]. Expressiveness is also known as programming power (or just power), but it's a poorer term for a variety of reasons, and I won't mention it further.
Some code examples will illustrate the point more clearly than words. The first one involves file-system search on UNIX (and is parsimoniously lifted from the section on expressiveness in the prologue of Extended STL, volume 1 [ XSTLv1 ]):
Contrast Listing 1:
DIR* dir = opendir(".");
if(NULL != dir)
{
  struct dirent* de;

  for(; NULL != (de = readdir(dir)); )
  {
    struct stat st;

    if( 0 == stat(de->d_name, &st) &&
        S_IFREG == (st.st_mode & S_IFMT))
    {
      remove(de->d_name);
    }
  }
  closedir(dir);
}
Listing 1
with Listing 2:
readdir_sequence entries(".", readdir_sequence::files);

std::for_each(entries.begin(), entries.end(), ::remove);
Listing 2
Here a C++ class - actually an STL Collection [ XSTLv1 ] - combined with a standard algorithm is used to provide a significantly more expressive means of tackling the problem of removing all files from the current directory. It achieves this by raising the level of abstraction - all possible files are treated as a single entity, a collection - and by relying on language facilities - deterministic destruction to handle resources, and namespacing rules to define constants with natural names.
I hope it's clear that the expressiveness of class + algorithm engenders a substantial increase in the transparency of the application code. This is not measured solely in the reduction of lines of code, but also in the removal of pointers, the S_IF??? constants, and explicit resource management, and in the ability to read the second statement as 'for each item in entries, remove [it]'. However, before we get carried away, we must balance such gains in concision with the discoverability (or lack thereof) of the components. The first 'big thing to be known' is the use of STL iterator-pair ranges and algorithms. Certainly, to experienced C++ programmers this is now as straightforward as tying one's shoes. But STL is not, in my opinion, intuitive. Of less magnitude (since it's not a global idiom like STL collection + algorithm), but still significant, is the dialecticism of all abstractions, in this case the readdir_sequence class.
There's an obviousness to this that's teeth-grindingly painful to state, but it needs to be stated nonetheless. Every meaningful software component provides either a different interface or a different implementation, or both, to all others. If it doesn't, you find yourself the proud owner of a wheel the same size and specification as your neighbour, and there's precious little use in that.
If, as is the more common case, your component has a different interface to existing ones, then by definition you affect its discoverability. Users must familiarise themselves with the component's interface to be able to use it. The challenge in this case is to restrict the non-normative aspects to the minimum, without sacrificing other aspects of software quality or functionality. In the case of the readdir_sequence class, this is limited to the construction of instances, which involves specifying a search directory and/or search flags, and to the collection's value type (which happens to be char const*). The rest of the functionality of the class adheres to the requirements of an STL Collection [ XSTLv1 ] - it provides access to elements via begin() and end() iterator ranges - and therefore can be used in the same, idiomatic manner as any other STL collection.
Conversely, if your component provides a new implementation, you will have to reveal something about it to potential users, or they'll have no reason to use it. And like as not you'll have to reveal something more substantive than just saying 'it's faster', so you'll find yourself with a leaky abstraction [ LEAK , XSTLv1 ]. Whatever information that must be leaked adds to the sum of knowledge that must be mastered for your component to be used properly - it affects its discoverability. An example of this might be a fast memory allocator that works by using a custom heap that ignores all free() calls and simply dumps the memory pool at an established known point. Users will have to abide by the rules of when/where to allocate in order to establish that point.
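To make that example concrete, here is a minimal sketch of my own devising (it is not a real allocator from any library mentioned here): a bump allocator over a fixed arena that treats deallocation as a no-op and reclaims everything at a single, established point. The rules that clients must learn - the leak in the abstraction - are visible in its interface and comments.

#include <cstddef>
#include <cstdio>

class scratch_arena
{
public:
  explicit scratch_arena(std::size_t capacity)
    : buffer_(new char[capacity])
    , capacity_(capacity)
    , used_(0)
  {}
  ~scratch_arena()
  {
    delete [] buffer_; // the one established point at which memory is reclaimed
  }

  void* allocate(std::size_t n)
  {
    // NOTE: alignment is ignored for brevity; a production version must not ignore it
    if(used_ + n > capacity_)
    {
      return 0; // exhausted; a real component would grow, or fail more loudly
    }
    void* p = buffer_ + used_;
    used_ += n;
    return p;
  }

  void deallocate(void*)
  {} // deliberately a no-op: clients must know that nothing is freed until destruction

private:
  char*       buffer_;
  std::size_t capacity_;
  std::size_t used_;

  // not copyable
  scratch_arena(scratch_arena const&);
  scratch_arena& operator =(scratch_arena const&);
};

int main()
{
  scratch_arena arena(1024);
  int* p = static_cast<int*>(arena.allocate(sizeof(int)));

  *p = 42;
  std::printf("%d\n", *p);
  arena.deallocate(p); // harmless, but does nothing
  return 0;            // the whole pool goes away here, when the arena is destroyed
}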
As if all that wasn't enough, there are even cases where expressiveness can detract from transparency. Consider the following three chunks of C# code, using the new '100%' rewrite of the recls [ RECLS-100% ] recursive file-system search library (Listing 3).
IEnumerator en = FileSearcher.Search(directory, patterns).GetEnumerator();

while(en.MoveNext())
{
  IEntry entry = (IEntry)en.Current;

  if(!entry.IsReadOnly)
  {
    Console.Out.WriteLine(entry.SearchRelativePath);
  }
}
Listing 3
Thankfully no-one has to write code like that. Even in C# 1.0 you could let the compiler do some of the hard work for you, via foreach , as in Listing 4.
foreach(IEntry entry in FileSearcher.Search(directory, patterns))
{
  if(!entry.IsReadOnly)
  {
    Console.Out.WriteLine(entry.SearchRelativePath);
  }
}
Listing 4
Such loops are idiomatic in the programming world, not just to C#, and I can't imagine anyone arguing that the increased expressiveness of the second form incurs a cost to discoverability or transparency over the first.
With C# 3.0, it's possible to condense things even further by using the extension methods provided in the latest recls .NET library in combination with the new language facility of lambda constructs, giving a single statement:
FileSearcher.Search(directory, patterns)
  .Filter((entry) => !entry.IsReadOnly)
  .ForEach((entry) => Console.Out.WriteLine(entry.SearchRelativePath));
I don't think this improvement is quite so unequivocal. Certainly the concision appeals to C# power users - it does to me - but I don't think anyone can argue, even when the use of lambdas becomes second nature to all C# programmers, that such a statement is as transparent as the second loop.
I believe that expressiveness is a large factor in the preferences that programmers have for one language over another. It directly impacts productivity because programmers have to type less to express their intent and, importantly, read less when they come back to modify it. It also indirectly affects productivity by reducing defect rates, since many lower level defects simply don't occur. It's not just that housekeeping tasks are obviated: looking back to the file enumeration example, we see that there is no opportunity to forget to release the search handle (via closedir() ) because readdir_sequence does it for us. It's also that the amount of distraction from the main semantic intent of code is reduced: in the C version, the call to remove() is that much less discriminated from the boilerplate than in the C++ version, wherein it takes centre stage.
It's no coincidence, therefore, that many of the major languages appear to be making substantial moves to improve their ability to support expressiveness.
Flexibility
Flexibility is 'how easily a [software entity] lets you do what you need to do, with the types with which you need to do it' [ FF1 ].
As you may remember from my recent series of articles on FastFormat [ FF1 , FF2 , FF3 ], flexibility is something I prize very highly in software libraries. To be able to translate your design clearly and correctly into code it is important to be able to express your program logic in terms of the types you deem appropriate to your level of abstraction, rather than the types appropriate to the level of abstraction of the component or sub-system in terms of which you're implementing. When you can't do this, you experience what I call abstraction dissonance . The following definitions are borrowed from my still-in-preparation book Breaking Up The Monolith: Advanced C++ Design Without Compromise , which I'm hoping to finish this year; the web-site [ BUTM ] contains a slowly growing list of concept/pattern/principle definitions, including:
Unit of Currency: the primary physical type with which client code represents a given conceptual type; the primary physical type by which a component or API communicates a given conceptual type to its client code.
Abstraction Dissonance: the condition whereby client code is written using units of currency that exist at a higher level of abstraction than those used by the libraries/APIs in terms of which the client code is written.
My signal case for abstraction dissonance can be composed from two of the most commonly used and well-understood components from the C++ standard library:
std::string   path = "data-file";
std::ifstream stm(path); // DOES NOT COMPILE!
That this does not compile, and the user is forced to pollute the client code with the damnable .c_str() , is nothing less than ridiculous.
std::ifstream stm(path.c_str());
There are several things that can be done to avoid, or to obviate, situations like this, and I plan to cover them in future instalments. (They're also discussed at length in Monolith , should it ever get to the presses.)
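By way of a taster, here is one such technique in a minimal sketch of my own (the c_str_of() and read_first_line() names are hypothetical, and this is not the STLSoft/FastFormat machinery): an overloaded access shim lets a component accept whichever unit of currency its clients prefer, normalising to the lower-level type internally.

#include <fstream>
#include <string>

// Overloaded "string access shim": yields a C-style string from either unit of currency
inline char const* c_str_of(char const* s)        { return s; }
inline char const* c_str_of(std::string const& s) { return s.c_str(); }

// A component that applies the shim internally, so client code never sees .c_str()
template <typename S>
std::string read_first_line(S const& path)
{
  std::ifstream stm(c_str_of(path));
  std::string   line;

  std::getline(stm, line);
  return line;
}

int main()
{
  std::string path("data-file");

  read_first_line(path);        // client code uses its own level of abstraction ...
  read_first_line("data-file"); // ... or the lower-level one; both compile, no dissonance
  return 0;
}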
Flexibility directly impacts expressiveness and transparency, and indirectly impacts discoverability, efficiency, modularity, and correctness/robustness. I intend to cover many instances of conflict/compromise between these characteristics in the coming articles.
Modularity
Modularity is about dependencies, usually unwanted ones. This tends to have two forms [ FF1 ]:
- What else do I need to do/have in order to work with the library
- What else do I need to do/have in order to use the library to work with other things
There's been a long and inglorious history of poor modularity in the programming pantheon. Windows programmers will remember (or may still be using) the vastness of the MFC libraries. Java and .NET programmers still experience the deployment hassles of their respective virtual support machinery (though many seem not to realise the problems they, or their users, face). But those are all easy pickings. Modularity problems are also to be found in far more subtle, though no less problematic, situations.
Portability
Portability is about how readily you can use a software entity in your chosen operating environment . The 'operating environment' may differ in any/all of the following:
- Operating system
- Processor architecture
- Compiler
- Libraries
- Feature modes (e.g. without exceptions and/or RTTI)
C was intended as a portable assembler, and as such it does a great, albeit partial, job of abstracting away disparate physical architectures. But even then, porting compiled C programs to different architectures is impossible, and porting C programs by source often involves troubling aspects, including, but not limited to, architecture differences (e.g. in the sizes of types and byte-ordering) and operating system services. (Anyone who's ported between UNIX and Windows will know the pain to which I allude.)
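As a small illustration of the kind of defensive measure that eases such ports - a sketch of my own, with hypothetical names - the following serialises a 32-bit value byte-by-byte in a fixed order, so the external format depends neither on the host's byte-ordering nor on the local size of int:

#include <cstdio>

typedef unsigned int uint32; // assumption: unsigned int is at least 32 bits on the targets of interest

void write_be32(unsigned char* dest, uint32 v)
{
  dest[0] = (unsigned char)((v >> 24) & 0xff); // most significant byte first,
  dest[1] = (unsigned char)((v >> 16) & 0xff); // regardless of host endianness
  dest[2] = (unsigned char)((v >>  8) & 0xff);
  dest[3] = (unsigned char)( v        & 0xff);
}

uint32 read_be32(unsigned char const* src)
{
  return ((uint32)src[0] << 24) |
         ((uint32)src[1] << 16) |
         ((uint32)src[2] <<  8) |
          (uint32)src[3];
}

int main()
{
  unsigned char buf[4];

  write_be32(buf, 0xCAFEBABEu);
  std::printf("%08x\n", read_be32(buf)); // same output on any host
  return 0;
}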
The same goes for C++, with the considerable additional difficulties resulting from the vastly different interpretations and offerings of the C++ language facilities by the compilers currently available. As I've mentioned in previous writings, I reckon that 90%+ of the time I spend on my open-source C++ libraries is in making the code play nice with all the compilers. It's not cool, and it's not fun.
But things are hardly perfect in the virtual machine languages. I have had my fair share of Java's 'write once, debug everywhere' misery, not to mention the sad irony of C#, a formerly-mediocre/now-good/looking-like-becoming-great language bound to a basket-case family of operating systems. I look forward to some smart people separating the language from Windows and the .NET runtime: that would produce something very interesting.
Even when one exits the world of compiled languages entirely, portability is still imperfect, in part due to the leaking up of abstractions of the underlying operating environments. Two obvious ones are the slash/backslash shemozzle, and the lack of globbing in some command-line interpreters. But it can be more profound, such as the relative costs of starting new processes and new threads.
I could go on and on, but I won't. Suffice to say that there is no perfect portability, but there are definitely things that can be done to improve it.
Quantifying quality: relativity and subjectivity
Almost every one of the above characteristics is relative and/or subjective. That does not stop them being useful, but it does mean that we should try to qualify observations about a particular characteristic in terms of the things that we can assess absolutely and objectively.
For example, transparency is highly subjective, but we may attempt to quantify it by enumerating the points of lore and law that must be known to understand a given chunk of code. Similarly, we can make some rudimentary measurement of expressiveness by counting lines of code, and number of sub-expressions in each line. I have my own ideas on these things, and there's plenty of wisdom in the canon, but I'm keen to hear from readers any of their own opinions on the matter.
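For what it's worth, here is a deliberately crude sketch of my own (its heuristics are assumptions of mine, not a metric from the literature) of the kind of rudimentary count suggested above: non-blank lines, and semicolons per line as a rough stand-in for sub-expressions.

#include <cstdio>
#include <sstream>
#include <string>

struct code_counts
{
  int lines;
  int statements;
};

code_counts measure(std::string const& source)
{
  code_counts        c = { 0, 0 };
  std::istringstream in(source);
  std::string        line;

  while(std::getline(in, line))
  {
    if(std::string::npos == line.find_first_not_of(" \t"))
    {
      continue; // ignore blank lines
    }
    ++c.lines;
    for(std::string::size_type i = 0; i != line.size(); ++i)
    {
      if(';' == line[i])
      {
        ++c.statements; // a very rough proxy, which a serious metric would refine
      }
    }
  }
  return c;
}

int main()
{
  code_counts c = measure("int i = 0;\nfor(; i != 10; ++i) { use(i); }\n");

  std::printf("%d line(s), %d statement(s)\n", c.lines, c.statements);
  return 0;
}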
Characteristics in concert ...
Programmers will always be biased towards one or more intrinsic software quality characteristics, although the particular characteristic(s) may differ in different contexts. When I'm doing C++ it's all about being 'fast', i.e. safe and quick. In Ruby it's all about expressiveness, and discoverability and transparency. But it's important to assess maturing components in terms of all relevant intrinsic software quality characteristics.
In many circumstances, just the act of examining a software component in terms of one or more intrinsic software quality characteristics can lead to easy changes that will enhance it in terms of others. It may also, obviously, highlight important deficiencies.
For example, I often find that a first version of a component will be written in terms of some other, useful, component from another library. But if it turns out that just this one piece of reuse incurs coupling to a large library that can cause substantial inconvenience (and hinder acceptance) in terms of modularity and portability, I will be inclined to eschew the third-party component and implement its functionality explicitly within the developed code.
But beyond incidental and independent improvements in respect of particular software quality characteristics, it is often the case that these software quality characteristics can be in conflict. I believe that these conflicts must be explicitly identified and considered, and documented for the benefit of a component's authors (and future maintainers) and of its users.
For example, the FastFormat library, described in articles in the last three instalments of this journal, has a bunch of fairly clear design decisions:
- 100% type-safety, and the highest possible correctness/reliability/robustness
- Extremely high efficiency - no duplication of measurements or wasted allocations
- High flexibility (including infinite extensibility)
- Support for I18N/L10N
- Highest possible level of expressiveness that does not detract from 1-4.
- Highest possible levels of discoverability & transparency that do not detract from 1-5.
- Modular
- Portable
Users are thus able to make a judgement as to whether they can avail themselves of FastFormat's performance, robustness and flexibility advantages or, if they require the highest possible levels of expressiveness (for width-formatting of numeric types), choose an alternative.
(Removable) diagnostic measures
The foregoing characteristics are intrinsic. They are of the software, if you will. As we will shortly discuss, the next group are of the programmer(s). They are things done to (measure the) software by human beings, either entirely manually, or with the assistance of computers, or entirely by automated process but still operating as an agency of the programmer(s). In all cases, they are external to the software.
In between these two positions lies a group of measures that are in the software but are of the programmer. They are used for assessing or ensuring the quality of the software. But they have one important characteristic in common: in all cases they are removable. With the terminological assistance of beneficent and sagacious members of the ACCU general mailing list, I now call these (removable) diagnostic measures . They include:
- Code coverage constructs
- Contract enforcements
- Diagnostic logging constructs
- Static assertions
The parenthetical inclusion of 'removable' in the name serves as an important reminder of the principle of removability (again, from the Monolith website [ BUTM ] in lieu of the book; I have Christopher Diggins to thank for the nice wording):
The Principle of Removability: When applied to contract enforcement, the principle of removability states: A contract enforcement should be removable from correct software without changing the (well-functioning) behaviour. When applied to diagnostic logging, the principle of removability states: It must be possible to disable any log statement within correct software without changing the (well-functioning) behaviour.
The same thing goes for code coverage constructs, and for static assertions.
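To illustrate the principle with a sketch of my own (the MY_PRECONDITION and MY_ENFORCE_CONTRACTS names are hypothetical, and this is not the enforcement machinery of any library discussed here): the enforcement below can be compiled away entirely, and doing so changes nothing for callers that honour the contract.

#include <cstdio>
#include <cstdlib>

#ifdef MY_ENFORCE_CONTRACTS
# define MY_PRECONDITION(expr)                                              \
    ((expr) ? (void)0                                                       \
            : (std::fprintf(stderr, "precondition violated: %s\n", #expr),  \
               std::abort()))
#else
# define MY_PRECONDITION(expr) ((void)0) /* removed: no change for correct software */
#endif

int percentage(int part, int whole)
{
  MY_PRECONDITION(0 != whole); // contract: callers must not pass a zero divisor

  return (part * 100) / whole;
}

int main()
{
  // A correct call: the result is the same whether or not enforcement is compiled in
  std::printf("%d\n", percentage(3, 4));
  return 0;
}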
Obviously, there's a bit of circularity here insofar as we've already established correctness as only being definable in terms of contract enforcements or automated testing, and now we're saying that contract enforcements can be removed from correct software. Well, what can I tell you? Somewhere we've got to take a stand.
Applied assurance measures
This group consists of actions that are performed by, or on behalf of, software developers, many of which are to be found as primary constituents of established development methodologies. More than half of the list is about testing.
- Automated functional testing
- Performance profiling and testing
- User acceptance testing
- Scratch testing
- Smoke testing
- Code coverage analysis/testing
- Review (manual and automated)
- Coding standards
- Code metrics (automated and manual)
- Multi-target compilation
- ... and more ...
Most of these should be well known to all competent and experienced programmers, and I don't need to say any more about them at this time.
The one thing I will comment on now is the use of the term measure . Just like the title of the column, this meaning of the term is helpfully overloaded: a measure can be a metric/assessment, and also an approach/policy.
Puzzling phenomena
Thanks to the Global Financial Crisis™, I've recently had to devote serious effort to the business of attracting clients for the first time in a comfortably long while. In updating the company website and my own vitae, I've made some surprising observations, including, in no particular order, the following:
- b64 is popular, and recls is not
- Pantheios is popular, and FastFormat is not so much
- No software (sub-)system developed by Synesis Software (my company) has ever had a failure in production. (Caveat: there's been one apoptotic episode, but that was a good thing. Something to examine when we talk about contract programming.)
The third fact is the one with the most commercial bite, but I assure you that my mentioning it is more than mere grandstanding. (Well, there's some grandstanding in there of course, and if any potential clients out there want some magic no-fail pixie dust sprinkled on their codebase, by all means get in contact.) But the main point is that even though I have always prized software quality - even from before I was experienced enough to properly detect or apply it - it was still something of a pleasant shock to realise that we've never had a production failure. Given that the various software (sub-)systems have handled billions of dollars of transactions, that's a pretty comforting thought. And it's nice to be able to trumpet that on the company website. But why should that be of interest to me or you, gentle readers?
Well, it's of direct interest to me because it's quite an improbable achievement, and realising it gives me confidence to attempt the undertaking of writing this column. And I hope it's of interest to you in that it might give you some confidence that some of what I say might be worth a read (assuming you can stomach my grandiloquent loquacity).
Anyway, I won't attempt to offer further convincing on my qualification for the post. If I bodge it, Ric will give me the flick, and rightly so.
Open-source library popularity
What of the relevance of the other two facts, pertaining to the relative 'popularity' of two pairs of my libraries? On the surface, the two popular libraries should be in the shadow of the two less-popular ones, and consideration of why they're not has raised a number of issues in my mind pertaining to software quality. Let's look at some of the aspects of the puzzle.
Generality of purpose
The b64 library provides Base-64 encoding/decoding. The recls library provides platform-independent recursive file-system search facilities. The latter is surely more generally useful than the former.
Pantheios is a diagnostic logging API library. FastFormat is a formatting library. Although I will argue strongly later that it should not be so, I believe that formatting is to be found far more frequently than logging in C++ codebases.
Available languages
b64 provides a C and a C++ API. recls provides C, Ch, COM, C++ (and STL), C#/.NET, D, Java, Python and Ruby APIs.
Pantheios and FastFormat are both C++ libraries, although Pantheios does provide a C-API for logging C programs.
Promotion
I have not written any articles about b64, and beyond passing a link once or twice I have done nothing to promote it. Conversely, I wrote an extensive series of articles about recls for CUJ/DDJ in 2003-5. Unlike the other three, b64 doesn't even have its own domain, and just has a downloads page that hangs off an unremarkable, barely linked part of the Synesis website. Furthermore, apart from one commercial project, the only thing I've ever used b64 for is to implement the pantheios::b64 inserter class. And b64 is bundled with Pantheios, only adding to the puzzle of its (relatively) high independent downloads.
I have not (yet) written any articles about Pantheios, whereas I've written a recent series of three articles about FastFormat [ FF1 , FF2 , FF3 ], where I pretty much prove its superiority over the existing alternatives.
Frequency of release
Although not differing by orders of magnitude, the frequency of releases of recls is greater than that of b64, and that of FastFormat has been greater than that of Pantheios over the last few months. This serves to further highlight the disparity in the ongoing level of downloads (and other activity) of the latter libraries.
Popularity
Over the past couple of years, b64 downloads have been steady at around 2200 per year, whereas the average for recls is around 500 per year. Even though it's not a huge number per se, I find it remarkable, given that Base-64 conversion is a niche area of functionality.
Similarly, Pantheios downloads tend to be several hundred per week, whereas FastFormat's are around 50-80. And the SourceForge rankings, based on downloads, web page hits, forum and tracker activity, are similarly different: Pantheios tends to be in the top two hundred, FastFormat around 2000.
What gives?
Despite all these factors pushing in the favour of recls and FastFormat, there are clearly some important effects that are overriding them. I will expound on these in later instalments, but it's worth mentioning some now, I think:
- Satisfiction . There are several, very well-established formatting libraries available for C++ programmers, so FastFormat has a lot of mindshare to capture. Users of the existing libraries are satisfied with what suffices, in an effect I've previously called satisfiction [XSTLv1].
- Green pasture . In contrast, Pantheios has no serious competitors as a logging API library: the existing (and impressively feature-rich) logging libraries have APIs that are manifestly unfit for purpose.
- Language . I believe that, all other things being equal, a library implemented in C (such as b64) will be far more popular than one implemented wholly or partly in C++ (such as recls), due to concerns (well-founded or not) of performance, portability, and transparency.
- Modularity . b64 does not have any dependencies, not even on the C runtime library. FastFormat, Pantheios and recls all depend on the STLSoft libraries, which cause users more effort (even though, as 100% header-only, the effort involves nothing more than downloading and setting an environment variable).
Doubtless there's more to the situation than I have divined here, but it's enough to inform the analyses of these libraries that will start in the next instalment. I'm keen to hear from readers their thoughts on this issue.
Column format
I don't know about you, but I find it very difficult to understand, or remember, concepts that are presented without examples. Similarly, I find it hard to write about concepts without using examples. So, the instalments of this column - except this first one - are going to be rich with example libraries, programs and code.
For most of these I'll be using my own code, largely because I am able to criticise it as much as is necessary without offending anyone else. The precise material will depend on what is uncovered as the articles progress, but I am confident we'll start in some of my open-source C and C++ libraries, and then move to particular algorithms, components and programs, including those in other languages. For example, I'm currently working with a colleague in updating the Synesis Software .NET libraries (and some .NET forms of several open-source libraries) for C# 2 and 3, and we're cooking. The contrast of what's superior and what's inferior to C++ is very thought-provoking.
In terms of subject areas, you can expect future columns to have diverse subjects, including some/all of the following:
- Correctness, robustness and reliability
- Contract programming: The principles of removability and irrecoverability
- Defining contracts: Identifying and defining software components
- Trade-offs in intrinsic software quality characteristics
- Attracting real users: It's the coupling, Stupid!
- Efficiency for real
- Packaging
- The logging conundrum
- Component vs unit-testing: A scratch and sniff approach
- Automated testing
- The evils of the Boolean type(s)
- Overloading vs overriding
- Defining clean methods
- Software quality measures for multithreaded programming
- Cracking the abstraction puzzle
- Conformance: Structural, semantic, explicit, intersecting, and all manner of foul beasts
- ... and lots of discussions of the differences in software quality approaches between different application areas, between different languages, between different layers of abstraction
Naturally, some of the material discussed will be a cheap rip-off from that already included in my books. The several in-progress book projects will likely overlap too. But the limited size and broad scope of the column will mean that there's plentiful opportunity to have unique content in each medium.
A quest for quality ...
As ably discussed in Code Complete [ CC ], organisations are only able to significantly improve their software quality by a combination of measures. Importantly, the combination has to involve both automated measures and human measures.
I would like to point out that, in my opinion, more important than all the individual measures is a requirement for the people who are writing the software to have the wit and will to seek out quality processes and apply them, against the twin obstacles of business imperatives and the apathy/heroism of 'programmers' who should be in a different career. One of the wonderful things about being a programmer is that it is fun, it is creative, and it can (and should) be beautiful. In this crucial respect, it is a true craft, and it is my aim with this column to help others improve their craftsmanship through the (limited) discussion of quality concepts and the (generous) application of practical quality measures. I invite you to join me.
References and asides
[B64_ENCODE] This is one of the API functions from the b64 library, described at http://synesis.com.au/software/b64/doc/b64_8h.html#50a93e4f6a922c5314a9cb50befc2d13
[BUTM] http://breakingupthemonolith.com/
[CC] Code Complete, 2nd Edition, Steve McConnell, Microsoft Press, 2004
[FF1] An Introduction to FastFormat, part 1: The State of the Art, Matthew Wilson, Overload 89, February 2009
[FF2] An Introduction to FastFormat, part 2: Custom Argument and Sink Types, Matthew Wilson, Overload 90, April 2009
[FF3] An Introduction to FastFormat, part 3: Solving Real Problems, Quickly, Matthew Wilson, Overload 91, June 2009
[FREE-LUNCH] 'The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software', Herb Sutter, Dr Dobb's Journal, March 2005
[LEAK] http://en.wikipedia.org/wiki/Leaky_abstraction
[RECLS_SEARCH] This is one of the API functions from the recls library, described at http://www.recls.org/help/1.6.1/group__group__recls.html#a1
[RECLS-100%] I'm in the process of rewriting the recls library as 'recls 100%', whereby each implementation of recls for a given language will be implemented 100% in that language, rather than recls 1.0-1.8 where each language had a thinnish binding to the underlying C-API. See http://recls.org for progress.
[OOSC] Object Oriented Software Construction, 2nd Edition, Bertrand Meyer, Prentice-Hall, 1997
[XSTLv1] Extended STL, volume 1: Collections and Iterators, Matthew Wilson, Addison-Wesley, 2007