If your last project wasn't successful, you're not alone...
Apparently most software projects fail [Standish]. That's a scarily high statistic, but how do you decide whether a project has indeed failed? What do 'fail' and 'succeed' mean?
At its most simplistic, a successful project has achieved its goals and a failing project has fallen short of them to a greater or lesser degree. (Of course, if it doesn't have any goals then it's very hard to decide whether or not it has succeeded, but a project with no goals isn't very interesting.) So let's expand the concept of 'goals' a bit, and see how not meeting them causes failures.
The most general goal is 'deliver enough value to have been worth it'. This is still pretty vague, but it does start to produce some insights. The first thing is to deliver: a project is an obvious failure if it has never been finished. But it also won't have delivered if no one uses it, either because no users exist, or because they've got it but don't use it (the latter is a bit more subtle if you sell your software - you might have got the money for the first sale, but you're unlikely to get repeat sales if the users aren't happy).
But what of a project that is still being written? It's not yet delivering any value, so it cannot be thought of as successful or not, just 'in progress'. This is one of the problems of those over-long projects that go on for years - until the first user starts getting some value out of it, the project is just pure cost. This is one reason why early delivery of incomplete systems is such a good idea - apart from getting feedback on what you have done as soon as possible so you can adjust your future plans, the system can start earning its keep while you implement the next stage.
But what is this vague-sounding word 'value'? Well, it is a measure of what is important to the person paying for the work. This could be the obvious monetary measure, but could just as easily be improved performance, or simply making the user's life a bit easier and making them happier with the system. But delivering a small amount of value at a large cost isn't a good trade-off, so there must be a minimum level of value for the effort to have been worthwhile. To judge that, we must compare the value gained to the cost of achieving it.
That cost can comprise several aspects - again there's the obvious monetary cost of computers, tools and people's wages. A more subtle cost is the 'opportunity cost': while you were implementing feature X, you gave up the opportunity to do feature Y instead. This gets really noticeable when you take the huge decision to scrap an existing system and reimplement - all the time and effort taken to redo it could have been spent enhancing the existing system with completely new features, or improving its performance and reliability. And starting from an existing system generally means you have an already stable, working codebase, which makes incremental improvements and delivery easier and more predictable.
"I didn't fail the test, I just found 100 ways to do it wrong" - Benjamin Franklin
So in what ways could a project fail? Remember, though, that failing is not an all-or-nothing idea; it just means that in some way the project fell short of success, perhaps by only a little. A good starting point is to consider the three classic aspects you try to manage: Features, Quality and Time.
Failing on features reduces the utility of what is delivered, and thus reduces the amount of value delivered. In project management, when something has to give, this is often the compromise of choice, especially if you make sure the vital features are done first so that you can drop the Nice-To-Haves and optional extras if things look tight.
There's a more insidious way to fail on features though - doing the wrong ones. Creating features that few or no people use is a waste of time, effort and opportunity. This is why it is vital to get a really good idea of what users actually need and how important each need is. A related problem is implementing features that you think might be useful, but where no one really knows for sure. This is pretty common when you're creating something brand new, as you have to predict based on very little evidence or experience.
Failing on quality can have serious effects - buggy code can be an irritant that slows users down (so they get less value), while really bad bugs can prevent the system working at all, or cause so much pain and effort to get it working that the net effect is negative compared with the previous system. Some software is so critical that failures can cause fatalities.
Failing on time, in the form of not delivering by the promised date, is depressingly common in our industry, and I suspect accounts for most projects counted as failures. In the worst case the project is cancelled outright, and a 50% to 100% overrun is not uncommon in my experience. But even delivering slightly late can have serious effects on a business: from lost time that could have been spent on other opportunities, reducing the business's competitiveness, to contract penalty clauses, and ultimately going under because a competitor got their software out sooner.
As an aside, I suspect this effect can cause poor software to be common. Consider a market with two competitors. Company A takes six months and releases a limited and buggy product. Company B tries to do the 'right thing', and takes a year to ship a full-featured, robust product. But product A has already been shipping for six months and has dominated the market, so product B sells poorly and the company goes under. A sting in the tail is that there's no longer as large an incentive for Company A to improve their product further.
But it seems that being late is often not that serious an issue - many projects deliver late but are otherwise successes. As long as the company hasn't suffered too badly in the meantime - perhaps a previous version of the system is still selling well enough - such delays can be absorbed.
So, what causes these failures?
Doing the wrong features can stem from poorly understanding the target market and its needs. And don't confuse what the users say they want with what they actually need - quite often a feature request is someone's idea of how they think a problem can be solved, and there may be a better solution if you can find out what the real underlying problem is. Another cause is a form of Priority Inflation - a suggestion that something might be considered is included as a Nice-To-Have, which mysteriously becomes a Priority and is then translated into a Show-Stopper. These sorts of misunderstandings can be caused by the marketing department not functioning well, by a brand new and unknown market where the customers don't even know what would work, or by the developers not understanding what is needed - a lot of these are communication problems.
A hard issue to address is the natural tendency for developers to want to do something cool and new - it might well be interesting to rewrite everything in a brand new language, but can it really be justified as the best thing to do? Worst of all is when the people suggesting such a move have the power to push it through - the hardest customer to say no to is the manager or architect who has a pet feature they want to do.
Poor quality has a myriad of causes, but the most pernicious is for the development organisation as a whole to lack a solid commitment to quality. Even if individuals and teams want to do the right thing, it's very easy for corners to be cut, especially in the face of an important deadline or a customer offering a big order. In a sense the comparative ease with which software can be changed is a cause of this problem - quality can quickly be compromised by a last-minute 'can we just get this change in? It's a little one, I promise!' Even in the best teams with the best of intentions, it's all too easy to put something in late in the development cycle with not enough time to verify that your (no doubt carefully considered) reasoning was in fact accurate and the change is safe. Here I find Agile methods can cause problems - if your release cycle is very short then there is never enough time to flush out subtle problems, so there's pressure not to attempt anything unless it's trivial, and important issues get postponed.
Being late is easy. All you have to do is promise too much and underestimate the time it takes to implement - and politically that might be done, deliberately or unconsciously, to 'sell' the project. The real killer factor here is the interrelationships: the simple estimation technique I suggested last issue works well for small isolated tasks, and extends well to combining strings of subtasks into a bigger estimate. But one thing it is easy to miss is the interactions between tasks and with existing systems. If you do one thing in isolation, you have one estimate. Do two things and you have two estimates, plus one interaction that may generate a third small task to fix. Do three things and you have three estimates, plus three interactions. And so on, until the interactions dominate and you can't get anything done. This effect is why I think principles such as 'Separation of Concerns' and 'Program to an Interface, not an Implementation' work - they cause you to organise your design so that the interactions are reduced to a manageable level and things are isolated enough to get a grip on. In a similar manner, incremental implementation and delivery avoids biting off more than you can chew, and lets systems settle down, stabilise and have unforeseen problems fixed before you start the next round.
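To put a rough number on why the interactions come to dominate - this is my own back-of-the-envelope illustration, not part of the estimation technique from last issue - note that n tasks give n individual estimates, but up to one interaction per pair of tasks, and the number of pairs grows quadratically:

    interactions(n) = n * (n - 1) / 2

So 3 tasks give 3 potential interactions, 10 give 45, and 30 give 435 - long before the task list gets very big, checking and fixing the interactions swamps the work of the tasks themselves.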
But for all those 'failures', it is surprising just how many projects deliver something worthwhile, and are fantastic opportunities from which to learn lessons that can be applied to the next project. As Ralph Waldo Emerson put it, 'Our greatest glory is not in never failing, but in rising up every time we fail.'
Economic turmoil
While writing my last editorial I noted how things were going a little odd in the financial world, and how, hopefully, things would have settled down and the consequences become clearer by the time you read it. Well, another editorial later and things have got worse, with the outlook rather bleak for the next year or two (if we're lucky). The big difference now is that this is no longer just a financial crisis: everyday companies and people are being affected. Otherwise healthy companies are deferring major investments in new projects, or cancelling them until things become more stable. It's similar with spending decisions - buy only when absolutely necessary - and so products aren't selling as well as forecast. All those nice optimistic sales forecasts are suddenly out of date, no one has a clue what's going to happen next, and so companies understandably have to re-plan for an uncertain future.
But the personal cost of this can be great, as excellent people find themselves without a job. Having been made redundant myself in the wake of the dotcom bubble, I know how difficult it can be to know where to start sorting things out, but also how much the ACCU can help - just being a member looks good on a CV (and writing articles and giving talks looks even better!), and the networking aspects such as accu-contacts and the local meetings can really help you get the ideas and contacts that lead to the next job (and did you know there are also informal Facebook and LinkedIn groups?).
Le C++ nouveau est arrivé!
And to finish, some good news. Towards the end of September the C++ committee met, and a major milestone was reached: the draft of the new C++ standard [WG21] was published for an international ballot. This is essentially the feature-complete release, sent out to the beta testers (i.e. the national bodies) ready for some bugfixing. The first review is happening now, a second will happen next year, and then the standard will be ratified - the intention is for that to happen towards the end of 2009, so it will indeed be C++09. Several compilers are already implementing parts of it - GCC has various features available as options, patches and branches [GCC], Microsoft's early preview of Visual Studio 10 [Microsoft] has some too, and CodeGear [CodeGear] are also working hard on the new features. Time to get experimenting!
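By way of a taster, here's a minimal sketch using a few of the features in the draft (static_assert, variadic templates and auto). Exactly which features are available, and behind which switches, varies from compiler to compiler - GCC's, for example, sit behind the -std=c++0x option - so treat this as an illustration rather than a portability guarantee:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // static_assert: a compile-time check with a readable message
    static_assert(sizeof(long) >= 4, "long must be at least 32 bits");

    // A trivial variadic template: counts its arguments at compile time
    template <typename... Args>
    std::size_t count_args(const Args&...)
    {
        return sizeof...(Args);
    }

    int main()
    {
        std::vector<int> v;
        for (int i = 1; i <= 3; ++i)
            v.push_back(i * i);

        // 'auto' deduces the (rather verbose) iterator type for us
        for (auto it = v.begin(); it != v.end(); ++it)
            std::cout << *it << ' ';
        std::cout << '\n';

        std::cout << count_args(1, 2.5, "three") << " arguments\n";
        return 0;
    }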
References
[CodeGear] http://www.codegear.com/products/cppbuilder/whats-new/
[GCC] http://gcc.gnu.org/projects/cxx0x.html
[Microsoft] https://connect.microsoft.com/VisualStudio/content/content.aspx?ContentID=9790
[Standish] http://www.standishgroup.com/ - publishers of the CHAOS series of reports into software projects. Their results fairly consistently report that around 70-80% of projects fail or are challenged.
[WG21] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2798.pdf