Friday, March 11, 2011

Evernote continued...

I'm really finding Evernote is hitting the sweet spot: I've dumped reading and link lists into it, and started tracking some task and TODO lists, and it works pretty well. My only issue is that I'd like to use it at work (where we have no admin rights), and the web interface is somewhat clunky and slow (particularly when the corporate network performance is pokey), so this is hampering my take-up and use of it. Otherwise, I'm finding it a great wee app.

Thursday, March 10, 2011

Software Reliability and Volatility

I've been working a fair amount lately on software defect management and on metrics that indicate "software reliability". Personally, I think reliability is a continuum (like trust), and that a piece of software is only as reliable as its last failure. Software which predictably fails big and often is obviously "unreliable", but much software runs fine most of the time until something "unexpected" happens... is this software then unreliable? Probably not, until it starts failing more frequently and more predictably, and therein lies the problem: it is difficult to know how reliable a software system is until it stops being reliable.

There are essentially two approaches to reliability: one based on trending, and the other on prediction. I much prefer the trending approach: assess how reliable your software could be considered now, assess it again at now+1, and look at the rate of change; that rate is the indicator of your vector of reliability. But I also think that looking forward with a decent prediction model can help to set a sense of expectation.
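As a rough illustration of the trending side (the release labels and failure counts below are entirely made up), something as simple as release-on-release failure deltas gives you that rate of change:

    # A minimal sketch of a reliability trend: the rate of change in
    # observed failures per release. All numbers here are invented.
    failures_per_release = {"1.0": 14, "1.1": 11, "1.2": 12, "1.3": 8}

    counts = list(failures_per_release.values())
    # Deltas between successive releases: negative values mean the
    # "vector of reliability" is heading the right way.
    deltas = [b - a for a, b in zip(counts, counts[1:])]
    average_delta = sum(deltas) / len(deltas)

    print(f"release-on-release deltas: {deltas}")
    print(f"average change per release: {average_delta:.1f} failures")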

I think this sense of expectation is best summed up in the idea of "code smell" or "defect risk", which could be framed as "is your code smelling better or worse over time?" (trend) and "is your code likely to smell worse at any point in the future?" (prediction). There are obviously complexity, coverage, sizing, and defect tracking rules which can be applied, but I really like the concept of and approach to "volatility" set out in the article "Software Volatility" by Tim Ottinger and Jeff Langr in the latest Pragmatic Programmer Magazine, since it seems to blend trend and prediction together: the code you touch most often is the code most likely to break, and the more you touch it, the more likely that becomes.
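To make the volatility idea concrete, here's a back-of-the-envelope sketch of my own (not the article's method): treat raw change frequency in a git repository's history as a proxy for where the next defect is most likely to appear.

    # Rough churn-style volatility measure: count how often each file has
    # been touched across the repository history, on the assumption that
    # frequently-changed files are the likeliest places for the next defect.
    import subprocess
    from collections import Counter

    # List the files changed in every commit, one per line.
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout

    touch_counts = Counter(line for line in log.splitlines() if line.strip())

    # The ten most "volatile" files, by simple touch count.
    for path, touches in touch_counts.most_common(10):
        print(f"{touches:4d}  {path}")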

I strongly suggest reading the article, and looking at applying a similar approach in a balanced reliability metrics scorecard and inspection process. I certainly don't think it should replace coverage (which tells you where you have unit tested), cyclomatic complexity (which tells you how many tests you should write, and when you might want to consider refactoring), sizing (which just gives you a sense of the volume of what you are looking at), code analysis (a sort of spellchecker for your code), or defect classification and tracking (for failure rates and densities)... but it certainly represents a tool worth sharpening and adding to the toolbox.
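If volatility did get folded into such a scorecard, the simplest shape I can picture is a weighted blend of normalised metric scores. The metric names, weights, and scores below are purely illustrative, not a recommendation:

    # Toy balanced scorecard: each metric scored 0 (bad) to 1 (good),
    # combined with illustrative weights into a single overall figure.
    weights = {"coverage": 0.2, "complexity": 0.2, "static_analysis": 0.15,
               "defect_density": 0.25, "volatility": 0.2}
    scores = {"coverage": 0.7, "complexity": 0.6, "static_analysis": 0.8,
              "defect_density": 0.5, "volatility": 0.4}

    overall = sum(weights[m] * scores[m] for m in weights)
    print(f"overall reliability score: {overall:.2f}")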