Monday, March 30, 2015

Why estimate?

I have been reading with some interest the recent debate around #noestimates.
I think the movement is somewhat misnamed, in that it is not “estimates” which appear to be the problem, so much as the decisions which are taken around them. To some extent, I think this comes from a misunderstanding as to what estimation is (and is not). If the movement calls attention to this misunderstanding, then this is all to the good: any critical thinking which drives a deeper understanding of how to handle project uncertainty is welcome.
I am interested in estimation mostly because - when left to my own devices to come up with a single-point estimate - I am more often than not hugely wrong. But therein lie a couple of the major issues: estimation is not about being right, and it is not an individual effort.

The estimate is not the deliverable

“Here is my estimate.” Seductive words, yet dangerous ones. Estimation is an ongoing activity, not a deliverable. Estimates are essentially a form of risk management, a way in which we can deal with uncertainty in order to set co-ordinates and move forward. An estimate can therefore be considered an impulse to action.

Estimates are for decision-making and lead to commitments

An estimate is an input to a more important process: usually that of decision-making, and this is where I first start to deviate from some of the #noestimates thinking.
Estimates set bounds on the unknowable, but they are not the same thing as targets, nor the same thing as commitments. Communicating this single fact is vitally important in estimation activity.
I may estimate that a project is highly likely to be delivered in Q4 of the current calendar year. The internal Marketing Department may be targeting the beginning of Q4 for launch, and my Project Management team may therefore have committed to a Q3 delivery. Herein lies the actual problem: my high-confidence estimate lies outside of the project commitment, so I now need to frame my level of confidence in delivering to the committed date (NOTE: it will of course not be “highly likely”).
Estimates are therefore a management issue, and a political hot potato, and this is doubtless where much of the annoyance with estimates comes from.

The future is uncertain: deal with it!

Estimates, like projects, live in the future, and the future is unknowable, so no one technique is going to help out for all projects; the key is to tailor the estimation process to the project in question. However, some general pointers can set everyone off in the right direction:
  • Start estimation early: this way you can frame the commitment discussion.
  • Re-estimate often: this is just tracking actuals against baseline. The “cone of uncertainty” will narrow as you approach your target: if you check your direction often, you can make sure you are still on target. You need to learn from your estimation activity, and continually correct for “error”.
  • Estimate from what you can measure: consistency matters more than absolute accuracy, so try to use the same measures in the same way each time you estimate.
  • Raise the red flag: when your delivery looks like it is straying from your commitment, raise the red flag (as soon as you notice). This will always hurt, but remember, estimates are an input to decision-making: the decision to continue or not is an important one, and relies upon good, timely data.
  • Don’t overcommit: this is the hardest line. At least make everyone aware of your level of confidence in any commitment (and if you can manage to have it formally recorded as a risk, then all the better). This is not a CYA: again, your level of confidence is an input to a decision-making process, so don’t edit yourself: speak now or forever hold thy tongue!
  • Don’t make an individual estimate: gather several individual estimates from several perspectives, and use these to decide upon best case, likely case, and worst case scenarios (see the sketch after this list). Individual (and particularly expert) estimates have repeatedly proven to be wrong; group estimates tend to be more accurate.
  • Don’t use one technique: again, pick three or four techniques and home in on the most likely scenario. For instance, a wideband Delphi estimate, an estimate using proxies for effort (e.g. number and length of requirements based upon similar historic projects), and a bottom-up estimate using T-shirt sizing, all used together, will give you some sense of directional likelihood.
  • Pick the right techniques for the right point in time and the right type of activity: don’t use heavyweight methods for small, rapid projects. Cut your cloth according to the type of project you are working on, and its perceived risk/benefit profile. Don’t invest too much time in estimation: rapid, timely estimates will have more value than a deeply modelled, but effort-intensive, approach. No estimate (not even an accurate one) can predict the future; estimates can just help to reduce the uncertainty.
  • Check for change: when estimating from historic data or re-estimating in a project, be aware of what has changed and what has stayed the same. Make sure to adjust your model for change.
  • When estimating for effort, also estimate for productivity: we all forget that productivity ebbs and flows. A man-day of effort is not always equivalent to a man-day of delivery: most of us fail to hit anywhere near 100% productivity, and our productivity varies wildly during a project. Either assume something like a 60% productivity rate from the start, or set out with a 100% productivity estimate and adjust it down as you see that delivery never reaches these levels.
  • Don’t let overtime skew the picture: death march projects make ample use of overtime to swell a man-day from 8 hours to 10, 12 or more hours. If you are using overtime, keep track of it: people are (in general) less productive the more they work, so keep an eye on total hours worked.
  • Estimate range and confidence: try, try, try not to give single-point estimates. Try to give a tight range with a level of confidence attached, e.g. "We are 90% confident of hitting Q2 delivery, but only 75% confident of doing so before the end of May". If you are asked to commit to a single point, remember not to overcommit... (a worked sketch of range-and-confidence estimation follows this list).
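To make that concrete, here is a minimal sketch in Python - my illustration, not a standard method: the three-point task estimates and the 60% productivity rate are invented. It pulls together several of the pointers above: group-sourced best/likely/worst case estimates, a productivity adjustment, and a Monte Carlo simulation which yields a range with a confidence level rather than a single point.

    import random

    # Three-point estimates (best case, likely case, worst case) per work item,
    # in ideal person-days, gathered from several estimators and perspectives.
    # All numbers here are invented for illustration.
    tasks = [
        (3, 5, 10),
        (2, 4, 9),
        (5, 8, 20),
    ]

    PRODUCTIVITY = 0.6  # assume only ~60% of a person-day is productive delivery
    TRIALS = 10_000

    def simulate_total():
        """One Monte Carlo trial: draw each task from a triangular distribution."""
        effort = sum(random.triangular(best, worst, likely)
                     for best, likely, worst in tasks)
        return effort / PRODUCTIVITY  # convert ideal effort to calendar effort

    totals = sorted(simulate_total() for _ in range(TRIALS))

    # Report a range with a confidence level attached, not a single point.
    for confidence in (0.50, 0.75, 0.90):
        value = totals[int(confidence * TRIALS) - 1]
        print(f"{confidence:.0%} confident of delivery within {value:.1f} person-days")

The exact distribution matters far less than the habit: the output is a set of statements of the form "we are 90% confident of delivery within N person-days", which is precisely the shape of answer a commitment discussion needs.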
My final recommendation would be to read up on estimation: I really like Steve McConnell’s “Software Estimation: Demystifying the Black Art” as a great overview.
Finally, read the pros and cons of the #noestimates argument: making an informed decision about the right approach to take at least leads to a mindful project delivery approach.

Wednesday, January 16, 2013

Whinelist: things users don't care about

Care of O'Reilly Radar comes this great little post from PeteSearch: Things users don't care about. I'm in complete agreement with the list: no-one really cares about how much effort a project took; they are only interested in the end product and their own use of it. Although I agree that singing "Nobody knows the troubles I've seen..." is not much help, I must confess that the list does break down into things users don't care about and shouldn't (e.g. your project effort), and things users don't care about and probably should (without necessarily having to be aware of them), into which bucket I would throw the "URPS" of FURPS: items like extensibility and architecture. Anyway, it's a good post, and advice worth remembering...

Wednesday, October 24, 2012

A short film about bugs...

...which starts with an apology for the pretentious title.
  1. Anyone can identify a bug.
  2. Many fewer people can resolve a bug.
  3. Far fewer still can anticipate a bug and stop it from rearing its ugly head.
  4. Anticipation is a greatly under-estimated skill.

Sunday, March 18, 2012

Writing Good Specifications the Simple Way

When writing specification documents, whether requirements, functional specifications, software design specifications, use cases, user stories or even briefing documents and statements of work, it is easy to become caught up in technical detail: is this a functional or a non-functional item, do I draw up an activity diagram, or model state...?
As Schiller (very loosely) said,
Don't try so hard; pleasing everyone can be a bad thing.
After many years of specification writing, and diving deep into IEEE standards, UML, semantic models and a myriad of other fine, worthy, but complex approaches, I have come to the conclusion that you should just think like a primary schooler (grade schooler).
Specification is about communication: whether from a client to a technical team, from one member of a technical team to another, or from a technical team back out to a client or a regulator. When you first learn to write, your teacher's initial focus is on having you communicate, i.e. express your ideas clearly.
So - even though thorough, technically focused systems can be great - the cornerstone of a good specification is plain, understandable, concise English. A very simple structure can take you a long way and - like "progressive enhancement" in HTML - allows you to build a more complex structure around it. So:...

SIMPLE RULES

  1. Don't write passive sentences. Specifications are about a system or parts of a system doing and responding, so focus on active sentences - it's just good object orientation anyway...
  2. Do structure any specification with WHO, WHAT, WHY, WHERE, and WHEN.
  3. Then add to your structure with WHO NOT, WHAT NOT, WHY NOT, WHERE NOT, and WHEN NOT.
  4. When you've done this, give examples (for both paths).
  5. Then focus on the HOW: first of all HOW OFTEN and HOW MUCH.
  6. Next focus on the HOW TO (which is unfortunately where a lot of specification starts).
Keeping to these simple rules and practices is often all that is needed for a comprehensive, understandable and deliverable specification.

PUSHING FURTHER - JOINING IDEAS TOGETHER

Of course, if you want to push further, then:
  • If you need to link ideas up into activities and flows, try to keep in binary mode: it will keep the flow and decision points simple. 
    • (For example for inputs it is a good idea to specify for valid, invalid and null values, but rather than keeping a three-value decision point, split it into two binary decisions i.e. value provided - Y (not null)/N (null); then value valid Y (valid)/N (invalid).)
  • Break use cases into simple "SIPOC" (supplier, input, process, output, customer) stories by not mentioning SIPOC at all. Instead, just note down (a worked template follows this list): 
    • what you "start with:", and 
    • who this is "provided by:" - together these deliver a great set of preconditions for your... 
    • ...big "do: something" block; 
    • finishing up with "end with: (result)", 
    • and adding "given to:" will keep you results-focused and looking at the next use case in the value chain.
    • You can - if you are feeling adventurous - add some useful detail in a series of use cases by being clear about what "does change" (variables) and what "does not change" (constants) during the use case.
  • Never take anyone's model (including this one) at face value - there will always be cases where it does not quite fit: one format does not suit every type of specification item, so just pick the most appropriate one. For instance, "plain English" may not be the most appropriate specification language for communicating with physicists!
  • Know when to add complexity (by reading about and understanding all the different ways and means of specifying - see "Reading and Improving").
  • Know when to stop paper-specification and move to code (including tests and comments, for example JavaDocs) - or you will spend a great deal of time maintaining paper documents which will ALWAYS be slightly out-of-date and inaccurate.
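To illustrate the "SIPOC without SIPOC" story format above, here is a small, completely invented example (the monthly order report is hypothetical):

    start with: a set of validated customer orders (the input)
    provided by: the order entry team (the supplier)
    do: compile the monthly order report
    end with: a report published to the intranet (the result)
    given to: the sales management team (the customer)

Note how "start with" and "provided by" state the preconditions, and "given to" points directly at the next use case in the value chain.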

READING AND IMPROVING

A short blog post talking about simplicity will never be comprehensive, so I would strongly recommend the following books and authors (just a short(!) selection):

  • Requirements Engineering Fundamentals (Pohl & Rupp)
  • Any of Karl Wiegers' books on Software Requirements
  • The RSpec Book (Chelimsky et al)
  • Specification by Example (Adzic)
  • Writing Effective Use Cases (Cockburn)
  • Test Driven (Koskela)
If you are interested in what Schiller actually said, it is:

Kannst du nicht allen gefallen durch deine Tat und dein Kunstwerk,
mach' es wenigen recht; vielen gefallen ist schlimm.
(If you cannot please everyone through your deed and your artwork, please the few; to please many is bad.)

I will try to follow up on this post with more detail on the SIMPLE RULES, but in most cases, just thinking about each of these super simple questions without any other background is a really helpful technique, as I hope will become evident in the example below.

Examples


Just to make things somewhat easier to understand, here is a completely invented example which is far from perfect, but shows how applying the simple rules can help to build up into a solid requirement or specification item.

1. Not "an email will be sent"... BUT "The software's email component will send an email" (active sentence)
2. Even better still is...

  • "The software's email component will send an email to the designated recipient" (who)
  • "The software's email component will send a confirmation email to the designated recipient" (what)
  • "The software's email component will send a confirmation email to the designated recipient to allow him/her to know their request has been successfully processed" (why)
  • "When a submitted request has been confirmed as completely processed, the software's email component will immediately send a confirmation email to the designated recipient to allow him/her to know their request has been successfully processed" (where and when)

3. Even better still is...being very precise about "triggers" and "non-triggers".

  • "When a submitted request has been confirmed as completely processed, the software's email component will immediately send a confirmation email to the designated recipient to allow him/her to know their request has been successfully processed. A confirmation email will not be sent when the request has not been confirmed as completely processed or where a designated recipient has not provided his/her email address." (NOT)

4. ...and adding in the "how often, how much" detail helps also...

  • "When a submitted request has been confirmed as completely processed, the software's email component will immediately send a confirmation email to the designated recipient to allow him/her to know their request has been successfully processed. A confirmation email will not be sent when the request has not been confirmed as completely processed or where a designated recipient has not provided his/her email address. A confirmation email will only be sent once a week to any individual designated recipient regardless of how many requests have been submitted."


If you then start to add in examples of each of the elements and what output you expect for each example (such as where a request is in the submission process, or different designated recipient details, or expected content of a confirmation email, or designated recipients who have made single requests, multiple requests etc.), then you really start to get a coherent, complete and testable specification item.
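For instance, a few invented example rows for the requirement above might read:

  • Request confirmed as completely processed, recipient email on file, no confirmation sent to this recipient this week → send confirmation email.
  • Request confirmed as completely processed, recipient email on file, confirmation already sent to this recipient this week → do not send.
  • Request not yet confirmed as completely processed → do not send.
  • Recipient email address not provided → do not send.

Each row is directly testable, which is exactly what makes the specification item complete.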

Friday, March 11, 2011

Evernote continued...

I'm really finding Evernote is hitting the sweet spot: I've dumped reading and link lists in, and started tracking some task and TODO lists, and it works pretty well. My only issue is that I'd like to use it at work (where we have no admin rights), and the web interface is somewhat clunky and slow (particularly when the corporate network performance is pokey), so this is hampering my take-up and use of it. Otherwise, I'm finding it a great wee app.

Thursday, March 10, 2011

Software Reliability and Volatility

I've been working a reasonable amount of late on software defect management and metrics to indicate "software reliability". Personally, I always think that reliability is a continuum (like trust), and that a piece of software is only as reliable as its last failure. Software which predictably fails big and often is obviously "unreliable", but much software runs fine most of the time until something "unexpected" happens... is this software then unreliable? Well, probably not until it starts failing more frequently, and more predictably, and therein lies the problem: it is difficult to know how reliable a software system is until it stops being reliable.

There are pretty much two approaches to reliability: one is based upon trending, the other upon prediction. I much prefer the trending approach - check how reliable your software can be considered now, check again at now+1, and take the rate of change as the indicator of your reliability vector - but I also think that taking a look forward with a decent prediction model can help to set a sense of expectation.

I think this sense of expectation is best summed up in the idea of "code smell" or "defect risk", which could be considered as "is your code smelling better or worse over time" (trend) and "is your code likely to smell worse at any point in the future" (prediction). There are obviously complexity, coverage, sizing, and defect tracking rules which can be applied, but I really like the concept of and approach to "volatility" set out in the article "Software Volatility" by Tim Ottinger and Jeff Langr in the latest Pragmatic Programmer Magazine, since this seems to blend trend and prediction together: the code which you touch most often is the place where code is most likely to break, and the more you touch it, the more likely this is to happen.
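As a rough illustration of the volatility idea (my sketch, not Ottinger and Langr's tooling; it assumes a git repository and takes change frequency as a proxy for volatility):

    import subprocess
    from collections import Counter

    # List every file path touched by every commit; the empty --pretty format
    # suppresses the commit headers, leaving only the file names.
    log = subprocess.run(
        ["git", "log", "--pretty=format:", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Count how often each file has changed: the most-touched files are the
    # most "volatile", and so the most likely places for the code to break next.
    changes = Counter(path for path in log.splitlines() if path)

    for path, count in changes.most_common(10):
        print(f"{count:5d}  {path}")

Weighting recent commits more heavily, or crossing this list with complexity figures, would push the measure further towards prediction, but even this raw count tends to point straight at the hot spots.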

I strongly suggest reading the article, and looking to apply a similar approach in a balanced reliability metrics scorecard and inspection process. I certainly don't think that it should replace coverage (which lets you know where you have unit tested), cyclomatic complexity (which lets you know how many tests you should write, and when you might want to consider refactoring), sizing (which just gives you a volume idea of what you are looking at), code analysis (a sort of spellchecking for your code), or defect classification and tracking (for failure rates and densities)... but it certainly represents a tool worth sharpening and adding to the toolbox.

Saturday, February 26, 2011

Too many O'Reilly Books?

I was just looking through my O'Reilly digital bookcases and wondering whether it is possible to have too many tomes from this publisher:
(NB: I think the answer is no...)

Here is the list of what I found:

97 Things Every Programmer Should Know
Beautiful Data
Beautiful Visualization
Data Analysis with Open Source Tools
HTML5: Up and Running
Learning Rails
Making Software
Making Things Happen
The New Community Rules
The Productive Programmer
The Social Media Marketing Book
Web 2.0: A Strategy Guide
The Art of SEO
Beautiful Testing
Complete Web Monitoring
CSS Pocket Reference, Third Edition
Designing Web Interfaces
Facebook Cookbook
Getting Started with Flex 3
Grails
Information Architecture for the World Wide Web, Second Edition
Programming Collective Intelligence
Search Engine Optimization for Flash
SEO Warrior
Universal Design for Web Applications
Website Optimization

Thursday, February 24, 2011

Au Revoir MyBlogLog

As part of the fallout of the Yahoo thinning out (and hoping that delicious finds a buyer), I just received the following mail:

Yahoo!

Dear MyBlogLog Customer,

You have been identified as a customer of Yahoo! MyBlogLog. We will officially discontinue Yahoo! MyBlogLog effective May 24, 2011. Your agreement with Yahoo!, to the extent that it applies to the Yahoo! MyBlogLog, will terminate on May 24, 2011.

After May 24, 2011 your credit card will no longer be charged for premium services on MyBlogLog. We will refund you the unused portion of your subscription, if any. The refund will appear as a credit via the billing method we have on file for you. To make sure that your billing information is correct and up to date, visit https://billing.yahoo.com.

Questions?
If you have questions about these changes, please visit the Yahoo! MyBlogLog help pages.

We thank you for being a customer on Yahoo! MyBlogLog.

Sincerely,

The Yahoo! My BlogLog Team

Sunday, February 06, 2011

Evernote...at last

I am not the world's greatest "Getting Things Done" guy (I only got halfway through the book, and just could not get around to putting it into place). In a quest for lightweight but useful "To-do" managers and "Do not forget" managers, I have been through the full gamut: I even bought Bento (and have not really done much with it). The one tool I really loved and do use is delicious, but I have the collywobbles after the Yahoo announcement (withdrawal from market, and then a quick revision to "we want to sell it"), and have been looking into not losing my valuable brain estate. I've been taking a peek at Evernote for a while, and so finally decided to download it and work out how to get my delicious stuff into it: easy-peasy, just follow these instructions - works a dream: Instructions to import delicious into evernote... Lovely.

Saturday, February 05, 2011

More reviewing...pt2

So, I've started on the first chapter of "Mining the Social Web" proper, and realise it will involve much side reading, as 1) you really do need to know Python (although this is already installed in OSX, joy!), 2) you really need to read all the APIs, and 3) it looks like the book is going to have me install a large number of Python packages. Still, using the twitter package and pulling down the timeline content in the terminal is quite cool.

I think I will probably need a once-through hands-off, then a once-through hands-on, as the first chapter is moving pretty fast, and making some fairly heavy assumptions about development confidence.

Friday, February 04, 2011

Starting out on the O'Reilly Blogger Review Programme

Those nice folks at O'Reilly have started a Blogger Review Programme, and kindly let me be a part of it. Essentially you commit to write a review of an O'Reilly tome in return for a digital review copy - just like I remember it from my real journalism days!

I've been insomniac the past couple of nights (it is 3:45am in Geneva), mostly churning some Measurement & Analytics annoyances in my head, so am now sitting reading the intro. to my first Blogger Review book "Mining the Social Web" (and I suppose I should really try and write this posting in hReview (but I can't be bothered)). So first thoughts are: looks interesting - looks like social network analysis for the social web, so ties nicely into some of my usual old hobby-horses. First gripe would be that all the code is in Python (which I don't know), so I'll have to get to grips with that. It looks like it might hook in nicely with the really interesting, but slightly forbidding "Programming Collective Intelligence" (which if I remember right is also chock-full of Python) - could be a useful companion piece.

Since the O'Reilly review is a 200-worder, I'll try and run a longer review chunk by chunk as I work my way through the book, and then try to sum up the overall impression.

Monday, February 22, 2010

Use Casing with Your Clients - BDD or Given, When, Then...

Sitting at home, feeling sorry for myself as I am off work sick, I thought I might write a short blog post on using Behaviour-Driven Development (BDD) for requirements work with clients - a technique which I am starting to use with some (note the qualification) success.

I'm a big fan of use casing "As is" models and "To be" models to get a really nice sense of the contexts in which a system is supposed to work (and by system, I mean people, process and technologies), but this can be hugely tough unless you are working with clients who are formal, systematic, logic-driven thinkers.

And this is how I stumbled upon Dan North's BDD - as a way of reducing the formal use case methodologies (pre- and post-conditions et al.) into a very simple story formulation.
Feature: As a stakeholder, I need something, in order to meet my goal. (High-level business requirement - the main system "features".)
Scenario: Given a starting context, when something happens to change this context, then this should be the result. (Lower-level scenario and low-level "feature" descriptions).
Note: I put “feature” in quotes because feature is a controlled word in BDD.
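To make the formulation concrete, here is a small invented example in the Cucumber/Gherkin style (the cash-withdrawal domain is hypothetical, not from a real project):

    Feature: Cash withdrawal
      As an account holder, I need to withdraw cash from an ATM,
      in order to access my money outside banking hours

      Scenario: Withdrawal within the available balance
        Given an account with a balance of 100
        When the account holder requests a withdrawal of 20
        Then the ATM should dispense 20
        And the account balance should be 80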

The greater joy of this technique is that it effectively shrinks the distance between what the client says he wants, and what your technical team is capable of delivering back to satisfy these requirements. Why? Well, as this fine blog post points out, it is because this is a business language way of describing states, event triggers and test conditions. Even more importantly, through the use of JBehave or Cucumber with Automated Testing tools like Selenium, your developers can start by writing failing tests for your requirements (requirements-driven, test-driven development), and then - if it is worth the overhead - write automated functional and acceptance tests.
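To show how such a scenario becomes a failing test first, here is a minimal sketch of step definitions in Python using the behave package (one possible tool: the post mentions JBehave and Cucumber, and the wiring is analogous; the Account class is invented for illustration):

    from behave import given, when, then

    class Account:
        """A toy domain object standing in for the real system under test."""
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            self.balance -= amount
            return amount

    @given("an account with a balance of {balance:d}")
    def step_account(context, balance):
        context.account = Account(balance)

    @when("the account holder requests a withdrawal of {amount:d}")
    def step_withdraw(context, amount):
        context.dispensed = context.account.withdraw(amount)

    @then("the ATM should dispense {amount:d}")
    def step_dispensed(context, amount):
        assert context.dispensed == amount

    @then("the account balance should be {balance:d}")
    def step_balance(context, balance):
        assert context.account.balance == balance

Until the withdraw behaviour exists, every step beyond the Given fails - which is exactly the requirements-driven, test-driven starting point described above.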

I'm desperate to get this rolling from Business Contact through Business Analyst to Developers and Testers within the company I work at - but I wonder if anyone else has gone ahead and implemented this fully as an end-to-end requirements and development methodology. In the meantime, I'll get my act together and make a couple of posts (when I'm feeling better) about a few workshop techniques which work nicely for the client end of BDD.

Sunday, November 22, 2009

My Review of Website Optimization

Originally submitted at O'Reilly

Is your site easy to find, simple to navigate, and enticing enough to convert prospects into buyers? Website Optimization shows you how. It reveals a comprehensive set of techniques to improve your site's performance by boosting search engine visibility for more traffic, increasing con...


Diverse, related topics in one place

By thristan from geneva, switzerland on 11/22/2009


4 out of 5

Pros: Concise, Helpful examples, Easy to understand

Best Uses: Expert, Intermediate

This is a hugely useful read, particularly if - like myself - you have a range of responsibilities for managing online assets ranging from design, deployment and testing to site promotion and marketing.

Much of the information on each vertical section can be found elsewhere - for instance SEO, PPC and performance optimisation - and individually, each section provides solid information which might not surprise.

The real advantage of this book is that it forces the reader to think of each of the diverse threads as being intimately related in the customer's experience. If you are a UML geek, think of it as an answer to an end-to-end use case (find site, load site, explore site); if you are a marketeer, it will expand your horizon in understanding how technical, off-page elements can contribute to satisfaction, bounce rate reduction and improved conversion; if you work in application deployment or support, it will give you some useful pre-launch direction on load and performance testing, and gain you brownie points with your Sales and Marketing customers.

So, short order review is that while individual sections on their own may not deliver any surprises to someone with existing expertise, the overall remit of the book will expand the horizons of most who read it.


Tuesday, October 06, 2009

Corporate Programme Management for the Web

I've worked for a number of blue-chips over the years, and am continually surprised by the fact that - where online is not a primary or secondary revenue base - sensible, value-driven approaches to managing a Web portfolio are few and far between.

Many larger companies pay lip service to the concept that the Web is (or can be) a hugely important component in the process of "doing business": they show initial interest (in terms of capital investment) in a couple of pilot projects, and then seem to rapidly lose interest in keeping up with the Internet as a "going concern".

The complexity of stakeholders, risk management, the "agility" of a larger corporation, and - of course - competing (core) investment concerns all play a role in making it difficult to apply a longer-term, strategic programme approach to online activity, and ensuring that this is delivered according to best practices.

Benchmarking helps, but - funnily enough - when speaking to colleagues working in related roles for different companies, I find that this is, in fact, an endemic problem. So this is a call for comment, input and assessment from anyone working in an internet management role where the core product is not sold online, but where the Internet is "considered" a core component of business activity - correct me if I am wrong, maybe share successes and failures, and indicate the righteous path forward...