
On Estimation

  • 30/09/2014
  • Posted by EuroSTAR

In the late 1980s, I was working at a telecoms company as a development team leader. Around 7pm one evening, I was sitting opposite my old friend Hugh. The office was quiet; we were the only people still there. He was tidying up some documentation, and I was trying to fix a stubborn bug.

Anyway. Along came the IT director. He was going home and paused at our desks to say hello and ask how it was going. Hugh gave him a brief review of progress and said in closing, “we go live a week on Friday – two weeks early”. Our IT director was pleased but also highly perplexed. His response was, “this project is seriously ahead of schedule”. Off he went, scratching his head.

As the lift doors closed, Hugh and I burst out laughing. This situation had never arisen before. What a problem to dump on him! How would he deal with this challenge? What could he possibly tell the business? It could be the end of his career! Delivering early? Unheard of!

It’s a true story, honestly. What it reminds me of is this: if estimation is an approximate process, then in the long run our estimation errors (over- or under-estimation, expressed as a percentage) should balance statistically around a mean of zero, and that mean would represent the average actual time or cost it took our projects to deliver. Statistically, if a project is delayed (or advanced!) by unpredictable, unplanned events, we should overestimate as often as we underestimate, shouldn’t we?
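
To make that statistical intuition concrete, here is a minimal sketch (my own illustration, not from any published model) in which estimates miss the actual effort by symmetric, unbiased noise; the distributions and figures are assumptions purely for illustration. The long-run mean percentage error settles near zero:

    import random

    random.seed(42)

    # Illustrative assumption: each estimate misses the actual effort by
    # symmetric, unbiased noise of up to +/-30%.
    def mean_estimation_error(n_projects=10_000):
        errors = []
        for _ in range(n_projects):
            actual = random.uniform(20, 200)    # actual effort, person-days
            noise = random.uniform(-0.3, 0.3)   # symmetric estimation noise
            estimate = actual * (1 + noise)
            errors.append((estimate - actual) / actual * 100)  # % over/under
        return sum(errors) / len(errors)

    # With genuinely unbiased errors, this prints a value close to 0%.
    print(f"mean estimation error: {mean_estimation_error():+.2f}%")

If estimation error really were just noise, that is roughly what our project history should look like.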

But clearly this isn’t the case. Overestimating, and delivering early, is so rare it’s almost unheard of. Why is this? Here’s a stab at a few reasons why we consistently ‘underestimate’.

First (and possibly foremost), we don’t underestimate at all. Our estimates are reasonably accurate, with some margin for error of course, but we are consistently squeezed to fit pre-defined timescales and budgets. We ask for six people for eight weeks, but we get four people for four weeks. How does this happen? If we’ve been honest in our estimates, surely we should negotiate a scope reduction if our bid for resources or time is rejected? Whether we de-scope a selection of tests or not, when the time comes to deliver, our testing is unfinished. Of course, go-live is a bumpy period – production is where the remaining bugs are encountered and fixed in a desperate phase of recovery. Achieving a reasonable level of stability takes as long as we predicted. It’s a ritual. We just delivered too early.

Secondly, we are forced to estimate optimistically. New technologies, approaches, fads if you like, adopted on the promise of productivity improvement, are then assumed to be certainties. Risks to the project are deemed negligible and experience is ignored. The last project, which was so troublesome, was an anomaly, and it will always be better ‘next time’. Of course, this is nonsense. One definition of madness is to expect a different outcome from the same situation and inputs. We (or our leadership) never seem to learn.

Thirdly, our estimates are irrelevant. Unless the project can deliver within some mysterious, predetermined time and cost constraints, it won’t happen at all. Where the vested interests of individuals dominate, it could conceivably be better for a supplier to overcommit and live with a loss-making, troublesome post-go-live situation, recouping the money through a support contract. In the same vein, the customer may decide to proceed with a no-hoper project because certain individuals’ reputations, credibility and jobs depend on the go-live dates. Remarkable as it may seem, individuals within customer and supplier companies may find themselves colluding to stage doomed projects. Only the lawyers gain in the long run. Perhaps I’m being cynical.

No wonder there is a significant following of the #NoEstimates movement.

Assuming project teams aren’t actually incompetent, it’s reasonable to assume that project execution is never ‘wrong’ – execution just takes as long as it takes. There are only errors in estimation. Unfortunately, estimators are suppressed, overruled, pressured into aligning their activities with imposed budgets and timescales, and they appear to have been wrong – after the event.

But there is some small comfort for testers. With experience, testers can predict with some degree of certainty how long it will take to test a perfect system, and then make allowances for the reality. Our challenge is predicting how long it will take to test a system that has few or many, minor and critical, problems. Given the experience of the developers and the delivered system to test, we have at least some knowledge with which to refine our estimates. If our estimates are cut, we can at least say that we won’t deliver the same amount of knowledge to our project: risks will be taken; bugs will get through; workarounds will be required.
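
As a purely illustrative sketch (my own, not a model from this article), the ‘perfect system plus allowances’ idea might look like this; the baseline effort, the number of fix/retest cycles and the retest fraction are all hypothetical assumptions:

    # Hypothetical model: start from the effort to test a 'perfect' system,
    # then add an allowance for each expected bug-fix/retest cycle.
    def test_estimate_days(perfect_system_days: float,
                           expected_fix_cycles: int,
                           retest_fraction: float = 0.3) -> float:
        # retest_fraction: assumed share of the baseline re-run per cycle.
        allowance = perfect_system_days * retest_fraction * expected_fix_cycles
        return perfect_system_days + allowance

    # Example: 20 days for a perfect system, expecting 3 fix/retest cycles.
    print(test_estimate_days(20, 3))  # 20 + 20 * 0.3 * 3 = 38.0 days

Crude as it is, it at least makes the allowances explicit, so that when an estimate is cut everyone can see which allowance has been removed.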

Here is the Testing Uncertainty Principle (from The Tester’s Pocketbook):

• One can predict test status, but not when it will be achieved;

• One can predict when a test will end, but not its status.

You can predict the end of a test or its status – one, or the other – but never both at the same time. We have a defence at least.

I have some sympathy with developers. With uncertain requirements, unknown technologies and no experience (of the new system), developers have to predict how long it will take to build, test, fix and deliver to a tester, if not to a production environment. There is no hope of de-scoping requirements at the start, before reality bites, and their estimates underpin the plan that everyone has to work to.

No wonder the #NoEstimates meme has support in the developer community.

Conference Chair 2014, Paul Gerrard

Paul Gerrard is a consultant, teacher, author, webmaster, developer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them.

Educated at the University of Oxford and Imperial College London, Paul won the EuroSTAR European Testing Excellence Award in 2010. In 2012, with Susan Windsor, he co-authored “The Business Story Pocketbook”.

He is Principal of Gerrard Consulting Limited and is the host of the UK Test Management Forum and the UK Business Analysis Forum.

Mail: [email protected]
Twitter: @paul_gerrard
Web: gerrardconsulting.com
