Minimum Viable Estimation, part 4

This is a writeup from a series of talks and workshops I’ve given on this topic. It’s been really helpful to think through what techniques I’ve used, what situations each work best in, and what less-than-obvious challenges make this such a hard topic to give simple answers on.

So far:

  • Part 1 introduced the topic, and took a look at how not estimating at all can work.
  • Part 2 explored using data to avoid having to ask people how long they think things will take.
  • Part 3 gave a practical “getting started” walkthrough to let you try using data and decide if you want to learn more.

This post, part 4 of the series, will start looking at: When the above aren’t going to work for your situation, and you need to ask some “how long do you think” questions, what’s the lowest-effort, least-painful, most-productive way to go about that?

Some fair questions

When asking about how long things will take, it’s true that some people get unreasonable expectations about what can be known, with how much precision, and how much work it’d take to come up with answers. It’s natural to push back against constantly being asked for answers on work you’re doing now, or for a wide range of new and uncertain things you might do in future.

However! There’s usually something we do know about how big something is – even if it’s very vague. When asked what they think, though, sometimes a first response is “I don’t know”, “I can’t say”, or maybe my favourite:

“How long is a piece of string?”

That last one’s interesting, used as the classic “it’s impossible to say” analogy. But how long is a piece of string? Theoretically, it could be any length – but if this is an actual string, made in the world, there’s not going to be one hundreds of miles long. The world record for “largest rope” is only 251 metres. And if we were talking about an actual piece of string in front of us – maybe one we can see the end of and haven’t started pulling yet – we could bring that guess down quite a bit further: we probably haven’t found a record-breaker lying around in our office.

This might sound like we haven’t concluded anything useful, but I think we’re on to something. When someone says “I don’t know”, what they often mean is “I don’t think I can answer with any kind of precision that’ll be useful to you”. And that’s worth looking into.

Wide ranges can still be useful

Depending on what you’re planning to do with the answer, a very wide range might be all the info you need to make a decision. As an example, think about 2 imaginary requests a team might get asked about:

  1. For our application, can we get it to remember the user’s search ordering so that gets used again next time?
  2. For our application that’s currently a thick client people install, can we change it to a web application and let anyone in the world start using it?

This team might be thinking: Well the first one sounds pretty straightforward, that’s a small bit of work… and the other sounds much much bigger. If you don’t have a shared language for this kind of sizing, this can lead to misunderstandings: Is it “small” as in an afternoon? And “bigger” as in … 3 afternoons?

If we could get more of a sense of each, we could realise how differently each needs to be approached. Maybe it’s:

  1. Well we’ve got info needed for that kind of thing. Maybe … maybe this’d be done today, we’d have to check. We’d certainly be surprised if we’re still working on it by the end of the week.
  2. This wouldn’t be done by the end of this month, probably not even by the end of this year. It’s a completely different approach, using technologies most of the team doesn’t know. We’ll be learning and trying new things for a long time.

These ranges are wide and uncertain, but really helpful. For that first one: If we were at all interested in doing it, we probably have enough info to go ahead. If we get a few days in and realise some huge can of worms is involved, we can decide whether to back off and work on something different. And for the second: This isn’t an “add it to the list” item, you’d need to have discussions about whether there’s enough value in this idea to make it such a big investment of time and effort. Knowing what you’re getting into is important.

These examples are a good step forward from a blanket “I don’t know” answer – but these are extreme examples. It’s not always so easy to “bucket” things into different types.

Drawing a line

I’m a fan of using lines to help decision making (see lines of hope and despair), and for this kind of uncertain question, sometimes asking which side of a line something falls on can shortcut a lot of effort.

A real-life example: A friend once got asked at airport security, “How much money do you have on you sir?”

He was caught off guard and answered “I don’t know.” He was thinking: I remember going to the cash machine on Tuesday, how much did I get? And then I was out last night, did we pay cash or….

“You don’t know how much money you have on you?” This was clearly causing suspicion.

“Not – exactly… why do you need to know?”

“If you have more than £10,000 then it needs to be declared.”

Suddenly the question became very easy – my friend still wasn’t sure what notes or coins he had, but the answer was certainly “less than £10,000” and he carried on.

When looking at estimates, asking “what answer would make us change what we’re planning to do?” can be a similar way to test for a quick, clear answer and move on.

Example 1: At a consultancy, starting a project that had budget for 4 months of the team’s time. We had a list of things the customer would like the application to do. We agreed a minimum set:

  • if we couldn’t get those done, there’d be no point starting – this is the core stuff
  • if we only did those things, they’d be disappointed – there were lots of other good ideas, and there was bound to be feedback and new suggestions once users got to try a basic version.

Without spending a lot of time and analysis, we decided that this minimum set would take somewhere between 2 weeks and 6 weeks. That’s quite a range, but every answer in there is the same answer for decision making purposes – a good chunk of time before our out-of-budget deadline, meaning there was lots of opportunity to improve on it and add things based on how the first few versions went.

[Diagram: the uncertain range of when the core things will be done, all far before the line we’re looking at]
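The line test itself is simple enough to sketch in a few lines of code. This is just an illustration with made-up dates (the real project’s dates aren’t given here) – the point is that only three answers matter for the decision:

```python
from datetime import date, timedelta

def range_vs_line(earliest_done: date, latest_done: date, line: date) -> str:
    """Compare an uncertain completion range against a decision line.

    For decision-making purposes there are only three answers:
    the whole range is before the line, the whole range is after it,
    or the range crosses the line.
    """
    if latest_done < line:
        return "entirely before the line"
    if earliest_done > line:
        return "entirely after the line"
    return "crosses the line"

# Example 1's shape: a 2-6 week range against a roughly 4-month budget.
start = date(2024, 1, 8)  # hypothetical project start
print(range_vs_line(start + timedelta(weeks=2),
                    start + timedelta(weeks=6),
                    start + timedelta(weeks=17)))  # → entirely before the line
```

When the answer is “entirely before” or “entirely after”, every value in the wide range leads to the same decision, so there’s no need to narrow it further.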

Example 2: In a department of a few long-running product teams, looking at an idea to add a new product to our collection. From an initial prototype, a list of features desired for a first live version had been put together. There was quite a lot of functionality, and “How long would this take?” was complicated by “We could reshape teams or bring in more people”. It felt like lots of scenarios were possible.

I suggested changing the question: instead of “How long would this take, and what are all the options for making it sooner”, I asked: What date would make you decide it’s not worth going ahead with? After some discussion, it was agreed that if the first release was going to happen by the end of the year, we’d be happy to go ahead – but if it was longer, we wouldn’t.

We did our best to split the work into broad chunks of functionality and estimate them, based on the team size and shape we thought most suitable – the wide, uncertain ranges for each were put in a spreadsheet called SWAG.xls (for “Stupid Wild-Assed Guesses” to remind us this wasn’t precision work). That gave a very wide overall range for when the first release would be done – but every single answer in it was far past the “we won’t go ahead” decision line. We believed that further analysis might narrow the range a bit, but only within the min and max ends of this wide range, so there was no point – we knew the answer was on the wrong side of that line.

The next step was answering “can any option for changing team size move it to the other side of that line?”. We all knew that throwing more people at an already-late piece of work doesn’t help, but it was possible – in this uncertain and yet-to-be-started work – that some bigger team could get things done faster. We tested this out by starting with some unreasonable assumptions. What if: all the work could be neatly split in half, so another team the same size could take it on? And it was perfectly parallelizable, so both halves could go as fast as the original estimates for their chunk, with no waiting to sync up or pauses for dependencies between them?

[Diagram: uncertain ranges and unreasonable assumptions, all far after the line we’re looking at]

No work ever splits like that – but starting with these simple, unreasonable assumptions meant we could quickly check where that put us compared to the line. We were still definitely on the wrong side of it. If some or all of the range had crossed over the line, we could have gone back to look in more detail at how close we might get to this unreasonable split – but since even this dream scenario didn’t help, that work could be avoided. We’d all agreed that “double the team size and halve the time” was unrealistic – so cases involving even more people weren’t worth looking at.
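The dream-scenario check above can be sketched too. The chunk estimates below are invented for illustration (the actual SWAG.xls numbers aren’t shown here), and the sketch assumes the chunks are done one after another, which is what makes the “perfect split in half” factor so generous:

```python
def still_past_line(chunk_ranges, weeks_to_line, split_factor=1.0):
    """Sum per-chunk (min, max) week estimates, optionally scaled by an
    idealised parallel split, and report whether even the minimum is
    already past the decision line."""
    total_min = sum(lo for lo, hi in chunk_ranges) * split_factor
    total_max = sum(hi for lo, hi in chunk_ranges) * split_factor
    return total_min > weeks_to_line, (total_min, total_max)

# Made-up SWAG-style chunk estimates in weeks, and a made-up decision line.
chunks = [(8, 16), (12, 24), (10, 20), (16, 30)]
weeks_to_line = 20  # e.g. weeks left until the end of the year

print(still_past_line(chunks, weeks_to_line))       # as estimated
print(still_past_line(chunks, weeks_to_line, 0.5))  # unreasonable perfect split
```

With these numbers, even halving everything under the perfect-split assumption leaves the minimum total past the line – which is exactly the situation where no further analysis of team-size options is needed.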

When you’re close to the line

These examples have shown cases where the decision line helps make things easy. There’s one last case – where you’ve picked a line (or had one set for you in the form of an unmissable deadline) and found that the range is very close to one side of it, or crosses through it.

This is the worst situation to be in, involving lots of stress and probably losing more time to overhead as people want reports, meetings, and explanations about which side of the line things will end up on. My best advice: persuade people to use this valuable early info to change the plans, either by aiming for a different deadline or by radically changing the minimum scope so that your range is a healthy distance from the line. When that’s not possible, I have some second-best advice.

You can set expectations early: we’re in a risky situation, but if it’s important to aim for this activity in this time, there are things everyone can do to make success more likely. My “How to disappoint people” series has suggestions for reining in huge ambitions and helping people work together on what you’ve all agreed to.

You can help separate “estimates” from “targets” (a useful distinction explained well by Steve McConnell, recommended in part 1 of this estimation series). You might come under pressure to refine the estimate (and, consciously or not, pressure to refine it in a “good” direction). It’s really helpful to encourage people to focus instead on steering the outcome towards the better end of that wide range. Flex and discretion matter: if meeting this deadline is really what’s important to everyone involved, letting the team make decisions on ways to do that is the best way to help.

You can politely remind people that things don’t have to be this way: for lots of people, the only experience they’ve had is to take an uncertain estimate, make that the exact deadline, and go through stress and drama to see if they’re successful. The idea that we could try something different next time might be appealing.

Next time

I’ve hopefully persuaded you that wide ranges can be surprisingly useful – which is good, because for lots of prediction questions, a wide range is the only realistic answer.

The next, surprising challenge to deal with is that lots of us are terrible at giving accurate ranges. For many people, their intuitive sense of “I’m 90% confident the correct answer is in this range” turns out to be more like 30% confident. In part 5, we’ll take a look at reasons for this challenge and what you can do to help.

