Detailed Estimation Sucks

Pretend you have a team cranking away at a production app (likely full of technical debt). Part of the IT staff, so to speak. Management has a year’s worth of features and, of course, wants to be able to plan when things will get done. This was my response to an agile coach working with such a team, one having trouble with its estimation accuracy.

Please have a seat. Yes, over there by the window is fine. Buckle up. You may experience some turbulence.

If you want to know “when you can get all that stuff done,” sit the team in a room, show them the list of stuff, and ask them. I’ll give you an hour. Given the sounds of the current mess, I suggest you have them estimate in seasons 🙂

If you really want to know, to within a month, when the list of 1,000 issues will be done (including the 250 that you don’t yet know about), then wait until you are about to release; at that point you will have a very accurate estimate.

I doubt that estimating accurately (or poorly) helps all that much in knowing when an entire list of stuff will be done, especially when the list changes over time. Given that, why spend much energy on it? “Don’t polish a turd.”

Why not simply guarantee to management that the team will give its best effort? Grab a stack of important (i.e., prioritized) stuff to do (loosely based on a logical roadmap/release plan) and track the number of things done each iteration on a chart. Skip the up-front waste of time estimating everything in detail. I mean really… just take a stab at “we can do these 20 things this iteration” and see how it goes, adjusting as needed. The effort should take all of about 30 minutes each iteration, depending on your tools, your team’s collaboration, and your ability to know what those 20 things are.

Put an indication on the chart of the total number of issues, but be ready to update it continually as new issues/bugs arrive (it’s lots of fun to watch the goal move further and further away). Or have the chart represent the bare minimum marketable features (the bigger-ticket items without the details). Now anyone can sense the team’s rate of closure towards the goal as each iteration piles on more closed issues. After two iterations you can draw a straight line; after three, you can use fancier curve fits and show the least-squares correlation coefficient <g>.
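The straight-line projection above is simple enough to sketch in a few lines of standard Python. Everything here is an assumption for illustration: the closure counts, the 1,000-issue total, and the function name are all made up; a real chart would also keep moving the total as new issues arrive.

```python
def project_completion(closed_per_iteration, total_issues):
    """Fit a straight line (ordinary least squares) to the cumulative
    count of closed issues and return the iteration number at which the
    line crosses total_issues. Returns None if there is no upward trend."""
    # Build the cumulative closure curve: one point per iteration.
    cumulative = []
    running = 0
    for n in closed_per_iteration:
        running += n
        cumulative.append(running)

    k = len(cumulative)
    xs = range(1, k + 1)

    # Simple least-squares slope and intercept for y = m*x + b.
    mean_x = sum(xs) / k
    mean_y = sum(cumulative) / k
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, cumulative)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x

    if m <= 0:
        return None  # no measurable progress, so no projection
    return (total_issues - b) / m  # iteration where the line hits the goal


# Three iterations of (invented) history: 18, 22, and 20 issues closed.
print(round(project_completion([18, 22, 20], 1000), 1))  # → 47.7
```

Note what the projection does not do: it says nothing about whether the estimate is “good,” only what the historical closure rate implies if it continues, which is exactly the point of letting the chart speak for itself.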

Instead of spending time estimating, spend time making the project-progress “information radiator” as meaningful as possible. Believe me: developing software is merely working a to-do list. Anybody can watch a list of things getting ticked off and make a mental estimate of the rate of progress. Build a better view into what the project is actually doing and let people think about how it might look going forward; there is no need to debate future predictions. After all, at least your historical data is guaranteed accurate!

The only time I would estimate “more accurately” is when I have competing ideas, or wildly differing approaches to solving a problem. Then we might want to weigh one against the other from a business point of view, because the choice could make a big difference in the long run.

But for an entrenched team on an entrenched product to estimate each little feature/issue/bug fix/story at the outset of each iteration? I question the value of spending the team’s time that way. I gave it up for Lent.

Try not doing detailed iteration estimation. It can be very liberating to just do the work as best and as fast as you can.

At some point, you and management will realize that it doesn’t matter whether the team is good at estimating or not. At some point you may realize that most of the estimation process is an illusion, a grand trip into self-delusion fantasy land. The same amount of work will get done (well, more will likely get done if less time is spent estimating).

If you need to control what work gets done by when, prioritize mercilessly and do a good job of making development efficient (which includes smart architecture and design, among other things). If you need to speed up when the work gets done, consider resources (better or more) and/or reducing scope.

Now, if you want to spend time making the team better at estimating for their own edification (à la PSP), let them do their own estimates, track actuals, and try to improve over time.

(My agile coach friend drew some aircraft analogies…) If only software were as simple as aerodynamics. I can predict quite accurately how a NACA airfoil will behave in 2D and 3D airflow. I have never seen a software team predict to that level of accuracy. Predicting software is not much different from predicting a relationship between groups of people.

Something to ponder… Why bother with estimates at all?

1. Who needs them and what are they for?
2. What happens if you are dead-on accurate?
3. What happens if you are off by 100%?
4. What if #2 == #3??
5. Will the workers be fired for poor estimates? Given raises for good ones?
6. How did they estimate before?
7. Is estimation the best use of their time?

We’re now ready for take-off!