Just enough Estimation, Planning and Tracking
There are several methods of estimation. There are also ways to quickly change from
optimistic to realistic estimation. An important prerequisite is that we start treating
time seriously, creating a Sense of Urgency, and that we care about time
(see "The Importance of Time" on the urgency of time). It is also
important to learn how to spend just enough time on estimation. Not more and not
less.
Changing from optimistic to realistic estimation
In the Evo TaskCycle we estimate the effort
time for a Task in hours. The estimates are TimeBoxes, within
which the Task has to be completely done, because there is no more time. Tasks
of more than 6 hours are cut into smaller pieces, and we completely fill all plannable
time (i.e. 26 hours, about 2/3 of the 40 hours available in a work week). The aim in
the TaskCycle is to learn what we can promise to do and then to live up to our promises.
If we do that well, we can better predict the future. Experience by the author shows
that people can change from optimistic to realistic estimators in only a few weeks,
once we get serious about time. At the end of every weekly cycle, all planned Tasks
are done, 100% done. The person who is going to do the Task is the only person who
is entitled to estimate the effort needed for the Task and to define what 100% done
means. Only then, if a Task is not 100% done at the end of the week, can that person
feel the pain of failure and quickly learn from it, estimating more realistically
the next week. If we are not serious about time, we’ll never learn, and the whole
planning of the project is just quicksand!
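As a minimal sketch of these weekly planning rules (the Task names and hours below are made-up examples, and the text does not prescribe any tooling), checking a week's Tasks against the two constraints above could look like this:

# Sketch of the weekly TaskCycle planning rules described above.
# The Task names and estimates are made-up examples, not from the text.

MAX_TASK_HOURS = 6       # Tasks of more than 6 hours are cut into smaller pieces
PLANNABLE_HOURS = 26     # about 2/3 of the 40 hours available in a work week

tasks = {                # estimated effort time (TimeBox) per Task, in hours
    "refactor parser": 5,
    "write release notes": 2,
    "fix import defect": 4,
    "prepare design review": 3,
    "implement export feature": 6,
    "extend test suite": 6,
}

too_big = [name for name, hours in tasks.items() if hours > MAX_TASK_HOURS]
planned = sum(tasks.values())

if too_big:
    print("Cut into smaller pieces:", ", ".join(too_big))
if planned > PLANNABLE_HOURS:
    print(f"Over-committed: {planned}h planned, only {PLANNABLE_HOURS}h plannable")
elif planned < PLANNABLE_HOURS:
    print(f"{PLANNABLE_HOURS - planned}h of plannable time still to be filled")
else:
    print("Plannable time completely filled with Tasks of 6 hours or less")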
0th order estimations
0th order estimations,
made roughly with ballpark figures, are often quite sufficient for making
decisions. Don’t spend more time on estimation than necessary for the decision.
It may be a waste of time. We don’t have time to waste.
Example: How can we estimate
the cost of one month delay of the introduction of our new product? How about this
reasoning: Sales of our current most important product, with a turnover of about
$20M per year, are declining by 60% per year, because the competition introduced a much
better product. Every month of delay costs about 5% of $20M, i.e. $1M. Knowing that
we are losing about $1M a month, give or take $0.5M, could well be enough to decide
that we shouldn’t add more bells and whistles to the new product, but rather finalize
the release. Did we need a lot of research to collect the numbers for this decision
...?
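For illustration, the ballpark arithmetic of this example fits in a few lines (the numbers are simply the ones quoted above):

# 0th order estimate of the cost of one month delay, using the numbers above.
turnover_per_year = 20e6      # current product: about $20M turnover per year
decline_per_year = 0.60       # sales declining about 60% per year

decline_per_month = decline_per_year / 12                  # about 5% per month
cost_of_one_month_delay = decline_per_month * turnover_per_year

print(f"about ${cost_of_one_month_delay / 1e6:.1f}M per month of delay")  # ~$1.0M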
Any number is better than no number. If a number seems to be wrong, people will
react and come up with reasoning to improve the number. And by using two different
approaches to arrive at a number we can improve the credibility of the number.
Simple Delphi
If we’ve done some work of small complexity and some work of greater
complexity, and measured the time we needed to complete that work, we are more capable
than we think of estimating similar work, even of different complexity. A precondition
is that we become aware of the time it takes us to accomplish things. There are
many descriptions of the Delphi estimation process, but, as always, we must
be careful not to make things more complicated than absolutely necessary. Anything
we do that’s not absolutely necessary takes time we could save for doing more important
things!
Our simple Delphi process goes like this (a small sketch of the comparison and adding-up steps follows the list):
1. Make a list of things we think we have to do, in just enough detail. Default: 15 to 20 chunks
2. Distribute this list among the people who will do the work, or who are knowledgeable about the work
3. Ask them to add work that we apparently forgot to list, and to estimate how much time the elements of work on the list would cost, "as far as you can judge"
4. In a meeting, the estimates are compared
5. If there are elements of work where the estimates differ significantly between estimators, do not take the average, and do not discuss the estimates (estimates are not negotiable!). Discuss the contents of the work, because apparently different people have a different idea about what the work includes. Some may forget to include things that have to be done, while others may think that more has to be done than is actually needed. Making clearer what has to be done and what does not have to be done usually saves time
6. After the discussion, people estimate individually again, and then the estimates are compared again
7. Repeat this process until sufficient consensus is reached (usually not more than once or twice)
8. Add up all the estimates to end up with an estimate for the whole project
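To make the comparison and adding-up steps (4, 5 and 8) concrete, here is a minimal sketch; the chunks of work, the hours and the 50% spread threshold are made-up assumptions for illustration, not part of the process:

# Minimal sketch of the comparison and summing steps (4, 5 and 8) above.
# Work items, estimators' hours and the spread threshold are made-up assumptions.

estimates = {                        # hours per chunk of work, one value per estimator
    "database schema": [8, 10, 9],
    "import module": [16, 40, 20],   # large spread: discuss the contents of the work
    "user manual": [12, 14, 12],
}

SPREAD_THRESHOLD = 0.5               # flag if (max - min) exceeds 50% of the lowest estimate

def needs_discussion(hours):
    return (max(hours) - min(hours)) > SPREAD_THRESHOLD * min(hours)

for item, hours in estimates.items():
    if needs_discussion(hours):
        print(f"Discuss the contents of '{item}': estimates {hours} differ significantly")

# Only converged items are summed here; the flagged ones go back into discussion
# and re-estimation (we do not average estimates that differ significantly).
converged = {item: hours for item, hours in estimates.items() if not needs_discussion(hours)}
subtotal = sum(sum(hours) / len(hours) for hours in converged.values())
print(f"Subtotal of converged items: about {subtotal:.0f} hours; "
      f"{len(estimates) - len(converged)} item(s) still to be discussed")

Flagged items go back into the discussion and re-estimation steps; only when the estimates converge do we add them to the project total.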
Don’t be afraid that the estimates aren’t exact; they won’t
be anyway. By adding up many individual estimates, however, the variances tend to average
out and the end result is usually not far off. Estimates don’t have to be exact, as
long as the average is OK. Using Parkinson’s Law in reverse, we can now fit the
work to fill the time available for its completion. We use Calibration to measure
the ratio of real time to estimated time, in order to extrapolate the actual expected time
needed.
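As a sketch of that Calibration idea, with hypothetical numbers (the text describes the ratio, not this exact bookkeeping):

# Calibration sketch: use the real/estimated ratio so far to extrapolate the rest.
# The hours below are hypothetical.
estimated_hours_done = 120       # original estimate for the work completed so far
actual_hours_done = 150          # time it really took
estimated_hours_remaining = 200  # original estimate for the remaining work

calibration = actual_hours_done / estimated_hours_done       # real vs. estimated time ratio
expected_hours_remaining = calibration * estimated_hours_remaining

print(f"Calibration factor: {calibration:.2f}")                                 # 1.25
print(f"Expected remaining effort: about {expected_hours_remaining:.0f} hours") # about 250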
In a recent case, to save even more time on the estimation
process, we used "Simpler Delphi": instead of steps 6 and 7 of the process shown,
we took the minimums and maximums of the individual estimates, and then decided
by quick consensus which time (within the min-max range) to use. This short-cut
worked quite well.
Estimation tools
There are several estimation methods and
tools on the market, such as COCOMO,
QSM-SLIM and
Galorath-SEER.
These tools rely on historical data of lots of projects as a reference. The methods
and tools provide estimates for the optimum duration and the optimum number of people
for the project, but have to be tuned to the local environment. With the tuning,
however, a wide range of results can be generated, so how would we know whether
our tuning provides better estimates than our trained gut-feel?
The use of tools
poses some risks:
- For tuning we need local reference projects. If we don’t have
enough similar reference projects (similar people, techniques, environments, etc.),
we won’t be able to tune. So the tools may work better in large organizations with
a lot of similar projects
- We may start working for the tool, instead of having
the tool work for us. Tools don’t pay salaries, so don’t work for them. Only use
a tool if it provides good Return on Investment (RoI) for you
- A tool may obscure
the data we put in, as well as obscure what it does with the data, making it difficult
to interpret what the output of the tool really means, and what we can do to improve.
We may lose the connection with our gut-feel, which eventually will have to make
the decision
Use a tool only when the simple Delphi and 0th order approaches, combined
with realistic estimation rather than optimistic estimation, really prove to be
insufficient, and if you have sufficient reason to believe that the tool will provide
better RoI.