figure 1: TimeLine principle
Having estimated the work that has to be done in the first week, we have captured the first metrics for calibrating our estimates on the TimeLine. If the estimates show that the Tasks for the first week will deliver only about half of what we need to do in that week, we can already extrapolate that our project is going to take twice as long, if we keep working the way we did, that is: if we don't do something about it. Initially, the first week's estimate may seem weak evidence, but it is already an indication that our estimates are too optimistic. Sticking our head in the sand in the face of this evidence is dangerous: I've heard all the excuses about "one-time causes". Later, there were always other "one-time causes".
One week later, when we have the actual results of the first week, we have slightly better numbers with which to extrapolate and scale how long our project will really take. Week after week we gather more information with which we calibrate and adjust our notion of what will be done by the FatalDate, or by any earlier date. This way, the TimeLine process provides us with very early warnings about the risk of being late. The earlier we get these warnings, the more time we have to do something about it.
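As a rough illustration of this extrapolation (our own sketch, not part of the booklet; the linear scaling assumption is ours), in code:

```python
# Sketch: extrapolate total project duration from early progress.
# Assumption (ours, for illustration): progress scales linearly, so if we
# deliver only half of what was planned, the whole project takes twice as long.

def extrapolated_duration(planned_weeks: float,
                          work_planned: float,
                          work_delivered: float) -> float:
    """Scale the planned duration by the ratio of planned to delivered work."""
    if work_delivered <= 0:
        raise ValueError("nothing delivered yet; nothing to extrapolate from")
    return planned_weeks * (work_planned / work_delivered)

# The example from the text: the first week delivers about half of what we
# need, so a 10-week plan extrapolates to about 20 weeks.
print(extrapolated_duration(planned_weeks=10, work_planned=1.0, work_delivered=0.5))
# -> 20.0
```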
Failure is not an option. The earlier we get warning signals of possible failure, the earlier we can start making sure that failure is not going to happen.
figure 2: Earned Value (up to week 4) and Value Still to Earn (from week 5)
We can counter this dilemma by actively saving time: doing only what is really necessary (line g), or a combination of not doing what nobody is waiting for (line g2) and doing things more productively (line h), as explained in section 7.6 of booklet#2. Actively designing what exactly to do, and in which order, saves a lot of time.
Using calibration, we can predict quite well what will be done when (figure 3). The estimates don't have to be exact, as long as the relative values are consistent: if Activity1 is estimated to take 2 units of estimation and Activity2 to take 1, then we assume that Activity1 will take twice as long as Activity2. People are often reluctant to accept that rather imprecise estimates yield rather good overall predictions. In practice, the positive and negative inaccuracies average out, giving the summed total quite good accuracy. Some people are even reluctant to estimate at all, afraid of failing to meet their estimates. However, if you don't have an estimate to fail on, you cannot learn, while the experience of failure makes us learn quickly, as long as we want to learn.
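A small simulation makes it easy to convince ourselves that imprecise estimates can still add up to a good total. This sketch is ours, assuming independent random estimation errors of up to a factor of 2 either way:

```python
# Sketch: random per-activity estimation errors largely cancel out in the sum.
# Assumptions (ours): 50 activities; each actually takes its estimate times a
# random factor between 0.5x and 2x (log-uniform); errors are independent.
import math
import random

random.seed(1)
estimates = [random.uniform(1, 10) for _ in range(50)]           # estimation units
actuals = [e * math.exp(random.uniform(math.log(0.5), math.log(2.0)))
           for e in estimates]

# Individual activities can be off by a factor of 2 either way...
worst = max(max(a / e, e / a) for a, e in zip(actuals, estimates))
print(f"worst single-activity ratio: {worst:.2f}")
# ...but the summed totals stay much closer together.
print(f"total actual / total estimated: {sum(actuals) / sum(estimates):.2f}")
```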
Once we have done several Activities, we know how long these activities actually took, and we can now calibrate the remainder of the estimates to reality. We average the calibration factor over several recent activities:
Calibration Factor = Σ Ar / Σ Ae, summed over several recent activities
(Ar is the real time, Ae the estimated time, of an Activity)

figure 3: Using the list of activities to predict what will be done when
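In code, this calibration could look like the following sketch (the function and variable names are ours, invented for illustration):

```python
# Sketch: calibration factor C = sum(Ar) / sum(Ae) over the most recent
# activities, so the factor reflects how the team is working *now*.

def calibration_factor(real_times, estimated_times, window=5):
    """Ratio of real to estimated time over the last `window` activities."""
    return sum(real_times[-window:]) / sum(estimated_times[-window:])

# Example: estimates in arbitrary estimation units, real times in days.
ar = [4.0, 6.0, 3.0, 8.0, 5.0]    # real times (Ar) of completed activities
ae = [2.0, 3.0, 1.5, 4.0, 2.5]    # their original estimates (Ae)
print(calibration_factor(ar, ae))  # -> 2.0
```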
Now we can use this Calibration Factor to predict how much time we will need for future activities, if we continue working the way we currently do:

Predicted time for an Activity = Calibration Factor × Ae
This way we can predict when we will have done what, or when "all" will be done. The list of activities still to do (the Value Still to Earn) is constantly updated as the work proceeds.
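Putting the pieces together, a minimal sketch of this prediction step, with an invented activity list and start date:

```python
# Sketch: predict a completion date for each remaining activity (the Value
# Still to Earn) by scaling its estimate Ae with the calibration factor.
from datetime import date, timedelta

CALIBRATION_FACTOR = 2.0      # taken from the completed activities above
WORKDAYS_PER_WEEK = 5         # assumption for turning workdays into dates

todo = [("Activity7", 3.0), ("Activity8", 1.5), ("Activity9", 4.0)]  # (name, Ae)

def predict_dates(activities, start, factor):
    """Yield (name, predicted completion date) for the remaining activities."""
    elapsed = 0.0
    for name, ae in activities:
        elapsed += ae * factor                       # predicted workdays so far
        # Rough workday-to-calendar conversion (7/5 ratio, holidays ignored).
        yield name, start + timedelta(days=elapsed * 7 / WORKDAYS_PER_WEEK)

for name, done_by in predict_dates(todo, date(2024, 3, 4), CALIBRATION_FACTOR):
    print(name, done_by.isoformat())
```

Rerunning this with the latest calibration factor every week keeps the predictions honest.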
Note that we shouldn't use these numbers mechanistically. We still have to judge the credibility of what the 'mathematics' tells us and adjust our understanding accordingly. In conventional projects, this manual interpretation may still lead to over-optimistic predictions, especially if what the numbers tell us is "undesirable".
In Evo projects, however, we want to succeed in the available time or earlier, so we are realistic and would rather see any warning we can use to constantly improve, or to discuss the consequences as soon as possible. In practice, I've seen calibration factors of 2 at the start of a project, growing and then stabilizing at 4 once the project was running at full strength. In some hardware development projects I've seen calibration factors between 1 and 1.5. In other projects we may see yet other factors.

Note that the calibration factors of different projects are neither good nor bad, and cannot be compared: they are simply the ratio between how much time this project needs to accomplish its activities and the estimates as produced by the project's estimation standard. Different projects have different people and estimate in different ways. The factors merely calibrate the assumptions behind the original estimates, which may not have taken into account Verification & Validation, Systems Engineering, Project Management, education, and many other things that have to be done in the project as well. Once the calibration factor has stabilized, we can use the slope of the factor to warn of deterioration, and to see the effect of process improvements.
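As a last sketch (ours, not from the booklet), the slope can be watched with a simple least-squares fit over the weekly factor; a clearly positive slope after stabilization is a warning sign:

```python
# Sketch: watch the trend (slope) of the weekly calibration factor. A positive
# slope after stabilization suggests deterioration; a negative slope may show
# a process improvement taking effect.

def slope(values):
    """Least-squares slope of values against their index (0, 1, 2, ...)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

weekly_factor = [4.0, 4.1, 3.9, 4.2, 4.5, 4.8]      # invented weekly values
print(f"trend: {slope(weekly_factor):+.2f} per week")  # positive -> watch out
```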