Measuring project uncertainty

Patrick Weaver
August 6, 2013

Only fools and the bankers who created the GFC think the future is absolutely predictable. The rest of us know there is always a degree of uncertainty in any prediction about what may happen at some point in the future. The key question is what that degree of uncertainty is or, in project management terms, what the probability is of achieving a predetermined time or cost commitment.

There are essentially three ways to deal with this uncertainty:

  1. Hope the project will turn out okay. Unfortunately, hope is not an effective strategy.
  2. Plan effectively, measure actual progress and predict future outcomes using techniques such as earned value and earned schedule, then proactively manage future performance to correct any deficiencies. Simply updating a CPM schedule is not enough; based on trend analysis, defined changes in performance need to be determined and instigated to bring the project back on track (a worked sketch of these forecasting calculations follows this list).
  3. Use probabilistic calculations to determine the degree of certainty around any forecast completion date, calculate appropriate contingencies and develop the baseline schedule to ensure the contingencies are preserved. From this baseline, applying the predictive techniques discussed in 2. plus effective risk management creates the best chance of success. The balance of this post looks at the options for calculating a probabilistic outcome.
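For readers who want to see the arithmetic behind option 2, here is a minimal Python sketch of the standard earned value and earned schedule forecasting formulas (CPI, SPI(t), EAC and the independent time estimate). All of the project figures are hypothetical, purely to illustrate how the forecasts are derived.

```python
# Minimal sketch of the forecasting arithmetic behind option 2, using the
# standard earned value and earned schedule formulas. All figures are
# hypothetical and exist only to show how the forecasts are derived.

BAC = 1_000_000.0   # Budget At Completion ($)
PD = 20.0           # Planned Duration (weeks)

# Status data at the end of week 10 (illustrative only)
PV = 500_000.0      # Planned Value of work scheduled to date
EV = 420_000.0      # Earned Value of work actually performed
AC = 480_000.0      # Actual Cost of the work performed
ES = 8.4            # Earned Schedule: weeks of planned work actually earned
AT = 10.0           # Actual Time elapsed (weeks)

CPI = EV / AC        # Cost Performance Index
SPI_t = ES / AT      # time-based Schedule Performance Index
EAC = BAC / CPI      # forecast cost at completion
IEAC_t = PD / SPI_t  # forecast duration at completion

print(f"CPI = {CPI:.2f}, SPI(t) = {SPI_t:.2f}")
print(f"Forecast cost:     ${EAC:,.0f} against a budget of ${BAC:,.0f}")
print(f"Forecast duration: {IEAC_t:.1f} weeks against a plan of {PD:.0f} weeks")
```

With these illustrative numbers the project is forecast to overrun both cost and time, which is precisely the trend information needed to decide what corrective action to instigate.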

Methods of calculation

The original system developed to assess probability in a schedule was PERT. PERT was developed in 1957 and was based on a number of simplifications that were known to be inaccurate (but were seen as ‘good enough’ for the objectives of the Polaris program). The major problem with PERT is that it only calculates the probability distribution associated with the PERT critical path, which inevitably underestimates the uncertainty in the overall schedule. (For more see Understanding PERT [PDF].) Fortunately, both computing power and the understanding of uncertainty calculations have advanced since the 1950s.
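To make that limitation concrete, the short sketch below applies the classic PERT three-point formulas, mean (O + 4M + P) / 6 and standard deviation (P - O) / 6, to a single critical path and uses the normal approximation to read off a completion probability. The three activities and the 32-unit target are invented for illustration; note that only one path is assessed, which is exactly why PERT understates the overall uncertainty.

```python
import math

# Classic PERT three-point estimates (Optimistic, Most likely, Pessimistic)
# for the activities on the PERT critical path only. Merge bias from
# near-critical paths is ignored, which is why PERT understates uncertainty.
critical_path = [
    (8, 10, 16),   # activity A
    (4, 6, 10),    # activity B
    (9, 12, 21),   # activity C
]

mean = sum((o + 4 * m + p) / 6 for o, m, p in critical_path)
sd = math.sqrt(sum(((p - o) / 6) ** 2 for o, m, p in critical_path))

target = 32.0  # committed completion, in the same units as the estimates
z = (target - mean) / sd
prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal approximation, as PERT assumes

print(f"Critical path mean = {mean:.1f}, standard deviation = {sd:.1f}")
print(f"P(finish within {target:.0f}) is approximately {prob:.0%} (single-path estimate only)")
```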

Modern computing allows more effective calculations of uncertainty in schedules; the two primary options are Monte Carlo and Latin Hypercube sampling. When you run a Monte Carlo simulation or a Latin Hypercube simulation, what you’re trying to achieve is convergence. Convergence is achieved when you reach the point where you could run another 10,000, or another 100,000, simulations and your answer isn’t really going to change. Because of the way the algorithms are implemented, Latin Hypercube reaches convergence more quickly than Monte Carlo; it is a more advanced, more efficient algorithm for sampling distributions.
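As a rough illustration of the difference between the two samplers (not any particular vendor’s tooling, just a sketch assuming NumPy and SciPy’s qmc module are available), the snippet below runs the same three-activity model through plain Monte Carlo sampling and Latin Hypercube sampling. The activity distributions, iteration count and 32-week target are invented for the example.

```python
import numpy as np
from scipy.stats import qmc, triang  # assumes SciPy >= 1.7 for the qmc module

rng = np.random.default_rng(42)

# Three illustrative activities in series, each with a triangular
# duration distribution defined by (minimum, most likely, maximum).
activities = [(8, 10, 16), (4, 6, 10), (9, 12, 21)]

def durations_from_uniforms(u):
    """Map uniform [0,1) samples to triangular durations via the inverse CDF."""
    total = np.zeros(len(u))
    for j, (a, m, b) in enumerate(activities):
        total += triang.ppf(u[:, j], c=(m - a) / (b - a), loc=a, scale=b - a)
    return total

n = 2_000

# Plain Monte Carlo: independent pseudo-random uniforms.
mc_total = durations_from_uniforms(rng.random((n, len(activities))))

# Latin Hypercube: one sample from each of n equal-probability strata
# per dimension, so the input distributions are covered more evenly.
lhs_total = durations_from_uniforms(
    qmc.LatinHypercube(d=len(activities), seed=42).random(n))

for name, total in (("Monte Carlo", mc_total), ("Latin Hypercube", lhs_total)):
    print(f"{name:16s} mean = {total.mean():.2f}  P(<= 32) = {np.mean(total <= 32):.1%}")
```

Both runs estimate the same quantities; the stratified Latin Hypercube sample simply covers the input distributions more evenly, so its estimates tend to settle (converge) with fewer iterations.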

Both options are going to come to the same answer eventually, so the choice comes down to familiarity. Old-school risk assessment people are going to have more experience with Monte Carlo, so they might default to that, whereas people new to the discipline are likely to favour the more efficient algorithm. It’s really just a question of which method you are more comfortable with. However, before making a decision, it helps to know a bit about both of these options:

Monte Carlo
Stanislaw Ulam first started playing around with the underpinning concepts, pre-World War II. He had broken his leg and, during a long convalescence, played solitaire to pass the time. He wanted some way of figuring out the probability that he would finish a solitaire game successfully and tried many different maths techniques, but he couldn’t do it. Then he came up with the idea of using probability distributions [PDF] as a method of figuring out the answer.

Years later, Ulam and the other scientists working on the Manhattan Project were trying to figure out the likely distribution of neutrons within a nuclear reaction. He remembered this method and used it to calculate something they couldn’t figure out any other way. Then they needed a name for it! One of the guys on the team had an uncle who used to gamble a lot in Monte Carlo, so they decided to call it the Monte Carlo method in honour of the odds and probabilities found in casinos.

Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; that is, they run simulations many times over in order to calculate probabilities heuristically, just as you would by actually playing and recording your results in a real casino.

In summary, Monte Carlo sampling uses random or pseudo-random numbers to sample from the probability distribution associated with each activity in a schedule (or cost item in the cost plan). The sampling is entirely random, that is, any given sample value may fall anywhere within the range of the input distribution, and with enough iterations the sampling recreates the input distributions for the whole model. However, clustering can be a problem when only a small number of iterations is performed.
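A minimal sketch of this sampling process, using only the Python standard library: two hypothetical parallel paths are sampled from triangular distributions, each iteration takes the longer of the two paths, and the results give both a probability of meeting a target date and a P80 figure that could inform a contingency. All durations and the target are invented for illustration.

```python
import random
import statistics

random.seed(1)  # repeatable illustration

# Two hypothetical parallel paths that must both finish before completion.
# Each activity is (optimistic, most likely, pessimistic) in weeks.
path_a = [(8, 10, 16), (4, 6, 10)]
path_b = [(9, 12, 21)]

def sample_path(path):
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(o, p, m) for o, m, p in path)

iterations = 20_000
results = sorted(max(sample_path(path_a), sample_path(path_b))
                 for _ in range(iterations))

target = 20.0
p_target = sum(1 for r in results if r <= target) / iterations
p80 = results[int(0.80 * iterations)]  # 80th percentile completion

print(f"Mean completion: {statistics.fmean(results):.1f} weeks")
print(f"P(finish by week {target:.0f}): {p_target:.0%}")
print(f"P80 completion (basis for contingency): {p80:.1f} weeks")
```

Because the simulation takes the maximum of both paths at every iteration, the merge bias that PERT ignores is captured automatically, and the gap between the mean result and the P80 result is the kind of contingency discussed in option 3 above.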

Patrick Weaver
Patrick Weaver is the managing director of Mosaic Project Services and the business manager of Stakeholder Management Pty Ltd. He has been a member of both PMI and AIPM since 1986 and is a member of the Asia Pacific Forum of the Chartered Institute of Building. In addition to his work on ISO 21500, he has contributed to a range of standards developments with PMI, CIOB and AIPM.