Published here June 2015.

Review of Project Success - How Soon and What Do You Measure?

In a recent LinkedIn thread discussing Post Closure Activities, Mark Muellenbach asked: "I'd like to hear best practices from the group. How long does one wait to let the changes take effect? What metrics does one measure? What documents are produced?" John Eremic promptly observed (in part):

"Close out is far more important than most people realize. Close out is rarely budgeted adequately. The budget must include the cost of certifying tests and inspections, creating and storing as-built drawings, post mortem reports, accounting close out, etc., etc. Usually, key team members have moved on to new projects and upper management is focused on supporting the current or newly starting projects."

Blaise Stephanus added:

"I agree that project closeout processes are often lacking. Often, in my experience, this is because the project manager and members of the senior team are assigned to another hot project before project execution is complete. Lessons learned reports are important as well as populating the project's historical cost and schedule data into some organizational database. Scope acceptance is also important as well as contractual closeout, especially with any outstanding subcontracts. Various performance estimates such as CPI and SPI are valuable to record as it can identify organizational or group performance trends that may continue in the future."

Max Wideman's thoughts on the issue

The foregoing responses, together with others in the thread, are all excellent contributions, even if they do not directly answer Mark Muellenbach's original questions: "How long does one wait to let the changes take effect? What metrics does one measure? What documents are produced?"

But in practice there are other questions to be answered. For example:

  • How should the data thus collected be stored?
  • How should it be made available without infringing copyright or breaching confidentiality, where either is involved?
  • How can it be presented so that it is actually useful to the managers of subsequent projects?
  • Hence, what form of post-project processing would be necessary for such an archive to be useful?

On many of my past projects, plenty of good metrics were collected, but they were rarely applied to any great extent on subsequent projects. That's because:

  • The data was too voluminous to study in the rush to get a new project going.
  • It was project-specific: you needed to understand the circumstances of the previous project for the data to be usefully applied.
  • In the light of the environment of the new project, the data was possibly not all that valuable anyway.
  • Thus the time required to study the past data did not represent a good return on investment from the budget for starting the new project.

I suspect that these factors alone account for the lack of interest in creating post-project reviews in the first place, and in applying them to new projects in the second. The exception to all of this, of course, is in providing a valuable data resource for academics intent on writing a paper, with both the time and budget available for deeper analysis!

Another factor at play

But there is another factor at play that is fundamental to the discussion, one that seems to be rarely recognized by the project management community at large. That is the distinction between collecting data relating to managing the project, i.e., project management, on the one hand, and collecting data on the creation of the product, i.e., product development, on the other. True, the two are inextricably intertwined as the project progresses, and in some cases are difficult to separate. Nevertheless, this distinction is essential if the data is to be really useful for managing future projects, especially in a diverse project portfolio environment. (A simple sketch of such a separation follows the examples below.)

Examples of project management data might include such items as:

  • What was the extent of the project manager's authority and responsibility?
  • How long did it take to obtain management approvals?
  • What standard templates were used, if any, for communications? And so on.

Examples of product development data depend heavily on the type of product involved. If the product is essentially vested in software technology, then:

  • What safety considerations or regulations were applied?
  • What approvals were required and at what phases in the project life span (i.e. upper management control) were they enforced?
  • What was the extent of the testing program? And so on.

Or if the product is, say, the construction of a building:

  • What permits were required?
  • Who undertook the designs and other essential services (architects, engineers, testing services, etc.)?
  • What trades were involved and how were those skills procured?
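
To make this separation concrete, here is a purely hypothetical sketch in Python of how a post-project archive record might keep the two categories of data apart. The class and field names are illustrative assumptions of mine, not a recommended standard:

    from dataclasses import dataclass, field

    @dataclass
    class ProjectManagementData:
        """How the project was managed; largely transferable between projects."""
        pm_authority: str             # extent of the project manager's authority
        approval_lead_time_days: int  # typical time to obtain management approvals
        comms_templates: list = field(default_factory=list)

    @dataclass
    class ProductDevelopmentData:
        """How the product was created; specific to the type of product."""
        product_type: str             # e.g. "software" or "building construction"
        regulations_applied: list = field(default_factory=list)
        permits_required: list = field(default_factory=list)

    @dataclass
    class ProjectArchiveRecord:
        """One archive entry, keeping management and product data separate."""
        project_name: str
        management: ProjectManagementData
        product: ProductDevelopmentData

A portfolio of such records could then be filtered by product type, so that the manager of, say, a new building project is not swamped by metrics from past software projects, while the management data remains comparable across the whole portfolio.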

What seems to be missing in all of this is a better understanding of the data archiving problem generally, and perhaps some recommended standards would be useful. It all needs more focused discussion, and perhaps even warrants another section in our bodies of knowledge. (Heaven forbid!)

Obviously, the answer to Mark Muellenbach's question of what metrics to measure is not that simple. And equally, the answer to his other question, how long to wait after the project is completed for the changes to take effect, depends on that answer.

Nevertheless, a good general rule of thumb is to wait as long as it took to formulate, i.e., develop, the project in the first place. And, for a well-managed project, that is equivalent to about as long as it took to execute the project. So, for example, if the project took a year to execute, expect to wait about a year after completion before measuring the results.

