Within aerospace companies, it is taken as a given that risk registers are the critical tool for managing risks in a program. If the following survey question were posed to leaders:

“If a program has no risk register, it is not managing risks.”

I would wager that a large majority of respondents would mark True. The purpose of this document is to question this sacrament of program management orthodoxy. Risk registers should not be considered an essential tool of risk management. Rather, there are other techniques better suited to blunting the impact of risk on a program.

Before I press into my arguments against risk registers and for other techniques, both sides of the argument need to concede that their positions are difficult to prove with data. I have never seen a study showing that programs with risk registers fare better than those without them. Any study that attempted this would have to lean heavily on statistical methods, given all the other variables that influence a program's success or failure. Arguments for or against different ways to manage risk need to be made with a large measure of humility, since it is difficult to conclusively prove the merit of any approach.

The fundamental problem with risk registers is that in real programs, the space of potential risks is incredibly large. While it is assured that some risks will impact your program, it is not only impossible to predict which ones will be realized; it is impossible to even conceive of all the ones that could. People try anyway. A two-hour risk register session can generate multiple pages of risks. The likelihood and impact of each risk will usually be debated. For “thorough” risk registers, mitigation actions are identified and expected completion dates assigned. Despite the effort, my experience is that the risks that do occur are usually not on the risk register.
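For readers who have not sat through one of these sessions, the entries described above can be sketched in code. This is a hypothetical illustration assuming the common likelihood-times-impact scoring convention; the `RiskEntry` class and its field names are invented for this sketch, not taken from any particular tool.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RiskEntry:
    """One row of a hypothetical risk register (illustrative only)."""
    description: str
    likelihood: int            # e.g. 1 (rare) to 5 (near certain)
    impact: int                # e.g. 1 (negligible) to 5 (severe)
    mitigation: str = ""       # the "treatment" action, if one was identified
    due: Optional[date] = None # expected completion date of the mitigation

    @property
    def score(self) -> int:
        # Ranking by likelihood x impact is the usual convention.
        return self.likelihood * self.impact

# A two-hour session produces pages of entries like these,
# sorted so the highest-scoring risks sit at the top.
register = [
    RiskEntry("Key supplier slips delivery", 4, 3,
              "Qualify a second source", date(2025, 6, 1)),
    RiskEntry("Thermal margin shortfall", 2, 5,
              "Add an early thermal-vac test"),
]
register.sort(key=lambda r: r.score, reverse=True)
print([(r.description, r.score) for r in register])
```

The sketch makes the essay's point concrete: the entries are easy to generate and rank, but nothing in the structure guarantees that the risks which actually materialize appear in the list at all.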

Large risk registers create another problem: people are not capable of managing the risks on a register the way leaders believe, or at least pretend to believe. When risk registers are revisited, it is exceedingly common to roll through the risks and actions and find that nothing has been done to “treat” the risks, outside of perhaps a few at the very top of the ranking. It may make people feel good to put obscure risks on a register, but once a register grows large enough that there are no longer the resources or capacity to actively monitor and address each risk, it has lost its practical utility.

We should think less about specific risks in a program and more about program robustness. That risk registers have shortcomings is not to say that considering risk or uncertainty is unimportant; managing uncertainty is one of the most important skills in program management. Starting from the premise that you are nearly certain to face unexpected challenges during a program, but unable to predict what they will be, drives you toward running a program that is robust to uncertainty.

What makes a program robust? This question does not seem to get discussed or studied enough. I will give some examples of what I think makes a program robust, though the list is no doubt incomplete. First, robust programs have team members who are flexible: they are willing to learn and practice tasks and skills that may not appear in their official job descriptions. Having people capable in multiple skills is useful when unexpected problems arise and extra resources need to be thrown at a problem, or when capable people depart from a team. Widespread sharing of information is another key to robustness. People who understand how their work fits into the bigger effort will be more capable of adapting their own work without instruction when the unexpected occurs.

Another way to make programs more robust is to ask what makes a program fragile and try to eliminate it. Intensive study and detailed planning generally make a program more robust because they produce a deeper understanding of the problem a program is trying to solve and of possible solutions. But detailed planning is often expected to generate a project Gantt chart with hundreds of rows of tasks. Gantt charts this long are most useful for organizing programs that are unlikely to have to adapt to changing circumstances. Many programs, however, must constantly adjust to the world around them, and large Gantt charts with many tasks, linked dependencies, and resources quickly become obsolete and useless. Robust Gantt charts are ones that communicate to the team the essential elements of the program they may not be aware of, and that help a program manager who already has deep knowledge of the program decide where to focus their efforts. Gantt charts such as these do not require hundreds of tasks with durations of less than a week.

Poorly conceived reviews can be a source of fragility. Programs that must seek approval from people who may be “experts” but are not familiar with the fine details of the program under review will find it difficult to adapt to surprises; programs do not have the time to explain the problem and potential solutions over and over again. Reviews with external reviewers are important to combat groupthink and encourage accountability, but if leadership cannot trust a program in its midst to make the correct decisions, that program is not robust.