
Executive Summary

Common practice Risk Management features 'qualitative' assessment, in which risks are compared against predefined 1 – 5 scales for both probability and impact. An overall rating is calculated by multiplying the two scores. Risks are then prioritised according to this rating and their position on a Probability Impact Graph (PIG). Qualitative risk assessment is widely used to support organisational decision making and is the foundation of most Enterprise Risk Management (ERM) systems.

Historically, this approach is not the outcome of a strategic design exercise to create the optimal system from a blank canvas. Qualitative risk has its roots in practices developed in the 1950s and 1960s for the management of health and safety risk. From these origins it has evolved organically within a loose but self-referencing community of risk managers, professional organisations and 'thought leaders'. Though qualitative risk is taken for granted as a proven methodology, an objective appraisal reveals the approach to be flawed.

'Qualitative' in this context is actually a misnomer – a better description would be 'semi-quantitative', since numbers and calculations are involved. The loose classification according to bands of values introduces large amounts of ambiguity. This ambiguity means that PIGs are unable to differentiate between risks that are very different, and can prioritise smaller risks over larger ones.

A more fundamental problem is that qualitative risk is based on a philosophy that ‘risk = probability x impact’. This in effect assumes that the essential characteristics of a risk are limited to its average value. Of course variability matters – the range of outcomes rather than just the average. It is ironic that the most popular technique completely misses the whole point of risk management.

The decisions made at each incremental development of common practice were the best that could be made at the time with the information available. Many would have been made to solve particular challenges associated with implementing systems for managing risk; some of these challenges are now long forgotten. Key factors would have been technology, time and cultural change or resistance. A big consideration would have been – and still is – the capabilities, knowledge and mindset of stakeholders and of those tasked with doing the work.

The good news is that there are many alternative methods for assessing and prioritising risk, opportunity and uncertainty that do not suffer these failings. Though there is no single 'one size fits all' approach, it is possible to create effective systems tailored to each application – and organisations that do so are likely to find the process very rewarding.

About Qualitative Risk and Probability Impact Graphs

The use of Probability Impact Graphs (PIGs, also known as Probability Impact Matrices) is virtually universal in Risk Management and a cornerstone of nearly all Enterprise Risk Management systems. The deployment of PIGs is widely advocated by risk managers, training courses, consulting firms, text books, risk standards and guides. To many people, Risk Management is the use of PIGs.

In a typical PIG based approach risks are categorised according to their probability and impact. Scores for probability and impact are found by referring to predefined 'qualitative' assessment criteria. An overall score for the risk is found by multiplying the probability and impact scores. The process is intuitive, simple to follow and produces aesthetically appealing graphs and lists.
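For readers who prefer to see the arithmetic, the following minimal Python sketch reproduces this scoring logic. The band boundaries are illustrative assumptions only, loosely inferred from the worked examples later in this paper; real systems define their own criteria.

```python
# Minimal sketch of PIG scoring; band edges are illustrative assumptions.

def probability_score(p):
    """Map a probability (0-1) to a 1-5 band score."""
    edges = [0.03, 0.15, 0.40, 0.70]        # assumed band boundaries
    return sum(p > e for e in edges) + 1

def impact_score(impact_gbp_m):
    """Map a cost impact (GBP millions) to a 1-5 band score."""
    edges = [2, 10, 50, 100]                # assumed band boundaries, GBPm
    return sum(impact_gbp_m >= e for e in edges) + 1

def pig_score(p, impact_gbp_m):
    """Overall rating = probability score x impact score."""
    return probability_score(p) * impact_score(impact_gbp_m)

print(pig_score(0.80, 15))   # 5 x 3 = 15
```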

A typical PIG assessment matrix for a corporate ERM system is shown below:

Table 1: Qualitative Risk Matrix

The examples in this paper are based on a commercial application designed to protect shareholder value. However, it is worth noting that PIGs have a wide variety of applications in many other areas, including health and safety, anti-terrorism and healthcare.

There are two primary objectives for PIGs. The first is to put risks into a relative order or ranking of importance. The second is to assign critical / high / medium / low labels to each, where critical and high rated risks are reported to a board or other oversight body. It is the limitations and problems associated with these objectives of the PIG approach that are identified and appraised here.

How well do PIGs prioritise risks?

For convenience we will assume in this section that the identified 'risks' fit neatly into the PIG framework and the bands defined by the qualitative assessment criteria – we will also consider cost only. These assumptions allow us to examine specific failings, but will be revisited later.

One simplistic measure of risk is 'expected value' (EV); this is the mean value of the risk, found by multiplying the numerical probability and impact values. EV is itself not a good method for prioritising risk but suits our purposes here. In the following examples we will compare the PIG based approach with the EV approach for prioritising risks. Scores for the following examples are based on the matrix in Table 1.

This leads to risks being put into categories inconsistent with their underlying expected values.

Prioritisation Issues

The errors above are introduced by the large ranges in the PIG band criteria – £10m to £50m or 3% to 15% are 5-fold differences in magnitude. With ranges this large it is possible for a risk to share a score with another risk whose expected value is 25 times bigger or smaller. We can be sure the PIG has correctly prioritised two risks only if both the probability and impact of one risk are definitely larger than those of the other. In other words:

PIGs can only differentiate between two risks if the higher scored risk is at least one row above and at least one column to the right of the lower scored risk.
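The scale of the problem is easy to demonstrate with the band boundaries quoted above. The following short sketch (our code, illustrative only) constructs two risks that share a PIG cell yet have expected values 25 times apart:

```python
# Two risks in the same PIG cell (probability band 3%-15%,
# impact band GBP 10m-50m) whose expected values differ 25-fold.
risk_a = (0.03, 10)    # 3% chance of GBP 10m  -> EV = GBP 0.3m
risk_b = (0.15, 50)    # 15% chance of GBP 50m -> EV = GBP 7.5m

for name, (p, impact) in [("A", risk_a), ("B", risk_b)]:
    print(f"Risk {name}: EV = GBP {p * impact:.1f}m")

print(round((0.15 * 50) / (0.03 * 10)))   # 25 -- same score, 25x the exposure
```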

But how serious is this limitation? An intuitive feel for the scale of the problem can be found by examining the following graph. It shows, for each box in the PIG, how many other boxes it can correctly differentiate risks against – the total number of squares that are either a) at least one column to the right and one row higher, or b) at least one column to the left and one row lower.

Figure: PIGs – Ability to Prioritise

Totalling all boxes in the graph gives a value of 200. If a PIG were 100% effective then every box could differentiate against the other 24 boxes, giving a value of 600 (25×24). The difference between these two values should lead to the conclusion that PIGs are poor at differentiating between risks.
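These totals are easily verified by enumeration. A short sketch (our code) applying the dominance rule above to a 5×5 grid:

```python
# Count, over all ordered pairs of boxes, how often one box strictly
# dominates the other (higher row AND higher column, or lower AND lower).
cells = [(p, i) for p in range(1, 6) for i in range(1, 6)]

def comparable(a, b):
    return (a[0] > b[0] and a[1] > b[1]) or (a[0] < b[0] and a[1] < b[1])

print(sum(comparable(a, b) for a in cells for b in cells))  # 200
print(len(cells) * (len(cells) - 1))                        # 600 if fully effective
```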

At this point a more involved analysis could take place, probably on a more elegant theoretical basis. There may also be debate about how the PIG performs with real data. However, we need neither of these to discredit the PIG based approach. Before we look at how PIGs are actually used, we should consider opportunities ('upside risk') and revisit the idealised assumption that risks fit neatly within our PIG cost categories:

Issue 3 – Risks probably will not fit into the predefined PIG categories

Impacts are unlikely to fit into the categories set out in the PIG. This can result in the loss of precision, the creation of precision where it doesn't exist, or just plain ambiguity. Take for instance the following examples (illustrated in the sketch after this list):

  • A risk (e.g. a tax liability) with an impact of £80m to £85m. This will be scored as a '4' and could then be misinterpreted as having the greater range of £50m to £100m. This also leads to a shift in the expected value.
  • A risk with an impact of minimum £10m and maximum £140m. This will be scored as a '4', which could then be interpreted as having the reduced range of £50m to £100m.
  • A risk with an impact of minimum £5m and maximum £15m. A decision will need to be made on whether to place it in the '2' or '3' category; individuals will have different views on this decision.
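These ambiguities can be made concrete with a small sketch, again using our illustrative (assumed) band boundaries:

```python
# Assumed impact bands in GBP millions; e.g. score 4 covers 50-100.
BANDS = {1: (0, 2), 2: (2, 10), 3: (10, 50), 4: (50, 100), 5: (100, float("inf"))}

def band_for(impact_gbp_m):
    return next(s for s, (lo, hi) in BANDS.items() if lo <= impact_gbp_m < hi)

print(band_for(80), band_for(85))    # 4 4  -> an 80-85m range reads as 50-100m
print(band_for(10), band_for(140))   # 3 5  -> a 10-140m range spans three bands
print(band_for(5), band_for(15))     # 2 3  -> a 5-15m range straddles two bands
```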

Issue 4 – PI score is dependent on whether the item is framed as a Risk or as an Opportunity

Take a project context where there is an 80% chance of incurring a ground remediation cost of £15m, giving a score of 15 (probability 5 × impact 3). Suppose instead that the project manager makes a provision of £15m in the base plan and reframes the risk as an opportunity: a 20% chance that the remediation is not required. The opportunity is then scored as a 20% chance of £15m, giving a score of 9 (3 × 3).
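A sketch of the arithmetic, using the same illustrative bands as before:

```python
# The same exposure scored two ways.
# Risk framing: 80% chance of a GBP 15m remediation cost.
risk_score = 5 * 3          # probability band 5 (80%) x impact band 3 (GBP 15m)

# Opportunity framing: GBP 15m provision in the base plan, with a 20%
# chance that it is not needed.
opportunity_score = 3 * 3   # probability band 3 (20%) x impact band 3 (GBP 15m)

print(risk_score, opportunity_score)   # 15 9 -- same exposure, different rank
```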

Real World Applications

The table below is similar to dashboards commonly found in ERM systems and recommended by many consultants for management and governance reporting. As it typifies common practice, we will use it here to examine further the problems with the qualitative risk approach.

Figure: Typical ERM Dashboard Reporting Format

Additional ambiguity with multiple impact criteria
It is common practice to consider not only cost but other impact criteria as well – and then take the maximum of these to arrive at the overall impact score. In the example above, risks have also been evaluated against other criteria – including health and safety, and reputation and stakeholder impact – though the individual assessments are not shown. It may be possible to determine which criterion applies from the risk descriptions – but then again, maybe not.
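A minimal sketch of the mechanism, with invented criterion scores, shows what the dashboard hides:

```python
# Overall impact = max over several criterion scores (invented values).
criterion_scores = {"cost": 2, "health_safety": 4, "reputation": 3}

overall = max(criterion_scores.values())
driver = max(criterion_scores, key=criterion_scores.get)
print(overall, driver)   # 4 health_safety -- the driver is lost on the dashboard
```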

Additional ambiguity by obscuring PIG location
When the PI score has been used to order a risk register, only the score is visible and the location of the risk on the PIG is not shown. Loss of this information means the PI score based approach can only differentiate between two risks if their scores differ and both are drawn from 1, 9, 16 and 25 – the only scores that correspond to a single cell on the matrix. Even worse – in the example above only the High / Medium / Low status is shown.
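That 1, 9, 16 and 25 are the only such scores is easy to check by enumeration (our sketch):

```python
# Find which probability x impact products identify a unique cell on a 5x5 PIG.
from collections import defaultdict

cells_by_score = defaultdict(list)
for p in range(1, 6):
    for i in range(1, 6):
        cells_by_score[p * i].append((p, i))

print(sorted(s for s, cs in cells_by_score.items() if len(cs) == 1))
# [1, 9, 16, 25] -- every other score maps to two or more cells
```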

Killer blows to the PIG based approach

So PIGs are not effective at prioritising risk when compared with expected values. If this is not convincing enough to discredit qualitative risk, the following might be:

PIGs are a computational dead end (or should be)

There is no logically defensible way to work with PIG outputs. For instance, it cannot be concluded that Project A is preferable to Project B (or that Project A is getting better) because it has fewer high scoring risks. However, derivative calculations are common; for instance 'heat maps' that show risks or quantities of risks on a PIG chart, and 'waterfall charts' that show how risk scores change over time. The quantity and sophistication of these derivative calculations are often a driver in the selection and procurement of ERM systems – they also give senior management a false sense of confidence.

PIGs are based on an event based definition of risk only
A broader definition of risk might include uncertainty in estimates – to which no probability can be assigned. Such estimation uncertainties do not fit within the PIG structure – however experience shows that around 50% of exposure typically arises from 'estimation uncertainty'.

Variability Matters
It should be intuitively obvious that variability matters: the fable of the 6ft tall mathematician who drowned trying to cross a river with an average depth of 5 feet illustrates why. Investigation and understanding of variability should be the main focus of risk analysis and management. It is very puzzling, therefore, that the most popular method for prioritising risk, the PIG, completely ignores variability. This is particularly problematic for differentiating between high impact / low probability and low impact / high probability risks. Consider two risks both scored 5: wouldn't your organisation prefer a system that differentiates between a 90% chance of £500k and a 3% chance of £100m? This is a criticism of both the PIG based approach and any other prioritisation method that uses the average and the notion that 'risk = probability x impact'.
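A short simulation (our sketch; the two risks above, which under our assumed bands both score 5) makes the difference stark:

```python
import random

# A: 90% chance of GBP 0.5m; B: 3% chance of GBP 100m.
def simulate(p, impact_gbp_m, n=100_000):
    return [impact_gbp_m if random.random() < p else 0.0 for _ in range(n)]

for name, p, impact in [("A", 0.90, 0.5), ("B", 0.03, 100.0)]:
    outcomes = simulate(p, impact)
    mean = sum(outcomes) / len(outcomes)
    print(f"Risk {name}: mean ~ GBP {mean:.2f}m, worst case GBP {max(outcomes):.0f}m")
# Means (~0.45 vs ~3.0) and tails (0.5 vs 100) differ hugely; the score does not.
```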

Complexity matters too
PIGs take a simplistic view of risk in which there is no systems based thinking about the root causes, relationships and dependencies between risks, opportunities and uncertainties. Ignoring complexity of this kind (and therefore not managing it) can lead to cost overruns of up to 1,000%. Our white paper 'Crossing the CASM: Why complexity matters in risk management' explains more.

Slaying some Pro-PIG Arguments

At this point some advocates of qualitative risk and vendors of ERM software will wish to hang onto the status quo rather than change long held beliefs. The following arguments are anticipated:

The thousands of people and organisations that use PIGs can’t all be wrong 
Popularity is a good indicator but not absolute proof of effectiveness. Accepted logic is often overturned or modified in the light of new findings – for many years established medical dogma was that stomach ulcers were caused by stress, diet, smoking and alcohol. It was later found that though these were irritants, the major cause of peptic ulcers was a bacterium – a Nobel winning discovery. One of the issues is that life would be extraordinarily inefficient if all decisions required us to undertake a bottom up appraisal from first principles. To make progress we all rely on learned responses, rules of thumb and references to what others do in similar circumstances. There is always the possibility that something we depend on and take for granted has a latent flaw. The risk is that if you don't stop and look around once in a while, you could miss it – which was the motivation for this paper.

The absence of good/precise data means an approximate scale is the best option – particularly for probability
In the qualitative approach there are typically only 5 options each for probability and impact – a damaging constraint. In our example matrix there is no differentiation between an uncertain event with a probability assessed at 70–80% and one assessed at 50–100%. A much better approach is to capture uncertainty quantitatively with upper and lower ranges, which offers effectively unlimited resolution. Techniques for dealing with impact uncertainty will be familiar to many – for example Monte Carlo simulation with sensitivity analysis. It is less well known, however, that there are a number of techniques for analysing uncertainty around the likelihood of events and for improving the quality of probability estimates. Making use of these techniques recognises that this can be an important component of any forecasting exercise.
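As a minimal sketch of what a range based assessment might look like (the triangular distribution and the inputs are illustrative assumptions, not a recommendation):

```python
import random

# An event with probability p and a triangular impact distribution
# (low, most likely, high), summarised by mean and percentiles.
def monte_carlo(p, low, mode, high, n=100_000):
    draws = sorted(random.triangular(low, high, mode) if random.random() < p
                   else 0.0 for _ in range(n))
    return {"mean": sum(draws) / n,
            "p50": draws[n // 2],
            "p90": draws[int(0.9 * n)]}

print(monte_carlo(0.75, low=10, mode=15, high=40))   # GBP millions, illustrative
```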

A number of refinements have been made to PIG graphs to remedy known issues
10 x 10 matrices are better than 5×5 matrices, and logarithmic scales help prioritise high impact / low probability risks. However, these and other modifications are best viewed as mitigations rather than solutions – all the issues identified in this paper remain relevant, only with slightly reduced impact.

Conclusion

The purpose of this paper is to raise awareness of issues; definitive solutions cannot be the goal here, because there is no single one size fits all solution that directly replaces PIGs and qualitative risk. There are a number of alternatives from which organisations can choose depending on their requirements and circumstances. To list all the options here, together with their associated benefits and drawbacks, would turn this paper into a textbook.

Assessment scales are not universally a bad thing. Hotels assessed as 4 star are generally better than 2 star hotels – useful for a trip to an unfamiliar city. The Beaufort scale has very successfully categorised wind strength and sea conditions for two centuries. Problems develop with calculations involving multiple qualitative assessments – which seem to be becoming increasingly popular in management generally. By far the most dangerous calculation and belief is that 'risk = probability x impact'. There are not many living sailors who believe a gentle Force 3 breeze is the same thing as a 25% chance of a Force 12 hurricane.

Given the flaws, how is the longevity of PIGs and qualitative risk explained? Project teams who have attended qualitative risk workshops generally give very positive feedback. Closer examination of that feedback shows most of the benefit derives from the joint discussion of issues – including the likelihood and implications of risks. Qualitative assessment is an easy to understand task which gives the team a focus and a sense of common purpose – which is good for team building. These are benefits of common practice which should be retained.

Another reason that PIGs continue to be used is that they are surprisingly difficult to replace – and replacement is the only option, as organisations cannot leave a vacuum. PIGs look good and the meaning of the red/amber/green boxes is universally understood (even though the wrong risks may feature). Moving away from PIGs requires identification of effective alternatives and then persuasion and re-education of both internal and external stakeholders.

For the reasons above there are organisations that are already mature in the use of better and more sophisticated quantitative methods – but PIGs and qualitative risk are run in tandem. Real decisions are supported by the better methods, while PIGs are retained as legacy communication channels to senior management and other stakeholders.

In summary, great progress has been made in the last decade in the common perception of the importance of managing risk – even more so in the last three years with the financial crisis. At the same time, however, common practice risk management has become somewhat formulaic, flabby and lacking in focus. Once there is a greater realisation of the issues with common practice there will be demand for something sharper, leaner and more effective. Progressive organisations that do not want to be caught on the wrong side of events need to start thinking now about how they will manage risk in the future.
