
The probability paradox: Risk assessment and machine safety

July 27, 2012
By Douglas Nix

In Occupational Health and Safety (OHS), risk is a function of the severity of injury and the probability of the injury occurring. Understanding the severity portion is usually fairly easy, although advanced technologies, lasers for instance, require advanced hazard analysis methods to assess correctly.

Probability, on the other hand, is a challenge. Mathematically, probability calculations range from sublimely simple to insanely complex. The simplest forms, like the probability of getting a number from 1 to 6 when you throw a single die, can be understood pretty easily. On any one throw of the die, you have a 1 in 6 chance of getting any one particular number, assuming the die is not weighted in any way. When we're talking about OHS risk, it's never that easy.
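To make the die example concrete, here is a minimal sketch in Python that estimates the probability by simulation; the particular face (a 3) and the number of throws are arbitrary choices for illustration.

```python
# A minimal sketch: estimate the probability of rolling one particular
# number on a fair six-sided die by simulation. The exact answer is
# 1/6, and a large number of simulated throws should come close to it.
import random

throws = 100_000
hits = sum(1 for _ in range(throws) if random.randint(1, 6) == 3)
print(f"Estimated P(rolling a 3): {hits / throws:.3f} (exact: {1 / 6:.3f})")
```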

The first problem: No data

Risk assessment can be done quantitatively, that is, using numeric data. This approach is often taken when numeric data is available, or where reasonable numeric estimates can be made. The problem with using numeric estimates in quantitative assessments is this: math lends credibility to the answer for most people. Consider these two statements:


1. After analyzing the available information, we believe that the risk is pretty low, because it is unlikely that the reactor will melt down.

2. After analyzing the available information, we believe that the risk of a fatality in the surrounding population is very low, because the probability that the reactor will melt down is less than one in 1 million.

Which of these statements sounds more 'correct' or more 'authoritative' to you?

Attaching numbers to a statement makes it sound more authoritative, even if there is no reliable data to back it up! If you are going to attempt to use quantitative estimates in a risk assessment, be sure you can back the estimates up with verifiable data. Frequently there is no numeric data, and that forces you to move from a quantitative approach to a semi-quantitative approach, meaning that numbers are assigned to estimates, usually on a scale like 1–5 or 1–10 representing least likely to most likely, or to a fully qualitative approach, meaning that the scales are only descriptive, like 'unlikely, likely, very likely.' Because the data used in these assessments is the opinion of the assessors, they are much easier to make, as long as the scales used are well designed, with clear descriptions for each increment.

Here’s an exam­ple, taken from Chris Steel’s 1990 arti­cle [1]:

Table 1: Likelihood of occurrence (LO)


Some people might say that this scale is too complex, or that the descriptions are not clear enough. I know that the subtleties sometimes get lost in translation, as I discovered when trying to train a group of non-native-English-speaking engineers in the use of the scale. Linguistic challenges can be a major hurdle to overcome! Simpler scales, like that used in CSA Z432 [2], can be easier to use, but may result in gaps that are not easily dealt with. For example:

Table 2: Avoidance


A scale like the previous one may not be specific enough, or fine enough (a quality sometimes referred to as 'granularity'), to be really useful. There are software packages for risk assessment available as well. One popular product, called CIRSMA, uses a scale that looks like this:

Table 3: Probability of the hazardous event


A scale like this is more descriptive than the CSA scale, but less granular and a bit easier to use than the Steel table.
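To show how such semi-quantitative scales work in practice, here is a minimal sketch; the 1–5 scale values, the category labels and the multiplication of severity by likelihood are illustrative assumptions, not the method prescribed by Steel, CSA Z432 or CIRSMA.

```python
# A minimal sketch of semi-quantitative risk scoring. The scales and
# the severity-times-likelihood combination are illustrative assumptions,
# not the specific method of any of the tools discussed in the article.
LIKELIHOOD = {
    "unlikely": 1, "possible": 2, "likely": 3,
    "very likely": 4, "almost certain": 5,
}
SEVERITY = {
    "first aid": 1, "medical aid": 2, "lost time": 3,
    "permanent injury": 4, "fatality": 5,
}

def risk_score(severity: str, likelihood: str) -> int:
    """Combine the two scale values into a single relative score (1-25)."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

print(risk_score("permanent injury", "possible"))  # prints 8
```

A well-designed scale gives each increment a clear written description, so that two assessors reading the same description tend to pick the same score.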

Probability is also influenced by the frequency of exposure to the hazard, and each of the tools mentioned above has a scale for this parameter as well. I'm not going to spend any time on those scales here, but know that they are similar in granularity and clarity to the ones shown.

The second problem: Perception

This is the really big problem, and it's one that even the greatest minds in risk assessment and communication have yet to solve effectively. People judge risk in all sorts of ways, and the human mind has an outstanding ability to mislead us in this area. In a recent article published in the June 2012 issue of Manufacturing AUTOMATION magazine, Dick Morley talks about the 'Monty Hall problem' [3]. In it, Morley quotes columnist Marilyn vos Savant from her 'Ask Marilyn' column in Parade Magazine:

“Suppose you're on a game show and you are given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say, number one (but the door is not opened). And the host, who knows what's behind the doors, opens another door, say, number three, which has a goat. He says to you, 'Do you want to pick door number two?' Is it to your advantage to switch your choice?”

Here is where things start to go astray. If you keep your original choice, your chance of winning the car is 1 in 3, since you picked before you knew anything and the car could be behind any of the three doors. Switching is different: your first pick is wrong two times out of three, and whenever it is wrong, the host's reveal of a goat behind door three leaves the car behind the single remaining door. Switching therefore wins two times out of three, so your chances go from 33 per cent to 66 per cent in one move, yet most people get this wrong. Mathematically it's easy to see, but humans tend to get emotionally distracted at times like this, and make the wrong choice. According to Morley, studies show that pigeons are actually better at this than humans!
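If the reasoning feels slippery, a simulation settles it. Here is a minimal sketch that plays the game many times under both strategies; the trial count is an arbitrary choice.

```python
# A minimal sketch of the Monty Hall problem: switching wins about
# two thirds of the time, staying only about one third.
import random

def play(switch: bool) -> bool:
    doors = [0, 0, 1]  # two goats (0) and one car (1), positions shuffled
    random.shuffle(doors)
    pick = random.randrange(3)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick] == 1

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.333
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.667
```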

When we start to talk about risk in abstract numbers, like 'one fatality per year per 1 million population' or, stated another way, '1 × 10⁻⁶ fatalities per year' [4], people lose track of what this could mean. We like to attach a time frame to these things, so we might tell ourselves that, since it's June now and no one has died, the risk is somehow half of what was stated, because half the year is gone. In fact, the risk is exactly the same today as it was on Jan. 1, assuming nothing else has changed.
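A minimal sketch makes the point, assuming a constant-rate (Poisson) model purely for illustration: under such a model the process is memoryless, so an uneventful first half of the year does nothing to shrink the risk in the second half.

```python
# A minimal sketch, assuming a constant-rate (Poisson) model for
# illustration only. Memorylessness means a quiet first half-year
# does not reduce the probability of an event in the second half.
import math

rate = 1e-6                                # stated fatalities per year
p_h2 = 1 - math.exp(-rate * 0.5)           # P(event in second half-year)
p_quiet_h1 = math.exp(-rate * 0.5)         # P(no event in first half-year)
p_h2_given_quiet = (p_quiet_h1 * p_h2) / p_quiet_h1  # conditional probability

print(f"P(event in H2):            {p_h2:.3e}")
print(f"P(event in H2 | quiet H1): {p_h2_given_quiet:.3e}")  # identical
```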

In a recent court case involving a workplace fatality, one expert witness developed a theory of the risk of the fatality using the Human Factors approach commonly used in the process and nuclear industries. Using estimates that had no supporting data, he came to the conclusion that the likelihood of a fatality on this particular machine was 1 × 10⁻⁸, or roughly two orders of magnitude less than the chance of being hit by lightning. In OHS, we believe that if a hazard exists, it will eventually do harm to someone, as it did in this case. We know without a doubt that a fatality has occurred. The manufacturer's sales department estimated that there were 80–90 units of the same type in the marketplace at the time of the fatality. Using that estimate, we could calculate the risk of a fatality on that model as roughly 1 in 80 to 1 in 90 (1.25 × 10⁻² to 1.1 × 10⁻²): significantly greater than the risk of being struck by lightning, and roughly six orders of magnitude more than estimated by the expert witness. Estimating risk based on unproven data will result in underestimation of the risk and overconfidence in the safety of the workers involved.
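For the record, the arithmetic works out as follows; treating one observed fatality across an estimated fleet of 80–90 machines as a crude rate is a rough illustration, not a rigorous frequency analysis.

```python
# A minimal sketch of the order-of-magnitude comparison in the text.
# One observed fatality over an estimated 80-90 machines is treated
# as a crude rate, purely to show the gap from the claimed 1e-8.
import math

claimed = 1e-8            # the expert witness's estimate
for fleet in (80, 90):
    observed = 1 / fleet  # one fatality across the estimated fleet
    gap = math.log10(observed / claimed)
    print(f"fleet of {fleet}: observed ~{observed:.2e}, "
          f"{gap:.1f} orders of magnitude above the claim")
```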

Communication

Once a risk assessment is completed and the appropriate risk controls implemented following the Hierarchy of Controls, the residual risk must be communicated to the people who are exposed to it. This allows them to make an informed decision about the risk: choosing to do the task, to modify the task, or not to do the task at all. This is called 'informed consent,' and it is exactly the same principle used by doctors when discussing treatment options with patients. If the risk changes for some reason, the change also needs to be communicated. Communication about risk helps us to resist complacency about the risks that we deal with every day, and helps to avoid confusion about what the risk 'really is.'

Risk perception

Risk perception is an area of study that is trying to help us better understand how various kinds of risks are perceived, and perhaps how best to communicate those risks to the people who are exposed. In a report prepared at the UK's Health and Safety Laboratory in 2005 [5], authors Williamson and Weyman discuss several ways of assessing risk perception.

One approach, described by Renn [6], attempts to chart four different aspects of risk perception in people's thinking.

Figure 1: Risk perception factors


An example of these factors plotted on a graph is shown in Figure 2 below. The data points plotted on the chart are developed by surveying the exposed population and then charting the frequency of their responses to the questions.

Figure 2: Graphing ‘dread’


There are two factors charted on this graph. On the vertical axis, 'Factor 2' is the perceptibility of the risk, or how easily the risk is detected. On the horizontal axis is 'Factor 1,' the dread risk, or how much 'dread' we have of certain outcomes. In Figure 3 below you can see the assignment of factors to the positive and negative directions on these axes.

Figure 3: Dread factors

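As a rough illustration of how survey responses might be reduced to plottable factor scores, consider this minimal sketch; the hazards, the response values and the −3 to +3 scale are invented for illustration and do not reproduce Renn's actual instrument.

```python
# A minimal sketch: reduce survey responses to two plottable factor
# scores per hazard. The hazards, scores and -3..+3 scale are invented
# for illustration; this is not Renn's actual survey instrument.
from statistics import mean

# Each list holds one rating per respondent, from -3 (low) to +3 (high).
responses = {
    "machine entanglement": {"dread": [2, 1, 3, 2], "perceptibility": [-1, 0, -2, -1]},
    "hand tool use": {"dread": [-2, -1, -2, -3], "perceptibility": [2, 3, 2, 2]},
}

for hazard, scores in responses.items():
    f1 = mean(scores["dread"])           # x-axis: Factor 1, dread
    f2 = mean(scores["perceptibility"])  # y-axis: Factor 2, perceptibility
    print(f"{hazard}: Factor 1 = {f1:+.1f}, Factor 2 = {f2:+.1f}")
```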

At this point, I can say that we are a long way from being able to use this approach effectively when considering machinery safety, but as practitioners, we need to begin to consider these approaches when we communicate risk to our customers, users and workers.

When you are thinking about risk, it's important to be clear about the basis for the risk you are considering. Make sure that you are using valid, verifiable data, especially if you are calculating a numeric value to represent the probability component of the risk. Where numeric data isn't available, use the semi-quantitative and qualitative scoring tools that are available to simplify the process and enable you to develop sound evaluations of the risk involved.

References
[1] C. Steel. "Risk estimation." The Safety and Health Practitioner, pp. 20–22, June 1990.
[2] Safeguarding of Machinery, CSA Standard Z432-1994 (R1999).
[3] R. Morley. "Analyzing Risk: The Monty Hall problem." Manufacturing AUTOMATION, June 2012, p. 26.
[4] J. D. Rimington and S. A. Harbison, "The Tolerability of Risk from Nuclear Power Stations," Health and Safety Executive, Her Majesty's Stationery Office, London, UK, 1992.
[5] J. Williamson and A. Weyman, "Review of the Public Perception of Risk, and Stakeholder Engagement," The Crown, London, UK, Rep. HSL/2005/16, 2005.
[6] O. Renn, "The Role of Risk Perception for Risk Management," in P.E.T. Douben (Ed.), Pollution Risk Assessment and Management. Chichester: John Wiley & Sons, 1998, pp. 429–450.

Douglas Nix, A.Sc.T., is managing director at Compliance InSight Consulting, Inc. (www.complianceinsight.ca) in Kitchener, Ont. He produces a blog and podcast called Machinery Safety 101, exploring a wide variety of machine safety topics. Check out his blog at www.machinerysafety101.com.

