The safety world is doing a lot of talk about safety measurement and safety metrics lately. There’s a widespread belief that the reliance on lagging indicators for safety measurement (most especially incident rates) isn’t beneficial. And there’s also a widespread belief that we should be using more leading indicators, even if it’s not always clear which leading indicators to use.
Plus, there are interesting discussions about quantitative v. qualitative indicators as well as controversies about things that can’t be measured at all.
We sat down with Pam Walaski, who’s recently been studying up and revising her own beliefs on safety measurement, to get a nicely nuanced introduction and some guidance on moving forward when it comes to safety measurement (notice in particular her suggestion to use both lagging and leading indicators, but also her different spin on what lagging and leading indicators are, which ones to use, how they should be related to one another, and how they should tie in to business goals).
Feel free to watch the video below to begin soaking it in. If you’re the type who’d rather read, we’ve included the transcript of the discussion below the video.
Pam Walaski on Safety Measurement, Safety Metrics, and Safety Indicators
Let’s dive right into this discussion with Pam on safety metrics.
Convergence Training: Hi, everybody and welcome.
This is Jeff Dalto of Convergence Training and Vector Solutions back once again with another webcast-audio-podcast. And today we’re talking occupational safety and health issues revolving around safety metrics and the use of indicators.
And we have a repeat guest today: Pam Walaski is with us. A lot of you probably know Pam from ASSP and elsewhere. Pam is a Certified Safety Professional, and she’s also the Senior Program Director with a company called Specialty Technical Consultants. And in addition, and I might argue coolest of all, she is a faculty member at the Indiana University of Pennsylvania’s Department of Safety Sciences.
So with that, I’d like to say hi to Pam and welcome. How are you doing?
Pam Walaski: Hi, Jeff. Good. I’m always glad to talk with you and it’s good to be back.
Convergence Training: Yeah, I always enjoy talking with you and chatting with you offline as well. I’m excited about today’s discussion, so thanks in advance. I know you’re doing a lot of work on this indicators and metrics issue. And obviously it’s of a lot of interest to everybody out there. So we appreciate your time.
Pam Walaski: And one quick point: though I am technically a temporary faculty member at IUP right now, I’m just teaching one class. But it is nice to be working with the future of the safety profession. I teach a freshman class, so it’s fun to have a role to play in their start in the field, as you said.
Convergence Training: Well, thanks for correcting me, and yeah, I agree. That’s a great opportunity to work with young people and help enable their careers and move the safety profession forward. So hats off to you and to all the people out there doing the same thing.
So before we jump in and start talking about safety metrics and safety indicators, for the people listening out there, can you tell them a little bit about who Pam is and what you have done in your professional background?
Pam Walaski: Yeah, I think you covered some of it. I’m currently a Senior Program Director with Specialty Technical Consultants. We’re a niche consultancy firm; we focus on management systems, auditing, and program development. It’s a small group of folks, about 15 of us, all over the country. And I’ve been with them since May, so it’s kind of a new gig for me.
Prior to that I was with another consulting firm, but it was engineering consulting, and I was their safety director. And I’ve also done a lot of consulting over the years.
Right now, my focus is mostly, as I said, on management systems. Risk assessment is an area that I’m extremely interested in and do a lot of work in as well.
And this particular topic, interestingly enough, came up because one of my clients approached me and said that they were trying to change their perspective. They had heard that they needed to be doing things with leading indicators and wondered if I could help them. And I said, “Sure, everybody knows that lagging indicators are bad and leading indicators are good. And so this should be an easy project.”
But I wanted to do a little bit of research first before I sort of sat down with them. And the more I read, the deeper the rabbit hole got. And I found that I really didn’t know as much as I thought I did about this topic. I’ve learned a lot and gained some new perspectives.
And so that’s kind of what we were hoping to talk a little bit about today. I climbed out of that rabbit hole, I think. So hopefully I can be helpful.
Convergence Training: Oh, no doubt. Well, we look forward to talking about what you’ve learned. And Pam did mention she does a lot of work with risk and occupational safety. I know Pam teaches for the ASSP on similar topics. And we have a number of previously recorded webinars with Pam on those topics, so we’ll link to them in the transcription here and we encourage people to check those out.
Here are those previously recorded webinars with Pam:
Definitions: Safety Measurements, Safety Metrics, and Lagging and Leading Indicators
But as Pam said, today we’re going to be talking about safety measurement and safety metrics, and we’re going to be talking about lagging and leading indicators. For people who are maybe new to this kind of language, can you start off by telling us what we’re talking about, what those terms safety measurements, safety metrics, lagging indicators, and leading indicators mean?
Pam Walaski: Sure. One of the things that I found is that there really isn’t a good definition out there that I think is commonly accepted. But we do often in the profession use the terms leading and lagging indicators to describe how we measure occupational safety and health performance.
And I think pretty consistently, most people would think of a lagging indicator as an after-the-fact measure–something that’s already happened, where we’re looking at the outcome of something that we’ve done within our organization. The most common ones that people are familiar with, of course, are the incident rates, the total recordable incident rate, the DART rate, the lost time rate, and those kinds of things.
Experience modification ratings are also another commonly used lagging indicator because they tell you what’s happened. And they’re used to measure your performance year-over-year against yourself as well as against other organizations. And there’s a lot of benchmarking that goes on out there.
Leading indicators have gotten a little bit more popular lately in terms of discussions and articles. And they were thought to be better, and I use that term loosely, because they measure proactive or preventative work that we do as occupational safety and health professionals. The idea is that’s really where we should be looking, because driving continuous improvement means really looking forward, or upstream, if you will, at what we do. And so developing those kinds of indicators is better than looking at things after the fact.
But interestingly enough, when you read the literature, you don’t really get a common definition for either of them, just a common understanding of what they are. And I see a lot of other terms used to describe them: trailing, prospective, upstream, downstream, preventative, proactive, reactive; just a lot of different terms are out there. And so there’s some lack of consistency about how we use them. But I think in general, most occupational safety and health professionals would be most familiar with the terms leading and lagging in terms of how we measure.
What We Talk about When We Talk about Safety Measurement
Convergence Training: Cool, and then I guess, if I could just drag you over to safety measurement. What would people talk about when they talk about safety measurement?
Pam Walaski: I think we’re looking at the types of things that we do as occupational safety professionals, the kinds of activities that we engage in, and whether or not they are providing value to our organization; are we measuring the kinds of things that provide value to our organization? They typically are associated with occupational safety and health activities: incident investigations, training programs, policies and procedures that we develop, compliance, near-miss reporting, those kinds of things. And that, from my perspective, is one of the problems with the typical use of indicators in occupational safety and health, and we’ll talk about that a little bit later.
But the idea to me is that those indicators are set apart from business indicators, the other KPIs that your organization might be tracking. And so a leading or lagging indicator is traditionally focused on occupational safety and health, but it doesn’t have a tie to the business. It doesn’t have a tie to the business’s strategy. And so we find in many respects that we are isolating ourselves, setting ourselves aside from the business. And then we kind of wonder why we’re not engaged with the C-suite and not showing how the work that we do connects with what the business is doing.
And so I tend to think about performance measurement from an occupational safety and health perspective as business performance measurement. It’s nothing different; we’re measuring the success of the business in terms of what it thinks is important. And obviously businesses think that performing well from an occupational safety and health perspective is important, but a lagging indicator really just kind of sits out there all by itself. And I think one of the things that we need to start doing is finding ways to integrate them more into the business’s expectations.
Convergence Training: Great. That was a great answer on safety measurement and soon we’ll talk about this more, about how you want to move to performance measurement, aligning with business goals, and we’ll talk more immediately about lagging and leading indicators.
But when you were giving the definitions, they are slippery and confusing, and you see them used in different ways. And what occurred to me is that a lot of times people talk about lagging indicators as something that’s already happened. But obviously that’s true of a leading indicator as well, because you’re counting something you’ve already done. It’s already happened. And the thought is, as you pointed out, that the leading indicator has a future preventive value or future predictive value. Is that correct?
Pam Walaski: Right. And so two things about that.
One of the things that I discovered in terms of the predictive value is that there really are no good studies out there, scientifically validated studies, that tell us that those indicators we’re measuring have any tie to what we do. And that’s really one of the bigger issues. There are a number of folks out there, traditional researchers, who will argue that these are all well and good, but they don’t relate in any statistically valid way to what you’re doing. And that really is a problem.
There are also some folks out there, including somebody of whom I’m a big fan, Fred Manuele, and another gentleman by the name of Andrew Hopkins, who make some compelling arguments that leading indicators can be lagging indicators and lagging indicators can be leading indicators. And so using those terms kind of confuses and confounds the issue. Fred Manuele called the whole discussion about what was leading and lagging “gibberish” in one of his articles that I read. And Andrew Hopkins said very much the same: it doesn’t really matter what you call them; what matters is what they’re measuring and how you’re using them to drive continuous improvement.
So that’s part of my learning as I did more research, it doesn’t really matter what we call them. And so I’ve started to get away from the terms leading and lagging and focus more on performance measurement and what that means.
Convergence Training: Alright, we look forward to hearing more about that. If you’ll bear with me, I’ll ask you a question about lagging and leading indicators, but then we’ll shift our focus. And people listening will definitely get links to the articles Pam just talked about.
Convergence Training: Can you talk to us about what some people call lagging indicators as we just defined them and talk about their historical use and our current use of them in safety management?
Pam Walaski: Yep. So lagging indicators most traditionally are, as we said earlier, the sort of incident rates that we use, or experience modification ratings, or costs of claims that our workers’ comp carriers might provide to us. And so they are traditionally used in that way, in that after-the-fact way.
There is a lot of value to lagging indicators. And so my argument now, and as we continue to talk, is that lagging indicators aren’t bad, and we shouldn’t be throwing them away. One of the values of lagging indicators is that they’re very well understood in the industry. For example, incident rates are calculated with a standard formula. So my incident rate is calculated in the same way somebody else’s incident rate is calculated, which in theory makes them comparable. And so that’s a very important part of lagging indicators, traditionally, from an incident rate perspective. So we can use them in a consistent way, and people understand what I mean when I say total recordable incident rate. It’s a common term; we all calculate it pretty much the same way. And so that’s one of the real values.
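Side note: for readers unfamiliar with it, the standard OSHA-style formula Pam mentions normalizes recordable incidents to the hours of 100 full-time workers (200,000 hours = 100 employees × 40 hours/week × 50 weeks/year). A minimal sketch in Python; the example numbers are hypothetical:

```python
def incident_rate(recordable_count: int, hours_worked: float) -> float:
    """Incidents per 100 full-time-equivalent workers per year.

    200,000 = 100 employees x 40 hours/week x 50 weeks/year.
    The same formula underlies TRIR and DART rates; only which
    incidents you count differs.
    """
    return (recordable_count * 200_000) / hours_worked

# Hypothetical example: 4 recordables over 500,000 hours worked
print(incident_rate(4, 500_000))  # → 1.6
```

Because everyone divides by the same 200,000-hour base, rates are comparable across organizations of different sizes, which is exactly the consistency Pam describes.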
Business also understands lagging indicators, because we’ve been using them for so long. And so if we want to evolve how we measure occupational safety and health performance, the solution isn’t to take something that we’ve used for the past 40 or 50 years and say “Well, you know what, never mind. Let’s try something else.” Right? We don’t want to necessarily abandon them. Lagging indicators are good ways for organizations to measure their performance year-over-year. They may not be the best to benchmark against other organizations, but they do provide some value to the organization and they are easy to understand.
The problems with them, in addition to being after-the-fact measures, are that in some respects incident rates and injury information are not necessarily consistently applied in terms of how they’re calculated. So for example, there are a lot of arguments out there about what an OSHA recordable is, and I’m speaking to the US-based audience here. You’ll see threads on LinkedIn or Facebook where somebody will post, “This just happened to me. Here are the details of the incident; is this recordable or not?” And I guarantee you that there will not be agreement among the people who chime in. Half of them will say no, and there’ll be all kinds of variations in between. So the point is that if we don’t all agree on what a recordable is, then we can’t really compare recordable rates, because the information may not be the same. Now, the aggregate data that we get from the BLS is probably large enough to kind of smooth out those rough edges, and that’s helpful, but we have to be very careful about that.
There’s also the randomness of injuries occurring. And I just finished a book, and we’ll talk some more about it, by Carsten Busch. I know you know him very well. It’s called If You Can’t Measure It…Maybe You Shouldn’t.
Check out our earlier interview with Carsten Busch, Safety Mythologist: 10 Safety Myths.
He published it just recently, and I’m almost done with it. He argues, and makes a very good case, that injuries in and of themselves are random events. They don’t always have the same expectation of occurring under the same circumstances. And we know that to be true: the same situation can occur 100 times, and 99 times nobody gets hurt, and one time somebody does. And that randomness means there isn’t a good way to compare what happens. He also talks about using a very small sample within your own measurements: the injury rate from last year doesn’t really mean anything by itself; you really have to go much further back, or look at rolling averages, again to kind of smooth out the problems associated with lagging indicators.
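Side note: the rolling-average idea Pam describes can be sketched in a few lines of Python. The yearly rates below are invented for illustration; the point is that the smoothed series swings much less than the raw one:

```python
def rolling_average(rates, window=3):
    """Trailing rolling average to smooth year-to-year noise
    in small-sample lagging indicators like yearly TRIRs."""
    out = []
    for i in range(len(rates)):
        # Use up to `window` most recent years, fewer at the start
        chunk = rates[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

yearly_trir = [2.1, 0.8, 3.4, 1.2, 1.9]  # hypothetical yearly rates
smoothed = rolling_average(yearly_trir)
print([round(x, 2) for x in smoothed])
```

The raw series jumps from 0.8 to 3.4 between years, which looks dramatic; the three-year trailing average moves far less, which is the "smoothing out" Pam refers to.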
The other problem, and this to me is really the biggest issue, is that when we use lagging indicators, we teach our senior leadership to pay attention to them, which means that every time we have an injury that affects our incident rate, everybody throws their hands up in the air and says, “Oh, my goodness, somebody got hurt, we’ve got to do something about this!” And so we focus most of our time and energy on fixing that one injury or that one incident, which may not have any broader relevance. But we’ve taught our business leaders that this is important and that we should pay attention to it because somebody just got hurt. Which isn’t to say that we don’t care if somebody gets hurt, or that we should ignore it. But it places a huge emphasis on one individual incident. I can speak from experience in my last position: when we would have a recordable, it was a very, very traumatic event for everybody, even if it was something as simple as somebody getting bitten by a tick and going to the doctor for a prophylactic antibiotic just in case the tick was infected with Lyme disease. We spent hours and hours and hours looking into that particular incident and how we could have kept it from happening.
So to me, that’s one of the problems. Business doesn’t measure itself, typically, by its failures. And an incident rate is a failure. Business measures itself by its successes. And so the theory then goes that leading indicators are successes or better ways to measure success.
How New Safety Fits In
Convergence Training: You know, the point you just made, about how if we always report on lagging indicators like incident rates, that gets everybody focused on every single incident, is very similar to a point I heard during a discussion with Todd Conklin recently here as well. He was talking about how, if you’re doing something high-risk, maybe someone’s up on a scaffold and they’re building something, and the goal is not to kill people, then businesses have to give safety professionals the freedom to break some arms sometimes. So that’s pretty similar to what you were saying.
What Makes for a GOOD Leading Indicator?
Pam Walaski: Yeah, absolutely. Absolutely.
But on the other side of the coin, it’s not that leading indicators are the solution to all of it either. Leading indicators are intended to measure preventative or proactive activities. But the problem with leading indicators that I’ve found in looking is that we really, as a profession, haven’t squared away in our heads what a good leading indicator is. And so what I see in some of the things that I’ve read and heard is something called a leading indicator that is really no more than a tally of an activity that is proactive.
So for example, a leading indicator that is the number of hours we spend in training in 2019, or the number of JSAs that we update, or the number of safety suggestions that we get. All of those are preventative and proactive activities. But the indicator is nothing more than a number. It’s a tally; it doesn’t have any quality component to it. Just because we met our indicator of training hours for 2019 doesn’t mean the training was good, and it doesn’t mean that anybody learned anything from it. And just because we met our indicator for near-miss reports doesn’t mean that those reports were any good or that we can do anything with them.
And so when we switch over to thinking about leading indicators, we have to be careful that a number, a tally, is not an indicator. It’s just a number. And so we have to incorporate a quality component into that indicator before it has any relevance to what we’re trying to do. So for example, finding a way to measure the success of training in terms of people’s takeaways, not just did they score and pass the quiz that we give at the end of the training session, but two weeks later, can we observe them performing in ways that tell us that they learned something? Or in terms of JSAs, you know, have they been written in a way that is a high-quality JSA and meets certain parameters, not just that somebody pencil-whipped a new JSA and signed off on it and said, “Here you go.”
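Side note: one way to picture the "tally plus quality component" idea Pam describes is to report a quality score alongside the raw count rather than the count alone. The sketch below is purely illustrative; the near-miss reports and the 0.0–1.0 reviewer scoring rubric are invented assumptions, not a method from the interview:

```python
# Hypothetical near-miss reports, each scored 0.0-1.0 by a reviewer
# (e.g., did it identify contributing factors, suggest a fix, etc.).
reports = [
    {"id": 1, "quality": 0.9},
    {"id": 2, "quality": 0.2},  # a "pencil-whipped" report
    {"id": 3, "quality": 0.7},
]

raw_count = len(reports)
avg_quality = sum(r["quality"] for r in reports) / raw_count

# The tally alone says "3 reports"; the pair says "3 reports,
# average quality 0.6" -- a very different picture of performance.
print(raw_count, round(avg_quality, 2))
```

The point is not this particular rubric but the structure: a count with no quality dimension can be hit without anything improving, whereas the paired figure exposes pencil-whipping.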
Because the one thing that we know about leading indicators is that if we set up a measurement of X number of something, chances are pretty good we’re going to come close to hitting it, because people are going to see that number and say, “Okay, how do I get there? What do I have to do to get there?” And getting there becomes the most important thing.
So, one of the things that Wells Fargo learned, in a very bad way, was that incentivizing metrics with rewards creates a tremendous problem. The Harvard Business Review had a really great article in September called “Don’t Let Metrics Undermine Your Business.” They use Wells Fargo as an example of something they call surrogation, which is simply that the metric becomes the Holy Grail, and achieving it is the most important thing. Wells Fargo used a metric that their CEO put out called “Eight is Great.” And “Eight is Great” meant that if I went to Wells Fargo for a mortgage, for example, and I successfully applied for and got a mortgage, their customer service people’s and product sales people’s job was to convince me that there were seven other Wells Fargo products that I must have.
And so people signed up for things they really didn’t want. And ultimately, because that behavior was incentivized with rewards to the salespeople, customers got signed up for things without their permission. That’s what brought the whole thing down: millions of dollars lost, a terrible reputation, and, in the end, a terrible lesson for Wells Fargo.
But it applies to us as well. If we establish a leading indicator, even a lagging indicator, and we incentivize that in some way, people will do whatever they have to do to achieve it.
The other problem that Wells Fargo found, and it’s related, is that the indicator they picked was not developed in conjunction with the people who were tasked with achieving it. And again, you see that a lot in indicator selection, particularly with leading indicators. The Safety Department decides what the leading indicators are going to be, as opposed to sitting down and talking with the people who are going to be responsible for them, to find out what they think would be a good way to measure the good things that we’re doing, the preventative things that we’re doing.
So there are lots of ways leading indicators can get twisted around to not be what we would hope they would be.
Convergence Training: All right, good thoughts. I’ve scribbled some notes here.
First, you were talking about how leading indicators are often currently used just as tallies, or as what I would call basically just a measure of busyness, as opposed to having some quality component, like you said. And the first example you gave was about training hours. So there I would encourage people to check out some good models, including some new models, of training evaluation. We just had a discussion about the roots of this with learning researcher Dr. Will Thalheimer.
Side note: Here are two recorded discussions with Dr. Will Thalheimer on learning evaluation:
And then the second thing: even if you have a quality measurement to some extent, there’s still the issue you talked about earlier, which is that there’s an assumed value to these leading indicators, even if there’s no data or evidence to show that they actually have predictive or preventive value, right?
It’s important to think about that, because every field has those assumptions that we believe are true and don’t think critically about, right?
Side note: for example, check these related articles:
Pam Walaski: Right. And so, you know, as you think about ways forward, that brings up the point that I think is really important to address, which I mentioned at the very beginning: our indicators currently sit out there, devoid of any connection to a business driver, right? And leading indicators are no better, because they still sit out there all by themselves, even if they are well-designed and well-crafted and have a quality component. There’s still a lack of integration into the business, into the strategic plan of the organization, which is really where, in the long run, I think we need to be in terms of how we use both leading and lagging indicators.
And even more so, leading indicators and lagging indicators should be talking to each other, they should be a continuum as opposed to separate things. So what I would envision is not that we have a little dashboard that has our leading indicators over on this side, and our lagging indicators over on that side, but that we have a business driver or a strategic goal, and our leading indicators and lagging indicators are tied directly to that.
And there are lots of great ways to do that, and a lot of people are talking about it. Carsten Busch, in the book we just mentioned, talks a lot about that. Peter Susca, a gentleman who does some really good work for ASSP in their Professional Safety journal, talks about Stephen Covey and how one of his seven habits of highly effective people is to begin with the end in mind. Susca says with an indicator, that’s not what you want to do; you don’t want to begin with the end in mind, because that drives you to an indicator that is detached. You want to end with the beginning in mind. And so what he’s trying to say is: what’s the strategy of the organization, and how can we tie occupational safety and health indicators to that strategy? So leading indicators become input indicators, as opposed to leading indicators. And then you have a process, a goal, something you’re trying to achieve, and activities that are going on around that. And then the way of measuring whether you are successful is with your lagging indicators. So it becomes input indicator, process, output indicator. And that’s the model that I think we really need to be working our way toward, and you’re seeing more and more of that out there in the literature, about getting away from isolating our indicators in occupational safety and health.
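Side note: the input-process-output model can be pictured as a data structure in which both kinds of indicators hang off a single business goal rather than living on separate dashboards. The goal and indicator names below are invented for illustration, not taken from Pam’s client work:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceGoal:
    """A strategic business goal with its input (leading) and
    output (lagging) indicators tied directly to it."""
    strategic_goal: str
    input_indicators: list = field(default_factory=list)   # proactive measures
    output_indicators: list = field(default_factory=list)  # after-the-fact measures

# Hypothetical example: the indicators belong to the business goal,
# not to a standalone safety dashboard.
goal = PerformanceGoal(
    strategic_goal="Reduce unplanned downtime from process changes",
    input_indicators=[
        "% of changes run through the management-of-change process",
        "Quality score of completed MOC reviews",
    ],
    output_indicators=[
        "Incidents with MOC failure as a contributing cause",
    ],
)

print(goal.strategic_goal)
```

The structure itself makes the point: there is nowhere to record a leading or lagging indicator except attached to a business driver.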
Performance Measurement & Safety Management Systems
Convergence Training: And is that what you meant when you were talking earlier about how you’d like to see these indicators, this whole measurement issue, moved toward performance measurement?
Pam Walaski: Yes, exactly. It ties it to performance measurement and, you know, we haven’t talked at all about how all of this fits into safety management systems. But this is a good segue to throw that in there.
So another person who I have a lot of respect for is a woman by the name of Kathy Seabrook. She writes a lot and she was the ASSP vice chair for ISO 45001. And her perspective is that a fully functioning safety management system is a leading indicator in and of itself. And anybody who is working in management systems knows that one of the most important parts of it, whether you’re using 45001 or Z10 or whatever, is performance measurement, right? And that’s where we need to be tying our performance measurement into our management system, which is of course integrated into organizational performance in general. So it all ties together; it all feeds up, as opposed to sitting off to the side.
There’s a Global Reporting Initiative standard, GRI 403, that addresses occupational health and safety, and they talk a lot about leading indicators in there as well. And the ISO 45001 committee is in the process of developing either guidance or a separate standard on performance measurement, in which they will take a much closer look at how management systems measure performance and what guidance we can give people who are using 45001 to do that.
Convergence Training: Pam, I’ve got a question for you about the management systems we were just talking about. I was doing an interview with a different safety professional, and she was talking about some online safety management software system she was using. One of the questions was, if I recall, “Do you have a safety management system?” And you know, you check the box and say yes, and automatically, and I think she was critical of this, your risk rating went from red to green, or from seven to three; I’m making that up. And that kind of seems like a case where we’re just assuming something like a safety management system somehow makes things better. Is there data to show that’s true?
Pam Walaski: There isn’t. But because safety management systems are still a little bit new, the people who are researching them would make the same argument at this point: that there really aren’t any valid scientific studies that tie implementation of a management system to improved safety performance. And as you say, just because an auditor comes to your facility and says “yes, yes, yes” to all the boxes in terms of conforming to what the management system says you have to do, that doesn’t mean, number one, that you’re actually doing it, or, number two, that you’re doing it effectively. And so your outcomes may not be better just because you have a management system. It provides a foundation, and complying with a standard like Z10 or 45001 just means that you’re complying with the standard. It doesn’t mean that you’re doing a good job.
And you know, there’s a corollary too. Just because you’re following the OSHA standards doesn’t mean that you have a safe workplace, right? We know the absence of injuries or compliance with OSHA doesn’t mean things are safe. And again, lots of people are writing lots of stuff about that idea. The absence of injuries does not mean the presence of safety.
Convergence Training: Right. So I guess with the safety management thing, there are really two levels. One, we don’t know for certain that having a safety management system in place really does make you safer. And two, everybody’s got their own flavor of safety management system, performing in their own way in their site-specific context, so people may be doing it more or less effectively.
Pam Walaski: Right. And, you know, auditing is designed to provide some general conformance across the board. But it isn’t a perfect system either. And organizations shouldn’t feel that just because they have ISO 45001 on their website, we should all think they’re safer than everybody else.
Are There Things We Can’t Measure? If So, Should We Try?
Convergence Training: So is this a good time to talk about something you’ve touched on a few times already, and to bring up our buddy Carsten Busch, whom I’ll have on here soon talking about his new book? Maybe sometimes, if you can’t measure something, or can’t figure out how to measure it, it’s because you shouldn’t be measuring it?
Pam Walaski: Yeah. Well, you know, Carsten is very pragmatic, and he challenges assumptions. So even some of the things that I’m suggesting are good solutions, he would probably disagree with. He talks about a different way of measuring performance that is not based on numerical kinds of things. He talks in his book a little bit more about a sort of richness in terms of stories and those kinds of things that are not easy to quantify. They’re very quality-based, and they’re a real challenge. And he would challenge us to really rethink how we’re measuring what it is that we do. I’m not completely on board with everything that he says, and I’m sure he would say that’s perfectly fine; I’m not right, he’s not wrong. But he does raise a lot of interesting perspectives about performance measurement and what we’re doing.
So the idea that we really lack really hard evidence would be something that I think he would suggest is an indicator that we should be doing something different.
Convergence Training: Yeah, I see a lot of talk about the distinction between qualitative and quantitative measurements for safety. And I've had similar discussions with someone like Ron Gant about the importance of stories. Sometimes you can't get things in terms of numerical data, but there are other forms of evidence out there.
It’s really kind of an interesting thing, the whole signal and the noise conversation.
Pam Walaski: Yeah, he talks about the signal and the noise, and, you know, I think it would be fair to say that he would say that lagging indicators are full of noise.
You know, a TRIR in and of itself is nothing more than a number. What's in that number is much more valuable. But unfortunately, we don't often take the time to really dig into that number. We just say, you know, "Our TRIR is above or below or at," or we use it in a variety of ways that really don't mean what we would like to believe they mean, right?
Convergence Training: Right. So it’s reductive, we don’t unpack it to find the true wisdom or the true nugget in it, or to use it in an effective way.
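For readers unfamiliar with the number being discussed: TRIR (Total Recordable Incident Rate) is computed with OSHA's standard normalization of 200,000 hours, roughly 100 full-time employees working a full year. A minimal sketch in Python (the function name and sample figures are illustrative, not from the interview):

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """OSHA Total Recordable Incident Rate.

    Normalized to 200,000 hours, which approximates 100 full-time
    employees working 40 hours/week for 50 weeks.
    """
    return recordable_incidents * 200_000 / hours_worked

# Example: 4 recordable incidents across 500,000 hours worked
rate = trir(4, 500_000)  # 1.6
```

As Pam notes, the single number says nothing about which categories of injuries drive it; that only comes from digging into the incidents behind it.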
Safety Metrics Examples
Pam Walaski: Right. And I have a couple of examples, and I can share them whenever you think the time is right.
Convergence Training: Now, how about now?
Pam Walaski: Yeah. So for example, let's just say that as part of my incident investigation process, I'm consistently finding that one of the causes of incidents is a lack of use of management of change, or improper use of management of change.
So as I dig into my lagging indicator, I say to myself, "How can I use this information to change?" So going back to our process model, we decide that one of the things we're going to do in 2020 is improve our management of change process. We're going to revamp it completely, because it's not being used, it's not being used properly, and it's not being used when it should be used. And in theory, it should be helping us to reduce incidents, if that's what we believe.
So we decide that, as leading indicators for this effort, process indicators or input indicators, we're going to measure things like a revision to our management of change processes and procedures. And we're going to establish some markers for what would tell us that those procedures have been properly vetted, revised, and rolled back out to our organization. And we're going to measure our organization's understanding of these new procedures by training people and then measuring their understanding of what they've been trained on, so that they understand now what management of change is and how to use it. And we're going to develop other leading indicators before we begin to roll this new process out.
So then we go through the rest of 2020, we roll out the management of change process, and our lagging indicator becomes, “have we seen a reduction in the number of incidents where management of change or failure to use it is a cause?” Because if we have properly improved management of change within our organization, we would begin to see a reduction in that, because people understand what it is, people know when to use it, and they use it properly. And as a result, it begins to prevent incidents.
And so the sort of traditional lagging indicator of recordables or incidents should also go down, because we no longer have management of change as part of that. And we can begin to see people understanding that process of management of change, as opposed to just doing incident investigations or just looking at a TRIR. So we take a lagging indicator, we transform it into a process improvement, and then we tie leading and lagging indicators to that.
So this next one is a little bit more tied to business operations, and it's an example from the organization that I just came from. We had a higher-than-normal TRIR. And as you looked into that TRIR, there were a number of different groups or categories of injuries that we thought we could probably do a better job of preventing. But to tie it to the business: this particular organization did a lot of environmental consulting. So that involved field staff who would be out in all kinds of weather and all kinds of terrains, doing wetland delineations, and habitat surveys, and pipeline surveys. And because of the physical and strenuous nature of the work, they would sometimes slip or trip or fall. And as a result of that, you know, maybe they would tweak a knee, or bump an elbow, or twist an ankle.
And as a result of that, they would say to themselves, “You know, I wonder if I should see a doctor?” And so they would ask their coworkers, or they would ask their field supervisor, or they would ask me, “Do I need to go see a doctor?” And I would say, “I don’t know, I’m not a doctor.” And so in the absence of good quality expertise, they would go see a doctor who would then diagnose it and we’d end up with a recordable. So as we looked at this, and we talked a little bit more about business expectations, the business side of the organization felt like we did a really good job with environmental work, that we had some top notch biologists and species folks who did really high-quality work and our clients really loved the work that we did. And we wanted to do more of that.
And so part of our 2018 goal was to increase the number of dollars that we brought in for those kinds of projects. But the problem was our TRIR was causing a block, because our clients were saying, "You know, we love the work that you do, but your TRIR is above our benchmark." So we said, "Okay, look, we can improve this business driver, we can support what this department wants to do in terms of increasing that work, if we can reduce our TRIR." So we found a medical triage service that provided our employees with 24-hour-a-day, seven-day-a-week nurses who would answer the phone when we would call and say, "I slipped and fell and I hurt my knee. Do I need to go to a doctor?"
So as leading indicators we looked at “Let’s find a vendor who can do this for us, let’s develop a procedure and a protocol for how this will work, let’s train people on how to use it, and let’s measure how often they use this service appropriately.” So those were our leading indicators. And we put all that stuff together, and we rolled it out. And then the lagging indicator wasn’t “Did our TRIR go down,” necessarily, but for the types of injuries where people had questions about the need to seek medical care, did they get the kind of information that they needed? And after a year of using this service, we found that about eight people had reported an injury to the medical triage service and had been diverted from medical care into self-care. And in talking to those eight employees, they said “Yes, I felt perfectly fine not seeking the doctor because talking to a nurse helped me.” So we tied that reduction in those soft tissue injuries to a specific implementation of a nurse triage service, which then we tied back into the business department’s driver, reducing the TRIR so that they can continue to seek work in that area.
And so that’s what I mean by tying it to business. Right? So instead of just saying, “Oh, we’ve got to get rid of these injuries,” we said, “What are our injuries doing that impact our business? And how can we tie reducing them directly to that?”
Convergence Training: Yeah, and I like that you solved it not in a cynical way, or by hiding something, but just by interjecting expertise.
Pam Walaski: Exactly. And you know, it wasn't that we thought that our employees were just running off to the doctor every time they had a little bump or bruise. They truly didn't know what to do, right? And because of the remoteness of the work that they were doing, they weren't often close to their own doctor. So they would find the local medical express or urgent care, and that's where they would go. And we didn't want to just tell them not to go or chastise them for it or say, "Hey, it's your fault that our TRIR is up." We wanted to respect their concerns and find a solution for them. And that solution then had a cascading impact on the number of injuries that we had that required medical treatment, and then on how our TRIR went down, and how our business units were able to then go to our business partners and say, you know, "Look how good of a job we do for you, let's do more together." It all sort of worked together.
Advice for Safety Professionals Reconsidering Their Safety Metrics
Convergence Training: Alright, so I guess we’ve kind of touched on this and maybe people will already see it, but from your perspective, what would be your advice to a safety manager who wants to start measuring safety or reconsider how he or she is measuring safety or reconsider their indicators?
Pam Walaski: Yeah…a couple of things. First of all, my primary suggestion is don’t throw the baby out with the bathwater, right? Lagging indicators are not bad. They’re not ineffective, they have their place. And in fact, we can’t throw them out anyway, even if we wanted to, because there are a number of different reasons why we have to track them and report them.
But also, educate yourself a little bit more on leading indicators. Pay very close attention to surrogation: your indicators should never be tied to an incentive. People should not be rewarded for achieving an indicator in the way that we typically do that, because all you'll do is create ways in which people let the indicator become the end-all-be-all, and do whatever they have to do to make sure they meet that indicator.
Also, make sure that the people who have to implement the indicator, and who are responsible for its success, are part of the discussion. You know, this is sort of where Carsten would suggest sitting down with the people who have their boots on the ground, at the sharp end of the stick, as they say, right? Ask them, "What would you say is a good way to measure how well we're doing things?" Just asking that question and using those suggestions is a way to do that.
Convergence Training: I think that's such a great point, and I really like that you said it. I'm a big proponent of human-centered design, and I've been studying design thinking a lot lately. I love the idea of including the people, and I think it's very common that the people whose performance arguably is being measured are never consulted. And I bet that's common not just in safety, but across workplace America: you come to work one day, and you see a measurement scheme that's been devised by your managers or somebody, and you had no opportunity to be consulted at all. It seems, if nothing else, incomplete, not the full story. So I'm glad you called that out.
There's a guy named Steven Shorrock who's been writing a lot about human-centered safety. I'd recommend that people check him out.
Pam Walaski: And the HBR article that we talked about, the Wells Fargo example, is a perfect example of that. I mean, that had nothing to do with safety, that just had to do with selling products, right? And look what happened to them. Now, your organization may not end up with millions and millions of dollars of fines and penalties associated with your incentives. But we know OSHA doesn’t like incentives being tied to injury reporting either, for all those same reasons. So definitely stay away from that.
I think the appreciation for balancing leading and lagging indicators, that sort of balanced scorecard approach, I think is very important. But also being careful not to let your indicators sit out there by themselves, and finding ways to tie them to your business performance, I think is a very important thing that folks need to really start to pay attention to.
It isn't as simple as getting rid of lagging indicators and moving to leading indicators; it really does take some time. One caution I've seen in some of what I've read: if you're going to start using leading indicators, take your time to do it right.
And I’ve seen some recommendations that starting with a leading indicator, any leading indicator, is the way to go. And I’m not sure that I agree with that. I think part of the problem is that we teach our managers that, “Okay, look, maybe lagging indicators weren’t so good, but now we’re going to switch to leading indicators and we’re just going to get started with any one.” And then the next year when we go back to them and say, “Listen, that leading indicator we used last year really wasn’t a very good one. We’re going to really try to do a good one this year,” all we do is confuse folks. There’s no value to starting with something, anything, just to get started.
My recommendation is find one, just one really good leading indicator, and try to make that your goal for 2020. And then if you find some success with it, begin to build on that, as opposed to just throwing some leading indicators out there. We collect a lot of data. And a lot of people would say that that data can be valuable leading indicators, but I’m not of that opinion.
Convergence Training: Yeah, and it sounds like you're saying, don't just grab any leading indicator for this year; take some time and find a good one, instead of just throwing spaghetti at the wall.
Pam Walaski: Right. Look at your business’s strategic plan, what are the business goals of your organization? And how can you find a way to tie what you’re doing as a leading indicator to what those business goals are? As opposed to just doing preventative activities for preventative activities’ sake, right.
Other Resources about Safety Metrics to Look At
Convergence Training: So that leads nicely...if we want people to give some good thought to this, I think this interview is a great resource, but obviously there's a big discussion going on, a lot of people being thoughtful about safety metrics and leading indicators, and there are a lot of good resources out there. Any resources you'd recommend that folks check out?
Pam Walaski: Yeah, we’ve talked about Carsten’s book, which I highly recommend. It’s a very slim book, it’s maybe 115 pages. And it’s got very brief opinions and ideas about things, and it’s got tons of resources. So as a start, it’s good. It gets you thinking, but there are so many other things in there that you can pull out, and look and read from books and articles and websites and all kinds of good stuff.
The Campbell Institute, which is part of the National Safety Council, sort of their institute of best practices among their member companies, has done a lot of work in this area. They launched a survey and developed a work group focused on leading indicators in 2013. And since then, they've published five white papers on that project, and they are excellent documents, all of them. They all have lots of really good information. Again, they do focus on leading indicators; there's not as much of a balance between leading and lagging indicators as I would like to see. But it really does take you down the path of finding good leading indicators. And the last paper, which just came out in 2019, has maybe seven or eight pages of potential leading indicators by topic, like worker participation, or management leadership, or some of those other things, which you can use to think about what would work.
OSHA also published a guide on leading indicators this past July, the 3rd or 4th or whatever. It's a 20-page document, and again, it's an excellent resource for seeing what OSHA is thinking about leading indicators. So OSHA is kind of getting in on this leading-edge (if you don't mind the pun) use of leading indicators. That's another good document.
And I think also just kind of paying attention to what’s out there with some of the other articles. I think there are some things that will help you kind of move the needle. There’s a lot of stuff being talked about. Next time you attend a conference, I bet there’s probably a couple of presentations on that topic.
Convergence Training: Right, cool. Any parting thoughts or anything you’d like to add in conclusion?
Pam Walaski: I think really just focus on balancing your indicators, tying them to business performance, and just educating yourself. And again, I go back to the reason that this all started for me, which was, I thought, I had figured it out. I thought I knew and, you know, in my attempts to just start to read a little bit more, I just kept finding more and more and more and digging deeper and deeper and deeper. And I feel like it’s really been a valuable exercise for me. It’s really helped me learn.
You know, as safety professionals, we have to always be evolving. The things we did 20 years ago may not work anymore, they may not be the best thing. And the only way you’re going to evolve is if you really take the time to learn and read and hear and listen. Your podcasts or your webinars here, whatever we’re doing, whatever you’re calling this, are great ways to do that. Because it’s a really great way to learn about sort of cutting-edge stuff. What are people thinking about? What are they talking about? You don’t have to agree with it. You don’t have to overhaul everything that you do. But you’re not going to evolve as a professional if you aren’t hearing what other people are talking about. Don’t insulate yourself.
Connecting with Pam & Chatting about Safety Metrics
Convergence Training: Yeah, I like that message about just keeping your radar open and being exposed to other viewpoints. I love your story about how you thought you knew how to do this. We’ve all had this experience, you think you know how to do it, so someone says do it, and if you’re honest, you realize you don’t know how to do it. So that’s a great story.
And I think what you're doing is a great example of something I'm a big proponent of, which is just working out loud, putting it out there, and kind of crowdsourcing the idea. So I admire that and appreciate it.
As you continue to move forward and solidify your thoughts and rethink and rethink, how can people keep up with you? How can they connect with you? And where might they be able to find you? Do you have any conference presentations coming up, or are there any publications in ASSP's Professional Safety Journal planned?
Pam Walaski: Yeah, yeah. Well, I'm on LinkedIn, and I always welcome conversations. I'm in a number of groups, probably some of the same groups that you're in, like Safety Differently, and I learn a lot from those.
I’m also on Twitter, it’s just @SafetyPam. And those are where I am social media-wise. I try to keep Facebook for personal social networking.
I am actually taking this topic and I’m deep into writing an article for Professional Safety Journal, and I’m hoping that they’ll approve it for publication. And we’ll see that maybe in the spring or early summer, or depending upon how long it takes for me to get it done.
I do have some stuff coming up conference-wise. I just signed up to do the BLR Safety Summit in Indianapolis, and I'm going to be talking about this particular subject there, as well as some risk assessment stuff. That's April 6 through 8, I believe. And I'll be at the ASSP Professional Development Conference in Orlando in June. Lots of folks will be there. I think you're coming, right?
Convergence Training: Yeah, I'll be there as well. I look forward to seeing you there.
Pam Walaski: Yep. So that’s what’s coming up on the horizon for me.
Convergence Training: Alright. Well, thanks, Pam. For everybody out there, I'm sure you enjoyed this. This was Pam Walaski. As a reminder, she's a Certified Safety Professional and the Senior Program Director at Specialty Technical Consultants, and also a faculty member in the Indiana University of Pennsylvania's Department of Safety Sciences. We encourage you to follow Pam. And Pam, thanks so much for your time. We really appreciate it.
Before you go, feel free to download the free guide below as well!
Free Download–Guide to Risk-Based Safety Management
Download this free guide to using risk management for your occupational safety and health management program.