What’s driving your program performance goals?

As I think about it, I probably should have raised this topic a little earlier in the year, when the world was full of sprightly talk of annual evaluations of physical environment management programs, but I guess there’s no statute of limitations on discussing goals, or indeed any sort of moratorium on when one would/could/should look back at established goals. Depending on how your program is structured, there may be instances in which programmatic goals are reviewed only when it’s time for the annual evaluation, but I do think there’s value in keeping track of how things are going over the course of the year. One could make the point that if a review of goals can be left to a once-yearly pursuit, the goals probably aren’t as powerful or meaningful as they could be. Having goals that focus on what in your program needs improvement is perhaps the most useful thing you can do.

As a going concern, I think it makes a great deal of sense to look at the management of physical environment program goals in light of (casting back a little bit in time) the CMS memo from 2023, which focuses attention on the oversight of the QAPI program by hospital leadership, up to the board level. This probably has much to do with the various accreditation organizations (AOs) and how they administer the survey process. Up to that point (that being March 2023), any time there was a condition-level (or more severe) survey finding, there would also be a condition-level finding under leadership/governing body. I guess CMS and the AOs determined that this “strategy” would be enough to force governing bodies and hospital leadership to take this stuff seriously. Interestingly enough, the number of condition-level findings has not declined and has, in fact, increased. This makes the powers that be very unhappy, so they are (sort of) doubling down on the QAPI process as a means of identifying why hospitals are still receiving condition-level survey results.

An excellent starting point for creating objectives is to review the findings from your most recent survey, particularly any findings that relate to staff knowledge (things like the use of fire extinguishers and/or the process for shutting off the medical gases in the event of a fire). As can (and likely will, before too long) happen during a regulatory survey, staff knowledge can become the focus of questions during fire drills and other rounding activities, and you can use that data to demonstrate whether the post-survey education process was, and is, effective.

As a closing thought (or two), it might be worth looking at your surgical fire prevention program for some objectives. It’s not just the main OR; it’s the surgery center, it’s the cath lab; there are potentially a lot of environments to consider. Also, don’t forget about the second most frequent location for fires in healthcare (after surgery): the kitchen. Never underestimate the complexities of safety in the kitchen.

As a final (really!) thought, I see a lot of organizations use “widgets” for their performance indicators: 100% compliance with testing, compliance with the required number of fire drills, etc. In my experience, unless there are problem areas that warrant focused attention, that’s just patting yourself on the back, which generally doesn’t yield improvement opportunities. You want the goals to focus on trouble spots or legitimate opportunities. For example, if you use computer-based learning for your fire response education, pick some specific elements and focus on making sure that folks can actually apply the material, not just recognize the correct answer out of the multiple choices. I guess I’m a little hung up on staff education and knowledge, but I see a lot of safety programs getting the short end of the stick when it comes to orientation time, and it just doesn’t sit right with me. I remember when this started to slip; I understand that there’s lots of practical information (e.g., benefits, payroll) that needs to be communicated during the orientation process. I guess I’m biased toward placing safety at the top of the hierarchy, but I have a hard time ceding the top position to anything else.

About the Author: Steve MacArthur is a safety consultant with Chartis Clinical Quality Solutions (formerly known as The Greeley Company) in Danvers, Mass. He brings more than 30 years of healthcare management and consulting experience to his work with hospitals, physician offices, and ambulatory care facilities across the country. He is the author of HCPro's Hospital Safety Director's Handbook and is contributing editor for Healthcare Safety Leader. Contact Steve at stevemacsafetyspace@gmail.com.