Thursday, July 2, 2009

CSHO Evaluations

The last post brought out a couple more comments, the first commenter asked:
How should CSHOs be evaluated?
To which a second commenter responded:
On numbers and lapse time. At least partially. If we set goals for them to achieve (like we do for the ADs and AADs) at least they'll have a starting place. And that will eliminate the low performers/deadwood on staff.
Let's put aside the fact that we can't legally be evaluated on the number of inspections we conduct. I personally would be willing to be evaluated on inspection numbers and lapse time, because I trust my AD to make allowances for fatalities or sig cases. But I know CSHOs in other offices who don't trust their AD to be fair. The problem is that there are a few ADs who seem to be incapable of objectivity, so how do we develop an evaluation system that's fair in those situations? Does anyone really think that the CSHOs who worked on Imperial Sugar or Milk Specialties Company had a chance in hell of doing 100 inspections for the year? Now ask yourself this: if you had to do one of those inspections, would your AD make allowances and reduce the number of inspections you had to do for the year?

One suggestion from Kane at the OSHA Underground is to use a panel for our evaluation. I haven't seen a detailed enough proposal to say whether it's a good or bad idea, but I'm leery. The panel would almost have to consist of CSHOs from within the AO, because outsiders aren't familiar enough with what we accomplish. The biggest obstacle I see is that my performance bonus is based on how many elements I exceed, the same as the potential panel members'. Because we're competing for the same money, they have a vested interest in my getting a lower evaluation than they do. Now, instead of being pissed at management, we're pissed at each other. If Kane ever expands on his idea I'll certainly give it a new look, but color me skeptical.

Where does that leave me? I'm not sure. Lapse time should certainly be part of the evaluation, but again, only as long as there are provisions for unusually long cases. Maybe someone could develop a formula that adjusts acceptable lapse time based on the number and types of violations: you get 8 hours for an IC complaint, but 120 hours if you have 4 Willful and 10 Serious violations.
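To make the idea concrete, here's a minimal sketch of what such a formula could look like. This is purely illustrative: the function name and the per-violation weights are made up, chosen only so the two numbers from the example above (8 hours for a simple IC complaint, 120 hours for 4 Willful plus 10 Serious) fall out.

```python
def allowed_lapse_hours(willful: int = 0, serious: int = 0, base: float = 8.0) -> float:
    """Return an allowed lapse time in hours for a case.

    Starts from a base allowance (a simple IC complaint) and adds time
    per violation, weighted by type. The weights are hypothetical
    placeholders, not anything OSHA actually uses.
    """
    WILLFUL_HOURS = 18.0   # hypothetical allowance per Willful violation
    SERIOUS_HOURS = 4.0    # hypothetical allowance per Serious violation
    return base + WILLFUL_HOURS * willful + SERIOUS_HOURS * serious

print(allowed_lapse_hours())                       # simple IC complaint -> 8.0
print(allowed_lapse_hours(willful=4, serious=10))  # complex case -> 120.0
```

The real argument would be over the weights, but the mechanics are this simple.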

Let's open this up a little: what do the other CSHOs out there think? How should we be evaluated?


  1. Doesn't lapse time encourage people to cut short the inspection?

  2. No, I don't think it will. Lapse time is the time from when you open the inspection until you send the citations out the door.

    The majority of inspections only involve one or two days on site; most of the lapse time is time the case file sits on your desk. The problem is that some cases can be very complex and require a lot of research, or you may be dependent on what other people have to do, or it can just be difficult to finish up the case with everything else going on (meetings, training, duty officer, leave, etc.). There are a lot of different reasons.

    I remember a confined space issue I had umpteen years ago that we asked the RO and NO to weigh in on, and it took months for them to respond. In fairness, it was a very difficult question, but that delay still counted against my lapse time.

    Fortunately for me, my AD didn't hold it against me. But it does demonstrate why I think that if lapse time is used, it has to have built-in flexibility.

  3. A very difficult question. I personally would prefer an objective "totality of job performance" standard, allowing evaluations to include all the tangibles and intangibles that don't necessarily show up in the numbers. Who's going out there and taking some initiative? Who's sitting around just collecting a check? Who does their homework on citations and writes good ones time after time? Who writes citations on items the ALJs and OSHRC have routinely tossed out for over 20 years?

    Can that be done? Or is it impossible to get objectivity on such things?

    I like lapse time, but I agree that it could very easily create an incentive to cut corners. Also very easy for ADs to overlook difficult or complex cases.

  4. I do not understand this bizarre obsession with 'lapse time.' I'm not sure exactly what lapse time indicates. It's been said that low lapse time indicates quick abatement, but that assumes most employers do not act in good faith to correct a violation as soon as the CSHO brings it to their attention. In my experience, that's the exception, not the rule: most employers begin trying to fix things as soon as a compliance officer points them out.

    The ironic thing about lapse time is that the inspections involving the bad actors who can be counted on not to abate a violation until absolutely forced to are the source of most significant cases. Those cases intrinsically have the longest lapse times, most approaching the full six months, due to regional and national office review.

    The whole lapse time thing just seems like a red herring. And it does encourage some, especially hygienists, to take shortcuts. Sample results take at least three weeks to come back, so don't sample and you can shorten your lapse time. I'd bet that offices with average lapse times under 15 days never do air monitoring.

    The other thing that significantly increases lapse time is waiting for a process likely to cause overexposures to become active, such as an employer that sandblasts once a month or runs a process with chromium paints once a quarter. Most IHs now simply document 'process not active' in the case file, throw out some safety or haz com violations, and move on to the next inspection. It keeps their lapse time low, but I don't think it does much to protect the health of potentially exposed employees.

  5. We keep reading that the law doesn't allow CSHOs to be evaluated based on the number of inspections. Where is that in the law? When you quote it to me, read it closely. Doesn't it say RESULTS of inspections? How does that prohibit being evaluated based on productivity -- as opposed to the percent serious, or the size of penalties, etc., etc., etc.?

  6. This "Lapse Time" statistic is brought to you by Monty Python's Bureau of Silly Statistics.

    The problem with using lapse time as a metric is simple: not all CSHOs do the same type of inspections. In every office, a few of the more qualified staff do the more complex inspections. Managers know who these people are and assign them the "beasts." We all know, or should at least suspect, this.

    Emphasizing lapse time without trying to adjust for the variable difficulty of inspections creates a disincentive to do complex work and starts a race to the bottom. Instead of wanting the difficult cases and being recognized for the effort, folks begin to clamor for the easy stuff.

    It's already happening at the end of each FY. Any sane analysis of the inspection numbers should tell you that there are bushels of quick and easy inspections and citations being pursued and turned in at the end of the FY to lower the office's average lapse time.

    Incentives work.

    IMO, the answer is to create an algorithm that gives different weights to different types of inspections. Fat/Cats, Willfuls, Jumbos, full-shift monitoring, etc., could be scored to reflect the increased difficulty. It's not rocket science; you wouldn't need mathematicians to do it. Plus, you could incentivize staff to do good, complex work. You can argue over the weighting system, but at least you'd be arguing about real issues instead of assuming something you already know to be false: all inspections are not created equal.
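A sketch of the kind of weighting the commenter describes might look like the following. Every name and number here is hypothetical, invented only to show the mechanics; the actual categories and weights would be exactly the thing to argue about.

```python
# Hypothetical weights per inspection type; picking the real values is
# the hard (and political) part, not the arithmetic.
WEIGHTS = {
    "routine": 1.0,
    "fatality": 4.0,            # Fat/Cat
    "willful": 3.0,
    "jumbo": 5.0,
    "full_shift_monitoring": 2.5,
}

def weighted_score(inspections):
    """Sum the weights of a CSHO's inspections for the FY.

    `inspections` is a list of type labels; unknown labels count as routine.
    """
    return sum(WEIGHTS.get(kind, WEIGHTS["routine"]) for kind in inspections)

# Ten quick-and-easy inspections vs. three hard ones:
print(weighted_score(["routine"] * 10))                  # -> 10.0
print(weighted_score(["fatality", "jumbo", "willful"]))  # -> 12.0
```

Under a raw count, the ten easy inspections win 10 to 3; under even a crude weighting, three beasts are worth more than they'd otherwise appear, which is the commenter's point.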