Tuesday, July 28, 2009

Dr. David Michaels

The White House issued a press release a little while ago announcing the nomination of Dr. David Michaels to be the head of OSHA. Congratulations, Dr. Michaels, and welcome to a world where public health, science, politics, law and bureaucracy all collide into the big mangled pile of frustrations, forgotten dreams, lost ideas, fading idealism, and reality that is OSHA.

Friday, July 24, 2009

Penalties Across the Country

The OSHA Underground asked an interesting question today:
"WE wonder after 39 years, why don't we have standard penalties for routine violations?
An unguarded press would be worth the same in Maine to Florida."
I would like to hear what others think, but obviously I have to put in my two cents.

I have never agreed that all violations are created equal. If I inspect an injection molding shop that uses 300 different plastics and they have 298 of the 300 MSDSs, and the two missing MSDSs are for plastics they haven't used in three months, and they get the MSDSs faxed to them before I leave, should I even cite them? I don't think so; I'll note it in the case file and move on.

Take that example to the next step: if they are missing 30 MSDSs for rarely used plastics but have an effective HAZCOM program, should I cite them? I think so. The question becomes, at what level? I would propose OTS, no penalty.

Now take it to the extreme: the employer has only 15 of the necessary 300 MSDSs and has routinely thrown out MSDSs when they are received. That, to me, is a serious violation.

The obvious question to my example is where do I draw the line? I can't answer that; there are way too many possibilities. But it seems to me that issuing the same citation with the same penalty in all three of those situations is unfair.

To continue the punch press example, the severity and probability assessment might keep the penalties in Maine and Florida from being the same. If the site in Florida has a press that six employees share over two work shifts, the press is in continuous operation, and the employees have their hands near the point of operation, that seems to me to be high severity/greater probability. If the site in Maine has one operator who uses the press once per week and uses a wood dowel to hold the piece in place as the press actuates, that seems to me to be high severity/lesser probability.

Now what happens if one of the sites uses a press that has a downstroke force that is incapable of amputating a finger? Now the severity isn't high; it's medium or low.
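
Just to put it in concrete terms, here's a minimal sketch of how a severity/probability lookup might drive the penalty. The categories mirror the ones above, but the function name and the dollar figures are placeholders I made up for illustration, not anything from our actual penalty tables.

```python
# Illustrative only -- these gravity classes and dollar amounts are
# placeholders, not actual OSHA figures. The point is simply that the
# same unguarded press can be worth different penalties at different sites.

BASE_PENALTY = {
    ("high", "greater"):   5000,
    ("high", "lesser"):    2500,
    ("medium", "greater"): 2000,
    ("medium", "lesser"):  1500,
    ("low", "greater"):    1200,
    ("low", "lesser"):     1000,
}

def gravity_based_penalty(severity, probability):
    """Return a hypothetical unadjusted penalty for one violation."""
    return BASE_PENALTY[(severity, probability)]

# The Florida press: continuous operation, hands near the point of operation.
print(gravity_based_penalty("high", "greater"))  # 5000
# The Maine press: one operator, once a week, dowel keeps hands clear.
print(gravity_based_penalty("high", "lesser"))   # 2500
```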

I get what Kane is saying; we do have consistency issues, and even though those issues aren't as bad as they were 20 years ago, I think they have been getting worse over the last few years.

I think one way to improve our consistency is through OTI. Not just the courses, but the opportunity for CSHOs from around the country to get together and discuss how we do things. It seems to me our consistency began to fall about the time our training budgets fell. The fact that OTI seems to be going to distance learning has only accelerated the problem (distance learning is a whole new rant).

Consistency is important, but so is flexibility. So how do we achieve both?

Monday, July 20, 2009

Inspection Coding

Does anyone like coding? Does anyone care about coding? I used to assume the answer to both of these questions was an emphatic no. As it turns out, I was wrong.

I'll admit right up front that I'm horrible when it comes to coding. I've been through phases where I've used every code I could find for every inspection, even those codes I knew were out of date. I've also been through streaks where I've coded almost nothing. I've employed both of these extremes out of sheer annoyance with the system.

The other day I was talking to a friend of mine in the NO when somehow we got around to the topic of coding. I went into my normal rant against the NO and their damned coding. But instead of a sympathetic ear, I got an earful. It turns out that coding isn't so much an exercise in bureaucracy as it is in political and media defense.

We've had an ongoing discussion on this blog about inspection numbers. I still don't think that inspection numbers are a good measure of how successful we are (or are not), but the fact is that Congress, the White House, and the media understand inspection numbers and not much else. This makes inspection numbers a necessary evil that won't be going away any time soon.

My friend pointed out that bad inspection data only pisses people off, and when Congress, the White House, or the media are pissed off, well, it all hits the fan. And after it hits the fan, we all know it then rolls downhill. How? In the form of another NEP, of course.

What does this mean from a practical standpoint? It means I'm going to be more diligent when it comes to coding. I'm not going to spend hours researching coding, but I am going to make an effort.

If we all do this, then just maybe a little less will roll downhill and land on me.

Wednesday, July 15, 2009

Success - Follow-up

I just received this comment on an earlier post that I wanted to bring forward:
Wayne Gray and John Scholz have published research indicating that OSHA inspections do make a difference. You can check out their paper "Do OSHA Inspections Reduce Injuries? A Panel Analysis." Also "Inside the Black Box," co-authored by Gray and John Mendeloff, which was published in the March 2005 issue of Law and Policy.
The first paper they mentioned, "Do OSHA Inspections Reduce Injuries? A Panel Analysis," was published in 1991. Gray and Mendeloff followed that up with the paper, "The Declining Effects of OSHA Inspections on Manufacturing Injuries: 1979 to 1998," published in 2002, which shows a decline in OSHA effectiveness. I can't access the second reference, "Inside the Black Box: How do OSHA Inspections Lead to Reductions in Workplace Injuries?" so I'm not sure what it says.

It's interesting that these papers come from the National Bureau of Economic Research, not from the S&H field, although at least some of the funding for the research was provided by NIOSH.

Monday, July 13, 2009

OSHRC

I was perusing the Whitehouse.gov list of nominations the other day, just to see where the department stood in terms of unfilled appointments, when I noticed that Thomasina Venese Rogers was named Chair of the OSHRC. I don't follow the OSHRC closely, but Ms. Rogers has been the Chair before and has been on the Commission since 1998, having been appointed by both President Clinton and President Bush. The site All Gov has a short bio of her.

It seems that she has been on our side in two big cases, both of which we lost. The first was Ho Ho Ho Express, Inc., a case that is somewhat infamous within OSHA for how callous Mr. Ho was towards his migrant employees. The OSHRC ruled against us and eliminated part of the egregious citations, and the 5th Circuit Court of Appeals upheld the Commission's decision. Since then, we have apparently begun rulemaking to address those rulings.

The second case was Secretary of Labor v. Summit Contractors, Inc., which was a case involving multiemployer worksites, specifically our ability to cite a general contractor when they didn't have any employees exposed. The OSHRC ruled against us and vacated the citations, but the 8th Circuit Court of Appeals overturned that decision earlier this year.

I'm mentioning Ms. Rogers because she dissented in both of those cases, and I think they both demonstrate the impact that ALJs and the OSHRC have on our jobs. Do you remember the good old days of ergonomics, before the Beverly decision? For those unfamiliar with Beverly, the judge in that case basically ruled that back injuries aren't serious injuries, so OSHA couldn't cite an employer for ergonomic issues under 5(a)(1). Even though the OSHRC overturned the judge's ruling, ergonomic enforcement has never been the same. The same could have happened for multiemployer worksites.

My point is that sometimes factors outside of OSHA can have more impact on our enforcement activities than the Secretary or Assistant Secretary. I think that appointments like Ms. Rogers's are every bit as important as that of the Assistant Secretary, but it's something that gets overlooked by most of us at OSHA, and most of those in the S&H field in general.

Let us hope that future appointments to the OSHRC will help employees, not hurt them.

Saturday, July 11, 2009

Evaluations, Yet Again.

A couple more comments on evaluations to follow up on. First:
"We keep reading that the law doesn't allow CSHOs to be evaluated based on the number of inspections. Where is that in the law? When you quote it to me, read it closely. Doesn't it say RESULTS of inspections? How does that prohibit being evaluated based on productivity -- as opposed to the percent serious, or the size of penalties, etc., etc., etc.?"
OK, here is the text of "Public Law 105-198 - To amend the Occupational Safety and Health Act of 1970:"
Section 8 (h) The Secretary shall not use the results of enforcement activities, such as the number of citations issued or penalties assessed, to evaluate employees directly involved in enforcement activities under this Act or to impose quotas or goals with regard to the results of such activities.
It took a little searching, but here's the Congressional equivalent of a preamble for that law:
H.R. 2877 would conform the law to current practice. It would prohibit the Secretary of Labor from using the results of enforcement activities, such as the number of citations issued or penalties assessed, to evaluate employees directly involved in enforcement under the Occupational Safety and Health Act. It would also prohibit the Secretary from imposing quotas or goals on employees that are based on the results of enforcement activities. The Occupational Safety and Health Administration discontinued using such performance measures and incentives in 1994.
I'm guessing here, and it would be nice if a lawyer type could confirm this, but one of the results of an enforcement activity is an inspection. It's pretty clear that the intent of the law is that we not use inspection numbers as part of our evaluation, even if the language isn't as clear as it could be.

Next, in response to a comment by RT, another commenter left this:
I think that JT's method is used by most supervisors. It's sort of how the performance standards are written, or at least what the supervisor has to put in the narrative to justify any rating other than 'Meets.'

As for the NCFLL wanting objective performance standards, that's a lark. Joe Dear came closest to establishing objective performance standards (i.e. numeric goals) and got thoroughly trounced for it. The NCFLL was the key reason for Kennedy's support of the amendment that prohibits using the number of violations per inspection in the evaluation of CSHOs. That, combined with the number of inspections conducted, may not have been the most perfect evaluation criteria, but at least it was objective.
To which RT responded:
@ Anon: But numbers aren't necessarily objective. Is a CSHO doing a ton of inspections actually effective, writing good citations and focusing on things to get injury rates down, or just a glorified traffic cop writing tickets for the sake of numbers/generating revenue? I think Kennedy's amendment was concerned with government bureaucrats turning CSHOs into the latter.

@Abel: If the problem with performance review is some ADs will play favorites and/or discriminate, then perhaps the solution is to replace the ADs? :) Easier said than done, I know.

Question about the NCFLL - what objective performance elements have they succeeded in implementing? Or proposed without success?
First, sorry RT, it was my post that led the commenter to use JT, and I'm not sure how I got JT stuck in my head.

Second, I agree with the first commenter that there is enough wiggle room in the elements for an AD to adjust the evaluation based on the kind of work a CSHO does, but that also gives ADs who want to help or hurt certain CSHOs the same opportunity.

Third, as for what the NCFLL has proposed or been successful with, I'm not sure; I've never been involved in union activities at the national level and I haven't bothered to keep up with them (I have too much to do already).

Fourth, changing some ADs might not be a bad idea, but remember "the devil you know..."

Yet another commenter added this:
The problem with using lapse time as a metric is simple. All CSHOs do not do the same type of inspections. In every office, a few of the more qualified staff do more complex inspections. Managers know who these people are and assign them the "beasts." We all know or should at least suspect this.

Emphasizing lapse time without trying to equilibrate for the variable difficulty of inspections creates a disincentive to do complex work and initiates a rush for the bottom. Instead of wanting the difficult cases and being recognized for the effort, folks begin to clamor over the easy stuff.

It's already happening at the end of each FY. That's why any sane analysis of the inspection numbers should tell you that there are bushels of quick and easy inspections/citations being pursued and turned in quickly to lower the lapse rate at the end of the FY.

Incentives work.

IMO, the answer is to create an algorithm that gives different strengths to different types of inspections. Fat/Cats, Willful, Jumbos, full-shift monitoring, etcetera, could be scored to reflect the increased difficulty level. It's not rocket science; you wouldn't need mathematicians to do it. Plus, you could incentivize staff to do good, complex work. You can argue over the weighting system, but at least you'd be arguing about real issues instead of assuming something that you already know to be false. All inspections are not created equal.
I certainly agree that there are issues with using lapse time, but if we used an algorithm to weight inspections, then it seems to me we're crossing the line that Public Law 105-198 set. Using inspection results to evaluate a CSHO means I now have an incentive to increase the penalties to an employer to get a significant case or to issue Willful violations. And if I'm going to get credit for full-shift sampling, then that's what I'm going to do every time. I'm going to set up the pumps even if the chemical isn't on the complaint and I know there are no exposures, because now I have an incentive.
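
For what it's worth, here's a minimal sketch of the kind of weighting scheme the commenter seems to be describing. The categories and weights are invented for illustration only; the point is just to show how two CSHOs with the same raw inspection count could end up with very different credit, and why those weights create exactly the incentives I just described.

```python
# A sketch of the weighting idea from the comment above. The categories and
# weights are invented for illustration and don't reflect any actual policy.

INSPECTION_WEIGHTS = {
    "fat_cat": 3.0,              # fatality/catastrophe investigation
    "significant_case": 3.5,
    "willful_case": 2.5,
    "full_shift_sampling": 2.0,
    "complaint": 1.0,
    "programmed_safety": 1.0,
}

def weighted_credit(inspections):
    """Total weighted 'credit' for a CSHO's inspections for the year.

    `inspections` is a list of category strings; unknown categories count
    as 1.0 so nothing silently disappears.
    """
    return sum(INSPECTION_WEIGHTS.get(kind, 1.0) for kind in inspections)

# Same raw count, very different workloads:
print(weighted_credit(["complaint"] * 40))                    # 40.0
print(weighted_credit(["fat_cat", "significant_case"] * 20))  # 130.0
```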

You can see how much discussion lapse time and the number of inspections have generated; imagine if this were a complicated issue.

Wednesday, July 8, 2009

Evaluations - Lapse Time

I got this comment on CSHO evaluations that I want to follow up on:
"I do not understood this bizzare obsession with 'lapse time.' I'm not sure exactly what lapse time indicates. It's been said that low lapse time indicates quick abatement. This assumes that most employers do not act in good faith to correct a violation as soon as the CSHO brings it to their attention. Typically this is the exception to the rule. In my experience most employers begin trying to fix things as soon as they're brought to their attention by a compliance officer.

The ironic thing with lapse time is that inspections involving those bad actors who can be counted on not to abate a violation until absolutely forced to are the source of most significant cases. These cases intrinsically have the longest lapse time, most approaching the full six months, due to regional and national office review of the cases.

The whole lapse time thing just seems like a red herring. And it does encourage some, especially hygienists, to take short cuts. Sampling takes at least three weeks to receive results. Don't sample and you can shorten your lapse time. I'd bet that offices with average lapse times under 15 days never do air monitoring.

The other thing that significantly increases lapse time is waiting on a process likely to cause overexposures to be active, such as an employer that sandblasts once a month or runs a process with chromium paints once a quarter. Most IHs now simply document 'process not active' in their case file, throw out some safety or haz com violations, and move on to the next inspection. Keeps their lapse time low. I don't think this is doing much to protect the health of potentially exposed employees."
I agree with much of what the commenter said, although I do object to the statement "Most IHs now simply document 'process not active' in their case file, throw out some safety or haz com violations, and move on to the next inspection." I don't do that and most of the IHs I know don't do it either.

I think there are two reasons we're obsessed with lapse time: it's objective and measurable, and employers hate having the ax hanging over their heads while they wait for the citations to show up.

How many measurable elements do we have in our evaluations anymore? There was a time when we got extra credit for presentations and outreach-type things, but the CAS positions have taken most of that away. We're not supposed to be evaluated on the number of inspections we do. So what's left? Violations per inspection? That doesn't make sense, because then we would be on what would amount to a quota system, which brings into question our objectivity during an inspection and isn't fair to employers. What objective measures are left?

Monday, July 6, 2009

Evals - Follow-up

I received a comment/question from JT that I want to discuss.
A very difficult question. I personally would prefer an objective "totality of job performance" standard, allowing evaluations to include all the tangibles and intangibles that don't necessarily show up in the numbers. Who's going out there and taking some initiative? Who's sitting around just collecting a check? Who does their homework on citations and writes good ones time after time? Who writes citations on items the ALJs and OSHRC have routinely tossed out for over 20 years?

Can that be done? Or is it impossible to get objectivity on such things?

I like lapse time, but I agree that it could very easily create an incentive to cut corners. Also very easy for ADs to overlook difficult or complex cases.
I spent part of my weekend thinking about this, which I hope is a reflection on my dedication to the job and not on how pathetic my life may seem.

I think my answer is that it's a great idea, but it can't work for two reasons:
  1. Not all ADs will be fair. I think most would be fair, but there will always be a few who will just plain discriminate based on race or sex, or who will favor their friends. You can't get around it; it's out there.

  2. The union won't allow it. Why? See number one above. The NCFLL has fought for a long time to make our performance elements objective instead of subjective. They want specific goals with specific ways to meet or exceed those goals. I certainly understand that position, and it's hard to argue against it; unfortunately, it does allow for deadwood.
But also think about this: JT noted that it's easy for ADs to overlook difficult or complex cases, so how can we expect them to remember all of the intangibles?
I don't know, maybe the answer is to have two different evaluations, one objective and one subjective, and allow each area office (i.e., the CSHOs) to decide at the start of each evaluation year which one they want to be evaluated on. That seems like a logistically stupid idea to me, but I haven't been able to come up with anything better.

Thursday, July 2, 2009

Possible Explanations for Drop in I/L Rates

Here's the list of possible factors that have influenced the drop in Injury/Illness rates, in no particular order. If you have any ideas, leave a comment or send me an e-mail and I'll add it to the list. Thanks to the people who have already sent me e-mails with additional ideas.
  1. The shift of jobs to other industries (note: when a manufacturer downsizes, it is usually the least experienced employees who are released, and we've all seen the studies on lack of experience versus injury rates).
  2. Incentive/disincentive programs.
  3. Lack of significant recordkeeping cases.
  4. Loss of compliance staff.
  5. Increase in the number of safety and health professionals.
  6. Internet access (I can tell you that many employers absolutely do not like to call OSHA for information, but they might access the website, which, if the rumor I heard was true, gets almost 1,000,000 hits per month).
  7. The first suggestion: "The growth of cooperative programs. I know many on the compliance side don't sometimes like these programs. But from an industry perspective, the programs are great. Take a look at the growth of the Voluntary Protection Program."
  8. Pressure on safety departments for reductions in injuries/illnesses.
  9. Safety departments with thin resources, administering work comp, liability insurance, environmental, training, and other outside duties.
  10. Resume padding.
  11. Lack of follow-up in high-injury industries to determine recordability.
  12. Lack of injury reporting oversight: the risk of someone finding out about under-reporting when reporting is voluntary is minimal, making under-reporting a slightly more attractive alternative.
  13. Litigious society: employers getting pounded by lawsuits are looking to reduce those related costs.
  14. Employers finally seeing the value and moral obligation of providing safe and healthful workplaces.
  15. In the West, the diminishing base of heavy industry seems like an obvious factor; the same goes for traditionally hazardous industries such as logging and fisheries. Much of the aviation industry has disappeared from CA, and chemical and manufacturing work that is considered "polluting" has moved to China or elsewhere overseas.
I've moved this list from the sidebar to a blog post so I could recover the space.

My hope is that someone in academia will start to evaluate these different factors and not just assume that all businesses lie and that we don't know how to look at data.

CSHO Evaluations

The last post brought out a couple more comments; the first commenter asked:
How should CSHOs be evaluated?
To which a second commenter responded:
On numbers and lapse time. At least partially. If we set goals for them to achieve (like we do for the ADs and AADs) at least they'll have a starting place. And that will eliminate the low performers/deadwood on staff.
Let's put aside the fact that we can't legally be evaluated on the number of inspections we conduct. I personally would be willing to be evaluated on inspection numbers and lapse time, because I trust my AD to make allowances for fatalities or sig cases. But I know CSHOs in other offices who don't trust their AD to be fair. The problem is that there are a few ADs who seem to be incapable of objectivity, so how do we develop an evaluation system that's fair in those situations? Does anyone really think that the CSHOs who worked on Imperial Sugar or Milk Specialties Company had a chance in hell of doing 100 inspections for the year? Now ask yourself this: if you had to do one of those inspections, would your AD make allowances and reduce the number of inspections you had to do for the year?

One suggestion from Kane at the OSHA Underground is to use a panel for our evaluation. I haven't seen a detailed enough proposal to say whether this is a good or bad idea, but I'm leery. The panel would almost have to consist of CSHOs from within the AO because outsiders aren't familiar enough with what we accomplish. The biggest obstacle I see is that my performance bonus is based on how many elements I exceed, the same as the potential panel members. Because we're competing for the same money, they have a vested interest in my having a lower evaluation than they get. Now, instead of being pissed at management, we're pissed at each other. If Kane ever expands on his idea I'll certainly give it a new look, but color me skeptical.

Where does that leave me? I'm not sure. Lapse time should certainly be part of the evaluation, again, as long as there are provisions for unusually long cases. Maybe someone could develop a formula that adjusts acceptable lapse time based on the number and types of violations. You get 8 hours for an IC complaint, but 120 hours if you have 4 Willful and 10 serious violations.
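
Something like this, maybe. This is a toy version of that formula; the base allowances and per-violation adders are numbers I picked so the results match the example above, not anything out of a directive.

```python
# Hypothetical sketch of the "adjusted lapse time" formula floated above.
# The base allowances and per-violation adders are invented so the numbers
# line up with the post's example (8 for an IC complaint, 120 for a case
# with 4 willful and 10 serious violations); they aren't anything OSHA uses.

BASE_ALLOWANCE = {
    "ic_complaint": 8,
    "programmed": 30,
    "fatality_catastrophe": 60,
}

ADDER = {"willful": 15, "serious": 3, "other": 1}

def allowed_lapse_time(inspection_type, violations):
    """Allowed lapse time grows with the number and type of violations cited."""
    base = BASE_ALLOWANCE.get(inspection_type, 30)
    extra = sum(ADDER.get(kind, 0) * count for kind, count in violations.items())
    return base + extra

print(allowed_lapse_time("ic_complaint", {}))                           # 8
print(allowed_lapse_time("programmed", {"willful": 4, "serious": 10}))  # 120
```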

Let's open this up a little: what do the other CSHOs out there think? How should we be evaluated?

Wednesday, July 1, 2009

Polls and Questions

The last poll closed with a curious, although not totally surprising, result. I think most people in OSHA believe that we, as an agency, fare better under a Democratic administration than we do under a Republican administration (please, that's not a political statement, it's just an opinion of the perception of a difference). The top two vote getters were Scannell and Henshaw, both appointed under Republican administrations, which is to me the curious part. What's not surprising, however, is that they were both safety and health professionals first, administrators second. Do you suppose the current administration knows this? By the way, the new poll will be up soon.

Here are a couple of questions someone left after my last post that I want to answer:
"What exactly is the evaluation system for CSHOs nowadays? Does a CSHO get dinged at review time if their citations are constantly contested by employers? Or SOL tells them their citations are no good? Or vacated by the Commission?"
There are several answers to each question. Like so much in OSHA, there's the way it is supposed to be and the way it is, and the way it is isn't necessarily the same from office to office or region to region.

Does a CSHO get dinged at review time if their citations are constantly contested by employers? We're not supposed to, and in my AO we don't. The reason we're not supposed to get dinged is that some of us do mostly complex inspections, which are more likely to be contested. But there are also companies out there that, as a matter of policy, always contest OSHA citations, so getting dinged for that isn't right. Again, that doesn't mean it doesn't happen.

Does a CSHO get dinged if SOL tells them their citations are no good? We're not supposed to, and in my AO we don't. I think this one will depend on each regional SOL office. There are RSOLs where the attorneys won't take any case to court, no matter what, and I'm guessing that in those regions the ADs mostly wouldn't hold a CSHO responsible for an RSOL weakness.

Does a CSHO get dinged if citations are vacated by the Commission? Again, we're not supposed to, and in my AO we don't. My guess is that it doesn't happen, for two reasons:

First, very few cases actually make it to the Commission. More cases make it to an ALJ, but even those cases aren't the norm.

Second, the cases may not be heard for a year or more from the date the inspection is finished, which means the CSHO is in a totally different evaluation period.