Best Places to Work in Gov’t…Results are in!

I’m a long-time member of GovLoop, and it’s always a pleasure to post the Best Places to Work results that my organization, the Partnership for Public Service, compiles every year. Right now you can find a great summary of the study results here. There’s also a nice Washington Post article this morning by reporter and fellow GovLooper Ed O’Keefe.

Just a couple of important things to mention that sometimes get overlooked in all the talk of which agencies rank where, and which have gone up or dropped over the past year. The first is that none of this would be possible without OPM’s annual effort to gather and compile detailed survey data from employees across government.

The second is that the results matter because “people” make better performance and outcomes possible. The basic idea is that by focusing on people issues in government, agencies can improve mission performance and service delivery to the public. Good data helps, but we need to be clear that it’s not about satisfaction or rankings for their own sake. Otherwise it’s too easy to say, “so what if feds aren’t satisfied with their jobs or motivated by their leaders…they’re lucky to have jobs.” Ultimately, the rankings just add motivation to do the hard work needed to make government better.

Since the study is about “doing” things and inspiring action, here’s my question:

How has your agency used the results, and where is it making a real difference? I don’t want this to be a leading question, so if it’s not happening, what would help?


Darrell Hamilton

It would help if they broke the list down into smaller groups. There is not much to be said when you throw everyone into the huge “Department of Defense” bucket and then ask for specific measures to take advantage of this data.

Terrence (Terry) Hill

I agree with Darrell. Not every agency is a “large” agency. And since there is no accountability for these rankings, they are mainly a novelty; leadership can easily ignore them with no real consequences. It’s a shame, because I do believe in the accuracy of these ratings.

Mark Hammer

I’ve been following the rankings of government agencies across jurisdictions for a dozen years, and I have to say there are a number of measurement issues that are often overlooked and that complicate the picture. There are many confounds that can create unnecessary concern or unwarranted comfort, and those tasked with making their workplace a great place to work need the clearest, most valid picture possible if they are going to achieve that goal…which I think is what they want to do.

1) Size matters: All organizations are made up of work units, divisions, branches, whatever. They all have their own bosses, unique challenges, business lines, geographical locations, and so on. Some are terrific to work in; some may have a toxic manager. But each constitutes a kind of micro-climate. When you pool them all together to ask how agency X is doing, everything regresses toward the mean and comes out with a “gentleman’s B” that passes muster but does not really reflect the corners of the agency that are in serious trouble, or that follow practices others ought to be emulating.

Small agencies – which can often be the size of an individual division or work unit within a larger agency – are often found among the highest- and lowest-rated organizations. We may think of them as separate agencies, but they are really no different from an individual division of a much larger agency, where one very good or very bad manager can have an impact on the whole place. The trouble is, they can’t “hide out” among the better work units and divisions of a larger agency that blend together to produce an acceptable average. You can get a sense of this by comparing the spread of scores across the top 30 large and top 30 small agencies: the spread is much bigger for the small places, at both ends of the scale.
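
As a minimal sketch of that size effect, the following simulation uses invented unit-level scores (not actual Best Places to Work data) to show how pooling many work units pulls a large agency’s average toward the middle, while single-unit agencies land at both extremes:

```python
import random
import statistics

random.seed(42)

# Hypothetical unit-level engagement scores (0-100); invented numbers,
# not survey data. Every agency is a collection of work-unit
# "micro-climates" with their own managers and conditions.
def unit_score():
    return min(100, max(0, random.gauss(65, 12)))

# A "large" agency pools 50 units into one average;
# a "small" agency is effectively a single unit.
large = [statistics.mean(unit_score() for _ in range(50)) for _ in range(30)]
small = [unit_score() for _ in range(30)]

print(f"Large-agency range: {min(large):.0f} to {max(large):.0f}")
print(f"Small-agency range: {min(small):.0f} to {max(small):.0f}")
# Pooling shrinks the spread toward the grand mean (the "gentleman's B");
# single-unit agencies show up at both ends of the scale.
```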

2) We’re still not really sure how much of a difference matters: I suspect that 92 is better than 10. But I don’t know if the difference between 95 and 85 is equal to the difference between 85 and 75, or if 95 represents something definitively better than 75. It’s not just a question of whether the scale used is a true ratio scale or merely an ordinal scale. We often can’t tell where the boundaries are between outstanding, holding their own, and in trouble. Maybe the boundary between those first two is around 82; maybe differences are moot until you get down around 63. Nobody really knows. Now, if anyone found out that around, say, 61, your turnover rate suddenly spikes, as does your absenteeism rate, you’d have something to talk about. But for the time being, all anyone is really sure of is that 82 probably isn’t “worse” than 77.

I’m not scoffing at the scores. But the scores desperately need external validation, and a clearer sense of scale in order to be appropriately actionable.
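
One way to get at the kind of external validation Mark describes is to line the scores up against an outcome such as turnover and look for where the outcome changes sharply. Here is a rough sketch using entirely hypothetical score/turnover pairs (a real analysis would need actual OPM outcome data and a proper changepoint test):

```python
# Hypothetical (engagement score, annual turnover %) pairs, one per agency.
# Illustrative numbers only, not real rankings or OPM figures.
data = sorted([(88, 4.1), (84, 4.8), (79, 5.0), (74, 5.3), (71, 5.1),
               (68, 5.6), (64, 6.0), (61, 9.8), (58, 11.2), (54, 12.5)])

# Walk up the score scale and find the biggest drop in turnover between
# neighboring agencies: a candidate "breakpoint" worth validating properly.
lo, hi = max(zip(data, data[1:]), key=lambda pair: pair[0][1] - pair[1][1])
print(f"Turnover falls from {lo[1]}% to {hi[1]}% "
      f"between scores {lo[0]} and {hi[0]}")
# With these made-up numbers, the sharp change sits between 61 and 64,
# close to Mark's hypothetical spike around 61.
```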

3) Some places ARE tougher to work at: Walk into any hospital and you KNOW where the best place in the joint to work is: the maternity ward. It may have better management, but the work itself makes people happy and cooperative. Will there ever come a day when the IRS is “the happiest place on earth”? I have my doubts. But it may not be because they’re not trying. You have to accept that there can be real limits on how well some places can score. The “scale” looks like it’s 100 points wide for everybody, but it may only be 65 points wide for some places.

So, I agree with Darrell’s more succinct remarks, and partly agree with Terry that, unless we tease apart the signal from the noise, such measurement exercises may continue to be seen as a novelty, rather than the bottomless well of corporate intelligence I know them to be.

Joshua Joseph

Darrell, Terry, and Mark: thanks for your good comments and observations. These are definitely issues we think and talk about as well. I’ll share some of that in a minute.

But first I want to make sure folks know we don’t just report breakdowns for large and small agencies. On the Best Places to Work website there are three main tabs: one for large agencies with more than 2,000 full-time employees, one for small agencies with fewer than 2,000 but more than 100 FTEs, and one for agency subcomponents, where you’ll find breakouts for some 240 parts of larger agencies, including many from DOD. It’s true that some agencies don’t report their data this way and, following OPM’s reporting guidelines, we don’t report results for small agencies or subcomponents with fewer than 100 FTEs, but there’s lots of detail in the rankings beyond the agency roll-ups.

That said, there’s no way to make the groupings or rankings perfect for every organization. Is it totally fair to compare a large agency with 3,000 employees to one with 60,000 or 200,000? Is a small agency with 1,500 employees exactly comparable to one with 150? What about agencies that have a clear mission focus versus those that don’t? What about agencies where the great majority of employees are professionals at the GS-13 level or higher versus those with a wide mix of employees across levels? None of these groupings is ideal, and many more good questions like these should be asked. I come from a pretty rigorous research background, so I’m not just paying lip service here.

At the same time, there’s another important set of considerations around individual agencies using the data more effectively to understand and address workforce challenges. It’s not always easy to encourage this. Having worked in too many places where results from research reports just gathered dust, I can appreciate the need to get secretaries and deputy secretaries engaged at the start. The rankings have REALLY helped to do that over the years.

The other thing that stands out for me, and why I posted my questions, is that at the end of the day (or year), it matters less where an agency stands compared to others and more how it compares to itself. The rankings are a starting point for agencies to look inward at their own unique situations. Often they need to do a lot more to make sense of the data than our site can provide. They can get more detailed breakdowns directly from OPM and then decide which issues merit their attention first and how to approach them. And I guess I’m interested in how we can encourage more of that to happen. I welcome more of your good thoughts.

Mark Hammer

Thanks for your follow-up, Joshua.

It may be patently clear to some, but I don’t want there to be ANY misunderstanding: I am not challenging the numbers themselves. Rather, my caveats have everything to do with how managers react to them. Managers have their regular jobs to attend to, so they haven’t followed the production of the numbers every step of the way, and they may lack the analytic capacity within their organizations to dissect the numbers, so they take the results at face value.

Should you feel envious, guilty, or discouraged when your mid-sized organization comes up with a 68 and your buddy’s small agency X has a 76? Not necessarily. Conversely, should you feel like everything is hunky-dory and takes care of itself because your organization comes up with an 84?

It’s important to understand where the numbers come from, and what influences them.

Finally, I think people can be too distracted by rankings and percentages. The more interesting, important, and useful stuff comes when you look at the elements that make up the global score and how they are interconnected. That’s how this transforms from a bland “accountability” exercise into a corporate intelligence exercise. These exercises provide the most value to all stakeholders when they help identify levers that managers and non-managerial employees can apply to improve where they are, or, if they’re already in a very good place, to hang onto it with less risk of erosion.
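
One simple way to start looking at those interconnections: correlate each component item with the overall index across work units and see which components track it most closely. The sketch below uses invented component names and scores, and a real key-driver analysis of OPM item-level data would use proper regression rather than raw correlations:

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical per-unit results: component scores plus an overall index
# (0-100). Invented numbers for illustration, not actual survey data.
units = [
    {"leadership": 55, "pay": 70, "training": 62, "overall": 58},
    {"leadership": 72, "pay": 71, "training": 64, "overall": 70},
    {"leadership": 80, "pay": 69, "training": 74, "overall": 79},
    {"leadership": 64, "pay": 73, "training": 66, "overall": 63},
    {"leadership": 85, "pay": 72, "training": 70, "overall": 84},
]

overall = [u["overall"] for u in units]
# A bare-bones "key driver" pass: components that move in step with the
# overall index are candidate levers for managers to examine first.
for item in ("leadership", "pay", "training"):
    r = statistics.correlation([u[item] for u in units], overall)
    print(f"{item:<10} r = {r:+.2f}")
# With these numbers, leadership moves almost in lockstep with the overall
# score while pay barely varies, so leadership is the first lever to probe.
```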

Neil Bonner

Unfortunately, TSA ranks #232 out of 240 agency subcomponents. In the “effective leadership” category, TSA ranks 227 out of 229. Ouch.

Joshua Joseph

Neil, Gary —

Those are tough cases. TSA has some difficult jobs to do and a workforce with very high annual turnover compared to most federal agencies. I’m not as familiar with the Forest Service, but, like TSA, it probably faces some of the same challenges of a highly decentralized workforce and some thorny relationships to navigate between HQ and the field. Still, in both agencies I’ll bet there are standouts: airports where the numbers are consistently much better than average, and Forest Service offices where the same is true. That might be the place to start…looking not just at the major challenges but also at what can be learned from the successes already right there within these agencies.

Joshua Joseph

Mark —

I think we’re saying pretty much the same thing…the numbers are a starting point, not an end. Managers shouldn’t take them as gospel, but should use them to flesh out what they already know. Many times the data won’t come as any surprise, but it can be used to take a closer look at problems, find out what underlies them, and support change efforts…much like what you’re saying about the need to “look at the elements” that make up the overall score.

And that’s what this effort is about. If other people are having conversations like this one…looking for ways to understand and then act on the results…we’re moving in the right direction. If too many folks are just fixated on the rankings without asking “why” or looking for answers, we’ll need some more good suggestions for engaging them. What have you got for us? 😉