Mark Hammer

I like to distinguish between what I call an “accountability agenda” and a “corporate intelligence” agenda when gathering organizational information. The former tends to revolve around how much fear is generated amongst management, and all too often results in very shallow analysis and a short data lifespan. Once a manager deems themselves “safe”, they no longer have any interest in what the data might say. Moreover, the kinds of things that are measured tend to be superficial and list-driven (in the sense of “Here’s a list of the things we in senior management wish to be able to say at the end of this exercise”), rather than model- or insight-driven. It results in a rabid pursuit of presumed benchmarks, leading managers to “manage to the measures”, in the same way that teachers “teach to the test”. And of course, there is the constant fear that unpleasant numbers will show up in the press, or the pursuit of “happy story” numbers that are vapid and meaningless (see Q7 of the 2010 federal Employee Viewpoint Survey for the poster child of that one).

Pursuit of the “corporate intelligence” agenda tends to result in more thoughtful analysis (How is X connected to Y? What sorts of circumstances are hospitable to program/initiative/strategy Z?), and a longer useful lifespan for any data acquired. There is less paranoia surrounding the exercise because it is not intended to separate managers into the good, the bad, and the ugly, but rather simply to understand what works well.

The accountability agenda is a bit like taking someone’s temperature, finding that they don’t have a fever, and concluding that they can go to school, whereas the other approach is more like doing an MRI and a full metabolic workup to see how everything is working, and whether there is anything a-brewing. You also increase the opportunities to stumble onto things you never realized or thought to look for.

I also think that the most important components of “performance” in anyone’s job and organizational role are difficult to measure in any consistent fashion across an organization as vast and diverse as “government”. Few managers will really be in any position to tell the measurers, “This, this, and that are what really represent optimal performance in my work unit, and here is how you would measure them.” And of course, whenever people are held to account for anything, they insist on consistency of measurement/process, or else they deem the process patently unfair. When they view your motive as more curiosity-driven, they are more willing to tolerate being measured on things they don’t see as the most valid reflection of performance.