
Using Performance Metrics to Manage

In this final installment on a performance management framework, we'll look at how agencies can use performance metrics and analysis to manage their programs through remediation and corrective action. We've already covered the first and second steps, and will focus on the final two in this posting (all four are listed just below, with a brief sketch of the loop after the list):

1. Report performance metric data on a pre-defined schedule.
2. Analyze the data for troubling trends or missed targets, and research the root cause(s) of any operational problems.
3. Take corrective action for metrics where the target was missed or the data is trending in the wrong direction.
4. Repeat the process for the next reporting period.
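
To make the loop a little more concrete, here is a minimal sketch in Python of the step 2 analysis. The metric name, target, and values are hypothetical, and a real agency would run something like this (or the equivalent in its own reporting tool) at every reporting period.

```python
# Minimal sketch of the track-report-remediate-repeat loop (hypothetical data).
from statistics import mean

def review_period(metrics):
    """Step 2: flag metrics that missed their target or are trending the wrong way."""
    flagged = []
    for name, info in metrics.items():
        history = info["history"]          # values reported each period (step 1)
        latest = history[-1]
        if info["lower_is_better"]:
            missed_target = latest > info["target"]
            trending_wrong = len(history) > 1 and latest > mean(history[:-1])
        else:
            missed_target = latest < info["target"]
            trending_wrong = len(history) > 1 and latest < mean(history[:-1])
        if missed_target or trending_wrong:
            flagged.append(name)
    return flagged

# Hypothetical reporting data: average emergency response time in minutes (lower is better).
metrics = {
    "avg_response_time_min": {
        "history": [6.1, 6.4, 6.9],        # trending in the wrong direction
        "target": 6.0,
        "lower_is_better": True,
    },
}

for metric in review_period(metrics):
    print(f"{metric}: missed target or negative trend -- needs a corrective action (step 3)")
```

The mechanics matter less than the habit: every reporting period, the same check runs, anything flagged gets a corrective action assigned, and the cycle repeats.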

Assuming that reliable metrics have been gathered and reported, and that the data trends have been analyzed, an agency should have a good idea of where it stands operationally. The question then becomes how best to use that information. For example, if I'm a fire and emergency medical services (FEMS) agency that knows my emergency response times are trending in the wrong direction, and I know the problem lies somewhere in my call center, what's the next step? (I'll answer this in a minute.) Given the diversity of agency missions within any government, it would be impossible to give specific guidelines on how to fix every troubling trend. For the purposes of our framework, the important thing is that the information is used to formulate a plan of action, and that the plan is clear, has timelines, and is documented for future reference.

Maintaining documentation of attempted corrective actions is particularly helpful when there are several options for remediation. Each option can be tried over a given reporting period while performance data continues to be tracked. If the numbers improve, the corrective action was likely effective; if there is little or no improvement in the data, another option on the list may be your best bet. The point of documenting remediation is to avoid spinning your wheels by proposing the same corrective action repeatedly and expecting a different outcome with each successive attempt.

Going back to our emergency response example, in which the call center has been identified as the source of deteriorating response times, there may be multiple options for improving performance, including additional training, process re-engineering, and so on. There may not be an obvious “best” remedial option, but the important thing is to pick one and continue to track response times. If additional training is implemented but the trend in response times is not reversed, then lack of training can be ruled out both as the cause of the problem and as an effective corrective action. Continue the cycle of capturing and reporting the metrics, but with a different corrective action this time. Perhaps the response process is streamlined or adjusted and overall times improve; we then have some indication that the proposed solution had a positive effect on the operations we are tracking. Through trial and error in the corrective action process, while continuing to track and report the data, any agency can improve the effectiveness of its operations.
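
One lightweight way to keep that documentation honest is to log each attempted corrective action next to the metric values for the periods before and after it. The sketch below is purely illustrative (the quarters, actions, improvement threshold, and numbers are made up), but it mirrors the training-then-process-change sequence described above.

```python
# Illustrative sketch of documenting attempted corrective actions alongside the
# reported metric, so each remediation can be judged against the data.
# The periods, actions, and response times are hypothetical.

corrective_actions = [
    # (reporting period, action taken, avg. response time that period, in minutes)
    ("Q1", "baseline - no corrective action", 6.9),
    ("Q2", "additional call-taker training", 6.8),
    ("Q3", "re-engineered call-routing process", 6.2),
]

for (_, _, prev_value), (period, action, value) in zip(corrective_actions, corrective_actions[1:]):
    change = value - prev_value
    if change <= -0.3:  # arbitrary illustrative threshold for a "meaningful" drop
        verdict = "meaningful improvement -- keep the change and keep tracking"
    else:
        verdict = "little or no improvement -- document it and try the next option"
    print(f"{period}: {action}: {prev_value:.1f} -> {value:.1f} min ({verdict})")
```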

The important takeaway from this exercise is that to demonstrate marked improvement in any public sector operation or program, all of the steps in the framework we've outlined here (and in past postings) must be followed. Tracking and reporting metrics without proposing and documenting remediation for trouble spots won't bring about the turnaround in negative outcomes that most agencies are seeking. The feedback loop of track-report-remediate-repeat is the fundamental process behind our performance management framework, and it is essential to solving government inefficiencies.

This was originally published on my blog at http://measuresmatter.blogspot.com/.

3 Comments

GovLoop

I like the four stages with the feedback loop. Often with performance metrics there is a big push in one phase, but not a focus on all four phases and the continual loop.

Paul Hoffman

Very good article. My only question is how the metrics are determined. I worked for the OIG, and the metrics were so tight they led you away from the mission.

Joseph Peters

It's difficult to say exactly how each agency's metrics should be determined, but in theory the mission, goals, and initiatives should drive the process. I was recently part of an effort where we put together an agency plan before even discussing metrics. The plan contained high-level goals as well as specific initiatives (again, qualitative rather than quantitative), and only after we completed that exercise did we look at what might be good performance metrics to track our progress against the initiatives. For example, one initiative was to reduce processing time for various citizen requests, and it was fairly easy to come up with time-related metrics for it. Other initiatives, such as improving customer satisfaction, were more difficult to capture in a data system, so we agreed that surveys would be the best way to gather that data.

To your point about “tight” metrics, there's definitely a balancing act, and I think it's usually better to start a little broader and then narrow the measure as you get better at it. Agencies get so obsessed with creating the perfect metric that they can benchmark against for years to come that they often become too inflexible when it's later determined that a variation of the original measure may be better. “But we'll have to start all over again on benchmarking” is often the response. I say: so what? If another measure helps you learn more about your organization and improve effectiveness, that's reason enough to change. At some point you'll have to stick with a metric, but it takes time and revisions to get the right one. The dirty secret of agency performance management is that it's largely trial and error, and there's no perfect metric for any agency.