In his blog The Thorp Network, author John Thorp suggests that we "Measure what's important and manage what you measure."
When a strategy for improvement is identified, it's important to measure both performance and success. Performance matters because improvement depends on change, and the status quo carries an inherent presumption of efficiency and momentum that works against change. When performance is not measured, change either does not occur or occurs unevenly. Success should be measured by the degree to which the intended benefits are realized, because even your best plans will often have to be adjusted as new information is discovered and as circumstances change.
The UK National Health Service's (NHS) Institute for Innovation and Improvement suggests creating, annually reviewing, and (most importantly) using a Performance Measure Sheet. On this sheet are the usual suspects in a thoughtful tool for measuring performance and success: Purpose of the measure, Related objective, Target, Formula, Frequency, Who measures, and Source of data.
With performance and success measured, it's then important to manage what you measure. Without active management, performance goals and success happen only by luck or happenstance or, more likely, occur unevenly. The NHS's Performance Measure Sheet comes to the rescue again by including these two additional critical questions:
Who takes action?
Who is responsible for taking action on the measure?
What do they do?
Specify the types of action people should take to improve the performance of the measure.
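The sheet's fields above amount to a simple structured record. As a minimal sketch (the field names follow the article, but this is an illustrative structure, not an official NHS schema), one row of a Performance Measure Sheet might look like:

```python
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    """One row of an NHS-style Performance Measure Sheet.

    Field names follow the article; the structure itself is a
    hypothetical sketch, not an official NHS format.
    """
    purpose: str            # Purpose of the measure
    related_objective: str  # Objective the measure supports
    target: str             # e.g. "95% within 4 hours"
    formula: str            # How the value is computed
    frequency: str          # e.g. "monthly"
    who_measures: str       # Role that collects the data
    data_source: str        # Where the raw data comes from
    who_takes_action: str   # Role responsible for acting on the measure
    what_they_do: str       # Actions taken to improve the measure

# Example row (all values are invented for illustration)
measure = PerformanceMeasure(
    purpose="Track timeliness of service",
    related_objective="Improve customer experience",
    target="95% of requests handled within 4 hours",
    formula="handled_within_4h / total_requests",
    frequency="monthly",
    who_measures="Operations analyst",
    data_source="Ticketing system export",
    who_takes_action="Service manager",
    what_they_do="Re-balance staffing when the target is missed",
)
```

Keeping the two "manage" fields (`who_takes_action`, `what_they_do`) mandatory in the record is one way to ensure no measure is defined without an owner and a response.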
Author William Gibson said, “The future is already here — it’s just not very evenly distributed.” By measuring what’s important and managing what you measure, you can ensure that the future is both here and evenly distributed; that the future is as you intend it to be.
All true, but I will offer up a caveat: sometimes you can end up "managing to the measures", the same way that teachers can end up "teaching to the test". In other words, indices or measures that you know are merely peripheral indicators of something much deeper and more important can end up driving behaviour in unintended ways. What is important isn't always countable, so we often turn to countable things as indices of something deeper and more important.
So, by all means, measure what is important, and strive to manage in terms of what is important, but remember what lies underneath those measures.
A major weakness is that most truly important outcomes can only be judged qualitatively, if at all. Entirely too often, performance measurement programs elevate the importance of outputs simply because they lend themselves to quantitative analysis. Leaders need to measure what they can but realize the truly critical goals may not fit a numerical metric.
Really good points, Mark & Peter. I remember hearing a story about a call center employee who was told that s/he was spending too much time on each call. To resolve the issue, s/he would hang up in the middle of the call. Good customer service is a harder thing to measure. When my wife and I were in Lamaze class, they talked about the machine that showed a number representing the strength of each contraction. They warned that husbands in particular were really drawn to that machine and fascinated by the numbers, but that's not really the important thing, is it?
We also often measure easy things like computer system up-time and ignore more important measures like how the employee has used the system to transform their work practice(s).
I think the inherent presumption of efficiency and the momentum of the status quo too often lead us to jump to conclusions about measuring. If we don't currently measure customer satisfaction, that doesn't mean we can't. If we can't quantify some things, that doesn't mean we can't quantify anything. If we measure, we continue to measure, and we continue to file the reports because that's what we've always done. My final point is that this same presumption of efficiency and momentum of the status quo often prevents us from seeing that measuring is a means to an end, and those monthly reports are only valuable when used.
Very interesting conversation and nice templates too.
To expand, according to modern research (Kaplan et al.), measures should always be accompanied by a solid understanding of "the why" and "the how": an explanation of why a measure or set of measures is critical to focusing effort on the things that will result in achieving the organization's mission. Without this justification, tied to the highest-level objectives, so-called "dashboards" can grow in length and irrelevance as more and more beans are counted.
It's best to start with some sort of Strategy Map (a number of strategy map examples are available here). Then an executive dashboard or "Balanced Scorecard" should be posted or reviewed regularly with staff and stakeholders, so they can understand how their collective efforts are resulting in real change.
Best of luck aligning measurement, management and motivation through the highest levels!
with ClearPoint Strategy, a web-based balanced scorecard and dashboard software
I can recommend "Theories of Performance" by Colin Talbot (2010, Oxford UP), a nice, concise, and scholarly stroll through all of the issues germane to identifying relevant organizational performance indicators in the public sector.