
Why Isn’t Performance Information Being Used?

Champions of performance management in government are confounded. After decades of trying to integrate the use of performance information into agency decision-making, it still isn’t happening on as broad a scale as once hoped.

The initial premise twenty years ago was that if performance information was made readily available, agency decision-makers would use it. That turned out not to be true.

Background. A recent GAO study concluded that the “use of performance information has not changed significantly” in surveys of federal managers between 2007 and 2013. More specifically:

• “. . . only two [of the 24 major] agencies – OPM and the Department of Labor – experienced a statistically significant improvement in managers’ use of performance information.” And four experienced a decrease.
• But “SES managers used performance information . . . more than non-SES managers both government-wide and within each agency.” And in 9 of the 24 surveyed agencies, the gap was statistically significant.

While Congress was able to mandate the collection and reporting of performance information via the 1993 Government Performance and Results Act, there hasn’t been a successful strategy to get managers to use the information. The Bush Administration tried, by focusing on program-level measures. The Obama Administration tried, by focusing on cross-agency and agency-level “priority goals,” supplemented with quarterly progress reviews. But the GAO survey doesn’t show any real changes over time.

So Now What Do We Do? GAO’s report offered some “better practices” that it believes would help, based on its past work and observations. These include a series of “effective practices,” such as improving the usefulness of performance information and communicating it better. These practices may help.

However, a recent article (paywall) by Jeanette Taylor, a professor at the University of Western Australia, offers some new insights on what leaders might do differently. She examines the “lack of use” challenge from a different perspective – organizational culture.

In her research, she found that “the effects of performance information on organizational performance depend on the organization’s culture,” that “organizational culture . . . was the dominant antecedent of performance information use,” and that “. . . different types of cultures adopt performance management differently.” Her research tries to unbundle these distinctions in order to provide a roadmap of the different ways leaders need to approach the use of performance information in their organizations.

Four Types of Organizational Cultures. Drawing on the work of other academics, she highlights four distinct models of organizational culture:

The Individualistic Culture. This type of organization stresses individual effort and skill, and a belief in competition. It may, for example, adopt performance incentive structures.
The Egalitarian Culture. This culture emphasizes a high sense of belonging to a group. Staff in this kind of agency would be more receptive to performance dialogues instead of incentives.
The Hierarchical Culture. This type of organization stresses well-defined rules of social interaction. Employees and managers here will likely want performance management to be aligned with the professional and technocratic core of the organization.
The Fatalist Culture. Employees are skeptical about organizational prescriptions for human betterment and may “engage in ritualistic performance management exercises” (e.g., passive-aggressive compliance with requirements: “Just tell me what you want me to do and I’ll do it”).

In practice, real organizations do not fall neatly into one or another of these models. But understanding the distinctions suggests different implementation strategies.

Three Layers of Organizational Culture. Fellow academic Edgar Schein differentiates three levels of organizational culture that exist within each of the four types of cultures.

Observable Artifacts. In this layer, visible characteristics that an outside observer can see might include office layout, dress code, observable routines, and published documents. Some academics see routines as “the critical factor in the shaping of behavior.” Learning forums are examples of organizational routines, as are strategic planning and benchmarking. Interestingly, Taylor says that “routines can promote continuous change if they occur regularly, the organizational context supports the changes, and professional employees have discretion in the way they perform their tasks.”

Espoused Values and Beliefs. This layer consists of the documented norms, ideals, goals, and aspirations of the organizational group. Taylor says: “A clear, understandable, and distinctive organizational mission has been found to be positively related to employee mission valence.” She also observes: “The development of a common language, particularly for key concepts like performance indicator and benchmarking, can contribute to the successful use of in-project measurement.”

Underlying Assumptions. This layer consists of the unconscious, taken-for-granted, non-negotiable beliefs and values that influence how group members think and feel about things and guide their behavior. This is the hardest layer for outsiders to influence because “. . . performance information involves subjective interpretation by the managers who acquire and use it,” and “Performance management requires that judgments be made on what to measure, how to measure and interpret it, what determines success and failure, and what information is relevant or important.” As a result, an organization’s underlying culture “can influence how it views and behaviorally responds to performance management.” Just ask any VA executive over the next two decades how the department’s underlying culture affects their perception of performance management!

Schein’s layered approach explains how one’s understanding of an organization’s culture differs depending on one’s perspective, and how the deeper layers are harder to identify, measure, and change. His approach also recognizes that there can be subcultures within an organization (geographic, professional, hierarchical), and that it is inappropriate to assume that a single, organization-wide dominant culture will prevail across a department or agency.

So the bottom line, says Taylor, is that successful implementation “requires changes in the organization’s systems and structures (artifacts), its underlying values (assumptions), and the way management reinforces these values (espoused values).” In contrast, most federal agencies have emphasized the creation of what Schein calls “artifacts” – processes, methods, and technical know-how.

Re-Thinking Strategies. It may be time to re-think the strategies for how best to encourage federal managers to use performance information in their jobs. GAO and Taylor both help point the way to a more nuanced approach.

Getting managers to use performance information isn’t just a procedural or technical exercise. It is a fundamental change in how they do their day-to-day jobs and how they approach problem-solving. Harvard’s Bob Behn says that using performance information is a leadership strategy, not a set of processes and procedures.

In fact, GAO found that training managers in how to technically develop performance measures actually led to a decrease in managers’ use of performance information! Training managers in how to analyze and use performance information was far more conducive to use.

Is this too hard? Can managers’ mindsets be changed? It already has been done, in dozens of places across the federal government. The challenge is to showcase and share lessons from existing efforts. The successes aren’t called “GPRA.” Instead, they go by different terms, such as “strategic analytics,” “evidence-based decision-making,” or “moneyball government.” These efforts are not rooted in complying with GPRA requirements. They are energized by managers who use these approaches to get clear mission results such as reducing fraud, improving air quality, speeding drug approvals, streamlining disability benefit approvals, and more.

Showcasing these initiatives is happening, but more could be done. Maybe a mentoring program is needed. Maybe more targeted training could help. But it is clear that requiring new processes, procedures, organizational structures, and reporting isn’t going to increase managers’ use of performance information. The hard part is that the change has to be developed within each organization, and within its own culture.

IBM Center for The Business of Government

Graphics Credit: Courtesy of Salvatore Vuono via FreeDigitalPhotos

8 Comments

Mark Hammer

First, interested parties can circumvent the paywall by simply popping Jeanette a note, at [email protected] , and requesting an electronic copy. I’ve corresponded with her on many occasions, and she’s good people.

Second, my own bias is that often, many of the folks making decisions are not “measurement types”. They’re not stupid, but cogitating about ideal quantitative indicators, and what could be done with them, is simply not what they spend much time on. They’re also not in any position to either challenge, or productively think with, those performance measures handed to them by others. So I think there is a skittishness and reluctance there that, while it doesn’t displace Taylor’s or Schein’s notion of “culture” in terms of relevance, is a factor on top of culture.

Then there is the not-so-trivial issue of the stability of performance indicators. One thinks/decides with them to the extent that the same indicators are retained over time, so as to permit comparison and planning. A terrific little 2007 paper from Lonti and Gregory, entitled “Accountability or Countability? Performance Measurement in the New Zealand Public Service, 1992–2002” (Australian J. Pub. Admin., Vol 66 No 4) observed that the accountability frameworks and performance indicators of agencies with fairly concrete goals and deliverables tended to remain fairly stable, whilst those of agencies with more abstract or ill-defined goals and deliverables tended to have rather regular change in the frameworks and measures. To the extent that managers find themselves with fuzzy objectives, it is understandable that they don’t turn to performance indicators as a useful tool for thinking with, if last year’s accountability framework is different from this year’s.

I regularly laud and promote the late Larry Terry’s idea of public administrators as “conservators” of their institution. As an extension of Terry’s thesis, I suppose one might suggest that when public institutions begin to drift from their origins, perhaps by a rejigged mandate, or by lumping additional objectives and business lines under the same agency, the performance indicators similarly shift and become harder to think with. Stated differently: the more stable the agency/institution, the more likely performance indicators are to be integrated into managerial planning and decision-making.

Finally, what makes for an elegant and streamlined number/indicator does not necessarily convey information that people can use. Imagine a car dashboard panel whose speedometer was a 2-digit display of your current speed in terms of a percentage, relative to the maximum possible speed of the vehicle. The gauge may flash a big “54” (i.e., you’re at 54% of your vehicle’s maximum velocity), and yes it’s “simple”, but how fast am I going, relative to the posted limit? How long is it going to take me to get to my intended destination? Not all elegant indicators are necessarily useful or helpful for thinking with.

So, to sum up, we can expect performance information to be integrated into decision-making, to the extent that the people tasked with using it are hospitable and nimble recipients of such information, the performance information is stable enough to permit comparison and planning, and the indicators are relevant and easy to think and plan with.

Mark Hammer

Hmmm, missing a chunk of a paragraph there. I’ll give you the paragraph in its entirety.

Finally, what makes for an elegant and streamlined number/indicator does not necessarily convey information that people can use. Imagine a car dashboard panel whose speedometer was a 2-digit display of your current speed in terms of a percentage, relative to the maximum possible speed of the vehicle. The gauge may be a nice simple number like “54” (54% of the maximum velocity of the vehicle), but what is your speed relative to the posted speed limit? How long will it take you to reach your destination, given your current speed? Simple does not always translate into easy-to-think-with.

Patrick Fiorenza

As usual, this was a fascinating post – lots of great insights to think about. Thanks for your insightful comments!

Lou Kerestesy

Thank you, John. Valuable observations and analysis. I agree with the points about organizational culture and would add a point about personal decision making, which occurs in the context of the organizational culture: Most decision makers are not reflectively self-aware of their decision making style or process. If that’s true, and I think there’s ample anecdotal and research evidence that it is, I can’t imagine performance data being put to much use.

For starters there’s the question of “What do I do with it (performance data)?” If one’s decision making style and process are implicit, how would one even know where to begin adding new data to the process? One would have to open decision making to examination to know where and how to use new information.

Then there’s the “Why should I change?” question. There’s also ample research evidence that people tend to decide and then find evidence to justify the decision. We streamline that process over time and make it very efficient. Adding new data disrupts something we work very hard to perfect – something presumably of value to individuals and their organizations.

There’s also a very important risk question involved. We tend to work out all manner of cognitive and affective biases in our decision making style and process. Opening decision making up to new inputs can mean reconsidering assumptions, biases, heuristics, and other deep-seated things decision makers have become very comfortable with – and don’t want disturbed! My guess is most see more risk than reward in such an undertaking.

I think you’re right that to make decision making better for the individual requires training, coaching/mentoring and a safe organizational environment within which to try something different. And not just different for the decision maker, but different for the decision maker’s team.

John Kamensky

Hi Mark, Lou, and others – Thanks for the insightful comments!

Mark – appreciate your additional citations that should extend readers’ opportunities to learn more.

Lou – I think I was in your frame of mind (can you really teach an old dog new tricks?), but I have seen real changes in how managers do their work when it is framed in a way they see as relevant and engaging, or when it creates an “ah ha!” moment for them. For example, I was at an event last week where Gerald Ray, the deputy executive director for SSA’s disability appellate operations, talked about how an appeals judge sees him/herself as an expert with years of experience. Yet Ray was able to show the judges data revealing patterns of errors in their rulings, and to target specific training to help them, individually, make better decisions that would then be reflected in their cases being upheld later in the decision process. I thought it was a sophisticated approach to using analytics to improve performance (you can learn more at: http://www.businessofgovernment.org/brief/using-data-analytics-improve-mission-outcomes ).

Lou Kerestesy

Thanks, John. Will definitely follow the link and read. Didn’t mean to sound negative. I’m a big believer in individual and organizational change. Only meant to say there’s a very personal component to be addressed in the process: merely making more or new data available is unlikely to produce change, if for no other reason than that a decision maker doesn’t know what to do with it, and it challenges them in ways they’ll initially resist. My guess is that Ray’s approach made the judges aware of their decision making style, processes, habits, etc., and that awareness produced the ah-ha moment. That moment – when people realize their ends-means are misaligned AND they’re shown a way to realign them – is very powerful and productive.

Thanks for the additional info!

Lou