
Getting Government to Use Performance Data

Academics sometimes hit the nail on the head! University of Wisconsin professor Donald Moynihan, a thoughtful observer of the evolution of performance management in the U.S., and his Ohio State colleague Stephane Lavertu examine historical GAO survey data to understand why recent federal performance improvement initiatives haven’t resulted in the hoped-for increase in the use of performance information to make decisions.

Will the third time be a charm? Moynihan and Lavertu dig behind the data to find out why the first and second efforts to embed the use of performance information into the government faltered, and offer hints as to whether the most recent effort – the GPRA Modernization Act of 2010 – will do any better.

Background

Moynihan and Lavertu say the Government Performance and Results Act of 1993 (GPRA) was created “at least in part, to foster performance information use.” The Act requires agencies to create strategic plans, annual performance plans, and measures of progress, and to report annually on that progress. The original Act, however, “. . . was subsequently criticized by the Bush and Obama administrations and Congress for failing in this task,” the authors observe. Agencies developed plans, measures, and reports, but this did not seem to generate much use of the data by managers.

The Government Accountability Office (GAO) validated the weak use of GPRA-created performance information in a series of surveys of federal managers, which have been conducted about every three years since 1996. These surveys found that only about 40 percent of managers use performance information when making decisions, and this figure has declined somewhat over time.

The Bush administration, in its effort to spur an increased use of performance data, tried a different approach. Rather than collecting and reporting data agency-wide, it focused at the program level, creating a Program Assessment Rating Tool (PART) to establish effectiveness ratings for more than 1,000 major government programs. This, however, did not generate additional use of performance data either.

The third attempt to spark the use of performance data is currently underway. The GPRA Modernization Act of 2010, signed by President Obama earlier this year, requires agencies to conduct quarterly management reviews of progress on “priority goals” set by their leadership. Whether this will have the intended effect will likely not be known for several years.

The GAO Surveys

GAO administered surveys to federal managers on their experience with performance information in 1996, 2000, 2003, and 2007. These surveys are interesting in terms of how responses changed over time. In its 2007 report, GAO observed:

“. . . there were two areas relating to managers’ use of performance information in management decision making that did change significantly between 1997 and 2007. First, there was a significant decrease in the percentage of managers who reported that their organizations used performance information when adopting new program approaches or changing work processes. . . . Second, there was a significant increase in the percentage of managers who reported that they reward the employees they manage or supervise based on performance information.”

Analytic Findings

While GAO found that federal managers did not increase their use of performance information, it was not able to answer why. This is what Moynihan and Lavertu try to tease out of the data.

Moynihan and Lavertu start with theory: “Organizational theory suggests that behavioral change among employees can be fostered by altering their routines. . . GPRA and PART, in different ways, both established organizational routines of data collection, dissemination, and review.” The GPRA Modernization Act and the Obama administration’s performance management initiatives “continue to be premised on the notion that establishing performance management routines is central to promoting performance information use. The key difference with the earlier reforms seems to be the choice of routines employed.”

“We estimate a series of [statistical] models using data from surveys of federal employees administered by the Government Accountability Office (GAO) in 2000 and 2007.” In their models, they assessed the extent to which survey respondents engaged in “purposeful” use of performance information versus a “passive” use – e.g., doing the minimum required to comply with the procedural requirements (but not really using the data).
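To make the purposeful-versus-passive distinction concrete, here is a minimal, purely illustrative sketch (in Python) of the general kind of regression such survey analyses involve. Every variable name and all of the data below are invented for illustration; this is not Moynihan and Lavertu’s actual specification, dataset, or estimator.

```python
# Illustrative sketch only -- not Moynihan and Lavertu's actual model.
# The idea: regress a survey-based index of "purposeful" performance
# information use on indicators of exposure to reform routines.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # hypothetical survey respondents

# Simulated survey responses (all variable names are invented).
df = pd.DataFrame({
    "gpra_routines": rng.integers(0, 2, n),  # exposed to GPRA routines? (0/1)
    "part_reviewed": rng.integers(0, 2, n),  # program rated under PART? (0/1)
    "leadership": rng.integers(1, 6, n),     # leadership commitment (1-5)
})
# Outcome: an index of "purposeful" use, simulated here so that leadership
# commitment matters more than exposure to the reform routines themselves.
df["purposeful_use"] = (
    0.5 * df["leadership"]
    + 0.1 * df["gpra_routines"]
    + rng.normal(0, 1, n)
)

X = sm.add_constant(df[["gpra_routines", "part_reviewed", "leadership"]])
print(sm.OLS(df["purposeful_use"], X).fit().summary())
```

A parallel model with a “passive use” index as the outcome would let one compare which predictors correlate with each form of use, which is the spirit of the comparison the authors report.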

They found that the original GPRA and PART “were more highly correlated with passive rather than purposeful performance information use. . . “ They found the routines created by GPRA and PART did not sufficiently influence managerial behavior. But they say “this does not mean that routines do not matter. . . The ‘stat’ approach to performance management [as required by the quarterly review process in the new GPRA requirements] involves embedding such routines. . . “

But what was really interesting in their statistical analyses was that “Additional attention from OMB actually seems negatively related to other forms of use, such as problem-solving and employee management.” That is, by focusing on compliance, OMB may actually be undercutting the use of GPRA for management!

What Really Matters

Moynihan and Lavertu’s analysis offers a sobering conclusion: “government-wide performance reforms such as PART and GPRA have been most effective in encouraging passive forms of performance information use, i.e., in directing employees to follow requirements to create and improve performance information.”

In the end, the authors say that their findings “tell us something about the limits of any formal government-wide performance requirements to alter the discretionary behavior of individual managers. . . .”

But how can success occur? The authors observe that “a series of organizational factors – leadership commitment to results, the existence of learning routines led by supervisors, the motivational nature of the task, and the ability to infer meaningful actions from measures – are all positive predictors of performance information use.”

But, as good academics, they end by saying that more research needs to be done. So here’s where you can follow it as it happens!


2 Comments


Bill Brantley

@John: Speaking from just personal observation – I believe the issue with performance management is the quest to find performance measures that all parties can agree on. When you are doing simple process work where you have a tangible output, measuring productivity is a relatively simple matter. I just have to produce so many widgets in so many hours to demonstrate good performance.

But how do you accurately measure the performance of knowledge workers? I may write seven policy documents but the actual impact of those policies in terms of increased government efficiency and cost savings may be much less than my colleague who only wrote three policy documents. I produce more tangible stuff (in terms of pages) but what is my actual productivity?

And then, how do you measure productivity of a team? I may be the person who attended all of the meetings and did most of the writing and editing but the real output came from the lady who had a key insight that led to the breakthrough. Again, who is the better performer?

The fundamental problem with performance management is that you really can’t measure intangibles, and you tend to get unwanted results as people game the measures. Yes, we should have performance management, but the real questions are: how can we do it so that the measures are fair, people are held accountable, and we get the results we want?

John Kamensky

Hi Bill — you are right, “performance” comes in many flavors, and there are different lenses through which different stakeholders interpret it. Productivity or efficiency may be of high importance to budgeteers, service quality may be important to a customer, and effectiveness may be important to a policy evaluator or a political observer . . . yet a program (or a person working in a program) may need to show how it is meeting each of those measures of performance.

You seem concerned that measuring intangibles is a paramount challenge. There are some areas where intangibles are important, but they should contribute in some way to a greater mission (e.g., a diplomat negotiating a peace treaty or a scientist working on a mathematical theory). True, not everyone works in a call center where performance is measured in minutes and customer satisfaction ratings, but there are ways (imperfect, but still) that people in varied roles can be held accountable and the public can see what results they get for their tax dollars . . . and results are not always best measured by outputs (the number of policy documents written, as you suggest) but rather by impact (sometimes one policy document has more impact than ten).

As for teamwork, the performance of a team may be easier to assess than the individual contributions, especially when it is the cumulative contributions that count. The role of any one individual may be greater or lesser, but that matters only to members of the team. At W. L. Gore & Associates (makers of Gore-Tex), team members do the hiring, select their team leaders, and remove team members who don’t perform. That’s a different kind of accountability for team performance than you’ll see in the public sector.