
Data’s Tweedledum and Tweedledee

When thinking about the data used for performance measurements versus program evaluations, we find ourselves facing a Tweedledum and Tweedledee scenario.

Lewis Carroll introduces Tweedledum and Tweedledee in Through the Looking-Glass, and What Alice Found There. Upon meeting the two curious figures, Alice recites the original nursery rhyme about the pair and their agreement to have a battle, which they fail to carry out once a crow appears at their heels. Most interestingly, however, the two never contradict each other in the rhyme. In fact, they tend to complement one another's words in a way that leads people to believe they are twins, although they possess somewhat different mannerisms.

The same can be said for the data used in performance measurements and program evaluations. The two sets of data don't match, but they can and should complement one another.

Donald Moynihan, Professor of Public Affairs at the La Follette School of Public Affairs at the University of Wisconsin-Madison, sat down with Christopher Dorobek on the DorobekINSIDER program to discuss how we can better the relationship between performance measurements and program evaluations.

“In some periods of time we’ve been much more focused on program evaluations and in other periods of time we’ve been much more focused on performance measurements. The two things have never really quite met and come together as a single approach to managing data and evidence,” Moynihan said.

Although Moynihan believes that combining the two sets of data into a single approach is doable and necessary, he recognizes that we need to understand the differences between the two approaches before we can find a way to bring them together. “They are really two different tribes,” Moynihan explained. “The people who are responsible for collecting data, in either sphere, tend to have different framing mechanisms, so both parties don’t naturally communicate with one another.” So, let’s evaluate the differences.

The best way to distinguish the two sets of data is to think of a story. Program evaluations offer the bigger picture of the story, whereas performance measurements are like individual chapters.

Performance measurements tell us the “what”: what the data actually show. “They give you information about the rate of performance in a particular unit or for a particular program, at a particular point in time,” Moynihan stated.

Program evaluations tell us the “why”: why something happened. They provide us with “the intelligence to know whether said program is working overall or not,” Moynihan said. They also help us evaluate whether a program is worth continuing or whether resources would be better allocated elsewhere.

Moynihan shared that he believes the solution to combining both data sets into one process is a matter of designing processes within organizations that accommodate both types of data. A good earlier example comes from the Bush administration’s creation of PART, the Program Assessment Rating Tool. PART allowed OMB to grade the performance of various programs across the federal government. To meet PART’s requirements, agencies needed to use performance metrics as well as evaluation data, which “created a demand from the performance management people to also get program evaluation information,” Moynihan shared. He added that the current administration is running a similar initiative in its call for quarterly reviews of all major federal goals.

Ultimately, we need data’s Tweedledum and Tweedledee to work together in unison. “There is going to be a big payoff when we find ways to bring in smart people with different training, who all care about that fundamental issue of using data for better outcomes, in the same room,” Moynihan said.
