As discussed yesterday in part one of this interview, there is a lot to be said for grading government agencies to produce quantifiable results. Having numerical values with which to analyze the government’s performance allows for a better understanding of the status quo and improved decision-making.
Deloitte University Press recently published a report titled “Accountability Quantified: What 26 Years of GAO Reports Can Teach Us About Government Management.” The report used text analytics and data quantification to distill and analyze GAO reports dating back to 1983.
Two of the authors, Daniel Byler, Lead Data Scientist at Deloitte, and William Eggers, Global Public Sector Research Director at Deloitte, spoke with Christopher Dorobek, host of the podcast DorobekINSIDER, on what their findings mean for government performance.
The process that Deloitte used to transform the raw reports into numerical data sheds a great deal of light on government innovation and accountability (covered in yesterday’s post), but the real meat of the report is its findings on what GAO’s recommendations are and how they are received.
What the researchers found by looking at the data in such a wide-ranging context was that there are several significant issues facing government that cut across agency lines. “It goes from how these organizations are funded through Congress and the multiple jurisdictions embedded within them, to the different incentives and silos, the IT systems, and so on,” said Eggers. The findings from the data analysis, according to Eggers, “definitely highlight the need to focus like a laser on how to fix those issues.”
Another conclusion from Deloitte’s report was how little GAO recommendations and their implementation have varied over the years. This was an unexpected result for both researchers.
“I think that one of the things that was really surprising to me in the data was actually the level of stability that was present over the course of time. It stretches over such a long time horizon,” said Byler. “That’s pretty unusual when human beings are involved. As an outsider, I’m guessing that’s because the GAO is naturally adaptive, and is trying to give recommendations that agencies have a chance to complete.”
The success of these GAO recommendations did not vary much between agencies, either. “I thought you would see a much bigger delta between agencies who were very successful and those who were non-successful. And in fact, we didn’t see that,” said Eggers.
But at the end of the day, the findings are more optimistic than one might imagine, according to Eggers and Byler. It appears that GAO recommendations are taken seriously by agencies, and that they try harder than expected to fulfill the GAO’s requests. “Overall, in pretty much all your topic categories and all your agencies, you see pretty high levels of response rates to GAO reports,” said Byler.
So, by looking at the government’s GPA, we can see that it’s actually doing pretty well, based on GAO recommendations. Could this be an area of government that’s not in dire need of critique or improvement?