Two of the stumbling blocks to expanding the use of collaborative networks in government are these: How do you figure out what works? And how do you create accountability?
The IBM Center has sponsored a series of reports in different policy arenas over the past decade that look at the creation and use of collaborative networks to get results. Reports exploring collaborative networks in emergency management, watershed management, international environmental issues, and traffic congestion describe the rich complexity of getting things done across agency and sectoral boundaries. But in each case, the assessment of success focused on the ultimate end results. This meant measuring the performance OF networks, not the performance IN networks.
But in many cases, achieving end results, such as cleaner water or reduced traffic congestion, takes years to demonstrate. So what happens in the interim? How do you measure the performance IN networks? In the IBM Center case studies, the answer (roughly) was that the participants in the networks would hold each other jointly accountable.
This may work in certain circumstances, but in a strongly hierarchical political system such as ours, that answer is not good enough. With the passage of the GPRA Modernization Act, and its requirements to develop cross-agency goals (which imply cross-agency collaborative networks), there is now a stronger need to demonstrate how we measure the performance IN networks. Figuring this out may be a key step to expanding their use in governmental settings.
This challenge was the focus of a series of articles in the June 2011 issue of Public Performance & Management Review, where various authors used the multi-sector governance network as their unit of analysis to examine the use of “performance management systems within complex networked governance arrangements.” The authors identified three models:
Comparative case study analysis, which is “a systematic way to identify and describe performance management systems within complex, inter-jurisdictional networks.” The article also discusses the role of federal agencies in building the capacity for such systems, using traffic congestion management as a model.
Social network analysis, which is used to “analyze the relationship between the kinds of network configurations” using emergency management response plans as its case study.
Complex adaptive systems approach, which is used to evaluate network performance in the third article, examining the deliberative processes of healthcare delivery networks.
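To make the second model a bit more concrete, here is a minimal sketch of the kind of calculation social network analysis rests on: treating collaboration ties as a graph and computing degree centrality to see which participant sits at the hub of the network. The agency names and ties below are hypothetical illustrations, not data from the PPMR articles.

```python
# Hypothetical, undirected ties: which agencies coordinated directly
# during an emergency-response exercise (illustrative names only).
ties = [
    ("FEMA", "StateEM"), ("FEMA", "RedCross"),
    ("StateEM", "CountyEM"), ("StateEM", "RedCross"),
    ("CountyEM", "LocalFire"),
]

def degree_centrality(edges):
    """For each node, the fraction of all other nodes it is directly tied to."""
    nodes = {n for edge in edges for n in edge}
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    total = len(nodes)
    return {node: d / (total - 1) for node, d in degree.items()}

centrality = degree_centrality(ties)
# The most-connected participant is one candidate "performance IN the
# network" indicator: who actually brokers the collaboration.
hub = max(centrality, key=centrality.get)
```

In this toy network, StateEM holds the most ties, so it surfaces as the coordination hub; richer analyses of the kind used in the PPMR study would layer on measures like betweenness or tie strength, but the unit of analysis is the same: relationships among participants rather than end outcomes.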
The symposium editor, Christopher Koliba (University of Vermont) concludes: “All three articles reflect the growing use of computational power to assess performance and inform decision-making.” Interestingly, this parallels similar research in the use of collaborative networks in the for-profit and non-profit worlds!
WOW! Thank you for posting. I will take a look at this research. I am currently working on a similar problem for public procurement. Now more than ever, public procurement professionals are being tasked with doing more with less. Many are turning to a collaborative approach. However, the greatest collaboration in terms of measuring performance needs to be between auditors (and what they see as acceptable performance measures) and the procurement professionals (and what they are actually reporting). I’ve done some initial research and there is definitely a gap between what auditors want and what procurement professionals are reporting. Certain degrees of accountability and even ethics come into play when reporting cash and non-cash savings. Thank you again for posting this information. You can check out the project I’m working on here.
Hi Candace – Thanks for sharing your link! I think performance and accountability are perceived differently by different stakeholders, largely because they come from different professional disciplines that stress different values (see my earlier blog post on The Meaning of Accountability, where the Kettering Foundation extends this disconnect beyond government).
The PPMR articles suggest that collaboration participants/stakeholders, especially if they come from different professional disciplines, need to define a common set of measures upfront, before they enter into a collaborative venture, so they are all using a common yardstick. Interestingly, this approach is supported by professionals who engage in alternative dispute resolution, and I’ve seen it built into large technology projects, again upfront, before any disagreements occur.
This is great stuff and very timely in the coming era of more budget cuts.