
Brownie points, or results?

Using the Gulf oil spill to get clear about measuring Open Government

Measure “Open Government”? Yes, but …

I think that the success of the Obama Administration’s Open Government effort is critical, but I’m put off, even bored, by the measurement discussions to date. Imagine that you’ve got good reason to believe that your nephew is the next Jackson Pollock, your niece the next Maya Lin, and then the first report back from their studios is: “he’s painted more than 500 square feet of canvas! she’s created two and a half tons of sculpture!” and you’ll know how I feel.

It’s as if someone brought a speedometer to a sunset.

In December, Beth Noveck, the leader of the Administration’s Open Government efforts, wrote that measures of Open Government would be useful as a way to hold the Administration’s “feet to the fire” and to ensure that proposed changes were implemented. She suggested a variety of measures, including:

  • The year-to-year percentage change of information published online in open, machine-readable formats;
  • The number of FOIA requests processed and the percentage change in backlog;
  • The creation of “data platforms” for sharing data across government;
  • The successful posting of data that increases accountability and responsiveness, public knowledge of agency operations, mission effectiveness, or economic opportunity.

(I’ve left the link in, but alas the page has disappeared.)

To be fair, it’s a tough problem. As Lucas Cioffi noted, seemingly reasonable measures may be difficult to interpret: the time spent on a website, for instance, might signal popular use … or user confusion.

So let’s start again, this time from the bottom up: if you were managing an Open Government effort, what would you want to measure? For instance…

Virtual USA

In February 2009, Homeland Security rolled out Virtual USA (vUSA) for sharing geospatial data between emergency response agencies, “a national system of systems … so that disparate systems can communicate with each other”. It will allow responders in multiple locations to coordinate their efforts with a common set of images, and thereby reduce confusion and shift at least some activity away from phone calls. It is a bottom-up collaboration between DHS, first responder groups, and eight southeastern states. The system depends in part on states and localities to provide data, and is locally controlled: the agency providing the data owns it, controls how and when and with whom it is shared, and can use its existing software to do so.

vUSA seems impressive: two more pilots are starting, covering eleven more states, and the user community at FirstResponder.gov has about 150 members.

The nearest level of management

Continuing with our exercise, imagine that you’re in charge of vUSA. You face decisions about which additional GIS standards and technologies to incorporate, how to divide resources between technology and additional outreach or training for participating states, and whether to reach out to additional Federal agencies, for instance the Minerals Management Service, which had primary regulatory authority over the BP oil well.

To guide these decisions, you’d ask your staff these quantitative questions:
  • How much staff time in participating states has shifted from coordination via telephone to coordination via vUSA?
  • For what issues and data needs are participating states still using the phone?

and these qualitative ones:

  • What would have changed in the oil spill response if vUSA didn’t exist?
  • How do adoption and involvement differ among the various agencies in the participating states, and among the various components of each agency?
  • Are response sites still using fax, couriers, or other workarounds to share information?

Big picture managers

Now zoom out a bit: imagine that you’re a senior manager at the Department of Homeland Security (DHS), with ultimate responsibility for vUSA but also many other programs.

Given your agency’s recent history with Katrina on the Gulf Coast, among other things, you’ll monitor how smoothly local, state, regional, and federal actors work together in dealing with emergencies and wonder whether staff increases (e.g. for liaison officers), training, or incentives would be more likely than technology (such as vUSA) to improve coordination. And you’d consider whether coordination should be addressed more broadly than geospatial information sharing, for instance to include the development of shared goals among the coordinating agencies or agreement on division of roles and responsibilities.

You’d ask the questions we’ve already considered, but you’ve got a broader range of responsibilities. The vUSA manager’s career will live or die by the success of that effort, but you’re worried about DHS’s success in general. Maybe there are better ideas and more worthwhile efforts than vUSA.

To assess this, you’d ask your staff to research these issues:

  • How eager are other states to join the vUSA effort? (The two additional pilots would be a good sign.)
  • How has vUSA affected the formulation of shared goals for the oil spill clean-up effort?
  • Is each agency involved playing the role that it is best suited for in the clean-up?
  • How has the emergency response to the flooding in Tennessee, a participant in vUSA, differed from the response to flooding earlier this year in Minnesota and North Dakota, states that don’t participate in vUSA?

The last question is an example of a “natural experiment”, a situation arising out of current events that allows you to compare crisis management and response assisted by vUSA vs. crisis management and response handled without vUSA, almost as well as you could with a controlled experiment.

You’d also have some quantitative questions for your staff, for instance: how have the FEMA regions participating in vUSA performed on FEMA’s overall FY 2009 Baseline Metrics from the agency’s Strategic Plan?

And back to “measuring open government”

Note how much more compelling these “close to the ground” measures are than the generic “Open Government” metrics. If you were told, this morning, that a seemingly minor vUSA glitch had forced the oil spill command center to put in extra phone lines, no one would have to interpret that measure for you: you’d already know what you’re going to focus on today. And if, as a senior manager, you had a report in front of you demonstrating that none of the dozen hiccups in the response to North Dakota’s flooding were repeated in the vUSA-assisted response to the Tennessee disaster, you might actually look forward to a Congressional hearing.

Two of the Open Government measures are relevant:

  1. vUSA is a new platform for sharing data across government.
  2. It’s certainly intended to increase DHS’s responsiveness and its effectiveness in carrying out its mission, though it appears that only some vUSA data are publicly available.

But these considerations would hardly be useful to the line manager, and they’d be useful to the agency’s senior managers mostly as checkboxes or brownie points when big Kahunas from OMB or the White House came to call.

Conclusions

Of course, if we had picked other Open Government efforts, we would have identified different measures, but there are some general lessons for the problem of developing Open Government metrics.

Get your hands dirty

Reviewing an actual program, rather than “Open Government” in the abstract, makes it easier to get a handle on what we might measure.

Decision requirements drive measurement needs

The line manager, about to decide whether to reach out first to EPA or MMS in expanding vUSA’s Federal footprint, will be eager to know how heavily back channels have been used to bring these two agencies into the oil spill cleanup. The GIS guru will want to know whether there’s something about mapping ocean currents that can’t be handled by vUSA’s existing standards.

Different decision-makers require different metrics

In contrast, the DHS senior manager had better not get lost in the weeds of GIS interoperability, but ought to be ever alert for signs that the whole vUSA effort misses the point.

In other words, when someone asks “what’s the right way to measure the success of this open government effort?”, the appropriate answer is “who wants to know?”.

Seek out natural experiments

Even with great measures, Open Government champions will always be confronted by the challenge of demonstrating that their project truly caused the successful result. A “natural experiment”, if you can find one, will go a long way towards addressing that challenge.

[Cross-posted from Citizen Tools.]
