
Ten Things to Monitor As Agencies Invite Input On Open Government Plans

This post was originally published on the Intellitics blog: Ten Things to Monitor As Agencies Invite Input On Open Government Plans
Now that a whole lot of agency.gov/open websites are live and many agencies have indeed set up a “mechanism for the public to […] [p]rovide input on the agency’s Open Government Plan,” it’s time to figure out what to watch for over the coming weeks and months in order to evaluate the success of these initiatives.
As I noted back in January, my hope is that these new projects will address and improve upon three key issues that we saw during last year’s Open Government Dialogue (namely, lack of convener involvement, insufficient moderation, herding).
All in all, I’ll keep an eye on the following (in no particular order):
  1. Expectation management: Is the agency clear about the scope of their participation initiative and their promise to the public? Do participants know what impact they can reasonably expect and when?
  2. Community ground rules: Every agency should have these “rules of engagement” in place and be ready to enforce them if needed. Bonus points for friendly, easy-to-understand language!
  3. Level of convener involvement/participation: Does the agency become actively engaged in the discussions?
  4. Quality of moderation: Will the agency manage to keep discussions on topic and moderate distractions in a fair but timely manner?
  5. Quantity of participation over time: How many participants will sign up? How much content will they produce? (Luckily, IdeaScale exposes a few basic metrics in real time, such as the number of ideas, comments, votes and registered users.)
  6. Outreach and diversity of participants: Does the agency manage to attract a broad range of participants from various backgrounds? Or do usual suspects dominate the discussions?
  7. Conclusion and impact: This one will be especially interesting as there doesn’t seem to be an end date defined for any of these initiatives. In case of ongoing participation programs, does the agency at least share interim results?
  8. Tech support: Does the agency address technical support questions and resolve any issues in a timely manner?
  9. Project communications: Does the agency offer ways for participants to stay in the loop (or get up to speed quickly) with regard to current state of the discussion, frequently asked questions, highlights, interim results, next steps etc.?
  10. Mood: Overall, how happy is everyone with the process? What’s the energy level? Are things productive? Etc.
What else should be on the radar? Sound off in the comments.


9 Comments


Steve Ressler

I think a big one is “driving audience.” There’s a bit of a “build it and they will come” assumption here, which I don’t think is necessarily true.

Agencies should make sure to promote /open dialogues heavily via already built-in audiences (email lists, partner lists) and through other channels (prominent web placement, outreach to traditional media, leadership discussion at in-person meetings).

David Kuehn

One element I would want to measure is how well Open Government Plans integrate agency missions and programs. Specifically, I would look for leadership and involvement by the core program offices that have the critical interactions with the public and agency stakeholders. If the Plan is developed and supported solely by the CIO, I believe it will result in limited long-term change in how agencies conduct business.

Tim Bonnemann

@David

Excellent point. That’s probably the ultimate litmus test, and it goes back to what impact exactly any of these initiatives will have. If the key stakeholders aren’t on board (and agency leadership is one of those stakeholder groups), impact will be marginal.

Keith Moore

Tim, these are great things to monitor. I suggest close weekly monitoring of participation and input. I have noticed one or two websites that are clearly just checking a box to meet a deadline rather than aligning the OGD with their core mission. So setting benchmarks is an essential contribution to any assessment of OGD progress.

Alex Moll

Good work monitoring, Tim. Good blog post, too, and good to see you in this community. Building off Mr. Kuehn’s remarks, I think it would be interesting if agencies would take the following steps to improve the quality of the process:

1. Synthesize ideas from brainstorming, targeted or categorized under various questions or problems related to transparency, collaboration and participation.
2. Be action and problem-solving oriented. Connect people’s ideas–proactively.
3. Correlate ideas toward agency missions and goals.
4. Demonstrate the effect of people’s input–continually, not just at the end of three months.
5. Merge IdeaScale sites, from the former effort in summer 2009 through this year’s.
6. Create feedback loops from the new synthesis of knowledge.

These are six steps I would add to your 10 above. Steve’s right, too: agencies need to “drive” audiences, but they also need to cultivate online community and interaction among people’s ideas. I would invite agencies to “build” audiences as well. Kuehn’s right, too: the CIO’s perspective cannot be the only one if we want long-term success; we need the cultural piece as well.

One of the most fundamental and overlooked pieces of success is new Web 2.0 technology and software that builds knowledge cumulatively (usually with some manual operation, naturally). Effective problem-solving, whether short or long term, at least for non-routine problems, requires synthesizing input into re-emerging presentations for public feedback. That’s a missing link in the entire online process, whether IdeaScale or otherwise. Those are some initial thoughts… always thinking.

Tim Bonnemann

Summarizing and synthesizing the conversations, ideally on a daily basis, and feeding the results back to the participants is one of the biggest opportunities for improvement in these kinds of idea collection and discussion efforts. It’s a manual process, of course, and may require skilled facilitators, but the benefits to the participants are obvious.

I address this challenge in more detail here: 14 Ways to Make Online Citizen Participation Work: “Keep Folks in the Loop!”

We saw a little bit of that during phase 2 of the Open Government Dialogue: each new topic was kicked off with a blog post that summarized the previous one.