We (the National Academy of Public Administration) just released our final report on the Worker Dialogue, an online dialogue we hosted this past summer in partnership with the Department of Energy’s Office of Health, Safety and Security. The report contains some valuable insights and lessons learned about how to engage the workforce in decision making — in this case, soliciting ideas from front-line union workers on how to improve health, safety and security across the DOE complex.
The report outlines the themes that emerged from this specific discussion, but perhaps of more value to Govloopers are the broader lessons learned about how to “do” engagement/participation online, which data provide good analytics, and why government should continue this type of process in the future.
We’d love to hear your feedback, either here or on the document on Slideshare!
Pretty dense document at 80 pages
Some interesting things I saw:
-Participation varied a lot by area – this is normal for online dialogues but often not discussed
-Difficulty of having overlapping areas for dialogue
-By requiring people to fill out 8 questions and register before participating, they lost a lot of folks
-Difficulty of reaching employees directly – going through the unions added an extra step, and information may not have reached the core audience
@Daniel: Would you expand a bit more on “losing” participants by requiring them to fill out eight questions and register before participating?
Based on prior experience facilitating online communities and discussions, there seems to be a fine line between wanting participants to have thorough member profiles — both to accurately reflect their “deep smarts” and to let other members locate them as SMEs through a keyword search — and making the registration process so cumbersome that it hinders participation.
@Steve – Thanks for the distillation. Dense, yes.
@Michele – We concluded that we may have “lost” potential participants due to the high barrier to entry to the Dialogue. Those eight questions were demographic ones that we used as descriptive data about the participant population — the responses weren’t publicly displayed like a profile or user account, where users might actually WANT to describe themselves at length.