Evaluation, both quantitative and qualitative, is a great topic for discussion.
Data and evaluation are not the same thing, nor always related. However, a solid evaluation plan always includes data of all kinds. That data tells a story.
Evaluation Resources – Books, Articles, Tools, etc
January 26, 2010 at 5:55 am #90078
So, Andrea suggested it might be useful to share resources that we’ve found helpful and which make evaluation more accessible. Suggest we do that here, since it’s probably easier (especially for later searching) to keep everything together in one dedicated discussion, rather than having to pick relevant posts out of a growing list of comments on many different topics. When listing resources, would also suggest including some info on how or why you find them helpful.
Here’s one. I found this report by the California Endowment called “The challenge of assessing policy and advocacy activities” well worth the read. It’s not short (about 50 pages) but it’s clear, and much of it is appendix and examples of what various nonprofits are doing around program evaluation related to policy and advocacy work. What I liked most is that the report focuses on what they call “prospective” or forward-looking evaluations. Rather than evaluations that only look back after a program cycle is completed, a prospective approach asks how we can get real-time information to make our programs better as we go, making mid-course corrections as needed. It’s about setting up feedback systems and measures to help you improve. With a backward-looking eval, it’s often too late for that.
Final thought. Though ease of use is the main goal here, not sure we want to shun things just because they may be a bit technical. Sometimes technical stuff is very useful, so maybe that becomes an opportunity for someone to translate the tech into English. 😉
March 6, 2010 at 4:48 pm #90086
Thanks, Joshua, for the great suggestion. I am a big fan of planning for results, counting wins big and small, and working very closely alongside the group for whom the program is designed, as a collaboration.
It only makes sense to make corrections in plans and results as things proceed. We are often taught to create a plan, get started and change as needed. Ditto on expected results. I do think documenting as we go is critical so we understand the what and why of the changes and for future efforts.
Technical and academic work is really important. I just want to “translate” it for general use and not have it be a reason folks don’t evaluate or understand how to increase their results.
I would encourage people to follow your lead and list good books, tools and resources. I will think about how to describe a planning process which integrates evaluation into the entire process, never as an afterthought.
May 30, 2010 at 5:24 pm #90084
Awesome idea. I have a delicious feed with some of my favorite evaluation & performance measurement resources: http://delicious.com/wesleyl/performance_measurement
Here are some of my absolute faves:
Book – Practical Evaluation, by Michael Quinn Patton
USAID’s reports are some of the best I’ve seen in terms of reporting tangible results, consistently, visually and with concrete examples of how they are progressing towards the very lofty goal of achieving global peace and security. It doesn’t get much tougher than that!
Here’s an easy how-to guide on creating a logic model: http://www.wkkf.org/knowledge-center/resources/2006/02/WK-Kellogg-Foundation-Logic-Model-Development-Guide.aspx
And an even shorter paper that describes Results-based management theory in very practical and simple terms: http://www.schacterconsulting.com/docs/means_ends.pdf
I also blog about this, mostly about performance measurement of online service delivery: http://usability4government.wordpress.com
I’m also trying to blog regularly here on GovLoop about this topic. I’m so excited to find other people who are interested. Obviously I’m over-committed in every way, shape, and form, but I’m trying… There are just too many interesting things going on. Maybe we should just link this spot to delicious feeds or something…
May 30, 2010 at 8:06 pm #90082
I would love to get more people involved with this thinking. What’s involved with delicious feeds? Thanks for your contribution of good reads, I am looking forward to reading them.
It is my opinion that while data may be far more plentiful these days, translating it all for the public in meaningful ways is a challenge for everyone.
Thinking outside the professional box so to speak.
June 1, 2010 at 4:08 am #90080
Laura, terrific refs! Here’s one more that I came across in the last month, which is my new fave for discussing how to measure impact without any initial mention of logic models or other terms that turn off my programming friends: http://bit.ly/aH4CsM Andrea, you may like this too.
The article is short (4 pages) and summarizes a relatively successful approach taken in England under Blair, which OMB is now trying to adapt. I’ve used the good questions the article raises to help make the point that most of the “work” in evaluation is not technical… it’s about program “content,” and it requires programming people to clarify the details of how their programs are supposed to work, what the key milestones are, etc. The approach has really engaged people.