On August 12th, I had the good fortune to participate in the Innovation in Evaluation roundtable. In the spirit of full disclosure, I should confess that I have been a professional evaluator for more than 20 years; I have taught courses and workshops, written books and articles, and consulted with a wide range of public, private, and government organizations on various evaluation topics. My point of view is that evaluation is about asking questions critical to decision-making processes, and that an evaluation's findings should contribute to individual, group, and organizational learning. (Examples of my writing can be found here.)
There is little doubt that a strong wind is blowing these days, and that it often takes the form of "evidence-based practices," "what works," and "proof points" that suggest causal relationships between philanthropic giving and social impact. Perhaps it is just human nature to want to bring order out of chaos, to make the complex simple, or to control that which is dynamic and ever changing. Yet the world in which programs, initiatives, and social change occur is not static, predictable, or manageable. Too often, I have seen evaluation approaches and designs couched in the language of "rigor" that ignore the human element: what it means to live through and into the social problems and solutions that are at the heart of philanthropic giving. While the results may produce statistically significant findings, they often do little to answer questions about how or why a program did or did not make a difference, or how it might better achieve its goals.
I believe that the topic of this week's blog, "How do we put people at the center of evaluation?", is fundamentally about what it means to design and implement evaluations in ways that honor the voices and lived experiences of those who are participants in or recipients of the services, programs, and policies the field supports and funds. While I do think there are times when randomized controlled trials (RCTs) or quantitative designs may be appropriate, we must be extremely careful not to (a) overpromise what these designs can deliver, and (b) ignore more qualitative ways of knowing. (For an excellent editorial on the need to use alternative evaluation approaches, see here.)
It is through the systematic collection and analysis of qualitative data (in the form of words and pictures) that the human spirit lives in all that we do. If we truly want to understand the ways in which our work adds value and meaning, and affects those whom we hope to reach, then local context matters (IDEO's Jocelyn Wyatt's blog entry on this topic is a powerful example of what this looks like in practice). As such, RCTs are the antithesis of thinking locally. To illustrate the power of putting people at the center of evaluation, consider the following poem constructed by Cheryl MacNeil, an evaluator and faculty member at the Sage Colleges. It was constructed from a series of focus group interviews with three different constituencies involved in a government-funded self-help program.
Poetic Representation of "Role Identity"
Which side of the line am I on?
a psychiatric survivor
a full-time worker
running support groups
getting a paycheck

learn to play politics
case management
covering for staff
keeping my distance

mixing oil with water
walking a fine line
a political dance
a dance with the system

I don't want to sit in staff meetings
but where is the voice?
why doesn't anyone ask?
have you tried this?
I appreciate the need and desire to quantify, measure, and account for the many things we do in the philanthropic sector, especially as we strive for scale and replication. However, the strong wind that has propelled us in the direction of quantification has narrowed our thinking about what counts as a feasible, culturally appropriate, and meaningful evaluation design or method. I don't think it is a question of choosing qualitative over quantitative, or process over impact, but rather of being much more deliberate about what we want to know, why we want to know it, from whom we need to learn it, when we need to know it, and how we will use the findings. Ultimately, by putting people at the center of evaluation, we are forced to ask better questions, listen more deeply, see more clearly, and understand more fully.
Hallie Preskill directs the Strategic Learning and Evaluation Center at FSG Social Impact Advisors and is based in Seattle, WA.