Why do we evaluate? Sometimes it's for reflective validation: qualifying the success of a program after it is complete. Other times it's for active learning: seeing what is working well and what could be improved, and using this insight to change things for the better.
Evaluation for validation has an important role in comparing different approaches: Which approach has the most impact? Which gives the best value for money? How can this affect strategy moving forward? The downside of this type of evaluation is that it often doesn't produce conclusions until months or years after the actual project has ended, by which point the opportunity to change course or affect the project outcome is gone. Evaluation for active learning, on the other hand, allows you to take action as soon as a problem is identified. In design and innovation, evaluation for learning is a natural and essential part of the process.
In its most basic form, evaluation for learning is the intuitive thought process of trial and error that occurs within a designer's mind. But in a complex project, it's often helpful to put structure around evaluation, since having too many pieces of the design under consideration at once would be unmanageable. Structure can come in the form of a specific set of hypotheses or prototypes to evaluate, and a schedule governing the times for evaluation and the times for incorporating learnings.
One way we use evaluation at IDEO is to test design ideas at a very early stage, before we invest in further development. In a project about how nurses exchange information when they change shifts, an IDEO and Kaiser team tested rough prototypes in a hospital nursing unit. Three times a day, at each shift change, we gathered feedback and iterated on the concepts. By the end of the prototyping and field testing phase, we had a concept that felt right to the nurses; the early-stage evaluation had led to a product that was highly intuitive to use. It has now spread to all hospitals in the network, and has been declared a best practice by the Institute for Healthcare Improvement.
Another way we use evaluation is to guide programs at a larger scale. Ripple Effect is a collaboration between IDEO, Acumen Fund, and local water providers in India and Kenya to improve access to drinking water for the poor. (Click here for a short slide presentation on evaluation and the Ripple Effect project.) The project started in India, where we visited organizations in the field, gathered them together for a collaborative design workshop, and provided design and business support as they developed new concepts in areas such as water distribution, health awareness, and safe water storage. Throughout the process we were evaluating: collecting stories from the field to inform development, seeking direct feedback from the project's users on ideas, and later evaluating the success of the pilots themselves. The evaluation not only fed into the innovations, but also informed the process as a whole. We are now conducting the second phase of work in Kenya, which has been redesigned significantly to incorporate learning from India. For example, we are increasing the amount of time we spend in the field in preparation for the pilots, and we are emphasizing the parts of the design process that the Indian organizations found most helpful.
I'd love to pose these questions to our readers:
Guest blogger Sally Madsen is a designer and project leader at IDEO. On Friday, look for a response on this topic from Lakshmi Karan of the Skoll Foundation.