Much of the work we do in the philanthropic and nonprofit sectors is about changing systems to accelerate social progress. Whether we focus on changing or influencing belief systems, operational systems, health delivery systems, educational systems, financial systems, or any of a host of other systems, we ultimately must think holistically and expansively if we want to create meaningful and sustainable change. It is one thing to conceive of an initiative to stimulate systems change; it is quite another to design and evaluate these efforts in ways that address both the effectiveness of the design process and products and the extent to which short- and long-term outcomes have been realized.
By its very definition, a system is an arrangement (pattern, design) of parts that interact with each other within the system's boundaries (form, structure, organization) to function as a whole. The saying, "the whole is greater than the sum of its parts," reflects the notion that it is not enough to focus our evaluative gaze on single goals, objectives, actors, processes, activities, and the like without attempting to understand the larger system in which the initiative lives.
At FSG Social Impact Advisors, we have become increasingly involved in evaluating initiatives that span multiple sites and varied project implementation strategies. These projects include a diverse set of stakeholders and implementers, all of whom have a key role in creating systems change: from transforming a downtown community, to creating more informed and engaged citizens, to ensuring healthy oceans and sustainable fishing. We have come to understand that evaluating systems change requires us to consider the following questions before we even begin developing an evaluation approach.
- What is the system?
- What is the system composed of (activities, structures, processes)?
- Who is in the system? What roles do the different players have? How do members of the system interact?
- How do actors communicate within the system?
- What external forces influence the system we are studying?
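Before designing the evaluation, it can help to capture the answers to these questions in a form we can actually interrogate. Here is a minimal sketch, in Python, of one hypothetical way to do that; every actor, role, and interaction below is an illustrative placeholder, not drawn from any actual engagement.

```python
# A minimal, hypothetical sketch of a "system map": actors, their roles,
# how they interact, and the external forces acting on the system.
# All names below are illustrative placeholders.

system_map = {
    "boundary": "district-wide teaching and learning",
    "actors": {
        "teachers": {"role": "deliver and adapt instruction"},
        "administrators": {"role": "allocate resources, set policy"},
        "parents": {"role": "support and advocate for students"},
        "students": {"role": "learn and give feedback"},
    },
    # Each interaction links two actors and names how they communicate.
    "interactions": [
        ("teachers", "administrators", "staff meetings"),
        ("teachers", "parents", "conferences, newsletters"),
        ("teachers", "students", "classroom instruction"),
        ("administrators", "parents", "school board sessions"),
    ],
    "external_forces": ["state testing mandates", "district budget cycles"],
}

# Even a toy structure like this lets us ask evaluative questions,
# e.g., which actors have no mapped interactions at all?
connected = {actor for pair in system_map["interactions"] for actor in pair[:2]}
isolated = set(system_map["actors"]) - connected
print("Actors with no mapped interactions:", isolated or "none")
```

Even a toy map like this makes gaps visible, such as actors no one communicates with, before any data collection begins.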
Once we explore and map the answers to these and other questions that emerge from this discussion, we can begin designing the evaluation. At the same time, we are learning that evaluating systems change initiatives requires a willingness to be comfortable with ambiguity, a willingness to embrace emergence, and a commitment to engaging stakeholders throughout the evaluation process. In essence, evaluations that adopt a systems perspective require being attuned to small changes, exploring interactions among variables (quantitative and qualitative), and looking for patterns in the seemingly disordered data.
Some wonderful folks from the Ball Foundation, who partner with mid-size urban school districts committed to transforming schooling and learning, take a systems view of their work and the evaluation of it. They suggest that "most traditional ways of generating metrics assume a machine metaphor: if we measure discrete parts in isolation and work to reduce variation and ensure compliance, we will ultimately get the results we seek. This is not necessarily a bad metaphor; the problem is that it is not entirely suited for the more complex and human systems like school systems and other organizations that we live and work in. While some parts of the system may lend themselves to machine-like metrics, it is essential that we expand our perspective to include a systems view." They suggest that when thinking about metrics, we:
- Measure results in ways that are descriptive as well as quantitative
- Make meaning around holistic system relationships, dependencies, and connections
- Provide feedback, generate learnings, and guide direction
- Be adaptive as goals evolve and emerge
- Create measures from within: capacity built for the people doing the work to create the measures and make meaning of them
- Define accountability as internal, arising from values, principles, and commitments
To accomplish the above, we need to look beyond the traditional tools and methods of evaluation. In addition to traditional methods such as surveys and interviews, we should consider options such as mapping tools, social network analysis, and storytelling. A distinguishing characteristic of these methods is that they explore patterns, interactions, and relationships among the parts, or, in other words, how the parts come together in a dynamic way to form the whole. For example, in the Ball Foundation's formative evaluation process in school districts, in addition to measuring changes in educators' competencies and students' performance through traditional evaluation methods, they are creating an Organizational Capacity Mapping Tool that explores changes in the patterns of how teachers, administrators, students, and parents come together to interact, make decisions, solve problems, allocate resources, and communicate with each other. Such a tool is collaborative by design, and it engages stakeholders not just in generating the data but in making meaning of it.
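To make one of these methods concrete, here is a minimal sketch of what a social network analysis might look like in code. It assumes the networkx library, and the actors and ties are hypothetical placeholders, not data from any actual evaluation.

```python
import networkx as nx

# Hypothetical stakeholder interaction network: an edge means two
# actors regularly communicate or collaborate.
G = nx.Graph()
G.add_edges_from([
    ("teachers", "administrators"),
    ("teachers", "parents"),
    ("teachers", "students"),
    ("administrators", "parents"),
    ("administrators", "district office"),
])

# Betweenness centrality: who brokers between otherwise
# disconnected parts of the system?
for name, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Even simple centrality measures like these, computed before and after an initiative, can surface shifts in who is connected to whom and who brokers information across the system, which is precisely the kind of pattern-level change a systems evaluation looks for.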
Evaluating with a systems perspective also means being intentional about learning throughout the evaluation process. It means being open to learning from unexpected outcomes, acknowledging that evaluation designs cannot predetermine all the factors that will be of interest, being committed to communicating and reporting evaluation results in user-friendly, accessible ways, using evaluation findings to inform action, and focusing on "differences that make a difference." (See Virginia Lacayo's powerful article on systems and evaluation.)
As I near the end of this blog entry, I realize that what I have written may appear to be answers to the question, "How might we zoom out to evaluate with a systemic view?" In reality, however, I have many more questions than answers. Some of the questions I hope we will explore are:
- How do we define the boundaries of any initiative's system, in terms of both processes and outcomes?
- How do we know which variables to focus on when there are so many?
- How can we begin to actively co-create evaluation questions and processes with stakeholders, and engage them in collaborative meaning-making that builds commitment for action?
- Given the complexity of human systems and the varied interactions among variables, causation and attribution will sometimes be difficult to establish. How willing are we to accept contribution as an evaluation outcome?
- Are particular evaluation methods most effective for evaluating systems change?
- How willing are we to let an evaluation's questions and findings emerge through the evaluation process, rather than establishing them a priori?
Hallie Preskill directs the Strategic Learning and Evaluation Center at FSG Social Impact Advisors and is based in Seattle, WA. Note: Many thanks to my Ball Foundation colleagues who contributed to this entry.