Building Capacity for Evaluation: We’re Bypassing Critical First Steps

David Goodman, Director of Impact at Fluxx

Nov 3, 2016 2:29:48 PM


This post also appears on the blog of the Center for Effective Philanthropy.

Any time those of us in the philanthropic sector have an opportunity to learn more about one another and the work we do, we all benefit. A perfect example is the latest report from the Center for Effective Philanthropy, Benchmarking Foundation Evaluation Practices.

The key takeaway: The sector has limited capacity to take on meaningful evaluation. And foundations bypass developmental or formative evaluation at their own risk. Without undertaking this important work up front, capacity for evaluation practices will remain woefully inadequate.

While the evaluation practices report provides many valuable insights into the evaluation capacities, experience, and expenditures of over 100 foundations, it also confirms that many foundations lack, or possess only limited, capacity to conduct evaluation in a meaningful and appropriate way.

For example, 66 percent of foundations surveyed for the report do not have a dedicated evaluation department. Only five percent of all respondents have an advanced degree in evaluation. And given the variation in rigor and methodological emphasis of different graduate programs, we simply cannot assume that the 50 percent of survey respondents with advanced degrees all share the same research or evaluation expertise.


At the same time, we all know that foundation staff are already spread thin and many wear multiple hats. The report tells us that at foundations without a dedicated evaluation unit, much of the evaluation-related work is conducted by members of the program or operations staff or by members of the executive team. While some of these individuals may have been formally trained as researchers or evaluators, we can assume that some have not and are nonetheless responsible for conducting or managing evaluations. Trained or not, they are carrying out evaluation-related activities on top of their other foundation responsibilities.

So where does that leave us now? How do we take the information and insights from the CEP report and turn them into action to help build critical research, evaluation, and learning capacities across the philanthropic sector?

The key is revealed in the CEP report itself. Consider: 80 percent of survey respondents never or only occasionally engage in developmental or formative evaluation for foundation initiatives, and even fewer engage in this kind of evaluation with their grantees.

It should not be a shock to us that in this era of Impact! Impact! Impact! – or what I call the #impactindustrialcomplex – we see many foundations bypass the critical steps of developmental or formative evaluation. This, I believe, is where valuable opportunities to build data and evaluation capacities reside, as well as opportunities to alleviate the burden on staff tasked with conducting rigorous and meaningful evaluation.

Build a Vision

I get it. Developmental and formative evaluations are hard and often frustrating – but, alas, essential. They are where we sit down as a team and build consensus around what we want to do, how we plan to do it, and how this all holds up once we implement it. They are also where we do the hard work of researching which outputs, outcomes, and indicators are appropriate for what we plan to do, and of outlining the relationships between them that represent our “story of change.” Additionally, this is where we observe the differences in goals, preferences, expertise, and experience associated with evaluation that exist across (and sometimes within) foundation teams. It is also where we discuss and reconcile these differences and build consensus on a shared vision for the initiative and our story of change.

But this is why this process is so valuable! We build valuable knowledge and understanding of our initiatives as we research and choose appropriate components (activities, outputs, and outcomes) and meaningful ways to measure them (indicators). We also better understand the context in which they are implemented and the potential factors that may contribute to our overall impact.

Validate an Approach

These often-overlooked evaluations (developmental and formative) also provide us with the opportunity to validate our framework and to test it, making sure that its components are appropriate and meaningful and remain so after implementation. This is critical because without validation we can have little confidence that the result of a summative evaluation can be attributed solely to our initiative and not to some other factor we did not consider. So, yes, all of those “successful” impact evaluations completed by foundations that bypassed the developmental and formative work may simply be the result of factors that have nothing to do with the initiative itself.

Create a Culture of Collaboration

Through these evaluation processes foundations also learn about their colleagues and better understand their work and how it may contribute to their own. In particular, staff learn about the data and knowledge possessed by different foundation teams, as well as the evaluation expertise and experience of their colleagues. This builds foundational capacity by allowing staff to leverage the knowledge and experience of colleagues, learn from one another, and (hopefully) foster a culture of collaboration.

Learn and Share

The knowledge and information captured during these evaluations can also be the foundation of a knowledge management and sharing system within the foundation. Not only does this build capacity by collecting and organizing data and information that would normally reside in the head of a program or grant officer, it also helps reduce the time and resources needed to locate and access data, information, or expertise. This is especially important for foundations that traditionally work in silos or have staff that wear many hats. It also helps prevent the loss of capacity when a staff member leaves the foundation or transitions to another role.

Obviously, these are just a few examples of how engaging in critical developmental and formative evaluations builds evaluation capacity within a foundation; there are many other ways as well. The key is to take the first step, and the CEP report provides us with a great place to start. Any attempt to enhance your foundation’s knowledge, understanding, and use of data or evaluation will begin that process.

We know that evaluation is challenging and often frustrating, especially developmental and formative evaluation, but it is not impossible and definitely not without considerable rewards. There are some great examples of foundations that have engaged in these critical evaluations and will be the first to tell you how beneficial they were. If you are such a foundation, I’d love to hear from you. Let’s join together in the ongoing quest to build the capacities of the sector.

Dr. David Goodman is the Director of Impact at Fluxx. David is passionate about finding ways to better use data, metrics, and research to achieve greater impact and sustainability. In his role at Fluxx, David engages with both grantmakers and nonprofits to empower them to use data to turn a single success story into a thousand similar success stories. David brings to Fluxx extensive experience leading and assisting teams in the design, implementation, analysis, and oversight of research and evaluations for the public, private, and nonprofit sectors. David earned a Ph.D. in political science from Rice University in Houston, Texas, and graduated with B.A. and M.A. in political science from California State University, Northridge. David can be reached at david@fluxxlabs.com, and on Twitter @MeasureDoc.
