Charity Navigator ranks nonprofits based on their overhead ratio (the amount of money spent on operations versus the mission), but as a recent study from North Carolina State University shows, a low overhead ratio is not always an effective way to measure a nonprofit's efficiency. Even so, funders, volunteers, and the general public often hold the pervasive idea that all of a nonprofit's funds should flow directly to the cause. This can lead to the dreaded Starvation Cycle: a state in which a nonprofit consistently keeps overhead costs so low that it impedes its current and future impact potential.
Measuring impact is challenging. There’s no two ways about it. Some funders skip important steps in development while others fail to pay enough attention to their grantees’ capacity or preferences when designing and implementing evaluations. What’s more, the variety of approaches to the work across the sector constrains the way organizations assess and even define impact.
As we finish up yet another informative and exciting GMN ... err … PEAK Grantmaking conference in Hollywood and head back to our day jobs, I’m left reflecting on my experiences talking to and learning from a crowd of really smart people, committed to solving some really important issues in society.
It’s no secret that grantmakers continue to struggle with how to approach evaluation and create a path towards more meaningful and informative impact measurement. With good reason! This is tough work.
More Than Words: Key Takeaways from J.P. Morgan and the “How” of Measurement and Impact in Philanthropy
That’s why a recent event hosted by The Philanthropy Centre at J.P. Morgan in San Francisco was such a breath of fresh air. The gathering, How to Maximize Philanthropic Impact, was squarely focused on ensuring that the ideas presented there moved beyond just “talk.”
David Goodman served on the Vendor Plenary Panel at the 2016 Technology Affinity Group (TAG) annual conference, where he first discussed his views on data, evaluation, and the sometimes over-emphasis on "impact" in philanthropy.
While my background is in research and evaluation, I’ve spent a great deal of time working with foundations and nonprofits to build their capacity to understand and use research and evaluation. I’m very excited to be able to bring my experience to the philanthropic sector at a time when there is a growing realization that it can benefit from the expertise of researchers and evaluators from other fields or disciplines.
This is a good thing. It gets me out of bed in the morning.
Yet, as much as I am excited about the emphasis on data, measurement, and impact, there is also a part of me that wants to pause – just for a moment – to talk about the disproportionate emphasis on “impact” alone.
First off, what does it really mean to be data-driven? And second, how do you get there? These are two great questions I’ve thought a lot about. In fact, I asked folks from more than 100 foundations the same questions.
Several themes, along with a roadmap of proven strategies, emerged.
Whether you call it “all hands on deck” or “all oars in the water,” it has always made sense to me that we employ all the tools at our disposal to stanch the flow of intractable problems we face as a society.
Like many of you, we have been hearing more and more about the need for evaluating grant outcomes and assessing the impact of philanthropic work. In fact, you cannot attend a conference, read an article, or follow a blog or anyone on Twitter these days without hearing the drumbeat for "Impact! Impact! Impact!"
At Fluxx, we are in full agreement on the need for evaluation as a means to inform funding decisions, to improve the development and implementation of grant services, and to sustain impact over time. The problem we see, however, is that most of the guidance around assessing impact, and evaluations more generally, consists of "quick fixes" that lack a meaningful evaluative framework and fail to recognize that the success and appropriateness of any evaluation are heavily influenced by the program being implemented and the capacity of the organization conducting the evaluation.