Wednesday, June 08, 2011

The Elusive Craft of Evaluating Advocacy

An interesting article on the challenges of evaluating advocacy work:

Most successful foundations and nonprofits understand the importance of advocacy. Over the last decade, foundations have put more resources into advocating for the policies they believe in, with some notable successes. Yet grantmakers have often hesitated to plunge in. Sometimes they worry about appearing too political or partisan. But more often they hesitate because effective advocacy is difficult, and evaluating whether various approaches are working is even harder.

That is not the case when it comes to service delivery programs—such as well-baby clinics or job-training classes—where foundations, universities, and government agencies have developed sophisticated tools for evaluating the effectiveness of these efforts. The tools range from controlled experiments, to the extraction of best practices that can be adapted from one successful program to another, to a more malleable form of evaluation based on assessing an initiative's underlying theory of change. The development, refining, and implementation of these tools constitute a growing industry.

Unfortunately, these sophisticated tools are almost wholly unhelpful in evaluating advocacy efforts. That's because advocacy, even when carefully nonpartisan and based in research, is inherently political, and it's the nature of politics that events evolve rapidly and in a nonlinear fashion, so an effort that doesn't seem to be working might suddenly bear fruit, or one that seemed to be on track can suddenly lose momentum. Because of these peculiar features of politics, few if any best practices can be identified through the sophisticated methods that have been developed to evaluate the delivery of services. Advocacy evaluation should be seen, therefore, as a craft requiring trained judgment and tacit knowledge rather than as a scientific method. To be a skilled advocacy evaluator requires a deep knowledge of and feel for the politics of the issues, strong networks of trust among the key players, an ability to assess organizational quality, and a sense for the right time horizon against which to measure accomplishments. In particular, evaluators must recognize the complex, foggy chains of causality in politics, which make evaluating particular projects—as opposed to entire fields or organizations—almost impossible.

If foundations embraced the judgment-laden character of the effort—rather than giving up on advocacy or feeling they are falling short when their evaluations lack the scientific patina of service delivery program evaluations—the benefits would be profound. Funders could structure programs, often involving multiple unlikely bets, in ways that are more likely to succeed. Advocates could feel comfortable changing course as necessary. And foundations would be more likely to take chances on big efforts to change policy and public assumptions, rather than retreating to the safer space of incremental change.



The political process is chaotic and often takes years to unfold, making it difficult to use traditional measures to evaluate the effectiveness of advocacy organizations. There are, however, unconventional methods one can use to evaluate advocacy organizations and make strategic investments in that arena.

By Steven Teles & Mark Schmitt

Summer 2011
