PDF (28 pages; 360 KB) - GSDRC


Impact evaluation: an introduction

The recent emphasis on accountability and results-based management has stimulated interest in evaluating not just the process, outputs and outcomes of development programmes, but also their impact (ultimate effect) on people's lives. Impact evaluations go beyond documenting change to assess the effects of interventions on individual households, institutions, and the environment, relative to what would have happened without them – thereby establishing the counterfactual and allowing more accurate attribution to interventions.

This counterfactual approach to evaluation is increasingly advocated as the only reliable way to develop an evidence base on what works and what does not in development. There are some 800 quantitative impact evaluations in existence across a wide range of sectors, and more are in progress or being commissioned. There is growing consensus that more rigorous quantitative approaches such as randomised control trials (RCTs) should be used more widely, but they are not appropriate in all contexts.

Where RCTs are not appropriate, a range of quantitative counterfactual approaches remains for large n interventions (where there are many units of assignment to the intervention – such as families, communities, schools, health facilities, even districts). It is also possible to collect outcomes data using qualitative methods within the context of a counterfactual evaluation design. For small n interventions (where there are few or only one unit of assignment – such as an intervention carried out in just one organisation, or one which affects everyone in the relevant population), mixed methods that combine quantitative and qualitative methods may be appropriate. All impact evaluations should collect information along the causal chain to explain not just whether the intervention was effective, but why, and so enhance applicability/generalisability to other contexts.

Lucas, H. and Longhurst, H., 2010, 'Evaluation: Why, for Whom and How?', IDS Bulletin, vol. 41, no. 6, pp. 28-35
http://www.gsdrc.org/go/display&type=Document&id=4040
This article discusses theoretical approaches to evaluation and draws on experiences from agriculture and health. It notes that different stakeholders may have varying expectations of an evaluation and that alternative approaches to evaluation are more suited to meeting some objectives than others. Randomised control trials, or well-designed quasi-experimental studies, probably provide the most persuasive evidence of the impact of a specific intervention, but if the primary aim is systematic learning, a Theories of Change or Realistic Evaluation approach may be of greater value. If resources permit, different approaches could be combined to cover both accountability and learning objectives. As there will be trade-offs between objectives, transparency and realistic expectations are essential in evaluation design.

White, H., 2009, 'Some Reflections on Current Debates in Impact Evaluation', Working Paper 1, 3ie, New Delhi
http://www.gsdrc.org/go/display&type=Document&id=4103
There is a debate in the field of impact evaluation between those promoting quantitative approaches and those calling for a larger range of approaches to be used. This paper highlights four misunderstandings that have arisen in this debate. They involve: 1) crucially, different definitions of 'impact' – one based on outcomes and long-term effects, and one referring to attribution; 2) confusion between counterfactuals and control groups; 3) confusion of 'attribution' with sole attribution; and 4) unfounded criticism of quantitative methods as 'positivist' and 'linear'. There is no hierarchy of methods, but quantitative approaches are often the best available.

Asian Development Bank, 2011, 'A Review of Recent Developments in Impact Evaluation', Asian Development Bank, Manila
http://www.gsdrc.org/go/display&type=Document&id=4220
How can impact be credibly attributed to a particular intervention? This report discusses the merits and limitations of various methods and offers practical guidance on impact evaluation. A rigorously conducted impact evaluation produces reliable impact estimates of an intervention through careful construction of the counterfactual, using experimental or non-experimental approaches.

Attribution and the counterfactual: the case for more and better impact evaluation

Development interventions are not conducted in a vacuum. It is extremely difficult to determine the extent to which change (positive or negative) can be attributed to the intervention, rather than to external events (such as economic, demographic, or policy changes), or to interventions by other agencies.

Impact evaluations attempt to attribute change to a specific programme or policy and establish what would have happened without the intervention (the counterfactual) by using scientific, sometimes experimental, methodologies such as randomised control trials or comparison groups.
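The counterfactual logic described above can be sketched numerically: under random assignment, the comparison (control) group stands in for the counterfactual, so the difference in mean outcomes estimates the intervention's average effect. A minimal illustration with simulated data (the outcome model and effect size are hypothetical, chosen purely for demonstration):

```python
import random

random.seed(1)

# Illustrative sketch with simulated data. Under random assignment, the
# control group approximates the counterfactual, so the difference in
# mean outcomes estimates the average treatment effect (ATE).
n = 1000
true_effect = 2.0  # assumed programme impact, for illustration only

# Outcomes: baseline variation, plus the effect for those who received
# the intervention.
treated = [random.gauss(0, 1) + true_effect for _ in range(n)]
control = [random.gauss(0, 1) for _ in range(n)]  # stands in for the counterfactual

ate = sum(treated) / n - sum(control) / n
print(f"Estimated average treatment effect: {ate:.2f}")
```

With a non-random comparison group (as in quasi-experimental designs), the same subtraction would be biased by pre-existing differences between the groups, which is why the careful construction of the counterfactual stressed above matters.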
