GAO-12-208G, Designing Evaluations: 2012 Revision

Chapter 3: The Process of Selecting an Evaluation Design

Assess the Relevance and Quality of Available Data Sources

… especially well or especially poorly. Findings supported by a number of soundly designed and executed studies add strength to the knowledge base exceeding that of any single study, especially when the findings are consistent across studies that used different methods. If, however, the studies produced inconsistent findings, systematic analysis of the circumstances and methods used across a number of soundly designed and executed studies may provide clues to explain variations in program performance (GAO 1992b). For example, differences between communities in how they staff or execute a program or in their client populations may explain differences in their effectiveness.

A variety of approaches have been proposed for statistically cumulating the results of several studies. A widely used procedure for answering questions about program impacts is "meta-analysis," a way of analyzing "effect sizes" across several studies. An effect size is a measure of the difference in outcome between a treatment group and a comparison group. (For more information, see Lipsey and Wilson 2000.)
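As a minimal sketch, assuming the standardized-mean-difference metric treated at length by Lipsey and Wilson (the formulas below are illustrative, not given in this guide), study i's effect size and a fixed-effect combined estimate can be written as

\[
d_i = \frac{\bar{x}_{Ti} - \bar{x}_{Ci}}{s_{\mathrm{pooled},i}},
\qquad
\bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i},
\qquad
w_i = \frac{1}{v_i},
\]

where \(\bar{x}_{Ti}\) and \(\bar{x}_{Ci}\) are the treatment and comparison group means in study \(i\), \(s_{\mathrm{pooled},i}\) is their pooled standard deviation, and \(v_i\) is the estimated sampling variance of \(d_i\), so that more precise studies receive more weight in the combined estimate.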

Depending on the program and study question, potential sources for evidence on the evaluation question include program administrative records, grantee reports, performance monitoring data, surveys of program participants, and existing surveys of the national population or private or public facilities. In addition, the evaluator may choose to conduct independent observations or interviews with public officials, program participants, or persons or organizations doing business with public agencies.

In selecting sources of evidence to answer the evaluation question, the evaluator must assess whether these sources will provide evidence that is both sufficient and appropriate to support findings and conclusions on the evaluation question. Sufficiency refers to the quantity of evidence: whether it is enough to persuade a knowledgeable person that the findings are reasonable. Appropriateness refers to the relevance, validity, and reliability of the evidence in supporting the evaluation objectives. The level of effort required to ensure that computer-processed data (such as agency records) are sufficiently reliable for use will depend on the extent to which the data will be used to support findings and conclusions and the level of risk or sensitivity associated with the study. (See GAO 2009 for more detailed guidance on testing the reliability of computer-processed data.)
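GAO 2009 describes the testing procedures themselves; purely as an illustrative sketch (the field names and value range below are hypothetical, not drawn from that guidance), basic electronic tests of agency records might check completeness, duplication, and plausibility of values:

import pandas as pd

def basic_reliability_checks(records: pd.DataFrame) -> dict:
    """Illustrative electronic tests of computer-processed data.

    The checked fields ("case_id", "age") and the 0-120 age range are
    hypothetical examples, not requirements from GAO 2009.
    """
    return {
        # Completeness: share of missing values in each field
        "missing_share": records.isna().mean().to_dict(),
        # Consistency: duplicated case identifiers may signal double counting
        "duplicate_ids": int(records["case_id"].duplicated().sum()),
        # Accuracy: implausible values flag data-entry or processing errors
        "implausible_ages": int((~records["age"].between(0, 120)).sum()),
    }

For example, basic_reliability_checks(pd.read_csv("records.csv")) would summarize missing data, duplicate cases, and out-of-range ages before the records are relied on to support findings.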

