
A Framework for Evaluating Systems Initiatives

2 Evaluating Components

Initiatives that concentrate on a system's components attempt to improve the system by shoring up its individual subsystems, programs, or interventions. For example, these initiatives may pilot new programs, expand access to particular programs or services, or introduce quality improvement initiatives.

Evaluation Questions

Evaluations of initiatives focused on components share much in common with traditional program evaluations, as both assess individual programs or interventions. Also like program evaluations, initiative evaluations in this area address questions in two main areas—program implementation and program impacts. Key questions include:

1) Did the initiative design and implement system components as intended?
2) Did the components produce their intended impacts for beneficiaries?

Evaluation Methodologies

Again, because the focus is on individual programs or interventions, evaluations here can borrow from traditional program evaluation approaches, which feature the systematic application of social science research designs and methods to assess the implementation and effectiveness of programs or interventions.

Evaluations that examine implementation use some form of program monitoring or process evaluation. Program monitoring addresses questions about 1) the extent to which the program is reaching its target population, 2) whether program delivery matches design expectations, and 3) what resources have been used to deliver the program.34 Program monitoring often goes hand in hand with impact assessments (see below), as monitoring addresses questions about why a program was or was not effective.
A wide array of both quantitative and qualitative methods can be used for program monitoring, such as observations, participant surveys or focus groups, staff member interviews, and document or record reviews.

Evaluations that examine questions about program impacts may use experimental or quasi-experimental designs that employ a range of possible quantitative or qualitative methods, although quantitative data generally prevail in impact assessments. These designs assign individuals (randomly or non-randomly) to participant and non-participant groups and then compare those groups using, for example, repeated measurements. Experimental and quasi-experimental designs generally provide the most definitive attributions of causality and remain the program evaluation "gold standard." They are expensive to construct and implement, however, and random assignment may not be appropriate for programs that feature enrollment inclusivity and openness, because withholding the program from a comparison group would violate the program's design.

34 Rossi, P., & Freeman, H. (1993). Evaluation: A Systematic Approach. Newbury Park, CA: Sage Publications.
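The experimental logic described above—randomly assigning individuals to participant and non-participant groups and comparing their outcomes—can be sketched in a few lines of Python. Everything here is invented for illustration (the sample size, the outcome scale, and the assumed 5-point program effect are arbitrary); it is a minimal sketch of a randomized design with a difference-in-means impact estimate, not an implementation of any particular evaluation.

```python
import random
import statistics

# Illustrative simulation of a randomized impact design.
# All numbers are hypothetical: 200 individuals, outcomes on a
# 0-100-style scale, and an assumed true program effect of 5 points.

random.seed(42)

individuals = list(range(200))
random.shuffle(individuals)
participants = individuals[:100]      # randomly assigned to the program
non_participants = individuals[100:]  # comparison (control) group

def outcome(treated):
    """Simulated post-program outcome score for one individual."""
    baseline = random.gauss(50, 10)   # underlying variation across people
    return baseline + (5 if treated else 0)  # assumed program effect

treated_scores = [outcome(True) for _ in participants]
control_scores = [outcome(False) for _ in non_participants]

# With random assignment, the difference in group means is an
# unbiased estimate of the program's impact.
impact_estimate = (statistics.mean(treated_scores)
                   - statistics.mean(control_scores))
print(f"Estimated impact: {impact_estimate:.1f} points")
```

Because assignment is random, the two groups differ systematically only in program exposure, which is what licenses the causal attribution the passage describes; a quasi-experimental design would replace the random assignment step with matching or statistical adjustment.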
