IV. Evidence-Based Practice: More Than One Approach

Three main approaches can be used to translate research evidence on effective programs into everyday practice for practitioners and policymakers. The first is direct evaluation of each individual program used in practice to confirm its effectiveness and, if it is found ineffective, to use that evidence to improve or terminate it. The second is to implement with fidelity a program from a list of model programs certified by an authoritative source as having acceptable evidence of effectiveness. The third is to implement a type of program that a meta-analysis of many studies has shown to be effective on average, and to do so in the manner that the research indicates will yield that average effect or better.

A. Direct Evaluation of the Effects of the Program as Implemented

The form of evidence about effectiveness that is most specific to a program as it is actually practiced is an impact evaluation of that specific program conducted in situ. With accompanying process information about the nature of the services delivered and the juveniles receiving those services, such research can assess program effects on selected outcomes in relation to the way the program is implemented. To provide the most valid results, an impact evaluation must use a control group of comparable juveniles who do not receive the program, preferably with random assignment to the program and no-program conditions. When well executed, an impact evaluation of this sort provides a credible assessment of a program's effects on juvenile offenders' recidivism and any other measured outcomes of interest. The process component, in turn, helps identify implementation problems that may need to be corrected to improve the effects, as well as subgroups of offenders who respond more or less positively to the program. A fully developed impact evaluation, therefore, not only assesses the program's effects on the intended outcomes but also provides a basis for diagnosing any shortfalls in those effects and guidance for program improvement.

The main advantage of direct evaluation is that the results apply in a very specific manner to that particular program as practiced. The evidence base supporting its effectiveness, therefore, is not drawn from studies done elsewhere and extrapolated to that program, with the attendant questions about how well that evidence applies. The disadvantages of direct evaluation as an approach to evidence-based practice, however, are considerable.
First, impact evaluation requires resources, technical expertise, and favorable conditions for creating a control group and collecting the desired process and outcome measures. For a juvenile justice system that uses many programs, mounting a credible impact evaluation of each would be prohibitively expensive. In addition, the specificity of the evaluation can itself be a limitation. If the program changes in any significant way, e.g., through program improvement efforts or natural changes in its clientele, the results of the evaluation may no longer apply. Providing evidence that the altered program is still effective then requires a new impact evaluation. As a practical matter, direct impact evaluations are generally conducted only for innovative or relatively novel programs that have not already been tested, or to replicate the findings of such evaluations when those programs are applied in different circumstances. Indeed, it is research of this sort that provides the evidence for model program lists.
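As a concrete illustration of the core arithmetic in the randomized impact evaluation described above, the sketch below compares recidivism rates for juveniles randomly assigned to a program or to a no-program control condition. It is a minimal, hypothetical example, not code or data from this article; the counts and function name are invented for illustration.

```python
# Minimal sketch of a randomized impact evaluation's core estimate:
# the difference in recidivism rates between program and control groups.
# All names and data here are hypothetical.
import math

def impact_estimate(program_reoffended, program_n, control_reoffended, control_n):
    """Program-minus-control difference in recidivism rates, with a 95% CI."""
    p_t = program_reoffended / program_n   # recidivism rate, program group
    p_c = control_reoffended / control_n   # recidivism rate, control group
    effect = p_t - p_c                     # negative = program reduced recidivism
    se = math.sqrt(p_t * (1 - p_t) / program_n +
                   p_c * (1 - p_c) / control_n)
    return effect, (effect - 1.96 * se, effect + 1.96 * se)

# Suppose 120 of 400 program youths and 150 of 400 controls reoffend.
effect, (lo, hi) = impact_estimate(120, 400, 150, 400)
print(f"Estimated effect on recidivism: {effect:+.3f} "
      f"(95% CI {lo:+.3f} to {hi:+.3f})")
```

Random assignment is what permits reading this difference as the program's effect; with a nonequivalent comparison group, the same arithmetic would confound the program's effect with preexisting differences between the groups.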