Chapter 19
The final stage of formative evaluation is to conduct an exploratory trial, in which a final draft or beta-test version of the intervention is tested in a small-scale summative evaluation. This is not designed or powered to identify an estimate of intervention effectiveness, but is intended to further test feasibility and acceptability, to identify any important remaining barriers or problems that need to be addressed, and to allow testing and estimation of key components of the summative evaluation methodology, such as outcome measurement and recruitment and retention rates. In many cases, the intervention development, pilot testing and exploratory trial phases may indicate that the intervention is not acceptable to the target population or to stakeholders in potential future implementation, or that there is no evidence that it is bringing about the anticipated changes and effects. This will lead to further modification or abandonment of the intervention. In other cases, the intervention will appear to be feasible, acceptable and potentially effective, and will thus be ready for large-scale summative evaluation.
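To make the role of these pilot estimates concrete, the sketch below shows one common way they might feed into planning the full summative trial: a normal-approximation sample-size calculation, inflated for the retention rate observed in the exploratory phase. This is a minimal illustration, not a prescription drawn from the chapter; the function name, the assumed effect size of d = 0.3 and the 80% retention figure are all hypothetical.

    # Minimal sketch (hypothetical numbers throughout) of how exploratory-trial
    # estimates of retention and a plausible effect size feed into sizing the
    # subsequent summative trial.
    from statistics import NormalDist

    def per_arm_sample_size(effect_size: float, alpha: float = 0.05,
                            power: float = 0.80) -> float:
        """Normal-approximation sample size per arm for a two-arm comparison
        of means, with the effect size expressed as Cohen's d."""
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
        z_beta = z.inv_cdf(power)            # power requirement
        return 2 * ((z_alpha + z_beta) / effect_size) ** 2

    n_analysable = per_arm_sample_size(effect_size=0.3)  # assumed plausible effect
    retention = 0.80                                     # observed in exploratory trial
    n_recruit = n_analysable / retention                 # inflate target for attrition

    print(f"analysable per arm: {n_analysable:.0f}; recruit per arm: {n_recruit:.0f}")
    # -> about 175 analysable, so roughly 219 recruited per arm under these assumptions

The particular numbers matter less than the dependency they illustrate: without credible exploratory-trial estimates of recruitment, retention and outcome measurement, the summative evaluation cannot be planned with any confidence.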
Summative evaluation
Once a thorough process of formative evaluation has been completed, an important remaining question, which for most policy decision makers is the most important question, is whether or not the intervention works. This requires an estimate of the intervention's effect, which may then be compared to that of competing interventions, often using some form of economic analysis trading off the costs and benefits of alternative programs. Pawson20 argues that, in the case of complex social programs, it is futile to attempt an experimental evaluation comparing a "policy on" condition with a "policy off" condition, since such programs are constantly being manipulated and renegotiated and are never stable. With such large-scale complex programs, Pawson states that "the hallowed comparison of treatment and controls is … that between a partial and complete mystery".
It is undoubtedly the case that many large-scale interventions are not feasibly or sensibly evaluated using experimental or quasi-experimental control group designs. This will generally be the case for mass media interventions applied to whole populations, and for large-scale complex social programs that will be substantially modified during the course of their evaluation. It is also unlikely to be worthwhile to implement a scientifically rigorous research design for an intervention that is so politically contentious that its evaluation is "doomed to success", the political cost of a negative evaluation being too high.21
Efficacy and effectiveness
However, it is also very important to recognize that an intervention need not be highly standardized and uniformly delivered in all instances in order for a valuable experimental summative evaluation to take place. That would be the case in an efficacy trial, which seeks to identify the impact of the intervention when delivered in ideal circumstances. Yet since health promotion interventions are so dependent on context adaptation and the quality of delivery, the value of efficacy trials is limited.22 For example, smoking education interventions have been found to work well in efficacy trials, when delivered by enthusiastic teachers with ample curriculum time, yet they have not been found to be effective when implemented in the real world.23 In an effectiveness trial, by contrast, a pragmatic approach is taken, with the intervention delivered in the trial in the same (variable) way as would realistically be achieved in the real world.
It is often argued that health promotion interventions are so dependent on the way they are implemented, and upon the context (environment, policy etc.) within which they are delivered, that context-dependent adaptation is crucial to maximize effectiveness and that, therefore, randomized controlled trials are not suited to their evaluation. However, the RCT design actually has the advantage that the randomization process ensures that systematic differences in external influences between groups do not occur, and thereby ensures that an unbiased estimate of the average effect of the intervention is obtained. It is, however, crucially important in an effectiveness trial of a complex community intervention to conduct a comprehensive qualitative investigation within the trial, so that these variable factors can be monitored. Thus, the qualitative research provides information on the factors that support or attenuate the effectiveness of the intervention. To undertake a trial of a complex intervention without an embedded qualitative process evaluation would be to treat the intervention as a "black box".
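A brief simulation can illustrate the point about randomization made above. The sketch below reflects the general statistical argument rather than anything in the chapter itself, and every name and number in it is hypothetical: each simulated participant has a "context" that both shifts the outcome and modifies how well the intervention works, yet because allocation is independent of context, the trial estimates average out to the true average effect.

    # Minimal simulation sketch (all numbers hypothetical): randomization yields
    # an unbiased estimate of the average effect even when external context
    # varies per participant and modifies the intervention's effect.
    import random

    random.seed(1)

    def run_trial(n=2000, true_avg_effect=0.5):
        """One simulated trial with context-dependent delivery and outcomes."""
        treat, control = [], []
        for _ in range(n):
            context = random.gauss(0, 1)              # variable external influences
            effect = true_avg_effect + 0.3 * context  # context modifies the effect
            outcome = context + random.gauss(0, 1)    # context also shifts the outcome
            if random.random() < 0.5:                 # randomized allocation
                treat.append(outcome + effect)
            else:
                control.append(outcome)
        return sum(treat) / len(treat) - sum(control) / len(control)

    # Individual trial estimates vary, but their mean converges on the true
    # average effect of 0.5: the estimator is unbiased.
    estimates = [run_trial() for _ in range(500)]
    print(f"mean estimate over 500 trials: {sum(estimates) / len(estimates):.3f}")

What such a trial alone cannot reveal is why the effect was larger in some settings than in others, which is exactly the gap the embedded qualitative process evaluation is intended to fill.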