
Chapter 19

Box 19.1 Methodological debate: polarization or pragmatism?

• Historically, there has been debate over the relative merits of quantitative and qualitative research methods in the evaluation of social interventions.
• The period from the late 1960s to the early 1980s was a "golden age of evaluation", with 245 "randomized field experiments" conducted in areas such as criminal justice, social welfare, education and legal policy.5

• Pragmatic mixed-method approaches, where methods or combinations of methods are chosen pragmatically to address the specific research question,6 have been lacking but are now developing.

• Public health is necessarily cross-disciplinary, requiring the combination and integration of research methods from a diversity of contributing disciplines.7
• More recently, there has been a call for a transdisciplinary science approach using a shared conceptual framework to draw together the most rigorous and appropriate discipline-specific theories, models, methods and measures for the question being posed.8

A further lesson to be learned from experience elsewhere is that the term "evaluation" covers a wide range of activities, which vary greatly across a number of dimensions. While this chapter focuses on the evaluation of community interventions, within that focus it is important to recognize that evaluation projects will vary according to the purpose of the evaluation, the resources available to conduct the evaluation, and the complexity of the intervention to be evaluated. We consider each of these three dimensions, with a primary focus on the evaluation of complex community interventions, and the key stages in the evaluation of such interventions.

Evaluation: purpose and resources

In planning any evaluation, it is important to consider why that evaluation is taking place. Many evaluations, particularly those carried out by practitioners rather than researchers, are undertaken primarily as an exercise in accountability, with an emphasis on documenting or measuring what happened in a particular funded activity, possibly with some attempt to identify its impact. Such evaluations are of limited scope and are not really the concern of this chapter, as they are more appropriately conducted within a project management or performance assessment framework rather than being considered as evaluative activities. Any true evaluation should aim to produce learning and/or improvement. A good professional ethic requires that lessons are learned regarding the process and impact of an intervention, and that there is continuous, ongoing assessment of whether the intervention is working as anticipated and having the desired outcomes. It is critical not to dismiss the possibility that interventions can do harm. Many well-intentioned interventions have been found to do more harm than good in terms of their main purpose,1 while others have unanticipated impacts or are detrimental to subgroups of the target population. It is also important that professionals strive to improve the quality of interventions, whether by improving their reach, effectiveness, efficiency or equity.

What is evaluation?

There has been much debate as to the definition of "evaluation", and how it is distinct from "research". Shaw2 proposes a three-level taxonomy, in which "evaluation" (which we refer to as practitioner evaluation) is characterized by a focus on practical problems, with the objective of informing practice immediately and locally. It is usually undertaken by practitioners with little emphasis on scientific rigor, and amounts to an enhanced form of reflexive professional practice. "Evaluation research" uses stronger methods and seeks to have an impact on practice to improve effectiveness and efficiency, with dissemination through professional and policy networks and in the grey literature. Shaw's third level is "applied research", which is led by researchers using strong methods and is disseminated through peer-reviewed scientific papers, with the aim of producing generalizable knowledge that has a long-term impact on theory and practice.

This chapter adopts a definition of evaluation in line with Pawson and Tilley,3 who see the purpose of evaluation "as informing the development of policy and practice"4 rather than focusing simply on measurement or increased understanding.

