Campaigns to End Violence against Women and Girls - Virtual ...
Advanced research designs for impact assessment

To establish robust links between campaign exposure and impact in terms of attitude and behaviour change, the following methods can generate a reliable comparison between members of the target audience who have been exposed to the campaign and those who have not:
An experimental trial is widely considered the most accurate way to assess the impact of an intervention such as a communication campaign. Popularly known in evaluation circles as RCTs (randomized controlled trials) and widely used in scientific research, this method identifies the difference between what a campaign achieved and what would have been achieved without the campaign.
Example: An evaluation of a media campaign on reducing sexual violence in high schools could assess impact through a formal experimental trial involving a control group and an experimental group. The evaluation could be conducted in two towns with similar socio-economic conditions. In each high school, the 11th-grade males and females would be administered an initial KAP (knowledge, attitudes and practices) survey on the topic (pre-test). After campaign exposure over a predetermined period at the "experimental" school (the students at the control high school would not be exposed to the campaign), 11th graders at both schools would be administered the same survey again (post-test). Comparing the pre- and post-test results at the two schools allows certain conclusions to be drawn on the effectiveness of the campaign, bearing in mind that other factors not considered in the research design may have exerted an influence as well.
Source: Potter, S. (2008). Incorporating Evaluation into Media Campaign Design. Harrisburg, PA: VAWnet, a project of the National Resource Center on Domestic Violence/Pennsylvania Coalition Against Domestic Violence.
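The pre-/post-test comparison in the school example above amounts to a simple difference-in-differences calculation. As a minimal illustration (all scores below are hypothetical, not data from any real evaluation):

```python
# Difference-in-differences on mean KAP survey scores (0-100 scale).
# All numbers are illustrative only.

def diff_in_diff(exp_pre, exp_post, ctrl_pre, ctrl_post):
    """Estimated campaign effect: the change at the experimental
    school minus the change at the control school, which nets out
    trends affecting both towns."""
    return (exp_post - exp_pre) - (ctrl_post - ctrl_pre)

# Experimental school improves from 52 to 64; control drifts 51 -> 54.
effect = diff_in_diff(52.0, 64.0, 51.0, 54.0)
print(effect)  # 9.0 points attributable to the campaign, assuming
               # no other systematic differences between the towns
```

The same caveat from the text applies: the subtraction only isolates the campaign's effect to the extent that the two schools would otherwise have changed in parallel.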
Experimental trials can produce reliable results if they are administered at a significant scale and with scientific rigour, which requires substantial resources in terms of time, skills and money. Ethical issues need to be taken into account as well: by definition, the control group will be excluded from campaign activities and any potential benefits involved.
Several alternative methods to RCTs are accepted as producing equally valid results while demanding less time and offering more flexibility:
• Repeated Measures: The same measurement, e.g. the same questionnaire, is administered to the same individuals or groups at intervals, e.g. every six months, to verify how their responses have evolved since the beginning and at different phases of the campaign.
• Staged Implementation: If a campaign is rolled out in different phases with a substantial time lag, the evaluation can compare areas exposed to the campaign in its early stages with areas that have not yet been exposed.
• Natural Variations in Treatment: In a large-scale campaign, implementation in some areas is bound to "fail" or not roll out exactly as intended. If these variations can be adequately tracked and measured, they can provide useful comparisons for impact assessment.
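The staged-implementation comparison can be illustrated with a small sketch. Assuming the same attitude survey is run at one point in time in areas the campaign has already reached and in areas it has not yet reached (area names and scores below are hypothetical):

```python
# Staged-implementation comparison: mean survey scores (0-100 scale)
# at a single point in time, by rollout status.
# All area names and numbers are illustrative only.

early_areas = {"Area A": 68.0, "Area B": 71.0}  # already exposed
late_areas = {"Area C": 60.0, "Area D": 62.0}   # not yet exposed

def mean_score(scores):
    """Average the per-area mean scores."""
    return sum(scores.values()) / len(scores)

# Gap between exposed and not-yet-exposed areas at the same moment.
gap = mean_score(early_areas) - mean_score(late_areas)
print(round(gap, 1))  # 8.5 points, suggestive of a campaign effect
                      # if early and late areas are otherwise similar
```

As with the experimental design, the comparison is only as good as the similarity between early- and late-rollout areas; if rollout order was chosen for reasons linked to the outcome, the gap will be biased.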
Campaigns, December 2011