European guidelines for youth AIDS peer education - University of ...

Questionnaires can be descriptive, in that they ask straightforward questions about who people are, to what degree they engage in risk behaviours, what their attitudes are about various issues, what their opinions are about the project and what they have gained from it. Analytical questionnaires attempt to measure demographics, mediating variables, risk behaviours and details of contact with the project, then search for relationships between them to gain understanding.

A high response rate to the survey questionnaires is very important. If only 50% return the questionnaires, one never knows how the project affected the other half, or who they are. There may be a bias among the non-respondents that is related to risk behaviour or to the project. In addition, questionnaires need to be carefully formulated and understood by the young people filling them out. One also needs to be sure that the questions are actually measuring what was intended. Therefore, the questionnaires should be piloted in a sample of the target group who are interviewed or debriefed afterwards.

Peer educator activities can involve a wide range of informal and formal influences that are difficult to capture using only questionnaires. Moreover, it is difficult to translate people’s subjective thoughts and feelings into quantitative variables that can be measured. A combination of quantitative and qualitative methods can provide a wealth of information if time and costs allow it.

The advantages of the objectives-based model are that it is pragmatic, produces tangible evidence, enables progress to be seen and is more easily understood by funders and administrators.
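The response-rate concern above can be made concrete with a small check. This is an illustrative sketch only; the `response_rate` helper and the 80% cut-off are assumptions for illustration, not figures from these guidelines.

```python
# Illustrative sketch: checking the return rate of survey questionnaires.
# The 80% threshold below is an assumed rule of thumb, not a figure
# taken from these guidelines.

def response_rate(distributed: int, returned: int) -> float:
    """Fraction of distributed questionnaires that were returned."""
    if distributed <= 0:
        raise ValueError("at least one questionnaire must be distributed")
    return returned / distributed

rate = response_rate(distributed=120, returned=60)
print(f"Response rate: {rate:.0%}")  # prints "Response rate: 50%"
if rate < 0.8:
    print("Warning: the unobserved respondents may differ in risk behaviour.")
```

A low rate does not by itself invalidate the results, but it flags that the non-respondents should be characterised before conclusions are drawn.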
The disadvantages are that it is inflexible compared to the reflective practitioner model, provides little information on which action caused which effect (weak on causation), requires time and special training, and may not be relevant to some kinds of peer education.

c. Comparative model

Let us suppose that significant increases in condom use were measured in a target group using pre-test/post-test surveys. This may feel like good news, and one might assume that the project had a positive effect on the target group. On the other hand, maybe the change in behaviour was due to some other outside influence. Perhaps the young people became more mature, or they were influenced by someone in the group becoming HIV-infected. Another possibility is that having the young people fill in the first questionnaire influenced them by focusing them on what they were supposed to learn or how to change. The objectives-based model above may be satisfactory in drawing out information that will contribute to the project, but it gives no evidence that the project worked.

To answer that question, it is necessary to design the evaluation and the data analysis in such a way as to isolate the particular effect of the project interventions and to control for the influence of other, non-project variables. One way to do so is to use a very similar group of young people who do not experience the project and to measure them with the same survey at the same time points.
These groups must be randomly assigned to either receiving the project (the so-called ‘experimental group’) or acting as the ‘control group’. For example, it may be possible to use several similar sites for a project, then randomly assign two or more to either the experimental or the control group.

The meaningfulness of the data gathered with this design depends on the similarity between the two groups. In essence, the control group stands in for the experimental group as it would have been if unaffected by the peer education project. However, there can still be innate differences between the two groups that are the real reasons behind the change, rather than the project. There is also the risk that the effect of the project on the first group ‘spills over’ to the second through social contacts and indirectly influences that group as well.

One solution is to add more groups to the randomly assigned experimental and control categories. The more groups used, and the more random their assignment to one of the categories, the stronger the evidence that the project may be responsible for the change. This is the so-called randomised controlled trial. This method, together with at least a pre-test/post-test measurement of both the experimental and control group(s), is considered the ‘gold standard’ in intervention research.
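As a minimal sketch of the design just described, the following assigns sites at random to experimental and control groups and compares the average pre-test/post-test change between them. All site names and scores here are invented for illustration; they are not data from these guidelines.

```python
# Illustrative sketch: random assignment of sites to experimental and
# control groups, then a simple comparison of pre/post changes.
# Site names and condom-use scores are hypothetical.
import random

sites = ["site_A", "site_B", "site_C", "site_D"]
random.seed(42)  # fixed seed so the assignment is reproducible
shuffled = random.sample(sites, k=len(sites))
experimental = shuffled[: len(sites) // 2]  # receive the peer education project
control = shuffled[len(sites) // 2 :]       # surveyed, but no project

# Hypothetical mean condom-use scores before and after the project period.
pre = {"site_A": 2.1, "site_B": 2.0, "site_C": 2.2, "site_D": 1.9}
post = {"site_A": 2.9, "site_B": 2.2, "site_C": 3.0, "site_D": 2.1}

def mean_change(group):
    """Average post-minus-pre change across a group of sites."""
    return sum(post[s] - pre[s] for s in group) / len(group)

# The estimated project effect is the change in the experimental sites
# over and above the change observed in the control sites.
effect = mean_change(experimental) - mean_change(control)
print(f"Estimated project effect: {effect:+.2f}")
```

The subtraction of the control group’s change is what separates the project’s effect from outside influences such as maturation or the testing effect, which act on both groups alike.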
