Full report. - Social Research and Demonstration Corp


learn$ave Project: Final Report

• Age group
• Highest level of education (attained prior to project enrolment)
• Marital status
• Whether or not there were children under 18 years of age in the household
• Immigration status
• Whether or not activity limitations were reported (disability)
• Labour force participation (employed by others; self-employed; unemployed or out of the labour force)
• Household income (during year before project enrolment)
• Monthly payments for household expenses
• Difficulty making payments
• Whether or not there was a household budget
• Future time perspective

The regression adjustment procedure used the PROC GLM command in the Statistical Analysis System (SAS). The GLM procedure uses the method of ordinary least squares to fit general linear models. The procedure was applied even to binary outcome variables, where bias could arise from using linear regression.

Response bias testing

This section focuses on whether survey attrition, a phenomenon common to longitudinal surveys such as those used in the learn$ave project, has created any bias in the data used to observe trends and estimate impacts of the intervention being tested. The analysis in this section is limited to the 2,269 learn$ave enrollees at baseline who completed the 54-month survey: 842 learn$ave-only group members, 859 learn$ave-plus group members and 568 control group members. This represents a 63.3 per cent survey response rate from the original baseline sample of 3,584 enrollees.
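As a quick check, the overall response rate follows directly from the group counts given above. A minimal Python sketch:

```python
# Recomputing the overall 54-month survey response rate from the
# group counts reported in the text.
respondents = 842 + 859 + 568   # learn$ave-only + learn$ave-plus + control
baseline = 3584                 # original baseline sample of enrollees
response_rate = 100 * respondents / baseline
print(respondents)              # 2269
print(round(response_rate, 1))  # 63.3
```

The complement, `round(100 - response_rate, 1)`, gives the 36.7 per cent non-response rate cited later in this section.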
However, the response rates were appreciably higher in the program groups (70.5 per cent and 71.9 per cent in the learn$ave-only and learn$ave-plus groups, respectively) than in the control group (47.5 per cent).

The substantial difference in response rates between the program and control groups in the 54-month survey raises a concern as to whether the groups remained comparable and therefore able to generate reliable estimates of impacts. As shown in the learn$ave implementation report (Kingwell et al., 2005), random assignment was implemented successfully: there were no systematic differences between program and control groups at baseline. While there were some differences due to sampling variation, it was determined that these would not bias the impact estimates provided every person in the baseline sample responded to follow-up surveys. However, not all participants responded; 36.7 per cent of the baseline sample did not respond to the 54-month follow-up survey. This could affect impact estimates if the non-response occurred systematically, that is, if it was concentrated in certain subgroups of the sample. If the composition of the samples differed from survey to survey, estimates derived from the different waves would not be directly comparable.
More importantly, if non-response affected the program and control groups differently in a survey, the estimated program impacts derived from that survey sample might be biased. While there is no direct way of assessing the severity of non-response bias, the observable characteristics of respondents and non-respondents can be used to evaluate whether there are systematic differences in survey attrition. Here, the extent to which estimates may have been affected by potential non-response bias is assessed by comparing the baseline characteristics of (1) respondents in the 54-month and baseline survey samples, and (2) respondents and non-respondents to the 54-month survey, across program and control groups. Ultimately, if there is no substantial difference in the baseline characteristics of respondents between program and control groups (collectively known as research groups), non-response is likely to be independent of membership in these groups, and estimated impacts are not likely to suffer from non-response bias.

The first question to address is whether non-response to the 54-month survey was distributed randomly, independent of participants' observed characteristics. Specifically, how different were follow-up survey respondents from the original baseline sample? For each subgroup characteristic, examining the first two sets of columns in Table D.1 by research group (e.g., males in the control group at 54 months compared to males in the control group at baseline) indicates that 54-month survey respondents were more likely than baseline enrollees to be women, to be married, to have one or more children, or to hold a university degree. Overall, however, the differences between these two samples are not great, nor were they large between other survey samples (not shown).
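A comparison of this kind can be sketched as a simple two-proportion z-test. This is an illustrative balance check, not the report's exact procedure; the proportions below are hypothetical, with group sizes chosen to be consistent with the 568 control-group respondents and the 47.5 per cent control-group response rate.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z-statistic for the difference between two sample proportions,
    using a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: share of women among control-group respondents
# (n = 568) versus non-respondents (n = 628, implied by the 47.5 per
# cent response rate). The 0.52 and 0.47 proportions are illustrative.
z = two_prop_z(0.52, 568, 0.47, 628)
# |z| below 1.96 would suggest no significant imbalance at the 5% level
```

In this illustrative case the statistic falls short of 1.96, so the difference would not be flagged as a significant imbalance.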
Thus, differences in impact estimates across survey samples should be interpreted with some caution, since some of the difference may have been the result of unbalanced survey attrition.

The second question to address is whether certain subgroups of participants responded differently between the program and control groups. In Table D.2, the characteristics of respondents are compared to those of non-respondents for each research group and subgroup characteristic. For each subgroup

128 | Appendix D    Social Research and Demonstration Corporation
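For reference, the regression-adjustment procedure described at the start of this appendix used SAS PROC GLM; an equivalent ordinary-least-squares adjustment can be sketched in Python. The data, variable names, and effect size below are hypothetical, and only two of the listed covariates are included for brevity.

```python
import numpy as np

# Simulated data standing in for learn$ave survey records (hypothetical).
rng = np.random.default_rng(0)
n = 500
treated = rng.integers(0, 2, n)        # program-group indicator
age_group = rng.integers(0, 4, n)      # baseline covariate, coded 0-3
univ_degree = rng.integers(0, 2, n)    # baseline covariate, 0/1
# Binary outcome generated with a 10-percentage-point treatment effect:
y = (rng.random(n) < 0.30 + 0.10 * treated).astype(float)

# OLS regression adjustment (a linear probability model when the
# outcome is binary, mirroring the GLM procedure's least-squares fit):
# design matrix = intercept, treatment indicator, baseline covariates.
X = np.column_stack([np.ones(n), treated, age_group, univ_degree])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted_impact = beta[1]   # covariate-adjusted impact estimate
```

The coefficient on the treatment indicator is the covariate-adjusted impact estimate; adding the remaining baseline covariates from the list above is a matter of extending the design matrix.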
