
Is headspace making a difference to young people’s lives?


2. Evaluation Methodology

2.4 Statistical analysis techniques

The outcomes analysis draws upon data contained within the hCSA and the young people surveys. The analysis has been conducted using two distinct approaches:

• Difference-in-difference (DID) approach, and
• Clinically significant change (CSC) method.

Further information on how these methods have been specifically applied is contained in Appendix C; however, summarised information is also provided here.

2.4.1 The Difference-in-difference Method

The young people survey data has been analysed using a difference-in-difference (DID) approach. This non-experimental method is commonly used to evaluate the impact of programs or interventions. The simplest design for a DID analysis calculates the effect of a treatment (headspace) on an outcome (for example, psychological distress) by comparing the average change over time in the outcome variable for the treatment group with the average change over time for the comparison group.
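
In its simplest two-group, two-wave form, the DID estimate can be written as follows (this is the standard formulation of the estimator; the notation is ours rather than the report's):

\[
\hat{\delta}_{\text{DID}} = \left(\bar{Y}^{T}_{w2} - \bar{Y}^{T}_{w1}\right) - \left(\bar{Y}^{C}_{w2} - \bar{Y}^{C}_{w1}\right)
\]

where \(\bar{Y}^{T}_{w}\) and \(\bar{Y}^{C}_{w}\) are the mean outcomes for the treatment (headspace) and comparison groups at wave \(w\).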

The specific objective of the DID method for the evaluation is to assess the changes in young people's mental health, physical health, drug and alcohol use and social inclusion outcomes after using headspace services relative to other comparable young people who did not receive treatment at a headspace centre. A scoping analysis of the survey data demonstrated differences in the profiles of the 'headspace treatment' group compared to those captured within the comparison surveys, illustrating that the headspace survey clients were quite different to the general population. To address this limitation, the evaluators sought to match the groups through propensity score matching, a statistical matching technique that aims to better align intervention and comparison cohorts.

Two groups were extracted from the comparison surveys to match the 'headspace treatment' group: those that received some other mental health treatment (the 'other treatment' group) and those that received no treatment (the 'no treatment' group). A number of variables were tested to align the groups, with four key variables (K10 score, age, gender, and days out of role) confirmed as benchmarks for the matching technique. The propensity score matching resulted in a smaller sample but closer alignment between the groups of interest. Further information on the treatment groups and propensity score matching, including the age and sex distributions of matched groups, is available in Appendix C.
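
To illustrate the matching step, the sketch below shows one common way of implementing 1:1 nearest-neighbour propensity score matching on the four benchmark variables. It is a simplified example under assumed column names (k10, age, gender, days_out_of_role, headspace), not the evaluators' actual procedure:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_groups(df: pd.DataFrame) -> pd.DataFrame:
    """1:1 nearest-neighbour matching on an estimated propensity score.

    Assumes `headspace` is coded 1 for the 'headspace treatment' group and
    0 for comparison respondents, and that gender is numerically coded.
    """
    covariates = ["k10", "age", "gender", "days_out_of_role"]

    # Estimate each respondent's propensity to be in the headspace group,
    # given the four benchmark matching variables.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["headspace"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["headspace"] == 1]
    comparison = df[df["headspace"] == 0]

    # For each treated case, keep the comparison case with the closest
    # propensity score (matching with replacement, for brevity).
    nn = NearestNeighbors(n_neighbors=1).fit(comparison[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_comparison = comparison.iloc[idx.ravel()]

    return pd.concat([treated, matched_comparison])
```

The matched sample is smaller than the full comparison surveys but, by construction, more closely aligned with the headspace group on the four benchmark variables, which is the trade-off described above.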

The results of the DID analyses are presented in Chapter 4 below. Difference-in-difference estimates are defined as the difference in the average outcome in the 'headspace treatment' group at two points in time, that is at wave 1 and wave 2 data collection, minus the difference in the average outcome in the matched comparison groups (the 'other treatment' and 'no treatment' groups). Six key outcome variables contained within the survey data are used to assess changes in mental and physical health, social and vocational participation, and alcohol and drug use (see description in Table 2.3 below).
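
As a sketch, the DID estimate for a single outcome can be computed directly from the wave means of a matched sample (column names are hypothetical and continue the example above):

```python
import pandas as pd

def did_estimate(matched: pd.DataFrame, outcome: str = "k10") -> float:
    """Change in the headspace group between wave 1 and wave 2, minus the
    corresponding change in the matched comparison group."""
    means = matched.groupby(["headspace", "wave"])[outcome].mean()
    change_treated = means.loc[(1, 2)] - means.loc[(1, 1)]
    change_comparison = means.loc[(0, 2)] - means.loc[(0, 1)]
    return change_treated - change_comparison
```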

Chapter 4 also reports on whether the differences in outcomes between the matched groups are statistically significant (determined using an orthodox t-test). Finally, effect sizes are also reported. Effect sizes can be expressed in a number of ways, with Cohen's d commonly reported as a standard indicator in clinical evaluation. The Cohen effect size measure presents a standardised difference in means across the course of an intervention (that is, the ratio of the mean difference to a pooled standard deviation measure).
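
As described, Cohen's d is the mean difference divided by a pooled standard deviation. A minimal sketch of that calculation (and of the accompanying t-test), with hypothetical array names, is:

```python
import numpy as np
from scipy import stats

def cohens_d(wave1: np.ndarray, wave2: np.ndarray) -> float:
    """Standardised mean difference: mean change divided by a pooled SD."""
    n1, n2 = len(wave1), len(wave2)
    pooled_sd = np.sqrt(((n1 - 1) * wave1.var(ddof=1) +
                         (n2 - 1) * wave2.var(ddof=1)) / (n1 + n2 - 2))
    return (wave2.mean() - wave1.mean()) / pooled_sd

# Orthodox (independent-samples) t-test on the outcome changes of the two
# matched groups; `treated_change` and `comparison_change` are hypothetical.
# t_stat, p_value = stats.ttest_ind(treated_change, comparison_change)
```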

