

Box 24.30
A scatterplot with gridlines and regression line
[Figure: scatterplot with hours of study (0–6) on the vertical axis and level of achievement (20–80) on the horizontal axis, with gridlines and the regression line superimposed.]

line. In fact this is all calculated automatically by SPSS. Let us look at a typical SPSS output here (Box 24.31).
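For readers working outside SPSS, a plot like Box 24.30 can be produced with a few lines of code. The sketch below uses invented hours/achievement values (not the data behind the book's figure) and places the predictor on the horizontal axis, so its orientation differs from Box 24.30.

```python
# A sketch of reproducing a plot like Box 24.30: a scatterplot with gridlines
# and the least-squares regression line. The data are invented for
# illustration, not the values behind the book's figure.
import numpy as np
import matplotlib.pyplot as plt

hours = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 6], dtype=float)
achievement = np.array([25, 32, 35, 41, 44, 50, 55, 61, 66, 72], dtype=float)

slope, intercept = np.polyfit(hours, achievement, 1)  # line of best fit

plt.scatter(hours, achievement)
plt.plot(hours, slope * hours + intercept)  # regression line
plt.grid(True)
plt.xlabel('Hours of study')
plt.ylabel('Level of achievement')
plt.show()
```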

This table provides the R square. The R square tells us how much variance in the dependent variable is explained by the independent variable in the calculation. First, it gives us an R square value of 0.632, which indicates that 63.2 per cent of the variance is accounted for in the model, which is high. The adjusted R square is more accurate, and we advocate its use, as it automatically takes account of the number of independent variables. The adjusted R square is usually smaller than the unadjusted R square, as it also takes account of the fact that one is looking at a sample rather than the whole population.

Box 24.31
A summary of the R, R square and adjusted R square in regression analysis

Model summary

Model   R          R square   Adjusted R square   SE of the estimate
1       0.795(a)   0.632      0.625               9.200

a. Predictors: (Constant), Hours of study

Here the adjusted R square is 0.625, and this, again, shows that, in the regression model that we have constructed, the independent variable accounts for 62.5 per cent of the variance in the dependent variable, which is high, i.e. our regression model is robust. Muijs (2004: 165) suggests that, for a goodness of fit with an adjusted R square:

> 0.5: strong fit
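As a minimal illustration of the adjustment, the standard formula shrinks the unadjusted R square according to the sample size (n) and the number of predictors (k). The sketch below applies it to the R square reported above; the sample size of 100 is an assumption made only for the example, not a figure from the text.

```python
# A minimal sketch (not the authors' code) of the standard adjustment:
# adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
# which penalizes the fit for the number of predictors k relative to n.

def adjusted_r_square(r_square: float, n: int, k: int) -> float:
    """Return the adjusted R square for a model with k predictors and n cases."""
    return 1 - (1 - r_square) * (n - 1) / (n - k - 1)

# Using the R square reported above (0.632) with one predictor.
# The sample size n = 100 is an assumption for the example only.
print(adjusted_r_square(0.632, n=100, k=1))  # approximately 0.628
```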

Second, SPSS then calculates the analysis of variance (ANOVA) (Box 24.32). At this stage we will not go into all of the calculations here (typically SPSS prints out far more than researchers may need; for a discussion of df (degrees of freedom) we refer readers to the earlier section). We go to the final column here, marked ‘Sig.’; this is the significance level and, because the significance is 0.000, we have a very statistically significant relationship (stronger than 0.001) between the independent variable (hours of study) and the dependent variable (level of achievement) (Box 24.32). This tells us that it is useful to proceed with the analysis, as it contains important results.
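Outside SPSS, the same ANOVA step can be reproduced with any regression routine. The sketch below is a minimal illustration using Python's statsmodels; the hours-of-study and achievement values in it are invented for the example and are not the dataset behind the tables above.

```python
# A sketch of the same ANOVA step outside SPSS, using statsmodels.
# The data below are invented for illustration, not the book's dataset.
import numpy as np
import statsmodels.api as sm

hours = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 6], dtype=float)
achievement = np.array([25, 32, 35, 41, 44, 50, 55, 61, 66, 72], dtype=float)

model = sm.OLS(achievement, sm.add_constant(hours)).fit()

# The F statistic and its p-value correspond to the 'Sig.' column of the ANOVA
# table; a p-value below 0.0005 is what SPSS displays as 0.000.
print(model.fvalue, model.f_pvalue)
```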

SPSS then gives us a table of coefficients, both unstandardized and standardized. We advise opting for the standardized coefficients, the Beta weightings. The Beta weight (β) is the amount of standard deviation unit of change in the dependent variable for each standard deviation unit of change in the independent variable. In the example in Box 24.33 the Beta weighting is 0.795; this tells us that, for every standard deviation unit change in the independent variable (hours of study), the dependent variable (level of achievement) will rise by 0.795 (79.5 per cent) of one standard deviation unit, i.e. for every one unit rise in the independent variable there is just over three-quarters of a unit rise in the dependent variable. This also explains why the slope of the line of best fit is steep but not quite 45 degrees: each unit of one is worth only 79.5 per cent of a unit of the other (Box 24.33).
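For readers who want to see where the standardized coefficient comes from, the sketch below (again with invented data, not the book's dataset) regresses z-scores of the dependent variable on z-scores of the independent variable; with a single predictor the resulting Beta weight is simply the Pearson correlation, which is why Beta (0.795) equals R in the model summary above.

```python
# A sketch (not the authors' code) of where the standardized Beta comes from:
# regress z-scores of y on z-scores of x. With a single predictor the Beta
# weight equals the Pearson correlation. The data are invented for illustration.
import numpy as np

hours = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 6], dtype=float)
achievement = np.array([25, 32, 35, 41, 44, 50, 55, 61, 66, 72], dtype=float)

def z(a):
    # Convert to standard-deviation units (z-scores).
    return (a - a.mean()) / a.std(ddof=1)

beta = np.polyfit(z(hours), z(achievement), 1)[0]   # slope of the standardized regression
print(beta, np.corrcoef(hours, achievement)[0, 1])  # the two values coincide
```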
