
            Sum Sq Df F value   Pr(>F)
sugar         2.13  2  4.0446 0.045426 *
milk          1.00  1  3.8102 0.074672 .
sugar:milk    5.94  2 11.2769 0.001754 **
Residuals     3.16 12
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Oh, that’s not good at all. In the case of milk in particular, the p-value has changed from .002 to .07. This is a pretty substantial difference, and hopefully it gives you a sense of how important it is that you take care when using Type III tests.
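If you want to reproduce this sensitivity for yourself, a rough sketch of the comparison looks something like the following. This is only an illustration: it assumes the coffee data frame from earlier in the chapter (with babble as the outcome and sugar and milk as factors whose contrasts haven’t been changed yet), it uses the Anova() function from the car package, and the names mod.T and mod.H are just labels made up for this sketch.

library(car)   # provides Anova()

# Type III tests using R's default treatment contrasts
mod.T <- lm( babble ~ sugar * milk, coffee )
Anova( mod.T, type = 3 )

# the same model, but with Helmert (sum-to-zero) contrasts for both factors
mod.H <- lm( babble ~ sugar * milk, coffee,
             contrasts = list( sugar = contr.helmert, milk = contr.helmert ) )
Anova( mod.H, type = 3 )

The only thing that changes between the two fits is the contrast coding, yet the Type III p-value for milk moves a long way.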

Okay, so if the p-values that come out of Type III analyses are so sensitive to the choice of contrasts, does that mean that Type III tests are essentially arbitrary and not to be trusted? To some extent that’s true, and when we turn to a discussion of Type II tests we’ll see that Type II analyses avoid this arbitrariness entirely, but I think that’s too strong a conclusion. Firstly, it’s important to recognise that some choices of contrasts will always produce the same answers. Of particular importance is the fact that if the columns of our contrast matrix are all constrained to sum to zero, then the Type III analysis will always give the same answers. This means that you’ll get the same answers if you use contr.helmert or contr.sum or contr.poly, but different answers for contr.treatment or contr.SAS.
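If you want to check that claim for yourself, one quick way is to look at the column sums of the contrast matrices directly. This little check isn’t part of the analysis itself, it’s just a sketch using the standard contrast functions that come with base R:

# contrasts whose columns sum to zero...
colSums( contr.helmert( 3 ) )    # both columns sum to 0
colSums( contr.sum( 3 ) )        # both columns sum to 0
colSums( contr.poly( 3 ) )       # both columns sum to 0 (up to rounding error)

# ...and contrasts whose columns don't
colSums( contr.treatment( 3 ) )  # each column sums to 1
colSums( contr.SAS( 3 ) )        # each column sums to 1

The demonstration below pushes the same idea one step further: if we build a completely arbitrary contrast matrix whose columns are constrained to sum to zero, the Type III results shouldn’t change at all.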

> random.contrasts <- matrix( rnorm(6), 3, 2 )    # start with a random 3 x 2 matrix
> random.contrasts[, 1] <- random.contrasts[, 1] - mean( random.contrasts[, 1] )   # contrast 1 sums to zero
> random.contrasts[, 2] <- random.contrasts[, 2] - mean( random.contrasts[, 2] )   # contrast 2 sums to zero
> random.contrasts   # print it to check that we really have an arbitrary contrast matrix...
            [,1]        [,2]
[1,]  0.38898807 -0.78454935
[2,] -0.04337123  0.70004953
[3,] -0.34561683  0.08449982

> contrasts( coffee$sugar ) <- random.contrasts   # sugar has 3 levels, so it takes the full 3 x 2 matrix
> contrasts( coffee$milk ) <- contr.helmert( 2 )  # milk has only 2 levels: any sum-to-zero contrast will do
> mod.R <- lm( babble ~ sugar * milk, coffee )
> Anova( mod.R, type = 3 )

Anova Table (Type III tests)

Response: babble
             Sum Sq Df   F value    Pr(>F)
(Intercept) 434.29  1 1647.8882 3.231e-14 ***
sugar         2.13  2    4.0446  0.045426 *
milk          1.00  1    3.8102  0.074672 .
sugar:milk    5.94  2   11.2769  0.001754 **
Residuals     3.16 12
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Yep, same answers.

16.10.5 Type II sum of squares<br />

Okay, so we’ve seen Type I and III tests now, and both are pretty straightforward: Type I tests are performed by gradually adding terms one at a time, whereas Type III tests are performed by taking the full model and looking to see what happens when you remove each term. However, both have some serious limitations: Type I tests depend on the order in which you enter the terms, and Type III tests depend on how you code up your contrasts.
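To recap the mechanical difference in R, here’s a minimal sketch. Again, it assumes the coffee data frame and the car package used throughout the chapter; the particular model is just a placeholder for whatever model you happen to be working with.

library(car)   # provides Anova()

mod <- lm( babble ~ sugar * milk, coffee )

# Type I: sequential tests, each term assessed after the terms listed before it
anova( mod )

# Type III: each term assessed by dropping it from the full model
# (remember that these answers depend on the contrast coding)
Anova( mod, type = 3 )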

