

(The F statistic was named for Sir Ronald Fisher, who developed the idea for the ratio of the between-group and the within-group variances as a general method for comparing the relative size of means across many groups.) We calculate the ratio of the variances as follows:

F = 111.03 / 4.85 = 22.89

If you run the ANOVA on a computer, using SYSTAT or SPSS, etc., you'll get a slightly different answer: 23.307. The difference (0.42) is due to rounding in all the calculations we just did. Computer programs may hold on to 12 decimal places all the way through before returning an answer. In doing these calculations by hand, I've rounded each step of the way to just two decimal places. Rounding error doesn't affect the results of the ANOVA calculations in this particular example because the value of the F statistic is so big. But when the value of the F statistic is below 5, a difference of 0.42 either way can lead to serious errors of interpretation.

The moral is: learn how to do these calculations by hand, once. Then, once you know what you're doing, use a computer program to do the drudge work for you.

Well, is 22.89 a statistically significant number? To find out, we go to appendix E, which shows the values of F for the .05 and the .01 levels of significance. The values for the between-group df are shown across the top of appendix E, and the values for the within-group df are shown along the left side. Looking across the top of the table, we find the column for the between-group df. We come down that column to the value of the within-group df. In other words, we look down the column labeled 3 and come down to the row for 46.

Since there is no row for exactly 46 degrees of freedom for the within-group value, we use the nearest value, which is for 40 df. We see that any F value greater than 2.84 is statistically significant at the .05 (that is, the 5%) level, and any value greater than 4.31 is significant at the .01 level. The F value we got for the data in table 20.3 was a colossal 22.89. As it turns out, this is statistically significant beyond the .001 level.
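Rather than interpolating in a printed table, the same cutoffs and the exact p-value can be pulled from the F distribution directly. Here is a minimal sketch in Python (assuming SciPy is available), using the 3 between-group and 46 within-group degrees of freedom from this example:

```python
from scipy import stats

# Critical values of F for 3 between-group df and 46 within-group df
print(stats.f.ppf(0.95, dfn=3, dfd=46))   # .05 cutoff, roughly 2.81
print(stats.f.ppf(0.99, dfn=3, dfd=46))   # .01 cutoff, roughly 4.24
print(stats.f.ppf(0.95, dfn=3, dfd=40))   # the 2.84 read from the table's 40-df row

# Probability of an F this large if the group means were really equal
print(stats.f.sf(22.89, dfn=3, dfd=46))   # far smaller than .001
```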

Examples of ANOVA<br />

Camilla Harshbarger (1986) tested the productivity (measured in average bushels per hectare) of 44 coffee farmers in Costa Rica as a function of which credit bank they used, or whether they used no credit at all. This is the sort of problem that calls for ANOVA because Harshbarger had four different means: one for each of the three credit sources and one for farmers who chose not to use credit. The ANOVA showed that there was no significant difference in productivity across the four groups.
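A one-way ANOVA for a design like this takes one list of yields per group. The sketch below is in Python with SciPy and uses made-up bushels-per-hectare figures (Harshbarger's actual data are not reproduced here), split into three hypothetical credit banks plus a no-credit group, 44 farmers in all:

```python
from scipy import stats

# Hypothetical yields, 11 farmers per group (44 in all); not Harshbarger's data
bank_a    = [38, 41, 35, 44, 39, 42, 37, 40, 36, 43, 38]
bank_b    = [36, 42, 39, 37, 41, 40, 35, 44, 38, 39, 43]
bank_c    = [40, 37, 43, 36, 41, 39, 42, 38, 35, 44, 40]
no_credit = [39, 36, 42, 38, 40, 37, 43, 41, 35, 44, 39]

f_stat, p_value = stats.f_oneway(bank_a, bank_b, bank_c, no_credit)
print(f_stat, p_value)   # a p-value above .05 means no significant difference in means
```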
