

k = (.60 − .56) / (1 − .56) = .0909

In other words, the 60% observed agreement between the two coders for the data in table 17.6 is about 9% better than we'd expect by chance. Whether we're talking about agreement between two people who are coding a text or two people who are coding behavior in a time allocation study, 9% better than chance is nothing to write home about.
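A minimal sketch of this calculation in Python (the function name is ours; the numbers are the ones from the worked example above):

```python
def cohens_kappa(observed: float, chance: float) -> float:
    """Agreement beyond chance, scaled by the maximum possible
    improvement over chance."""
    return (observed - chance) / (1 - chance)

# The worked example above: 60% observed agreement between the two
# coders, 56% agreement expected by chance alone.
print(round(cohens_kappa(0.60, 0.56), 4))  # 0.0909
```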

Carey et al. (1996) asked 51 newly arrived Vietnamese refugees in New York State 32 open-ended questions about tuberculosis. Topics included knowledge and beliefs about TB symptoms and causes, as well as beliefs about susceptibility to the disease, prognosis for those who contract the disease, skin-testing procedures, and prevention and treatment methods. The researchers read the responses and built a code list based simply on their own judgment. The initial codebook contained 171 codes.

Then, Carey et al. broke the text into 1,632 segments. Each segment was the response by one of the 51 respondents to one of the 32 questions. Two coders independently coded 320 of the segments, marking as many of the themes as they thought appeared in each segment. A segment was counted as reliably coded only if both coders used the same codes on it. If one coder left off a code or assigned an additional code, that was counted as a coding disagreement.

On their first try, only 144 (45%) of the 320 responses were coded the same by both coders. The coders discussed their disagreements and found that some of the 171 codes were redundant, some were vaguely defined, and some were not mutually exclusive. In some cases, coders simply had different understandings of what a code meant. When these problems were resolved, a new, streamlined codebook was issued, with only 152 themes, and the coders marked up the data again. This time they were in agreement 88.1% of the time.
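A sketch of that segment-matching rule, with hypothetical codes and segments standing in for the actual Carey et al. data:

```python
# Hypothetical segments: each coder's marks on a segment form a set of codes.
coder_a = [{"symptoms", "cause"}, {"treatment"}, {"prognosis"}]
coder_b = [{"symptoms", "cause"}, {"treatment", "prevention"}, {"prognosis"}]

# A segment counts as reliably coded only when both sets match exactly;
# a missing or extra code makes the whole segment a disagreement.
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
print(agreements / len(coder_a))  # 2 of 3 segments agree exactly: 0.667
```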

To see whether this apparently strong agreement was a fluke, Carey et al. tested intercoder reliability with kappa. The coders agreed perfectly (k = 1.0) on 126 of the 152 codes that they'd applied to the 320 sample segments. Only 17 (11.2%) of the codes had final k values below 0.89. As senior investigator, Carey resolved any remaining intercoder discrepancies himself (Carey et al. 1996).
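Kappa for a single code can be computed by treating each coder's marks as a present/absent judgment on every segment. A sketch, with hypothetical marks rather than the Carey et al. data:

```python
def kappa_for_code(marks_a, marks_b):
    """Kappa for one code: marks_a and marks_b are parallel lists of
    booleans, one per segment (True = coder applied the code)."""
    n = len(marks_a)
    p_observed = sum(a == b for a, b in zip(marks_a, marks_b)) / n
    # Chance agreement from each coder's own rate of applying the code.
    p_yes_a, p_yes_b = sum(marks_a) / n, sum(marks_b) / n
    p_chance = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical marks for one code across six segments.
print(kappa_for_code([True, True, False, False, True, False],
                     [True, True, False, True, True, False]))  # 0.667
```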

How much intercoder agreement is enough? As with so much in real life, the correct answer, I think, is: it depends. It depends, for example, on the level of inference required. If you have texts from single mothers about their efforts to juggle home and work, it's easier to code for the theme "works full-time" than it is to code for the theme "enjoys her job."
