Russel-Research-Method-in-Anthropology

512 Chapter 17

sexual men and women offer each other in personal ads in the United States hasn't changed much since 1983 (Wiederman 1993; Willis and Carlson 1993; Goode 1996; Butler-Smith et al. 1998; Cameron and Collins 1998). Of course, we expect that people of different sexual orientations and people in different cultures and subcultures will seek and offer different resources in personal ads, which sets up a program of work on cross-cultural content analysis. (See de Sousa Campos et al. [2002] for a content analysis of personal ads in Brazil; see Parekh and Beresin [2001] for an analysis of ads in the United States, India, and China. And see Yancey and Yancey [1997] for a content analysis of personal ads of people seeking interracial relationships in the United States. See Smith and Stillman [2002] on differences in the personal ads of heterosexual, lesbian, and bisexual women; and see Kaufman and Phua [2003] on differences in the personal ads of gay and straight men.)

Intercoder Reliability

It is quite common in content analysis to have more than one coder mark up a set of texts. The idea is to see whether the constructs being investigated are shared—whether multiple coders reckon that the same constructs apply to the same chunks of text. There is a simple way to measure agreement between a pair of coders: you just line up their codes and calculate the percentage of agreement. This is shown in table 17.6 for two coders who have coded 10 texts for a single theme, using a binary code, 1 or 0.

TABLE 17.6
Measuring Simple Agreement between Two Coders on a Single Theme

          Units of Analysis (Documents/Observations)
          1   2   3   4   5   6   7   8   9   10
Coder 1   0   1   0   0   0   0   0   0   1   0
Coder 2   0   1   1   0   0   1   0   1   0   0

Both coders have a 0 for texts 1, 4, 5, 7, and 10, and both coders have a 1 for text 2. These two coders agree a total of six times out of ten—five times that the theme, whatever it is, does not appear in the texts, and one time that the theme does appear. On four out of ten texts, the coders disagree. On text 9, for example, Coder 1 saw the theme in the text, but Coder 2 didn't. Overall, these two coders agree 60% of the time.
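The simple agreement calculation just described can be sketched in a few lines of Python. The function name and the vector layout here are illustrative (they are not from the text); the two code vectors reproduce table 17.6.

```python
def percent_agreement(codes_a, codes_b):
    """Fraction of units on which two coders assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("coders must code the same number of units")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Binary theme codes for the 10 texts in table 17.6
coder1 = [0, 1, 0, 0, 0, 0, 0, 0, 1, 0]
coder2 = [0, 1, 1, 0, 0, 1, 0, 1, 0, 0]

print(percent_agreement(coder1, coder2))  # 0.6
```

Lining up the codes and counting matches gives six agreements out of ten units, the 60% figure computed above.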

The total observed agreement, though, is not a good measure of intercoder reliability because people can agree that a theme is present or absent in a text
