
The Jeremiad Over Journalism


The code book and the coding scheme were specifically constructed to code for manifest, not latent, meaning in the texts. After two rounds of pilot-coding, a full-scale content analysis with a comprehensive intercoder-reliability test by a second coder was conducted. 535

Based on previous content analysis studies, Neuendorf recommends a reliability subsample size between 10% and 20% and advocates keeping this sample larger than 50 and rarely "larger than about 300." 536 Consequently, to adhere to Neuendorf's guidelines, a pool of 100 articles out of the entire sample of 876 was randomly selected using the website random.org.

The 100 articles (11.4 percent of the total number of articles) selected for the test of intercoder reliability therefore fall well within Neuendorf's recommendation and follow her prescription that "the establishment of intercoder reliability is essential, a necessary criterion for valid and useful research when human coding is employed." 537
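The subsample draw described above can be sketched in a few lines of Python. The study itself drew the 100 articles via the website random.org; `random.sample` is only a stand-in illustration here, and the seed value is an arbitrary choice for reproducibility.

```python
import random

# Illustrative re-creation of the reliability subsample draw: 100 articles
# chosen at random, without replacement, from the full sample of 876
# (article IDs 1-876 are a hypothetical numbering of the corpus).
rng = random.Random(2013)  # arbitrary seed, for a reproducible illustration
subsample = sorted(rng.sample(range(1, 877), k=100))

share = len(subsample) / 876 * 100
print(len(subsample), f"{share:.1f}%")  # 100 articles, i.e. 11.4% of 876
```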

Through the statistical program PASW, the latest version of SPSS, a reliability test that takes chance agreement between coders into account was chosen. Neuendorf points out that no agreed-upon standard for presenting intercoder reliability is prevalent, and that therefore "the best we can expect at present is full and clear reporting of at least one reliability coefficient for each variable measured in a human-coded content analysis." 538

To conform with Neuendorf's recommendations, the reliability results in this study will be reported using Krippendorff's alpha as the reliability coefficient, reported for each variable coded by the two coders. Furthermore, examples of the coding will be provided throughout the analysis, so that readers have an opportunity to follow the research process and the line of reasoning developed from the content analysis results. Through this approach, a quantitative foundation for further discussion of the hypotheses' validity will be laid. As Neuendorf has pointed out, reporting intercoder reliability variable by variable yields a more precise picture of the actual reliability, as variables with extremely high reliability cannot balance out variables with low reliability scores.
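For a single nominal variable coded by two coders with no missing data, Krippendorff's alpha reduces to a simple agreement-versus-chance ratio. The study computed the coefficient in PASW/SPSS; the following Python function is only a minimal sketch of the statistic itself, and the example codings are hypothetical.

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders, one nominal variable, no missing data.

    alpha = (A_o - A_e) / (1 - A_e), where A_o is the observed agreement and
    A_e is the chance agreement from the pooled value frequencies n_c, using
    Krippendorff's small-sample correction n_c*(n_c - 1) / (n*(n - 1)).
    Undefined (division by zero) when both coders use a single value throughout.
    """
    assert len(coder1) == len(coder2) and len(coder1) > 0
    units = list(zip(coder1, coder2))
    n = 2 * len(units)                        # total number of pairable values
    freq = Counter(coder1) + Counter(coder2)  # pooled marginal frequencies n_c
    a_o = 2 * sum(a == b for a, b in units) / n
    a_e = sum(n_c * (n_c - 1) for n_c in freq.values()) / (n * (n - 1))
    return (a_o - a_e) / (1 - a_e)

# Hypothetical codings of one variable by the two coders: agreement on 3 of 4
# units yields alpha = 8/15, noticeably below the raw 75% agreement rate,
# because chance agreement is discounted.
print(krippendorff_alpha_nominal([0, 0, 1, 1], [0, 0, 1, 0]))
```

Reporting this coefficient variable by variable, as the passage above prescribes, simply means calling such a computation once per coded variable rather than averaging across them.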

535 Neuendorf, The Content Analysis Guidebook. Page 146.

536 Ibid. Pages 158-159. In Daniel Riffe, Stephen Lacy, and Frederick G. Fico's writing, a "random selection of content samples for reliability" is recommended. On the amount of text to be tested, the authors state that advice has been ambiguous, but, "One text (Wimmer & Dominick, 2003) suggests that between 10% and 25% of the body of content should be tested. Others (Kaid & Wadsworth, 1989) suggested that between 5% and 7% of the total is adequate." Daniel Riffe, Stephen Lacy, and Frederick G. Fico, Analyzing Media Messages: Using Quantitative Content Analysis, 2nd ed. (New Jersey: Lawrence Erlbaum Associates, Publishers, 2005). Page 143.

537 Neuendorf, The Content Analysis Guidebook. Page 142.

538 Ibid. Page 144.

