
Diversity of Journalisms. Proceedings of ECREA/CICOM Conference, Pamplona, 4-5 July 2011

Aims

We should recall that reliability is a necessary, although not sufficient, condition of validity. When speaking of reliability, Stemler (2001) distinguishes between (a) stability, or intra-rater reliability (can the same coder get the same results try after try?), and (b) reproducibility, or inter-rater reliability (do coding schemes lead to the same text being coded in the same category by different people?). However, as Krippendorff (1990: 192) affirms, we must bear in mind that “in content analysis, reliability and validity are related by the two following propositions: (a) reliability sets limits on the potential validity of the research results; and (b) reliability does not guarantee the validity of the research results”. Thus, according to Andrén (1981: 46), “we find a connection between the concepts of objectivity and of reliability. This is only as it should be; it is natural to assume that an objective result is independent of the subject who conducted the investigation. Here, however, we must distinguish between the factual or ontological problem (what makes the result true?) and the epistemic or methodological problem (how do we come to know that a result is true or false?)”.
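To make Stemler's distinction concrete, the short sketch below (with purely hypothetical category codes, since the paper gives no worked example) computes simple percent agreement for both cases: the same coder coding the same texts twice (stability) and a second coder coding them once (reproducibility).

```python
# A minimal sketch of Stemler's two reliability types, using simple
# percent agreement. All category codes below are hypothetical.

def percent_agreement(codes_a, codes_b):
    """Share of units placed in the same category by both codings."""
    assert len(codes_a) == len(codes_b)
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

# (a) Stability / intra-rater reliability: the SAME coder codes the
# same ten texts twice, some time apart.
coder1_first_pass  = ["news", "opinion", "news", "sports", "news",
                      "opinion", "news", "news", "sports", "opinion"]
coder1_second_pass = ["news", "opinion", "news", "sports", "opinion",
                      "opinion", "news", "news", "sports", "opinion"]

# (b) Reproducibility / inter-rater reliability: a DIFFERENT coder
# applies the same coding scheme to the same ten texts.
coder2_pass = ["news", "opinion", "sports", "sports", "news",
               "opinion", "news", "opinion", "sports", "opinion"]

print("stability (intra-rater):       %.2f"
      % percent_agreement(coder1_first_pass, coder1_second_pass))
print("reproducibility (inter-rater): %.2f"
      % percent_agreement(coder1_first_pass, coder2_pass))
```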

Intercoder reliability

Our paper, as we have said, focuses on analyzing the reproducibility of the content analysis developed within the project as a whole. Therefore, with the aim of providing our study with the necessary intercoder reliability, we considered it a priority to carry out a test measuring this level of agreement before starting the analysis of the entire sample.

As Lombard, Snyder-Duch and Bracken (2002: 589) affirm, “it is widely acknowledged that intercoder reliability is a critical component of content analysis and (although it does not ensure validity) when it is not established, the data and interpretations of the data can never be considered valid”. The United States General Accounting Office (GAO, 1996: 64) states in one of its reports that “this measure indicates how well two or more coders reached the same judgments in coding the data”. In turn, the GAO (1996: 36) considers its use necessary given that “in many circumstances, evaluators can make numerical estimates of intercoder reliability and use the results to judge the readiness of coders to proceed from training to actual coding”.
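Neither this section nor the GAO passage names a particular agreement coefficient, so as one hedged illustration the sketch below uses Cohen's kappa, a widely used index that corrects raw agreement for chance; the 0.80 threshold for letting coders proceed from training to actual coding is a common rule of thumb, not a figure taken from the project.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: the probability that both coders assign the same
    # category by accident, given each coder's own category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pre-test: two coders categorize the same eight texts.
coder1 = ["news", "opinion", "news", "sports", "news", "opinion", "news", "news"]
coder2 = ["news", "opinion", "sports", "sports", "news", "opinion", "news", "opinion"]

kappa = cohens_kappa(coder1, coder2)
print("kappa = %.2f" % kappa)  # about 0.61 for this toy data

# Rule-of-thumb readiness check (threshold assumed, not from the source):
if kappa >= 0.80:
    print("coders ready to proceed to actual coding")
else:
    print("more coder training needed before coding the full sample")
```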

