
RELIABILITY IN QUANTITATIVE RESEARCH 147

themselves between the test and the retest times.

Reliability as equivalence

Within this type of reliability there are two main sorts. Reliability may be achieved first through using equivalent forms (also known as alternative forms) of a test or data-gathering instrument. If an equivalent form of the test or instrument is devised and yields similar results, then the instrument can be said to demonstrate this form of reliability. For example, the pretest and post-test in an experiment are predicated on this type of reliability, being alternate forms of instrument to measure the same issues. This type of reliability might also be demonstrated if the equivalent forms of a test or other instrument yield consistent results if applied simultaneously to matched samples (e.g. a control and experimental group, or two random stratified samples in a survey). Here reliability can be measured through a t-test, through the demonstration of a high correlation coefficient and through the demonstration of similar means and standard deviations between two groups.
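The checks named above (similar means, similar standard deviations, a high correlation between matched samples) can be computed directly. A minimal sketch in pure Python; the score lists are invented for illustration:

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    # Sample standard deviation (n - 1 denominator)
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson_r(xs, ys):
    # Pearson product moment correlation between paired scores
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

# Hypothetical scores from two equivalent forms of a test,
# administered to the same matched sample
form_a = [12, 15, 11, 14, 16, 13, 15, 12]
form_b = [13, 14, 12, 15, 15, 12, 16, 13]

print(mean(form_a), mean(form_b))    # similar means suggest equivalence
print(stdev(form_a), stdev(form_b))  # similar spread
print(round(pearson_r(form_a, form_b), 2))  # 0.81 — a high correlation
```

A t-test on the two sets of scores (e.g. via `scipy.stats.ttest_ind`) would complete the comparison; it is omitted here to keep the sketch dependency-free.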

Second, reliability as equivalence may be achieved through inter-rater reliability. If more than one researcher is taking part in a piece of research then, human judgement being fallible, agreement between all researchers must be achieved, through ensuring that each researcher enters data in the same way. This would be particularly pertinent to a team of researchers gathering structured observational or semi-structured interview data where each member of the team would have to agree on which data would be entered in which categories. For observational data, reliability is addressed in the training sessions for researchers where they work on video material to ensure parity in how they enter the data.

At a simple level one can calculate the inter-rater agreement as a percentage:

(Number of actual agreements / Number of possible agreements) × 100

Robson (2002: 341) sets out a more sophisticated way of measuring inter-rater reliability in coded observational data, and his method can be used with other types of data.
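The percentage-agreement formula above can be sketched as a short function; the category labels and codings are invented for illustration:

```python
def percent_agreement(rater_a, rater_b):
    """Inter-rater agreement as a percentage: the number of items coded
    identically by both raters, divided by the number of possible
    agreements (one per item), multiplied by 100."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must code the same set of items")
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return agreements / len(rater_a) * 100

# Hypothetical codings of ten classroom observations by two raters
rater_1 = ["on-task", "off-task", "on-task", "on-task", "off-task",
           "on-task", "on-task", "off-task", "on-task", "on-task"]
rater_2 = ["on-task", "off-task", "on-task", "off-task", "off-task",
           "on-task", "on-task", "on-task", "on-task", "on-task"]

print(percent_agreement(rater_1, rater_2))  # 80.0
```

Percentage agreement takes no account of agreement arising by chance, which is one reason more sophisticated measures such as Robson's are preferred for coded observational data.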

Reliability as internal consistency

Whereas the test/retest method and the equivalent forms method of demonstrating reliability require the tests or instruments to be done twice, demonstrating internal consistency demands that the instrument or tests be run once only, through the split-half method.

Let us imagine that a test is to be administered to a group of students. Here the test items are divided into two halves, ensuring that each half is matched in terms of item difficulty and content. Each half is marked separately. If the test is to demonstrate split-half reliability, then the marks obtained on each half should be correlated highly with the other. Any student's marks on the one half should match his or her marks on the other half. This can be calculated using the Spearman-Brown formula:

Reliability = 2r / (1 + r)

where r = the actual correlation between the halves of the instrument (see http://www.routledge.com/textbooks/9780415368780 – Chapter 6, file 6.6.ppt).

This calculation requires a correlation coefficient to be calculated, e.g. a Spearman rank order correlation or a Pearson product moment correlation. Let us say that the correlation coefficient between the two halves is 0.85; in this case, using the Spearman-Brown formula, the reliability is set out thus:

Reliability = (2 × 0.85) / (1 + 0.85) = 1.70 / 1.85 = 0.919

Given that the maximum value of the coefficient is 1.00, we can see that the reliability of this instrument, calculated for the split-half form of reliability, is very high indeed.
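The Spearman-Brown step-up is a one-line function; a minimal sketch reproducing the worked example above:

```python
def spearman_brown(r):
    """Step up the correlation r between the two test halves
    to an estimate of full-test reliability: 2r / (1 + r)."""
    return (2 * r) / (1 + r)

# Correlation between the halves, as in the worked example
r = 0.85
print(round(spearman_brown(r), 3))  # 0.919
```

Note that the stepped-up reliability is always at least as large as r itself (for positive r), reflecting the fact that the full test is twice the length of either half.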

This type of reliability assumes that the test administered can be split into two matched halves; many tests have a gradient of difficulty or different items of content in each half. If this is the case and, for example, the test contains twenty items, then the researcher, instead of splitting the test into two

Chapter 6
