CHAPTER 4: Observation

The way in which interobserver reliability is assessed depends on how behavior is measured. When events are classified according to mutually exclusive categories (nominal scale), observer reliability is generally assessed using a percentage agreement measure. A formula for calculating percentage agreement between observers is

Percentage agreement = (Number of times two observers agree / Number of opportunities to agree) × 100

In his study of childhood aggression, Hartup (1974) reported measures of reliability using percentage agreement that ranged from 83% to 94% for observers who coded type of aggression and the nature of antecedent events in narrative records. Although there is no hard-and-fast rule that defines low interobserver reliability, researchers generally report estimates of reliability that exceed 85% in the published literature, suggesting that percentage agreement much lower than that is unacceptable.

In many observational studies, data are collected by several observers who observe at different times. Under these circumstances, researchers select a sample of the observations to measure reliability. For example, two observers might record behavior according to time-sampling procedures and observe at the same time for only a subset of times. The percentage agreement for the times in which both observers are present can be used to estimate the degree of reliability for the study as a whole.

When data are measured using an ordinal scale, the Spearman rank-order correlation is used to assess interobserver reliability. When observational data are measured on an interval or ratio scale, such as when time is the measured variable, observer reliability can be assessed using a Pearson Product-Moment Correlation Coefficient, r. For example, LaFrance and Mayo (1976) obtained measures of reliability when observers recorded how much time a listener gazed into the speaker's face during a conversation. Observer reliability in their study was good; they found an average correlation of .92 between pairs of observers who recorded time spent in eye contact.

Key Concept
A correlation exists when two different measures of the same people, events, or things vary together; that is, when scores on one variable covary with scores on another variable. A correlation coefficient is a quantitative index of the degree of this covariation. When observation data are measured using interval or ratio scales, a Pearson correlation coefficient, r, may be used to obtain a measure of interobserver reliability. The correlation tells us how well the ratings of two observers agree.

The correlation coefficient indicates the direction and strength of the relationship. Direction can be either positive or negative. A positive correlation indicates that as the values for one measure increase, the values of the other measure also increase. For example, measures of smoking and lung cancer are positively correlated. A negative correlation indicates that as the values of one measure increase, the values of the second measure decrease. For instance,
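Both reliability statistics described in this section can be computed directly from observers' records. The sketch below is a minimal illustration, not material from the textbook: the observer codes and gaze times are made-up values (not data from Hartup, 1974, or LaFrance and Mayo, 1976), and the functions simply apply the percentage agreement formula and the standard Pearson r formula.

```python
# Illustrative sketch with hypothetical observer data (not from the studies cited above).
from math import sqrt

def percentage_agreement(codes_a, codes_b):
    """Percentage agreement for nominal codes from two observers:
    (number of times the observers agree / number of opportunities to agree) x 100."""
    agreements = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * agreements / len(codes_a)

def pearson_r(x, y):
    """Pearson product-moment correlation for interval/ratio measurements."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    ss_x = sum((xi - mean_x) ** 2 for xi in x)
    ss_y = sum((yi - mean_y) ** 2 for yi in y)
    return cov / sqrt(ss_x * ss_y)

# Two observers classify the same 10 events into aggression categories (nominal scale).
obs1 = ["hostile", "instrumental", "hostile", "hostile", "instrumental",
        "hostile", "instrumental", "instrumental", "hostile", "hostile"]
obs2 = ["hostile", "instrumental", "hostile", "instrumental", "instrumental",
        "hostile", "instrumental", "instrumental", "hostile", "hostile"]
print(percentage_agreement(obs1, obs2))   # 90.0 (9 agreements in 10 opportunities)

# Two observers time (in seconds) how long a listener gazes at the speaker
# during each of five conversations (ratio scale).
gaze1 = [12.0, 30.5, 18.2, 45.0, 25.3]
gaze2 = [11.5, 31.0, 19.0, 44.2, 26.1]
print(round(pearson_r(gaze1, gaze2), 2))  # near 1.0, indicating high agreement
```

With these hypothetical data, the observers agree on 9 of 10 classifications (90%) and their gaze times correlate near 1.0, values that would count as acceptable interobserver reliability by the standards discussed above.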
