
7.4 PAIRED COMPARISONS

use of related observations resulting from nonindependent samples. A hypothesis test based on this type of data is known as a paired comparisons test.

Reasons for Pairing   It frequently happens that true differences do not exist between two populations with respect to the variable of interest, but the presence of extraneous sources of variation may cause rejection of the null hypothesis of no difference. On the other hand, true differences also may be masked by the presence of extraneous factors.

Suppose, for example, that we wish to compare two sunscreens. There are at least two ways in which the experiment may be carried out. One method would be to select a simple random sample of subjects to receive sunscreen A and an independent simple random sample of subjects to receive sunscreen B. We send the subjects out into the sunshine for a specified length of time, after which we will measure the amount of damage from the rays of the sun. Suppose we employ this method, but inadvertently most of the subjects receiving sunscreen A have darker complexions that are naturally less sensitive to sunlight. Let us say that after the experiment has been completed we find that subjects receiving sunscreen A had less sun damage. We would not know whether they had less sun damage because sunscreen A was more protective than sunscreen B or because the subjects were naturally less sensitive to the sun.

A better way to design the experiment would be to select just one simple random sample of subjects and let each member of the sample receive both sunscreens. We could, for example, randomly assign the sunscreens to the left or the right side of each subject’s back, with each subject receiving both sunscreens. After a specified length of exposure to the sun, we would measure the amount of sun damage to each half of the back. If the half of the back receiving sunscreen A tended to be less damaged, we could more confidently attribute the result to the sunscreen, since in each instance both sunscreens were applied to equally pigmented skin.

The objective in paired comparisons tests is to eliminate a maximum number of sources of extraneous variation by making the pairs similar with respect to as many variables as possible.

Related or paired observations may be obtained in a number of ways. The same subjects may be measured before and after receiving some treatment. Litter mates of the same sex may be assigned randomly to receive either a treatment or a placebo. Pairs of twins or siblings may be assigned randomly to two treatments in such a way that members of a single pair receive different treatments. In comparing two methods of analysis, the material to be analyzed may be divided equally so that one-half is analyzed by one method and one-half is analyzed by the other. Or pairs may be formed by matching individuals on some characteristic, for example, digital dexterity, which is closely related to the measurement of interest, say, posttreatment scores on some test requiring digital manipulation.

Instead of performing the analysis with individual observations, we use $d_i$, the difference between pairs of observations, as the variable of interest.

When the $n$ sample differences computed from the $n$ pairs of measurements constitute a simple random sample from a normally distributed population of differences, the test statistic for testing hypotheses about the population mean difference $\mu_d$ is

$$ t = \frac{\bar{d} - \mu_{d_0}}{s_{\bar{d}}} \qquad (7.4.1) $$
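As a concrete illustration of (7.4.1), the sketch below computes the statistic from hypothetical paired sun-damage scores; the numbers and variable names are invented for illustration and are not taken from the text. It assumes the usual standard error of the mean difference, $s_{\bar{d}} = s_d / \sqrt{n}$, and refers the result to a $t$ distribution with $n - 1$ degrees of freedom.

```python
# A minimal sketch of the paired comparisons test in equation 7.4.1.
# The paired sun-damage scores below are made-up illustrative values.
import math

damage_a = [2.1, 3.4, 1.8, 2.9, 3.0, 2.5, 1.9, 2.7]  # side treated with sunscreen A
damage_b = [2.6, 3.9, 2.2, 3.1, 3.6, 2.8, 2.4, 3.3]  # side treated with sunscreen B

# Work with the differences d_i rather than the individual observations.
d = [a - b for a, b in zip(damage_a, damage_b)]
n = len(d)

d_bar = sum(d) / n                                              # sample mean difference
s_d = math.sqrt(sum((di - d_bar) ** 2 for di in d) / (n - 1))   # sample SD of the differences
s_d_bar = s_d / math.sqrt(n)                                    # standard error of d_bar

mu_d0 = 0                      # hypothesized population mean difference (H0: mu_d = 0)
t = (d_bar - mu_d0) / s_d_bar  # test statistic of equation 7.4.1

print(f"n = {n}, d_bar = {d_bar:.3f}, s_d = {s_d:.3f}, t = {t:.3f}")
# The statistic is referred to the t distribution with n - 1 degrees of freedom.
```

The same value can be cross-checked with scipy.stats.ttest_rel(damage_a, damage_b), which carries out the paired t test on the two sets of related observations.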
