
Statistics for the Behavioral Sciences, by Frederick J. Gravetter and Larry B. Wallnau (ISBN-10: 1305504917; ISBN-13: 9781305504912)

Statistics is one of the most practical and essential courses that you will take, and a primary goal of this popular text is to make the task of learning statistics as simple as possible. Straightforward instruction, built-in learning aids, and real-world examples have made STATISTICS FOR THE BEHAVIORAL SCIENCES, 10th Edition the text selected most often by instructors for their students in the behavioral and social sciences. The authors provide a conceptual context that makes it easier to learn formulas and procedures, explaining why procedures were developed and when they should be used. This text will also instill the basic principles of objectivity and logic that are essential for science and valuable in everyday life, making it a useful reference long after you complete the course.


SECTION 10.4 | Effect Size and Confidence Intervals for the Independent-Measures t

In the context of an independent-measures research study, the difference between the two sample means (M1 − M2) is used as the best estimate of the mean difference between the two populations, and the pooled standard deviation (the square root of the pooled variance) is used to estimate the population standard deviation. Thus, the formula for estimating Cohen’s d becomes

    estimated d = (estimated mean difference) / (estimated standard deviation)
                = (M1 − M2) / √(s²p)                                        (10.8)

For the data from Example 10.2, the two sample means are 8 and 12, and the pooled variance is 9. The estimated d for these data is

    d = (M1 − M2) / √(s²p) = (8 − 12) / √9 = −4 / 3 = −1.33

Note: Cohen’s d is typically reported as a positive value; in this case d = 1.33. Using the criteria established to evaluate Cohen’s d (see Table 8.2 on page 253), this value indicates a very large treatment effect.
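The calculation in Equation 10.8 is easy to verify numerically. The following Python sketch (the function name `estimated_cohens_d` is ours, not from the text) reproduces the computation for Example 10.2:

```python
from math import sqrt

def estimated_cohens_d(m1, m2, pooled_variance):
    """Estimated Cohen's d for independent measures (Equation 10.8)."""
    return (m1 - m2) / sqrt(pooled_variance)

# Values from Example 10.2: sample means 8 and 12, pooled variance 9.
d = estimated_cohens_d(8, 12, 9)
print(round(d, 2), round(abs(d), 2))  # -1.33, reported as the magnitude 1.33
```

As the note above points out, the sign only reflects which mean was labeled M1, so the magnitude is what gets reported.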

The independent-measures t hypothesis test also allows for measuring effect size by computing the percentage of variance accounted for, r². As we saw in Chapter 9, r² measures how much of the variability in the scores can be explained by the treatment effects. For example, some of the variability in the reported scores for the cheating study can be explained by knowing the room in which a particular student was tested; students in the dimly lit room tend to report higher scores and students in the well-lit room tend to report lower scores. By measuring exactly how much of the variability can be explained, we can obtain a measure of how big the treatment effect actually is. The calculation of r² for the independent-measures t is exactly the same as it was for the single-sample t in Chapter 9:

    r² = t² / (t² + df)                                                     (10.9)

For the data in Example 10.2, we obtained t = −2.67 with df = 14. These values produce an r² of

    r² = (−2.67)² / ((−2.67)² + 14) = 7.13 / (7.13 + 14) = 7.13 / 21.13 = 0.337

According to the standards used to evaluate r² (see Table 9.3 on page 283), this value also indicates a large treatment effect.
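Equation 10.9 can likewise be checked with a short Python sketch (the function name `r_squared` is ours) using the values from Example 10.2:

```python
def r_squared(t, df):
    """Percentage of variance accounted for, r-squared (Equation 10.9)."""
    return t ** 2 / (t ** 2 + df)

# Values from Example 10.2: t = -2.67 with df = 14.
r2 = r_squared(-2.67, 14)
print(round(r2, 3))  # 0.337
```

Because t is squared, the sign of t does not matter here, which is consistent with r² being a proportion between 0 and 1.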

Although the value of r² is usually obtained by using Equation 10.9, it is possible to determine the percentage of variability directly by computing SS values for the set of scores. The following example demonstrates this process using the data from the cheating study in Example 10.2.
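Before working through the example, here is a minimal Python sketch of the direct-SS idea, using made-up scores (not the data from the cheating study; the helper names are ours). It shows that r² computed as explained variability divided by total variability agrees with Equation 10.9:

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs, m):
    """Sum of squared deviations of the scores from m."""
    return sum((x - m) ** 2 for x in xs)

def r2_from_ss(g1, g2):
    """r-squared computed directly from SS values: explained / total variability."""
    everyone = g1 + g2
    ss_total = ss(everyone, mean(everyone))           # variability around the grand mean
    ss_within = ss(g1, mean(g1)) + ss(g2, mean(g2))   # variability left after the treatment effect
    return (ss_total - ss_within) / ss_total

# Hypothetical two-group data (n = 3 each), for illustration only.
g1, g2 = [1, 2, 3], [3, 4, 5]
direct = r2_from_ss(g1, g2)

# Cross-check with Equation 10.9, r² = t² / (t² + df), computing t from the same data.
n1, n2 = len(g1), len(g2)
sp2 = (ss(g1, mean(g1)) + ss(g2, mean(g2))) / (n1 + n2 - 2)   # pooled variance
t = (mean(g1) - mean(g2)) / sqrt(sp2 * (1 / n1 + 1 / n2))
from_t = t ** 2 / (t ** 2 + (n1 + n2 - 2))
print(direct, round(from_t, 10))  # both give 0.6
```

The agreement is not a coincidence: for two independent samples, t² / (t² + df) is algebraically the same proportion as (SS total − SS within) / SS total.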

EXAMPLE 10.5

The cheating study described in Example 10.2 examined dishonest behavior for two groups of students: one group tested in a dimly lit room and one group tested in a well-lit room. If we assume that the null hypothesis is true and there is no difference between the two populations of students, there should be no systematic difference between the two samples. In this case, the two samples can be combined to form a single set of n = 16 scores with an overall mean of M = 10. The two samples are shown as a single distribution in Figure 10.4(a).

For this example, however, the conclusion from the hypothesis test is that there is a real difference between the two groups. The students tested in the dimly lit room had a mean score of M = 12, which is 2 points above the overall average. Similarly, the students tested in the well-lit room had a mean score of M = 8, 2 points below the overall average. Thus,
