
Statistics for the Behavioral Sciences, by Frederick J. Gravetter and Larry B. Wallnau (ISBN-10: 1305504917; ISBN-13: 9781305504912)

Statistics is one of the most practical and essential courses that you will take, and a primary goal of this popular text is to make the task of learning statistics as simple as possible. Straightforward instruction, built-in learning aids, and real-world examples have made STATISTICS FOR THE BEHAVIORAL SCIENCES, 10th Edition the text selected most often by instructors for their students in the behavioral and social sciences. The authors provide a conceptual context that makes it easier to learn formulas and procedures, explaining why procedures were developed and when they should be used. This text will also instill the basic principles of objectivity and logic that are essential for science and valuable in everyday life, making it a useful reference long after you complete the course.


400 CHAPTER 12 | Introduction to Analysis of Variance

statistical conclusion is consistent with the appearance of the data in Figure 12.9(b). Looking at the figure, we see that the scores from the two samples appear to be intermixed randomly with no clear distinction between treatments.

As a final point, note that the denominator of the F-ratio, MS_within, is a measure of the variability (or variance) within each of the separate samples. As we have noted in previous chapters, high variability makes it difficult to see any patterns in the data. In Figure 12.9(a), the 4-point mean difference between treatments is easy to see because the sample variability is small. In Figure 12.9(b), the 4-point difference gets lost because the sample variability is large. In general, you can think of variance as measuring the amount of "noise" or "confusion" in the data. With large variance there is a lot of noise and confusion and it is difficult to see any clear patterns.

Although Examples 12.7 and 12.8 present somewhat simplified demonstrations with exaggerated data, the general point of the examples is to help you see what happens when you perform an ANOVA. Specifically:

1. The numerator of the F-ratio (MS_between) measures how much difference exists between the treatment means. The bigger the mean differences, the bigger the F-ratio.

2. The denominator of the F-ratio (MS_within) measures the variance of the scores inside each treatment; that is, the variance for each of the separate samples. In general, larger sample variance produces a smaller F-ratio.
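The two points above can be sketched directly from the definitions of MS_between and MS_within. The following is a minimal illustration, not the textbook's own computational procedure; the function and variable names are ours.

```python
def f_ratio(groups):
    """Return MS_between, MS_within, and F for a list of samples (one per treatment)."""
    k = len(groups)                                  # number of treatments
    all_scores = [x for g in groups for x in g]
    n_total = len(all_scores)
    grand_mean = sum(all_scores) / n_total

    # SS_between: squared deviations of each treatment mean from the
    # grand mean, weighted by the treatment's sample size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = k - 1

    # SS_within: squared deviations of each score from its own treatment mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_within = n_total - k

    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between, ms_within, ms_between / ms_within
```

With small within-treatment variability, as in `f_ratio([[1, 2, 3], [5, 6, 7]])`, the 4-point mean difference produces a large F; spreading the same means over noisier scores shrinks F, exactly as in Figure 12.9.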

We should note that the number of scores in the samples also influences the outcome of an ANOVA. As with most other hypothesis tests, if other factors are held constant, increasing the sample size tends to increase the likelihood of rejecting the null hypothesis. However, changes in sample size have little or no effect on measures of effect size such as η².

Finally, we should note that the problems associated with high variance often can be minimized by transforming the original scores to ranks and then conducting an alternative statistical analysis known as the Kruskal-Wallis test, which is designed specifically for ordinal data. The Kruskal-Wallis test is presented in Appendix E, which also discusses the general purpose and process of converting numerical scores into ranks. The Kruskal-Wallis test also can be used if the data violate one of the assumptions for the independent-measures ANOVA, which are outlined at the end of Section 12.4.
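The rank transformation is simple enough to sketch from scratch. The version below assumes no tied scores (ties require a correction, as Appendix E discusses) and uses the standard H formula; the names are illustrative, and in practice one would typically call scipy.stats.kruskal instead.

```python
def ranks(all_scores):
    """Map each score to its rank (1 = smallest) in the combined data.
    Assumes all scores are distinct (no ties)."""
    ordered = sorted(all_scores)
    return {x: i + 1 for i, x in enumerate(ordered)}

def kruskal_wallis_h(groups):
    """H = 12 / (N(N+1)) * sum(T_i^2 / n_i) - 3(N+1),
    where T_i is the sum of ranks for group i and N is the total n."""
    all_scores = [x for g in groups for x in g]
    n = len(all_scores)
    rank_of = ranks(all_scores)
    rank_term = sum(sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * rank_term - 3 * (n + 1)
```

Because H depends only on ranks, a few extreme scores that inflate the sample variance have no more influence than any other high or low scores.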

■ MS_within and Pooled Variance

You may have recognized that the two research outcomes presented in Example 12.8 are similar to those presented earlier in Example 10.8 in Chapter 10. Both examples are intended to demonstrate the role of variance in a hypothesis test. Both examples show that large values for sample variance can obscure any patterns in the data and reduce the potential for finding significant differences between means.

For the independent-measures t statistic in Chapter 10, the sample variance contributes directly to the standard error in the bottom of the t formula. Now, the sample variance contributes directly to the value of MS_within in the bottom of the F-ratio. In the t statistic and in the F-ratio the variances from the separate samples are pooled together to create one average value for sample variance. For the independent-measures t statistic, we pooled two samples together to compute

pooled variance = s²_p = (SS₁ + SS₂) / (df₁ + df₂)
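The formula above translates directly into code. This is a minimal sketch using the usual definitions (SS is the sum of squared deviations from the sample mean; df = n − 1); the function names are ours, not the textbook's.

```python
def ss(sample):
    """SS: sum of squared deviations of the scores from their mean."""
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample)

def pooled_variance(sample1, sample2):
    """s_p^2 = (SS1 + SS2) / (df1 + df2), with df = n - 1 for each sample."""
    df1, df2 = len(sample1) - 1, len(sample2) - 1
    return (ss(sample1) + ss(sample2)) / (df1 + df2)
```

MS_within is the same idea extended to any number of samples: the sum of all the SS values divided by the sum of all the df values, so with exactly two samples MS_within equals the pooled variance.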
