Statistics for the Behavioral Sciences by Frederick J. Gravetter, Larry B. Wallnau ISBN 10: 1305504917 ISBN 13: 9781305504912

Statistics is one of the most practical and essential courses that you will take, and a primary goal of this popular text is to make the task of learning statistics as simple as possible. Straightforward instruction, built-in learning aids, and real-world examples have made STATISTICS FOR THE BEHAVIORAL SCIENCES, 10th Edition the text selected most often by instructors for their students in the behavioral and social sciences. The authors provide a conceptual context that makes it easier to learn formulas and procedures, explaining why procedures were developed and when they should be used. This text will also instill the basic principles of objectivity and logic that are essential for science and valuable in everyday life, making it a useful reference long after you complete the course.


SECTION 16.2 | The Standard Error of Estimate and Analysis of Regression 539

Conceptually, the standard error of estimate is very much like a standard deviation: both provide a measure of standard distance. Also, you will see that the calculation of the standard error of estimate is very similar to the calculation of standard deviation.

To calculate the standard error of estimate, we first find a sum of squared deviations (SS). Each deviation measures the distance between the actual Y value (from the data) and the predicted Y value (from the regression line). This sum of squares is commonly called SS_residual because it is based on the remaining distance between the actual Y scores and the predicted values.

SS_residual = Σ(Y − Ŷ)²    (16.8)
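As a quick sketch of Equation 16.8, the sum of squared residuals can be computed by pairing each actual Y score with its predicted value. The Y values below are made-up numbers for illustration, not data from the text:

```python
# Illustrative computation of SS_residual = Σ(Y − Ŷ)² (Equation 16.8).
# The values here are hypothetical, chosen only to show the arithmetic.
y = [10, 4, 5]       # actual Y scores
y_hat = [9, 1, 7]    # predicted Y values from a regression line

# Square each residual (Y − Ŷ) and sum the results
ss_residual = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
print(ss_residual)   # 1 + 9 + 4 = 14
```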

Recall that variance measures the average squared distance.

The obtained SS value is then divided by its degrees of freedom to obtain a measure of variance. This procedure should be very familiar:

variance = SS/df

The degrees of freedom for the standard error of estimate are df = n − 2. The reason for having n − 2 degrees of freedom, rather than the customary n − 1, is that we now are measuring deviations from a line rather than deviations from a mean. To find the equation for the regression line, you must know the means for both the X and the Y scores. Specifying these two means places two restrictions on the variability of the data, with the result that the scores have only n − 2 degrees of freedom. (Note: the df = n − 2 for SS_residual is the same df = n − 2 that we encountered when testing the significance of the Pearson correlation on p. 508.)

The final step in the calculation of the standard error of estimate is to take the square root of the variance to obtain a measure of standard distance. The final equation is

standard error of estimate = √(SS_residual / df) = √(Σ(Y − Ŷ)² / (n − 2))    (16.9)
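Equation 16.9 translates directly into code. The following sketch wraps the whole computation in one function; the function name and docstring are illustrative, not from the text:

```python
import math

def standard_error_of_estimate(y, y_hat):
    """Equation 16.9: sqrt(SS_residual / df), where df = n − 2
    because the predicted values come from a regression line
    that uses both the X mean and the Y mean."""
    n = len(y)
    ss_residual = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    return math.sqrt(ss_residual / (n - 2))
```

Note that the only difference from an ordinary standard deviation is the divisor: n − 2 instead of n − 1, for the reason given in the preceding paragraph.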

The following example demonstrates the calculation of this standard error.

EXAMPLE 16.3

The same data that were used in Example 16.1 are used here to demonstrate the calculation of the standard error of estimate. These data have the regression equation

Ŷ = 2X − 1

Using this regression equation, we have computed the predicted Y value, the residual, and the squared residual for each individual, using the data from Example 16.1.

            Predicted                  Squared
   Data      Y value      Residual    Residual
  X    Y    Ŷ = 2X − 1     Y − Ŷ      (Y − Ŷ)²
  5   10        9            1            1
  1    4        1            3            9
  4    5        7           −2            4
  7   11       13           −2            4
  6   15       11            4           16
  4    6        7           −1            1
  3    5        5            0            0
  2    0        3           −3            9
                             0    SS_residual = 44
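The table above can be reproduced in a few lines. This sketch uses the X and Y scores from Example 16.1 as listed in the table, applies the regression equation Ŷ = 2X − 1, and then finishes the standard-error calculation with df = n − 2:

```python
import math

# Data from Example 16.1, with the regression equation Y-hat = 2X − 1
x = [5, 1, 4, 7, 6, 4, 3, 2]
y = [10, 4, 5, 11, 15, 6, 5, 0]

y_hat = [2 * xi - 1 for xi in x]                     # predicted Y values
residuals = [yi - yh for yi, yh in zip(y, y_hat)]    # Y − Y-hat

ss_residual = sum(r ** 2 for r in residuals)
print(sum(residuals))    # 0  (residuals always sum to zero)
print(ss_residual)       # 44, matching the table

df = len(y) - 2          # n − 2 = 6
se = math.sqrt(ss_residual / df)
print(round(se, 2))      # 2.71
```

So for these data the standard error of estimate is √(44/6) ≈ 2.71, meaning the actual Y scores deviate from the regression line by about 2.71 points on average.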
