Thinking and Deciding

THE LENS MODEL 365

Table 15.1: Data and predictions (PRE) for regression example

Student    P    M    F    PRE   Error
   1      90   90   90   91.6    1.6
   2      80   90   91   88.3   -2.7
   3      70   90   84   85.0    1.0
   4      70   70   71   70.7   -0.3
   5      60   40   46   46.0    0.0
   6      50   80   71   71.3    0.3
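The additive model behind Table 15.1 can be recovered directly from the data. The following sketch (Python with NumPy; the variable names are mine, not the book's) fits an ordinary least-squares regression of F on P and M; the fitted values reproduce the PRE column, and their differences from F reproduce the Error column, up to rounding.

```python
import numpy as np

# Data from Table 15.1: paper (P), midterm (M), final exam (F)
P = np.array([90, 80, 70, 70, 60, 50], dtype=float)
M = np.array([90, 90, 90, 70, 40, 80], dtype=float)
F = np.array([90, 91, 84, 71, 46, 71], dtype=float)
PRE = np.array([91.6, 88.3, 85.0, 70.7, 46.0, 71.3])

# Design matrix with an intercept column; solve the least-squares problem
X = np.column_stack([P, M, np.ones_like(P)])
coef, *_ = np.linalg.lstsq(X, F, rcond=None)
b_P, b_M, intercept = coef

pred = X @ coef    # fitted values, matching the PRE column up to rounding
error = pred - F   # matches the Error column in the table
```

The coefficient on the midterm, b_M, comes out much larger than the coefficient on the paper, b_P: the midterm carries most of the predictive weight in this data.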

In some cases, the basic idea of the model — that everything is multiplied by a weight representing its importance and all of the values are then added together — might be wrong. One way in which the model could be wrong is that there might be an interaction between two variables. This means that the importance (or weight) of one variable depends on the value of the other. For example, in prediction of college grades, aptitude tests might be more useful when high-school grades are low, or when they are high — one could tell a story about either case.
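One standard way to check for such an interaction — a sketch of common regression practice, not a procedure from the book — is to add a product term to the regression and ask whether it improves the fit. Using the Table 15.1 data (variable names are mine):

```python
import numpy as np

P = np.array([90, 80, 70, 70, 60, 50], dtype=float)
M = np.array([90, 90, 90, 70, 40, 80], dtype=float)
F = np.array([90, 91, 84, 71, 46, 71], dtype=float)
ones = np.ones_like(P)

# Additive model: F is approximated by b_P*P + b_M*M + c
X_add = np.column_stack([P, M, ones])
coef_add, *_ = np.linalg.lstsq(X_add, F, rcond=None)
rss_add = np.sum((F - X_add @ coef_add) ** 2)

# Interaction model adds the product P*M, letting the weight of one
# predictor depend on the level of the other
X_int = np.column_stack([P, M, P * M, ones])
coef_int, *_ = np.linalg.lstsq(X_int, F, rcond=None)
rss_int = np.sum((F - X_int @ coef_int) ** 2)

# Adding a regressor can only lower the residual sum of squares;
# whether the drop is large enough to matter is the substantive question.
```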

Another way in which the model could be wrong is that some variables might not have a simple linear effect. The importance of a variable might be different for different parts of its range. It could even reverse direction. For example, the effect of sugar content of a beverage on taste ratings would probably increase and then decrease. In general, though, when reversals like this are not expected, simple, additive models do quite well even if their assumptions are incorrect (Dawes and Corrigan, 1974).
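The robustness result of Dawes and Corrigan can be illustrated on the Table 15.1 data: a model that simply standardizes the two predictors and adds them with equal weights predicts the final nearly as well as the fitted regression. This sketch is my construction, not an analysis from the book:

```python
import numpy as np

P = np.array([90, 80, 70, 70, 60, 50], dtype=float)
M = np.array([90, 90, 90, 70, 40, 80], dtype=float)
F = np.array([90, 91, 84, 71, 46, 71], dtype=float)

def zscore(x):
    # Standardize so each predictor contributes on the same scale
    return (x - x.mean()) / x.std()

# Equal-weight composite: just add the standardized predictors
equal = zscore(P) + zscore(M)
r_equal = np.corrcoef(equal, F)[0, 1]

# Optimal (least-squares) composite, for comparison
X = np.column_stack([P, M, np.ones_like(P)])
coef, *_ = np.linalg.lstsq(X, F, rcond=None)
r_fit = np.corrcoef(X @ coef, F)[0, 1]

# r_equal trails r_fit only slightly: equal weights lose little accuracy
```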

The lens model<br />

Suppose we asked a professor to predict F (final exam grade) from M and P without the benefit of the formula. We could then obtain a list of judgments that we could place beside the true values for comparison, as shown in Table 15.2. We could ask several questions about these judgments. For example, we could ask how close they come to the true values, or whether the judgments themselves could be predicted from M and P, and so on.

In this case, the formula for predicting the judgments J is J = .50·M + .49·P + .76. Notice that the professor weighed the two predictors about equally, although in fact the midterm was a much better predictor than the paper (in the original formula above). Notice also, however, that the error in predicting the professor's judgments was very small. She was quite consistent in her policy, even though it was off. On the other hand, her overemphasis on the paper did not hurt much. She still did quite well at predicting the final exam grade.
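Recovering a judge's weights by regressing her judgments on the cues is often called "policy capturing." Because the professor's policy is nearly deterministic, a sketch that generates judgments exactly from the stated formula (a noise-free simplification, and my own names) recovers the weights:

```python
import numpy as np

# Cues from Table 15.1: midterm (M) and paper (P)
M = np.array([90, 90, 90, 70, 40, 80], dtype=float)
P = np.array([90, 80, 70, 70, 60, 50], dtype=float)

# Judgments generated from the professor's policy as stated in the text,
# J = .50*M + .49*P + .76, with no judgment noise for illustration
J = 0.50 * M + 0.49 * P + 0.76

# Regressing J on M and P recovers the policy weights
X = np.column_stack([M, P, np.ones_like(M)])
w_M, w_P, const = np.linalg.lstsq(X, J, rcond=None)[0]
```

With real judgment data the recovered weights would only approximate the policy, and the size of the regression's residual would measure how consistently the judge applies it.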
