Thinking and Deciding


364 QUANTITATIVE JUDGMENT

procedure may involve many of the same processes as evaluation, so we discuss it here too.

A major question in the study of judgment concerns the relative efficacy of unaided holistic judgment — in which the judge simply assigns a number to each case — and judgment aided by the use of calculations, like those done in MAUT. The answers to this question have strong implications for decision making in government, business, and the professions, and in any situation where important decisions are made about a number of cases described on the same dimensions.

Multiple linear regression

Most of the literature on judgment has looked at situations in which each of a number of possibilities, such as applicants for college, is characterized by several numbers. Each number represents a value on some dimension, or cue, such as grades or quality of recommendations. The judge's task is to evaluate each possibility with respect to some goal (or goals), such as college grades as a criterion of success in college. Each dimension or cue has a high and low end with respect to the goal. For example, high test scores are assumed to be better than low test scores. In these situations a certain kind of normative model is assumed to apply, specifically, a statistical model called multiple linear regression (or just regression, for short).1

Let us take an extremely simple, somewhat silly, example of predicting a student's final-exam grade (F) from the grades on a paper (P) and a midterm exam (M). Table 15.1 shows the three grades. A computer finds a formula that comes as close as possible to predicting F from M and P. In this case, the formula is F = .71 · M + .33 · P − 2.3. The predicted values, computed from the formula, are in the column labeled PRE. You can see that the predictions are not perfect. The last column shows the errors.
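The fitting step can be sketched in a few lines of Python. The grades below are hypothetical stand-ins, not the data of Table 15.1 (which is not reproduced here); the code finds the weights and constant by least squares via the normal equations, then prints predictions and errors in the spirit of the PRE and error columns.

```python
# Least-squares fit of F from M and P. The grades are HYPOTHETICAL,
# chosen only to illustrate the method, not the data of Table 15.1.
M = [70, 85, 60, 90, 75]   # midterm grades
P = [80, 70, 65, 95, 85]   # paper grades
F = [72, 78, 60, 92, 79]   # final-exam grades (to be predicted)

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination
    with partial pivoting."""
    n = 3
    A = [row[:] + [b[i]] for i, row in enumerate(A)]  # augment
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Design matrix rows [1, M_i, P_i]; normal equations X'X b = X'F
X = [[1.0, m, p] for m, p in zip(M, P)]
XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)]
       for r in range(3)]
XtF = [sum(row[r] * f for row, f in zip(X, F)) for r in range(3)]
const, w_m, w_p = solve3(XtX, XtF)

pred = [const + w_m * m + w_p * p for m, p in zip(M, P)]
errors = [f - pr for f, pr in zip(F, pred)]  # actual minus predicted
print("formula: F = %.2f*M + %.2f*P + %.2f" % (w_m, w_p, const))
for f, pr, e in zip(F, pred, errors):
    print("actual %5.1f  predicted %6.2f  error %6.2f" % (f, pr, e))
```

Because the fit includes a constant term, the errors it produces sum to zero, and the mean of the predicted values matches the mean of the actual ones, just as described below for the text's example.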

The usual method of regression, illustrated here, chooses the formula so that the prediction is a linear sum of the predictors, each multiplied by a weight, and the sum of the squares of the errors is minimized. The formula also contains a constant term, in this case −2.3, which makes sure that the mean of the predicted values is the same as the mean of the real ones. The weights are roughly analogous to weights in MAUT (p. 341).
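The role of the constant term can be made concrete: for any fixed slopes, setting the derivative of the sum of squared errors with respect to the constant to zero gives const = mean(F) − w_m·mean(M) − w_p·mean(P), which forces the mean of the predictions to equal the mean of the actual values. A minimal sketch, again with hypothetical grades (only the slopes .71 and .33 come from the text's formula):

```python
# The SSE-minimizing constant, for ANY fixed slopes, equates the
# mean prediction with the mean actual value. Grades are hypothetical.
M = [70, 85, 60, 90, 75]
P = [80, 70, 65, 95, 85]
F = [72, 78, 60, 92, 79]

def mean(xs):
    return sum(xs) / len(xs)

def best_const(w_m, w_p):
    # d(SSE)/d(const) = 0  =>  const = mean(F) - w_m*mean(M) - w_p*mean(P)
    return mean(F) - w_m * mean(M) - w_p * mean(P)

w_m, w_p = 0.71, 0.33          # slopes from the text's example formula
c = best_const(w_m, w_p)
pred = [c + w_m * m + w_p * p for m, p in zip(M, P)]
print(mean(pred), mean(F))     # equal by construction, up to rounding
```

This is why the constant "makes sure that the mean of the predicted values is the same as the mean of the real ones": whatever slopes the fit settles on, the intercept absorbs any overall offset.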

The same method is useful in many other situations in which we want to predict one variable from several others, and in which we can expect this sort of linear formula to be approximately true. That is, we should expect the dependent variable — F in this case — to increase (or decrease) with each of the predictors. Examples are: predicting college grades from high-school grades and admissions tests (Baron and Norman, 1992); election margins from economic data (Fair, 2002); and wine quality from weather data in the year the grapes were grown (Fair, 2002; see Armstrong, 2001, for other examples).

1 Economists call it “ordinary least squares” regression, or OLS, because the method involves minimizing the sum of the squared errors of prediction.
