BoundedRationality_TheAdaptiveToolbox.pdf

176 Daniel G. Goldstein et al.

by regression reverses the "true" relative sizes of the weights, regression will make inferior predictions. Equal weighting insures you against making this kind of error. More generally, Einhorn and Hogarth (1975) showed that equal weighting makes better predictions than regression as (1) the number of predictor (or independent) variables increases, (2) the average intercorrelation between predictors increases, (3) the ratio of predictors to data points (on which regression weights are estimated) increases, and (4) the R² of the regression model decreases. To see how equal weighting can help you pick the "all star" basketball team, or decide how many and which forecasters you should consult in making a "consensus" forecast, see Einhorn and McCoach (1977) and Hogarth (1978).
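The equal-weighting idea can be sketched in a few lines: every cue gets the same weight, so the prediction is simply a sum of the (suitably oriented) cue values. The candidates and cue values below are hypothetical illustrations, not data from the chapter, and real applications would standardize the cues first.

```python
def unit_weight_score(cues):
    """Equal-weighting prediction: sum the cue values with unit weights.

    cues: list of cue values, each oriented so that larger means
    'more of the criterion'. No weights are estimated from data,
    which is exactly what makes the scheme robust.
    """
    return sum(cues)

# Two hypothetical candidates scored on three standardized cues:
a = [0.9, -0.2, 0.4]   # candidate A
b = [0.1,  0.5, 0.3]   # candidate B
print(unit_weight_score(a) > unit_weight_score(b))  # True: A ranks higher
```

Because nothing is estimated, there are no regression weights to overfit, which is why the scheme gains ground as predictors multiply, intercorrelate, or outnumber the data points.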

Take The Best

Take The Best is a heuristic from the adaptive toolbox (Gigerenzer and Goldstein 1996) that neither looks up nor integrates all available information. It is a lexicographic procedure (similar to the LEX model tested by Payne and colleagues) that uses a rank ordering of cues to make inferences and predictions (Martignon and Hoffrage 1999). Cues are searched through one at a time, until a cue that satisfies a stopping rule is found. The decision is made on the basis of the cue that stopped search, and all other cues are ignored. In empirical tests, Take The Best used less than a third of all information available to it. Remarkably, despite its simplicity, Take The Best can make predictions that are more accurate than those made by multiple regression and approximates the accuracy of Bayesian networks (Martignon and Laskey 1999; Czerlinski et al. 1999).
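The search, stopping, and decision rules just described can be sketched as follows. The cue names, their ordering, and the city profiles are hypothetical illustrations, not values from the chapter.

```python
def take_the_best(obj_a, obj_b, cue_order):
    """Lexicographic comparison of two objects on ranked binary cues.

    obj_a, obj_b: dicts mapping cue name -> 1 (positive), 0 (negative),
                  or None (unknown).
    cue_order: cue names ranked from highest to lowest validity.
    Returns 'a', 'b', or 'guess'.
    """
    for cue in cue_order:                      # search rule: one cue at a time
        va, vb = obj_a.get(cue), obj_b.get(cue)
        if va is not None and vb is not None and va != vb:
            # stopping + decision rule: the first discriminating cue
            # decides, and all remaining cues are ignored
            return 'a' if va > vb else 'b'
    return 'guess'                             # no cue discriminates

# Hypothetical example: inferring which of two cities is larger.
cue_order = ['capital', 'exposition_site', 'soccer_team']
city_a = {'capital': 1, 'exposition_site': 0, 'soccer_team': 1}
city_b = {'capital': 0, 'exposition_site': 1, 'soccer_team': 1}
print(take_the_best(city_a, city_b, cue_order))  # prints 'a'
```

Note the frugality: the comparison above ends at the first cue, so the two lower-ranked cues are never looked up at all.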

What makes it work? In Take The Best, the decision made by a higher-ranked cue cannot be overruled by the integration of lower-ranked cues. Its predictions are equivalent to those of linear models with noncompensatory families of weights (Martignon and Hoffrage 1999). For example, consider the linear model:

y = 8x1 + 4x2 + 2x3 + 1x4    (10.1)

where xi is a binary (1 or 0) cue for i = 1, 2, 3, 4. Each term on the right-hand side cannot be equaled or exceeded by the sum of all the terms with lesser weights. If cues are not binary but have positive real values that become neither infinitely small nor infinitely large (i.e., are bounded from below and from above by strictly positive real numbers), it is always possible to find weights that make such a noncompensatory linear model equivalent to Take The Best in performance. If the "true" weights of the cues (i.e., those of an optimal model like regression) are noncompensatory, then Take The Best cannot be beaten by any other linear model when fitting data.
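For the weights in Eq. 10.1, this equivalence can be verified directly: each weight exceeds the sum of all smaller weights (8 > 4 + 2 + 1, and so on), so for binary cues the linear score orders any two cue profiles exactly as a lexicographic comparison does. The following sketch checks this exhaustively over all pairs of binary profiles.

```python
from itertools import product

weights = [8, 4, 2, 1]  # noncompensatory: each weight > sum of lesser weights
assert all(w > sum(weights[i + 1:]) for i, w in enumerate(weights))

def linear_choice(a, b):
    """Choose by the linear model y = 8x1 + 4x2 + 2x3 + 1x4 (Eq. 10.1)."""
    ya = sum(w * x for w, x in zip(weights, a))
    yb = sum(w * x for w, x in zip(weights, b))
    return 'a' if ya > yb else ('b' if yb > ya else 'tie')

def lexicographic_choice(a, b):
    """Take The Best with cues already ranked: first differing cue decides."""
    for xa, xb in zip(a, b):
        if xa != xb:
            return 'a' if xa > xb else 'b'
    return 'tie'

# All 16 x 16 pairs of binary cue profiles yield identical decisions:
agree = all(linear_choice(a, b) == lexicographic_choice(a, b)
            for a in product([0, 1], repeat=4)
            for b in product([0, 1], repeat=4))
print(agree)  # prints True
```

With these particular weights the linear score is just the cue profile read as a binary number, which is why the two orderings can never disagree.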

When making predictions on new data, the frugality and simplicity of Take The Best are responsible for its robustness. Here the predictive accuracy of Take The Best is comparable to that of subtle Bayesian models, often surpassing optimal linear models (which tend to overfit). A variant of Akaike's Theorem
