
construction mechanisms that guarantee high accuracy. The Bayesian paradigm discussed in this chapter provides efficient search methods for good classification trees.

It is remarkable that the Bayesian toolbox and the adaptive toolbox meet in the end, adopting, at least in the final steps (see Figure 9.3), very similar inference machines, namely trees. As I have pointed out, being or not being a Bayesian is a matter not only of information formats but also of memory and retrieval capacity. Nothing is more psychologically plausible than the Bayesian approach when natural frequencies (of the correct cue configurations) are either provided or retrieved from memory.
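To make the natural-frequency point concrete, here is a minimal sketch (not from the chapter; the counts are invented for illustration) of how the Bayesian posterior reduces to a single division once information arrives as counts of cases rather than as probabilities:

```python
# Hypothetical natural-frequency sketch. Out of a reference class of comparable
# cases, we tally how often the outcome co-occurs with the observed cue
# configuration; the posterior is then a ratio of counts, with no probability
# algebra required.

def posterior_from_natural_frequencies(outcome_and_cue: int, cue_total: int) -> float:
    """P(outcome | cue configuration) as a ratio of natural frequencies."""
    return outcome_and_cue / cue_total

# Illustrative counts (invented): of 1,000 cases, 10 have the condition;
# 8 of those show the cue, as do 95 of the 990 without the condition.
with_condition_and_cue = 8
cue_total = 8 + 95
print(posterior_from_natural_frequencies(with_condition_and_cue, cue_total))
# ~0.078: in the count format, the Bayesian answer is a one-step division.
```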

All other models are surrogates for the profile memorization method, not only when training and test sets differ, but also when fitting known data.
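The profile memorization method can be pictured as a lookup table over complete cue profiles. The sketch below is one possible reading of it; the class name, tie-breaking rule, and toy data are my assumptions, not the chapter's:

```python
# Minimal sketch of profile memorization: store, for every full cue profile
# seen in training, the tally of outcomes, and classify new cases by direct
# lookup of their exact profile.

from collections import defaultdict
from typing import Dict, List, Optional, Tuple

class ProfileMemorizer:
    def __init__(self) -> None:
        # profile -> [count of outcome 0, count of outcome 1]
        self.counts: Dict[Tuple[int, ...], List[int]] = defaultdict(lambda: [0, 0])

    def train(self, profile: Tuple[int, ...], outcome: int) -> None:
        self.counts[profile][outcome] += 1  # tally outcomes per exact profile

    def predict(self, profile: Tuple[int, ...]) -> Optional[int]:
        if profile not in self.counts:
            return None  # unseen profile: pure memorization gives no answer
        neg, pos = self.counts[profile]
        return 1 if pos >= neg else 0  # majority outcome for this profile

mem = ProfileMemorizer()
mem.train((1, 0, 1), 1)
mem.train((1, 0, 1), 0)
mem.train((1, 0, 1), 1)
print(mem.predict((1, 0, 1)))  # -> 1 (majority outcome for this profile)
print(mem.predict((0, 0, 0)))  # -> None (memorization is silent on new data)
```

That silence on unseen profiles is exactly why surrogate models are needed when training and test sets differ.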

Once being a natural Bayesian becomes impossible, models make their entrance. They can be fast and frugal models (e.g., the Take The Best tree) or simple trees found through laborious implementations of Occam's Razor.
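As one concrete reading of such a fast and frugal model, here is a sketch of lexicographic one-reason decision making in the spirit of Take The Best, in its standard paired-comparison form; the cue names, cue values, and validity order are hypothetical:

```python
# Fast-and-frugal sketch: cues are checked in order of validity, and the first
# cue that discriminates between the two options decides. All cue names and
# values below are invented for illustration.

from typing import Dict, List

def take_the_best(option_a: Dict[str, int],
                  option_b: Dict[str, int],
                  cues_by_validity: List[str]) -> str:
    """Return 'a', 'b', or 'guess' via one-reason decision making."""
    for cue in cues_by_validity:              # search rule: best cue first
        if option_a[cue] != option_b[cue]:    # stopping rule: first discriminating cue
            return 'a' if option_a[cue] > option_b[cue] else 'b'  # decision rule
    return 'guess'                            # no cue discriminates

# Hypothetical city-size comparison with binary cues (1 = cue present).
berlin = {'capital': 1, 'soccer_team': 1, 'university': 1}
bielefeld = {'capital': 0, 'soccer_team': 1, 'university': 1}
print(take_the_best(berlin, bielefeld, ['capital', 'soccer_team', 'university']))
# -> 'a': the 'capital' cue already discriminates, so search stops there.
```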

The normative and the adaptive toolboxes came to exist because of our limited memory, limited retrieval capacity, and the need to make predictions on unknown data. Both toolboxes have a bias toward simplicity. The normative toolbox uses complex computations to become simple, while the adaptive toolbox became simple by exploiting the structures of environments through the millennia.

ACKNOWLEDGMENT

I am very grateful to Robin Hogarth for his helpful comments and advice.

