
112 Gary Klein

selecting the best one will decrease. (A similar issue was raised by von Winterfeldt and Edwards [1986] in their discussion of the flat maximum.) Thus, the comparisons that mean the least (for options that are microscopically different in value) will call forth the greatest effort. This argues that optimization may exist primarily for local tasks and may be unworkable when more global considerations such as effort are taken into consideration. To the extent that this is true, it then diminishes the relevance of optimization for field settings and raises other difficulties, such as the self-referential problem of calculating the cost of calculating. Thus, if I want to optimize, I must also determine the effort it will take me to optimize; however, the subtask of determining this effort will itself take effort, and so forth into the tangle that self-referential activities create.

7. The options must be thoroughly compared to each other. This entails comparing each option to every other option and making the comparisons on a full set of dimensions. Janis and Mann (1977) contrasted optimizing with satisficing and noted that satisficing permits minimal comparisons. The decision maker does not have to compare every option to every other option. Optimizing requires that all the reasonable options be compared on all dimensions, as in carrying out a multiattribute utility analysis.

Janis and Mann (1977) further stated that a satisficing strategy is attractive because it can be conducted using a limited set of requirements. They pointed out that an optimizing strategy involves an unrealistically large number of evaluation criteria and factors that have to be taken into account. Thus, the comparisons must be conducted over a wide range of considerations. This was discussed earlier in terms of the need to look at global issues. Hill-climbing algorithms represent a middle ground between satisficing (e.g., selecting the first option that works) and exhaustive comparisons (all options on each dimension), but they require a well-structured problem space and do not constitute optimizing.
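The contrast between the two endpoints can be sketched in code. The following is a minimal illustration, not anything from the chapter: the option names, dimension scores, aspiration levels, and weights are all invented. Satisficing takes the first option that clears an aspiration level; exhaustive comparison evaluates every option on every dimension.

```python
def satisfice(options, aspiration):
    """Satisficing: return the first option meeting the aspiration level
    on every dimension; later options are never examined."""
    for name, scores in options:
        if all(s >= a for s, a in zip(scores, aspiration)):
            return name
    return None  # no option is "good enough"

def exhaustive_best(options, weights):
    """Exhaustive comparison: score all options on all dimensions
    via a weighted sum and return the maximum."""
    return max(options, key=lambda opt: sum(w * s for w, s in zip(weights, opt[1])))[0]

# Three hypothetical options scored on (cost, speed, safety); numbers made up.
opts = [("A", (0.6, 0.7, 0.9)), ("B", (0.8, 0.5, 0.6)), ("C", (0.9, 0.9, 0.6))]

print(satisfice(opts, aspiration=(0.5, 0.5, 0.5)))   # "A" — first acceptable option
print(exhaustive_best(opts, weights=(0.3, 0.3, 0.4)))  # "C" — best weighted total
```

The two strategies can disagree, as here: satisficing stops at A, while the exhaustive weighted comparison prefers C. The satisficer's effort also scales with how early an acceptable option appears, not with the number of options times the number of dimensions.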

8. The decision maker must use a compensatory strategy. In critiquing the feasibility of an optimization strategy, Janis and Mann (1977) note that the comparison needs to take into account the degree of advantage or disadvantage that each option carries on each evaluation dimension, rather than merely noting whether or not a difference exists. A problem arises when we consider findings such as those of Erev et al. (1993), who found that the decomposition needed to conduct formal analyses seemed to result in lower-quality decisions than an overall preference would yield. If the implementation of optimization procedures interferes with expertise, then the methodology may have a serious disadvantage.
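The compensatory requirement can be made concrete with a small sketch; the data and weights below are invented for illustration. A compensatory rule sums the signed *magnitude* of advantage on each dimension, so a large edge on one dimension can offset small deficits elsewhere; a noncompensatory tally only notes, per dimension, whether a difference exists.

```python
def compensatory(a, b, weights):
    """Weighted-additive (compensatory) comparison: signed magnitudes
    of advantage are summed across dimensions."""
    diff = sum(w * (x - y) for w, x, y in zip(weights, a, b))
    return "A" if diff > 0 else "B"

def tally(a, b):
    """Noncompensatory tally: only note whether a difference exists
    on each dimension, ignoring its size."""
    wins_a = sum(x > y for x, y in zip(a, b))
    wins_b = sum(y > x for x, y in zip(a, b))
    return "A" if wins_a > wins_b else "B"

# Option A barely loses on two dimensions but wins big on the third.
a, b = (0.48, 0.48, 0.95), (0.50, 0.50, 0.40)

print(tally(a, b))                            # "B": B wins on more dimensions
print(compensatory(a, b, weights=(1, 1, 1)))  # "A": the one large edge dominates
```

The example shows why the distinction matters: the two rules reverse the choice on the same data, because only the compensatory rule lets A's large advantage on the third dimension compensate for its two marginal losses.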

9. The probability estimates must be coherent and accurate. It makes little sense to use the appropriate methods if those methods are not carried out well. Beyth-Marom et al. (1991) noted that in carrying out a decision analysis, the probability estimates for different outcomes must be reasonably accurate. This requirement makes sense. Yet how should a decision maker determine whether the requirement is satisfied? Unless the different outcomes are systematically tested, the requirement is more an admonition to try hard to be accurate. Sometimes,
