
Why do humans reason? Arguments for an argumentative theory


thinking of reasons to support their answer is what people do spontaneously anyhow when they regard their answer not as an obvious piece of knowledge but as an opinion that might be challenged. By contrast, participants in the other group were much less overconfident. Having to think of arguments against their answer enabled them to see its limitations – something they would not do on their own (for replications and extensions to the phenomenon of hindsight bias and the fundamental attribution error, see Arkes et al. 1988; Davies 1992; Griffin & Dunning 1990; Hirt & Markman 1995; Hoch 1985; Yates et al. 1992). It is then easy to see that overconfidence would also be reduced by having participants discuss their answers with people who favor different conclusions.

4.1.3. Belief perseverance. Motivated reasoning can also be used to hang on to beliefs even when they have been proved to be ill-founded. This phenomenon, known as belief perseverance, is "one of social psychology's most reliable phenomena" (Guenther & Alicke 2008, p. 706; for an early demonstration, see Ross et al. 1975). The involvement of motivated reasoning in this effect can be demonstrated by providing participants with evidence both for and against a favored belief. If belief perseverance were a simple result of some degree of psychological inertia, then the first evidence presented should be the most influential, whether it supports or disconfirms the favored belief. On the other hand, if evidence can be used selectively, then only evidence supporting the favored belief should be retained, regardless of the order of presentation. Guenther and Alicke (2008) tested this hypothesis in the following way. Participants first had to perform a simple perceptual task. This task, however, was described as testing for "mental acuity," a made-up construct that was supposed to be related to general intelligence, making the results of the test highly relevant to participants' self-esteem. Participants were then given positive or negative feedback, but a few minutes later they were told that the feedback was actually bogus and the real aim of the experiment was explained. At three different points, the participants also had to evaluate their performance: right after the task, after the feedback, and after the debriefing. In line with previous results, the participants who had received positive feedback showed a classic belief-perseverance effect and discounted the debriefing, which allowed them to preserve a positive view of their performance. By contrast, those who had received negative feedback did the opposite: They took the debriefing fully into account, which allowed them to reject the negative feedback and restore a positive view of themselves. This strongly suggests that belief perseverance of the type just described is an instance of motivated reasoning (for applications to the domain of political beliefs, see Prasad et al. 2009).11

4.1.4. Violation of moral norms. The results reviewed so far have shown that motivated reasoning can lead to poor epistemic outcomes. We will now see that our ability to "find or make a reason for everything one has a mind to do" (Franklin 1799) can also allow us to violate our moral intuitions and behave unfairly. In a recent experiment, Valdesolo and DeSteno (2008) have demonstrated the role reasoning can play in maintaining moral hypocrisy (when we judge someone else's action by using tougher moral criteria than we use to judge our own actions). Here is the basic setup. On arriving at the laboratory, participants were told that they would be performing one of two tasks: a short and fun task or a long and hard task. Moreover, they were given the possibility of choosing which task they would be performing, knowing that the other task would be assigned to another participant. They also had the option of letting a computer choose at random how the tasks would be distributed. Once they were done assigning the tasks, participants had to rate how fair they had been. Other participants, instead of having to make the assignment themselves, were at the receiving end of the allocation and had no choice whatsoever; they had to rate the fairness of the participant who had done the allocation, knowing the exact conditions under which this had been done. It is then possible to compare the fairness ratings of participants who have assigned themselves the easy task with the ratings of those who have been assigned the hard task. The difference between these two ratings is a mark of moral hypocrisy. The authors then hypothesized that reasoning, since it allows participants to find excuses for their behavior, was responsible for this hypocrisy. They tested this hypothesis by replicating the above conditions with a twist: The fairness judgments were made under cognitive load, which made reasoning close to impossible. This had the predicted result: Without the opportunity to reason, the ratings were identical and showed no hint of hypocrisy.

This experiment is just one illustration of a more general phenomenon. Reasoning is often used to find justifications for performing actions that are otherwise felt to be unfair or immoral (Bandura 1990; Bandura et al. 1996; Bersoff 1999; Crandall & Eshleman 2003; Dana et al. 2007; Diekmann et al. 1997; Haidt 2001; Mazar et al. 2008; Moore et al. 2008; Snyder et al. 1979; for children, see Gummerum et al. 2008). Such uses of reasoning can have dire consequences. Perpetrators of crimes will be tempted to "blame the victim" or find other excuses to mitigate the effects of violating their moral intuitions (Ryan 1971; for a review, see Hafer & Begue 2005), which can in turn make it easier to commit new crimes (Baumeister 1997). This view of reasoning dovetails with recent theories of moral reasoning that see it mostly as a tool for communication and persuasion (Gibbard 1990; Haidt 2001; Haidt & Bjorklund 2007).

These results raise a problem for the classical view of reasoning. In all these cases, reasoning does not lead to more accurate beliefs about an object, to better estimates of the correctness of one's answer, or to superior moral judgments. Instead, by looking only for supporting arguments, reasoning strengthens people's opinions, distorts their estimates, and allows them to get away with violations of their own moral intuitions. In these cases, epistemic or moral goals are not well served by reasoning. By contrast, argumentative goals are: People are better able to support their positions or to justify their moral judgments.

5. Proactive reasoning in decision making

In the previous section, we argued that much reasoning is done in anticipation of situations where an opinion might have to be defended, and we suggested that work on motivated reasoning can be fruitfully reinterpreted in

68 BEHAVIORAL AND BRAIN SCIENCES (2011) 34:2
