Why do humans reason? Arguments for an argumentative theory


Commentary/Mercier & Sperber: Why do humans reason?

some aspect of the world, followed by formulation of hypotheses that are tested and immediately abandoned if disconfirmed by data. I propose that the following account is more accurate. Research for professional reasoners begins with an emotional attraction to certain ideas, an attraction Tomkins (1965) called “ideo-affective resonance.” This emotional resonance can cause scientists to cling tenaciously to ideas, even in the face of counterevidence. In some ways, science resembles legal proceedings in which the very best case for guilt or innocence is presented by uncompromising prosecuting and defense attorneys, respectively. Scientists who resonate to different views clash in conferences and in journals. Each seeks to convince others that he or she is correct. M&S review research indicating that when members of groups holding disparate views debate, each arguing for a different view, “truth wins” (sect. 2.3, para. 1). Perhaps truth does win often enough in jury trials and scientific debates, but as we all know, sometimes it does not. M&S might be expressing unwarranted optimism here.

I want to close my commentary with some observations about moral reasoning. 
Research by Haidt (2001), mentioned by M&S, and by Joshua Greene (2003) strongly supports a dual-process model wherein people instantaneously decide if an act is “good” and therefore something we “ought” to do by taking note of the immediate, reflexive feelings that emerge when thinking about the act. In the second stage of the dual process, they may attempt to defend their feelings in terms of rational argument. Professional philosophers are much better at the reasoning part of the process, but are still guided initially by emotional reflexes. The immediacy and inevitability of certain emotions (e.g., revulsion on contemplating the torture of a child) can lead philosophers and nonphilosophers alike into making pronouncements such as “That we ought to refrain from torturing children is a moral truth.”

But only propositions about what is the case can be true or false. Moral pronouncements express reflexive feelings about how we ought to behave and are therefore not truth-apt. “Moral truth” is a category mistake. I have a yet-untested two-part hypothesis about why so many people (including moral philosophers) make this apparent category mistake (Johnson 2007). First, human beings are prone to mistakenly assuming that when they feel a strong and immediate emotion, this is a reliable sign of a self-evident truth. 
Second, although moral systems evolved because they conferred benefits on all participants (compare M&S’s observation that persuasive communication must be sufficiently beneficial to both parties, else the capacity for being persuaded would be selected against and go out of existence), the propensity of a person to be responsive to moral “oughts” can be exploited by someone who benefits at that person’s expense. Compare, for example, the persuasiveness of “Give me ten percent of your money because I want it” with “That we have a duty to tithe to the church is a venerable moral truth.” Scrutiny of any rhetorical effort is wise, particularly those in the moral domain.

True to the power of one? Cognition, argument, and reasoning

doi:10.1017/S0140525X10002992

Drew Michael Khlentzos and Bruce Stevenson
Language and Cognition Research Centre, Psychology, School of Behavioural, Cognitive and Social Sciences, University of New England, Armidale 2351, Australia.
dkhlentz@une.edu.au bstevens@une.edu.au
http://www.une.edu.au/staff/dkhlentz.php
http://www.une.edu.au/staff/bstevens.php

Abstract: While impressed by much of what Mercier & Sperber (M&S) offer through their argumentative hypothesis, we question whether the specific competencies entailed in each system are adequate – in particular, whether system 2 might not require independent reasoning capabilities. 
We explore the adequacy of the explanations offered for confirmation bias and the Wason selection task.

For Mercier and Sperber (M&S), what appears as poor reasoning is actually appropriate argument – social dialogue facilitates reasoning by prompting agents to formulate arguments and defend them from objections. M&S propose a dual-process model with system 1 (S1) a consortium of inference mechanisms and system 2 (S2) an S1 apologist. We identify some features we think require clarification and provide alternative interpretations of phenomena used by M&S to support their model.

If S1 generates conclusions without revealing their derivation (modular-like), then where does S2 acquire the competence to support these arguments? What type of reasoning is required for it to construct these arguments, or does it run data back through S1 for a reasoned result? Related to this is the issue of the argumentative contexts which trigger S2. These appear to be richer in information, creating a potential confound for the argumentative hypothesis: Is it the argumentative feature or the increased information that is critical?

The social psychology findings M&S adduce to support their view present a puzzle for it: How can truth win out amongst sophistical S2s committed not to discovering the facts but to defending S1’s representation of them? Convergence on truth suggests there is more to S2 than defence of S1. 
One alternative views S2 as a dynamic, defeasible reasoner that sifts through S1 outputs, independently generating conclusions to be updated in the light of new information.

Presumably S1 must support probabilistic as well as deductive inferences, in which case some regulatory role for S2 is inescapable. Suppose S1 has both deductive and probabilistic mechanisms and these produce compatible results, with input X both deductively entailing and probabilistically supporting Y. Imagine new evidence E emerging that undermines Y, so that X + E makes Y improbable. Nonetheless, E cannot affect the derivation of Y from X, so X + E still entails Y. S2 thus has to decide whether to defend Y, since it is derivable from X + E, or to surrender Y, since X + E makes Y improbable. How would it make this decision?

Consider now M&S’s views on confirmation bias. M&S deny that confirmation bias is a flaw in reasoning. Yet if the aim of each agent’s S2 is to persuade others, confirmation bias would just polarize views, with no agent prepared to listen to another’s arguments. Alternatively, if each S2 defends an agent’s beliefs against objections, amassing evidence for those beliefs is important, but anticipating likely objections and preparing a defence is no less so. Relative to the aims of persuasion or defence, then, confirmation bias registers as a fault in reasoning.

Compare an M&S-styled S2-reasoner Aaron with a defeasible S2-reasoner Belle. Aaron is convinced the river mussels are good to eat since he has eaten them for the past five days. Belle felt ill after eating them the day before. She advises Aaron to refrain. 
Aaron’s S2 considers positive evidence and discounts negative evidence. So Aaron eats the mussels and falls ill. In contrast, Belle’s S2 constructs fast generalizations on the fly. Having eaten the mussels for four days, Belle inferred (G): the mussels are good to eat. But now her S2 enables Belle to adopt a position appropriate to the evolving evidence. The crucial difference between Aaron and Belle is this: Were they to swap roles, Belle would feel no internal pressure from her S2 to eat the mussels (unlike Aaron from his). Evidence that someone else fell ill can prompt a defeasible reasoner to update (G), as disconfirming and confirming evidence are weighted equally. Whilst M&S’s model allows S1 to update information, reasoning to a new conclusion (belief revision) appears anomalous.

Does the argumentative hypothesis yield the best explanation of reasoning performance? Take the Wason selection task. M&S claim that when agents are asked to assess the truth of (W) “If there’s a vowel on one side of a card, there’s an even number on its other side” for an E, K, 4, 7 array, their S1 matches cards to

82 BEHAVIORAL AND BRAIN SCIENCES (2011) 34:2
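The normatively correct response to the selection task is to turn over exactly those cards whose hidden face could falsify (W). This can be sketched as a quick falsification check; the `could_falsify` helper and the single-character card encoding are illustrative assumptions of the sketch, not anything from the commentary:

```python
# Wason selection task, rule (W): "If there is a vowel on one side,
# there is an even number on the other side."
# Each card is encoded by its visible face only; the hidden face is unknown.

def could_falsify(visible: str) -> bool:
    """Return True if turning this card over could reveal a
    counterexample to (W), i.e. a vowel paired with an odd number."""
    if visible.isalpha():
        # A letter card threatens (W) only if it shows a vowel:
        # its hidden number might then be odd.
        return visible.lower() in "aeiou"
    # A number card threatens (W) only if it shows an odd number:
    # its hidden letter might then be a vowel.
    return int(visible) % 2 == 1

cards = ["E", "K", "4", "7"]
to_turn = [c for c in cards if could_falsify(c)]
print(to_turn)  # ['E', '7'] -- the logically correct selection
```

Most participants instead choose E and 4, the cards named in the rule; the check above makes explicit why K and 4 are uninformative, since nothing on their hidden faces can contradict (W).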
