
Why do humans reason? Arguments for an argumentative theory


Commentary/Mercier & Sperber: Why do humans reason?

The difficulty which such advocates of intuition have faced is to explain why humans evolved a capacity for reasoning which is best not trusted. M&S attempt to fill that gap in the target article, thus supporting what I believe to be a dangerously flawed line of reasoning about human intelligence. It is not necessary to follow them down this path in order to respect the integrity of their arguments about the evolution of reasoning via argumentation. Unique human abilities for reflective thought have required the evolution of a number of facilities, including language, metarepresentation, and large forebrains, none of which could plausibly have been driven by some Darwinian need for a new mind. If there were such a driver, surely other animals would have evolved human-like intelligence. It is more plausible to argue that the new mind was an evolutionary accident, in which case an exapted ability for reasoning derived from argumentation may well be part of that story.

Artificial cognitive systems: Where does argumentation fit in?

doi:10.1017/S0140525X10002839

John Fox
Department of Engineering Science, University of Oxford, Oxford OX1, United Kingdom.
John.fox@eng.ox.ac.uk    www.cossac.org

Abstract: Mercier and Sperber (M&S) suggest that human reasoning is reflective and has evolved to support social interaction. Cognitive agents benefit from being able to reflect on their beliefs whether they are acting alone or socially. A formal framework for argumentation that has emerged from research on artificial cognitive systems, and that parallels M&S's proposals, may shed light on the mental processes that underpin social interactions.

Mercier and Sperber (M&S) offer a provocative view of argumentation as reasoning for social purposes. Human reasoning, they suggest, is not the same as classical inference, in the sense that in reasoning the rationale for conclusions is available for reflection and hence for communication and discussion. This is an important distinction, but there are also grounds for believing that reflective reasoning has general benefits for any cognitive agent, not just for social interaction.

A domain in which these benefits are evident is reasoning and decision making in medicine.
I have a long-standing interest in the cognitive mechanisms that support decision making and other high-level cognitive processes that underpin human expertise, and argumentation has acquired a central role in our work. Early approaches based on logical and probabilistic simulations of cognitive processes yielded promising results (Fox 1980), but extending either model to capture the flexible and adaptive character of human thinking proved difficult. Among the reasons for this was that there was no representation of the rationale on which to reflect – to question prior conclusions or the relevance of evidence, for example.

Subsequent work has sought to address this. This research programme has focused on artificial intelligence (AI) rather than psychology, so my comments should be taken as complementary to the M&S hypothesis rather than directly addressing it. However, I will suggest that a cognitive agent, whether human or artificial, derives major benefits from being able to reflect on its mental states: its goals, intentions, justifications for its beliefs, and so on (Das et al. 1997; Fox & Das 2000; Fox et al. 1990). Metacognitive capabilities confer flexibility and robustness whether an agent is acting alone or in concert with others.

Mercier and Sperber's (M&S's) distinction between inference, which they call "intuitive," and reasoning, which affords "reflection," may perhaps be clarified by a formal perspective. A standard way of formalizing inference systems is to provide a "signature" that specifies how one set of sentences (e.g., propositions) is entailed by another set of sentences (e.g., a database of propositions and rules). This is a typical inference signature:

    Database ⊢_L Conclusion        (Inference)

That is to say: Conclusion can be validly inferred from Database under the axioms of inference system L.

Complex cognitive tasks like decision making and planning require a more complex signature. To emulate human clinical decision making, we sought a reasoning model in which general medical knowledge is applied to specific patient data by arguing the pros and cons of alternative ways of achieving clinical goals. This is summarized by the following signature:

    Knowledge ∪ Data ⊢_LA (Claim, Grounds, Qualifier)        (Argumentation)

In contrast to the atomic conclusion of the inference signature, this formulation makes the structure of arguments explicit. In LA, a Logic of Argument (Fox et al. 1993), the structure distinguishes three things: the Claim (a tentative conclusion), the Grounds (justification), and the Qualifier (the confidence in the Claim warranted by the argument). As in classical decision theory, but not classical logic, collections of arguments can be aggregated within the LA framework to yield an overall measure of confidence in competing claims. For example, an agent may have multiple lines of argument for and against competing diagnoses or treatments, each of which increases or decreases overall confidence.
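As a concrete illustration of the Claim/Grounds/Qualifier structure and of aggregating arguments into an overall measure of confidence, here is a minimal sketch in Python. It is not the published LA formalism: the record fields, the two qualifier values, and the simple count-based aggregation rule are illustrative assumptions only.

```python
from dataclasses import dataclass

# Illustrative LA-style argument record: a Claim, its Grounds, and a
# Qualifier saying whether the argument supports or opposes the Claim.
@dataclass
class Argument:
    claim: str       # tentative conclusion, e.g. a candidate diagnosis
    grounds: str     # justification, e.g. the evidence cited
    qualifier: str   # "supports" or "opposes"

def aggregate(arguments, claim):
    """Crude confidence measure: supporting arguments minus opposing ones.
    (One simple aggregation rule; LA admits other, including quantitative,
    schemes.)"""
    support = sum(1 for a in arguments if a.claim == claim and a.qualifier == "supports")
    oppose = sum(1 for a in arguments if a.claim == claim and a.qualifier == "opposes")
    return support - oppose

# Example: two competing diagnoses argued for and against.
args = [
    Argument("diagnosis: angina", "chest pain on exertion", "supports"),
    Argument("diagnosis: angina", "normal ECG at rest", "opposes"),
    Argument("diagnosis: reflux", "pain after meals", "supports"),
]

for claim in ("diagnosis: angina", "diagnosis: reflux"):
    print(claim, aggregate(args, claim))   # angina: 0, reflux: 1
```

Under this toy rule, the claim with the most net support wins; a real system would weight arguments and check the relevance of their grounds rather than merely counting them.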
LA was developed for cognitive tasks like situation assessment, decision making, and planning, which often involve uncertainty. Uncertainty is modelled explicitly by means of the Qualifier and therefore permits reflection. A qualifier may indicate that an argument "supports" or "opposes" a claim, for example. In The Uses of Argument, the philosopher Stephen Toulmin also pointed out that people routinely use linguistic qualifiers such as "presumably...," "possibly...," "probably...," and their lexical and affixal negative forms; linguistic qualifiers can be formalised as conditions for accepting claims based on collections of arguments (Elvang-Goransson et al. 1993). Quantitative schemes for expressing argument strength, such as Bayesian representations (e.g., the Oaksford & Chater [2009] discussion in BBS vol. 32), can also be accommodated within the framework (Fox 2003; Fox et al. 1993).

It is a truism that the more supporting (opposing) arguments there are for a claim, the more (less) confidence we should have in it; we have called this the evidential mode (Fox, in press). Another mode, dialectical argumentation, exploits the observation that discussion and debate also commonly involve "attacks" which rebut or undercut the arguments of other agents. Researchers in AI and computational logic are giving substantial attention to argumentation for modelling interactions and dialogues between cognitive agents (Besnard & Hunter 2008). Argumentation theory may therefore offer insights into the kinds of social interactions that M&S are investigating.

Formal argumentation theory has practical applications. LA is the foundation of PROforma, a language for modelling cognitive agents (Fox & Das 2000; Fox et al. 2003), which has been used to develop many practical decision tools, notably in medicine (OpenClinical 2001–6). Argumentation theory may also help to clarify the philosophical and theoretical nature of somewhat vague notions like evidence, as this term is commonly used in legal, medical, scientific, and other kinds of reasoning and in everyday decision-making and evidence-based discussions (OpenClinical 2001–6).

These practical uses of argumentation theory do not directly address M&S's proposition that human cognition has evolved to support argument-based reasoning, but the practical power
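To make the dialectical mode described above more concrete, here is a minimal sketch, again in Python, of an abstract argumentation framework in the spirit of the work surveyed by Besnard & Hunter (2008): arguments attack one another, and an argument is accepted when every one of its attackers is itself defeated. The three arguments, the attack relation, and the fixed-point computation are illustrative assumptions, not code from any of the cited systems.

```python
# Toy abstract argumentation framework: compute which arguments survive
# once attacks (rebuttals and undercuts) are taken into account, by
# iterating the "defended by the accepted set" test to a fixed point.

arguments = {"A", "B", "C"}
# B attacks A (a rebuttal); C attacks B (an undercut of that rebuttal).
attacks = {("B", "A"), ("C", "B")}

def attackers(x):
    return {a for (a, b) in attacks if b == x}

def defended(x, accepted):
    # x is defended if every attacker of x is itself attacked by an accepted argument
    return all(attackers(y) & accepted for y in attackers(x))

accepted = set()
while True:
    new = {x for x in arguments if defended(x, accepted)}
    if new == accepted:
        break
    accepted = new

print(sorted(accepted))  # ['A', 'C']: C defeats B, so A is reinstated
```

In this toy exchange, C undercuts the rebuttal B, so the original claim A is reinstated: the kind of back-and-forth between interlocutors that M&S describe in human discussion and debate.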
