
the ISFC39 Proceedings - International Systemic-Functional ...


Papers from the 39th ISFC

if psychological theories of reading tell us that we do not hear the text when we read, we must still look for a realisation in written English that is independent of an imagined or real pattern of intonation. More traditional studies of the psychology of reading have oscillated between theories that do not require the text to be heard, by proposing a direct route from the written word to meaning (e.g. Dehaene et al., 2005), and those that require some form of detour through a phonological level of processing (Lukatela et al., 2001), resulting in suggestions that both ‘routes’ operate simultaneously (Perfetti and Bolger, 2004).

However, recent advances in psychological theory cast serious doubt on these perspectives because they view cognitive processes as modular – a view now considered “indefensible” (Edelman, 2004). An embodied, grounded approach to cognition demands that we view all the senses employed in learning as being redeployed in recall (Barsalou, 2008). For instance, hearing speech probably deploys the same cognitive processes used for producing speech, up to the motivation of the physical system (D’Ausilio et al., 2009). Thus, reading is not the ‘receiving’ of a writer’s words (as presumed by earlier psychological paradigms), but the interactive making of meaning.
In cognitive space, there may be no real difference between reading and listening, as they both entail the motivation of speaking and writing processes.

Significantly, however, even if we ‘hear’ what we read, we do not need to draw breath when reading silently, and so the written information unit is constrained not by the limitations of the physical respiratory system as in speech, but by the cognitive visual system. This again supports the view that New information should be positioned where it is easily identified by the eye, but also implies that written information units are likely to be distinct from spoken units. Evidence for such differences can be found in academic or legal texts that are extremely difficult to read aloud but present little challenge to the silent reader, highlighting again that written language performs different functions to spoken language (Halliday, 1989).

2.3 Intonation in Neuroanatomy

The independence of prosodic features and syntax is recognised in most formal grammatical theories (Jackendoff, 2002). However, Halliday and Matthiessen (2004) specify the separation of the (con)textual extent of the message from its interpersonal intent and ideational content through periodic, field and constitutional grammatical patterns, respectively.
Evidence is emerging to suggest that the division between these semiotic metafunctions can be traced to brain structure and function, providing further evidence for an independent realisation for Information Structure in both speech and writing.

Whatever localisations or divisions in the brain may be proposed for linguistic functions, these are no more than a bias and are “neither universal nor irreversible” (Deacon, 1997, p.310). What has been identified, however, is a role for the two hemispheres in separating prosodic from constituent data in the stream of linguistic input:

    language production and analysis effectively require that we implement two different modes of phonetic analysis and vocal control simultaneously: prosodic and phonemic processes... the monitoring of prosodic information tends to operate against a foreground attention to specific words and phrases. (Deacon, 1997, p.314)

That is, we identify prosodic and phonemic information simultaneously (Friederici and Alter, 2004). Although this insight has not yet been researched in the context of continuous reading, I would propose that in a saccade the same process is taking place: the hemispheres of the brain function to simultaneously recognise textual features of the visual signal, such as spaces and punctuation marks, while processing the signal for ideational meaning.
