Abstracts of the Psychonomic Society — Volume 14 — November ...

Sunday Morning Papers 272–278

9:00–9:15 (272)
Effects of Stress, Working Memory Capacity, and Inferential Complexity on Foreign Language Reading Comprehension. LESTER C. LOSCHKY, MANPREET K. RAI, RICHARD J. HARRIS, PATRICIA C. BARROS, & LINDSAY G. COOK, Kansas State University—We investigated effects of stress, working memory (WM) capacity, and inferential complexity on foreign language (FL) readers' inference comprehension, in terms of accuracy (processing effectiveness) and reading speed (processing efficiency). Fifty-five intermediate-level Spanish learners' reading comprehension was measured using questions with three levels of inferential complexity: noninference, bridging-inference, and pragmatic-inference. We measured participants' WM capacity and varied their stress level using a video camera. The results showed that higher WM learners were more accurate overall. Stress decreased efficiency, with a trend toward greater effects on RTs for questions requiring greater inferential complexity. Consistent with Eysenck et al.'s (2007) attentional control theory, analyses showed that higher WM learners strategically traded efficiency for greater effectiveness, whereas lower WM learners only did so (less successfully) under stress. Thus, the results showed that stress impedes FL reading comprehension through interactions between WM capacity and inferential complexity but can be strategically compensated for by increasing processing time.

Divided Attention
Republic Ballroom, Sunday Morning, 8:00–9:55
Chaired by Andrew Heathcote, University of Newcastle, Australia

8:00–8:15 (273)
Testing the Architecture of Cognition. ANDREW HEATHCOTE, AMI EIDELS, & SCOTT D. BROWN, University of Newcastle, Australia—Psychologists have long sought ways to identify the architecture of cognition, from Donders's (1859) subtractive methodology, through Sternberg's (1969) additive factors methodology for establishing selective influence on a processing stage, and culminating in Townsend and Nozawa's (1995) system factorial technology (SFT). SFT analyzes response time distributions, rather than summaries such as mean RT, allowing it to avoid the limitations of earlier approaches and make strong inferences (Platt, 1964). SFT uses signature functions that identify architectural variations critical to the study of attention and perception, such as whether a system is serial or parallel and whether it is affected by capacity limitations. We propose a nonparametric Bayesian approach, based on Klugkist, Laudy, and Hoijtink (2005), to statistical inference for SFT. We benchmark its statistical performance for testing selective influence, where there exist alternative approaches from econometrics, and illustrate its broader application to testing SFT hypotheses where no alternative tests exist.
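[Editorial illustration, not part of the abstract.] The best-known SFT signature function is the survivor interaction contrast from Townsend and Nozawa's (1995) double-factorial paradigm. A minimal sketch, assuming raw RT samples for the four factorial cells (all function and variable names are ours):

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t), evaluated at each t."""
    rts = np.asarray(rts, dtype=float)
    t = np.asarray(t, dtype=float)
    return (rts[:, None] > t[None, :]).mean(axis=0)

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t):
    """Survivor interaction contrast:
        SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t),
    where H/L index high/low salience on each of the two factors.
    Different architectures predict distinct SIC shapes over time,
    which is how SFT diagnoses serial vs. parallel processing."""
    return (survivor(rt_ll, t) - survivor(rt_lh, t)
            - survivor(rt_hl, t) + survivor(rt_hh, t))
```

The Bayesian machinery the abstract proposes would sit on top of such empirical contrasts; the sketch only computes the contrast itself.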

8:20–8:35 (274)
Dual Task With and Without Cost: An Underlying Modular Architecture. ASHER COHEN, MORAN ISRAEL, MAYA ZUCKERMAN, & ARIT GLICKSOHN, Hebrew University of Jerusalem—Dual task costs are often documented, but their underlying causes are disputed. We conducted experiments in which participants performed one or two visual tasks over eight sessions. In the first set of experiments, the input to both tasks was presented simultaneously. The two tasks either shared a module or did not share a module. Clear dual task costs were observed when the tasks shared a module, and no costs were observed when the tasks did not share a module. The very same tasks were performed in a second set of experiments, except that the stimulus onset asynchrony (SOA) of the input to the two tasks varied between trials. Here, a clear cost was observed even when the tasks did not share a module. The results suggest that a modular architecture is an important factor in dual task performance and that the psychological refractory period (PRP) paradigm is not adequate for examining structural dual task costs.

8:40–8:55 (275)
Supertaskers: Extraordinary Ability in Multitasking. DAVID L. STRAYER & JASON M. WATSON, University of Utah—Theory suggests that driving should be impaired for all motorists when they concurrently talk on a cell phone. But is everybody impaired by this dual-task combination? We tested 200 participants in a high-fidelity driving simulator in both single- and dual-task conditions. The dual-task condition involved concurrently performing a very demanding auditory version of the operation span (OSPAN) task. Whereas the vast majority of participants showed significant performance decrements in dual-task conditions (compared to single-task conditions for both the driving and OSPAN tasks), 2% of the sample showed absolutely no performance decrements across the single- and dual-task conditions. In the single-task condition, these supertaskers scored in the top quartile on all dependent measures associated with the driving and OSPAN tasks, and Monte Carlo simulations indicated that the frequency of supertaskers was significantly greater than chance. We suggest that, in demanding dual-task situations, supertaskers recruit from broader neural regions than does the population at large.
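[Editorial illustration, not part of the abstract.] The "greater than chance" argument can be sketched as a toy Monte Carlo binomial check. All numbers below are illustrative placeholders, not values from the study, and the function name is ours:

```python
import random

def p_at_least_k(k, n, p0, trials=20_000, seed=1):
    """Monte Carlo estimate of P(X >= k) for X ~ Binomial(n, p0):
    the probability that at least k of n participants would show a
    no-decrement profile purely by sampling variability, if each
    participant independently did so with probability p0."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.random() < p0 for _ in range(n)) >= k
        for _ in range(trials)
    )
    return hits / trials

# Illustrative only: if chance alone produced a no-decrement profile in
# 1 of 1,000 participants, observing 4 or more in a sample of 200 would
# be very unlikely, which is the shape of the abstract's argument.
p = p_at_least_k(4, 200, 0.001)
```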

9:00–9:15 (276)
Triple-Task Performance Reveals Common Codes for Spatial Information. PAUL ATCHLEY & DAVID MARSHALL, University of Kansas—Multiple resource theory (Wickens, 1980, 2002) suggests attention may use separate resource pools and share common codes (such as spatial codes). This view is supported by work showing that spatial coding for verbal and visual attention information may overlap physiologically in right hemisphere regions. We examined cross-modal dual-task performance using a dichotic listening task to present verbal information and a pointing task to collect visual attention data. Participants listened to word streams of directional words and color words in each ear and responded to one ear on a block of trials by producing compatible direction (e.g., "up: north") or color (e.g., "ruby: red") responses while performing a visual task in which they identified a centrally presented target and then localized a target presented peripherally along the same axes as the directional words. The data showed clear code interference effects; dichotic evidence for a right lateralized effect for direction was mixed.

9:20–9:35 (277)
Evidence Against a Unitary Central Bottleneck: Reductions of Dual-Task Costs Depend on Modality Pairings. ELIOT HAZELTINE, University of Iowa, ERIC RUTHRUFF, University of New Mexico, & TIM WIFALL, University of Iowa—Practice can dramatically reduce dual-task costs, in some cases completely eliminating them. However, dual-task costs are reduced at different rates, depending on the particular combination of tasks. We examined the role that task similarity, in terms of the input and output modalities, plays in the reduction of dual-task costs with practice. Four groups of participants performed a visual-manual task that was paired with either another visual-manual task, a visual-vocal task, an auditory-manual task, or an auditory-vocal task. Although task difficulty was similar across all conditions, only the auditory-vocal group was able to eliminate dual-task costs after 16 sessions of practice. These findings suggest that dual-task costs after practice depend on crosstalk between modality-specific representations rather than on competition for amodal central operations.

9:40–9:55 (278)
When Two Objects Are Easier Than One: Implications for Object-Based Attention. W. TRAMMELL NEILL, YONGNA LI, & GEORGE A. SEROR, University at Albany—In many studies, subjects process two features of one object more easily than they do two features of two objects. Such within-object superiority suggests that attention is object based rather than merely space based. However, other studies find the opposite result: between-object superiority (e.g., Davis & Holmes, 2005; Neill, O'Connor, & Li, 2008). How can it be easier to divide attention between features of different objects than between features of the same object? We consider three explanations: (1) Processing capacity may be initially divided between objects, so that features of the same object compete for allocated capacity, but features of different objects do not. (2) Attention to two features of the same object may cause irrelevant object information to also be processed. (3) When target features are integral to object shapes, whole objects may be easier to compare than the component features. We report multiple experiments supporting the third explanation.
