In-Vehicle Tasks: Effects of Modality, Driving Relevance, and ...
ABSTRACT

The current study was designed to assess the factors that could moderate the differences between auditory and visual delivery of in-vehicle information, primarily as such information affected measures of driving performance, and secondarily as such factors would influence the processing of the in-vehicle information itself. In particular, we were interested in three moderating factors: task relevance, task complexity, and the nature of the driving performance variable (lane keeping versus hazard response). Finally, we were interested in how the use of a redundant combination of auditory and visual delivery could also moderate these effects.

The results revealed that drivers generally protect lane keeping from interference but do not protect their hazard response, replicating the results of the Horrey and Wickens (2002) study. In particular, drivers did not protect their hazard response from interference when the messages were visual and relevant; secondary task information was given priority (they responded faster to these messages). Drivers exhibited better steering control when the messages were auditory and relevant, supporting the idea that relevant auditory messages leave the driver more attuned to the roadway, so that lane corrections are less abrupt. Increased message complexity increased the auditory benefit to driving performance (steering velocity was greatest for visual, complex messages) but led to an auditory cost for in-vehicle task (message) performance. An interaction between relevance and modality suggests that the safety benefits of auditory delivery of in-vehicle information are greater when the message is relevant than when it is irrelevant. Redundant presentation of messages across both modalities sometimes helped, relative to the worse of the two single-modality interfaces, but it rarely appeared to offer the “best of both worlds,” supporting performance that was sometimes as good as, but never better than, either of the single-modality conditions. These results have implications for training drivers to use redundant systems.


Driving Information Displays

Driving is a multitask environment that requires concurrent processing of information. The primary task of the driver is to simultaneously guide the position of the vehicle, detect and classify potential hazards, and navigate the route. Secondary tasks compete for resources and deteriorate primary driving performance (Lansdown, Brook-Carter, & Kersloot, 2002). Examples of secondary tasks include in-cab viewing, roadside scanning, and auditory, cognitive, and motor tasks (Wickens, Gordon, & Liu, 1998). The presence of secondary tasks intrudes upon driving and diverts so much attention away from the driving task as to pose a safety hazard. Hughes and Cole (1986) quantify the amount of visual attention devoted to secondary tasks as 30-50% of the driver's visual attention, and Antin et al. (1990) propose that drivers spend a third of their time looking inside the vehicle. Appropriate task prioritization and display-improving mechanisms do not necessarily mitigate the attentional decrements in driving performance that secondary task displays create (Noy, 1990). This realization highlights the need to offset the attentional and visual monitoring costs of secondary task displays. A necessary step towards implementing information displays is thus to explore the display manipulations that will afford significant reductions in the costs of secondary displays to attentional resources. A resource model proposed by Wickens (1980, 1991, 2002) examines the effectiveness of time-sharing multiple tasks along a number of dimensions; these dimensions have implications for display design.

Multiple Resources

The multiple resource model predicts the relative benefits of different display types to multiple task performance (Sarno & Wickens, 1995; Wickens, 2002). The model proposes that two tasks are better time-shared if they draw on separate resources than if they share common resources. Tasks are categorized according to their demands along four resource dimensions: processing stage, perceptual modality, visual channel, and processing codes.

The stage of processing dimension refers to the separate resources used for perceptual-cognitive activity (perceptual processing) compared to response selection and action. Physiological research supports the separation of perceptual and response selection processes in the stage distinction of the multiple resource model. Isreal et al. (1980) examined event-related brain potential (ERP) measures in assessing task activity and found that perceptual and response processes had systematic differences in task workload.
The stage dichotomy predicts that there will be substantial interference between two perceptual-cognitive processes, such as those involved in working memory functions (Liu & Wickens, 1992), and between two response processes, such as dialing a cell phone and steering a vehicle (Serafin, Wen, Paelke, & Green, 1993), but less interference between two tasks drawing from separate resources across the stage-defined dichotomy. Drivers are able to concurrently scan the roadway (perceptual processing) while steering (response execution); however, the driving task is degraded if another manual task is performed.

The codes of processing dimension refers to the distinction between spatial and verbal processing resources; these processing codes employ separate resources associated with the two cerebral hemispheres (Polson & Friedman, 1988), and define different resources within each of the individual stages of processing.
Considering response execution, manual responses (spatial) are efficiently time-shared with vocal outputs (verbal). Processing of spatial and verbal information employs different cognitive resources (Wickens & Hollands, 2000). The processing code distinction is derived from Baddeley's (1995) model of working memory, in which the central executive component (an attentional control system) coordinates information from the phonological loop (verbal representation of information) and the visuospatial sketchpad (spatial representation of information). The model of working memory identifies spatial and verbal information as separate components that activate separate brain structures and that independently rely upon attentional mechanisms. Each of the two systems or codes, given their independent processing and separate resources, suffers more when a concurrent task requires the same type of information to be processed in working memory than when information in the alternative code is being simultaneously processed. Because the two codes use separate processing resources (Wickens, 1992), controlled experimental evaluations must design secondary tasks that manipulate modality while employing the same code of processing, so that the manipulation of modality is a pure comparison.

The visual channel distinction differentiates the form of visual processing between focal and ambient vision. The two types of visual processing employ separate brain structures. Focal vision is used to recognize discrete objects and patterns and primarily involves use of the fovea. Ambient vision is used in continuous motion processing and guidance and involves use of peripheral vision and, to some degree, foveal vision (Wickens & Hollands, 2000). Weinstein and Wickens (1992) manipulated display location for multiple flight displays. The results indicated that pilots performed more quickly and accurately using displays that employed separate visual resources rather than the same visual resource, particularly two displays requiring ambient vision. Within the driving domain, drivers are able to guide the vehicle (ambient) while reading roadway signs (focal); however, concurrently navigating with an in-vehicle display, or processing other detailed information on such a display, while detecting hazards results in a competition for visual resources, because both tasks employ focal vision.

The processing modalities dichotomy, which will be the focus of the present research, refers to the visual and auditory input channels. Dual-task performance is improved when input is distributed to the separate modality channels. Cross-modal time-sharing (between visual and auditory input) is more effective than intra-modal time-sharing (two auditory inputs or two visual inputs; Wickens, 2002).
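The structure of the model can be made concrete with a toy computation in the spirit of Wickens' (2002) computational multiple resource model: each task is described by the resources it demands on the four dimensions, and predicted interference grows with the number of dimensions on which two tasks overlap. The dimension names, task profiles, and scoring rule below are illustrative assumptions for exposition, not parameters or predictions taken from any of the cited studies.

```python
# Toy illustration (not the published model): score dual-task conflict by
# counting overlap on the four resource dimensions of multiple resource theory.

DIMENSIONS = ("stage", "modality", "visual_channel", "code")

# Illustrative task profiles: the resources each task demands per dimension.
driving = {
    "stage": {"perception", "response"},   # scan the road and steer
    "modality": {"visual"},
    "visual_channel": {"ambient", "focal"},
    "code": {"spatial"},
}

auditory_ivt = {
    "stage": {"perception"},
    "modality": {"auditory"},
    "visual_channel": set(),               # no visual demand
    "code": {"verbal"},
}

visual_ivt = {
    "stage": {"perception"},
    "modality": {"visual"},
    "visual_channel": {"focal"},
    "code": {"verbal"},
}

def conflict(task_a, task_b):
    """Count dimensions on which the two tasks demand a common resource."""
    return sum(bool(task_a[d] & task_b[d]) for d in DIMENSIONS)

if __name__ == "__main__":
    print("driving + auditory IVT:", conflict(driving, auditory_ivt))  # lower conflict
    print("driving + visual IVT:  ", conflict(driving, visual_ivt))    # higher conflict
```

Under these assumed profiles the visual in-vehicle task overlaps with driving on more dimensions than the auditory one, which is the qualitative pattern the cross-modal time-sharing prediction describes.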
Wright, Holloway, and Aldrich (1974) found that participants were better able to concurrently perform a tracking task and process verbal information when the information secondary to the tracking task was presented auditorially rather than visually. While multiple resource theory outlines the structural dichotomies of resources that help to account for differences in time-sharing efficiency, the allocation of the available resources is modulated by the characteristics of the task(s) and the allocation strategies of individuals.

Resource Allocation

The success of time-sharing in the driving task depends on the total amount of attentional resources available to the driver and the strategy for allocating those resources. Strategies of resource allocation are affected by task properties and environmental goals. An increase in the difficulty of a particular task will result in a decrement to the concurrent task, unless the operator maintains an unchanging allocation of resources (so that the increasing-difficulty task suffers), or unless the concurrent task is data limited, through automaticity (Wickens & Hollands, 2000).
In a driving context, a study by Liu (2001) revealed that an increase in the number of information units presented to a driver during a secondary task results in a corresponding increase in the decrement to both primary and secondary task performance. Further, the level of importance assigned to a particular task dictates the allocation strategy of attentional resources. An increase in the bandwidth of a particular task can contribute to a shift in task priority; for example, an increase in road curvature would potentially cause a shift of attention from an in-vehicle task to the roadway.

The salience and expectancy of the information, the amount of effort required to obtain the information, and the value of the task influence the allocation strategy of individuals (Wickens, Helleberg, Goh, Xu, & Horrey, 2001). Information that is more salient (conspicuous) carries a greater likelihood of being attended to and processed. Visual displays that are placed at wider eccentricities (and are consequently less salient) require longer switching times and degrade performance of tasks that are positioned on those displays. Wickens, Dixon, and Seppelt (2002) examined a paradigm in which a primary compensatory tracking task was paired with a comprehension side task; the results revealed that more eccentric displacements of the side task information led to delayed initiation and completion of the side task response, and to increased tracking error during the side task response. Increased display separation increases the effort required to sample information from the different displays. Information that requires greater effort to obtain is less likely to receive allocation of attentional resources (Wickens et al., 2002). The expectancy of information, defined by the frequency or bandwidth of the information presentation, also influences the allocation of attention. For instance, drivers may attend more closely to their following distance behind a lead vehicle if the braking response of the lead vehicle is frequent and erratic (conveying more information in terms of information theory). Finally, the value assigned to particular tasks modulates the level of attention given to a display source that supports the particular task; this subjective measure changes as a function of the perceived cost of failing to attend to a particular task and to process the relevant information. For example, in nearly all driving tasks, the value of sampling the outside world is greater than the value of sampling inside, because the consequences of failing to notice deviations or hazards in the outside world are quite severe. These characteristics of a particular environment (salience, effort, expectancy, value), and the degree to which they are factors in a particular information processing task, dictate the allocation of attention between tasks.
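These four factors can be combined in a simple weighted model along the lines of the SEEV framework of Wickens et al. (2001): salience, expectancy, and value raise the likelihood of attending a channel, while access effort lowers it. The sketch below is a hedged illustration of that idea; the weights and channel scores are made-up numbers, not parameters reported in the cited work.

```python
# Hedged sketch of a SEEV-style attention allocation model: attention to a
# channel rises with salience, expectancy, and value, and falls with the
# effort needed to access it. All numeric values here are illustrative.

def seev_score(salience, effort, expectancy, value,
               weights=(1.0, 1.0, 1.0, 1.0)):
    ws, we, wx, wv = weights
    return ws * salience - we * effort + wx * expectancy + wv * value

# Two competing channels during driving (scores on arbitrary 0-1 scales):
roadway = seev_score(salience=0.6, effort=0.1, expectancy=0.8, value=1.0)
in_vehicle_display = seev_score(salience=0.4, effort=0.5, expectancy=0.3, value=0.3)

total = roadway + in_vehicle_display
print(f"Predicted share of attention to roadway: {roadway / total:.2f}")
print(f"Predicted share of attention to display: {in_vehicle_display / total:.2f}")
```

With these assumed scores the roadway dominates, reflecting the high value and expectancy of outside-world information described above; raising a display's salience or lowering its access effort shifts the predicted share toward the in-vehicle channel.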
Additional resource allocation issues are relevant when considering the processing modalities dimension (auditory vs. visual); these include visual resource competition and the auditory preemption effect (Helleberg & Wickens, 2003). The majority of the feedback the driver receives from driving is visual (according to Rockwell (1972), 90 percent). It would therefore seem prudent to avoid in-vehicle devices that require visual processing, especially in high-workload situations. However, visual information is self-paced, in that it allows drivers to delay attending to the information until a time when they can safely remove their eyes from the roadway; the limitations of working memory are thereby alleviated with visual information. Auditory information, in contrast, is generally forced-paced and requires the driver to perceive it immediately. A repeat function in the equipment would be required for the information to be recalled if it is forgotten (Sanders & McCormick, 1993).
A further challenge to an auditory display is the auditory preemption effect, which can reduce the benefit of employing the two modalities. This phenomenon occurs when the presentation of discrete auditory information disrupts or preempts an ongoing visual task (Wickens & Liu, 1988). The transient nature of auditory information and the limited capacity of working memory to retain extensive pieces of information, as noted above, may necessitate that attention be promptly directed to the auditory information upon its presentation (Wickens, Dixon, & Seppelt, 2002); performance on the driving task is consequently sacrificed. Furthermore, the auditory preemption effect also results from the attention-grabbing character of the auditory modality (Sorkin, 1987; Spence & Driver, 1997). This disruptive effect of the auditory modality appears to be particularly prominent in situations where the auditory presentation of the in-vehicle task (IVT) is complex and unexpected (Wickens, Dixon, & Seppelt, 2002). Auditory messages that are more abrupt, less predictable, and longer tend to interrupt a continuous visual task to a greater extent than messages that are expected and short (Helleberg & Wickens, 2003; Wickens, Goh, Helleberg, Horrey, & Talleur, in press).

The various attentional mechanisms described above modulate the benefits and costs of using the auditory and visual modalities in the driving task. The benefit of one perceptual modality over another depends upon the design manipulations, which may amplify the effect of a particular set of those mechanisms. The differential effect of auditory versus visual presentation of in-vehicle messages on driver performance may thus differ as a function of the relevance of the in-vehicle information, task workload, and display separation.

Auditory Presentation of In-Vehicle Information

The use of the auditory modality helps to reduce the attentional and cognitive demands that are inherent in multiple concurrent processing situations. For example, a study by Folds and Gerth (1994) examined a multiple process-monitoring situation in which a tracking task and a monitoring task were performed concurrently. The results showed that auditory presentation of warning information led to faster and more accurate primary and secondary task performance than visual presentation of the same information. Wickens, Dixon, and Seppelt (2002) examined a dual-task paradigm in which the primary task involved manual tracking and the secondary task involved verbal read-back of digit strings.
The experiment revealed results similar to those of Folds and Gerth (1994), showing a clear advantage of auditory side task delivery relative to visual delivery of side task information at wide eccentricities, for both primary and secondary task performance; participants responded more quickly to auditory than to visual secondary task information, and tracking error decreased with auditory presentation of the side task relative to visual presentation.

A good deal of research has compared auditory and visual presentation modalities for a discrete task paired with a continuous visual task (e.g., Wickens, Sandry, & Vidulich, 1983; Braune & Wickens, 1985; Wickens & Goettl, 1984; Wickens, Braune, & Stokes, 1987; Yeh & Wickens, 1988; Isreal, 1980; Wickens, 1980; Tsang & Wickens, 1988; see Wickens & Liu, 1988, and Wickens, 1980, for a review). The general conclusion drawn from these studies is that performance with the visual modality suffers from increased spatial separation from the primary task (the continuous visual task); this visual cost gives the auditory modality an advantage, supporting better time-sharing than the visual modality. However, minimal visual separation results in a more ambiguous differential effect between the auditory and visual modalities.
At a reduced eccentricity, the auditory preemption effect is more likely to account for the differences between the two modalities, penalizing the continuous visual task but rewarding the discrete auditory task.

The effect of task modality on time-sharing in dual-task situations has also been investigated in high-fidelity domains. A number of studies in vehicle driving have examined the effect of modality on vehicle control, guidance, and navigation performance. In general, these studies pair a side task with the primary driving task and examine the influence of side task information modality on primary and secondary task performance. When the implications of the multiple resource model are taken into consideration, the primarily visual driving task is hypothesized to benefit from an aurally delivered side task. These studies show mixed results, however, as to whether there is an auditory benefit (better primary and secondary task performance when the auditory rather than the visual modality is used to present information) or an auditory cost (better primary and secondary task performance when the visual rather than the auditory modality is used to present information).

Following the procedure used by Wickens and Seppelt (2002), we present the following review of driving studies in three categories: 1) those that showed auditory benefits in neither task, 2) those that showed auditory benefits in one task, and 3) those that showed auditory benefits in both tasks. We then try to infer common properties of the studies that lead to the particular category of results they show. To represent the collective effects of a set of studies, we first represent each study by a simple icon: a horizontal axis with two vertical lines emerging (see Figure I1 below). The degree of auditory benefit (up line) or cost (down line) is indicated by the vertical direction of the lines. The line on the left refers to benefits (or costs) of auditory display to the primary driving task; that on the right refers to performance on the IVT itself. When there is neither cost nor benefit, this is indicated by a dot on the horizontal axis. If workload was varied, its effects are represented by two vertical lines on either the primary (left side) or secondary (right side) task. The slope of the vector connecting these lines thus represents whether increasing workload enhances the auditory benefit (or diminishes the auditory cost), sloping upward from left to right, or has the opposite effect, sloping downward. Finally, next to each icon, we describe two key features that, we will see, appear to modulate the relative pattern of effects: 1) whether the display of visual information is in a head-up (HUD) or head-down (HDD) location, and 2) whether the IVT contains driving-relevant (Rel) or irrelevant (IRR) information.

Figure I1. Example of icon figure.
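The coding scheme behind these icon figures can also be written down as a small data structure, which may make the three classes used below easier to follow. The field names, the scoring convention, and the example entry are illustrative assumptions rather than a re-analysis of the original studies.

```python
# Illustrative sketch of the study-coding scheme behind Figures I1-I4:
# each study is scored for auditory benefit (+1), no difference (0), or
# auditory cost (-1) on the primary (driving) and secondary (IVT) tasks,
# plus the two moderating properties tracked in the review.
from dataclasses import dataclass

@dataclass
class StudyIcon:
    citation: str
    primary_effect: int      # +1 auditory benefit, 0 no difference, -1 auditory cost
    secondary_effect: int
    visual_display: str      # "HUD" or "HDD" (head-down)
    relevance: str           # "Rel" (driving-relevant) or "IRR" (irrelevant)

def classify(study: StudyIcon) -> str:
    """Assign the study to one of the three classes used in the review."""
    benefits = sum(effect > 0 for effect in
                   (study.primary_effect, study.secondary_effect))
    if benefits == 0:
        return "Class I: auditory benefit on neither task"
    if benefits == 1:
        return "Class II: mixed auditory cost/benefit"
    return "Class III: auditory benefit on both tasks"

# Example coding (effect values chosen for illustration only):
example = StudyIcon("Horrey & Wickens, 2002 (HUD)", 0, -1, "HUD", "IRR")
print(classify(example))   # -> Class I
```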


AV better than VV on NEITHER task

An auditory cost can be seen in a number of studies that examine the modality of in-vehicle information displays (Matthews, Sparkes, & Bygrave, 1996; Horrey & Wickens, 2002; Lee, Gore, & Campbell, 1999; Dingus et al., 1997). These studies show the visual modality to be equal to, if not better than, the auditory modality for measures of both primary and secondary task performance, hence providing results at odds with the predictions of multiple resource theory.

Matthews, Sparkes, and Bygrave (1996) had participants drive a low-fidelity driving simulator (on a desktop computer); the roadway consisted of straight and curved sections. A reasoning task (irrelevant to the driving task), requiring subjects to assess whether simple sentences followed by a letter pair were true or false, was presented either visually or auditorially. The auditory items were generated as computerized speech; the visual items were overlaid onto the simulated driving scene (e.g., on road signs). Results showed an auditory cost for both primary and secondary task measures in the high driving workload condition. No dual-task decrement was observed for the visual display, suggesting that the focal vision required did not disrupt the ambient driving task (e.g., Summala et al., 1996). The accuracy of responses to the IVT for both modalities deteriorated as more priority was given to the driving task.

Horrey and Wickens (2002) employed an experimental paradigm similar to that used in the current experiment. They examined the effect of display clutter, separation, and modality on driving and side task performance. Participants drove through a high-fidelity simulated driving environment, consisting of urban, rural straight, and rural curved roads, and concurrently performed a digit callout task (irrelevant to the driving task). Random strings of 4, 7, and 10 digits were presented to participants either on an overlaid display (0 degrees below the horizon line), a head-up display (7 degrees below the horizon line), a head-down display (38 degrees below the horizon line, in the center of the console), or aurally through a set of speakers located in the vehicle. Response times and accuracy were recorded for the digit task, and vehicle lane keeping, speed control, and hazard event detection times were recorded for the driving task. Drivers generally protected the driving task regardless of display manipulation, showing no effect on vehicle control, and no effect on hazard event detection in any condition except the head-down presentation, in which the hazard response was slowed. Accuracy and response duration for the digit task suffered with complex auditory information compared to visual presentation (HDD and HUD) of the side task information.
However, participants reacted more quickly to the side task, supporting a preemptive effect of the auditory modality (Helleberg & Wickens, 2003).

Lee, Gore, and Campbell (1999) also used auditory information identical in form to the visual information (text messages conveying driving status for different driving events). The visual display was located in the instrument panel directly in front of the driver, approximately 21 degrees below the line of sight. Participants drove simulated routes in which they received messages alerting them to the presence of varying events (relevant to the driving task) throughout the driving environments. Drivers were asked to acknowledge each information message (or roadway warning) by pressing one of two steering wheel buttons, in order to measure the response time to attend to the in-vehicle messages. The messages were presented in either a command or a notification style, to test the effect of the conveyed immediacy of the message on driver response time.
The primary driving task was protected across manipulations, with vehicle control and guidance measures equivalent for auditory and visual displays; this is consistent with the finding from Horrey and Wickens (2002). The effect of modality on side task (IVT) performance, however, showed an auditory cost, with a smaller percentage of messages acknowledged than for visual messages; this auditory decrement was modulated by message style, with command messages increasing the decrement.

Dingus et al. (1997) examined the effects of the modality of collision warning information displays on driving performance in a realistic environment. The visual form of the warning information displayed a series of nine colored bars placed in perspective; as the distance between the participant's vehicle and the lead vehicle became shorter, new bars were displayed one below the other, increasing in number and changing in color from green to orange to red shades, depending on the urgency of the headway distance. The visual screen was mounted in the dashboard next to an analog speedometer and was located approximately 23 degrees below the line of sight. The same criteria that determined the urgency of the headway distance were employed in the auditory form of the warning information; voice messages signaled a shift in the urgency of the headway distances in the same way that color was used in the visual display. A redundant condition included both the auditory and visual display formats. No significant differences across modality were observed for either primary or secondary task measures. Dual-task decrements, however, were observed for each modality condition compared to single-task measures of headway-monitoring performance.

The results from these studies, which are graphically summarized in Figure I2, suggest that the benefit of auditory presentation of secondary task information may be mitigated by processing mechanisms and display characteristics. Four studies fell into this category of auditory cost or no auditory advantage. In a distinction that will prove to be important below, two had irrelevant messages (Matthews, Sparkes, & Bygrave, 1996; Horrey & Wickens, 2002) and two had driving-relevant messages (Lee et al., 1999; Dingus et al., 1997). Also, the two irrelevant-message studies found no visual cost (relative to auditory) when the visual display was head-up. In general, however, there appeared to be no clearly defining property that characterized all four of these studies.
Figure I2. Icon figure of auditory cost studies (Class I): Matthews et al., 1996 (IRR, HUD); Horrey & Wickens, 2002 (IRR, HUD); Lee et al., 1999 (Rel, Down); Dingus et al., 1997 (Rel, Down). (In Horrey and Wickens (2002), we only depict the results from the HUD condition here.)

AV better than VV on ONE task

Both an auditory cost and an auditory benefit can be seen in a number of studies that examine the modality of in-vehicle information displays (Mollenhauer et al., 1994; Labiale, 1990; Parkes & Burnett, 1993; Hurwitz & Wheatley, 2002; Ranney et al., 2002; Srinivasan & Jovanis, 1997). These studies show the visual modality to be equal to, if not better than, the auditory modality on either primary or secondary task performance, but the auditory modality to be superior to the visual modality on one of the two tasks (primary or secondary), suggesting a trade-off between modality and the allocation of resources between the two tasks.

Mollenhauer, Lee, Cho, Hulse, and Dingus (1994) examined the effects of the modality and information workload of in-vehicle information systems on driver performance. Drivers were expected to process and adhere to road sign information presented throughout the simulated driving task. The relevant road sign information included the current speed limit and road number, and the information presented on the last displayed sign; this information was presented either auditorially or visually on an LCD display located on top of the simulator instrument panel directly ahead of the driver (approximately 11 degrees below the line of sight).
The secondary information task consisted of two levels of task workload; the road sign information was presented either at the rate at which a driver would see signs during a normal drive (high workload) or at half that rate (low workload). Auditory presentation impaired driving performance, resulting in greater lane position deviation, steering wheel velocity, road-heading error, and speed deviation compared to driving performance with visual displays. A trade-off was seen between this driving performance decrement and an improvement in secondary task performance: drivers recalled approximately two more information items with auditory presentation than with visual presentation. However, independent of modality, higher information workload significantly decreased the number of recalled items.

Horrey and Wickens (2002), as discussed earlier, found that drivers protected the primary lane-keeping task in driving, regardless of display manipulation. For the primary task measure of hazard detection, however, drivers noticed hazard events more rapidly in the auditory condition than in the head-down visual condition, thus showing a greater level of driving safety for auditory displays. In contrast, auditory presentation of side task information resulted in greater speed variation than with the visual head-down display, though the implications for the classification of an auditory cost or benefit are inconclusive, as speed control was not considered an indicator of driving safety. Again, results showed an auditory cost for the secondary task measures of accuracy and response duration. A significant auditory advantage was evident, however, for response times to the secondary task.

Labiale (1990) had participants engage in real driving that was directed according to navigational information received in verbal format through either the visual or the auditory modality. An auditory beep signaled each occurrence of a navigational message in both modalities. The length of the messages was varied from 4 information units to 18 information units to examine the effects of message length on driving performance; the visual information was displayed for a limited amount of time, calculated to allow only one read-through, in order to be comparable to the auditory constraints of phonetic memory. The visual display was located on the right-hand side of the instrument panel and oriented towards the driver (approximately 23 degrees below the line of sight). Results did not reveal a main effect of the modality of road information messages on vehicle and speed control performance. However, the manipulation of IVT workload revealed some important interactions; at low IVT workload, the visual and auditory modalities performed equally on measures of primary and secondary task performance.
The increase in message length (IVT workload), however, modulated the differential effect between modalities for primary task performance: a benefit of using the auditory display was seen in the high workload conditions, enabling more precise speed and vehicle control than in the high-workload visual conditions. This supports the notion that dual-task performance is affected by changes in workload; the increase in the workload of one task is offset by performance decrements in the other task (Wickens, 2002). However, an auditory cost was seen in the high-workload IVT message condition for secondary task performance, with the decrease in recall performance from the low to the high workload condition being of a greater magnitude than that in the visual modality.

Hurwitz and Wheatley (2002) analyzed driver performance as participants drove in a high-fidelity driving simulator on either a predominantly straight road or a predominantly curvy road. The information processing task required participants to monitor a continuous stream of letters and to enact a button response when a particular letter was presented; this task was


Six studies fell into this category of mixed auditory cost and auditory benefit for primary and secondary tasks (refer to Figure I3). The results from these studies suggest that the benefit of auditory presentation for primary task performance may be determined by display location. All of the studies that showed a benefit to primary task performance had visual displays located at wide eccentricities from the line of sight; an auditory cost for primary task performance was seen only in the one study that presented visual information on a head-up display (Mollenhauer et al., 1994). However, there appeared to be no clear characterization of secondary task performance. As in the first set of studies, which showed an auditory cost, half of the studies (three out of six) had irrelevant messages (Horrey & Wickens, 2002; Hurwitz & Wheatley, 2002; Ranney et al., 2002) and the other three had driving-relevant messages (Mollenhauer et al., 1994; Labiale, 1990; Srinivasan & Jovanis, 1997). Another property of the studies shown in Figure I3 is the fact that increasing workload generally increased the auditory advantage.

Figure I3. Icon figure of mixed auditory cost/benefit studies (Class II): Mollenhauer et al., 1994 (Rel, HUD); Horrey & Wickens, 2002 (IRR, Down); Labiale, 1990 (Rel, Down); Hurwitz & Wheatley, 2002 (IRR, Down); Ranney et al., 2002 (IRR, Down); Srinivasan & Jovanis, 1997 (Rel, Down).


speed limit, traffic density, and number of intersections. The navigational instructions were in either visual, auditory, or redundant form. The visual information consisted of either a route guidance map or text messages combined with icons of vehicle and roadway status, and was presented on an LCD display mounted centrally above the dashboard (approximately 24 degrees to the left of the top of the steering wheel). The auditory information contained roadway-relevant information or route guidance information and was presented through a speaker in front of the passenger seat. The redundant information was a simple combination of the auditory and visual information. The three modality conditions utilized the same set of information units (for the navigational information task), which included geographical entities, road type and position, and driving instruction. Participants performed the navigation task under both high (visual: 9-14 information units; auditory: message presented every 5-8 seconds) and low (visual: 3-5 information units; auditory: message presented every 20+ seconds) workloads. An additional secondary task required participants to respond manually to the vehicle information with a button press, categorizing it as either road condition or vehicle condition information.

The overall effect of modality on primary task performance suggests a benefit of auditory presentation of IVT information, with better steering and speed control for auditory than for visual information. The benefit of auditory presentation was modulated by information complexity; an increase in information complexity resulted in an increase in the auditory benefit to driving performance. The increase in message complexity additionally resulted in longer reaction times for the button-press task with visual, complex messages. The effect of modality on secondary task performance was contingent upon the driving task load, showing significant effects of modality only in the high workload driving condition. The auditory display resulted in better information processing relative to the visual display, with faster reaction times to the button-push task, fewer navigation-related errors on the navigation task, and a higher percentage of correct turns. The benefit of auditory presentation was somewhat mitigated, however, by information complexity; more wrong turns were made in the auditory condition than in the visual condition with complex information.

Streeter, Vitello, and Wonsiewicz (1985) conducted a study that compared the effectiveness of two navigational aids that differed from each other in both modality and processing code. Participants engaged in a driving task along moderately difficult local roads and simultaneously navigated their route with a particular in-vehicle navigation display.
The visual information was a spatial representation, a navigational map (a hand-held map); the auditory information was a verbal representation, a recorded list of street names and directions. Performance on the primary driving task was better using the auditory display, with more accurate speed control and navigation; this benefit may be attributed to the reported greater success in navigating using textual instructions rather than a spatial representation of the particular desired path (Sanders & McCormick, 1993). The benefit of a verbal representation of navigational information was further seen in secondary task performance, with the auditory information display producing only one-third of the navigational errors (e.g., wrong turn, missed turn) observed with the visual information display.

Walker et al. (1990) examined the effects of auditory and visual in-vehicle navigation devices on driver performance, as assessed by lateral deviation, speed control, and reaction time to gauge changes. The visual device presented electronic and textual route information (on a CRT display located to the right of the driver, approximately 20 degrees below the line of
sight); the auditory device presented aural direction and route specification messages. The IVT workload was manipulated for both the auditory and visual displays with three levels of information complexity. The driving task workload was also manipulated with the addition of crosswinds, traffic, narrowing lanes, and information processing tasks. The auditory information displays supported better time-sharing performance relative to the visual information displays for both primary and secondary task measures (vehicle and speed control, and number of navigational errors, respectively). The effect of primary task workload on driver and IVT performance was evident only in drivers' preference to navigate in a low-complexity environment. That is, driving workload had no influence on the auditory benefit for driving performance. The interaction of workload with secondary task performance, however, reduced the auditory benefit for IVT performance (the advantage of auditory over visual performance decreased), but IVT complexity increased the driving performance advantage of the auditory display.

The results from these studies, summarized in Figure I4, suggest that the benefit of auditory presentation of secondary task information may be influenced by display location and message relevance. Five studies fell into this category of auditory benefit or auditory advantage for both primary and secondary task performance. The majority of the studies (four out of five) presented the visual information on a head-down display. This would suggest that the auditory presentation of the navigation information may receive an advantage over the visual presentation due to the large visual display separation, an observation which will be discussed in a later section. Furthermore, all five of the studies had driving-relevant messages (Burnett & Joyner, 1997; Gish et al., 1999; Liu, 2001; Streeter et al., 1985; Walker et al., 1990), showing a clear trend for an auditory benefit for both primary and secondary task performance with relevant messages. The extent to which this benefit will fail to emerge with irrelevant messages will be assessed in the current study.
Figure I4. Icon figure of auditory benefit studies (Class III): Burnett & Joyner, 1997 (Rel, HUD); Gish et al., 1999 (Rel, Down/HUD); Liu, 2001 (Rel, Down and over); Streeter et al., 1985 (Rel, Down, map); Walker et al., 1990 (Rel, Down).

The literature comparing cross-modal and intra-modal information presentation reveals mixed findings for dual-task performance. If multiple resources are the important mechanism, auditory presentation of side task information is expected to result in improved time-sharing performance between primary and secondary tasks relative to visual presentation of the same information (Wickens, 2002). The various display manipulations, however, mitigate the instances of an auditory advantage or disadvantage within the literature. A review of the driving literature that manipulates the modality of delivery of in-vehicle task information in realistic driving simulations identifies the location of the visual display (HUD versus head-down) as one factor that modulates performance relative to an auditory display. A second factor, potentially, is the relevance of the message to the driving task, as a trend distinguishing the instances of an auditory advantage (more likely when relevant) from an auditory disadvantage (more likely when irrelevant; Wickens & Seppelt, 2002). We consider this relevance factor in more detail in the following section.


Relevance of In-Vehicle Tasks

A majority of vehicle-based cross-modal studies have delivered "related" (ATIS, navigation) information, rather than unrelated in-vehicle task information (i.e., infotainment information, e-mail, cell-phone numbers). Of the reviewed studies, two of the four auditory cost studies, three of the six mixed auditory benefit/cost studies, and all five of the auditory benefit studies had driving-relevant messages (Lee et al., 1999; Dingus et al., 1997; Mollenhauer et al., 1994; Labiale, 1990; Srinivasan & Jovanis, 1997; Burnett & Joyner, 1997; Gish et al., 1999; Liu, 2001; Streeter et al., 1985; Walker et al., 1990). These driving studies reveal either that auditory presentation of side task information provides less disruption of driving than visual presentation, or that auditory and visual presentation of side task information yield equivalent effects on driving performance. In only one case did researchers find that visual presentation of side task information resulted in less distraction to driving performance than auditory presentation (Mollenhauer et al., 1994).

In contrast to the above, an auditory cost is more likely to be seen in studies that utilize secondary tasks with less direct relevance to the driving task. Matthews, Sparkes, and Bygrave (1996) required participants to perform Baddeley's (1986) Reasoning Test as a secondary task to the primary driving task. The task had no bearing on the nature of the driving task. Results yielded a cost of utilizing the auditory modality for presentation of the varying sentences. Horrey and Wickens (2002) reported a slight cost of presenting unrelated auditory information, with the auditory condition exhibiting more variation in speed control relative to the head-down visual condition.

This analysis of the dual-task driving literature suggests a trend, partially supported by other data, that irrelevant auditory messages can distract from primary visual tasks (e.g., Latorella, 1998; Wickens & Liu, 1988). A non-driving study by Tsang and Rothschild (1985), which paired a spatial transformation task (a secondary task not relevant to the primary task) with a compensatory tracking task, supports this interpretation. The results indicated a cost of presenting the spatial information aurally (achieved through variation of tone location) as opposed to presenting the information visually. Participants were distracted by the information rather than assisted in performing the tracking task. Because irrelevant tasks do not support the task of driving, a primarily visual task, there is no benefit in offloading cognitive demands to the auditory modality to decrease the workload of the driving task.
Drivers are less skilled at integrating (or filtering) irrelevant auditory information when they are concurrently performing the visual driving task, and indeed such auditory delivery has been found to "preempt" ongoing visual tasks in aviation research (e.g., Helleberg & Wickens, 2003; Latorella, 1998). Wickens and Liu (1988) developed a "signature" of this auditory preemption effect, whereby a discrete secondary task (here represented by the in-vehicle task) benefits from auditory delivery of information, since that delivery will "call attention" to its message, whereas the ongoing visual task (here, driving) will suffer with auditory relative to visual delivery of the secondary task. And while the auditory-visual / related-unrelated interaction can be tied to auditory preemption by unrelated tasks, it is important to note that it can also be linked to emerging findings from the field of educational technology, namely cognitive load theory, articulated by Sweller and his colleagues (e.g., Tindall-Ford et al., 1997; Wickens & Hollands, 2000, chapter 6). Proponents of this theory stress that when delivering related verbal and spatial (i.e., pictorial) information for instruction, the former is more
effectively delivered by speech (auditory) than by text (visual), thereby capitalizing on multiple resources.

The theoretical explanations behind the relevant/irrelevant distinction are as follows. For relevant messages, an auditory display is beneficial because having attention preempted by the secondary task is useful: the information pertains to the driving task and is intended to support it. This is more beneficial than a visual display because the visual display is more easily ignored and the information could be missed; additionally, taking the eyes from the roadway to gather information for the driving task is unnatural and is distracting for a task that requires one to look at the roadway. A driver is better able to make use of multiple resources and to integrate the information they are hearing if it pertains to the driving task than if it has no significance to the primary task. It is also easier to disengage from the IVT if it is relevant to the driving task, because drivers do not need to re-orient their mental scripts or processes if they are processing the same type of information. For an irrelevant task, however, it is harder to disengage from the IVT if it is auditory, because drivers have to make the mental switch, especially as the auditory information preempts their attention and redirects it to other, cognitively different processes. With a visual display, the operator need only look away to disengage from the IVT; this is a way to self-monitor the intake of information in cognitively taxing situations.

For irrelevant messages, drivers need to be able to ignore the secondary task; this is why a visual message is appropriate, because it gives drivers the ability to look away from the display when they need to concentrate their attentional resources on the driving task (particularly if it is complex). An auditory display is potentially detrimental because, as we have noted, it can preempt the driver's attention and direct it away from the driving task to secondary task information that is irrelevant to the primary task being conducted.

Spatial Separation of In-Vehicle Task

The literature comparing auditory and visual modalities of discrete tasks coupled with a continuous visual task (Wickens & Seppelt, 2002) supports the conclusion that the larger the display separation, the greater the cost of using a visual display rather than an auditory display. Schons and Wickens (1993) found that spatial separation between displays greater than 7.5 degrees degraded performance significantly. Wickens (1992) defines regions of visual scanning in terms of degrees of separation for pairs of displays. The outlined model of information access effort reports that display separation greater than the no-scan region (approximately 7.5 degrees) requires increased time and effort to acquire information in scanning.
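As a rough illustration of why head-down displays fall well outside this roughly 7.5-degree no-scan region, visual eccentricity can be computed from the display's offset below the line of sight and the viewing distance. The geometry below uses assumed, typical in-cab values chosen for illustration, not measurements from any of the cited studies.

```python
# Back-of-the-envelope check (assumed geometry, not from the cited studies):
# eccentricity in degrees for a display offset below the driver's line of sight.
import math

def eccentricity_deg(offset_m: float, viewing_distance_m: float) -> float:
    """Visual angle between the line of sight and the display center."""
    return math.degrees(math.atan2(offset_m, viewing_distance_m))

NO_SCAN_REGION_DEG = 7.5  # approximate threshold discussed in the text

for label, offset, distance in [
    ("head-up display", 0.08, 0.9),         # small assumed offset near the windshield
    ("head-down console display", 0.45, 0.75),
]:
    angle = eccentricity_deg(offset, distance)
    flag = "outside" if angle > NO_SCAN_REGION_DEG else "within"
    print(f"{label}: {angle:.1f} deg ({flag} the ~{NO_SCAN_REGION_DEG} deg no-scan region)")
```

With these assumed values the head-up location stays within the no-scan region while the console display lands near 30 degrees, in the same range as the 20-38 degree head-down locations reported for the reviewed studies.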
At greater eccentricities, drivers must divide their focal visual resources between relevant information in the driving environment and in the vehicle, and thus they are less likely to detect unexpected events in the driving scene (Gish et al., 1999; Horrey & Wickens, 2002).

Of the studies presented in the icon figures, two of the four auditory-cost studies, five of the six mixed auditory benefit/cost studies, and four of the five auditory-benefit studies had head-down displays; these studies showed either an auditory benefit or equivalence between modalities for primary task performance. Of these eleven studies with head-down displays with greater than 20 degrees of separation, all but three (Horrey & Wickens, 2002; Hurwitz & Wheatley, 2002; Ranney et al., 2002) had driving-relevant messages.

The four remaining studies in the review had a separation angle of less than 11 degrees (HUDs). Of these, one revealed an advantage of auditory over visual display use (Burnett & Joyner, 1997), one revealed equivalent results for auditory and visual display use (Horrey & Wickens, 2002 - with the HUD), while two actually found an advantage of using a visual display over an auditory display (Mollenhauer et al., 1994; Matthews et al., 1996). Note that Matthews et al. (1996) was a study in which the IVT display was unrelated to driving. Thus it appears that when there is substantial visual display separation (>15 degrees), there is usually a cost to using the visual display over the auditory display. When there is not a large display separation, the results are equivocal, showing either no difference or a visual advantage.

Workload

Primary Task Workload

A relatively small number of dual-task driving studies look systematically at how any difference between auditory and visual information displays is modified by driving workload or task complexity (i.e., in terms of a modality x complexity interaction in some measure of driving performance; Liu, 2001; Walker et al., 1990; Hurwitz & Wheatley, 2002; Srinivasan & Jovanis, 1997; Horrey & Wickens, 2002; Matthews, Sparkes, & Bygrave, 1996). These workload effects are reflected by the slope of the lines in Figures I2 - I4. Liu (2001) reported that under the high driving load condition, participants tended to modulate speed more accurately when using the auditory display than when using the visual display or the multimodality display. Additionally, as driving load increased, driving control (as measured by mean absolute velocity deviation) with use of the multimodality display increased relative to the auditory and visual displays. Hurwitz and Wheatley (2002) found that steering wheel variations and lane deviations were greater on curved (versus straight) tracks that utilized visual secondary tasks than on trips that utilized auditory tasks. The benefit of auditory displays for primary control performance increased as the driving load increased. Srinivasan and Jovanis (1997) found that the auditory benefit increased with more complex driving situations. Horrey and Wickens (2002) found no interaction between increased driving load and display modality. Only one study showed an interaction such that there was greater harm in the auditory than in the visual condition as driving workload increased (Matthews, Sparkes, & Bygrave, 1996). Matthews et al. (1996) found that dual-task interference was evident only in curve driving in the auditory condition, a finding that did not occur in straight driving, implying that auditory and visual displays yield equivalent performance at low driving loads.

The manipulation of driving load further carries implications for in-vehicle side task performance. Liu (2001) found that an increase in driving load led to improved navigation with an auditory display compared to a visual display (with the highest percentage of correct turns associated with the auditory display); at low driving load, no significant effects of the modality manipulation arose, whereas at high driving load, the auditory display supported significantly better performance than the visual display.


Thus, in general, most of the reviewed studies show an increased benefit of using an auditory display as driving workload increases. Increased complexity of the driving task increases the bandwidth of the guidance, navigation, and hazard detection tasks; thus, drivers must devote more of their visual resources to the driving environment and consequently have fewer visual resources to allocate to an in-vehicle task. The auditory modality helps to offload the visual requirements and provides better time-sharing for both the driving task and the in-vehicle task.

Secondary Task Workload

An increase in the complexity of the side task in the dual-task literature is shown to have an impact on driving and secondary task performance (Dingus et al., 1994; Liu, 2001; Walker et al., 1990; Ranney et al., 2002; Srinivasan & Jovanis, 1997; Labiale, 1990; Mollenhauer et al., 1994; Horrey & Wickens, 2002). A number of studies find that an increase in IVT complexity leads to an increased auditory benefit for the primary driving task (Liu, 2001; Walker et al., 1990; Dingus et al., 1994; Labiale, 1990). Liu (2001) reported that, compared to equivalent performance in low IVT workload conditions, complex (high IVT workload) visual information decreased drivers' vehicle control and led to more frequent major lane deviations than when such information was presented in the auditory and multimodality conditions. Under high load conditions, the auditory modality required less driver attention and led to safer driving behavior than did the visual modality. Both Walker et al. (1990) and Labiale (1990) report results suggesting that increasing complexity either produced a visual cost, or amplified a pre-existing visual cost, in using a visual information display. Walker et al. (1990) reported that a low IVT workload was met with equivalent performance, whereas a high IVT workload resulted in an auditory benefit, with greater lane maintenance and speed control found with use of the complex auditory display than with the complex visual display. Labiale (1990) reported that at low workloads, visual and auditory presentations of IVT information affected driving performance equally; however, at high IVT complexity, auditory presentation resulted in more effective speed and vehicle control than did visual presentation of IVT information. Dingus et al. (1994) found that the addition of voice reduced the visual attention demand requirements; this effect was more pronounced with an increase in display complexity (turn-by-turn versus route map). The combined results of these studies suggest that the predictions of MRT are upheld, in that subjects exhibited better driving performance when able to offload the secondary task to the auditory modality.

The conclusions drawn from research that has examined the effects of increased side task workload on side task performance, on the other hand, are mixed.
Liu (2001) found an auditory benefit with the IVT load increase, showing that under the complex secondary task condition, a greater decrement in response time and error rate was evident for the visual modality compared to the auditory modality and multimodality; IVT performance was equivalent for the visual and auditory modalities under low IVT workload conditions. In contrast, Labiale (1990) found that under high workload conditions, visual presentation of IVT information results in better recall performance than auditory presentation of IVT information; IVT performance is equivalent at low workload conditions. However, the degree of auditory cost found in the Labiale (1990) study is mitigated by the finding that auditory presentation is better than visual presentation of IVT information with increased message complexity, as measured by the number and duration of visual explorations. This finding is arguably a more direct measure of driving safety than recall performance and thus a more relevant determinant of AV cost/benefit. Finally, Horrey and Wickens (2002) noted that very long auditory messages (10-digit phone numbers) led to a decrease in accuracy as the capacity of working memory was exceeded, a decrease that was not shown for the visually delivered phone numbers nor, as noted above, in the primary driving task.

Redundant Presentation of Information

In-vehicle information may be presented using visual displays, auditory displays, or both. The majority of the information needed to perform the driving task is obtained visually, and thus it would follow that auditory presentation of secondary task information would detract less from the driving task. Further, intuition would suggest that redundant presentation of the secondary task information should lead to better performance, as each modality would be reinforced by the presence of the other. Helleberg and Wickens (2003), however, present findings that run counter to this intuition. The study compared the effectiveness of auditory, visual, and auditory-visual data link displays (in the context of an aviation domain). The redundant display condition resulted in inferior performance relative to both single-modality conditions; the cost in performance was attributed to the distracting nature of the auditory stimulus when initiated during an ongoing visual task. Stokes, Wickens, and Kite (1990) reinforce the idea of audio messages being intrusive (or preemptive, as discussed earlier). The results of the Helleberg and Wickens (2003) study suggest that redundant presentation of information may not lead to superior performance because of the difficulty in uncoupling the two modalities in such a way as to take advantage of each modality individually. Essentially, the two modalities limit each other by mutually competing for attention and by forcing the completion of the task to hinge upon the less effective (i.e., longest reaction time, longest processing time) modality. The findings on the effect of redundancy on dual-task performance are mixed.

In order to better understand the dual-task effectiveness of redundancy, Wickens and Gosney (2003) constructed a five-category system to classify the effects of redundant presentation of secondary task information relative to single-modality presentation. In the case that the redundant display produces better dual-task performance than either single modality alone, the effect is classified as "Gestalt", following the "the whole is greater than the sum of its parts" definition. In the best of both worlds (BOBW) pattern, the redundant display produces performance that is equal to the better of the two single-modality conditions.
In the case that the redundant display yields dual-task performance that is midway between the performance of the two single-modality conditions, the redundant display is classified as Average. In the worst of both worlds (WOBW) pattern, the redundant display produces performance that is equal to the poorer of the two single-modality conditions. And finally, in the case that the redundant display produces worse dual-task performance than either single modality alone, the effect is classified as Anti-Gestalt (a summary of these five categories can be seen in Figure I5 below). Naturally, one of these five categories can characterize performance on each of the primary and secondary tasks, in each set of experimental results.
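To make the five categories concrete, the following minimal sketch (our own illustration, not code from Wickens and Gosney, 2003) classifies a redundant condition's score against the two single-modality scores. It assumes that higher scores indicate better performance and uses a small tolerance to stand in for "equal to".

import sys

def classify_redundancy(av: float, a: float, v: float, tol: float = 1e-6) -> str:
    """Classify a redundant (AV) display's performance against the single
    modalities (A, V), following the five categories described above.
    Scores are assumed to be 'higher is better' (e.g., accuracy, not RT)."""
    best, worst = max(a, v), min(a, v)
    if av > best + tol:
        return "Gestalt"                  # AV > A, V
    if abs(av - best) <= tol:
        return "Best of Both Worlds"      # AV = Best(A, V)
    if av < worst - tol:
        return "Anti-Gestalt"             # AV < A, V
    if abs(av - worst) <= tol:
        return "Worst of Both Worlds"     # AV = Worst(A, V)
    return "Average"                      # between the two single modalities

if __name__ == "__main__":
    # Redundant score halfway between the single modalities -> "Average"
    print(classify_redundancy(av=0.75, a=0.70, v=0.80))
    sys.exit(0)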


Figure I5. Classes of redundancy effects, from Wickens and Gosney (2003): Gestalt (AV > A, V), Best of Both Worlds (AV = Best(A, V)), Average (AV = (A + V)/2), Worst of Both Worlds (AV = Worst(A, V)), and Anti-Gestalt (AV < A, V). The graphs on the right represent hypothetical measures of performance, with higher performance shown by higher levels.

Within the context of the five-level categorization, Wickens and Gosney (2003) examined the effects of modality, redundancy, display location, and task priority of a discrete IVT task (a digit callout task) on interference with a continuous visual tracking task. Results showed that the redundant display of the secondary information resulted in an Average effect of redundancy at near angles of separation and a WOBW effect of redundancy at far angles for measures of primary and secondary task performance. And while instructions in the use of the redundant displays did improve participants' performance, the single-modality displays still resulted in better performance, as the redundant condition was overall the average of the two single-modality conditions.

In the context of the driving domain, the results of four studies show an advantage of using a redundant display over a single-modality display, whether over the visual display (Dingus et al., 1994; Srinivasan et al., 1994; Parkes & Burnett, 1993) or over both single modalities (Liu, 2001). Dingus et al. (1994) presented four configurations of a navigational system to participants. Each participant navigated a route with a visually displayed turn-by-turn system either with voice or without voice, or a route map with voice or without (an auditory-alone condition was not included). The visual form of the navigational information presented either a full, forward-directional route map (located to the right of the driver on the horizontal center of the dashboard, approximately 15 degrees below horizontal), a graphical representation of static turn-by-turn information, a textual paper direction list, or a conventional paper map. The auditory form of the navigational information presented the visual directional list and the forward-directional route map with redundant aural messages. The format of the route information (turn-by-turn information or a spatial map representation) varied the workload, with increased visual attention required for the spatial map presentation. Mean velocity and


Redundant modality presentation (auditory and visual) of secondary task information is expected to improve driving and side task performance by providing the benefits (strengths) of both modal channels; however, redundant presentation has not always led to better performance than that of the two single-modality display conditions (Helleberg & Wickens, 2003); and of the driving studies reviewed, only one evaluated both single-modality conditions, allowing the results to be placed in the context of the scheme shown in Figure I5.

The different manipulations and methodologies of studies that investigate delivery modality of in-vehicle task information in relatively realistic driving simulation reveal inconsistencies in the conclusions regarding the effects of secondary task complexity, redundancy, and modality presentation on primary and secondary task performance. Our review indicates that relevance of the secondary task information to the primary driving task may modulate the differential effect of auditory versus visual presentation of in-vehicle messages on driver performance (see also Wickens & Seppelt, 2002). However, this contrast between relevant and irrelevant in-vehicle messages has never been examined within an experiment that controlled for the length or complexity of the two classes of messages; hence, this study will include the manipulation of the two message types in the design. A second factor that is expected to mediate the relative difference between the two modalities of information presentation is the complexity of the message set, which this experiment will also manipulate. Finally, in addition to auditory and visual delivery, this experiment will add a third delivery medium involving the redundant presentation of auditory and visual information, for which the literature, as noted above, reveals an inconsistent pattern of benefits.

All conditions in this experiment involve the same general procedure adopted by Horrey and Wickens (2002), in which in-vehicle messages were delivered while the driver engaged in routine vehicle control and responded to unexpected roadway hazards in a high-fidelity simulator. The difficulty of the driving task was controlled with a consistently high (workload) task demand involving numerous curves and elevation changes. Numerous curves were included in the road design because the primary driving task was intended to tax the driver significantly, such that attentional resources were limited. Measures of lane deviation, steering control, and hazard event detection were taken to assess primary driving performance. At random intervals throughout the driving task, participants were presented with in-vehicle messages delivered either auditorily, visually (in a head-down console), or redundantly; half of the participants received messages relevant to the driving task, the other half received messages irrelevant to the driving task.
The complexity of the information messages was varied with either a low-workload message (single sentence/idea, maximum of 5 words) or a high-workload message (two sentences/ideas, maximum of 11 words). Every unexpected hazard occurred just following the delivery of a complex message. A number of hypotheses for this experiment were formulated based on a comprehensive review of the literature (Wickens & Seppelt, 2002) examining the factors which modulate the differential costs and benefits of modality of presentation for primary driving performance and secondary (IVT) task performance.

1) Cross-modal (auditory-visual) information presentation is expected to result in better primary task (safety, e.g., hazard detection) performance and secondary task performance than intra-modal (visual - visual) information presentation, given the head-down location of the latter.


narrow drop shoulder curve, 2 lane rural fork, and 2 lane rural t-intersection with left hand turn lane. The residential sections contained the following tiles: 2 lane residential 90 degree turn, and 2 lane straight residential with center turning lane. Each driving environment was comprised of 2 rural sections and 2 residential sections, with appropriate transition tiles between the different tile sections. Ambient traffic moved throughout the driving environment at a minimal rate of approximately 3-5 cars per minute (or 3-5 cars per kilometer). This moderate amount of traffic reflected a reasonable level of traffic in a rural setting. An example of one driving course is shown in Figure M1b. The environments were portrayed from the drivers' viewpoint in a manner consistent with those seen in Horrey and Wickens (2002) (see Figure M1a).

Figure M1: View of the driving simulator apparatus adopted from Horrey and Wickens (2002) (1a). Overhead view of a typical driving environment with curved rural and residential road sections (1b).

Events

Coupled with the task of driving, subjects were required to process messages that either carried direct relevance to the driving task or carried no direct relevance to the driving task. The messages were delivered at approximately 2-minute intervals for a total of 10 messages per driving route (based on the total driving time of 20-25 minutes); subjects encountered a total of 30 messages across conditions. The 10 task messages per route were randomly selected from a total of 14 possible events. Message complexity was varied for the 10 messages within each driving environment (comprised of 5 simple messages and 5 complex messages per drive). During the periods of baseline measurement, no messages were presented to subjects in order to assess driving task performance alone.

Relevant task messages warned of driving events that occurred downstream; therefore, the relevant driving task messages carried significance for the primary driving task, and the driver's compliance with the message could be inferred from his or her subsequent steering or decelerating behavior. Irrelevant task messages carried no significance for the primary driving task or for the downstream implications in the driving environment. In order to ensure that both relevant and irrelevant messages were heard or read, operators were asked to repeat them out loud. For relevant conditions, the braking and steering responses were examined to determine whether the implications of the messages were understood; an early braking or steering response indicated a higher degree of understanding of, and compliance with, the message. For irrelevant conditions, operators were asked to categorize the messages semantically to ensure that the implications of the messages were understood (see Appendix C for coding forms for relevant and irrelevant messages). The messages (relevant and irrelevant) were delivered to the subjects approximately 8-10 seconds before the relevant driving conditions were encountered downstream and were placed in the driving environment at approximately 2-minute intervals. The irrelevant "infotainment" messages were equated for complexity (equal word length) with the relevant messages and occurred at the same locations in the drive as the corresponding relevant messages (see Appendix D for an example). The irrelevant messages fell into 4 categories: 1) cell phone/e-mail, 2) national news, 3) radio, and 4) weather. The four categories of irrelevant messages were equally represented, such that two of each type (for a total of 8), plus two additional randomly selected messages, comprised the 10 messages per scenario. Examples of the relevant events, ATIS messages (relevant and irrelevant), and downstream implications are presented in Table M1.


Table M1: Examples of ATIS relevant and irrelevant messages.

1. Accident in lane
   Relevant: "Accident in road ahead." (simple); "Accident in road ahead. Slow to 45 MPH." (complex)
   Irrelevant: "Mostly sunny and warm. Highs in the 70s." (complex; Weather); "Smith Job Talk tomorrow." (simple; Cell Phone); "Ex-Serb leader faces tribunal." (simple; National News)
   Downstream implication: Cones, collided cars, emergency vehicles

2. Curve
   Relevant: "Sharp curve ahead. Slow to 35 MPH." (complex)
   Irrelevant: "Arctic chill deepens. Drop to -10 degrees." (complex; Weather)
   Downstream implication: 90-degree turn (residential)

3. Crosswalk
   Relevant: "Pedestrian crossing ahead." (simple)
   Irrelevant: "Alternative Rock 107.1." (simple; Radio); "Light showers today." (simple; Weather)
   Downstream implication: Pedestrian crossing with pedestrians present on sidewalk/roadway

4. Emergency vehicle in lane
   Relevant: "Approaching emergency vehicle." (simple); "Emergency vehicle in lane ahead. Blind spots in curved road ahead." (complex)
   Irrelevant: "Afternoon flurries are expected today. Temperatures rising in the early evening." (complex; Weather); "Volleyball practice tonight." (simple; Cell Phone)
   Downstream implication: Oncoming emergency vehicle (traveling down center of lane)

5. Merging traffic
   Relevant: "Merging traffic from the right/left." (simple); "Heavy traffic ahead. Merging traffic from the right/left." (complex)
   Irrelevant: "Smooth Jazz V98.7. Home of the Trip-A-Day Give-Away." (complex; Radio); "Girl missing after plane crash." (simple; National News); "Contact Linda Carr. Office hours from 3-5 daily." (complex; Cell Phone); "Low temperatures today. Cold front from the North." (complex; Weather)
   Downstream implication: Employ merging road tile, line of cars moving onto main roadway from side road

6. Merging vehicle
   Relevant: "Merging vehicle ahead." (simple); "Merging vehicle ahead. Beware of blind driveways." (complex)
   Irrelevant: "Information request received. Send receipt to Accounting." (complex; Cell Phone); "National Census taken. Latinos largest U.S. minority." (complex; National News)
   Downstream implication: Obscured driveway with car pulling out and joining traffic flow

7. Object in road
   Relevant: "Object in road ahead." (simple); "Object in road ahead. Use caution." (complex)
   Irrelevant: "San Francisco bans Segway." (simple; National News); "Drew and Mike Show." (simple; Radio); "Highs in the 60s. Sunny tomorrow." (complex; Weather)
   Downstream implication: Bicyclist in road ahead in driver's lane

8. Railroad crossing
   Relevant: "Railroad crossing ahead." (simple)
   Irrelevant: "Holiday party tonight." (simple; Cell Phone); "Library item available." (simple; Cell Phone)
   Downstream implication: Employ a railroad tile

9. Reduction of lead vehicle speed
   Relevant: "Slow vehicle ahead." (simple); "Slow vehicle ahead. Curve in road." (complex)
   Irrelevant: "Terrorist leader arrested." (simple; National News); "Tower of Power - What Is Hip?" (complex; Radio); "Send project fax." (simple; Cell Phone)
   Downstream implication: Rapid deceleration of lead car; or car merging onto roadway from left side of road following a curve

10. Road construction
    Relevant: "Construction ahead." (simple); "Construction ahead. Detour to the left/right." (complex)
    Irrelevant: "LiteRock 98.7." (simple; Radio); "Partly cloudy." (simple; Weather); "94.7 WCSX. Rock to the Hits." (complex; Radio)
    Downstream implication: Cones, barrels, construction vehicle, arrow trailer; road veering off to the right/left that loops back onto main road

11. School zone
    Relevant: "School zone ahead. Watch for children." (complex)
    Irrelevant: "Light snow showers. Wind to increase." (complex; Weather); "Smooth Jazz 104.3. Tunes to Groove." (complex; Radio)
    Downstream implication: Place school building and crosswalk

12. Slippery roadway
    Relevant: "Slippery road ahead." (simple)
    Irrelevant: "War protestors arrested." (simple; National News)
    Downstream implication: Change in steering wheel control variables

13. Speed reduction
    Relevant: "Speed reduction ahead." (simple); "Speed reduction ahead. Speed limit 35 MPH." (complex)
    Irrelevant: "Research position available. Application deadline June 5th." (complex; Cell Phone)
    Downstream implication: Transition tile from residential to rural following curved rural tile

14. Traffic congestion
    Relevant: "Traffic congestion ahead." (simple); "Traffic congestion ahead. Detour to the right/left." (complex)
    Irrelevant: "Chemical warheads found. Inspectors to continue search." (complex; National News)
    Downstream implication: Line of cars; road veering off to the right/left that loops back onto main road


The complex messages were twice the length of the simple messages both in word count and in syllable count (the summary statistics of the messages are presented in Table M2).

Table M2. Summary statistics of word count and syllable count of relevant and irrelevant messages.

Category                 Number of Words          Number of Syllables
                         Simple     Complex       Simple     Complex
Relevant      Mean       3.27       7.2           6.6        12.2
              Stdev      0.8        1.32          1.3        2.65
Irrelevant    Mean       3.27       7.2           6.67       13.87
              Stdev      0.8        1.32          1.88       4.09

All text messages were presented to the driver in the head-down display positioned on the dashboard next to the instrument panel. Auditory messages were presented through the vehicle's 4-speaker surround sound system. Redundant messages presented both modalities simultaneously, with the initial phoneme of the auditory message coinciding with the onset of the visual message.

Critical hazard events

In addition to the events pertaining to the task messages, subjects were presented with two unexpected events per driving environment. The critical events were intended to measure driving performance with respect to reaction time to unexpected events (an indirect measure of hazard awareness). The types of critical events are described below (see Figure M2):

Pedestrian/Bicycle Incursions
Vehicle Incursions
Parked Car Pullout
Wide Right Turn
Oncoming Lane Drift
Oncoming Lane Drift (Curve)


Figure M2. Unexpected hazard events used in the study, adopted from Horrey and Wickens (2002): a) Lane drift on a curve; b) Lane drift on a hill; c) Car incursion; d) Bicycle incursion; e) Parked car pullout; f) Wide right turn.

Driving performance was assessed according to the subject's response time to the unexpected events. For the incursion events, the response time was measured from the time that the simulator vehicle triggered the event (via a location or time trigger located on the roadway) to the time when the subject first responded to the event (measured responses: braking, steering maneuvers). The events that involved a lane drift or pullout were measured from the moment that the intruding vehicle began to move towards the simulator vehicle's path to the first response of the subject (events and measures adopted from Horrey & Wickens, 2002). Two unique hazard events per driving environment were presented, for a total of 6 events per subject.


Throughout the experimental drives, participants were presented with IVT messages. Participants were instructed to attend to the messages promptly and to respond to their presentation with a manual button press as quickly as possible; the response buttons were located on the steering wheel, at either side, for ease of response. The calculation of the response time to the button press began at the instant of visual presentation (or, for the auditory conditions, at the start of the initial phoneme) of the IVT message. Following the button press, participants were to read back the displayed (or vocalized) message. Participants were not instructed to press the button at the end of the read-back because of the varied number of words in the simple and complex conditions. Additionally, in the irrelevant message condition, participants were instructed to classify each message verbally according to the 4 category types. This categorization task was intended to guarantee some semantic processing of the message, as was assured to be the case with the relevant messages (in processing and integrating the information for response to the downstream events). The inter-stimulus interval between message presentations was approximately 2 minutes. Half of the subjects received relevant messages that previewed upcoming roadway and hazard conditions carrying implications downstream on the roadway. The other half of the subjects received messages that carried no significance (irrelevant) to the driving task at hand but that required the verbal response and categorization, in addition to the manual key press, in order to ensure mental processing of the message information. Participants were instructed to attend to the IVT messages as promptly as possible and to navigate properly for safe driving.

During each experimental drive, participants were presented with two randomly selected critical hazard events. Each hazard event was coupled with a complex IVT message. The coupled presentations occurred at random locations within the driving environment to ensure the unexpected nature of their presentation.

Following completion of the three experimental drives, participants were administered a brief post-experimental questionnaire on basic driving habits (see Appendix J). The questionnaire was given to allow participants a moment to overcome any residual feelings of unease from driving in the simulator before leaving; thus, the data from the questionnaire were not analyzed or reported in this study. Participants were then thanked and reimbursed for their participation.

Experimental Design

The experimental design was a mixed factorial design (3 x 2 x 2) with 2 within-subjects variables and 1 between-subjects variable.
The experiment manipulated 3 factors of IVIS design: (a) message modality (visual, auditory, combined visual-auditory), which was a within-subjects variable; (b) message complexity (simple, complex), which was a within-subjects variable; and (c) message type (relevant, non-relevant), which was a between-subjects variable. The order of message modality and of the three driving worlds was counterbalanced in a Latin square design. Message complexity was randomly selected within each driving environment. Participants were assigned to the two levels of relevance in an alternating fashion.

In the "relevant" condition (half of the subjects), each of the three roadway environments was scripted with its own unique set of relevant messages, downstream events, and hazard events. In the "irrelevant" condition (half of the subjects), a (unique) message set consisting of the 4 category types was created in a way that matched the syllable and word count of the simple and complex relevant messages. One third of the subjects drove through each environment with one of the three display configurations.

RESULTS

The results will be described according to primary (driving) and secondary (message processing) task performance. The first section addresses the effects of message modality, relevance, and information complexity on measures of primary task performance, including lane deviation (tracking error), steering velocity, and reaction time to hazard events. The second section addresses the effects of the same independent variables on secondary task performance, including reaction time to message presentation and accuracy of message read-back. Measures of compliance are presented as support for the effective design and participant use of the secondary task messages. The analyses were performed using the Systat 10 software. Standard error bars are used on the graphs and show 2 SEs for each data point. Each of the graphs is presented with the x-axis containing message modality ordered with the following labels: Auditory, Redundant, and Visual. The redundant modality is placed in between the single modalities to better compare the effects of redundancy with the single-modality conditions.

Data from ten subjects (those mentioned in the Methods section) were not included in the analyses of primary or secondary task performance due to (substantially) incomplete data sets resulting from simulator sickness and equipment failures. However, two of the thirty-six subjects included in the analyses had minimal missing data; the missing cells were removed from the data analysis for these subjects.

Due to the high volume of driving control data collected (60 Hz), the raw data were transformed into readable outputs and condensed into 15-second windows of time surrounding the 10-second message events. The remainder of the time was integrated and used for control measures. Data for the 15-second windows were separated into 4 data bins that were grouped according to meaningful flags in the data across time. The data flags corresponded to the timeline of events presented earlier in the METHODS section; namely, message presentation (MP), the instance of a button press (BP), and the downstream event (DSE). The three data markers categorized the data across time and served as the zero points for data analyses. A visual representation of the 15-second window of time and the relevant data markers is presented in Figure R1.
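To illustrate this reduction step, the sketch below (our own; the column names, flag times, and bin boundaries are assumptions based on the description above, not the actual analysis scripts) cuts a 15-second window of 60 Hz control data around one message and labels each sample by the MP and BP markers and by the 5-second offsets around the DSE described below.

import numpy as np
import pandas as pd

HZ = 60  # simulator logging rate (samples per second)

def window_and_bin(log: pd.DataFrame, mp: float, bp: float, dse: float,
                   window: float = 15.0) -> pd.DataFrame:
    """Extract a 15-second window of continuous control data starting at the
    message presentation (MP) and label each sample with one of four bins
    defined by the data flags: MP, button press (BP), 5 s before the
    downstream event (-5 DSE), the DSE itself, and 5 s after it (+5 DSE).
    `log` is assumed to have a 'time' column in seconds."""
    win = log[(log["time"] >= mp) & (log["time"] < mp + window)].copy()
    t = win["time"]
    win["bin"] = np.select(
        [t < bp, t < dse - 5, t < dse, t < dse + 5],
        ["MP_to_BP", "BP_to_-5DSE", "-5DSE_to_DSE", "DSE_to_+5DSE"],
        default="after_+5DSE")
    return win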


Figure R1. A typical 15-second message window. The graph shows the continuous vehicle control data over a 15-second window as a function of time, with normalized y-axis values (for the purposes of plotting all of the dependent primary task measures on one graph). The yellow arrows represent the different data bins within the 15-second window of time. The figure depicts a trial in which a hazard became visible around the time of message presentation. Here it is possible to see both the abrupt steering response (green line, downward) and the braking response (yellow dashed line, upward) that occur at the time of the button press.

From the third chronological data marker, the downstream event (DSE), two additional data flags were derived and used to further categorize the data: 5 seconds prior to the downstream event (-5 DSE), and 5 seconds following the downstream event (+5 DSE). The downstream events (referred to in Figure R1) were identical across message relevance (relevant, irrelevant messages), a between-subjects manipulation; only the messages themselves varied as a function of modality and complexity within each of the relevance conditions. For both the primary and secondary task measures, the results for messages that were coupled with hazard events immediately following the message presentation were analyzed separately from those that were not coupled.


Primary Task Performance

The main performance measures for the primary (driving) task were collected from the period of time from the message presentation (MP) to 5 seconds prior to the downstream event (-5 DSE). The logic behind the use of this truncated timeline rests on the notion that participants will have effectively processed the visual and auditory messages 5 seconds prior to the downstream event, and therefore this time period should reflect the greatest impact of the demands and interference of processing the message.

The time period from 5 seconds prior to the downstream event to 5 seconds following the downstream event addresses instead the driver's compliance with the relevant messages themselves. Compliance measures will be addressed in the secondary task performance section. Driving performance during this latter period identifies whether the operator truly understood the meaning and significance of the message.

Lane Keeping

An omnibus ANOVA comparing message modality, message relevance, and message complexity was conducted for the dependent measure of lane position. Lane position was measured by the absolute distance deviation (in meters) of the vehicle relative to the center of the lane.

Figure R2 plots lane deviations as a joint function of modality and message relevance for the nonhazard data (those eight out of ten messages per driving environment that did not include a hazard event). The analysis of these data revealed that modality of message delivery had no effect on lane keeping (F(2,68) = .43; p = .65), although there was a small but significant increase in lane deviations during and after the delivery of all message types, relative to the value of 0.43 measured during baseline driving when no message was present (t(107) = 3.52; p < .01). There was a nonsignificant trend for relevant messages to disrupt lane keeping more than irrelevant messages (F(1,34) = 2.60, p = .12), a trend that appears (in Figure R2) to be enhanced with auditory presentation, although the modality x relevance interaction did not approach significance (F(2,68) = 1.731; p = .185).
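The original analyses were run in Systat 10. As a rough open-source illustration of one slice of this analysis (the modality x relevance comparison on subject-level cell means), a mixed ANOVA could be set up as below. The data file, the column names, and the restriction to a single within-subjects factor are assumptions made for this sketch, not a reproduction of the actual analysis.

import pandas as pd
import pingouin as pg

# Hypothetical input: one row per subject x modality cell, with columns
# 'subject', 'modality' (Auditory/Redundant/Visual), 'relevance'
# (Relevant/Irrelevant, between subjects), and 'lane_dev' (mean absolute
# lane deviation in meters for that cell).
df = pd.read_csv("lane_deviation_cell_means.csv")

# Mixed ANOVA: modality within subjects, relevance between subjects.
aov = pg.mixed_anova(data=df, dv="lane_dev", within="modality",
                     subject="subject", between="relevance")
print(aov[["Source", "F", "p-unc"]])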


Figure R2. Tracking error (lane position) in meters by relevance and modality.

Neither the main effect of complexity, nor its interaction with the other variables (seen in Figure R3), approached conventional levels of significance (F(1,34) = 1.06; p = .31; F(2,68) = 1.06; p = .35). Thus, in general, and replicating Horrey and Wickens (2002), drivers did a good job of protecting lane-keeping performance from differential interference caused by the different aspects of the message task, although messages of all types and modalities caused some disruption equally. The redundant information did not show a significant advantage compared to single-modality information; for all conditions, the redundant display was the average of the two single-modality displays for tracking performance.

Figure R3. Tracking error (lane position) by modality and complexity.


An omnibus ANOVA comparing message modality and message relevance for tracking error during the hazard events (those two out of ten messages per driving environment that included a hazard event) additionally showed no significant effects for the main effects of modality and relevance, nor for the interaction between modality and relevance (F(2,68) = 1.6; p = .21; F(1,34) = .55; p = .46; F(2,68) = 1.5; p = .23). The effect of message properties on the delay in responding to the hazard will be discussed below.

Steering Velocity

An omnibus ANOVA comparing message modality, message relevance, and message complexity was conducted for the dependent measure of RMS steering velocity. Steering velocity was measured as the root mean square of the vehicle's steering wheel velocity (the rate of change of the position of the steering wheel over time). In the absence of differences in lane keeping error, we generally assume that conditions triggering greater steering velocity are more disruptive to driving performance.

Figure R4 plots steering wheel velocity as a joint function of modality and complexity during non-hazard trials; a measure of baseline steering wheel velocity is also included as a comparison between single- and dual-task steering control performance. The figure presents the RMS velocity during the time period from the message presentation to 5 seconds prior to the downstream event. For the nonhazard data (these are the eight messages that are not coupled with unexpected events), there are main effects of message modality and complexity on steering wheel velocity (respectively F(2,68) = 4.54; p = .01; F(1,34) = 5.56; p = .02). The steering velocity is lower with displays that have an auditory component; thus, drivers are able to pay more attention to the roadway and be less affected in their steering control when they are presented with auditory rather than visual messages (or when their eyes are on the roadway instead of on a visual display). More complex messages also result in greater steering velocity, and the marginally significant interaction between complexity and modality (F(2,68) = 2.84; p = .06) indicates that this complexity effect increases as the presence of the auditory component is reduced.
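For concreteness, the RMS steering velocity measure described above could be computed per analysis window along the following lines. This is a minimal sketch under the assumption of a regularly sampled 60 Hz steering wheel position signal, not the simulator's actual reduction code.

import numpy as np

def rms_steering_velocity(wheel_angle: np.ndarray, hz: float = 60.0) -> float:
    """Root-mean-square steering wheel velocity for one analysis window.
    `wheel_angle` is the sampled steering wheel position (e.g., in degrees);
    velocity is its rate of change between successive samples."""
    velocity = np.diff(wheel_angle) * hz      # degrees per second
    return float(np.sqrt(np.mean(velocity ** 2)))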


Figure R4. Steering wheel velocity by modality and complexity, with a comparison to baseline steering velocity control, for non-hazard events.

Neither the main effect of relevance, nor its interactions with the other variables (modality; complexity), approached significance (relevance: F(1,34) = .356, p = .555; modality x relevance: F(2,68) = .982, p = .38; complexity x relevance: F(1,34) = 2.31, p = .138). Overall, the effect of redundancy fell midway between the two single-modality information displays. Finally, there was a significant increase in steering wheel velocity during dual-task relative to single-task driving (t(107) = 5.7, p < .01).

The effects on steering behavior, considered in conjunction with the lane-keeping results, show that although drivers protect the primary lane-keeping (tracking) task from visual distraction, their steering behavior is more disrupted by the presentation of a message when they need to look down at a visual display and when the information presented in the display is complex (of greater length). Drivers exhibit more abrupt steering maneuvers in these circumstances, which would indicate that they are correcting for the removal of their attention from the roadway to long (complex) messages on the visual display.

Figure R5 plots steering wheel velocity as a joint function of modality and relevance during hazard events (two out of the ten messages on each drive), with baseline steering control again included as a comparison between single- and dual-task steering control performance. Note the much greater steering activity in response to the hazard, compared to the non-hazard data in Figure R4. The baseline performance is statistically equivalent across relevance (F(1,34) = 2.309; p = .14) and therefore plots the average value of the two relevance conditions. For the hazard data, the only significant effect is a main effect of relevance on steering wheel velocity (F(1,34) = 5.67, p = .02), with relevant messages producing less steering velocity than irrelevant messages; drivers are more aggressive in steering control when they encounter a hazard event while attempting to process irrelevant messages than when processing relevant messages. This difference may reflect the fact that relevant messages kept the driver's attention relatively more attuned to the roadway (independent of delivery modality) and hence less disrupted when the hazard occurred. The velocities seen in the irrelevant conditions show an overcompensating steering maneuver (when compared with the baseline condition, t(107) = 15.6, p < .01), which would cause the driver to veer dangerously into the oncoming lane when avoiding a hazard.

Figure R5. Steering wheel velocity by modality and relevance, with a comparison to baseline steering velocity control, for hazard events.

Hazard Response

Each driver encountered 2 events per driving environment for a total of 6 hazard events. The following 6 unique hazard events were presented to each participant: 1) car incursion; 2) bicycle incursion; 3) right turn across path (wide right turn); 4) lane drift (curve); 5) lane drift (hill); 6) car pullout. The 6 events were pooled for data analysis in order to increase the statistical power of the analyses and to reduce the likelihood of a Type II error. The hazard response was determined from the 15-second timeline of continuous vehicle control (see Figure R1). The hazard response time is defined as the time period from the initiation of hazard movement to the first instance of a response to the hazard.

To increase the interpretability and reliability of the hazard response data, a hazard response was classified and analyzed according to two different criteria: 1) the first occurrence of either a zero accelerator value (removing the foot from the accelerator), a braking response, or a steering response; 2) the first occurrence of either a braking response or a steering response. For the first criterion, a removal of the foot from the accelerator was considered a cautionary movement and a precursor to a braking response; the accelerator value was only considered as a response measure when it was subsequently followed by a braking response. The second analysis was run on braking and steering responses alone, as these maneuvers are considered more intentional and more certainly indicative of a safety response than an accelerator release, although they may overestimate the actual response time. An accelerator release could potentially be confounded with speed adjustments, driver inattention, or the automatic response of drivers to the message in the "relevant" condition.
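A minimal sketch of how these two response criteria could be extracted from a trial's continuous control data is shown below. The column names, the steering velocity threshold, and the simplified treatment of the accelerator-release rule (here it is not required to be followed by a brake press) are assumptions for illustration, not the study's actual scoring code.

import numpy as np
import pandas as pd

def hazard_rt(trial: pd.DataFrame, hazard_onset: float,
              include_accelerator: bool = True) -> float:
    """Time from hazard onset to the driver's first response.

    Criterion 1 (include_accelerator=True): first accelerator release
    (pedal value reaching zero), brake press, or steering response.
    Criterion 2 (include_accelerator=False): brake or steering response only.
    `trial` is assumed to have columns 'time', 'accel', 'brake', and
    'steer_vel' (steering wheel velocity); the 5 deg/s steering threshold
    is illustrative. Returns NaN if no response is found (a non-response).
    """
    after = trial[trial["time"] >= hazard_onset]
    responded = (after["brake"] > 0) | (after["steer_vel"].abs() > 5.0)
    if include_accelerator:
        responded |= (after["accel"] <= 0)
    hits = after.loc[responded, "time"]
    return float(hits.iloc[0] - hazard_onset) if not hits.empty else np.nan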


In contrast to lane keeping, hazard response was more disrupted by differences in the message interface. Figure R6 presents the time until the initial hazard response was recorded via criterion 1 (either accelerator release, braking, or swerving) as a joint function of modality and message relevance. A between-subjects ANOVA comparing message modality and relevance was conducted in order to minimize the number of deleted cells (due to instances of non-responses to the hazard events) in the data analysis. It will be recalled that hazards were only presented during the complex messages. The main effect of modality (F(2,96) = 3.48; p = .035), reflecting the generally higher costs of the visual display, can best be interpreted within the context of the marginally significant relevance x modality interaction (F(2,96) = 2.62; p = .08). The data show that for relevant messages, there was a strong tendency for a cost of visual presentation and a benefit of auditory presentation, with the redundant combination falling in between. For irrelevant messages, modality had little effect. The main effect of message relevance was also not significant. A factorial ANOVA comparing modality for relevant messages alone confirmed that the interaction was driven by the relevant condition, showing a significant effect of modality equal to that seen in the between-subjects ANOVA (F(2,28) = 3.8, p = .035). A one-way ANOVA on the irrelevant condition showed no significant effect of modality (F(2,30) = .538, p = .59). As Figure R6 shows, there is an approximately half-second slowing of hazard response associated with the visual presentation of relevant messages, a time during which a vehicle traveling at 55 mph could cover 50 feet. The redundant condition was the average of the two single-modality conditions across relevance for hazard response.

Figure R6. Reaction time to hazard events by relevance and modality.

The analysis of hazard response with criterion 2 confirms the effect of display type on hazard response found with the first analysis (which included accelerator release as a dependent measure). Figure R7 presents the time until the initial hazard response was recorded (either braking or swerving) as a joint function of modality and relevance. A between-subjects ANOVA was again conducted to minimize deleted cells; the analysis again revealed a main effect of modality (F(2,97) = 4.33; p = .016). Though the relevance x modality interaction did not reveal a significant effect, a t-test between the auditory relevant condition and the auditory irrelevant condition showed a marginally significant benefit of the auditory presentation of relevant information (t(33) = 1.83, p = .076). The second analysis thus supports the trends found in the first analysis for hazard response times.

Figure R7. Reaction time to hazard events by modality and relevance.

The number of non-responses to hazards was also examined (i.e., instances when analysis of the continuous vehicle control data failed to reveal a braking or swerving event), and this revealed a pattern similar to that found for hazard response time. The numbers of such events across the auditory, redundant, and visual conditions respectively were 2 (2 relevant, 0 irrelevant), 12 (4 relevant, 8 irrelevant), and 18 (9 relevant, 9 irrelevant) (Χ²(2) = 12.25, p < .01). Drivers were less likely to see the occurrence of the particular hazard events when they were viewing a visual display than when they were using a display with an auditory component, an effect that was relatively uninfluenced by display relevance. This follows intuitively: taking the eyes off the road, even for a split second, can lead to disastrous consequences, such as not noticing a bicycle pulling out from between two cars. As with the other measures, performance in the redundant conditions appears to lie in between the two pure modality conditions.

Secondary task message response

The time to initiate a response to the message presentation was assessed from the initiation of the first phoneme (in the auditory and redundant conditions) until the first press of the response button (located on either side of the steering wheel). For the visual conditions, the time to initiate a response was measured from the initial presentation of the message on the IVT display until the first press of the response button. An omnibus ANOVA comparing message modality, complexity, and relevance was conducted for both the non-hazard and hazard message sets.
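Returning briefly to the hazard non-response counts reported above, the chi-square value is consistent with comparing the three observed counts against a uniform expectation; a minimal check (our own verification, not part of the original analysis) is sketched below.

from scipy.stats import chisquare

# Observed hazard non-response counts for the auditory, redundant, and
# visual conditions, as reported above.
observed = [2, 12, 18]

# chisquare defaults to a uniform expected distribution (32/3 per cell),
# which reproduces the reported value.
stat, p = chisquare(observed)
print(f"Chi-square(2) = {stat:.2f}, p = {p:.4f}")  # Chi-square(2) = 12.25, p < .01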


Secondary task message response

The time to initiate a response to the message presentation was assessed from the initialization of the first phoneme (in the auditory and redundant conditions) until the first press of the response button (located on either side of the steering wheel). For the visual conditions, the time to initiate a response was measured from the initial presentation of the message on the IVT display until the first press of the response button. An omnibus ANOVA comparing message modality, complexity, and relevance was conducted for both the non-hazard and the hazard message sets.

Figure R8 plots the time to initiate a response to the secondary message task, as a function of modality and message complexity, for messages delivered in the absence of a hazard. There was no effect of relevance on response time, nor an interaction of relevance with modality. These data indicate a main effect of modality (F(2,68) = 34.27, p = .01), suggesting a slowing of response to all messages with an auditory component (auditory and redundant), relative to the pure visual message. As the figure suggests, responses to complex messages were initiated more slowly than to simple messages (F(1,34) = 66.83; p < .01).
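As a rough illustration of how an omnibus ANOVA of this kind could be set up, a minimal sketch using statsmodels is shown below. The file name and column names (subject, modality, complexity, rt) are hypothetical placeholders, not the study's actual data; relevance, being a between-subjects factor in this study, would require a mixed-design tool and is omitted from the sketch.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per subject x condition cell,
# holding that cell's mean message response time (rt).
df = pd.read_csv("ivt_message_rt.csv")  # columns: subject, modality, complexity, rt

# Repeated-measures ANOVA over the two within-subject factors.
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["modality", "complexity"]).fit()
print(res.anova_table)
```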


[Figure R9 shows two panels (relevant messages on the left, irrelevant messages on the right), each plotting message reaction time (RT, s) for the auditory, redundant, and visual conditions, separately for simple and complex messages.]

Figure R9. Three-way interaction of reaction time of button press to message presentation by relevance (relevant, left panel; irrelevant, right panel), complexity, and modality.

One explanation for the slowed response with the auditory-modality displays is the observation that most participants waited to press the response button (which they had been instructed to press at the beginning of the vocalization) until the auditory message was completely vocalized. This is likely due to participants attempting to avoid crosstalk, which occurs when the response(s) relevant for one task (in this case the verbal read-back) overlap with the processing for a different task (in this case the auditory vocalization; Fracker & Wickens, 1989). Though the instructions attempted to eliminate this effect, most participants still fell into the habit of waiting until the vocalization finished instead of pressing the button as soon as they heard the auditory message. For the redundant conditions, the auditory modality dictated the response time of the participants, as they again waited until the completion of the vocalization of the message before they responded. The redundant modality thus exhibited the worst of both worlds relative to the single modalities.

The analysis of response time to messages that occurred just prior to the occasional hazard event (or in conjunction with the occurrence of hazard events) revealed trade-offs in performance between the primary and secondary tasks. These data, shown in Figure R10, indicated, again, a main effect of modality (F(2,68) = 5.7; p < .01), with visual responses faster than auditory, and a main effect of relevance (F(1,34) = 6.05, p = .02), with relevant messages responded to more rapidly than irrelevant ones. The interaction between relevance and modality was not significant (F(2,68) = 1.71, p > .10). We note that the mean RT in Figure R10 (mean = 2.80) was a half second longer than the mean RT in Figure R8 (mean = 2.32), which indicates that our timing of events was generally successful in getting drivers to perceive and respond to the message prior to dealing with the hazard. These data are consistent with the view that the faster responses to relevant and visual messages compromised the responses to hazard events (Figure R6). Based on the analysis of response time to hazard events, visual relevant messages (as opposed to auditory relevant messages) are particularly at risk of diverting the driver's attention away from the road during a crucial instance requiring a quick response to situations in the road ahead.
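To put the half-second differences above (and the 50-foot figure quoted earlier) in concrete terms, the distance covered during a response delay at highway speed is a one-line conversion. The delay values in the sketch below are illustrative, not measurements from the study.

```python
# Distance traveled during a response delay at 55 mph.
MPH_TO_FT_PER_S = 5280 / 3600            # 1 mph = ~1.467 ft/s
speed_ft_per_s = 55 * MPH_TO_FT_PER_S    # ~80.7 ft/s at 55 mph

for delay_s in (0.5, 0.6):               # illustrative delays, in seconds
    print(f"{delay_s:.1f} s at 55 mph covers {speed_ft_per_s * delay_s:.0f} ft")
# 0.5 s covers ~40 ft and 0.6 s covers ~48 ft, i.e., on the order of
# the 50 feet cited for the visual-relevant slowing of hazard response.
```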


[Figure R10 plots message reaction time (RT, s) during hazard events for the auditory, redundant, and visual conditions, separately for relevant and irrelevant messages.]

Figure R10. Reaction time of button press to message presentation during hazard events by relevance and modality.

Finally, we examined the accuracy of the read-backs of the task messages as a function of modality, complexity, and relevance. These read-backs took place after the participant had pressed the button indicating recognition of the message presentation; the read-back was designed to ensure that the participant had perceived the message. This analysis revealed that in almost all conditions accuracy was high (around 93%) and did not vary. The only exception was the low (65%) accuracy for messages that were irrelevant, complex, and delivered without a visual component (auditory only), an observation similar to that reported by Helleberg and Wickens (2003).

Sometimes, appropriately, drivers allowed their perception of the hazard to preempt and prolong their response to the IVT, despite the fact that the IVT appeared first. Interestingly, when hazard and non-hazard IVT responses were combined in a single ANOVA, this revealed, in addition to the significant half-second slowing caused by the hazard (F(1,34) = 27.0, p < .01), a hazard x relevance interaction (F(1,34) = 5.72, p = .022), which indicates that there was greater hazard-induced slowing of the IVT RT for irrelevant than for relevant messages. This in turn suggests that drivers were better able to "disengage" from the IVT task when it was irrelevant. This effect was not modulated by modality.

Comprehension Analysis

In an effort to ensure that the semantic content of messages was effectively processed by the participants, comprehension measures were analyzed for both relevant (downstream compliance) and irrelevant (category accuracy) messages. The comprehension measures (relevant and irrelevant) were analyzed as a function of message modality and complexity. The non-hazard and hazard data were analyzed separately.


Relevant messages

The time period from 5 seconds prior to the downstream event (-5 DSE) to 5 seconds following the downstream event (+5 DSE) was analyzed to assess comprehension, as operationally defined by downstream compliance with the relevant messages. Driving behavior downstream determined whether the driver comprehended (or complied with) the relevant message. Each message's 15-second timeline was examined graphically to identify compliance measures (brake, steering, acceleration) in the -5 DSE to +5 DSE period.

A factorial ANOVA comparing message modality and message complexity was examined for the relevant downstream compliance measures for non-hazard events. A one-way ANOVA compared the effect of modality for the relevant downstream compliance measures for hazard events. A compliance response was classified and analyzed according to the following criteria: 1) the occurrence of a braking response; 2) the occurrence of a steering response; or 3) the occurrence of a zero accelerator value. Any combination of these variables, in addition to the singular values, was considered a definitive response.

Three levels of compliance were used to code the data (a coding sketch is given below, after Figure R11). A steering or braking response (or the combination of the two) was considered a complete compliance response and was coded with the value of 1. An acceleration response was considered a partial compliance response and was coded with a value of 0.5. The lack of either a steering, braking, or acceleration response was considered a null compliance response and was coded with a value of 0.

Across conditions, participants were 70.5% compliant with the relevant messages in the absence of a hazard. Figure R11 plots the compliance measures for the relevant messages as a function of modality and complexity, for messages delivered in the absence of a hazard. The data indicate a main effect of complexity (F(1,17) = 5.16, p = .036), suggesting a more accurate response to complex messages relative to simple messages. Increased complexity of the message provides greater detail in describing the downstream event, which may account for the improved compliance with more complex messages. The main effect of modality and the modality x complexity interaction did not reach significance (F(2,34) = 1.278, p = .30; F(2,34) = .327, p = .72). Note that the fact that 30% of the messages did not produce observable compliance behavior does not imply that these messages were not processed. We do know that all messages were read back. It is possible that, for some of these 30%, drivers did not judge that a change in vehicle state was required. On the other hand, the presence of the appropriate downstream maneuver does not guarantee that the semantic implications of the relevant message were processed, because we did not collect data on vehicle control behavior for the same downstream events in the absence of any messages whatsoever.

[Figure R11 plots compliance accuracy (proportion, 0-1) for non-hazard events for the auditory, redundant, and visual conditions, separately for simple and complex messages.]

Figure R11. Compliance accuracy to non-hazard events by modality and complexity, for relevant messages.
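The three-level compliance coding described above can be written down compactly. The sketch below is a minimal illustration using hypothetical event flags extracted from the -5 to +5 DSE window; it is not the study's actual scoring script.

```python
def compliance_score(braked: bool, steered: bool, zero_accelerator: bool) -> float:
    """Three-level compliance coding for a relevant message.

    1.0  complete compliance: a braking or steering response (or both)
    0.5  partial compliance: accelerator released to zero only
    0.0  null compliance: no braking, steering, or accelerator response
    """
    if braked or steered:
        return 1.0
    if zero_accelerator:
        return 0.5
    return 0.0

# Hypothetical examples within the -5 to +5 DSE window:
print(compliance_score(braked=True, steered=False, zero_accelerator=False))   # 1.0
print(compliance_score(braked=False, steered=False, zero_accelerator=True))   # 0.5
print(compliance_score(braked=False, steered=False, zero_accelerator=False))  # 0.0
```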


For hazard events, the analysis of compliance measures for modality did not reveal significance (F(2,34) = .571, p = .57). Across modalities, participants were 63% compliant with the relevant messages in the presence of a hazard. The relevant messages thus elicited compliance well above the scale midpoint for both the hazard and non-hazard events, showing that participants did indeed attend to the messages and adjust their driving behavior appropriately to account for the downstream events.

Irrelevant messages

A factorial ANOVA comparing message modality and message complexity was examined for comprehension of the irrelevant messages in the absence of hazards. A one-way ANOVA comparing the effect of modality for the irrelevant comprehension measures was conducted for hazard events. A comprehension response was analyzed according to the accuracy of the irrelevant message categorization. Participants were to correctly categorize the irrelevant messages according to the following four categories: 1) radio, 2) e-mail, 3) news, and 4) weather.

Three levels of accuracy were used to code the irrelevant data. An accurate assignment of a category type to the irrelevant message was considered a complete comprehension response and was coded with the value of 1. An inaccurate assignment of a category type to the irrelevant message was considered a partial comprehension response and was coded with a value of 0.5. The lack of an assignment of a category type to the irrelevant message was considered a null comprehension response and was coded with a value of 0.

Figure R12 plots the comprehension measures for the irrelevant messages as a function of modality and complexity, for messages delivered in the absence of a hazard. We note first that the overall level of accuracy was high (90.1%), assuring us that drivers did pay attention to the meaning of the messages. The factorial ANOVA did not show any significant effects; the manipulations of message modality and complexity (and their interaction) did not produce significant variance among the data (modality: F(2,34) = .095, p = .91; complexity: F(1,17) = 1.449, p = .25; modality x complexity: F(2,34) = .027, p = .97).

[Figure R12 plots comprehension accuracy (proportion correct, 0-1) for non-hazard events for the auditory, redundant, and visual conditions, separately for simple and complex messages.]

Figure R12. Comprehension accuracy to non-hazard events by modality and complexity, for irrelevant messages.

For the messages delivered in the presence of a hazard, the analysis of accuracy measures for modality similarly did not reveal significance (F(2,34) = .61, p = .552). The overall accuracy across modalities, however, was 94%, showing again that participants were very accurate in their categorization of the irrelevant messages, even when a hazard occurred.

Thus, the compliance/comprehension analysis reveals that the relevant and irrelevant messages were representative of their categories. Participants attended to the messages and were adept at deriving the meaning contained in them.

Individual Differences in Hazard Detection Performance

In order to assess the safety-critical drivers, those who present a hazard to others due to the least safe driving behaviors, a set of "worst-case" performance measures was examined. In particular, the reaction time to hazard events for the worst-case performers was analyzed. The data for the 6 poorest performers (the slowest reaction times to the hazard events) were used to represent the worst performers across all subjects. The means collapsed across modality were used to determine the 6 poorest performers for each level of relevance (a between-subjects manipulation). The individual data points for the 6 worst performers within conditions were then included in the data analysis.

Figure R13 shows the data for the slowest 6 performers as a function of message modality and relevance. A between-subjects ANOVA failed to show a main effect of modality or relevance (F(2,16) = 1.379, p = .28; F(1,8) = 0.0, p = .998, respectively).
Additionally, the interaction between message modality and relevance did not reach significance (F(2,16) = .214, p = .81). However, the means support the trends seen for the entire subject population (see Figure R6), showing that the worst drivers have tendencies that fall within the normal range of drivers. Though not statistically significant, the graphical trends in the data for the worst performers support the idea that relevant messages result in a benefit to auditory presentation and a cost to visual presentation. Most importantly, the data reveal two things: 1) the slow responders in our study do not show a pattern that is qualitatively different from other responders, and 2) their mean response time was not exceptionally greater than that of the population as a whole.

[Figure R13 plots hazard reaction time (RT, s) for the slowest 6 performers for the auditory, redundant, and visual conditions, separately for relevant and irrelevant messages.]

Figure R13. Response time to the hazard events by modality and relevance for the slowest 6 performers.

DISCUSSION

The current study was designed to assess the factors that could moderate the differences between auditory and visual delivery of in-vehicle information, primarily as such information affected measures of driving performance, and secondarily as such factors would influence the processing of the in-vehicle information itself. In particular, we were interested in three moderating factors: task relevance, task complexity, and the nature of the driving performance variable (lane keeping versus hazard response). Finally, we were interested in how the use of a redundant combination of auditory and visual delivery could also moderate these effects.

Resource Competition and Task Interference

In many respects our design was similar to, and our results replicated, the effects obtained by Horrey and Wickens (2002). Like them, we found that modality differences were more evident in the secondary in-vehicle task than in the primary task performance measure of lane keeping, but we did note a small overall decrement in primary task driving associated with the presence of the side task. As in Horrey and Wickens (2002, 2003), we found no modality effect on lane keeping, but a slight visual cost to steering, and a substantial cost to unexpected hazard detection (both in response time and response likelihood), associated with the head-down visual display of IVT information. These effects support both of our first two hypotheses.


Both the finding of dissociation between lane keeping (no decrement) and hazard response (visual decrement), and the finding of the visual head-down cost to hazard response, are consistent with, respectively, the visual channels distinction and the perceptual modalities distinction of multiple resource theory (Wickens, 2002). Stated in other terms, hazard detection and visual IVT information competed for focal visual resources and created interference in hazard response (Figure R6); but the combination of visual IVT (focal vision) and lane keeping (ambient vision) exploited separate resources, availing better parallel processing (Figure R3), as also did the combinations of auditory IVT with both lane keeping and hazard detection, exploiting separate modality-defined resources. The finding of auditory modality benefits to driving performance is certainly consistent with the preponderance of modality-comparison studies using head-down displays reviewed in the Introduction (see Figures I1-I3, and Wickens & Seppelt, 2002).

Thus in this study, as in many others, it appears that any auditory costs of preemption are generally dominated, or at least offset, by the visual costs of scanning to the head-down location (and/or the requirement to use peripheral vision). Further evidence against the role of an auditory preemption mechanism operating in the current paradigm was provided by the finding that the performance of the side task did not particularly benefit from auditory delivery (Figure R9), a benefit which, had it been observed in conjunction with a driving performance cost to auditory delivery, would have been consistent with the pattern of effects typical of preemption (Wickens & Liu, 1988; Helleberg & Wickens, 2003; Wickens, Dixon, & Seppelt, 2002). It appears that this auditory preemption cost only begins to emerge when visual side task performance is improved by presenting it at small spatial separations; and even then, the costs are not consistently observed (Horrey & Wickens, 2002).

We do note, in the current data, that the absence of a benefit to auditory IVT processing could be a complete artifact of the drivers' strategy of delaying the key press until the read-back was complete. Had our response timing ended at the start of voice articulation, we might indeed have found an auditory benefit to side task delivery, equally consistent with preemption and multiple resources.

Significantly, the current study added to the findings of Horrey and Wickens (2002), and provided a unique contribution to the literature, by varying message relevance within a single experimental paradigm. The latter was done in order to test the third hypothesis, suggested by between-experiment comparisons reviewed in the Introduction, that the benefits to auditory delivery over visual would be amplified if that information were relevant to the driving task (e.g., IVIS information) in contrast to irrelevant information (e.g., e-mail).
The current results appear to be consistent with this hypothesis of a modality x relevance interaction. When the message was relevant, and therefore potentially easier to associate with the driving task, drivers appeared to capitalize on the separate resources described by multiple resource theory, and to be more vigilant for hazards when the messages were delivered aurally than visually. On the other hand, when the message was irrelevant, modality appeared to have little influence on driving performance. Importantly, this latter finding does not advocate visual presentation for irrelevant information. The possible costs of visual display of such information need to be further explored because, at wide eccentricities, there would still be expected to be visual resource competition, and only a carefully conceived allocation policy could be guaranteed to protect the primary task.


Message relevance, in addition to modulating the effect of modality, appeared to have one other main effect on driving performance: when messages were irrelevant and drivers started processing them, the drivers appeared to be more disrupted by the hazards, as this disruption was indexed by the increase in steering wheel velocity (i.e., making more aggressive maneuvers). Although the link between disruption and steering velocity is an indirect one, it is plausible to postulate such a link, given other associations in the data between steering velocity increases and the presence of the concurrent IVT task (see Figure R4).

Regarding our fourth hypothesis, we had postulated two alternative effects of increasing IVT complexity. On the one hand, a majority of studies reviewed in the Introduction had revealed that increasing the complexity of the in-vehicle task, by increasing the amount of visual head-down time when the IVT was delivered visually, would lead to an increasing visual cost (or decreasing visual advantage) relative to auditory delivery. On the other hand, some studies in a non-driving environment (e.g., Helleberg & Wickens, 2003) had provided at least some evidence for an increasing cost of auditory preemption as complexity became greater. This is because drivers would strategically need to keep attention fixated on the longer auditory message, in order that it not be lost from working memory, whereas a correspondingly longer visual message could simply be processed with repeated scans at optimal times, while not diverting extra attention from the roadway environment. According to such a perceptual hypothesis, the resources that are diverted are not sensory but rather are modality-independent perceptual-cognitive resources, diverted longer to the auditory message than to the visual. This conception is consistent with one put forth by Latorella (1998).

In fact, our current results were more consistent with the first explanation (visual scanning) than the second (strategic auditory preemption). Increasing message complexity, if anything, produced a greater, rather than a reduced, auditory advantage to driving performance. While it is true that long, complex auditory messages suffered in their own comprehension (particularly when these messages were irrelevant; Figure R9), this increasing cost was not evident in driving task performance, as drivers again protected the higher-valued task (driving). The trend that we observed here, of the side task's complexity influence on driving performance, is quite consistent with the results observed in the more basic paradigm of Wickens, Dixon, and Seppelt (2002). There we also found that increasing side task complexity (the length of a phone number) improved tracking performance in the auditory, relative to the visual, condition.
The difference, however, between those results and the current findings is that at the lowest level of complexity there was an auditory cost (visual benefit) to tracking performance, which indicated some form of onset preemption (Spence & Driver, 1997); this was not observed in the current study.

Regarding our fifth hypothesis, the current study also added to the very scarce pool of literature that has examined AV redundancy assistance in dual task conditions. Of the few studies that were located, only one (Liu, 2001) had examined this issue in presenting in-vehicle information, observing that redundancy offered the "best of both worlds" of the two single-task modalities, using the 5-level categorization scheme developed by Wickens and Gosney (2003). The current results, in contrast, showed no redundancy gain. While the single-modality auditory condition, as noted above, was generally equal to or better than the single-modality visual condition, in the latter case (auditory better than visual) there were no performance measures in which the redundant display exceeded the auditory display, and, indeed, in nearly all cases,
an "averaging" model best captured the results. That is, offering redundant visual information appeared to hamper performance relative to pure auditory delivery, presumably by inviting visual distraction.

In this regard, however, it is important to note that the redundant condition never offered the "worst of both worlds," a syndrome that had been observed in earlier research (Wickens, Goh, Helleberg, Horrey, & Talleur, in press; Helleberg & Wickens, 2003; Seagull, Wickens, & Loeb, 2001; Wickens & Gosney, 2003). Of these studies, the one that revealed the consistent "averaging" model, typical of the data found here, was Wickens and Gosney's (2003) Experiment 2, in which participants were explicitly trained on the strategies for exploiting redundant AV displays. Hence it is possible that drivers here spontaneously adopted this strategy. Alternatively, it is possible that explicit training of the strategy could have brought out the "best of both worlds" for the redundant display, or, better still, the "Gestalt" pattern in which redundancy provides an advantage over either single-modality display. This possibility is one that awaits further investigation.

Relevance to Attention Theories

The findings of the current study can be considered in the context of different models of attention. The current data clearly support a multiple resource model, at least as multiple resources are defined by visual channel distinctions (focal versus ambient) and by peripheral sensory modalities (auditory versus visual). Strayer and Johnston (2001) have offered an alternative view, that the competition between driving and in-vehicle technology is governed by allocation of a more central "attention." Strayer and Johnston's results do not contradict a multiple resource view, as they did not vary the modality of arriving perceptual information, which would be necessary to test predictions of input-defined multiple resources.

At the same time, the current results are quite consistent with the Strayer and Johnston attentional model, in that some component of interference between driving (lane keeping) and IVT processing was modality independent, accounting for the small increase in lane keeping error associated with processing all IVT messages, even those delivered auditorily. There was also an associated increase in steering activity associated with all messages (Figure R4). It remains unclear from the current data whether the source of this modality-independent interference is competition between driving and IVT processing for general perceptual-cognitive resources, competition for common response resources, or even competition for some very general "executive control" mechanism.
It is plausible, but cannot be determined conclusively from the above data, that the source may be competition for response-related resources (generating the articulation of the message read-back concurrently with generating the motor commands for steering), given the specific association of lane keeping with steering activity, and given the generally limited capacity of the human to generate responses (Pashler, 1998; Wickens, 2002). Such an association of the modality-independent component with response conflict, rather than cognition, is supported by the fact that the steering velocity effect increased with message complexity (and therefore with the length of the read-back articulation; Figure R4), but was not affected by the cognitive component of the message (relevance to driving), except during the hazard events.


Design Implications and Limitations

The most important design implication of the current results is that verbal IVIS (i.e., driving-relevant) information should not be displayed on a head-down console. Auditory presentation is preferable in protecting the response to the unexpected roadway hazard. Of course, such information can be presented on a head-up display, not tested here (see Horrey & Wickens, 2002). The current results also suggest caution in augmenting an auditory display with a redundant visual head-down text display, as this option appears to mitigate the multiple-resource advantages offered by the auditory display of driving-relevant information. Training may effectively eliminate the redundancy costs and avail the "best of both worlds." However, an alternative solution, if the auditory message is complex or long and may be forgotten, is simply to provide an easy-to-use repeat function, perhaps on a steering-wheel-mounted button.

A second implication is that the auditory advantage does not hold for driving-irrelevant information. Such information may always offer greater distraction from driving (than relevant information). The processing of such information was more hampered by the driving task (when it was complex and auditory), and it more disrupted steering behavior when a hazard was present. Such findings signal a general cost to processing driving-irrelevant information displays. The extent to which this cost might be mitigated by a head-up display remains to be established.

The current study has certain limitations, which dictate some caution in generalizing the current results. First, we only evaluated redundant presentation of verbal information. As noted in the Introduction, there may be advantages to distributing related visual spatial (i.e., map) and verbal information across modalities, presenting the latter as speech rather than text (Wickens & Hollands, 2000). Second, our evaluation took place in the driving simulator rather than on the roadway. It is possible that our effects might not generalize to the latter environment, in particular because of the more realistic motion cues associated with actual vehicle control. However, such motion cues would have been expected to better support the tracking (i.e., lane keeping) aspect of driving than to support the hazard detection aspect. In fact, if anything, the realistic motion feedback of on-the-road vehicle control could be expected to supplement the cues of ambient vision, allowing effective lane keeping with even more head-down time, and thereby amplifying the visual penalty to hazard response, compared to the simulator results obtained here. Thirdly, as noted, we did not fully assure relevant message compliance because to do so would have required a control condition containing the same downstream events with no messages at all.

Finally, the potential generality of the current results could be challenged because these did not fully evaluate the "worst case" situations, which often contribute to accidents.
Our hazard responses were not fully unexpected, and even the "worst performers" in our study did not respond much more slowly to the hazards than did the sampled population as a whole. On the one hand, this latter finding is important in revealing that the qualitative pattern of interference we observed, characterizing the "average" driver, can generalize to those slower responders, who may present the greatest concerns for driver safety. On the other hand, the issue of generalizing experimental results, with their dependence on generally expected conditions, conventional statistics, and the "psychology of the mean," is one that will always challenge experimental research (Wickens, 2001), and is a reason why the implications of such research should always
be joined with more epidemiological or naturalistic studies of drivers, based upon accident and incident analysis.

ACKNOWLEDGMENTS

Special thanks to Bill Horrey, Jeff Mayhugh, and Nick Cassavaugh for help with simulation programming and the integration of simulation and script design. Many thanks to Ron Carbonari for assistance with results analysis, data collection, and reduction. Thanks to Hank Kaczmarski and Braden Kowitz for providing, maintaining, and installing simulator software and equipment, and additional thanks to Debbie Carrier for her help with scheduling subjects in the simulator. Thanks also to Art Kramer for his helpful input and comments.

REFERENCES

Antin, J.F., Dingus, T.A., Hulse, M.C., & Wierwille, W.W. (1990). An evaluation of the effectiveness and efficiency of an automobile moving map navigational display. International Journal of Man-Machine Studies, 33, 581-594.

Baddeley, A.D. (1986). Working memory. Oxford: Clarendon Press.

Baddeley, A.D. (1995). Working memory. In M.S. Gazzaniga et al. (Eds.), The cognitive neurosciences (pp. 755-784). Cambridge, MA: MIT Press.

Braune, R., & Wickens, C.D. (1985). The functional age profile: An objective decision criterion for the assessment of pilot performance capacities and capabilities. Human Factors, 27, 681-694.

Burnett, G.E., & Joyner, S.M. (1997). An assessment of moving map and symbol-based route guidance systems. In Y.I. Noy (Ed.), Ergonomics and safety of intelligent driver interfaces. Human factors in transportation (pp. 115-137). Hillsdale, NJ: Lawrence Erlbaum Associates.

Dingus, T.A., Hulse, M.C., McGehee, D.V., & Manakkal, R. (1994). Driver performance results from the Travtek IVHS camera car evaluation study. Proceedings of the 38th Annual Meeting of the Human Factors & Ergonomics Society (pp. 1118-1122). Santa Monica, CA: Human Factors Society.

Dingus, T.A., McGehee, D.V., Manakkal, N., Jahns, S.K., Carney, C., & Hankey, J.M. (1997). Human factors field evaluation of automotive headway maintenance/collision warning devices. Human Factors, 39(2), 216-229.

Folds, D.J., & Gerth, J.M. (1994). Auditory monitoring of up to eight simultaneous sources. Proceedings of the 38th Annual Meeting of the Human Factors Society (pp. 505-509). Santa Monica, CA: Human Factors Society.

Fracker, M.L., & Wickens, C.D. (1989). Resources, confusions, and compatibility in dual axis tracking: Displays, controls, and dynamics. Journal of Experimental Psychology: Human Perception and Performance, 15, 80-96.

Gish, K.W., Staplin, L., Stewart, J., & Perel, M. (1999). Sensory and cognitive factors affecting automotive head-up display effectiveness. Proceedings of the 78th Annual Transportation Research Board. Washington, DC: Traffic Safety Division.


Helleberg, J., & Wickens, C.D. (2003). Effects of data-link modality and display redundancy on pilot performance: An attentional perspective. The International Journal of Aviation Psychology, 13(3), 189-210.

Horrey, W., & Wickens, C. (2002). Driving and side task performance: The effects of display clutter, separation, and modality (AHFD-02-13/GM-02-2). Savoy, IL: University of Illinois, Aviation Human Factors Division.

Horrey, W.J., & Wickens, C.D. (2003). Multiple resource modeling of task interference in vehicle control, hazard awareness and in-vehicle task performance. Proceedings of Driving Assessment 2003: 2nd International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design. Park City, UT.

Hughes, P.K., & Cole, B.L. (1986). What attracts attention when driving? Ergonomics, 29, 377-391.

Hulse, M.C., Dingus, T.A., & Barfield, W. (1998). Description and applications of advanced traveler information systems. In W. Barfield & T.A. Dingus (Eds.), Human factors in intelligent transportation systems. Human factors in transportation (pp. 359-395).

Hurwitz, J., & Wheatley, D.J. (2002). Using driver performance measures to estimate workload. Proceedings of the 46th Annual Meeting of the Human Factors and Ergonomics Society (pp. 1804-1808). Santa Monica, CA: Human Factors and Ergonomics Society.

Isreal, J. (1980). Structural interference in dual-task performance: Behavioral and electrophysiological data. Unpublished doctoral dissertation, University of Illinois, Champaign.

Isreal, J., Wickens, C.D., Chesney, G., & Donchin, E. (1980). The event-related brain potential as a selective index of display monitoring load. Human Factors, 22, 211-224.

Labiale, G. (1990). In-car road information: Comparisons of auditory and visual presentations. Proceedings of the 34th Annual Meeting of the Human Factors Society (pp. 623-627). Santa Monica, CA: Human Factors Society.

Lamble, D., Laakso, M., & Summala, H. (1999). Detection thresholds in car following situations and peripheral vision: Implications for positioning of visually demanding in-car displays. Ergonomics, 42(6), 807-815.

Lansdown, T.C., Brook-Carter, N., & Kersloot, T. (2002). Primary task disruption from multiple in-vehicle systems. ITS Journal, 7(2), 151-168.

Latorella, K.A. (1998). Effects of modality on interrupted flight deck performance: Implications for data link. Proceedings of the 42nd Meeting of the Human Factors and Ergonomics Society (pp. 87-91). Santa Monica, CA: Human Factors Society.

Lee, J.D., Gore, B.F., & Campbell, J.L. (1999). Display alternatives for in-vehicle warning and sign information: Message style, location, and modality. Transportation Human Factors, 1(4), 347-375.

Liu, Y.-C. (2001). Comparative study of the effects of auditory, visual, and multimodality displays on drivers' performance in advanced traveler information systems. Ergonomics, 44(4), 425-442.


Liu, Y., & Wickens, C.D. (1992). Visual scanning with or without spatial uncertainty and divided and selective attention. Acta Psychologica, 79, 131-153.

Matthews, G., Sparkes, T., & Bygrave, H. (1996). Attentional overload, stress, and simulated driving performance. Human Performance, 9(1), 77-101.

Mollenhauer, M.A., Lee, J., Cho, K., Hulse, M.C., & Dingus, T.A. (1994). The effects of sensory modality and information priority on in-vehicle signing and information systems. Proceedings of the 38th Annual Meeting of the Human Factors and Ergonomics Society (pp. 1072-1075). Santa Monica, CA: Human Factors and Ergonomics Society.

Noy, Y.I. (1990). Attention and performance while driving with auxiliary in-vehicle displays. Ottawa: Transport Canada, Road Safety.

Parkes, A.M., & Burnett, G.E. (1993). An evaluation of medium range "advance information" in route-guidance displays for use in vehicles. IEEE Vehicle Navigation & Information Systems Conference. Ottawa, Canada.

Pashler, H.E. (1998). The psychology of attention. Cambridge, MA: The MIT Press.

Polson, M.C., & Friedman, A. (1988). Task-sharing within and between hemispheres: A multiple-resources approach. Human Factors, 30, 633-643.

Ranney, T.A., Harbluk, J.L., & Noy, Y.I. (2002). The effects of voice technology on test driving performance: Implications for driver distraction. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. 1814-1818). Santa Monica, CA: Human Factors and Ergonomics Society.

Rockwell, T. (1972). Skills, judgment, and information acquisition in driving. In T. Forbes (Ed.), Human factors in highway traffic safety research (pp. 133-164). New York: Wiley Interscience.

Sanders, M.S., & McCormick, E.J. (1993). Human factors in engineering and design (7th ed.). New York: McGraw-Hill.

Sarno, K.J., & Wickens, C.D. (1995). Role of multiple resources in predicting time-sharing efficiency: Evaluation of three workload models in a multiple-task setting. International Journal of Aviation Psychology, 5(1), 107-130.

Schons, V., & Wickens, C.D. (1993). Visual separation and information access in aircraft display layout (ARL-93-7/NASA-A3I-93-1). Savoy, IL: University of Illinois, Aviation Research Laboratory.

Seagull, F.J., Wickens, C.D., & Loeb, R.G. (2001). When is less more? Attention and workload in auditory, visual and redundant patient-monitoring conditions. Proceedings of the 45th Annual Meeting of the Human Factors & Ergonomics Society. Santa Monica, CA: Human Factors & Ergonomics Society.

Serafin, C., Wen, C., Paelke, G., & Green, P. (1993). Car phone usability: A human factors laboratory test. Proceedings of the 37th Annual Meeting of the Human Factors and Ergonomics Society (pp. 220-224). Santa Monica, CA: Human Factors and Ergonomics Society.


Sorkin, R.D. (1987). Design of auditory and tactile displays. In G. Salvendy (Ed.), Handbook of human factors (pp. 549-576). New York: Wiley.

Spence, C., & Driver, J. (1997). Audiovisual links in attention: Implications for interface design. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics. Hampshire: Ashgate Publishing.

Srinivasan, R., & Jovanis, P. (1997). Effect of selected in-vehicle route guidance systems on driver reaction times. Human Factors, 39(2), 200-215.

Srinivasan, R., Yang, C., Jovanis, P., Kitamura, R., & Anwar, M. (1994). Simulation study of driving performance with selected route guidance systems. Transportation Research, 2C(2), 73-90.

Stokes, A., Wickens, C., & Kite, K. (1990). Display technology: Human factors concepts. Warrendale, PA: Society of Automotive Engineers.

Strayer, D.L., & Johnston, W.A. (2001). Driven to distraction: Dual-task studies of simulated driving and conversing on a cellular telephone. Psychological Science, 12(6), 462-466.

Streeter, L.A., Vitello, D., & Wonsiewicz, S.A. (1985). How to tell people where to go: Comparing navigational aids. International Journal of Man-Machine Studies, 22(5), 549-562.

Summala, H., Nieminen, T., & Punto, M. (1996). Maintaining lane position with peripheral vision during in-vehicle tasks. Human Factors, 38(3), 442-451.

Tindall-Ford, S., Chandler, P., & Sweller, J. (1997). When two sensory modes are better than one. Journal of Experimental Psychology: Applied, 3(4), 257-287.

Tsang, P.S., & Rothschild, R.A. (1985). To speak or not to speak: A multiple resource perspective. Proceedings of the 29th Annual Meeting of the Human Factors Society (pp. 76-80). Human Factors Society.

Tsang, P.S., & Wickens, C.D. (1988). The structural constraints and strategic control of resource allocation. Human Performance, 1, 45-72.

Walker, J., Alicandri, E., Sedney, C., & Roberts, K. (1990). In-vehicle navigation devices: Effects on the safety of driver performance (p. 107). McLean, VA: Federal Highway Administration, Office of Safety and Traffic Operations Research and Development.

Weinstein, L.F., & Wickens, C.D. (1992). Use of nontraditional flight displays for the reduction of central visual overload in the cockpit. International Journal of Aviation Psychology, 2(2), 121-142.

Wickens, C.D. (1980). The structure of attentional resources. In R. Nickerson (Ed.), Attention and performance VIII (pp. 239-257). Hillsdale, NJ: Lawrence Erlbaum.

Wickens, C.D. (1991). Processing resources and attention. In D. Damos (Ed.), Multiple task performance. London: Taylor & Francis.

Wickens, C.D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.


Wickens, C.D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159-177.

Wickens, C.D., Braune, R., & Stokes, A. (1987). Age differences in the speed and capacity of information processing: I. A dual-task approach. Psychology and Aging, 2, 70-78.

Wickens, C.D., Dixon, S., & Seppelt, B. (2002). In-vehicle displays and control task interference: The effects of display location and modality (ARL-02-7/NASA-02-5/GM-02-1). Savoy, IL: University of Illinois, Aviation Research Lab.

Wickens, C.D., & Goettl, B. (1984). Multiple resources and display formatting: The implications of task integration. In Proceedings of the 28th Annual Meeting of the Human Factors Society (pp. 722-726). Santa Monica, CA: Human Factors Society.

Wickens, C., Goh, J., Helleberg, J., Horrey, W., & Talleur, D. (in press). Attentional models of multi-task pilot performance using advanced display technology. Human Factors.

Wickens, C.D., Gordon, S., & Liu, Y. (1998). An introduction to human factors engineering. New York: Addison Wesley Longman.

Wickens, C.D., & Gosney, J.L. (2003). Redundancy, modality, and priority in dual task interference. Submission to Human Factors.

Wickens, C., Helleberg, J., Goh, J., Xu, X., & Horrey, W. (2001). Pilot task management: Testing an attentional expected value model of visual scanning (ARL-01-14/NASA-01-7). Savoy, IL: University of Illinois, Aviation Research Lab.

Wickens, C.D., & Hollands, J. (2000). Engineering psychology and human performance (3rd ed.). New York: Prentice Hall.

Wickens, C.D., & Liu, Y. (1988). Codes and modalities in multiple resources: A success and a qualification. Human Factors, 30, 599-616.

Wickens, C.D., Sandry, D., & Vidulich, M. (1983). Compatibility and resource competition between modalities of input, output, and central processing. Human Factors, 25, 227-248.

Wickens, C.D., & Seppelt, B.D. (2002). Interference with driving or in-vehicle task information: The effects of auditory versus visual delivery (AHFD-02-18/GM-02-3). Savoy, IL: University of Illinois, Aviation Human Factors Division.

Wright, P., Holloway, C.M., & Aldrich, A.R. (1974). Attending to visual or auditory verbal information while performing other concurrent tasks. Quarterly Journal of Experimental Psychology, 26, 454-463.

Yeh, Y.-Y., & Wickens, C.D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.


APPENDIX A. PARTICIPANT INFORMATION

Subject  Gender  Age  Health (1-5)*  Vision  Valid Driver's License  Mileage/Yr.
1        Female  20   4              20/20   Yes                     5
2        Male    21   5              20/25   Yes                     7000
3        Male    20   5              20/20   Yes                     5000
4        Male    19   5              20/20   Yes                     1000
5        Male    19   5              20/20   Yes                     5000
6        Male    20   5              20/20   Yes                     500
7        Male    20   5              20/20   Yes                     10000
8        Male    21   5              20/30   Yes                     10000
9        Male    23   5              20/20   Yes                     6000
10       Male    21   4              20/20   Yes                     5500
11       Male    23   5              20/20   Yes                     14000
12       Male    23   4              20/20   Yes                     16000
13       Male    22   5              20/20   Yes                     6000
14       Female  28   5              20/20   Yes                     12000
15       Male    20   4              20/20   Yes                     20000
16       Male    21   5              20/20   Yes                     8000
17       Male    26   5              20/20   Yes                     12000
18       Male    22   5              20/20   Yes                     5000
19       Male    21   5              20/20   Yes                     10000
20       Male    21   5              20/20   Yes                     10000
21       Male    20   5              20/20   Yes                     5000
22       Male    20   4              20/20   Yes                     8000
23       Male    21   5              20/20   Yes                     16000
24       Female  19   5              20/20   Yes                     100
25       Female  20   4              20/20   Yes                     1000
26       Male    24   4              20/20   Yes                     13000
27       Male    20   4              20/20   Yes                     2000
28       Male    21   5              20/20   Yes                     2500
29       Male    21   4              20/20   Yes                     25000
30       Male    31   5              20/20   Yes                     6000
31       Male    21   4              20/20   Yes                     4000
32       Male    22   4              20/20   Yes                     4000
33       Male    19   5              20/20   Yes                     15000
34       Male    25   5              20/20   Yes                     10000
35       Male    23   5              20/20   Yes                     5000
36       Male    22   5              20/20   Yes                     15000

* Health rated on a 5-unit Likert scale with 1 = Poor and 5 = Excellent


APPENDIX B. IN-VEHICLE DISPLAY SPECIFICATIONS

IN-VEHICLE DISPLAY*

Height: 6.0 in.
Width: 8.25 in.

Horizontal Visual Separation:
From the center of the steering wheel to the top left of the IV display: 20.14 degrees laterally.
From the center of the steering wheel to the middle top of the IV display: 26.57 degrees laterally.

Vertical Visual Separation:
From the line of sight (approximately at the location of the pedestrian walk sign seen in the photograph) to the top of the IV display: 7.6 degrees down.
From the line of sight to the center of the IV display: 13.1 degrees down.

DISPLAY TEXT

            Individual Letters    Sentences
Height      1.0 cm                1.0 cm
Width       0.6 cm                Min: 8.0 cm - Max: 12.0 cm†

* The in-vehicle (IV) display is noted by the bracket on the right side of the photograph.
† Sentences were formatted to wrap the text; thus the minimum and maximum values reflect the range of possible word lengths to fit within the display.
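For readers who want to recompute separations like those above from physical cab measurements, the basic geometry is a single arctangent. The sketch below uses hypothetical offset and viewing-distance values, not the measured dimensions of the simulator cab.

```python
import math

def visual_angle_deg(offset: float, distance: float) -> float:
    """Angle (degrees) subtended by a lateral or vertical offset viewed
    from a given distance; both arguments in the same length units."""
    return math.degrees(math.atan2(offset, distance))

# Hypothetical example: a point 12 in. to the side of the line of sight,
# viewed from 30 in., sits at roughly 21.8 degrees of eccentricity.
print(f"{visual_angle_deg(12, 30):.1f} degrees")
```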


APPENDIX C. WITHIN-WORLD MESSAGE ORDER AND SECONDARY TASK CODING FOR RELEVANT AND IRRELEVANT CONDITIONS

RELEVANT CONDITION:

Columns: Relevant Message; Read-back Accuracy*; Comments.

Driving Environment 1
Construction Ahead.
Merging vehicle ahead. Beware of blind driveways.
Sharp curve ahead. Slow to 35 MPH.
Pedestrian crossing ahead.
Slow vehicle ahead.
Accident in road ahead. Slow to 45 MPH.
Emergency vehicle in lane ahead.
Blind spots in curved road ahead.
Railroad crossing ahead.
Object in road ahead.
Heavy traffic ahead. Merging traffic from the right.

Driving Environment 2
Slow vehicle ahead. Curve in road.
Merging traffic from the left.
School zone ahead. Watch for children.
Accident in road ahead.
Slippery road ahead.
Object in road ahead.
Construction ahead.
Traffic congestion ahead.
Detour to the right.
Speed reduction ahead. Speed Limit 35 MPH.
Heavy traffic ahead. Merging traffic from the right.

Driving Environment 3
Merging vehicle ahead. Beware of blind driveways.
Approaching emergency vehicle.
Pedestrian crossing ahead.
Railroad crossing ahead.
Accident in road ahead.
Heavy traffic ahead. Merging traffic from the left.
Object in road ahead. Use caution.
Construction ahead. Detour to the right.
School zone ahead. Watch for children.
Slow vehicle ahead.

* 1 if accurate; .5 if partially accurate; 0 if not accurate
Messages bolded when unexpected events occur


IRRELEVANT CONDITION:

Columns: Corresponding Irrelevant Message; Readback Accuracy*; Irrelevant Message Category; Category Accuracy*; Comments. (Categories are shown in brackets after each message.)

Driving Environment 1
LiteRock 98.7. [Radio]
Information request received. Send receipt to Accounting. [E-mail]
Arctic chill deepens. Drop to –10 degrees. [Weather]
Alternative Rock 107.1. [Radio]
Terrorist leader arrested. [National News]
Mostly sunny and warm. Highs in the 70s. [Weather]
Afternoon flurries are expected today. Temperatures rising in the early evening. [Weather]
Holiday party tonight. [E-mail]
San Francisco bans Segway. [National News]
Smooth Jazz V98.7. Home of the Trip-A-Day Give-Away. [Radio]

Driving Environment 2
Tower of Power – What Is Hip? [Radio]
Girl missing after plane crash. [National News]
Light snow showers. Wind to increase. [Weather]
Smith Job Talk tomorrow. [E-mail]
War protestors arrested. [National News]
Drew and Mike Show. [Radio]
Partly cloudy. [Weather]
Chemical warheads found. Inspectors to continue search. [National News]
Research position available. Application deadline June 5th. Contact Linda Carr. [E-mail]
Office hours from 3-5 daily. [E-mail]

Driving Environment 3
National Census taken. Latinos largest U.S. minority. [National News]
Volleyball practice tonight. [E-mail]
Light showers today. [Weather]
Library item available. [E-mail]
Ex-Serb leader faces tribunal. [National News]
Low temperatures today. Cold front from the North. [Weather]
Highs in the 60s. Sunny tomorrow. [Weather]
94.7 WCSX. Rock to the Hits. [Radio]
Smooth Jazz 104.3. Tunes to Groove. [Radio]
Send project fax. [E-mail]

* 1 if accurate; .5 if partially accurate; 0 if not accurate
Messages bolded when unexpected events occur


APPENDIX D. EXAMPLE OF CORRESPONDING RELEVANT AND IRRELEVANT DOWNSTREAM EVENTS


APPENDIX E. SIMULATOR SICKNESS QUESTIONNAIRE

This study will require you to drive in a driving simulator. In the past, some participants have felt uneasy after using the simulator. To help identify people who might be prone to this feeling, we would like to ask the following questions.

1. Do you have, or have you had, a history of migraine headaches?
   If yes, please describe:
2. Do you have, or have you had, a history of claustrophobia?
   If yes, please describe:
3. Do you have, or have you had, a history of motion sickness?
   If yes, please describe:
4. Are you, or is there a possibility that you might be, pregnant?
5. Do you have any health problems that affect driving?
6. Do you have lingering effects from a stroke, tumor, head trauma, or infection?
7. Do you suffer from epileptic seizures?
8. Do you have any inner ear problems, dizziness, vertigo, or balance problems?
9. Are you currently taking any medications?
   If yes, please list:


APPENDIX F. PRE-EXPERIMENTAL QUESTIONNAIRE

Age: _____   Sex: M  F   Handedness: R  L   Health (Poor to Excellent): 1  2  3  4  5

Do you wear Glasses/Contacts on a regular basis?  Y  N
How many years of school have you completed? ___________________
Phone Number: _______________   E-mail: _______________________
Can we call you to participate in additional experiments?  Yes  No
Where did you hear about us? __________________________________
Signature of Participant: ________________________   Date: ___________
Name (please print): ______________________________________________
Social Security Number: _______-_____-_____

************************************************************
For Office Use Only
Far Vision: _______   Near Vision: ________   Color-Blindness: _______
Time IN: _________   Time OUT: _______   Total Time: ____________
Pay for Exp: _______   Parking: __________   Total Pay: _____________


APPENDIX G. INFORMED CONSENT FORM

Multiple Resource Modeling of the Impact of In-Vehicle Technology on Driver Workload
Research supported by the General Motors Corporation

Principal Investigator: Dr. Christopher Wickens
Institute of Aviation, Aviation Human Factors Division
Willard Airport, #1 Airport Road
Savoy, IL 61874

The purpose of this experiment is to provide data on the sources of in-vehicle distraction. That is, we wish to determine the extent to which in-vehicle technology, such as cell phones, electronic map displays, or e-mail displays, diverts the driver's gaze away from the highway. We also wish to establish the extent to which auditory presentation of some of this information, or presentation on a head-up display, can reduce the distracting effects of such technology, or may actually increase those distractions.

To examine these issues, you will be asked to drive our Saturn driving simulator while performing other side tasks about which you will be instructed. You should drive as you normally would on the highway. On some occasions, we may ask you to wear a small camera attached by a band around your head, which can record the direction of your gaze. You will report to room B500 Beckman for the experiments. Depending on the particular experiment, it will last from one to three one-hour sessions.

Eye movements are monitored by a device that reflects infrared light off of the lens and the cornea of the eye. The lens, cornea, and other parts of the eye absorb a small amount of energy from the infrared light, but the energy is less than 1% of the Maximum Permissible Exposure level as certified by the American National Standards Institute (ANSI Z 136.1-1973). This is about as much energy as you get on a bright sunny day.

There are no known risks or physical discomforts associated with this experiment beyond those of ordinary life, and the possibility that the simulation might cause some mild motion sickness. If it does so, please tell the experimenter. You will be paid at the rate of $6/hr. You may terminate your participation at any time, and you will still be paid for the number of hours that you have completed.

We thank you for your involvement. If you have any further questions, please let the experimenter know at any time throughout the experiment, or call Dr. Wickens at 244-8617. If you have any questions about the rights of research subjects, please contact the University Institutional Review Board at 217/333-2670.

Statement of Consent

I acknowledge that my participation in this experiment is entirely voluntary and that I am free to withdraw at any time. I have been informed of the general scientific purposes of this experiment and I know that I will be compensated at a rate of $6.00/hour for my participation. If I withdraw from the experiment before its termination, I will receive my total fee earned to that time.
I understand that my data will be maintained in confidence, and that I may have a copy of this consent form.

Signature of participant: _______________________________   Date: ____________
Signature of experimenter: ______________________________   Date: __________


APPENDIX H. EXPERIMENTAL PROTOCOL

Administer eye exam.
Provide participant with Driving Simulator Questionnaire.
Start up numbers.bat program on computer.
Start up VariableSender program on computer.
Enter subject data in Subject field.
Provide participant with informed consent.

Thank you very much for participating in this study. It should take you approximately 1-1/2 to 2 hours to complete. I would like to remind you that you are free to withdraw from this study at any time. Please look through the informed consent.

Participant reads and signs form. Administer pre-experimental questionnaire.

Today you will be asked to drive through 3 different scenarios. Each drive lasts approximately 20-25 minutes. There will be a short break between each of the three drives to ensure that you are not experiencing any symptoms of simulator sickness. The drives consist of curved and straight sections of rural and residential roads. Some hills will be encountered. During each drive, you will be asked to perform a secondary task in addition to the primary driving task (this task will be described in detail momentarily).

Seat participant in the driving simulator. Adjust seat and mirrors to fit the driver.

The controls in the vehicle will respond as they would in normal driving. Please operate the vehicle as you normally would and navigate it properly to ensure safe driving. It is important that you obey all traffic laws and abide by the posted speed limits. The speed limit in rural sections of the drive is 55 MPH; stay within 5 MPH of this speed. The speed limit in residential sections is 35 MPH; stay within 5 MPH of this speed. If your speed exceeds or falls below the specified range, you will be informed and asked to adjust your speed accordingly. In addition to maintaining a proper speed, we ask that you keep the vehicle in the center of the lane throughout the drives. Please respond to traffic events as you would normally (addresses response to IVTs). No stop signs or stoplights are present in the drives. The transitions between rural and residential sections should be smooth and are indicated by speed limit signs.

In order to familiarize you with the driving task and the different driving environments (rural and residential), I will now run you through a brief 2-minute practice drive. Please familiarize yourself with the controls in the vehicle and attend to the posted speed limits.

(Note: motion sickness may be a side effect of driving in the simulator. You will be asked on a number of occasions during each drive if you feel sick; please report ANY feelings of nausea or discomfort. If you begin to feel sick, please inform me so that we can stop the experiment and prevent you from experiencing further discomfort.)


Start practice trial. (If the participant experiences motion sickness, end the experiment and thank the participant for their time. Offer payment for one hour if they end the experiment after the practice trial.)

RELEVANT CONDITIONS:

As I mentioned earlier, you will be performing a secondary task in addition to the driving task. Messages that carry direct relevance to the driving task will be presented to you either auditorily, visually, or both auditorily and visually. These messages are examples of future intelligent transportation systems. The messages will inform you of events or conditions that will take place in the road ahead, such as traffic congestion or school zones. You will receive a message approximately every 2 minutes; a total of 10 messages will be presented to you in every scenario. The visual messages will appear on the display located next to the steering wheel in the center console. The auditory messages will sound from the car speakers. I will inform you before each scenario in which modality (visual, auditory, or auditory-visual combined) the messages will be presented to you.

When the messages either appear on the display or sound through the speakers (or both), you are required to read them back word for word. As soon as you notice the appearance of the message on the screen or hear the auditory message, press either one of the two buttons located on the steering column (point to the buttons to illustrate) a single time. Begin your read-back of the message precisely when you press the steering wheel button (or immediately following the auditory message in the auditory and auditory-visual combined conditions). In the visual conditions, the message will stay on the screen for 10 seconds to increase the opportunity for you to notice and read back the message. In the auditory conditions, the message will be vocalized only once. As you would in normal driving with such a system, or in reading road signs, it is important that you comply with the information contained in such messages and use that information to prepare yourself for the situation ahead.

IRRELEVANT CONDITIONS:

As I mentioned earlier, you will be performing a secondary task in addition to the driving task. "Infotainment" messages that contain information on National News, Weather, Radio stations, or E-mail topics will be presented to you either auditorily, visually, or both auditorily and visually. Each presentation will contain one type of information; however, the 4 different types of information will be presented to you randomly throughout each drive. You will receive a message approximately every 2 minutes; a total of 10 messages will be presented to you in every scenario. The visual messages will appear on the display located next to the steering wheel in the center console. The auditory messages will sound from the car speakers. I will inform you before each scenario in which modality (visual, auditory, or auditory-visual combined) the messages will be presented to you.

When the messages either appear on the display or sound through the speakers (or both), you are required to read them back word for word.
As soon as you notice the appearance of the message on the screen or hear the auditory message, press either one of the two buttons located on the steering column (point to the buttons to illustrate) a single time. Begin your read-back of the


message precisely when you press the steering wheel button (or immediately following the auditory message in the auditory and auditory-visual combined conditions). In the visual conditions, the message will stay on the screen for 10 seconds to increase the opportunity for you to notice and read back the message. In the auditory conditions, the message will be vocalized only once. After you read back the message, you must categorize it by verbally reporting the particular category of information with which it is associated: News, Weather, Radio, or E-mail. Please be as accurate as possible in your categorization of the message.

(The following instructions are relevant to ALL conditions.)

Again, the messages will be presented to you approximately every 2 minutes. Do you have any questions concerning the secondary task? You should attend to the IVT messages as promptly as possible, but also make sure that you properly navigate for safe driving.

Reminder: In response to the secondary task, press a button on the steering wheel as soon as the message is noticed, then read back the message in its entirety (this read-back should occur at the same time as when you press the button upon noticing the message, OR immediately following the auditory message). (In Irrelevant Conditions: and categorize the message according to the particular category from which it comes, either News, Weather, Radio, or E-mail.) Drive according to the speed limit and maintain the vehicle in the center of the lane.

Again, please be mindful of any feelings of nausea or discomfort during each of the three drives and report them immediately.

(For the control condition: In this scenario, no secondary task will be presented to you. Respond to traffic events and obey traffic laws as you would normally. Please remember to keep your speed within 5 MPH of the posted speed limit.)

After completion of the 3 experimental blocks, thank participants for their participation and reimburse them for their time. Provide participant with post-experimental questionnaire.
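The protocol above defines the secondary-task response as a single steering-wheel button press at message onset. As an editorial illustration only, the Python sketch below shows how a response-time measure could be derived from time-stamped simulator logs; the log format, variable names, and timestamps are hypothetical, since the report does not describe the logging software.

```python
# Hypothetical log data: message id -> onset time (s), and button-press times (s).
message_onsets = {1: 12.4, 2: 131.9, 3: 255.0}
button_presses = [13.1, 14.0, 133.2, 256.8]

def first_press_after(onset, presses):
    """Return the first button press at or after the message onset, or None if absent."""
    return next((p for p in sorted(presses) if p >= onset), None)

for msg_id, onset in sorted(message_onsets.items()):
    press = first_press_after(onset, button_presses)
    if press is None:
        print(f"message {msg_id}: no response")
    else:
        # Response time is the interval from message onset to the first button press.
        print(f"message {msg_id}: RT = {press - onset:.2f} s")
```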


APPENDIX I. COUNTERBALANCED TRIAL ORDER

Subject  Relevance    Modality / World Order
 1       Relevant     A1  V2  AV3    bds1_A      bds2_V      bds3_AV
 2       Irrelevant   AV3 V2  A1     bds3_AV_im  bds2_V_im   bds1_A_im
 3       Relevant     V2  A1  AV3    bds2_V      bds1_A      bds3_AV
 4       Irrelevant   A2  V1  AV3    bds2_A_im   bds1_V_im   bds3_AV_im
 5       Relevant     AV3 A1  V2     bds3_AV     bds1_A      bds2_V
 6       Irrelevant   V1  A2  AV3    bds1_V_im   bds2_A_im   bds3_AV_im
 7       Relevant     AV2 A1  V3     bds2_AV     bds1_A      bds3_V
 8       Irrelevant   V3  AV1 A2     bds3_V_im   bds1_AV_im  bds2_A_im
 9       Relevant     V2  AV3 A1     bds2_V      bds3_AV     bds1_A
10       Irrelevant   A2  AV3 V1     bds2_A_im   bds3_AV_im  bds1_V_im
11       Relevant     A1  AV2 V3     bds1_A      bds2_AV     bds3_V
12       Irrelevant   AV1 A2  V3     bds1_AV_im  bds2_A_im   bds3_V_im
13       Relevant     A2  V3  AV1    bds2_A      bds3_V      bds1_AV
14       Irrelevant   A1  AV3 V2     bds1_A_im   bds3_AV_im  bds2_V_im
15       Relevant     AV1 A3  V2     bds1_AV     bds3_A      bds2_V
16       Irrelevant   AV2 A3  V1     bds2_AV_im  bds3_A_im   bds1_V_im
17       Relevant     V3  AV2 A1     bds3_V      bds2_AV     bds1_A
18       Irrelevant   V3  A1  AV2    bds3_V_im   bds1_A_im   bds2_AV_im
19       Relevant     V3  A2  AV1    bds3_V      bds2_A      bds1_AV
20       Irrelevant   V2  AV1 A3     bds2_V_im   bds1_AV_im  bds3_A_im
21       Relevant     AV1 V2  A3     bds1_AV     bds2_V      bds3_A
22       Irrelevant   AV1 V3  A2     bds1_AV_im  bds3_V_im   bds2_A_im
23       Relevant     A2  AV1 V3     bds2_A      bds1_AV     bds3_V
24       Irrelevant   A3  V1  AV2    bds3_A_im   bds1_V_im   bds2_AV_im
25       Relevant     AV3 V1  A2     bds3_AV     bds1_V      bds2_A
26       Irrelevant   A3  AV2 V1     bds3_A_im   bds2_AV_im  bds1_V_im
27       Relevant     A3  AV1 V2     bds3_A      bds1_AV     bds2_V
28       Irrelevant   V2  A3  AV1    bds2_V_im   bds3_A_im   bds1_AV_im
29       Relevant     V1  A3  AV2    bds1_V      bds3_A      bds2_AV
30       Irrelevant   AV2 V3  A1     bds2_AV_im  bds3_V_im   bds1_A_im
31       Relevant     V1  AV3 A2     bds1_V      bds3_AV     bds2_A
32       Irrelevant   AV3 A2  V1     bds3_AV_im  bds2_A_im   bds1_V_im
33       Relevant     A3  V2  AV1    bds3_A      bds2_V      bds1_AV
34       Irrelevant   V1  AV2 A3     bds1_V_im   bds2_AV_im  bds3_A_im
35       Relevant     AV2 V1  A3     bds2_AV     bds1_V      bds3_A
36       Irrelevant   A1  V3  AV2    bds1_A_im   bds3_V_im   bds2_AV_im
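Appendix I pairs each subject with one ordering of the three message modalities (A, V, AV) across the three driving worlds, with the "_im" suffix marking the irrelevant-message scenario files. The Python sketch below shows, for illustration only, one simple way such a rotation could be generated; it is not the scheme actually used above (the real table also varies which world is paired with which modality across subjects), and the file-name pattern is taken from the table rather than from any documented tool.

```python
from itertools import permutations

MODALITIES = ["A", "V", "AV"]        # auditory, visual, redundant auditory-visual
FILE_PATTERN = "bds{world}_{mod}"    # scenario file naming inferred from Appendix I

def trial_order(subject: int, relevance: str):
    """Rotate through all six modality permutations across subjects,
    keeping worlds 1-3 in fixed positions (a simplified counterbalance)."""
    mods = list(permutations(MODALITIES))[subject % 6]
    suffix = "_im" if relevance == "Irrelevant" else ""
    worlds = [1, 2, 3]
    conditions = [f"{m}{w}" for m, w in zip(mods, worlds)]
    files = [FILE_PATTERN.format(world=w, mod=m) + suffix for m, w in zip(mods, worlds)]
    return conditions, files

print(trial_order(1, "Relevant"))
# (['A1', 'AV2', 'V3'], ['bds1_A', 'bds2_AV', 'bds3_V'])
```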


APPENDIX J. POST-EXPERIMENTAL QUESTIONNAIRE

1. Do you have a valid Driver's License?  Yes  No
2. How many years have you had a Driver's License? ___________________
3. About how many miles per year do you drive? ___________ miles/year
4. How many moving violations have you had in the last two years? _______________
5. Have you had any accidents where you were responsible?  Yes  No
_________________________________________________________________________________

Use the following scale to respond to questions 6 through 9.
1 = Never   2 = 1-2 times per month   3 = 3-4 times per month   4 = 3-4 times per week   5 = Every day

6. How often do you drive?                                1  2  3  4  5
7. How often do you drive on city streets?                1  2  3  4  5
8. How often do you drive on rural / country roads?       1  2  3  4  5
9. How often do you drive on freeways?                    1  2  3  4  5

Use the following scale to respond to questions 10 through 17.
1 = Never   2 = Rarely   3 = Occasionally   4 = Often   5 = Always

If you were in a hurry to get to an important appointment, how often would you (remember, there are no right or wrong answers):

10. Run a red light to get to an appointment sooner       1  2  3  4  5
11. Drive at 5-15 mph over the speed limit                1  2  3  4  5
12. Drive around lowered gates at a railway crossing      1  2  3  4  5
13. Speed in a school zone on a Saturday                  1  2  3  4  5
14. Do a rolling stop through a stop sign (i.e., not a complete stop)   1  2  3  4  5
15. Tailgate other people to get them to drive faster     1  2  3  4  5
16. Get angry at other drivers for being in your way      1  2  3  4  5
17. Talk on the cellular phone                            1  2  3  4  5

Use the following scale to respond to questions 18 through 21.
1 = Never   2 = Rarely   3 = Occasionally   4 = Often   5 = Always

18. I felt nauseous in the driving simulator.                            1  2  3  4  5
19. The driving simulator allowed me to brake appropriately.             1  2  3  4  5
20. The gas pedal and brake in the simulator allowed me to adequately control my speed.   1  2  3  4  5
21. The steering of the driving simulator allowed me to make maneuvers correctly.         1  2  3  4  5
