the user can use her preferred expression next time instead of using the default expression or giving the input manually. [8] states that voice operation of in-vehicle information systems (IVIS) is desirable from the point of view of safety and acceptance. The study reported in this paper supports that, but we recommend a multimodal interface. Although the drivers, when asked, preferred to interact using speech, a manual alternative is useful when a speech interface is not wanted: for example, when the environment is noisy, when the driver does not want to disturb the passengers, or when the speech recognizer does not seem to understand what the driver is saying. When it is not possible or appropriate to use speech, a multimodal interface gives the driver an opportunity to choose.

7. SUMMARY AND CONCLUSIONS

To sum up:
• Regarding task performance, SUI gets the lowest scores, whereas MM and GUI get roughly similar scores.
• Regarding driving ability, GUI gets the lowest score, whereas MM and SUI get roughly similar scores.
• Regarding task completion time, GUI is the fastest, whereas MM and SUI get roughly similar scores.

The MM condition gives the same task performance as GUI and the same driving ability as SUI, and thus outperforms GUI with respect to driving ability and SUI with respect to task performance. Future research includes improving the MM system to decrease task completion time, and investigating whether (and how) this affects driving ability.

8. FUTURE RESEARCH

Since the study reported here was conducted, the multimodal Dico application has been extended with a speech cursor [4]. The speech cursor enables the user to use spoken interaction in combination with haptic input to access all functionality (including browsing long lists) without ever having to look at the screen.
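The core behaviour the speech cursor combines, as described in the following paragraphs, is haptic button browsing where every focus change interrupts the ongoing spoken read-out of the previous item. A minimal sketch of that interaction loop is given below; all class and method names are illustrative assumptions, not the Dico implementation:

```python
# Sketch of speech-cursor-style browsing: UP/DOWN/OK buttons move focus
# through a menu, and each focus change interrupts any ongoing spoken
# "voice icon" before reading out the newly focused item.
# Names are hypothetical; the TTS backend is assumed to offer stop()/speak().

class SpeechCursor:
    def __init__(self, items, tts):
        self.items = items  # menu items; each has a spoken representation
        self.tts = tts      # assumed TTS interface with stop() and speak()
        self.focus = 0
        self._announce()

    def _announce(self):
        # Interrupt the current read-out so rapid browsing stays responsive:
        # the user need not wait for the previous item to finish.
        self.tts.stop()
        self.tts.speak(self.items[self.focus])

    def press(self, button):
        if button == "DOWN" and self.focus < len(self.items) - 1:
            self.focus += 1
            self._announce()
        elif button == "UP" and self.focus > 0:
            self.focus -= 1
            self._announce()
        elif button == "OK":
            return self.items[self.focus]  # select the focused item
        return None
```

Because `_announce` always calls `stop()` first, browsing past three items quickly produces at most one complete read-out, which is the property that makes eyes-free list browsing usable.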
It requires a haptic menu navigation device, such as a mouse (trackball, touch pad, TrackPoint™) with buttons, a keyboard with arrow keys, or a jog dial/shuttle wheel. A typical in-vehicle menu navigation device consists of three or four buttons (UP, DOWN, OK and possibly also BACK). Every time a new item gets focus, the system reads out a "voice icon" - a spoken representation of the item. This representation can be textual, intended to be realised using a TTS, or in the form of audio data, to be played directly. Every time a new element gets focus, any ongoing voice output is interrupted by the "voice icon" for the element in focus. This means that the user can speed up the interaction by browsing to a new element before the system has finished reading out the previous one.

In future work, we plan to perform a study comparing a multimodal system with speech cursor to voice-only, GUI-only, and multimodal interaction without speech cursor. Additional future research includes implementation of theories of interruption and resumption strategies. In-vehicle dialogue involves multitasking: the driver must be able to switch between tasks, for example adding songs to a playlist while getting help from a navigation system to find the way. To avoid increasing the cognitive load of the driver, the system needs to know when it is appropriate to interrupt the current dialogue to give time-critical information [1]. When resuming the interrupted dialogue, the system should do so when the driver is prepared for it, and in a way that does not increase the cognitive load [7].

9. ACKNOWLEDGEMENTS

The study presented here was carried out within DICO, a joint project between Volvo Technology, Volvo Car Corporation, Gothenburg University, TeliaSonera and Veridict, with funding from the Swedish Governmental Agency for Innovation Systems, VINNOVA (project P28536-1).
The authors wish to thank Alex Berman and Fredrik Kronlid at Talkamatic AB for helping out with the implementation.

10. REFERENCES

[1] K. L. Fors and J. Villing. Reducing cognitive load in in-vehicle dialogue system interaction. In R. Artstein, M. Core, D. DeVault, K. Georgila, E. Kaiser, and A. Stent, editors, Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue, SemDial 2011, pages 55–62, 2011.
[2] U. Gärtner, W. König, and T. Wittig. Evaluation of manual vs. speech input when using a driver information system in real traffic. In Proceedings of the First International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, 2002.
[3] S. Larsson. Issue-Based Dialogue Management. PhD thesis, Department of Linguistics, University of Gothenburg, 2002.
[4] S. Larsson, A. Berman, and J. Villing. Adding a speech cursor to a multimodal dialogue system. In Proceedings of Interspeech 2011, 2011.
[5] Z. Medenica and A. L. Kun. Comparing the influence of two user interfaces for mobile radios on driving performance. In Proceedings of the Fourth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, 2007.
[6] D. Traum and S. Larsson. Current and New Directions in Discourse & Dialogue, chapter The Information State Approach to Dialogue Management. Kluwer Academic, 2003.
[7] J. Villing. Now, where was I? Resumption strategies for an in-vehicle dialogue system. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 798–805. Association for Computational Linguistics, 2010.
[8] M. Vollrath and J. Maciej. In-car distraction study: final report. Technical report, HMI Laboratory, Technical University of Brunswick, Germany, 2008.
AutomotiveUI 2011
Third International Conference on Automotive User Interfaces and Interactive Vehicular Applications
November 29 – December 2, 2011, ICT&S Center, University of Salzburg, Salzburg, Austria

ADJUNCT PROCEEDINGS
Workshop "AutoNUI: Automotive Natural User Interfaces"

Workshop Organizers:
Bastian Pfleging, VIS, University of Stuttgart
Tanja Döring, Paluno, University of Duisburg-Essen
Martin Knobel, University of Munich
Albrecht Schmidt, VIS, University of Stuttgart