Virtual Reality (2006) 10: 24–30
DOI 10.1007/s10055-006-0027-5

ORIGINAL ARTICLE

David Swapp · Vijay Pawar · Céline Loscos

Interaction with co-located haptic feedback in virtual reality

Received: 22 December 2005 / Accepted: 31 March 2006 / Published online: 27 April 2006
© Springer-Verlag London Limited 2006

D. Swapp (✉) · V. Pawar · C. Loscos
Department of Computer Science, University College London, Malet Place, WC1E 6BT London, UK
E-mail: d.swapp@cs.ucl.ac.uk
Tel.: +44-20-76797211
Fax: +44-20-73871397
E-mail: vijaympawar@hotmail.com
E-mail: c.loscos@cs.ucl.ac.uk

Abstract This paper outlines a study into the effects of co-location (the term 'co-location' is used throughout to refer to the co-location of haptic and visual sensory modes, except where otherwise specified) of haptic and visual sensory modes in VR simulations. The study hypothesis is that co-location of these sensory modes will lead to improved task performance within a VR environment. Technical challenges and technological limitations are outlined prior to a description of the implementation adopted for this study. Experiments were conducted to evaluate the effect on user performance of co-located haptics (force feedback) in a 3D virtual environment. Results show that co-location is an important factor, and when coupled with haptic feedback the performance of the user is greatly improved.

Keywords Haptics · Co-location

1 Introduction

Presence is likely to be enhanced by multi-modal input: in a virtual reality (VR) environment, the simulation of additional sensory modes should consolidate our sense of presence, with the proviso that all simulated sensory modes accurately complement one another (Dinh et al. 1999). Conflicting sensory cues are liable to degrade the sense of presence. Research in VR is currently dominated by simulation for the visual and audio sensory modes. While this can be partly explained by the dominance of these sensory modes in shaping our experience of the world, there are also technological reasons. The visual and audio sensory modes benefit from a high availability of good quality display equipment that, in combination with accurate and low-latency tracking, allows the development of compelling simulations, sufficient to induce a strong feeling of presence in users.

In many application areas it is likely that touch can also be a compelling factor in presence (Durlach et al. 2005; Frisoli et al. 2005; Loscos et al. 2004), transforming a simulation from a world of ghosts to a world of solid forms. Haptics does not refer to a singular sensory apparatus, and can be considered to be composed of a number of sensory and motor elements, though there is a degree of overlap among the tactile, force-feedback and proprioceptive elements (Dinh et al. 1999; Hoffman 1998). The proprioceptive element can consolidate visual cues of depth and spatial arrangement in a simulated 3D environment (Hayward et al. 2004). Other studies show that the addition of haptics can lead to improved task performance (Hoffman 1998; Sallnäs et al. 2000).

Ideally, all simulated sensory cues should be co-located: for example, a sound-producing object should get louder when it visually appears closer, and sound should be perceived to come from the same direction in which we see the object. Likewise, for haptics we should be able to feel the edge of an object at precisely the location where we see that edge. Precise co-location of haptics is, however, technically harder to achieve than co-location of audio and visual cues. This is primarily due to the greater perceptual latitude when locating the source of a stimulus via both audio and visual cues. The phenomenon known as the "ventriloquism effect", whereby an audio stimulus that is spatially close to a visual stimulus is perceived to emanate from the location of the visual stimulus (Jack and Thurlow 1973; Wallace et al. 2004), has also been demonstrated for spatial dominance of other sensory cues over audio cues (Caclin et al. 2002). In other words, while we are to some degree tolerant of inaccuracies in visual-audio co-location, this is not the case for visual-haptic co-location.

A commonly implemented compromise to co-location is the use of visual markers to represent the haptic


contact points. In the study described in this paper, a small cone-shaped marker is used; this also provides an indication of the orientation of the haptic contact. Other studies have used cross-hairs to pinpoint the haptic contact (e.g. Loscos et al. 2004). Because these markers are visually rendered by the same graphics system as the virtual environment, spatial correspondence is guaranteed; in other words, it will never be the case that the visual marker is slightly offset from an object's edge when the haptic contact is felt (assuming that the haptic collision detection algorithm is effective). In the current study, such a setup is referred to as non-co-located haptics.

For many applications it is possible for a non-co-located interface to work well, with users adjusting quite rapidly to the built-in spatial offset. However, this is not always the case. The proprioceptive element of the haptic feedback is of diminished benefit in a non-co-located setup, since there is an offset (albeit constant) between the visual depth cues and those inferred from haptic feedback.
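The constant nature of this built-in offset can be made concrete with a short sketch (illustrative only, not the paper's implementation; the function name and offset value are invented): the marker is rendered at the felt contact point plus a fixed translation, so the visual-haptic discrepancy is identical everywhere in the workspace, which is why users can adapt to it.

```python
# Illustrative sketch, not the paper's code: in a non-co-located setup
# the visual marker is drawn at a fixed offset from the haptic contact
# point; in a co-located setup the offset is zero.

OFFSET = (0.30, 0.0, 0.15)  # metres; arbitrary example values

def marker_position(contact, colocated):
    """Where the visual marker is rendered for a given felt contact point."""
    if colocated:
        return contact
    return tuple(c + o for c, o in zip(contact, OFFSET))

# The discrepancy between seen and felt position is the same constant
# vector for every contact point in the workspace.
a = marker_position((0.05, 0.10, -0.20), colocated=False)
b = marker_position((0.00, 0.00, 0.00), colocated=False)
```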
In this paper we present the results of a comparative study of user performance when using co-located haptic interaction versus non-co-located haptic interaction. The specific tasks involved are designed to test users' spatial awareness of the simulated environment, and to discover whether there is a benefit to visual-haptic co-location. The following section reviews work relating to the current paper. Section 3 discusses the problems of implementing co-location in immersive VR systems. Section 4 explains the set-up and the measurements of co-location. Section 5 presents the results of the experiments.

2 Related work

Improving the efficiency, friendliness and ease of user interaction with multimedia devices is one of the main goals of research in human–computer interaction. Early studies demonstrated that direct manipulation of objects improves user performance and system usability for novice users (Shneiderman 1983). Further development allowed the use of direct interaction, for example through tactile screens or portable devices (Shneiderman 1997). The concept of direct manipulation and direct interaction has been tried in virtual reality (Cheng and Pulo 2003) and augmented reality (Hoffman 1998). The addition of passive haptics has also been shown to enhance presence in VR (Meehan et al.
2001). This confirms the importance of designing direct interaction in VR.

While desktop haptics is common (Sensable Technologies, http://www.sensable.com/), adding haptic interaction in virtual reality is more challenging. A few haptic systems have been developed to fit in immersive environments (Loscos et al. 2004; Dettori et al. 2003; Sato 2002). Desktop haptic devices have been used for direct interaction with stereo-displayed applications, and specific display devices have been created for that purpose. However, with these devices the hand and the device are masked by the display surface.

Several research studies have investigated the effect of haptics and stereovision on systems similar to the Reachin device (Reachin, http://www.reachin.se/). Arsenault and Ware performed experiments demonstrating the benefits of correct visual perspective and haptic feedback on user performance in a spatial task (Arsenault and Ware 2000). Also in the context of co-located haptics, Bouguila et al. (2000) investigated calibration issues between the visual and haptic modes in virtual environments. This study also demonstrated an improvement in users' perception of depth when the interaction is coupled with haptic feedback.
Similarly, Wall et al. (2002) showed that haptic feedback coupled with stereovision assists user performance in achieving tasks. Ernst et al. (2000) demonstrated a benefit of haptic feedback in the perception of surface orientation via texture. Ware and Rose (1999) investigated the task of object rotation in a virtual environment with reference to a variety of factors. Among their findings was that displacement of the visual representation of a real object (held in one hand) by 60 cm led to a 35% slowdown in task completion time.

The notion of co-location of haptic and visual sensory modes can also be considered in the more general terms of sensory displacement. Experiments involving the use of prisms to spatially offset visual information can be traced to the work of the nineteenth-century psychophysicists von Helmholtz (1867) and Stratton (1896). Both noted that adaptation to the "shifting" of objects in the field of view was achieved over a short period of time for the simple task of reaching and touching an object.
The precise mechanisms involved in such visuo-motor adaptation are still a matter of debate. The experiments described in the current study do not allow for such adaptation effects; we are instead interested in the ability of people to interact with a virtual environment without a training phase. However, it is notable that in some contexts visuo-motor adaptation has not been observed to be total: Rolland et al. (1995) describe a study using a head-mounted display that significantly displaces the wearer's viewpoint. While they observed some adaptation of hand–eye co-ordination during trials, performance did not reach baseline levels.

The aim of the current study is to demonstrate the benefit of co-located haptic feedback for interactive tasks in a virtual environment. While the studies described in this section (Arsenault and Ware 2000; Bouguila et al. 2000; Wall et al. 2002; Ernst et al.
2000; Ware and Rose 1999) point to benefits of co-location (and costs of non-co-location) in specific contexts, the aim of the current study is to investigate the benefit of co-location across different classes of interactive task, using both co-location and haptic feedback as control variables. Additionally, these studies all use an implementation that relies upon a reflection of a video monitor to achieve co-location. Such a setup cannot be scaled up to an immersive cave-like implementation. Although the


current study is performed on a desktop setup, the larger goal is to implement co-located haptics in an immersive cave-like setting; this goal is achievable via a scaling-up of the desktop setting implemented here.

In common with the studies described, the haptic feedback provided in the study described here comes from a Phantom Desktop force-feedback device, providing force-feedback and proprioceptive elements, but without any tactile stimulus.

As discussed in the following section, co-location is difficult to implement and may require specifically built systems; we therefore believe that it is important to show that co-location is a significant factor for user performance.

3 Implementation issues for co-location in immersive virtual reality

Various technical obstacles must be overcome in order to successfully implement visual-haptic co-location in immersive VR environments. Some issues associated with depth perception are presented in Bouguila et al. (2000). In this paper, we discuss these issues in the more general context of using haptics with stereo displays in VR.
We have identified three broad classes of implementation problem for visual-haptic co-location: occlusion, accommodation and calibration; we discuss these problems in the following sections.

3.1 Occlusion

For screen-projection systems (as opposed to HMDs), occlusion problems arise when we reach behind a displayed graphical object: instead of our hand being occluded by the object, the reverse is the case.

3.2 Accommodation

Accommodation, or focus, of the user's eyes is an issue in any stereoscopic display. While appropriate convergence of the eyes can be elicited by adjusting the disparity of the left and right stereo-pair images according to the distance of the displayed object from the eyes, accommodation is determined by the distance to the display surface (Wann et al. 1994). For HMDs this distance tends to be both very small and invariant, leading to a considerable decoupling of accommodation and convergence (by contrast, when viewing the real 3D world around us, there is a precise correspondence between the accommodation and convergence systems). This accommodation-vergence issue, as well as its associated symptoms of visual fatigue, is less significant with larger-screen 3D display systems, where the eye-display separation is larger. However, if a real object is introduced into the field of view (e.g.
the contact point of a haptic device) and is spatially co-located with a virtual object, this gives rise to a perceptual dissonance: we can feel the object at our fingertip via haptic feedback, but we cannot visually focus on both the virtual object and the fingertip simultaneously. While there is no simple technical solution to this problem, it can be mitigated by designing, where possible, haptic simulations that locate haptic contact points close to the display surface.

3.3 Calibration

For visual-haptic co-location to be successfully implemented, it is necessary to align the co-ordinate systems of both these sensory modes as closely as possible. The haptic device itself can normally be calibrated very precisely. Once the base of a haptic device is positioned and registered, tracking of the contact point can be achieved with very high (sub-millimetre) accuracy. Errors in calibration of haptic devices should normally be insignificant in relation to other sources of error (tracking and display).

In large-scale immersive environments (e.g. CAVEs), the user must have their head tracked so that the viewing point for the display can be continuously updated. In this case, tracking errors will lead to incorrect visual depth perception.
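The way a viewing-point error translates into a perceived 3D location error can be sketched geometrically: the on-screen stereo images are fixed for the nominal eye positions, so when the eyes move the perceived point becomes the intersection of the rays from the actual eye positions through those fixed images. The sketch below is our own illustration, not the authors' code, and all numeric values (interpupillary distance, viewing distance, head shift) are invented examples.

```python
# Hedged sketch: perceived 3D position error when stereo projections are
# computed for nominal eye positions but the viewer's head has moved.
# Screen plane is z = 0; positions are in metres; values are examples.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def project_to_screen(eye, p):
    """Intersection of the line eye -> p with the screen plane z = 0."""
    t = eye[2] / (eye[2] - p[2])
    return add(eye, scale(sub(p, eye), t))

def perceived_point(eye_l, eye_r, img_l, img_r):
    """Midpoint of the shortest segment between the two viewing rays:
    where the visual system localises the stereo-rendered point."""
    d1, d2 = sub(img_l, eye_l), sub(img_r, eye_r)
    w = sub(eye_l, eye_r)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    denom = a * c - b * b
    s = (b * dot(d2, w) - c * dot(d1, w)) / denom
    t = (a * dot(d2, w) - b * dot(d1, w)) / denom
    return scale(add(add(eye_l, scale(d1, s)), add(eye_r, scale(d2, t))), 0.5)

ipd = 0.065                        # assumed interpupillary distance
nominal_l = [-ipd / 2, 0.0, 0.60]  # eyes 60 cm in front of the screen
nominal_r = [+ipd / 2, 0.0, 0.60]
virtual_p = [0.02, 0.03, -0.10]    # virtual point 10 cm behind the screen

# The on-screen stereo images are fixed for the nominal eye positions.
img_l = project_to_screen(nominal_l, virtual_p)
img_r = project_to_screen(nominal_r, virtual_p)

# A 2 cm inadvertent head movement shifts both eyes but not the images,
# so the perceived point no longer matches the co-located haptic point
# (here about -3.3 mm in x for this configuration).
shift = [0.02, 0.0, 0.0]
seen = perceived_point(add(nominal_l, shift), add(nominal_r, shift), img_l, img_r)
error = sub(seen, virtual_p)
```

With zero head shift the two rays pass exactly through the virtual point and the error vanishes, matching the desktop case where the user holds the calibrated viewing position.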
For desktop environments, it is feasible for the user of a system to position their eyes at the appropriate viewing points for the left and right projections of the stereo display. In this case, inadvertent head movements will lead to incorrect visual perception of the 3D location of objects. In either case, the magnitude of 3D location errors depends on several factors, all contributing to a decoupling of the visual and haptic renderings:

• The magnitude of the disparity between projection viewing points and eye positions.
• The position of the viewed object relative to the display screen.
• The position of the viewed object relative to the eye position.

CRT display systems are liable to non-linearity in both axes across the projection surface. Physical measurements taken of a calibrated grid projected from such a display system indicated errors of up to 1.2% of screen width in some regions of the screen. This was in spite of careful calibration of the CRT projector prior to measurement. Such non-linearity can affect the perceived position of a displayed object in all three axes.

4 Design of experiments

A series of experiments was designed to test the hypothesis that co-location of the visual and haptic sensory modes will lead to improved task performance and an enhanced sense of presence within a VR environment. In order to evaluate the effect of co-location on user performance, we designed three experiments to test if


there is a benefit, in terms of enhanced spatial awareness of the user, to be derived from co-location. The three experiments each test a different aspect of spatial awareness:

• Interaction accuracy: the ability to locate, move to and touch an object in 3D space.
• Ease of manipulation: the ability to control the motion of an object in 3D space via physical contact.
• Agility: the ability to intercept or catch an object moving through 3D space.

4.1 Equipment and calibration

The experiments were run on a PC with a Pentium 4 1.7 GHz processor and NVidia Quadro FX1100 graphics, displayed on a CRT monitor. The participants wore CrystalEyes shutter glasses for stereo viewing. Haptic interaction was provided by a Phantom Desktop from Sensable Technologies (http://www.sensable.com/). The Phantom device is a force-reflecting interface and can thus affect the force-feedback and proprioceptive elements of haptic feedback. The workspace area of the device is 16 × 12 × 12 cm, appropriate for desktop interaction. The Phantom was positioned to allow co-location and the full workspace of the device. The interaction workspace was between the screen and the participant, the support being on the right-hand side of the participant (see Fig. 1).

For each participant there was a two-stage calibration procedure. In the first stage, the participant was seated at arm's length from the monitor screen such that they were looking directly at the centre of the screen.
Eye-separation values used for the stereo rendering were then finely adjusted to match those of the participant. This was achieved by asking the participant to compare the depths of rendered objects to real reference positions.

Positioning of the Phantom haptic device to allow co-location formed the second stage of calibration. This involved manually aligning the Phantom device workspace with the virtual scene. This manual alignment was finely tuned using the Phantom calibration program. The interaction tasks were programmed using the Ghost SDK and OpenGL.

4.2 Task design

For each of the three tasks there are two independent variables: co-location and haptic feedback. For co-location, the Phantom is carefully positioned such that the point of interaction on the Phantom coincides visually with the point of contact in the 3D scene. For non-co-location, visual markers indicate this point of contact. When haptic feedback is turned off, the Phantom is used as a 3D joystick.
Thus there are four classes of interaction:

• Co-located haptics
• Non-co-located haptics
• Co-location with no haptic feedback
• Non-co-location with no haptic feedback.

For all tasks, there are three levels of difficulty, with increasing numbers of objects, more complex spatial arrangement, and decreasing object size. For each trial, the time taken to complete the task is measured.

The first task tests spatial accuracy. The participant is required to touch, one by one in a given sequence, a set of objects distributed in 3D space. Three levels of difficulty are designed. On the first level, objects are distributed on a plane parallel to the viewport. On the second level, objects are distributed in 3D space and their number is increased. On the third level, the number of objects increases and their size is reduced. A screenshot of the setup of this task for level 1 is shown in Fig. 2. For each level, the time spent by the participant to complete the task is measured.

The second task tests spatial manipulation. It involves manipulating a ball through an environment consisting of a sequence of objects, akin to moving it through a maze. Again, three levels of difficulty were designed, similar to the ones used for the task on spatial accuracy. A screenshot of level 2 is shown in Fig. 3.
For each level, the time needed for the participant to accomplish the task is measured.

The third task tests spatial response. Gravity is simulated and the participant must juggle objects in the environment. The task stops when an object is dropped.

Fig. 1 Experiment set-up

Fig. 2 Spatial accuracy test, level 1


Fig. 3 Spatial manipulation, level 2

Three levels were implemented, with an increased number of objects at each level. A screenshot of level 2 is shown in Fig. 4. Time is measured from the beginning of the task until the first dropped object.

4.3 Experiment procedure

A within-groups design was employed on a set of six participants. All participants were male and technically literate, although none had experience of using haptic interfaces. Each participant was given a verbal and written description of the tasks. For the first task, this was to touch each object in the environment in a predefined colour-coded sequence, with the colour sequence changing at each level. For the second task, the instruction was to manipulate a ball such that it touched each object in the environment, again in a colour-coded sequence. For the third task, the participant was asked to intercept each object as it fell, effectively juggling with the objects.

The participant was then asked to sit at arm's length from the monitor. The system was calibrated (see Fig. 5) for stereo adaptation and visual-haptic co-location. The participant was asked to keep their head as still as possible to maintain correct stereo and co-location.
A training period of a few minutes followed.

The tasks were then presented in the following order: spatial accuracy, spatial manipulation, then spatial response. The order of presentation of the different interaction classes was designed such that any learning effects would bias against our initial hypothesis; we expected performance to be best when both haptic feedback and co-location were implemented, and worst when neither was implemented. Thus each task was performed using the four interaction classes in the following order: co-located haptics; non-co-located haptics; co-location with no haptic feedback; non-co-location with no haptic feedback.

Fig. 4 Spatial response, level 2

Fig. 5 Experiment calibration

5 Results

All participants completed the set of tasks and times were recorded. For the spatial response task, timing stopped when the participant failed to intercept one of the falling target objects. For the spatial accuracy and spatial manipulation tasks, timing stopped when the last object was touched. In some cases participants touched the same object twice or an incorrect object (i.e. not the next one in the sequence).
In the former case this appeared to be due to participants not being certain that they had touched the target (thus they touched it again), and in the latter case it appeared to be accidental. In all cases participants went on to successfully complete the tasks.

The mean times for task completion are plotted in Figs. 6, 7 and 8 for the different tasks. Error bars indicate the standard deviation across the times recorded for all participants. For Figs. 6 and 7, shorter time indicates better performance (i.e. quicker completion of the spatial accuracy and spatial manipulation tasks respectively). For Fig. 8, longer time indicates better performance (i.e. prolonged ability to respond quickly to objects moving in 3D space).

While the graphs indicate a trend for all tasks of better performance when interaction is via co-located haptics, the error bars indicate substantial variance in the recorded times.

Fig. 6 Results for spatial accuracy (each figure plots the four conditions: co-located haptics; no co-location, haptics; co-location, no haptics; no co-location, no haptics)

For each of the tasks, results were analysed via two-way related-measures ANOVA for each level of difficulty. The levels of difficulty had been introduced to the experiment to allow measurements at some appropriate level (i.e. such that the tasks were neither trivially easy nor impossible). Interactions among results from different levels of difficulty were not of interest, and in any case it would be difficult to draw conclusions from them. However, adding these additional tasks to the design also has the negative effect of reducing the power of results. This was accounted for in our analysis by the Bonferroni method: the significance threshold for each ANOVA was divided by 9 (the number of tests). This correction is conservative, and lowers the risk of Type I errors, although at the risk of introducing Type II errors.

Fig. 7 Results for spatial manipulation

For the spatial accuracy task, completion times were significantly improved (p < 0.05) by co-location at all levels, while a marginal improvement (0.05 < p < 0.1) was noted for the effect of haptic force feedback at levels 2 and 3 of the task.
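The Bonferroni adjustment used in the analysis above can be sketched as follows; the function names and example p-values are purely illustrative, not taken from the study's data.

```python
# Bonferroni correction: with m planned tests, the per-comparison
# significance threshold becomes alpha / m, which controls the overall
# Type I error rate at the cost of statistical power (more Type II risk).

def bonferroni_threshold(alpha: float, m: int) -> float:
    """Per-comparison significance threshold when m planned tests are run."""
    return alpha / m

def is_significant(p: float, alpha: float = 0.05, m: int = 9) -> bool:
    """True if p survives the Bonferroni-corrected threshold."""
    return p < bonferroni_threshold(alpha, m)

# With m = 9 (3 tasks x 3 levels) the corrected threshold is 0.05 / 9,
# roughly 0.0056, so a nominally significant p of 0.03 no longer counts
# as significant after correction, while p = 0.004 still does.
corrected = bonferroni_threshold(0.05, 9)
```

This illustrates why the correction is described as conservative: effects that would pass an uncorrected 0.05 threshold can fail the corrected one.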
These results suggest that for tasks requiring accurate positioning, haptic feedback alone (i.e. when unaccompanied by accurate proprioceptive feedback) is of less benefit than having the accurate proprioception provided by co-location (with or without haptic feedback).

The results for the spatial manipulation task (Fig. 7) indicate a less pronounced effect, and this is borne out by the lack of statistical significance in the data.

The spatial response task was almost impossible to perform unless co-located haptic feedback was implemented. Without co-located haptic feedback, the average time that participants managed to maintain the juggling of objects was less than five seconds. However, this observation is confirmed by statistically significant results only for level 3 of the task, for which both co-location and haptic feedback elicit a significant (p < 0.05) improvement in task performance.

Fig. 8 Results for spatial response

6 Conclusions

Previous studies have demonstrated the beneficial effect of haptic feedback on task performance (Hoffman 1998; Sallnäs et al. 2000).
This study additionally indicates that co-location is a significant factor in improving interaction performance in a 3D environment, for tasks requiring accuracy and rapid motion in user interaction. The experiments described have been performed only in a desktop setting, with limitations in terms of tracking, field of view, freedom of movement and the associated loss of perceptual cues. However, in spite of these limitations and the low power of the experimental design (mainly due to the small number of participants), significant benefits for co-location have been demonstrated. The spatial manipulation task produced no significant effects; it is possibly the case that this type of "close control" does not benefit from co-location. It may have been useful to measure the relationship between the degree of control (e.g. mean distance maintained between haptic contact point and target) and task performance.

The next step for this research is to extend it to a fully immersive VE system equipped with a larger haptic device (Frisoli et al. 2005; Dettori et al. 2003; Sato 2002). Head-tracking and a larger haptic workspace will allow us to investigate more fully some of the implementation problems described earlier. A more immersive system will also enable a broader investigation incorporating the impact of multi-sensory co-location on presence.

Acknowledgements We would like to thank Deepti Narwani and Emanuele Ruffaldi for their help in the current study. The work presented in this paper was partially funded by the collaborative European project PURE-FORM (IST-2000-29580), a 3-year RTD project funded by the 5th Framework Information Society Technologies (IST) Programme of the European Union.

References

Arsenault R, Ware C (2000) Eye–hand co-ordination with force feedback. In: Proceedings of the SIGCHI conference on human factors in computing systems, The Hague, pp 408–414

Bouguila L, Ishii M, Sato M (2000) Effect of coupling haptics and stereopsis on depth perception in virtual environment.
World multiconference on systemics, cybernetics and informatics (SCI 2000), Orlando, pp 406–414

Caclin A, Soto-Faraco S, Kingstone A, Spence C (2002) Tactile "capture" of audition. Percept Psychophys 64(4):616–630

Cheng K, Pulo K (2003) Direct interaction with large scale display system using infrared laser tracking devices. In: Australasian symposium on information visualisation, Adelaide, 2003. Conferences in research and practice in information technology, vol 24

Dettori A, Avizzano CA, Marcheschi S, Angerilli M, Bergamasco M, Loscos C, Guerraz A (2003) Art touch with CREATE haptic interface. In: ICAR 2003, 11th international conference on advanced robotics, University of Coimbra, Portugal, June 30–July 3

Dinh HG, Walker N, Hodges LF, Song C, Kobayashi A (1999) Evaluating the importance of multi-sensory input on memory and the sense of presence in virtual environments. In: Rosenblum L, Astheimer P, Teichmann D (eds) Proceedings of the IEEE virtual reality '99 conference. IEEE Computer Society Press, Los Alamitos, pp 222–228

Durlach P, Fowlkes J, Metevier C (2005) Effect of variations in sensory feedback on performance in a virtual reaching task. Presence 14(4):450–462

Ernst MO, Banks MS, Bülthoff HH (2000) Touch can change visual slant perception.
Nat Neurosci 3(1):69–73

Frisoli A, Jansson G, Bergamasco M, Loscos C (2005) Evaluation of the pure-form haptic displays used for exploration of works of art at museums. In: World haptics conference, Pisa, March 18–20

Hayward V, Astley OR, Cruz-Hernandez M, Grant D, Robles-De-La-Torre G (2004) Haptic interfaces and devices. Sensor Rev 24(1):16–29

von Helmholtz H (1867) Treatise on physiological optics, vol III (English translation by Southall J, 1925)

Hoffman HG (1998) Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments. In: Proceedings of the IEEE virtual reality annual international symposium '98, Atlanta, pp 59–63

Jack CE, Thurlow WR (1973) Effects of degree of visual association and angle of displacement on the "ventriloquism" effect. Percept Mot Skills 37:967–979

Loscos C, Tecchia F, Carrozino M, Frisoli A, Ritter Widenfeld H, Swapp D, Bergamasco M (2004) The museum of pure form: touching real statues in a virtual museum. In: VAST 2004, 5th international symposium on virtual reality, archaeology and cultural heritage, Brussels

Meehan M, Insko B, Whitton M, Brooks FP (2001) Physiological measures of presence in virtual environments. In: Proceedings of 4th international workshop on presence, Philadelphia, pp 21–23

Rolland JP, Biocca FA, Barlow T, Kancherla A (1995) Quantification of adaptation to virtual-eye location in see-through head-mounted displays.
In: Proceedings of the VRAIS, pp 56–66

Sallnäs E, Rassmus-Gröhn K, Sjöström C (2000) Supporting presence in collaborative environments by haptic force feedback. ACM Trans Comput-Hum Interact 7(4):461–476

Sato M (2002) Development of string-based force display: SPIDAR. In: 8th international conference on virtual systems and multimedia (VSMM 2002), Gyeongju (alias Kyongju), Korea

Shneiderman B (1983) Direct manipulation: a step beyond programming languages. IEEE Comput 16(8):57–69

Shneiderman B (1997) Designing the user interface, chapter 9, interaction devices (sections 9.1–9.3), 3rd edn. Addison-Wesley, Reading, pp 306–327

Stratton G (1896) Some preliminary experiments on vision without inversion of the retinal image. Psychol Rev 3:611–617

Wall SA, Paynter K, Shillito AM, Wright M, Scali S (2002) The effect of haptic feedback and stereo graphics in a 3D target acquisition task. In: Proceedings of Eurohaptics 2002, University of Edinburgh, 8–10th July, pp 23–29

Wallace M, Roberson G, Hairston W, Stein B, Vaughan J, Schirillo J (2004) Unifying multisensory signals across time and space. Exp Brain Res 158(2):252–258

Wann J, Rushton S, Mon-Williams M (1994) Natural problems for stereoscopic depth perception in virtual environments. Vis Res 35(19):2731–2736

Ware C, Rose J (1999) Rotating virtual objects with real handles. ACM Trans Comput-Hum Interact 6(2):162–180
