
2.1. Virtual Reality Exposure Therapy

Jacob proposed using the eye as a replacement for the mouse [Jacob, 1990]. He described an intelligent gaze-based informational display in which a text window would unscroll to give information about items selected by gaze. In this work, the author also identified one of the main problems in using the eyes as a pointing and selection tool: the difficulty of knowing whether the eyes are scanning or selecting. This is also known as the Midas touch problem. To sidestep it, he proposed using dwell time for selection.
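The dwell-time idea can be captured in a few lines: a gaze target only counts as "clicked" once the gaze has rested on it continuously for longer than a fixed threshold, so that mere scanning never triggers a selection. The sketch below is a minimal illustration of this principle, not Jacob's implementation; the 500 ms threshold and the `GazeSample` fields are assumptions.

```python
from dataclasses import dataclass

DWELL_THRESHOLD = 0.5  # seconds; illustrative value, not taken from Jacob's paper

@dataclass
class GazeSample:
    target_id: str    # object currently under the gaze point (hypothetical field)
    timestamp: float  # seconds

class DwellSelector:
    """Emit a selection only after the gaze rests on one target long enough."""

    def __init__(self, threshold: float = DWELL_THRESHOLD):
        self.threshold = threshold
        self.current = None   # target the gaze is currently resting on
        self.since = 0.0      # time at which the gaze arrived on that target
        self.fired = False    # avoid re-selecting during one long dwell

    def update(self, sample: GazeSample):
        if sample.target_id != self.current:
            # Gaze moved to a new target: restart the dwell timer.
            self.current = sample.target_id
            self.since = sample.timestamp
            self.fired = False
            return None
        if not self.fired and sample.timestamp - self.since >= self.threshold:
            self.fired = True     # fire once per dwell, not continuously
            return self.current   # this target is now "selected"
        return None
```

Feeding the selector one gaze sample per frame and acting only on its non-`None` return value keeps scanning and selecting cleanly separated.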

In their paper, Starker and Bolt presented an information display system in which a user equipped with an eye-tracker could navigate a 3D environment by gaze [Starker and Bolt, 1990]. These 3D environments also contained characters that would change behavior when looked at; more specifically, they would start blushing or speaking. Synthesized speech was used to interactively describe the objects being looked at on screen, and dwell times were used to zoom into the environment. The setup was placed in front of a monitor, and users had to use a chin-rest to avoid head movements, which were not tracked.

Colombo et al. proposed a system coupling eye- and head-tracking [Colombo et al., 1995]. They monitored the various types of possible movements to trigger different types of events: smooth gaze shifts were interpreted as image scanning, head movements were used to drag objects on screen, and "eye pointing", if held long enough, was interpreted as a mouse click. They tested their method on a virtual museum application, where a user could explore the museum environment and select the desired information on the various paintings.
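This kind of dispatching, where each movement type maps to a different event, can be sketched as a simple rule cascade. The threshold values, units, and input fields below are illustrative assumptions, not Colombo et al.'s actual parameters.

```python
def classify_event(head_speed: float, gaze_speed: float, fixation_time: float,
                   drag_head_speed: float = 0.1,   # m/s, assumed
                   scan_gaze_speed: float = 5.0,   # deg/s, assumed
                   click_dwell: float = 0.8) -> str:  # seconds, assumed
    """Map tracked movements to interaction events, in the spirit of
    Colombo et al.'s scheme: head motion drags, smooth gaze shifts scan,
    and a sufficiently long fixation acts as a mouse click."""
    if head_speed > drag_head_speed:
        return "drag"   # head movements drag objects on screen
    if gaze_speed > scan_gaze_speed:
        return "scan"   # smooth gaze shifts are treated as image scanning
    if fixation_time >= click_dwell:
        return "click"  # long enough "eye pointing" counts as a mouse click
    return "idle"
```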

Cassell and Thórisson conducted an experiment in which they tested different types of conversational agents [Cassell and Thorisson, 1999]. In a first phase, the agent gave content feedback only; in the second, content and envelope feedback (non-verbal behaviors related to conversation, such as gaze or finger tapping); and in the third, content and emotional feedback. Their aim was to confirm their hypothesis that envelope feedback was much more important than any other feedback. In their study, the subject was eye-tracked so that the conversational agents, which consisted of simple 2D cartoon characters, could respond according to where the subject was looking.

More recently, Tanriverdi and Jacob proposed an interactive system in VR [Tanriverdi and Jacob, 2000]. They used eye-tracking to select objects in a 3D environment and compared the efficacy of using the eyes instead of the hands for object selection. They concluded that eye-based selection was more efficient than hand-based selection for distant objects, but not for close ones. However, they also found that subjects had more difficulty remembering interaction locations when using their eyes than when using their hands.

In most of the above-mentioned work, the user's head had to remain static because it was not tracked. Zhu and Ji developed an eye-tracking system which did not require a static head [Zhu and Ji, 2004]. Moreover, their system did not require calibration. One of the tests they ran to evaluate their system used eye-tracking for region selection and magnification, which was triggered by blinking three times.
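A triple-blink trigger of this kind amounts to counting blinks inside a short sliding window. The sketch below illustrates the idea; the 1.5 s window is an assumption, since the source does not specify how the three blinks were detected or timed.

```python
class TripleBlinkTrigger:
    """Fire when three blinks occur within a short time window, in the spirit
    of Zhu and Ji's blink-based selection. The window length is assumed."""

    def __init__(self, window: float = 1.5):
        self.window = window
        self.blink_times: list[float] = []

    def on_blink(self, t: float) -> bool:
        # Keep only blinks that are still inside the time window.
        self.blink_times = [b for b in self.blink_times if t - b <= self.window]
        self.blink_times.append(t)
        if len(self.blink_times) >= 3:
            self.blink_times.clear()
            return True   # trigger region selection / magnification
        return False
```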

Finally, Wang et al. developed a system in which eye-tracking was used to change the behavior of a software agent in tutoring exercises [Wang et al., 2006]. When eye movement fell below a certain threshold and/or the pupil size was smaller than a given threshold, indicating loss of interest, the software agent reacted by showing anger or by alerting the subject. Conversely, when both values remained above their thresholds, this was taken as a sign of sustained interest.
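The engagement test described here reduces to comparing two signals against fixed thresholds. The sketch below shows that logic; the threshold values, units, and function name are illustrative assumptions, not Wang et al.'s actual parameters.

```python
def attention_state(eye_movement: float, pupil_size: float,
                    movement_thresh: float = 1.0,  # deg/s, assumed
                    pupil_thresh: float = 3.0) -> str:  # mm, assumed
    """Classify the subject's engagement from eye movement and pupil size,
    following the thresholding idea described for Wang et al.'s tutoring
    agent: either signal dropping below its threshold signals disengagement."""
    if eye_movement < movement_thresh or pupil_size < pupil_thresh:
        return "disengaged"  # agent reacts: shows anger or alerts the subject
    return "engaged"         # both signals above threshold: sustained interest
```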
