Deliverable 4.4 - INSEAD CALT

In a study by Witkowski and others (2001, 2003), users' gaze was tracked while they interacted with an agent. They found that nearly 20% of the time was spent looking at the agent and over 50% of the time reading the agent's speech bubble. However, in a computer program or a virtual environment, an agent's function is often to guide the user to other elements or activities on the screen rather than merely to draw attention to itself. An example would be a situation where the user is working on a task and the agent alerts him or her to new information (such as received emails or messages). Thus, one challenge in designing agents and their behavior is to support the management of attention effectively. This means guiding and directing a user's attention in a way that is subtle and not too distracting.

In the Witkowski study, attention was mostly directed to the agent character's face. Other studies have also shown that facial expressions are effective social cues in human-agent interaction (Partala & Surakka, 2004; Partala, Surakka, & Lahti, 2004; Vanhala et al., 2007). Besides facial expressions, embodied agents are capable of using a multitude of gestures, movements, and changes in body language to convey emotions and to emphasize or clarify what they are communicating. In fact, embodied agents can effectively mimic the same properties and behavior that humans exhibit in face-to-face conversation (Cassell, 2000). Early studies by Deutsch (1974) of task-oriented dialogues between a human and an agent clearly showed the importance of nonverbal communication in such tasks. More recent studies by Marsi and van Rooden (2007) have shown that users prefer a non-verbal visual indication of an embodied system's internal state to a verbal indication.

Despite the potential benefits of embodied computer agents, there is little empirical evidence to help in designing characters and cues that are effective in guiding attention. Several studies have also had limitations, such as a lack of experimental conditions or a different focus of interest, as pointed out by Dehn and van Mulken (2000). In many cases, the effects of using an embodied agent have been compared to having no agent at all. The results of those studies can thus be used to argue for using artificial agents in general, but they are not particularly helpful in designing the characteristics and behavior of agents. Task-oriented studies of agents in learning and collaboration with users have also often focused solely on verbal dialogues, with less emphasis on the potentially effective nonverbal cues that agents can provide (Rickel and Johnson, 2000).

Eye tracking is an established technique for determining the focus of a person's visual attention at any particular time. By analyzing gaze paths, that is, sequences of saccades and fixations, it is possible to determine not only where a person is looking and for how long, but also how and when his or her focus of attention changes (e.g., Hyrskykari et al., 2003; Merten & Conati, 2006). Using eye tracking, we can, for example, study fixations to determine the proportion of time a user spends looking at an agent and at other visual elements and cues on a computer screen. By analyzing gaze paths, we can study how an agent's actions and gestures influence the direction of users' visual attention.
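The dwell-time measure described above reduces to a simple computation once fixations and areas of interest (AOIs) are defined. The following is a minimal sketch in Python, not taken from the study itself; the AOI names, screen coordinates, and fixation record format are assumptions made purely for illustration.

    # Sketch: proportion of total fixation time spent in each area of
    # interest (AOI). All names and coordinates below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Fixation:
        x: float            # gaze position in screen pixels
        y: float
        duration_ms: float  # fixation duration

    # Rectangular AOIs: name -> (left, top, right, bottom); assumed
    # non-overlapping, hypothetical screen layout
    AOIS = {
        "agent":         (0, 0, 200, 300),
        "speech_bubble": (200, 0, 500, 150),
        "task_area":     (500, 0, 1024, 768),
    }

    def dwell_proportions(fixations):
        """Fraction of total fixation time that fell inside each AOI."""
        totals = {name: 0.0 for name in AOIS}
        grand_total = sum(f.duration_ms for f in fixations) or 1.0
        for f in fixations:
            for name, (l, t, r, b) in AOIS.items():
                if l <= f.x < r and t <= f.y < b:
                    totals[name] += f.duration_ms
                    break  # AOIs are non-overlapping, so stop at first hit
        return {name: ms / grand_total for name, ms in totals.items()}

    # Example: three fixations, the longest on the speech bubble
    sample = [Fixation(100, 150, 400), Fixation(350, 80, 900),
              Fixation(700, 400, 300)]
    print(dwell_proportions(sample))
    # -> {'agent': 0.25, 'speech_bubble': 0.5625, 'task_area': 0.1875}

Gaze-path analysis extends the same idea: mapping each fixation to its AOI yields a sequence of AOI visits, and the transitions in that sequence show how attention moved, for example from the agent to the element its gesture targeted.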
The aim of the present study was to gain insight into the attention-guiding properties of an embodied computer agent. The objective was to obtain concrete, empirically validated evidence concerning the effects of an agent character's gestures and their potential to guide attention. To this end, we conducted an experiment consisting of tasks in which an agent character used gestures to guide a user's attention to visual information shown on the screen. Our aim was to find out how varying the type and direction of the gesture and the placement of the agent affected the user's visual attention. Another aim was to find out what effect, if any, this had on the user's ability to remember the information the agent was targeting. Gaze tracking was used to determine the focus of the user's visual attention, and learning performance was investigated by having the user answer statements about information shown on the screen.
