
Non-Verbal Communication within Collaborative Virtual Environments

Alan Murphy and Dr. Sam Redfern

Department of Information Technology

National University of Ireland Galway

a.murphy30@nuigalway.ie, sam.redfern@nuigalway.ie

Abstract

This research aims to examine the usefulness of unobtrusive avatar control in Collaborative Virtual Environments (CVEs) as a means of tracking and communicating non-verbal cues. By capturing user data in real time, the user's avatar can be animated to replicate their movements as they occur. This passive control of the avatar provides a more realistic representation of the user in the virtual world, which in turn offers a channel through which non-verbal cues can be communicated and interpreted by the other participants in the CVE.

1. Introduction

Collaborative Virtual Environments are distributed virtual reality systems with multi-user access. There are many potential applications of these groupware systems, ranging from learning environments to remote conferencing or simply virtual business meetings.

In everyday face-to-face interactions, participants are able to utilize a full range of non-verbal communicational resources. These include the ability to move their heads to look at each other, to point or use hand gestures to address objects, and to change their gaze direction, posture, facial expression or position [1].

Social psychologists have argued that more than 65% of the communicational information exchanged during such face-to-face encounters is carried on the non-verbal band [2]. It follows that there is a need to support such channels of communication when designing a platform for remote person-to-person communication.

2. Previous Work

Much work has been done in the areas of presence, immersion and awareness in CVEs, all pertinent topics with regard to rating the quality of interactions in a virtual setting. To date, many approaches to capturing user data for hand-gesture recognition have involved mountable sensors, which are used to map the user's hand movements onto those of an avatar [3]. In terms of capturing a participant's emotional state, much research has been based on capturing the fundamental human emotions described in the works of emotional psychologists such as Robert Plutchik [4] and Paul Ekman [5], who propose separate but similar lists of 6-8 fundamental human emotions. Most prototype emotion-capture solutions applied to CVEs to date have been trained solely to capture this static and finite list of primary emotions, and lack the dynamic capacity to capture other, unread emotions.

3. Proposed Work

The proposed work will investigate the usefulness of a user passively controlling their avatar's facial expression, by mapping their own expressions onto the avatar in real time, to convey further unread emotional states. Work to date has limited its focus to a few trained fundamental emotions, whereas in reality there are many more subtle emotions that only facial expressions capture. By adopting the more dynamic approach of monitoring all expressions instead of searching for a trained few, the non-verbal channel of communication could be fully utilized, improving user communication and sense of presence. A sample experiment may involve giving subject A tasks or readings designed to trigger these untrained emotions, while subject B monitors the expressions of A's avatar and keeps a log of interpreted emotions. Consultation between both subjects would then determine the accuracy of B's perceptions.
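
As a rough illustration of how such a trial might be scored (this is a hypothetical sketch, not part of the proposed system), the agreement between A's triggered emotions and B's logged interpretations could be computed from two equal-length lists of emotion labels:

def interpretation_accuracy(triggered, interpreted):
    # Hypothetical scoring helper: fraction of trials in which subject B's
    # logged reading matched the emotion actually triggered in subject A.
    # Both arguments are non-empty, equal-length lists of label strings.
    matches = sum(a == b for a, b in zip(triggered, interpreted))
    return matches / len(triggered)

# e.g. interpretation_accuracy(["awe", "boredom"], ["awe", "fatigue"]) -> 0.5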

The technique of tracking the user's facial data from a live video stream (Fig. 1) and using this data to control an avatar's expression in real time could offer a dynamic solution for tracking emotional states beyond Plutchik and Ekman's fundamental taxonomies.
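
To make the capture-and-map loop concrete, a minimal sketch of it in Python with OpenCV might look as follows. The avatar object, its set_expression() method and the extract_expression_features() helper are hypothetical placeholders, since the paper does not specify the avatar rig or the expression model.

import cv2

def extract_expression_features(face_roi):
    # Hypothetical stand-in for a real expression model: returns a single
    # scalar (mean brightness of the face region) where a real system
    # would extract genuine expression features.
    return float(face_roi.mean()) / 255.0

def track_and_animate(avatar):
    # Passively drive an avatar from webcam frames; `avatar` is assumed
    # to expose a set_expression() method on its facial rig.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default webcam
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces[:1]:  # track a single user
                features = extract_expression_features(gray[y:y + h, x:x + w])
                avatar.set_expression(features)  # passive, real-time update
            cv2.imshow("capture", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()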

Fig. 1. Extracting the user's head orientation from a video stream.
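
Fig. 1 concerns head orientation rather than expression; a common way to recover head pose from a handful of tracked 2D facial landmarks is perspective-n-point estimation. The sketch below is again a hypothetical illustration rather than the authors' method, using OpenCV's solvePnP with a generic 3D face model and an approximate pinhole camera; the 2D landmark positions are assumed to come from an upstream tracker.

import numpy as np
import cv2

# Generic 3D reference points of a neutral face (nose tip, chin, eye
# corners, mouth corners) in an arbitrary model coordinate frame.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def head_orientation(image_points, frame_size):
    # image_points: (6, 2) float64 array of the same six landmarks located
    # in the video frame by a landmark tracker (assumed given).
    h, w = frame_size
    focal = w  # crude focal-length approximation, in pixels
    camera_matrix = np.array([[focal, 0.0, w / 2.0],
                              [0.0, focal, h / 2.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, _tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                   camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 matrix to orient the avatar's head
    return rotation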

4. References

[1] Steptoe, W., Wolff, R., "Eye Tracking for Avatar Eye-Gaze and Interactional Analysis in Immersive CVEs", CSCW '08, ACM, USA, 2008, p. 197.

[2] Fabri, M., Moore, D.J., Hobbs, D.J., Smith, A.B., "The Emotional Avatar: Non-verbal Communication between Inhabitants of CVEs", GW '99, Springer-Verlag, Berlin, 1999, p. 269.

[3] Broll, W., Meier, E., Schardt, T., "Symbolic Avatar Acting in Shared Virtual Environments", Workshop on Behavior Planning for Life-Like Characters and Avatars, Sitges, Spain, 1999.

[4] Plutchik, R., "The Emotions: Facts, Theories, and a New Model", Random House, New York, 1962.
