Creating the Virtual Self: Re-Using Facial Animation in Different ...


generic face model using radial basis functions, interpolating the poses. None of these methods deals directly with the artists' needs, and most are oriented towards a human look.

3 Animating Virtual Characters

Today, facial animation of avatars is typically done using pre-rendered animations of each character, including pre-defined phonemes and expressions, which are stored in a database and played back later based on automatic text-to-speech or pre-recorded audio synthesis. With our method, there is no need to create the animations for each character; it allows having multiple models already defined or creating new ones, and instantly animating them on demand.

Our method begins with two 3D face models. The first one, which we call the source model, is rigged and includes a series of attributes: a control skeleton, a number of influence objects that represent animation controls and the inner structure of the face, facial expressions (shapes) and animation scripts. The rig doesn't have initial constraints and can be created by an artist. The second model, which we call the target model, doesn't have a character rig associated with it. The source and target models can have different descriptors: one can be defined as a polygonal mesh and the other as a NURBS surface.
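The asymmetry between the two models can be made concrete with a small data-model sketch. This is not the paper's C++/Maya API; all names here are our own illustration, and the actual geometric adaptation step is reduced to a comment:

```python
from dataclasses import dataclass

@dataclass
class SourceModel:
    # Rigged face: geometry plus the attributes listed in the text.
    geometry: list           # vertices of the face mesh (or NURBS CVs)
    skeleton: list           # control skeleton joints
    influence_objects: list  # animation controls / inner face structure
    shapes: dict             # named facial expressions (blend shapes)
    animation_scripts: list  # scripted animations available for re-use

@dataclass
class TargetModel:
    # Unrigged face: geometry only; may use a different surface descriptor.
    geometry: list
    descriptor: str = "polygonal"  # or "NURBS"

def transfer_rig(source: SourceModel, target: TargetModel) -> SourceModel:
    """Sketch of the transfer: in the real method the source attributes are
    deformed to fit the target geometry before binding (see [OZS06]);
    here we only show which attributes travel from source to target."""
    return SourceModel(
        geometry=target.geometry,
        skeleton=source.skeleton,
        influence_objects=source.influence_objects,
        shapes=source.shapes,
        animation_scripts=source.animation_scripts,
    )
```

The point of the sketch is that the target contributes nothing but geometry: every animation-related attribute comes from the single, artist-authored source rig.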
Also, the faces do not need to have a point-to-point correspondence (figure 2 shows an overview of the process).

The method is based on geometric deformation techniques, which relocate and reshape the source model attributes to fit the target model. As a result, the artist is now free to modify the target model, re-use the animation scripts of the source model, or generate animations using both animation controls and sparse data interpolation of the shapes (for a detailed description see [OZS06]).

Figure 2: Overview: define the source and target model; adapt the source geometry to fit the target; transfer attributes and shapes; bind the influence objects and skeleton to the target. The result is a model ready to be animated. (Copyright 2005 Dygrafilms)

4 Application: Virtual Guides

The primary motivation of our research was to find a solution that speeds up the rigging process for films and videogames. We developed an application based on our method, which is implemented in C++ as a plug-in for Maya 7.0 and can be integrated into animation pipelines. This can serve as the core technology for the content creation process of virtual characters, and can lead to the development of new applications to be integrated into existing virtual environment frameworks.
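Deforming attributes to fit a model without point-to-point correspondence is the classic use case for scattered-data interpolation, such as the radial-basis-function approach cited in the related work. The sketch below is a minimal, self-contained RBF interpolator, not the paper's implementation: it fits Gaussian RBF weights so that a handful of source landmarks map exactly onto matching target landmarks, then evaluates that warp at any point. A production version would add a polynomial term so affine motions are reproduced exactly; that is omitted here for brevity.

```python
import math

def phi(r, sigma=1.0):
    # Gaussian radial basis kernel (positive definite, so the
    # interpolation system below always has a unique solution).
    return math.exp(-(r / sigma) ** 2)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf(landmarks_src, landmarks_dst):
    """Weights that carry the source landmarks exactly onto the target ones."""
    n = len(landmarks_src)
    A = [[phi(dist(landmarks_src[i], landmarks_src[j])) for j in range(n)]
         for i in range(n)]
    # One weight vector per coordinate (x, y, z).
    return [solve(A, [landmarks_dst[i][k] for i in range(n)]) for k in range(3)]

def deform(point, landmarks_src, weights):
    # Evaluate the warp at an arbitrary point of the source geometry.
    basis = [phi(dist(point, lm)) for lm in landmarks_src]
    return tuple(sum(w * b for w, b in zip(wk, basis)) for wk in weights)
```

Because the system is solved to interpolate (not approximate), each landmark lands exactly on its target, while every other vertex of the source geometry is deformed smoothly in between, which is the behaviour a rig-transfer step needs.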
Virtual guides for historical heritage sites or museums are usually limited to a few characters, due to the time it takes to create multiple models. This limitation can be overcome in a three-step process. First, the user defines the new 3D character for the virtual guide, which can be sculpted by hand or generated by a scanner. Second, using an application based on our technology, the user transfers a sophisticated rig to the new character, which becomes ready to be animated. Last, the user transfers the animations defined in the source model to the virtual guide character, which are later played back during the virtual tour. For instance, the default tour can be guided by the avatar of an old lady, but at the request of the visitor it can be instantly changed to the avatar of a cartoon character, a small child or a fantastic creature (figure 3 shows how animations can be transferred between characters with different looks). Also, it is very easy to create or change the animations of the source model and quickly propagate them to all avatars.

5 Conclusion

The method automatically transfers rig and animations between models, so new avatars can be instantly created and animated with high quality by only providing the 3D model geometry. This not only increases the
