Texte intégral / Full text (pdf, 20 MiB) - Infoscience - EPFL
Peters and O’Sullivan also proposed a model based on the saliency maps previously discussed [Peters and O’Sullivan, 2003], which they combined with the stage theory model of memory presented in [Peters and O’Sullivan, 2002].
Courty et al. also proposed a model based on saliency maps [Courty et al., 2003]. They modeled the human perception process with a saliency map built from geometric and depth information, combining a spatial-frequency feature map with a depth feature map. They then applied this map to a virtual character so that it could perceive its environment in a biologically plausible way.
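To illustrate this kind of feature-map combination (a minimal sketch of the general technique, not of Courty et al.'s actual implementation; the weights and the depth-to-closeness inversion are our own assumptions), two feature maps can be normalized to a common range and merged by weighted averaging:

```python
import numpy as np

def normalize_map(feature_map):
    """Rescale a feature map to [0, 1]; a flat map maps to all zeros."""
    span = feature_map.max() - feature_map.min()
    if span == 0:
        return np.zeros_like(feature_map, dtype=float)
    return (feature_map - feature_map.min()) / span

def saliency_map(spatial_freq_map, depth_map, w_freq=0.5, w_depth=0.5):
    """Combine a spatial-frequency feature map with a depth feature map
    into one saliency map by a normalized weighted average."""
    return (w_freq * normalize_map(spatial_freq_map)
            + w_depth * normalize_map(depth_map))

# Toy example: the region that is both high-frequency and close to the
# camera ends up with the highest saliency.
freq = np.array([[0.1, 0.9], [0.2, 0.8]])
depth = np.array([[5.0, 1.0], [4.0, 2.0]])  # smaller = closer
closeness = depth.max() - depth             # invert so near objects score high
sal = saliency_map(freq, closeness)
```

The choice of equal weights is arbitrary here; in practice the relative contribution of each feature map would be tuned or learned.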
On a different note, Peters and Itti conducted an experiment in which they tracked subjects’ gazes while they played computer games [Peters and Itti, 2006]. They tested various heuristics to predict where the users would direct their attention, comparing outlier-based heuristics with local ones. Their results showed that heuristics which detect outliers from the global distribution of visual features were better predictors than the local ones. They concluded that bottom-up image analysis can predict a significant share of human gaze targets in the case of video games.
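The outlier-based idea can be sketched very simply (our own illustration, not Peters and Itti's code): score each location by how far its feature value deviates from the global feature distribution, for instance by its z-score magnitude, so that globally rare values stand out:

```python
import numpy as np

def outlier_saliency(feature_map):
    """Score each location by the magnitude of its z-score with respect
    to the global feature distribution; rare (outlying) values score high."""
    mu = feature_map.mean()
    sigma = feature_map.std()
    if sigma == 0:
        return np.zeros_like(feature_map, dtype=float)
    return np.abs(feature_map - mu) / sigma

# Toy example: four similar values and one outlier.
features = np.array([1.0, 1.1, 0.9, 1.0, 5.0])
scores = outlier_saliency(features)
predicted_target = int(np.argmax(scores))  # the outlier's index
```

A purely local heuristic would instead compare each value only with its immediate neighbors, which is exactly the contrast the experiment tested.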
Yu and Terzopoulos proposed a decision network framework to simulate how people decide what to attend to and how to react [Yu and Terzopoulos, 2007]. Their virtual characters are endowed with an intention generator based on internal attributes and memory. They receive perceptual data, in the form of positions, speeds, and orientations, by querying the environment, and then decide what to attend to depending on their current intention and on possible abrupt visual onsets. Finally, a memory system allows the characters to remain consistent in their behaviors while adapting to changes in the environment. This approach, however, targets single characters or small groups of characters, not the large numbers of characters found in virtual crowds.
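A minimal sketch of such intention-driven target selection (our own illustration; the `Percept` type, the relevance scores, and the onset threshold are assumptions, not Yu and Terzopoulos's actual framework) could prioritize abrupt onsets and otherwise pick the percept most relevant to the current intention:

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    """One perceptual datum returned by an environment query (illustrative)."""
    name: str
    position: tuple
    speed: float
    relevance: dict = field(default_factory=dict)  # intention -> relevance

def choose_attention_target(percepts, intention, onset_threshold=5.0):
    """Pick what to attend to: an abrupt visual onset (here modeled as a
    fast-moving percept) pre-empts everything; otherwise attend to the
    percept most relevant to the current intention."""
    onsets = [p for p in percepts if p.speed > onset_threshold]
    if onsets:
        return max(onsets, key=lambda p: p.speed)
    return max(percepts, key=lambda p: p.relevance.get(intention, 0.0))

bench = Percept("bench", (0, 0), 0.0, {"rest": 1.0})
door = Percept("door", (1, 0), 0.0, {"leave": 1.0})
car = Percept("car", (2, 0), 9.0)  # fast-moving: an abrupt onset
```

With intention `"rest"` and no onset present, the bench wins; once the fast-moving car appears, it pre-empts the intention-driven choice.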
The approach we propose for character attention behavior synthesis relies on the automatic detection of interest points based on bottom-up visual attention. Our method uses character trajectories from pre-existing crowd animations to automatically determine the interest points in a dynamic environment. Since it relies on trajectories only, it is generic and can be used with any kind of crowd animation engine. Moreover, it allows the generation of attention behaviors for large crowds of characters. In a second step, we propose a lightweight version of our method, directly integrated in a crowd engine, which determines the interest points from the user's position, the position of the user's interest, and the character positions. It too generates attention behaviors for large crowds of characters, in real time.
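To give an idea of how interest points might be derived from trajectories alone (a simplified sketch under our own assumptions, not the exact method developed later in this thesis), one could flag as interest points the characters whose average speed is an outlier with respect to the rest of the crowd:

```python
import numpy as np

def interest_points_from_trajectories(trajectories, z_thresh=2.0):
    """Given per-character 2D trajectories (each an array of shape (T, 2)
    of positions over T frames), return the indices of characters whose
    mean speed is an outlier (|z-score| above z_thresh) w.r.t. the crowd."""
    speeds = np.array([
        np.linalg.norm(np.diff(traj, axis=0), axis=1).mean()
        for traj in trajectories
    ])
    mu, sigma = speeds.mean(), speeds.std()
    if sigma == 0:
        return []
    z = np.abs(speeds - mu) / sigma
    return [i for i, zi in enumerate(z) if zi > z_thresh]

# Toy crowd: nine characters walking at unit speed and one running
# ten times faster; only the runner should attract attention.
walkers = [np.stack([np.arange(5) * 1.0, np.zeros(5)], axis=1)
           for _ in range(9)]
runner = np.stack([np.arange(5) * 10.0, np.zeros(5)], axis=1)
flagged = interest_points_from_trajectories(walkers + [runner])
```

Because this only consumes position samples over time, the same computation applies regardless of which engine produced the animation, which is the genericity argument made above.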
2.3 Motion Editing
Motion editing, i.e. the modification of character movements, is also a vast domain that has been studied extensively. A large category of methods relies on the skillful manipulation of motion clips from a motion capture database, either by blending them [Kovar and Gleicher, 2003] or by defining motion graphs [Kovar et al., 2002a; Arikan and Forsyth, 2002; Lee et al., 2002a]. Due to the many possible configurations of attention behaviors, such methods would require a very dense database in our case.