Texte intégral / Full text (pdf, 20 MiB) - Infoscience - EPFL
5.4. Automatic Motion Adaptation
The main difference from the method proposed in the previous chapter is that this method works online: the gaze motion adaptation is integrated directly into the crowd simulation animation loop. We therefore have no a priori knowledge of the duration of an interest point, and cannot rely on a set of pre-determined gaze constraints as in the offline method. This has little impact on the spatial resolution, but it does affect the temporal resolution. The computation of the overall displacement map applied to the current character posture remains the same as in the offline method, so we do not discuss that part again in this chapter. However, the computation of the amount of rotation applied to each joint at each frame must be adapted to meet online requirements. These modifications are discussed in the following section.
5.4.1 Temporal Resolution
The main problem of having no a priori knowledge of constraint durations is that we cannot filter out the constraints that last less than the minimum gaze duration. We therefore have to handle this on the fly, in order to avoid very small, saccadic movements when a point of interest lasts only a fraction of a second.
Whenever a new point of interest is selected, we define the amount of time the character should take to perform the gaze motion, depending on the required rotation angle: the larger the angle, the longer the motion takes. We then simply interpolate at each time step to determine the total rotation each joint has to perform.
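This angle-dependent duration and per-step interpolation can be sketched as follows. The constants and the linear duration model are our own illustrative assumptions; the thesis does not give concrete values.

```python
import math

# Illustrative sketch: gaze motion duration grows with the rotation angle.
# The constants below are assumed for the example, not taken from the thesis.
MIN_DURATION = 0.2        # seconds for a near-zero rotation
SECONDS_PER_RADIAN = 0.4  # extra time per radian of rotation

def gaze_duration(angle_rad: float) -> float:
    """Time the character takes to reach the new gaze target:
    the larger the angle, the longer the motion."""
    return MIN_DURATION + SECONDS_PER_RADIAN * abs(angle_rad)

def joint_rotation_at(t: float, total_angle: float, duration: float) -> float:
    """Linearly interpolate the rotation applied to a joint at time t,
    clamped to the final posture once the duration has elapsed."""
    progress = min(max(t / duration, 0.0), 1.0)
    return total_angle * progress

# Example: a 90-degree gaze shift, sampled halfway through the motion.
angle = math.pi / 2
d = gaze_duration(angle)
halfway = joint_rotation_at(d / 2, angle, d)
```

The linear ramp is only one possible profile; an ease-in/ease-out curve could replace `progress` without changing the surrounding bookkeeping.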
A problem can arise, however, when the interest point changes or exits the field of view before the gaze motion is finished, i.e., before the joints have attained the final gaze posture. The unfinished motion itself is not a serious issue, but in this case the gaze duration is necessarily very short, which induces very unrealistic behaviors where characters perform very small saccadic movements. To counter this, we have defined a minimum gaze duration of half a second. If the interest point is deactivated before this minimum duration is attained, we artificially maintain the interest at the previous point of interest; in practice, we maintain the previous character posture until the minimum gaze duration threshold is reached.
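The half-second hold can be expressed as a small per-frame selection rule. The function name and the point representation are our own; only the 0.5 s threshold comes from the text.

```python
MIN_GAZE_DURATION = 0.5  # seconds, the minimum gaze duration from the text

def select_gaze_point(new_point, previous_point, elapsed):
    """Return the interest point to use for this frame.

    If the current point was deactivated (None) or replaced before the
    previous point was watched for MIN_GAZE_DURATION, artificially keep
    the previous point so the character does not produce a tiny saccadic
    movement; otherwise accept the new point.
    """
    if new_point != previous_point and elapsed < MIN_GAZE_DURATION:
        return previous_point  # maintain the previous gaze/posture
    return new_point
```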
Another difference between the offline and online gaze simulation loops is that, in the online method, we adapt each character at the current frame, whereas in the offline method, we adapt each character's complete animation before moving on to the next one. We therefore have to keep track of each character's previous posture in the online method. Moreover, we also need to keep track of the starting posture when initiating gaze deactivation or a change of interest point: at each time step, we start from the character's original walking posture, not from the adapted posture of the previous time step.
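The per-character bookkeeping described above can be sketched as follows; the class, field names, and the additive posture model are hypothetical stand-ins for whatever representation the actual system uses.

```python
# Illustrative per-character state for the online loop.
class CharacterGazeState:
    def __init__(self):
        self.previous_posture = None  # adapted posture from the last frame
        self.start_posture = None     # posture recorded when a gaze change begins

    def begin_transition(self):
        """Record the starting posture when initiating gaze deactivation
        or a change of interest point."""
        self.start_posture = self.previous_posture

def adapt_frame(state, walking_posture, displacement):
    """One online step: apply the gaze displacement to the character's
    *original* walking posture (not to last frame's adapted posture),
    then remember the result for the next frame."""
    adapted = [p + d for p, d in zip(walking_posture, displacement)]
    state.previous_posture = adapted
    return adapted
```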
Finally, the remainder of the temporal resolution stays the same as in the offline method. In particular, the computation of the different time values for the eyes, head, and spine to satisfy the gaze constraints, and the desynchronization between these three sets of joints, remain unchanged.