
Abstract book (pdf) - ICPR 2010


of the target human subject constructed from multiple images is found to be instrumental. In the operation phase, the 3D pose of the target subject is tracked through the subsequent frames of the input video. A bottom-up framework is used which, for each current image frame, first extracts tentative candidates for each body part in the image space. The human model, with its appearance facets already learned and its pose entries initialized to those of the previous frame, is then brought in under a belief propagation algorithm to establish correlation with the above 2D body-part candidates while enforcing proper articulation between the body parts, thereby determining the 3D pose of the human body in the current frame. Tracking performance is demonstrated on a number of monocular videos.
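The coupling of per-part candidates under belief propagation can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: the articulation structure is reduced to a chain of body parts, each with a few scored 2D candidates, and max-product message passing selects the jointly most compatible candidate per part.

```python
import numpy as np

def track_chain(unaries, pairwise):
    """Max-product belief propagation on a chain of body parts.

    unaries:  list of per-part candidate scores (appearance likelihoods).
    pairwise: list of matrices; pairwise[i][a, b] scores articulation
              compatibility between candidate a of part i and b of part i+1.
    Returns the chosen candidate index for each part.
    """
    n = len(unaries)
    msgs = [np.log(unaries[0])]
    back = []
    for i in range(1, n):
        # combine incoming message with articulation compatibility
        scores = msgs[-1][:, None] + np.log(pairwise[i - 1])
        back.append(np.argmax(scores, axis=0))       # best predecessor
        msgs.append(np.max(scores, axis=0) + np.log(unaries[i]))
    # backtrack from the best final candidate
    picks = [int(np.argmax(msgs[-1]))]
    for i in range(n - 2, -1, -1):
        picks.append(int(back[i][picks[-1]]))
    return picks[::-1]
```

In the abstract's setting the unaries would come from the learned appearance facets and the chain would instead be the articulation tree of the human model, but the message-passing pattern is the same.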

09:00-11:10, Paper ThAT9.31

Resampling Approach to Facial Expression Recognition using 3D Meshes

Murthy, O. V. Ramana, NUS
Venkatesh, Y. V., NUS
Kassim, Ashraf, NUS

We propose a novel strategy, based on resampling of 3D meshes, to recognize facial expressions. This entails converting the irregular 3D mesh structure in the existing database into a uniformly sampled 3D matrix structure. An important consequence of this operation is that the classical correspondence problem can be dispensed with. In the present paper, to demonstrate the feasibility of the proposed strategy, we employ only spectral flow matrices as features for recognizing facial expressions. Experimental results are presented, along with suggestions for refinements to the strategy to improve classification accuracy.
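The core idea, converting an irregular mesh into a uniform matrix so that all faces share one structure, can be sketched as follows. This is an illustrative reading of the abstract, not the paper's method: scattered mesh vertices are resampled onto a regular depth grid by nearest-neighbour lookup, after which no per-vertex correspondence between faces is needed.

```python
import numpy as np

def resample_to_matrix(verts, shape=(8, 8)):
    """Resample scattered (x, y, z) mesh vertices onto a uniform
    H x W depth matrix via nearest-neighbour lookup."""
    xs, ys, zs = verts[:, 0], verts[:, 1], verts[:, 2]
    gx = np.linspace(xs.min(), xs.max(), shape[1])  # uniform x samples
    gy = np.linspace(ys.min(), ys.max(), shape[0])  # uniform y samples
    grid = np.empty(shape)
    for i, y in enumerate(gy):
        for j, x in enumerate(gx):
            k = np.argmin((xs - x) ** 2 + (ys - y) ** 2)  # nearest vertex
            grid[i, j] = zs[k]
    return grid
```

Because every face becomes the same fixed-size matrix, features (such as the spectral flow matrices the abstract mentions) can be compared element-wise across subjects.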

09:00-11:10, Paper ThAT9.33

Facial Expression Mimicking System

Fukui, Ryuichi, Toyohashi Univ. of Tech.
Katsurada, Kouichi, Toyohashi Univ. of Tech.
Iribe, Yurie, Toyohashi Univ. of Tech.
Nitta, Tsuneo, Toyohashi Univ. of Tech.

We propose a facial expression mimicking system that copies the facial expression of one person onto the image of another. The system uses the active appearance model (AAM), a model commonly used in facial expression processing. An AAM comprises parameters representing facial shape, brightness, and the illumination environment. In addition to facial expression, therefore, the model parameters encode other elements, such as individuality and the direction of the face. To extract the facial-expression elements from the AAM parameters, we apply principal component analysis (PCA) to the parameter values collected over changes in facial expression. The resulting facial expression model is applied to the mimicking system, and experiments show its effectiveness for mimicking.
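The PCA step, extracting an expression subspace from collected AAM parameter vectors, can be sketched with plain numpy. The function names and the interpretation of the leading components as "expression" directions are assumptions for illustration, not the paper's code.

```python
import numpy as np

def expression_basis(params, k=2):
    """PCA over AAM parameter vectors.

    params: (n_samples, n_params) parameter vectors collected while
            the facial expression varies.
    Returns the mean vector and the top-k principal directions,
    treated here as the expression subspace.
    """
    mean = params.mean(axis=0)
    centered = params - mean
    # rows of vt are principal directions, ordered by variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(vec, mean, basis):
    """Expression coefficients of one AAM parameter vector."""
    return (vec - mean) @ basis.T
```

Transferring an expression would then amount to measuring the source face's coefficients with `project` and adding the corresponding basis combination to the target face's parameters.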

09:00-11:10, Paper ThAT9.34

A Framework for Hand Gesture Recognition and Spotting using Sub-Gesture Modeling

Malgireddy, Manavender, Univ. at Buffalo, SUNY
Corso, Jason, Univ. at Buffalo, SUNY
Setlur, Srirangaraj, Univ. at Buffalo
Govindaraju, Venu, Univ. at Buffalo
Mandalapu, Dinesh, HP Lab.

Hand gesture interpretation is an open research problem in Human-Computer Interaction (HCI); it involves locating gesture boundaries (gesture spotting) in a continuous video sequence and recognizing the gesture. Existing techniques model each gesture as a temporal sequence of visual features extracted from individual frames, which is inefficient given the large variability of frames at different timestamps. In this paper, we propose a new sub-gesture modeling approach that represents each gesture as a sequence of fixed sub-gestures (groups of consecutive frames with locally coherent context) and provides robust modeling of the visual features. We further extend this approach to gesture spotting, where gesture boundaries are identified using a filler model and a gesture completion model. Experimental results show that the proposed method outperforms state-of-the-art Hidden Conditional Random Field (HCRF) based methods and baseline gesture spotting techniques.
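The filler-model idea behind spotting can be illustrated with a deliberately simplified sketch, not the paper's models: each frame is scored under a gesture model and under a catch-all filler model, and the longest run of frames where the gesture model wins is reported as the spotted gesture.

```python
def spot(scores_gesture, scores_filler):
    """Toy gesture spotting by gesture-vs-filler score comparison.

    scores_gesture, scores_filler: per-frame log-likelihoods under the
    gesture model and the filler (background) model.
    Returns (start, end) frame indices of the longest run where the
    gesture model wins, or None if the filler always dominates.
    """
    wins = [g > f for g, f in zip(scores_gesture, scores_filler)]
    best, start = None, None
    for i, w in enumerate(wins + [False]):  # sentinel closes a final run
        if w and start is None:
            start = i                        # run begins
        elif not w and start is not None:
            if best is None or (i - start) > (best[1] - best[0] + 1):
                best = (start, i - 1)        # run ends; keep if longest
            start = None
    return best
```

In the paper's setting the two scores would come from the learned sub-gesture and filler models rather than raw per-frame values, and a gesture completion model would additionally confirm that a full sub-gesture sequence was observed.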

