
Abstract book (pdf) - ICPR 2010


In this paper, we propose a new algorithm for Brain-Computer Interfaces (BCI): Spatially Regularized Common Spatial Patterns (SRCSP). SRCSP extends the well-known CSP algorithm by incorporating spatial prior knowledge into the learning process, through a regularization term that penalizes spatially non-smooth filters. We compared the SRCSP and CSP algorithms on data from 14 subjects of BCI competitions. Results suggest that SRCSP can improve performance, by around 10% in classification accuracy, for subjects with poor CSP performance. They also suggest that SRCSP leads to more physiologically relevant filters than CSP.
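The abstract does not give the exact formulation, but the general idea can be illustrated. The Python sketch below assumes the spatial penalty enters the denominator of the CSP Rayleigh quotient as a quadratic term; the matrix P, the weight lam, and the function name srcsp_filters are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def srcsp_filters(C1, C2, P, lam=0.1, n_filters=3):
    """Illustrative spatially regularized CSP (not the paper's exact method).

    C1, C2 : class-conditional spatial covariance matrices (channels x channels)
    P      : spatial penalty matrix, e.g. a graph Laplacian built from electrode
             distances, so that w.T @ P @ w is large for non-smooth filters
    lam    : regularization weight (free parameter, value arbitrary here)
    """
    # Maximize w^T C1 w / (w^T (C2 + lam * P) w) via a generalized
    # eigenvalue problem; large eigenvalues correspond to filters that
    # favour class 1 while remaining spatially smooth.
    eigvals, eigvecs = eigh(C1, C2 + lam * P)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_filters]]
```

In practice, CSP-style pipelines keep filters from both ends of the eigenvalue spectrum, one set per class; the sketch returns only one end for brevity.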

09:00-11:10, Paper ThAT9.16
Comparing Multiple Classifiers for Speech-Based Detection of Self-Confidence – A Pilot Study
Krajewski, Jarek, Univ. of Wuppertal
Batliner, Anton, Univ. of Erlangen-Nuremberg
Kessel, Silke, Univ. of Wuppertal

The aim of this study is to compare several classifiers commonly used in the field of speech emotion recognition (SER) for the speech-based detection of self-confidence. A standard acoustic feature set was computed, resulting in 170 features per one-minute speech sample (e.g. fundamental frequency, intensity, formants, MFCCs). In order to identify speech correlates of self-confidence, the lectures of 14 female participants were recorded, resulting in 306 one-minute segments of speech. Five expert raters independently assessed the self-confidence impression. Several classification models (e.g. Random Forest, Support Vector Machine, Naive Bayes, Multi-Layer Perceptron) and ensemble classifiers (AdaBoost, Bagging, Stacking) were trained. AdaBoost procedures achieved the best performance, both for single models (AdaBoost LR: 75.2% class-wise averaged recognition rate) and for average boosting (59.3%) in speaker-independent settings.
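As an illustration of the speaker-independent protocol described above, the following Python sketch trains an AdaBoost classifier with a logistic-regression base learner and scores it with a class-wise averaged (balanced) recognition rate. The variables X, y, and speakers are placeholders, not the study's data, and the hyperparameters are arbitrary.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def speaker_independent_uar(X, y, speakers):
    """X: (n_segments, 170) acoustic features, y: self-confidence labels,
    speakers: speaker id per segment -- all placeholders for illustration."""
    # AdaBoost with a logistic-regression base learner ("AdaBoost LR" style)
    clf = AdaBoostClassifier(LogisticRegression(max_iter=1000), n_estimators=50)
    cv = LeaveOneGroupOut()  # hold out all segments of one speaker per fold
    # balanced accuracy corresponds to the class-wise averaged recognition rate
    scores = cross_val_score(clf, X, y, groups=speakers, cv=cv,
                             scoring="balanced_accuracy")
    return scores.mean()
```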

09:00-11:10, Paper ThAT9.17
Hierarchical Human Action Recognition by Normalized-Polar Histogram
Ziaeefard, Maryam, Sahand Univ. of Tech.
Ebrahimnezhad, Hossein, Sahand Univ. of Tech.

This paper proposes a novel human action recognition approach which represents each video sequence by a cumulative skeletonized image (CSI) over one action cycle. A normalized-polar histogram is computed for each CSI, i.e. the number of CSI pixels that fall within given distance and angle bins of the normalized circle. Human actions are recognized by hierarchical classification in two levels. In the first level, coarse classification is performed using all bins of the histogram. In the second level, the more similar actions are examined again using selected bins, and the fine classification is completed. We use a linear multi-class SVM as the classifier in both steps. The Weizmann real human action dataset is used for evaluation. The resulting average recognition rate of the proposed method is 97.6%.
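A minimal Python sketch of a normalized-polar histogram over a cumulative skeletonized image is given below. The bin counts, centering on the pixel centroid, and radius normalization are assumptions made for illustration, not the paper's exact construction.

```python
import numpy as np

def normalized_polar_histogram(csi, n_radial=5, n_angular=12):
    """Illustrative normalized-polar histogram of a cumulative skeletonized
    image (CSI); csi is a 2D binary array of accumulated skeleton pixels."""
    ys, xs = np.nonzero(csi)                 # coordinates of skeleton pixels
    dy, dx = ys - ys.mean(), xs - xs.mean()  # centre on the pixel centroid
    r = np.hypot(dx, dy)
    r = r / r.max()                          # normalize radius to [0, 1]
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    # count pixels per (distance, angle) bin, then normalize to sum to 1
    hist, _, _ = np.histogram2d(r, theta, bins=[n_radial, n_angular],
                                range=[[0.0, 1.0], [0.0, 2 * np.pi]])
    return (hist / hist.sum()).ravel()
```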

09:00-11:10, Paper ThAT9.18
Automatic 3D Facial Expression Recognition based on a Bayesian Belief Net and a Statistical Facial Feature Model
Zhao, Xi, Ec. Centrale de Lyon
Huang, Di, Ec. Centrale de Lyon
Dellandréa, Emmanuel, Ec. Centrale de Lyon
Chen, Liming, Ec. Centrale de Lyon

Automatic facial expression recognition on 3D face data is still a challenging problem. In this paper we propose a novel approach to perform expression recognition automatically and flexibly by combining a Bayesian Belief Net (BBN) and Statistical Facial Feature Models (SFAM). A novel BBN is designed for this specific problem with our proposed parameter computation method. By learning global variations in face landmark configuration (morphology) and local ones in terms of texture and shape around landmarks, the morphable Statistical Facial Feature Model (SFAM) enables not only automatic landmarking but also computation of the beliefs that feed the BBN. Tested on the public 3D facial expression database BU-3DFE, our automatic approach successfully recognizes expressions, reaching an average recognition rate of over 82%.
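The belief-combination step can be sketched as follows. This Python fragment simplifies the paper's BBN to a naive conditional-independence assumption over per-landmark likelihoods, and the SFAM fitting and landmarking stages are not shown, so it is illustrative only.

```python
import numpy as np

def expression_posterior(landmark_beliefs, prior=None):
    """landmark_beliefs: (n_landmarks, n_expressions) array of likelihoods
    P(local observation | expression), e.g. obtained by fitting a statistical
    facial feature model around each landmark (placeholder input)."""
    n_expr = landmark_beliefs.shape[1]
    prior = np.full(n_expr, 1.0 / n_expr) if prior is None else np.asarray(prior)
    # Combine beliefs in log space, assuming conditional independence given
    # the expression (a naive simplification of the paper's BBN structure).
    log_post = np.log(prior) + np.log(landmark_beliefs + 1e-12).sum(axis=0)
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()
```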

