Abstract book (pdf) - ICPR 2010
14:50-15:10, Paper WeBT7.5
Gaussian ERP Kernel Classifier for Pulse Waveforms Classification
Zuo, Wangmeng, Harbin Inst. of Tech.
Zhang, Dongyu, Harbin Inst. of Tech.
Zhang, David, The Hong Kong Pol. Univ.
Wang, Kuanquan, Harbin Inst. of Tech.
Li, Naimin, Harbin Inst. of Tech.
While advances in sensor and signal processing techniques have provided effective tools for quantitative research on traditional Chinese pulse diagnosis (TCPD), the automatic classification of pulse waveforms remains a difficult problem. To address this issue, this paper proposes a novel edit distance with real penalty (ERP)-based k-nearest neighbors (KNN) classifier, drawing on recent progress in time series matching and KNN classification. Taking advantage of the metric property of ERP, we first develop a Gaussian ERP kernel and then embed it into a kernel difference-weighted KNN classifier. The proposed Gaussian ERP kernel classifier is evaluated on a dataset of 2470 pulse waveforms. Experimental results show that the proposed classifier is considerably more accurate than several other pulse waveform classification approaches.
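As a rough illustration of the ingredients the abstract names, the following sketch computes the ERP distance between two 1-D sequences and a Gaussian kernel built on it. The function names, the gap reference value g, and the sigma parameterization are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def erp_distance(a, b, g=0.0):
    """Edit Distance with Real Penalty (ERP) between two 1-D sequences.

    Gaps are penalised against a fixed reference value g (commonly 0),
    which, unlike DTW, makes ERP a true metric.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        D[i, 0] = D[i - 1, 0] + abs(a[i - 1] - g)
    for j in range(1, m + 1):
        D[0, j] = D[0, j - 1] + abs(b[j - 1] - g)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(
                D[i - 1, j - 1] + abs(a[i - 1] - b[j - 1]),  # match
                D[i - 1, j] + abs(a[i - 1] - g),             # gap in b
                D[i, j - 1] + abs(b[j - 1] - g),             # gap in a
            )
    return D[n, m]

def gaussian_erp_kernel(a, b, sigma=1.0):
    """Gaussian kernel on top of the ERP metric (assumed form)."""
    d = erp_distance(a, b)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))
```

Because ERP satisfies the triangle inequality, a Gaussian function of it behaves like a conventional distance-based kernel, which is what allows its use inside a kernelized difference-weighted KNN classifier.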
WeCT1 Marmara Hall
Tracking and Surveillance - IV Regular Session
Session chair: Carneiro, Gustavo (Technical Univ. of Lisbon)
15:40-16:00, Paper WeCT1.1
Human 3D Motion Recognition based on Spatial-Temporal Context of Joints
Zhao, Qiong, Univ. of Science and Tech. of China
Wang, Lihua, City Univ. of Hong Kong
Ip, Horace,
Zhou, Xuehai, Univ. of Science and Tech. of China
This paper presents a novel human motion recognition method based on a new form of Hidden Markov Model, called the spatial-temporal hidden Markov model (ST-HMM), which can be learnt from a sequence of joint positions. To cope with the high dimensionality of the pose space, we exploit the spatial dependency between each pair of spatially connected joints in the articulated skeletal structure, as well as the temporal dependency due to the continuous movement of each joint. The spatial-temporal contexts of these joints are learnt from the sequences of joint movements and captured by our ST-HMM. Results of recognizing 11 different action classes on a large number of motion capture sequences, as well as synthetic tracking data, show that our approach outperforms the traditional HMM approach in terms of robustness and recognition rates.
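The ST-HMM structure itself is specific to the paper, but the traditional HMM baseline it is compared against scores an observation sequence with the forward algorithm. A minimal sketch for a discrete-observation HMM (function name and scaling scheme are illustrative assumptions):

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.

    obs: sequence of observation symbol indices
    pi : (S,) initial state distribution
    A  : (S, S) transition matrix, A[i, j] = P(state j | state i)
    B  : (S, O) emission matrix, B[i, k] = P(symbol k | state i)
    Returns log P(obs | model); per-step scaling avoids underflow
    on long sequences.
    """
    alpha = pi * B[:, obs[0]]          # initial forward variables
    log_like = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()                # scaling constant
        log_like += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]  # propagate and emit
    log_like += np.log(alpha.sum())
    return log_like
```

Classification with HMMs then amounts to training one model per action class and assigning a test sequence to the class whose model gives the highest likelihood; the ST-HMM extends this by modelling spatial dependencies between connected joints as well.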
16:00-16:20, Paper WeCT1.2
Matching Groups of People by Covariance Descriptor
Cai, Yinghao, Univ. of Oulu
Takala, Valtteri, Univ. of Oulu
Pietikäinen, Matti, Univ. of Oulu
In this paper, we present a new solution to the problem of matching groups of people across multiple non-overlapping cameras. As with matching individuals across cameras, matching groups of people faces challenges such as variations in illumination conditions, poses and camera parameters. Moreover, people often swap their positions while walking in a group. We propose to use the covariance descriptor for appearance matching of group images. The covariance descriptor is shown to be a discriminative descriptor that captures both the appearance and the statistical properties of image regions. Furthermore, it offers a natural way of combining multiple heterogeneous features with relatively low dimensionality. Experimental results on two different datasets demonstrate the effectiveness of the proposed method.
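For concreteness, a minimal sketch of a region covariance descriptor in the style of Tuzel et al. and the standard generalized-eigenvalue distance between covariance matrices. The particular feature vector (position, intensity, gradient magnitudes) and function names are assumptions, not the exact features used in the paper:

```python
import numpy as np

def region_covariance(image):
    """Region covariance descriptor for a grey-level image region.

    Each pixel is mapped to an (assumed) feature vector
        f = [x, y, I, |Ix|, |Iy|]
    and the descriptor is the 5x5 covariance of these vectors
    over the region.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    Iy, Ix = np.gradient(image.astype(float))  # gradients along y, x
    feats = np.stack(
        [xs.ravel(), ys.ravel(), image.astype(float).ravel(),
         np.abs(Ix).ravel(), np.abs(Iy).ravel()], axis=0)
    return np.cov(feats)  # rows are variables -> 5x5 matrix

def covariance_distance(C1, C2):
    """Förstner-Moonen metric: sqrt of the sum of squared logs of the
    generalized eigenvalues of (C1, C2)."""
    eigvals = np.linalg.eigvals(np.linalg.solve(C1, C2)).real
    eigvals = np.clip(eigvals, 1e-12, None)  # guard against round-off
    return np.sqrt(np.sum(np.log(eigvals) ** 2))
```

Because covariance matrices do not live in a Euclidean space, comparing them requires a distance of this kind; the descriptor's appeal, as the abstract notes, is that heterogeneous features combine into a single small matrix regardless of region size.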
16:20-16:40, Paper WeCT1.3
Boosting Incremental Semi-Supervised Discriminant Analysis for Tracking
Wang, Heng, Chinese Acad. of Sciences
Hou, Xinwen, Chinese Acad. of Sciences