Abstract book (pdf) - ICPR 2010
09:00-11:10, Paper WeAT9.6<br />
Human State Classification and Prediction for Critical Care Monitoring by Real-Time Bio-Signal Analysis<br />
Li, Xiaokun, DCM Res. Res. LLC<br />
Porikli, Fatih, MERL<br />
To address the challenges in critical care monitoring, we present a multi-modality bio-signal modeling and analysis<br />
framework for real-time human state classification and prediction. The novel bioinformatic framework is developed<br />
to solve the human state classification and prediction problem from two aspects: a) achieving a 1:1 mapping between the biosignal<br />
and the human state via discriminant feature analysis and selection using probabilistic principal component<br />
analysis (PPCA); b) avoiding time-consuming data analysis and extensive integration resources by using a Dynamic Bayesian<br />
Network (DBN). In addition, intelligent, automatic selection of the most suitable sensors from the bio-sensor array is<br />
integrated into the proposed DBN.<br />
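The abstract's feature-selection step rests on probabilistic PCA. As a minimal illustrative sketch (not the authors' implementation), the closed-form maximum-likelihood PPCA solution of Tipping and Bishop can be computed from the eigendecomposition of the sample covariance; the toy "bio-signal" data below is synthetic:

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form ML estimate for probabilistic PCA (Tipping & Bishop).

    X: (n_samples, d) data matrix; q: latent dimensionality.
    Returns the loading matrix W (d, q) and the noise variance sigma2.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)                      # center the data
    S = Xc.T @ Xc / n                            # sample covariance (d, d)
    evals, evecs = np.linalg.eigh(S)             # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending
    sigma2 = evals[q:].mean()                    # avg. discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

# Toy stand-in for bio-signal features: 3 informative latent dims + noise
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 3))
A = rng.normal(size=(3, 8))
X = Z @ A + 0.1 * rng.normal(size=(500, 8))
W, sigma2 = ppca_fit(X, q=3)
```

The recovered noise variance `sigma2` approximates the injected 0.01, and `W` spans the informative subspace, which is what makes PPCA usable for discriminant feature selection.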
09:00-11:10, Paper WeAT9.7<br />
Automated Cephalometric Landmark Identification using Shape and Local Appearance Models<br />
Keustermans, Johannes, K.U. Leuven<br />
Mollemans, Wouter, Medicim nv.<br />
Vandermeulen, Dirk<br />
Suetens, Paul, K.U.Leuven<br />
In this paper a method is presented for the automated identification of cephalometric anatomical landmarks in craniofacial<br />
cone-beam CT images. This method makes use of statistical models, incorporating both local appearance and shape knowledge<br />
obtained from training data. Firstly, the local appearance model captures the local intensity pattern around each<br />
anatomical landmark in the image. Secondly, the shape model contains a local and a global component. The former improves<br />
the flexibility, whereas the latter improves the robustness of the algorithm. Using a leave-one-out approach on the<br />
training data, we assess the overall accuracy of the method. The mean and median errors over all landmarks are<br />
2.55 mm and 1.72 mm, respectively.<br />
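The reported mean and median errors are Euclidean landmark localization errors. A minimal sketch of how such statistics are computed (the arrays and noise level below are hypothetical, not the paper's data):

```python
import numpy as np

def landmark_errors(pred, gt):
    """Euclidean localization error per landmark, in mm.

    pred, gt: (n_cases, n_landmarks, 3) arrays of 3-D coordinates.
    Returns an (n_cases, n_landmarks) array of distances.
    """
    return np.linalg.norm(pred - gt, axis=-1)

# Synthetic stand-in: 10 scans, 20 landmarks, ~2 mm isotropic noise
rng = np.random.default_rng(1)
gt = rng.uniform(0, 100, size=(10, 20, 3))
pred = gt + rng.normal(scale=2.0, size=gt.shape)
err = landmark_errors(pred, gt)
mean_err, median_err = err.mean(), np.median(err)
```

In a leave-one-out protocol, `pred` for each case would come from a model trained on all remaining cases, and the errors would be pooled as above.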
09:00-11:10, Paper WeAT9.8<br />
Color Analysis for Segmenting Digestive Organs in VCE<br />
Vu, Hai, The Inst. of Scientific and Industrial Res. Osaka<br />
Echigo, Tomio, Osaka Electro-Communication Univ.<br />
Yagi, Yasushi, Osaka Univ.<br />
Yagi, Keiko, Kobe Pharmaceutical Univ.<br />
Shiba, Masatsugu, Osaka City Univ.<br />
Higuchi, Kazuhide, Osaka City Univ.<br />
Arakawa, Tetsuo, Osaka City Univ.<br />
This paper presents an efficient method for automatically segmenting the digestive organs in a Video Capsule Endoscopy<br />
(VCE) sequence. The method is based on unique characteristics of color tones of the digestive organs. We first introduce<br />
a color model of the gastrointestinal (GI) tract containing the color components of GI wall and non-wall regions. Based<br />
on the wall regions extracted from images, the distribution along the time dimension for each color component is exploited<br />
to learn the dominant colors that are candidates for discriminating digestive organs. The strongest candidates are then<br />
combined to construct a representative signal to detect the boundary between two adjacent regions. The experimental results<br />
are comparable with previous works, while the computational cost is lower.<br />
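The core idea of detecting an organ boundary from a per-frame color signal can be sketched as follows; this is an illustrative simplification, assuming a 1-D feature (e.g. mean hue of wall pixels per frame) and a synthetic step-like sequence rather than the paper's actual signal construction:

```python
import numpy as np

def boundary_from_color_signal(signal, window=5):
    """Locate the frame where a per-frame color feature shifts most.

    Compares the mean of the `window` frames before each index with the
    mean of the `window` frames after it, and returns the index with the
    largest absolute shift.
    """
    s = np.asarray(signal, dtype=float)
    n = len(s)
    scores = [abs(s[i:i + window].mean() - s[i - window:i].mean())
              for i in range(window, n - window)]
    return window + int(np.argmax(scores))

# Synthetic sequence: stomach-like hue ~0.08 for 120 frames,
# then intestine-like hue ~0.15 for 120 frames
rng = np.random.default_rng(2)
sig = np.concatenate([0.08 + 0.005 * rng.normal(size=120),
                      0.15 + 0.005 * rng.normal(size=120)])
b = boundary_from_color_signal(sig)   # near frame 120
```

Windowed averaging suppresses per-frame noise, so the score peaks where the dominant color genuinely changes rather than at isolated outlier frames.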
09:00-11:10, Paper WeAT9.9<br />
A New Application of MEG and DTI on Word Recognition<br />
Meng, Lu, Northeastern Univ.<br />
Xiang, Jing, CCHMC<br />
Zhao, Hong, Northeastern Univ.<br />
Zhao, Dazhe, Northeastern Univ.<br />
This paper presents a novel application of magnetoencephalography (MEG) and diffusion tensor imaging (DTI) to word<br />
recognition, in which the spatiotemporal signature and the neural network of brain activation associated with word recognition<br />
were investigated. The word stimuli consisted of matched and mismatched words, which were visually and acousti-<br />