Abstract book (pdf) - ICPR 2010
13:50-14:10, Paper TuBT4.2<br />
Localized Multiple Kernel Regression<br />
Gönen, Mehmet, Bogazici Univ.<br />
Alpaydin, Ethem, Bogazici Univ.<br />
Multiple kernel learning (MKL) uses a weighted combination of kernels where the weight of each kernel is optimized<br />
during training. However, MKL assigns the same weight to a kernel over the whole input space. Our main objective is the<br />
formulation of the localized multiple kernel learning (LMKL) framework that allows kernels to be combined with different<br />
weights in different regions of the input space by using a gating model. In this paper, we apply the LMKL framework to<br />
regression estimation and derive a learning algorithm for this extension. Canonical support vector regression may overfit<br />
unless the kernel parameters are selected appropriately; we see that even if we provide more kernels than necessary, LMKL<br />
uses only as many as needed and does not overfit, thanks to its inherent regularization.<br />
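The data-dependent kernel combination described above can be sketched as follows; this is only an illustrative sketch, and the softmax gating parameters `V` and `b` are hypothetical stand-ins for whatever gating model is learned during training:

```python
import numpy as np

def softmax_gate(x, V, b):
    """Softmax gating model eta_m(x); V and b are hypothetical gating parameters."""
    s = x @ V + b                      # one score per base kernel
    e = np.exp(s - s.max())            # stabilized softmax
    return e / e.sum()

def locally_combined_kernel(x1, x2, kernels, V, b):
    """Combine base kernels with data-dependent weights:
    k(x1, x2) = sum_m eta_m(x1) * k_m(x1, x2) * eta_m(x2)."""
    eta1 = softmax_gate(x1, V, b)
    eta2 = softmax_gate(x2, V, b)
    return sum(eta1[m] * k(x1, x2) * eta2[m] for m, k in enumerate(kernels))
```

Because each base kernel is weighted by the gate value at both inputs, the combined function stays symmetric and positive semidefinite whenever the base kernels are.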
14:10-14:30, Paper TuBT4.3<br />
Probabilistic Clustering using the Baum-Eagon Inequality<br />
Rota Bulo’, Samuel, Univ. Ca’ Foscari di Venezia<br />
Pelillo, Marcello, Ca’ Foscari Univ.<br />
The paper introduces a framework for clustering data objects in a similarity-based context. The aim is to cluster objects<br />
into a given number of classes without imposing a hard partition, but allowing for a soft assignment of objects to clusters.<br />
Our approach uses the assumption that similarities reflect the likelihood of the objects belonging to the same class in order to<br />
derive a probabilistic model for estimating the unknown cluster assignments. This leads to a polynomial optimization problem in the<br />
probability domain, which is tackled by means of a result due to Baum and Eagon. Experiments on both synthetic and real<br />
standard datasets show the effectiveness of our approach.<br />
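The Baum-Eagon inequality states that, for a homogeneous polynomial P with nonnegative coefficients over the probability simplex, the growth transform x_i ← x_i (∂P/∂x_i) / Σ_j x_j (∂P/∂x_j) maps the simplex into itself and never decreases P. A minimal sketch for the quadratic case P(x) = xᵀAx with a nonnegative similarity matrix A (the paper's actual probabilistic model may differ):

```python
import numpy as np

def baum_eagon_step(x, A):
    """One Baum-Eagon growth transform for P(x) = x^T A x with A >= 0.
    x lies on the probability simplex; the update never decreases P."""
    grad = 2.0 * A @ x            # partial derivatives of P
    y = x * grad                  # x_i * dP/dx_i
    return y / y.sum()            # renormalize onto the simplex
```

Iterating this step yields a simple ascent scheme for the polynomial objective without any step-size tuning.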
14:30-14:50, Paper TuBT4.4<br />
Ensemble Clustering via Random Walker Consensus Strategy<br />
Abdala, Daniel Duarte, Univ. of Münster<br />
Wattuya, Pakaket, Univ. of Münster<br />
Jiang, Xiaoyi, Univ. of Münster<br />
In this paper we present the adaptation of a random walker algorithm, originally designed for combining image segmentations,<br />
to clustering problems. To achieve this, we pre-process the ensemble of clusterings to generate its graph representation.<br />
We show experimentally that a very small neighborhood produces results similar to those of larger choices.<br />
This fact alone reduces the computational time needed to produce the final consensus clustering. We also present an experimental<br />
comparison of our results against other well-known graph-based clustering combination methods in<br />
order to assess the quality of this approach.<br />
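One standard graph representation of a clustering ensemble is the co-association matrix, whose edge weights record how often two objects are placed in the same cluster across the ensemble; the paper's exact pre-processing may differ, so this is only an illustrative sketch:

```python
import numpy as np

def coassociation_matrix(ensemble):
    """Fraction of base clusterings in which each pair of objects
    shares a cluster. `ensemble` is a list of label arrays."""
    n = len(ensemble[0])
    C = np.zeros((n, n))
    for labels in ensemble:
        labels = np.asarray(labels)
        # pairwise indicator: 1 where objects i and j share a label
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(ensemble)
```

The resulting symmetric matrix can then serve as the weighted adjacency matrix on which a graph-based consensus method, such as a random walker, operates.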
14:50-15:10, Paper TuBT4.5<br />
Bhattacharyya Clustering with Applications to Mixture Simplifications<br />
Nielsen, Frank, Ecole Polytechnique/SONY CLS<br />
Boltz, Sylvain, Ecole Polytechnique/SONY CLS<br />
Schwander, Olivier, Ecole Polytechnique/SONY CLS<br />
The Bhattacharyya distance (BD) is a widely used distance in statistics to compare probability density functions (PDFs). It has<br />
strong statistical properties (in terms of Bayes error) and it relates to Fisher information. It also has practical advantages,<br />
since it essentially measures the overlap of the supports of the PDFs. Unfortunately, even for common<br />
parametric models of PDFs, few closed-form formulas are known. Moreover, BD centroid estimation in the literature was limited to<br />
univariate Gaussian PDFs, and no convergence guarantees were provided. In this paper, we propose a<br />
closed-form formula for the BD on a general class of parametric distributions named exponential families. We show that the<br />
BD is a Burbea-Rao divergence for the log-normalizer of the exponential family. We propose an efficient iterative scheme<br />
to compute a BD centroid on exponential families. Finally, these results allow us to define a Bhattacharyya hierarchical<br />
clustering algorithm (BHC), which can be viewed as a generalization of k-means on the BD. Results on image segmentation<br />
show the stability of the method.<br />
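For an exponential family with log-normalizer F, the Bhattacharyya coefficient ∫√(pq) dx factorizes, which gives the closed form B(θ_p, θ_q) = (F(θ_p) + F(θ_q))/2 − F((θ_p + θ_q)/2), i.e., a Burbea-Rao (Jensen) divergence of F. A minimal sketch for the univariate Gaussian family (function names are illustrative):

```python
import numpy as np

def gaussian_natural(mu, var):
    """Natural parameters of a univariate Gaussian as an exponential family."""
    return np.array([mu / var, -1.0 / (2.0 * var)])

def log_normalizer(theta):
    """Log-normalizer F of the univariate Gaussian family."""
    t1, t2 = theta
    return -t1 ** 2 / (4.0 * t2) + 0.5 * np.log(-np.pi / t2)

def bhattacharyya_distance(theta_p, theta_q):
    """BD as the Burbea-Rao (Jensen) divergence of F:
    B = (F(theta_p) + F(theta_q)) / 2 - F((theta_p + theta_q) / 2)."""
    return (log_normalizer(theta_p) + log_normalizer(theta_q)) / 2.0 \
        - log_normalizer((theta_p + theta_q) / 2.0)
```

For Gaussians this Jensen divergence of F coincides with the textbook closed-form Bhattacharyya distance B = (μ1−μ2)²/(4(σ1²+σ2²)) + ½ ln((σ1²+σ2²)/(2σ1σ2)).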