Abstract book - ICPR 2010

…optimization problem which returns a globally optimal solution. Experimental results on several databases show that the learned distance metric improves the performance of the subsequent classification and clustering algorithms.

09:20-09:40, Paper ThAT4.2
A Comparative Study on the Use of an Ensemble of Feature Extractors for the Automatic Design of Local Image Descriptors
Carneiro, Gustavo, Tech. Univ. of Lisbon

The use of an ensemble of feature spaces trained with distance metric learning methods has been shown empirically to be useful for the task of automatically designing local image descriptors. In this paper, we present a quantitative analysis which shows that, in general, nonlinear distance metric learning methods provide better results than linear methods for automatically designing local image descriptors. In addition, we show that the learned feature spaces yield better results than state-of-the-art hand-designed features in benchmark quantitative comparisons. We discuss the results and suggest relevant problems for further investigation.

09:40-10:00, Paper ThAT4.3
A Study on Combining Sets of Differently Measured Dissimilarities
Ibba, Alessandro, Delft Univ. of Tech.
Duin, Robert, Delft Univ. of Tech.
Lee, Wan-Jui, Delft Univ. of Tech.

The ways in which distances are computed or measured give rise to different representations of the same objects. In this paper we discuss possible ways of merging the different sources of information given by differently measured dissimilarity representations. We compare a simple averaging scheme [1] (sketched below) with dissimilarity forward selection and other techniques based on learning the weights of linear and quadratic forms. Our general conclusion is that, although the more advanced forms of combination do not always lead to better classification accuracies, combining the given distance matrices prior to training is always worthwhile. We can thereby suggest which combination schemes are preferable with respect to the problem data.
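The averaging baseline referred to above is straightforward to reproduce. What follows is a minimal sketch of that idea, assuming toy data, per-matrix mean normalization, and a leave-one-out 1-NN classifier on the combined dissimilarities; it is an illustration, not the authors' experimental setup.

# A minimal sketch of the simple averaging scheme: several dissimilarity
# matrices for the same objects are normalized and averaged before a
# classifier is trained on the combined representation. The toy data and
# the 1-NN classifier are assumptions for illustration.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # toy objects
y = (X[:, 0] > 0).astype(int)           # toy labels

# Two "differently measured" dissimilarity representations of the same objects.
D_euclid = cdist(X, X, metric="euclidean")
D_city = cdist(X, X, metric="cityblock")

def normalize(D):
    # Scale a dissimilarity matrix so matrices of different ranges are comparable.
    return D / D.mean()

# Combine the given distance matrices prior to training (simple average).
D_comb = (normalize(D_euclid) + normalize(D_city)) / 2

# Leave-one-out 1-nearest-neighbor classification directly on the combined
# dissimilarities: mask the zero self-distances on the diagonal.
np.fill_diagonal(D_comb, np.inf)
pred = y[np.argmin(D_comb, axis=1)]
print("LOO 1-NN accuracy on combined dissimilarities:", (pred == y).mean())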

10:00-10:20, Paper ThAT4.4
Efficient Kernel Learning from Constraints and Unlabeled Data
Soleymani Baghshah, Mahdieh, Sharif Univ. of Tech.
Bagheri Shouraki, Saeed, Sharif Univ. of Tech.

Recently, distance metric learning has received increasing attention and has been found to be a powerful approach for semi-supervised learning tasks. In the last few years, several methods have been proposed for metric learning when must-link and/or cannot-link constraints are available as supervisory information. Although many of these methods learn global Mahalanobis metrics, some recently introduced methods have tried to learn more flexible distance metrics using a kernel-based approach. In this paper, we consider the problem of kernel learning from both pairwise constraints and unlabeled data. We propose a method that adapts a flexible distance metric via learning a nonparametric kernel matrix. We formulate our method as an optimization problem that can be solved efficiently. Experimental evaluations show the effectiveness of our method compared to some recently introduced methods on a variety of data sets.
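The abstract does not give the paper's optimization formulation, but the general idea of learning a nonparametric kernel matrix from pairwise constraints can be illustrated with a simple generic scheme (not the authors' method): nudge the kernel entries of constrained pairs and project the result back onto the positive semidefinite cone. The RBF base kernel, the constraint pairs, and the step size gamma below are all assumptions.

# Generic illustration (not the paper's method) of adapting a base kernel
# with must-link / cannot-link constraints: similarities between must-link
# pairs are increased, cannot-link pairs decreased, and the result is
# projected back to a valid kernel by clipping negative eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))             # toy (mostly unlabeled) data

# Assumed RBF base kernel on the data.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / sq.mean())

must_link = [(0, 1), (2, 3)]            # pairs known to belong together
cannot_link = [(0, 4), (1, 5)]          # pairs known to be apart
gamma = 0.5                             # assumed constraint strength

K_adapted = K.copy()
for i, j in must_link:
    K_adapted[i, j] = K_adapted[j, i] = K_adapted[i, j] + gamma
for i, j in cannot_link:
    K_adapted[i, j] = K_adapted[j, i] = K_adapted[i, j] - gamma

# Project onto the set of positive semidefinite (valid kernel) matrices.
w, V = np.linalg.eigh(K_adapted)
K_psd = (V * np.clip(w, 0, None)) @ V.T
print("smallest eigenvalue after projection:", np.linalg.eigvalsh(K_psd).min())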

10:20-10:40, Paper ThAT4.5
Semi-Supervised Graph Learning: Near Strangers or Distant Relatives
Chen, Weifu, Sun Yat-sen Univ.
Feng, Guocan, Sun Yat-sen Univ.

In this paper, an easily implemented semi-supervised graph learning method is presented for dimensionality reduction and clustering, making the most of the prior knowledge available from limited pairwise constraints. We extend instance-level constraints to space-level constraints to construct a more meaningful graph. By decomposing the (normalized) Laplacian matrix of this graph and using the bottom eigenvectors, we obtain new representations of the data that are expected to capture its intrinsic structure. The proposed method improves on previous constrained learning methods. Furthermore, to achieve a given clustering accuracy, fewer constraints are required by our method. Experimental results demonstrate the advantages of the proposed method.
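The spectral step described in this abstract is standard and easy to sketch. Below is a minimal example, assuming toy data and a plain k-nearest-neighbor affinity graph; the paper's constraint-based (space-level) graph construction is not reproduced here.

# A minimal sketch of the spectral step: build an affinity graph, form its
# normalized Laplacian, and use the bottom eigenvectors as a new
# low-dimensional representation of the data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))            # toy data

# Symmetric k-nearest-neighbor affinity graph (binary weights for simplicity).
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
k = 5
W = np.zeros_like(sq)
nn = np.argsort(sq, axis=1)[:, 1:k + 1]  # skip self at position 0
for i, idx in enumerate(nn):
    W[i, idx] = 1.0
W = np.maximum(W, W.T)                   # symmetrize

# Normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

# The bottom eigenvectors (smallest eigenvalues) give the new representation.
w, V = np.linalg.eigh(L)                 # eigenvalues in ascending order
embedding = V[:, :3]                     # e.g. a 3-dimensional embedding
print("embedding shape:", embedding.shape)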
