Abstract book (pdf) - ICPR 2010

transformed into a polar histogram. Finally, we use these local descriptors to detect and locate similar objects within arbitrary images. The experimental results show that the proposed method outperforms the existing state-of-the-art work.

10:20-10:40, Paper TuAT1.5

Inverse Multiple Instance Learning for Classifier Grids

Sternig, Sabine, Graz Univ. of Tech.

Roth, Peter M., Graz Univ. of Tech.

Bischof, Horst, Graz Univ. of Tech.

Recently, classifier grids have been shown to be a viable alternative for object detection from static cameras. However, one drawback of such approaches is drifting if an object does not move over a long period of time. Thus, the goal of this work is to increase the recall of such classifiers while preserving their accuracy and speed. In particular, this is realized by adapting ideas from Multiple Instance Learning within a boosting framework. Since the set of positive samples is well defined, we apply this concept to the negative samples extracted from the scene: Inverse Multiple Instance Learning. By introducing temporal bags, we can ensure that each bag contains at least one sample having a negative label, providing the required stability. The experimental results demonstrate that the proposed approach obtains state-of-the-art detection results while showing superior classification performance in the presence of non-moving objects.
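The temporal-bag idea can be illustrated with a minimal sketch: patches cropped from one grid cell over consecutive frames are grouped into fixed-length bags, so that, under the assumption that no object stays static for a whole bag, each bag contains at least one true background sample. The function name `temporal_bags` and the grouping details are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def temporal_bags(frame_patches, bag_size=5):
    """Group patches from the same grid cell over consecutive frames
    into temporal bags (hypothetical sketch of the idea in TuAT1.5).

    If no object occupies the cell for bag_size consecutive frames,
    each bag is guaranteed to contain at least one true negative
    (background) sample, which is what Inverse MIL relies on.
    """
    bag = deque(maxlen=bag_size)
    bags = []
    for patch in frame_patches:
        bag.append(patch)
        if len(bag) == bag_size:
            bags.append(list(bag))  # emit a complete bag
            bag.clear()
    return bags  # incomplete trailing patches are discarded
```

A MIL-style boosting learner would then treat each returned bag as a unit whose label constraint is "at least one negative inside".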

TuAT2 Topkapı Hall B

Clustering Regular Session

Session chair: Tasdizen, Tolga (Univ. of Utah)

09:00-09:20, Paper TuAT2.1

On Dynamic Weighting of Data in Clustering with K-Alpha Means

Chen, Si-Bao, Anhui Univ.

Wang, Hai-Xian, Southeast Univ.

Luo, Bin, Anhui Univ.

Although many methods of refining initialization have appeared, the sensitivity of K-Means to initial centers remains an obstacle in applications. In this paper, we investigate a new class of clustering algorithm, K-Alpha Means (KAM), which is insensitive to the initial centers. With K-Harmonic Means as a special case, KAM dynamically weights data points while iteratively updating centers, deemphasizing data points that are close to centers and emphasizing data points that are not close to any center. By replacing the minimum operator in K-Means with the alpha-mean operator, KAM significantly improves clustering performance.
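One center-update step of such a dynamically weighted scheme can be sketched as follows. This is a minimal reconstruction assuming the alpha-mean is a power mean over point-to-center distances; the function name `kam_like_update` and the exact membership and weight formulas are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def kam_like_update(X, centers, alpha=-2.0, eps=1e-12):
    """One center-update step of a KAM-style weighted K-Means (sketch).

    The hard min over center distances is replaced by a power
    ("alpha") mean; with a negative alpha this resembles the
    K-Harmonic-Means weighting that KAM contains as a special case.
    """
    # pairwise point-to-center distances, shape (n, k)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # soft membership: closer centers receive more of each point
    m = d ** (alpha - 2)
    m /= m.sum(axis=1, keepdims=True)
    # dynamic point weight via the alpha-mean of distances:
    # points not close to any center get a larger alpha-mean,
    # hence more influence on the update
    am = np.mean(d ** alpha, axis=1) ** (1.0 / alpha)
    w = am / am.sum()
    # weighted center update
    num = (w[:, None] * m).T @ X                      # (k, dim)
    den = (w[:, None] * m).sum(axis=0)[:, None]       # (k, 1)
    return num / den
```

Iterating this update from a poor initialization is what the dynamic weighting is meant to stabilize, since no point is ever assigned "hard" to a single center.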

09:20-09:40, Paper TuAT2.2

ARImp: A Generalized Adjusted Rand Index for Cluster Ensembles

Zhang, Shaohong, City Univ. of Hong Kong

Wong, Hau-San, City Univ. of Hong Kong

The Adjusted Rand Index (ARI) is one of the most popular measures for evaluating the consistency between two partitions of a data set in the area of pattern recognition. In this paper, ARI is generalized to a new measure, the Adjusted Rand Index between a similarity matrix and a cluster partition (ARImp), to evaluate the consistency between a set of clustering solutions (or cluster partitions) and their associated consensus matrix in a cluster ensemble. The generalization of ARImp from ARI is proved, and its preservation of the desirable properties of ARI is illustrated with simulated experiments. We also show, with application experiments on several real data sets, that ARImp can serve as a filter to identify less effective cluster ensemble methods.
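For reference, the standard ARI that ARImp generalizes can be computed directly from the textbook formula; this is the classical definition, not the paper's ARImp code.

```python
from collections import Counter
from math import comb

def adjusted_rand_index(a, b):
    """Classical Adjusted Rand Index between two labelings a and b.

    ARI = (Index - ExpectedIndex) / (MaxIndex - ExpectedIndex),
    where the index counts agreeing pairs via the contingency table.
    """
    n = len(a)
    pair_counts = Counter(zip(a, b))          # contingency-table cells
    row_counts, col_counts = Counter(a), Counter(b)
    sum_ab = sum(comb(v, 2) for v in pair_counts.values())
    sum_a = sum(comb(v, 2) for v in row_counts.values())
    sum_b = sum(comb(v, 2) for v in col_counts.values())
    total = comb(n, 2)
    expected = sum_a * sum_b / total
    max_index = (sum_a + sum_b) / 2
    return (sum_ab - expected) / (max_index - expected)
```

ARI is 1 for identical partitions (up to label permutation) and is near 0 in expectation for independent random labelings, which is the chance-correction property ARImp is designed to preserve.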

09:40-10:00, Paper TuAT2.3

On the Scalability of Evidence Accumulation Clustering

Lourenço, André, Inst. Superior de Engenharia de Lisboa (ISEL), Inst. Superior Técnico (IST), IT

Fred, Ana Luisa Nobre, Inst. Superior Técnico

Jain, Anil, Michigan State Univ.

This work focuses on the scalability of the Evidence Accumulation Clustering (EAC) method. We first address the space