Abstract book (pdf) - ICPR 2010
09:40-10:00, Paper TuAT5.3<br />
Adding Affine Invariant Geometric Constraint for Partial-Duplicate Image Retrieval<br />
Wu, Zhipeng, Chinese Acad. of Sciences<br />
Xu, Qianqian, Chinese Acad. of Sciences<br />
Jiang, Shuqiang, Chinese Acad. of Sciences<br />
Huang, Qingming, Chinese Acad. of Sciences<br />
Cui, Peng, Chinese Acad. of Sciences<br />
Li, Liang, Chinese Acad. of Sciences<br />
The proliferation of partial-duplicate images on the internet poses a new challenge to image retrieval<br />
systems. Rather than taking the image as a whole, researchers bundle the local visual words within MSER regions into groups<br />
and add a simple relative-ordering geometric constraint to the bundles. Experiments show that bundled features are<br />
much more discriminative than single features. However, this weak geometric constraint is only applicable when there is<br />
no significant rotation between duplicate images, and it cannot handle image flips or large rotation<br />
transformations. In this paper, we improve the bundled features with an affine-invariant geometric constraint. It employs<br />
the area-ratio invariance property of affine transformations to build an affine-invariant matrix for the bundled visual words. Such<br />
an affine-invariant geometric constraint copes well with flips, rotations, and other transformations. Experimental results on<br />
an internet partial-duplicate image database verify the improvement it brings over the original bundled-features approach. Since<br />
there is currently no publicly available corpus for partial-duplicate image retrieval, we also publish our dataset for future<br />
studies.<br />
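The area-ratio invariance the abstract relies on can be checked directly: an affine map x ↦ Ax + t scales every area by det(A), so the ratio of any two areas is unchanged. A minimal sketch (the matrix, translation, and triangle coordinates are arbitrary illustrative values, not from the paper):

```python
import numpy as np

def tri_area(p, q, r):
    """Signed area of the triangle (p, q, r) via the 2-D cross product."""
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

# An arbitrary affine map x -> A @ x + t.
A = np.array([[1.3, 0.4],
              [-0.2, 0.9]])
t = np.array([5.0, -2.0])

def affine(p):
    return A @ p + t

# Two triangles formed from feature points (hypothetical coordinates).
t1 = [np.array([0.0, 0.0]), np.array([4.0, 1.0]), np.array([1.0, 3.0])]
t2 = [np.array([2.0, 2.0]), np.array([6.0, 0.0]), np.array([3.0, 5.0])]

ratio_before = tri_area(*t1) / tri_area(*t2)
ratio_after = tri_area(*map(affine, t1)) / tri_area(*map(affine, t2))

# Each signed area is scaled by det(A), so the ratio is preserved.
assert np.isclose(ratio_before, ratio_after)
assert np.isclose(tri_area(*map(affine, t1)) / tri_area(*t1), np.linalg.det(A))
```

This invariant is what lets a matrix of area ratios over bundled feature points survive flips and large rotations, where relative-ordering constraints break down.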
10:00-10:20, Paper TuAT5.4<br />
Outlier-Resistant Dissimilarity Measure for Feature-Based Image Matching<br />
Palenichka, Roman, Univ. of Quebec<br />
Lakhssassi, Ahmed, Univ. of Quebec<br />
Zaremba, Marek, Univ. of Quebec<br />
A novel dissimilarity measure is proposed to perform correspondence image matching for object recognition, image registration,<br />
and content-based image retrieval. This is a feature-based matching that assumes an image representation (object<br />
description) in the form of a set of multi-location descriptor vectors. The proposed measure, called the intersection matching<br />
distance, eliminates outliers (false or missing feature points) while matching two sets of descriptor<br />
vectors in a transformation-invariant manner. A block-subdivision algorithm for time-efficient image matching is also described.<br />
10:20-10:40, Paper TuAT5.5<br />
The University of Surrey Visual Concept Detection System at ImageCLEF@<strong>ICPR</strong>: Working Notes<br />
Tahir, Muhammad Atif, Univ. of Surrey<br />
Fei, Yan, Univ. of Surrey<br />
Barnard, Mark, Univ. of Surrey<br />
Awais, Muhammad, Univ. of Surrey<br />
Mikolajczyk, Krystian, Univ. of Surrey<br />
Kittler, Josef, Univ. of Surrey<br />
Visual concept detection is one of the most important tasks in image and video indexing. This paper describes our system<br />
in the ImageCLEF@<strong>ICPR</strong> Visual Concept Detection Task, which ranked <em>first</em> for large-scale visual concept detection<br />
tasks in terms of Equal Error Rate (EER) and Area under Curve (AUC) and ranked <em>third</em> in terms of the hierarchical<br />
measure. The presented approach involves state-of-the-art local descriptor computation, vector quantisation via clustering,<br />
structured scene or object representation via localised histograms of vector codes, similarity measure for kernel construction<br />
and classifier learning. The main novelty is the classifier-level and kernel-level fusion using Kernel Discriminant Analysis<br />
with RBF/Power Chi-Squared kernels obtained from various image descriptors. For 32 out of 53 individual concepts, we<br />
obtain the best performance of all 12 submissions to this task.<br />
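The kernel-level fusion step the abstract describes can be sketched generically: build one chi-squared kernel matrix per descriptor channel, then combine the matrices. This is an illustration of the general technique, not the authors' system; the descriptor histograms, the value of gamma, and the equal weighting are all assumptions:

```python
import numpy as np

def chi2_kernel(X, Y, gamma=1.0):
    """exp(-gamma * chi^2) kernel between rows of X and Y (histograms)."""
    d = X[:, None, :] - Y[None, :, :]
    s = X[:, None, :] + Y[None, :, :]
    # chi^2 distance: sum_i (x_i - y_i)^2 / (x_i + y_i), with 0/0 -> 0.
    chi2 = np.where(s > 0, d * d / np.where(s > 0, s, 1.0), 0.0).sum(-1)
    return np.exp(-gamma * chi2)

rng = np.random.default_rng(0)
# Two hypothetical descriptor channels (e.g. bag-of-words histograms from
# different local descriptors), 5 images each, L1-normalised.
X1 = rng.random((5, 8)); X1 /= X1.sum(1, keepdims=True)
X2 = rng.random((5, 8)); X2 /= X2.sum(1, keepdims=True)

# Kernel-level fusion: average the per-descriptor kernel matrices.
K = 0.5 * (chi2_kernel(X1, X1) + chi2_kernel(X2, X2))

assert K.shape == (5, 5)
assert np.allclose(K, K.T)           # fused kernel stays symmetric
assert np.allclose(np.diag(K), 1.0)  # chi^2 distance of x to itself is 0
```

An averaged sum of valid kernels is itself a valid kernel, so the fused matrix can be passed directly to a kernel classifier such as Kernel Discriminant Analysis.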