
accuracy degradation when comparing images of differing resolutions. This is common in surveillance environments where a gallery of high resolution mugshots is compared to low resolution CCTV probe images, or where the size of a given image is not a reliable indicator of the underlying resolution (e.g. poor optics). To alleviate this degradation, we propose a compensation framework which dynamically chooses the most appropriate face recognition system for a given pair of image resolutions. This framework applies a novel resolution detection method which does not rely on the size of the input images, but instead exploits the sensitivity of local features to resolution using a probabilistic multi-region histogram approach. Experiments on a resolution-modified version of the “Labeled Faces in the Wild” dataset show that the proposed resolution detector frontend obtains a 99% average accuracy in selecting the most appropriate face recognition system, resulting in higher overall face discrimination accuracy (across several resolutions) compared to the individual baseline face recognition systems.
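As a rough illustration of the resolution-aware selection described above, the sketch below picks one of several recognisers based on a crude high-frequency-energy score. The score, the resolution bands, the threshold and the recogniser bank are illustrative placeholders, not the paper's probabilistic multi-region histogram detector.

import numpy as np

# Hypothetical stand-in for the paper's resolution detector: the fraction of
# high-frequency energy in the image spectrum serves as a crude
# effective-resolution score (NOT the multi-region histogram method).
def effective_resolution_score(image: np.ndarray) -> float:
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    high_freq = spectrum[radius > min(h, w) / 4].sum()
    return float(high_freq / (spectrum.sum() + 1e-12))

def band(score: float, threshold: float = 0.15) -> str:
    # Illustrative two-band split; a real system could use finer bands.
    return "high" if score > threshold else "low"

# Hypothetical bank of recognisers, one per (sorted) resolution-band pair.
EXAMPLE_BANK = {
    ("high", "high"): "recognizer_trained_on_high_res_pairs",
    ("high", "low"): "recognizer_trained_on_mixed_res_pairs",
    ("low", "low"): "recognizer_trained_on_low_res_pairs",
}

def select_recognizer(gallery_img, probe_img, recognizers=EXAMPLE_BANK):
    """Pick the recogniser matched to the detected resolution pair."""
    g = band(effective_resolution_score(gallery_img))
    p = band(effective_resolution_score(probe_img))
    return recognizers[tuple(sorted((g, p)))]

The key point mirrored here is that the choice depends on the estimated resolution of the image content, not on the pixel dimensions of the inputs.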

09:00-11:10, Paper TuAT9.21
Patch-Based Similarity HMMs for Face Recognition with a Single Reference Image
Vu, Ngoc-Son, GIPSA-Lab
Caplier, Alice, GIPSA-Lab, Grenoble Univ.

In this paper we present a new architecture for face recognition with a single reference image, which completely separates the training process from the recognition process. In the training stage, using a database containing various individuals, the spatial relations between face components are represented by two Hidden Markov Models (HMMs), one modeling within-subject similarities and the other modeling inter-subject differences. This allows us, during the recognition stage, to take a pair of face images, neither of which has been seen before, and to determine whether or not they come from the same individual. Whilst other face-recognition HMMs use the Maximum Likelihood criterion, we test our approach using both the Maximum Likelihood and Maximum a Posteriori (MAP) criteria, and find that MAP provides better results. Importantly, the training database can be entirely separated from the gallery and test images: this means that new individuals can be added to the system without re-training. We present results based upon models trained on the FERET training dataset, and demonstrate that these give satisfactory recognition rates both on the FERET database itself and, more impressively, on the unseen AR database. Compared to other HMM-based face recognition techniques, our algorithm has much lower complexity due to the small size of our observation sequence.
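A minimal sketch of the pair-based decision rule described above: two HMMs, one trained on same-person patch-similarity sequences and one on different-person sequences, are compared under a MAP criterion. The patch-similarity feature and the use of hmmlearn's GaussianHMM are my assumptions; the paper's actual observation sequence and model topology may differ.

import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency, not from the paper

def patch_similarity_sequence(img_a: np.ndarray, img_b: np.ndarray, patch: int = 8) -> np.ndarray:
    """Hypothetical observation sequence: one similarity value per corresponding
    patch pair, scanned left-to-right, top-to-bottom over the aligned faces."""
    h, w = img_a.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            a = img_a[y:y + patch, x:x + patch].ravel()
            b = img_b[y:y + patch, x:x + patch].ravel()
            feats.append([-np.linalg.norm(a - b)])  # higher value = more similar
    return np.asarray(feats)

# Both HMMs are fitted offline on pairs from a separate training database
# (e.g. FERET), e.g.:
#   hmm_same = GaussianHMM(n_components=3).fit(np.vstack(same_seqs),
#                                              lengths=[len(s) for s in same_seqs])
# so enrolling new gallery subjects requires no re-training.

def same_person_map(img_a, img_b, hmm_same, hmm_diff, prior_same=0.5) -> bool:
    """MAP decision: compare the posteriors of the two hypotheses."""
    obs = patch_similarity_sequence(img_a, img_b)
    log_post_same = hmm_same.score(obs) + np.log(prior_same)
    log_post_diff = hmm_diff.score(obs) + np.log(1.0 - prior_same)
    return log_post_same > log_post_diff

Dropping the prior terms recovers the Maximum Likelihood variant that the abstract compares against.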

09:00-11:10, Paper TuAT9.22
How to Control Acceptance Threshold for Biometric Signatures with Different Confidence Values?
Makihara, Yasushi, The Inst. of Scientific and Industrial Res., Osaka Univ.
Hossain, Md. Altab, Osaka Univ.
Yagi, Yasushi, Osaka Univ.

In biometric verification, authentication is granted when the distance between the biometric signatures from the enrollment and test phases is less than an acceptance threshold, and performance is usually evaluated by a Receiver Operating Characteristic (ROC) curve expressing the trade-off between the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). On the other hand, it is also well known that performance is significantly affected by differences in conditions between the enrollment and test phases. This paper describes a method to adaptively control the acceptance threshold using quality measures derived from these situation differences so as to optimize the ROC curve. We show that the optimal evolution of the adaptive threshold in the domain of the distance and quality measure is equivalent to a constant evolution in the domain of the error gradient, defined as the ratio of the total error rate to the total acceptance rate. An experiment with simulated data demonstrates that the proposed method outperforms previous methods, particularly under a low FAR or FRR tolerance condition.
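A hedged sketch of quality-dependent threshold calibration in the spirit of this abstract: per-quality-bin thresholds are chosen so that a crude errors-to-acceptances ratio stays near a common target value. This is my own simplification of the constant error-gradient condition, not the authors' derivation; the bin edges, search grid and fallback rule are arbitrary.

import numpy as np

def calibrate_thresholds(gen_dist, gen_q, imp_dist, imp_q, bin_edges, target_gradient):
    """For each quality bin, pick the distance threshold whose
    errors-to-acceptances ratio is closest to the target value."""
    thresholds = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        g = gen_dist[(gen_q >= lo) & (gen_q < hi)]   # genuine distances in this bin
        i = imp_dist[(imp_q >= lo) & (imp_q < hi)]   # impostor distances in this bin
        if g.size == 0 or i.size == 0:               # no data: fall back to global median
            thresholds.append(float(np.median(gen_dist)))
            continue
        candidates = np.linspace(min(g.min(), i.min()), max(g.max(), i.max()), 200)
        best_t, best_diff = float(candidates[0]), np.inf
        for t in candidates:
            accepted = (g <= t).sum() + (i <= t).sum()   # all acceptances at threshold t
            errors = (g > t).sum() + (i <= t).sum()      # false rejects + false accepts
            gradient = errors / max(accepted, 1)
            if abs(gradient - target_gradient) < best_diff:
                best_diff, best_t = abs(gradient - target_gradient), float(t)
        thresholds.append(best_t)
    return thresholds

def accept(distance, quality, bin_edges, thresholds) -> bool:
    """Verification decision with a quality-dependent acceptance threshold."""
    idx = int(np.clip(np.searchsorted(bin_edges, quality) - 1, 0, len(thresholds) - 1))
    return distance <= thresholds[idx]

Sweeping the target value then traces out an operating curve analogous to varying a single fixed threshold, which is how such an adaptive scheme can be compared on a ROC curve.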

09:00-11:10, Paper TuAT9.23
Binary Representations of Fingerprint Spectral Minutiae Features
Xu, Haiyun, Univ. of Twente
Veldhuis, Raymond, Univ. of Twente

A fixed-length binary representation of a fingerprint has the advantages of fast matching and compact template storage. Many biometric template protection schemes also require a binary string as input. The spectral minutiae representation is a method to represent a minutiae set as a fixed-length real-valued feature vector. In order to apply the spectral minutiae representation with a template protection scheme, we introduce two novel methods to quantize the spectral minutiae features into binary strings: Spectral Bits and Phase Bits. Experiments on the FVC2002 database show that the binary representations can even outperform the real-valued spectral minutiae features.
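To make the binarization step concrete, here is a hedged sketch of quantizing a fixed-length complex-valued spectral feature vector into bits. The mean-thresholding and quadrant coding below are simplifications in the spirit of Spectral Bits and Phase Bits, not the paper's exact quantizers.

import numpy as np

def magnitude_bits(features: np.ndarray, population_mean: np.ndarray) -> np.ndarray:
    """One bit per dimension: 1 if the magnitude exceeds the training-set mean
    (a simplified analogue of Spectral Bits)."""
    return (np.abs(features) > population_mean).astype(np.uint8)

def phase_bits(features: np.ndarray) -> np.ndarray:
    """Two bits per dimension encoding the quadrant of the complex phase
    (a simplified analogue of Phase Bits)."""
    angle = np.angle(features)                           # in (-pi, pi]
    b0 = (angle >= 0).astype(np.uint8)                   # upper vs lower half-plane
    b1 = (np.abs(angle) < np.pi / 2).astype(np.uint8)    # right vs left half-plane
    return np.stack([b0, b1], axis=-1).reshape(-1)

def hamming_score(bits_a: np.ndarray, bits_b: np.ndarray) -> float:
    """Binary templates are compared with a normalized Hamming distance."""
    return float(np.count_nonzero(bits_a != bits_b)) / bits_a.size

The resulting fixed-length bit strings are exactly the kind of input that fuzzy-commitment-style template protection schemes expect.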

