Abstract book (pdf) - ICPR 2010


This paper addresses the problem of improving the accuracy of character recognition with a limited quantity of data. The key ideas are twofold. One is distortion-tolerant template matching via hierarchical global/partial affine transformation (GAT/PAT) correlation, which absorbs both linear and nonlinear distortions in a parametric manner. The other is the use of multiple templates per category, obtained by k-means clustering in a gradient feature space, to deal with topological distortion. Recognition experiments on the handwritten numeral database IPTP CDROM1B show that the proposed method achieves a recognition rate of 97.9%, much higher than the 85.8% obtained by conventional, simple correlation matching with a single template per category. Furthermore, comparative experiments show that k-NN classification using the tangent distance and the GAT correlation technique achieve recognition rates of 97.5% and 98.7%, respectively.
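The multiple-template idea can be sketched as follows. This is a minimal illustration in Python, not the authors' GAT/PAT implementation: it assumes precomputed gradient feature vectors and scores candidates with normalized cross-correlation; the `k` parameter and the toy k-means loop are assumptions for the sketch.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain k-means: returns k cluster centroids of the rows of X.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def build_templates(features, labels, k=2):
    # Multiple templates per category: k-means centroids in feature space.
    return {c: kmeans(features[labels == c], k) for c in np.unique(labels)}

def classify(x, templates):
    # Correlation matching: the category of the best-correlated template wins.
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(templates, key=lambda c: max(corr(x, t) for t in templates[c]))
```

With several templates per category, each centroid covers one topological variant of the character shape, which single-template correlation cannot do.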

17:00-17:20, Paper WeCT7.5
Structure Adaptation of HMM Applied to OCR
Ait Mohand, Kamel, Univ. of Rouen
Paquet, Thierry, Univ. of Rouen
Ragot, Nicolas, Univ. François Rabelais Tours
Heutte, Laurent, Univ. of Rouen

In this paper we present a new algorithm for the adaptation of Hidden Markov Models (HMMs). The principle of our iterative algorithm is to alternate an HMM structure adaptation stage with a Gaussian MAP adaptation stage for the HMM parameters. The algorithm is applied to printed character recognition, adapting the character models of a general-purpose polyfont recognizer to new fonts never seen during training. A comparison with the classical MAP adaptation scheme shows a slight increase in recognition performance.
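The Gaussian MAP stage alternated with structure adaptation can be sketched as a standard mean-only MAP update; this is a generic formulation, not the authors' full scheme, and the prior-weight hyperparameter `tau` is an assumption:

```python
import numpy as np

def map_adapt_means(prior_means, adapt_data, responsibilities, tau=10.0):
    # MAP re-estimation of Gaussian state means: interpolate between each
    # prior mean and the ML estimate from the adaptation data, weighted by
    # the soft occupancy count n_k of that state.
    #   prior_means:      (n_states, dim) means from the polyfont model
    #   adapt_data:       (n_frames, dim) feature frames of the new font
    #   responsibilities: (n_frames, n_states) state posteriors from E-step
    n_k = responsibilities.sum(axis=0)
    ml = (responsibilities.T @ adapt_data) / np.maximum(n_k, 1e-12)[:, None]
    w = (n_k / (n_k + tau))[:, None]        # more data -> trust ML estimate
    return w * ml + (1 - w) * prior_means   # little data -> keep the prior
```

States that receive no adaptation frames keep their prior means, which is what makes MAP adaptation robust with the small per-font samples this paper targets.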

WeBCT8 Upper Foyer
SVM, NN, Kernel and Learning; Object Detection and Recognition Poster Session
Session chair: Ross, Arun (West Virginia Univ.)

13:30-16:30, Paper WeBCT8.1
Multi-Class Pattern Classification in Imbalanced Data
Ghanem, Amal Saleh, Univ. of Bahrain
Venkatesh, Svetha, Curtin Univ. of Tech.
West, Geoff, Curtin Univ. of Tech.

The majority of multi-class pattern classification techniques are designed for learning from balanced datasets. However, in several real-world domains the data distribution is imbalanced: some classes have few training examples compared to others. In this paper we present our research on learning from imbalanced multi-class data and propose a new approach, named Multi-IM, to deal with this problem. Multi-IM derives its fundamentals from the probabilistic relational technique PRMs-IM, designed for learning from imbalanced relational data in the two-class case. Multi-IM extends PRMs-IM to a generalized framework for multi-class imbalanced learning in both relational and non-relational domains.
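One way the two-class imbalance idea behind PRMs-IM is commonly realized is by splitting the majority class into minority-sized chunks, so that each binary subproblem is balanced. The sketch below illustrates that decomposition only; it is an assumption for illustration, not the paper's exact Multi-IM procedure:

```python
import numpy as np

def balanced_binary_sets(X, y, positive, seed=0):
    # Yield balanced two-class training sets for one target class:
    # pair ALL minority ("positive") examples with successive
    # minority-sized chunks of the shuffled majority ("rest") examples.
    pos = np.flatnonzero(y == positive)
    neg = np.flatnonzero(y != positive)
    rng = np.random.default_rng(seed)
    rng.shuffle(neg)
    n = max(len(pos), 1)
    for start in range(0, len(neg), n):
        idx = np.concatenate([pos, neg[start:start + n]])
        yield X[idx], (y[idx] == positive).astype(int)
```

A classifier trained on each balanced subset, with the per-subset predictions combined (e.g. by voting), never sees a skewed class prior; repeating this per class extends the scheme to the multi-class setting.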

13:30-16:30, Paper WeBCT8.2
Deep Quantum Networks for Classification
Zhou, Shusen, Harbin Inst. of Tech.
Chen, Qingcai, Harbin Inst. of Tech.
Wang, Xiaolong, Harbin Inst. of Tech.

This paper introduces a new type of deep learning method, named the Deep Quantum Network (DQN), for classification. DQN inherits the capability of modeling the structure of a feature space with fuzzy sets. We first propose the architecture of DQN, which consists of quantum neurons and sigmoid neurons and guides the embedding of samples so that they become separable in a new Euclidean space. The parameters of DQN are initialized through greedy layer-wise unsupervised learning. Then the parameters of the deep architecture and the quantum representation are refined by supervised learning based on a global gradient-descent procedure. An exponential loss function is introduced to guide the supervised learning. Experiments on standard datasets show that DQN outperforms other feed-forward neural networks and neuro-fuzzy classifiers.
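A quantum neuron is often formulated as a multilevel sigmoid: an average of several sigmoids with shifted jump positions, giving a graded, fuzzy membership value rather than a single step. The sketch below uses that common formulation as an assumption; the level positions `theta` and slope `beta` are illustrative, not the paper's settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quantum_neuron(x, w, theta, beta=1.0):
    # Multilevel ("quantum") activation: the mean of several sigmoids whose
    # jump positions theta_r partition the net input into graded levels,
    # modeling fuzzy-set membership of the input in the feature space.
    #   x: (n_samples, dim) inputs, w: (dim,) weights, theta: level offsets
    net = x @ w
    return np.mean([sigmoid((net - t) / beta) for t in theta], axis=0)
```

Stacking such units below ordinary sigmoid units, pretraining layer-by-layer, and then fine-tuning the whole stack by gradient descent on a loss such as exp(-y·f(x)) matches the training pipeline the abstract describes.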
