
1992). Fourier descriptor method: (1) The image is presented to the Canny edge detector. (2) The centroid of the edge points is computed. (3) The distance of each sampled edge point to the centroid is computed. (4) The Discrete Fourier Transform is applied to the result of step 3. (5) The coefficients of step 4 are normalized by dividing each coefficient (Fi) by the first coefficient (F0) and stored as the features. Moment invariant method: third-order moments are computed from the preprocessed images and the results are stored as the features. Elastic template matching method: (1) The image is presented to the Canny edge detector. (2) The output of step 1 (an array containing 1s and 0s) is stored as the feature.
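
As an illustration, the following sketch implements the three feature extractors with OpenCV and NumPy. The function names, the Canny thresholds, the number of sampled edge points, and the use of Hu's invariants (which are built from moments up to order three) in place of the paper's third-order moments are assumptions made for the example, not details taken from the paper.

```python
# Sketch of the three shape feature extractors described above
# (assumed names and parameters; requires OpenCV and NumPy).
import cv2
import numpy as np

def fourier_descriptor_features(gray_image, n_samples=64):
    # (1) Present the image to the Canny edge detector.
    edges = cv2.Canny(gray_image, 100, 200)
    ys, xs = np.nonzero(edges)
    points = np.stack([xs, ys], axis=1).astype(float)
    # (2) Compute the centroid of the edge points.
    centroid = points.mean(axis=0)
    # (3) Distance of each sampled edge point to the centroid.
    idx = np.linspace(0, len(points) - 1, n_samples).astype(int)
    distances = np.linalg.norm(points[idx] - centroid, axis=1)
    # (4) Apply the Discrete Fourier Transform to the distance signature.
    coeffs = np.abs(np.fft.fft(distances))
    # (5) Normalize each coefficient Fi by F0 and store as the features.
    return coeffs[1:] / coeffs[0]

def moment_invariant_features(gray_image):
    # Hu's invariants (derived from moments up to order three) stand in
    # here for the paper's third-order moments.
    return cv2.HuMoments(cv2.moments(gray_image)).flatten()

def elastic_template_feature(gray_image):
    # The binary Canny edge map itself is stored as the feature.
    return (cv2.Canny(gray_image, 100, 200) > 0).astype(np.uint8)
```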

Classification using k-Nearest Neighbor. The basic idea of the classification process is adopted from (Sheikholestami et al., 1998) with some modifications. The classification process consists of four main steps: (1) Compute the shape and texture features of the new image and the similarity values between the new image and all training images using the shape and texture features; find the two sets of training images that are most similar to the new image in terms of ornament shape and image texture. (2) Combine the two sets resulting from step 1; let Rq be the new set. (3) Compute the overall image similarities of the images in Rq using a multi-layer back-propagation neural network. (4) Sort the new similarity values obtained from step 3 and, using the k-nearest neighbor classification technique, determine the class of the new image.
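
A minimal sketch of these four steps is given below, assuming the shape and texture similarity functions and the trained back-propagation network are available as callables; the candidate set size, the value of k, and all names are illustrative assumptions.

```python
# Minimal sketch of the four classification steps, assuming precomputed
# similarity callables and a trained back-propagation network (combine_net).
from collections import Counter

def classify(query, training_set, shape_sim, texture_sim, combine_net,
             n_candidates=10, k=5):
    # (1) Rank training images by shape and by texture similarity to the query.
    by_shape = sorted(training_set, key=lambda t: shape_sim(query, t), reverse=True)
    by_texture = sorted(training_set, key=lambda t: texture_sim(query, t), reverse=True)
    # (2) Combine the two most-similar sets into Rq.
    r_q = set(by_shape[:n_candidates]) | set(by_texture[:n_candidates])
    # (3) Overall similarity from the network, fed with the individual similarities.
    scored = [(combine_net(shape_sim(query, t), texture_sim(query, t)), t) for t in r_q]
    # (4) Sort by overall similarity and vote among the k nearest neighbours.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    votes = Counter(t.label for _, t in scored[:k])
    return votes.most_common(1)[0][0]
```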

Computing image similarities. Texture feature. For content-based similarity, the distance between two images is computed using the Euclidean distance. The distance is then transformed into a similarity value as follows:

SimValue(i,j) = 1 – distance(i,j) / max(Distance)     (1)

where i and j denote the indices of the two images being compared and Distance is the set of all distances between images. For region-content-based similarity, the method in (Bartolini, 2001) is applied.
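
For illustration, Eq. 1 can be applied to a matrix of pairwise Euclidean distances as in the following NumPy sketch; the pairwise formulation and the function name are assumptions made for the example.

```python
# Eq. 1 applied to pairwise Euclidean distances between texture feature vectors.
import numpy as np

def texture_similarity_matrix(features):
    # features: (n_images, n_features) array of texture feature vectors.
    diffs = features[:, None, :] - features[None, :, :]
    distances = np.linalg.norm(diffs, axis=-1)
    # SimValue(i,j) = 1 - distance(i,j) / max(Distance)
    return 1.0 - distances / distances.max()
```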

Shape feature. The main steps are: (1) The new image is segmented into windows and features are generated from each window. (2) The feature of a training image is compared to the feature of each window of the new image. (3) For moment invariants and the Fourier descriptor, the similarity value is computed using Eq. 1; for elastic template matching, the similarity is computed by determining the correlation between the two features. The largest value among all window similarity values is selected as the similarity value between the new image and the training image.
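
The window-based comparison could be sketched as follows for the elastic template matching case, assuming the training feature is a window-sized binary edge map; the window traversal and the use of Pearson correlation as the correlation measure are illustrative assumptions.

```python
# Window-based shape similarity for the elastic template matching case.
import numpy as np

def shape_similarity(query_edges, training_feature):
    # query_edges: binary edge map of the new image; training_feature: a
    # window-sized binary edge map of a training image (assumed shapes).
    wh, ww = training_feature.shape
    h, w = query_edges.shape
    tmpl = training_feature.astype(float).ravel()
    best = 0.0
    for y in range(0, h - wh + 1, wh):
        for x in range(0, w - ww + 1, ww):
            win = query_edges[y:y + wh, x:x + ww].astype(float).ravel()
            # Correlation between the window's edge feature and the template.
            if win.std() > 0 and tmpl.std() > 0:
                best = max(best, float(np.corrcoef(win, tmpl)[0, 1]))
    # The largest window similarity is the image-to-image similarity.
    return best
```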

