
Principal component analysis: The details of the analysis [4] are beyond the scope of this book; we merely illustrate the concept from a practical standpoint. In this study we considered 9 facial images of size (32 x 32) for each of 40 individuals, giving 360 images of size (32 x 32) in all. The average gray value of each image matrix is computed and subtracted from the corresponding image matrix elements. One such image after subtraction of the average value is presented in fig. 17.9. This is a form of normalization that keeps the image free from the illumination bias of the light source.
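
The normalization step can be sketched in a few lines of NumPy. The array names and the random stand-in data below are illustrative assumptions, not the book's code:

```python
import numpy as np

# Illustrative stand-in for the 360 grayscale images of size 32 x 32.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(360, 32, 32)).astype(np.float64)

# Subtract each image's own average gray value, removing illumination bias.
means = images.mean(axis=(1, 2), keepdims=True)   # shape (360, 1, 1)
normalized = images - means                       # zero-mean images
```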

Now we construct a matrix X of dimension ((32 x 32) x 360) = (1024 x 360) from the above data points. We also construct a covariance matrix by taking the product X X^T and evaluate the eigenvectors of X X^T. Since X has dimension (1024 x 360), X X^T has dimension (1024 x 1024) and therefore 1024 eigenvalues, of which we select the 12 largest on an experimental basis. The eigenvectors corresponding to these 12 eigenvalues are called the first 12 principal components. Since each principal component has dimension (1024 x 1), grouping these 12 principal components column-wise gives a matrix of dimension (1024 x 12). We denote this matrix by EV (for eigenvector).
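
Continuing the sketch above, building X, the covariance matrix X X^T, and the EV matrix might look as follows (a minimal NumPy rendering of the described procedure, not the authors' code):

```python
# Each column of X is one normalized image, so X is (1024 x 360).
X = normalized.reshape(360, 1024).T

# Covariance matrix and its eigen decomposition; eigh suits symmetric matrices.
C = X @ X.T                            # (1024 x 1024)
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order

# Keep the eigenvectors of the 12 largest eigenvalues, grouped column-wise.
order = np.argsort(eigvals)[::-1][:12]
EV = eigvecs[:, order]                 # (1024 x 12)
```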

To represent an image in the eigenspace, we first write the image as a (1 x 1024) vector and then project it onto the face space by taking the dot product of the image matrix IM, in its (1 x 1024) form, with EV; we call the result a point PT in the face space. Thus

(PT)_{1 x 12} = (IM)_{1 x 1024} · (EV)_{1024 x 12}.        (17.11)

The projection of the images onto the face space as 12-dimensional points is presented in fig. 17.10. Representing all 360 images by the above expression, we thus get 360 points in the 12-dimensional image space.
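
In the same sketch, the projection of eq. (17.11) is a single matrix product:

```python
# Project one image onto the face space, as in eq. (17.11).
IM = normalized[0].reshape(1, 1024)           # image as a (1 x 1024) row vector
PT = IM @ EV                                  # (1 x 12) point in the face space

# All 360 images as points in the 12-dimensional face space.
points = normalized.reshape(360, 1024) @ EV   # (360 x 12)
```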

Now suppose we are given a test image and want to classify it as one of the 40 persons. One simple way to solve this problem is to determine the corresponding image point PT in the 12-dimensional space and then find the image point (out of the 360 stored points) that has the least Euclidean distance with respect to the test image point. The test image can thus be classified as one of the 360 images and, since each stored image belongs to a known person, as one of the 40 persons.
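
A nearest-neighbour classifier over the 360 stored points can be sketched as below. The `labels` layout assumes the images are stored person by person, nine at a time, and `test_image` is assumed to be already mean-subtracted; both are assumptions of this sketch:

```python
# One label per stored image: person 0 .. 39, nine images each (assumed order).
labels = np.repeat(np.arange(40), 9)

def classify(test_image, EV, points, labels):
    """Return the person whose stored image point is nearest to the test point."""
    pt = test_image.reshape(1, -1) @ EV           # project to (1 x 12)
    dists = np.linalg.norm(points - pt, axis=1)   # Euclidean distance to all points
    return labels[np.argmin(dists)]
```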

Principal component analysis (PCA) thus reduces the dimension of matching from (32 x 32) to (1 x 12), but it still requires computing the distance of the test point from all 360 image points. An alternative scheme that requires less computing time employs a self-organizing neural net. The self-organizing scheme takes the (1 x 12) points of all 360 images as input, constructs a network, and searches for a test point by the best-first search paradigm in the resulting search space.
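
The book does not spell out the network's details, so the following is only a rough sketch of a standard self-organizing map trained on the (1 x 12) face-space points; the grid size, learning schedule, and the direct best-matching-unit scan (standing in here for the best-first search) are all assumptions of the sketch:

```python
def train_som(points, grid=8, dim=12, epochs=20, lr0=0.5, sigma0=3.0):
    """Fit a small self-organizing map to the face-space points."""
    rng = np.random.default_rng(1)
    W = rng.standard_normal((grid, grid, dim))          # one weight vector per unit
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                     # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3        # shrinking neighbourhood
        for p in points:
            d = np.linalg.norm(W - p, axis=2)           # distance of p to every unit
            bmu = np.unravel_index(np.argmin(d), d.shape)
            h = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
            W += lr * h[..., None] * (p - W)            # pull neighbourhood toward p
    return W
```

After training, a test point is compared against the handful of map units rather than all 360 stored points, which is where the saving in computing time would come from.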
