prominently suppress the heavy-tailed nature of training instances and improve the efficiency of computation. A Multi-Layer Perceptron (MLP) with a feed-forward learning algorithm was chosen for the proposed system because of its simplicity and its capability in supervised pattern matching; it has been successfully applied to many pattern classification problems [6]. Our problem is well suited to the supervised rule, since pairs of inputs and outputs are available. For training the network, we used the classical feed-forward algorithm: an example is picked from the training set and the output is computed.

V. ALGORITHM DEVELOPMENTS & RESULT

Fig. 3: Algorithm developments.

VI. 2D GABOR WAVELET REPRESENTATIONS OF FACES

Since face recognition is not a difficult task for human beings, the selection of biologically motivated Gabor filters is well suited to this problem. Gabor filters, which model the responses of simple cells in the primary visual cortex, are simply plane waves restricted by a Gaussian envelope function [7].

Fig. 4: Gabor filters corresponding to 5 spatial frequencies and 8 orientations (Gabor filters in the time domain).

An image can be represented by the Gabor wavelet transform, which allows the description of both the spatial frequency structure and the spatial relations. Convolving the image with complex Gabor filters at 5 spatial frequencies (v = 0, ..., 4) and 8 orientations (u = 0, ..., 7) captures the whole frequency spectrum, in both amplitude and phase.

One of the techniques used in the literature for Gabor-based face recognition uses the responses of a grid representing the facial topography for coding the face. Instead of using the graph nodes, high-energized points can be used in the comparisons, which forms the basis of this work. This approach not only reduces computational complexity but also improves performance in the presence of occlusions.

A. Feature extraction: The feature extraction algorithm for the proposed method has two main steps: (1) feature point localization and (2) feature vector computation.

B. Feature point localization: In this step, feature vectors are extracted from points with high information content on the face image. In most feature-based methods, the facial features are assumed to be the eyes, nose and mouth. However, we fix neither the locations nor the number of feature points in this work. The number of feature vectors and their locations can vary in order to better represent the diverse facial characteristics of different faces, such as dimples, moles, etc., which are also features that people might use for recognizing faces.

From the responses of the face image to the Gabor filters, peaks are found by searching the locations in a window W_0 of size W x W with the following procedure. A feature point is located at (x_0, y_0) if

R_j(x_0, y_0) = \max_{(x, y) \in W_0} R_j(x, y)    (2)

R_j(x_0, y_0) > \frac{1}{N_1 N_2} \sum_{x=1}^{N_1} \sum_{y=1}^{N_2} R_j(x, y),  j = 1, ..., 40    (3)

where R_j is the response of the face image to the j-th Gabor filter and N_1 x N_2 is the size of the face image. In our experiments, a 9x9 window is used to search for feature points on the Gabor filter responses. A feature map is constructed for the face by applying the above process to each of the 40 Gabor filters.
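The 5-frequency, 8-orientation filter bank described above can be generated in a few lines. The paper's implementation was in Matlab; the following is only a minimal illustrative sketch in Python/NumPy, assuming the common Gabor-wavelet parameterization (k_v = k_max / f^v, phi_u = pi*u/8); the kernel size, k_max, f and sigma values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def gabor_kernel(v, u, size=32, kmax=np.pi / 2, f=np.sqrt(2), sigma=2 * np.pi):
    """Complex Gabor kernel for frequency index v and orientation index u:
    a plane wave restricted by a Gaussian envelope (DC-compensated)."""
    k = kmax / (f ** v)                      # spatial frequency magnitude
    phi = np.pi * u / 8.0                    # orientation
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    envelope = (k ** 2 / sigma ** 2) * np.exp(-(k ** 2) * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * wave

# 40 filters: 5 spatial frequencies (v = 0..4) x 8 orientations (u = 0..7)
filter_bank = [gabor_kernel(v, u) for v in range(5) for u in range(8)]

# The 40 response maps R_j (called `responses` in the later sketches) would be
# obtained by convolving the face image with each kernel and taking the magnitude, e.g.:
# from scipy.signal import fftconvolve
# responses = [np.abs(fftconvolve(image, g, mode='same')) for g in filter_bank]
```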

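A sketch of the peak-search procedure of Eqs. (2) and (3): a pixel is kept as a feature point if it is the maximum of its W x W neighbourhood (9x9 here, as in the experiments) and exceeds the mean of the whole response map. `responses` is assumed to be the list of 40 magnitude maps from the previous sketch; the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def locate_feature_points(responses, window=9):
    """Return (x, y, j) triples satisfying Eqs. (2) and (3)."""
    points = []
    for j, R in enumerate(responses):
        local_max = (R == maximum_filter(R, size=window))  # Eq. (2): peak within its W x W window
        above_mean = R > R.mean()                          # Eq. (3): above the mean response over the image
        ys, xs = np.nonzero(local_max & above_mean)
        points.extend((x, y, j) for x, y in zip(xs, ys))
    return points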

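For the training paragraph above, a minimal sketch of one step of the classical feed-forward (backpropagation) training: an example is picked, the output is computed, and the weights are adjusted toward the target. A single sigmoid hidden layer, squared-error cost and fixed learning rate are assumed here purely for illustration; the paper does not specify these details.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, W1, b1, W2, b2, lr=0.1):
    """One feed-forward pass plus one gradient-descent weight update."""
    h = sigmoid(W1 @ x + b1)                      # hidden-layer activations
    y = sigmoid(W2 @ h + b2)                      # network output for this example
    delta_out = (y - target) * y * (1 - y)        # output-layer error term
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # hidden-layer error term
    W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
    return y
```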
C. Feature vector generation: Feature vectors are generated at the feature points as a composition of Gabor wavelet transform coefficients:

v_{i,k} = \{ x_k, y_k, R_{i,j}(x_k, y_k) \mid j = 1, ..., 40 \}    (4)

While there are 40 Gabor filters, the feature vectors have 42 components. The first two components represent the location of the feature point by storing its (x, y) coordinates. Since we have no other information about the locations of the feature vectors, these first two components are very important during the matching (comparison) process. The remaining 40 components are the samples of the Gabor filter responses at that point. Although one may use some edge information for feature point selection, here it is important to construct the feature vectors from the coefficients of the Gabor wavelet transform. Feature vectors, as samples of the Gabor wavelet transform at the feature points, allow representing both the spatial frequency structure and the spatial relations of the local image region around the corresponding feature point.

Fig. 5: Flowchart of the feature extraction stage of the facial images.

First section: In this stage the algorithm checks all windows that potentially contain a face, and the windows around them, using the neural network. The result is the output of the neural network for the checked regions.

Second section:

Fig. 6.1: Cell.Net.
Fig. 6.2: Filtering the above pattern for values above a threshold.
Fig. 6.3: Dilating the pattern with a disk structure.
Fig. 6.4: Finding the center of each region.
Fig. 6.5: Drawing a rectangle for each point.
Fig. 6.6: Final result.
Fig. 6.7: Detected faces.

This architecture was implemented using Matlab in a graphical environment allowing face detection in a database. It has been evaluated using test data of 500 images containing faces; on this test set we obtained good detection results.
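Following Eq. (4) above, each feature point yields a 42-component vector: its (x, y) location followed by the 40 Gabor response samples at that point. A minimal sketch, reusing the `responses` list and the feature points from the earlier sketches:

```python
import numpy as np

def feature_vector(responses, x, y):
    """42-component vector of Eq. (4): (x, y) plus the 40 Gabor samples at that point."""
    samples = [R[y, x] for R in responses]        # one sample per Gabor filter, j = 1..40
    return np.array([x, y] + samples, dtype=float)

# one feature vector per located feature point:
# vectors = [feature_vector(responses, x, y) for (x, y, j) in locate_feature_points(responses)]
```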

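The "first section" above checks every candidate window (and its neighbours) with the trained feed-forward network. A rough sketch of that scan, assuming the trained network is exposed as a `predict(window_pixels) -> score` callable; the 20x20 window size and step are illustrative assumptions, not values from the paper.

```python
import numpy as np

def scan_image(image, predict, window=20, step=4):
    """Slide a window over the image and record the network output for each position."""
    h, w = image.shape
    scores = np.zeros((h, w), dtype=float)
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            patch = image[y:y + window, x:x + window].ravel()
            scores[y + window // 2, x + window // 2] = predict(patch)  # network output for this region
    return scores  # score map, later thresholded and dilated (see the next sketch)
```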

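The "second section" post-processing chain of Figs. 6.2-6.5 (threshold the network output map, dilate with a disk, take the centre of each remaining region, draw a rectangle) can be reproduced roughly as below. The threshold value, disk radius and box size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def candidate_boxes(score_map, threshold=0.5, disk_radius=3, box=(32, 32)):
    """Turn a per-pixel face-likelihood map into candidate face rectangles."""
    mask = score_map > threshold                              # keep values above threshold (Fig. 6.2)
    yy, xx = np.ogrid[-disk_radius:disk_radius + 1, -disk_radius:disk_radius + 1]
    disk = xx ** 2 + yy ** 2 <= disk_radius ** 2
    mask = ndimage.binary_dilation(mask, structure=disk)      # dilate with a disk structure (Fig. 6.3)
    labels, n = ndimage.label(mask)                           # connected regions
    centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))  # centre of each region (Fig. 6.4)
    h, w = box
    return [(int(cx - w // 2), int(cy - h // 2), w, h)        # one rectangle per centre (Fig. 6.5)
            for cy, cx in centers]
```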
VII. CONCLUSION & FUTURE WORK

Face detection has been an attractive field of research for both neuroscientists and computer vision scientists. Humans are able to identify a large number of faces reliably, and neuroscientists are interested in understanding the perceptual and cognitive mechanisms underlying the face detection process. Their findings inform the work of computer vision scientists. Although designers of face detection algorithms and systems are aware of the relevant psychophysics and neuropsychological studies, they should be prudent in using only those findings that are applicable or relevant from a practical/implementation point of view. Since 1888, many algorithms have been proposed as solutions to automatic face detection.

Although none of them reaches human detection performance, two biologically inspired methods, namely eigenfaces and elastic graph matching, currently achieve relatively high detection rates. The eigenfaces algorithm has shortcomings due to its use of image pixel gray values: the system becomes sensitive to illumination changes, scaling, etc., and needs a pre-processing step beforehand. Satisfactory recognition performance can only be reached with successfully aligned face images, and when a new face is added to the database the system needs to be retrained from the beginning, unless a universal database exists. Unlike the eigenfaces method, the elastic graph matching method is more robust to illumination changes, since Gabor wavelet transforms of the images are used instead of the pixel gray values directly. Although the detection performance of elastic graph matching is reported to be higher than that of eigenfaces, its computational complexity and execution time make it less attractive for commercial systems. Although using the 2-D Gabor wavelet transform seems well suited to the problem, graph matching makes the algorithm bulky. Moreover, as the local information is extracted from the nodes of a predefined graph, some details on a face, which are the special characteristics of that face and could be very useful in the detection task, might be lost.

In this paper, a new approach to face detection with Gabor wavelets and a feed-forward neural network is presented. The method uses the Gabor wavelet transform and a feed-forward neural network both for finding the feature points and for extracting the feature vectors. The experimental results show that the proposed method achieves better results than the graph matching and eigenfaces methods, which are known to be among the most successful algorithms. Although the proposed method shows some resemblance to the graph matching algorithm, in our approach the locations of the feature points also carry information about the face. Feature points are obtained automatically from the special characteristics of each individual face, instead of by fitting a graph constructed from a general idea of a face.
In the proposed algorithm, since the facial features are compared locally instead of through a general structure, decisions can be made from parts of the face. For example, when sunglasses are present, the algorithm compares faces in terms of the mouth, nose and other features rather than the eyes. Moreover, with its simple matching procedure and low computational cost, the proposed method is faster than elastic graph matching methods. The proposed method is also robust to illumination changes, as a property of Gabor wavelets, which is the main problem with the eigenface approaches. A new facial image can also be added simply by attaching new feature vectors to the reference gallery, while such an operation can be quite time-consuming for systems that need training.

Feature points found from the Gabor responses of the face image can show small deviations between different conditions (expression, illumination, wearing glasses or not, rotation, etc.) for the same individual. Therefore, an exact measurement of the corresponding distances is not possible, unlike in geometrical feature-based methods. Moreover, due to the automatic feature detection, the features represented by those points are not explicitly known, i.e. whether they belong to an eye, a mouth, etc. The locations of the feature points are very important, since they give information about the match of the overall facial structure; however, using such a topology cost amplifies small deviations in the locations of feature points that are not a measure of match.

The Gabor wavelet transform of a face image takes 1.1 seconds, the feature extraction step of a single face image takes 0.2 seconds, and matching an input image with a single gallery image takes 0.12 seconds on a Pentium IV PC. Note that these execution times were measured without code optimization.

Although the detection performance of the proposed method is satisfactory, it can be further improved with some small modifications and/or additional preprocessing of the face images. Such improvements can be summarized as follows:

1) Since feature points are found from the responses of the image to the Gabor filters separately, a set of weights can be assigned to these feature points by counting the total number of times a feature point occurs across those responses.

2) A motion estimation stage using the feature points, followed by an affine transformation, could be applied to minimize rotation effects. This process would not add much computational complexity, since we already have the feature vectors for recognition. With the help of this step, face images would be aligned.

3) As mentioned in the problem definition, a face detection algorithm is supposed to be run beforehand. A robust and successful face detection step will increase the detection performance. Implementing such a face detection method is an important piece of future work for successful applications.

4) In order to further speed up the algorithm, the number of Gabor filters could be decreased, with an acceptable decrease in detection performance.
It must be noted that the performance of detection systems is highly application dependent, and suggestions for improvements to the proposed algorithm must be directed at the specific purpose of the face detection application.

VIII. REFERENCES


[1] Ming-Hsuan Yang, David J. Kriegman, and Narendra Ahuja, "Detecting Faces in Images: A Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, January 2002.

[2] H. A. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 39-51, 1998.

[3] Zhang ZhenQiu, Zhu Long, S. Z. Li, and Zhang HongJiang, "Real-time multi-view face detection", Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 142-147, 20-21 May 2002.

[4] D. M. Gavrila and V. Philomin, "Real-Time Object Detection for Smart Vehicles", International Conference on Computer Vision (ICCV99), vol. 1, Corfu, Greece, 20-25 September 1999.

[5] Rolf F. Molz, Paulo M. Engel, Fernando G. Moraes, Lionel Torres, and Michel Robert, "System Prototyping dedicated to Neural Network Real-Time Image Processing", ACM/SIGDA Ninth International Symposium on Field Programmable Gate Arrays (FPGA 2001).

[6] Theocharis Theocharides, Gregory Link, Vijaykrishnan Narayanan, and Mary Jane Irwin, "Embedded Hardware Face Detection", 17th Int'l Conf. on VLSI Design, Mumbai, India, January 5-9, 2004.

[7] Fan Yang and Michel Paindavoine, "Prefiltering for Pattern Recognition Using Wavelet Transform and Neural Networks", Advances in Imaging and Electron Physics, vol. 127, 2003.

[8] Paindavoine, "Implementation of an RBF Neural Network on Embedded Systems: Real-Time Face Tracking and Identity Verification", IEEE Transactions on Neural Networks, vol. 14, no. 5, September 2003.

[9] "Fast Face Recognition using BPNN", in National Conference GTSEM-1-2012 (23, 25-Feb).

Biographies

Mr. Kanhaiya Kumar is working as an Assistant Professor in the Department of Electrical Engineering at Vishvesshwarya Group of Institution, Dadri, G. B. Nagar (U.P.). He received his Master's degree in Instrumentation & Control Engineering, his degree in Instrumentation Engineering, and a diploma in Applied Electronics Engineering. His areas of interest are digital signal processing, image processing, and neural-network-based systems.
