
IEEE Paper Template in A4 (V1) - icact


A Fatigue Detection System with Eyeglasses Removal

Wen-Chang Cheng*, Hsien-Chou Liao*, Min-Ho Pan*, Chih-Chuan Chen**
* Department of Computer Science and Information Engineering, Chaoyang University of Technology, 168, Jifeng E. Rd., Wufeng District, Taichung, 41349 Taiwan, R.O.C.
** International Integrated System Inc., Taiwan, R.O.C.
{wccheng, hcliao, s9927622}@cyut.edu.tw

Abstract—A fatigue detection system can alert the driver in time to prevent a possible accident. Detection is mainly based on the result of face/eye detection, or on the pupil of the eyeball, to determine whether the driver is in a fatigued condition. Most past studies assume that the driver does not wear eyeglasses. However, eye detection is easily influenced by eyeglasses, which decreases the correct detection rate. To overcome the influence of eyeglasses, a fatigue detection method with eyeglasses removal is proposed in this paper. First, the face area is detected using the OpenCV library. Then, the eyeglasses are removed using morphological operations. After eyeglasses removal, the eye areas are detected using the same OpenCV library and tracked using a template matching method. The binarization result of the eye area is processed by horizontal projection and Kalman filtering. The open/closed state of the eyes is then determined, and fatigue is judged from the successive eye states. Four test videos are used to evaluate the performance of the proposed method. The average correct detection ratio of the eye state is 88.5%, and the fatigue detection rate reaches 100%.
The preliminary results show that the proposed method is feasible.

Keywords—Face detection, Eye detection, Eye tracking, Kalman filter, Template matching

I. INTRODUCTION

When a user drives a car for a long time, the user easily becomes tired, and a traffic accident may occur under such a condition. Therefore, a fatigue detection system is useful for preventing car accidents by sounding an alarm to the driver. The fatigue must be detected in real time. Currently, the eyes are the main target of fatigue detection: if the eyes stay closed for a period of time, the driver is considered fatigued.

Therefore, the first step of fatigue detection is face detection. A popular method proposed by Viola and Jones is used here [1]. The eye area is then detected based on the face area. However, eye detection is easily influenced by eyeglasses. Several methods for removing eyeglasses were proposed in previous studies [2-4]. Some of them are difficult to implement on an embedded system due to their processing burden. The method proposed by Liu, Sun et al. is based on the HSL (Hue, Saturation, Lightness) color space [5]. It is quite efficient and is utilized in the proposed method.

In this paper, a method is proposed to detect driver fatigue. The method removes the eyeglasses from the real-time image to improve the accuracy of the fatigue alarm and reduce the number of false alarms.

The rest of the paper is organized as follows. Section II presents the related work. Section III presents the fatigue detection method. Section IV presents the experimental study, and Section V concludes the paper.

II. RELATED WORK

In-vehicle cameras are widely installed to record the possible causes of car accidents.
Such a camera can also be used to detect driver fatigue. Several studies related to fatigue detection are described as follows.

Horng et al. [6] and Sharma et al. [7] utilized the number of pixels in the eye image to determine the eye state, open or closed. Horng et al. [6] established an edge map to locate the eye positions, and the eye state is determined based on the HSL color space of the eye image; its accuracy depends on how well the eyes are located. Sharma and Banga [7] converted the face image to the YCbCr color space; the average and standard deviation of the pixel count in the binarized image are computed, and fuzzy rules [8] are then used to determine the eye state. Liu et al. [9] and Tabrizi et al. [10] proposed methods to detect the upper and lower eyelids based on the edge map; the distance between the upper and lower eyelids is then used to analyse the eye state. Besides, Dong et al. [11] and Li et al. [12] proposed methods that utilize the AAM (Active Appearance Model) to locate the eyes; PERCLOS (PERcentage of eye CLOsure) is then computed to detect fatigue. For all of the above methods, locating the eye areas is easily influenced by changes in brightness. The circular Hough transform is a popular method to overcome the influence of brightness: several studies [13-15] located the pupils of the eyes using the circular Hough transform and then analysed the eye state according to the pupil locations.

The above studies did not address the issue of eyeglasses. Eyeglasses may reduce the correct ratio of fatigue detection, and a driver wearing eyeglasses is a common sight. Therefore, a fatigue detection method should consider such a condition.


III. METHOD

The process of the proposed method is depicted in Figure 1. When the face area is detected successfully in the real-time image frame, the eyeglasses removal step is applied to the face area based on the method proposed in [16]. If the template of the eye area has not been established yet, an eye detection step is performed to get the initial eye area; otherwise, an eye tracking step is performed to keep tracking the eyes. When the eye area is detected, the RGB color space of the area is converted to the HSL color space. The S channel is then projected horizontally, and the projected values are filtered by a Kalman filter to analyse the eye state. Finally, fatigue is detected according to the successive eye states.

Figure 1. The fatigue detection process

A. Face and Eye Detection

In this subsection, the methods of face detection, eyeglasses removal, and eye detection are presented in turn.

1) Face Detection

Face detection is the first step of fatigue detection. A conventional function of the OpenCV library is utilized in this step [1]. The resolution of a real-time camera frame is reduced to 320×240 pixels.
The face area of the current frame is the detection result with the shortest distance to the face area in the previous frame.

2) Eyeglasses Removal

The ratio of the eyes to the face area is based on the analysis of the face database presented in a previous study [17]. Assume the face area is denoted as FR, and the left, right, top, and bottom boundaries of FR are denoted as FR_left, FR_right, FR_top, and FR_bottom, with FR_width = FR_right − FR_left and FR_height = FR_bottom − FR_top. The region of interest (ROI) of the eyes, denoted RI, is determined using Eq. (1):

  RI_left = FR_left + FR_width / 8
  RI_top = FR_top + FR_height / 4                (1)
  RI_right = FR_right − FR_width / 8
  RI_bottom = FR_top + FR_height / 2

The area of RI is then split into two equal areas for the left and right eyes, denoted as RI_L and RI_R. Although RI cannot match the facial proportions of every person, it is suitable for including the eye image in the area. Two examples are shown in Figure 2.

Figure 2. Two examples of the ratio of eyes to face area

When the eye area RI is determined, the next step is the creation of the eyeglasses mask. The method proposed in [16] is modified here. The sub-steps are listed as follows:

S1: The RI image is converted to greyscale and binarized using a threshold of 40.
S2: A dilation operation is applied to the above result. The two blobs (binary large objects) closest to the centers of RI_L and RI_R, respectively, are selected.
S3: The binarized image is inverted.
S4: A boundary-fill operation is applied to the two blobs selected in S2 to remove the eyeballs. The remaining white area is the mask of the eyeglasses.

An example of the above sub-steps is shown in Figure 3. The mask is then filled with a skin color.
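As a concrete illustration, the ROI geometry of Eq. (1) and its split into RI_L and RI_R can be sketched in a few lines of Python. This is only an illustrative sketch: the tuple representation (left, top, right, bottom) and the function names are assumptions, not the paper's implementation.

```python
def eye_roi(fr_left, fr_top, fr_right, fr_bottom):
    """Eye region of interest RI from a face rectangle FR, per Eq. (1)."""
    fr_width = fr_right - fr_left
    fr_height = fr_bottom - fr_top
    ri_left = fr_left + fr_width // 8      # trim 1/8 of the width on each side
    ri_top = fr_top + fr_height // 4       # eyes start a quarter of the way down
    ri_right = fr_right - fr_width // 8
    ri_bottom = fr_top + fr_height // 2    # eyes end at the vertical midline
    return ri_left, ri_top, ri_right, ri_bottom

def split_roi(ri):
    """Split RI into two equal halves RI_L and RI_R for the two eyes."""
    left, top, right, bottom = ri
    mid = (left + right) // 2
    return (left, top, mid, bottom), (mid, top, right, bottom)
```

For example, a 160×160 face rectangle at the origin yields RI = (20, 40, 140, 80), i.e. the eye strip occupies the central three-quarters of the face width between one quarter and one half of its height.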
Although the mask is still visible, it does not influence the following steps.

(a) (b) (c) (d) (e)
Figure 3. The process of extracting the eyeglasses mask: (a) original image; (b) binarization result of eye area RI; (c) dilation; (d) boundary fill to remove the eyeballs; (e) mask filled with a skin color

3) Eye Detection and Tracking

The eyes are crucial to fatigue detection, so the eye area should be located as accurately as possible. The sub-steps of eye detection are listed below:

S1: The cascade classifier of the OpenCV library is used to generate the eye areas within the eye area RI.
S2: The areas of the two eyes are recorded as the initial templates for eye tracking.
S3: The two eyes are tracked separately using the above templates. If the face area cannot be detected in the preceding face detection step, the templates are discarded and the process returns to S1.

An example of the eye detection result is shown in Figure 4. In this step, the template size of the eyes is fixed to 32×18 pixels. The template tracking function of the OpenCV library is utilized; the matching area is 1.4 times the size of the eye template.
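Conceptually, the template tracking step amounts to a search for the best-matching patch inside a window 1.4 times the template size. A rough NumPy sketch of that idea is given below, minimising the sum of absolute differences (SAD); all names are illustrative, and in practice OpenCV's built-in template matching would be used instead of this brute-force loop.

```python
import numpy as np

def track_eye(frame, template, prev_xy, search_scale=1.4):
    """Find `template` in `frame` near prev_xy = (x, y) by minimising the
    sum of absolute differences over a search window `search_scale` times
    the template size, mirroring the eye tracking step conceptually."""
    th, tw = template.shape
    sh, sw = int(th * search_scale), int(tw * search_scale)
    x0 = max(prev_xy[0] - (sw - tw) // 2, 0)   # search window top-left corner
    y0 = max(prev_xy[1] - (sh - th) // 2, 0)
    best_sad, best_xy = None, prev_xy
    for y in range(y0, min(y0 + sh, frame.shape[0]) - th + 1):
        for x in range(x0, min(x0 + sw, frame.shape[1]) - tw + 1):
            patch = frame[y:y + th, x:x + tw].astype(int)
            sad = np.abs(patch - template.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_xy = sad, (x, y)
    return best_xy
```

Restricting the search to a small window around the previous position is what keeps this step cheap compared with re-running the cascade detector on every frame.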


Figure 4. Eye area and the corresponding image

B. Fatigue Detection

The fatigue detection consists of two sub-steps. The first is the extraction of the S-channel image from the HSL color space of the eye image. The second is the analysis of the eye state, i.e. open or closed, for fatigue detection.

1) Extraction of the S-channel Image

The areas of the two eyes are converted from the RGB to the HSL color space. The histogram equalization operation is then applied to the S channel to increase the contrast of the eyeball, and the S-channel image is binarized using a pre-defined threshold of 30.

2) Eye State Analysis

The eye state (open or closed) is the main information for fatigue detection. Observing the binarization result of the S-channel image, the blob of the eyeball is close to a round shape when the eye is open; when the eye is closed, the blob is close to a flat shape. Therefore, a horizontal projection is applied to the binarized image: the maximum number of consecutive white pixels in every horizontal line is computed. Two examples of the horizontal projection are shown in Figure 5.

We found that the projection values in successive frames are unstable, which makes the fatigue detection result unstable, too. A Kalman filter is used to stabilize the values [18]. After the Kalman filter estimation, the eye state is judged as "close" when the value, i.e. the maximum number of consecutive white pixels, is larger than 0.45 times the width of the eye area RI_L or RI_R; otherwise, the eye state is judged as "open". Sometimes, head rotation causes only one eye to be judged successfully.
Therefore, if either eye is judged as "close", the output state is "close". Furthermore, fatigue is detected when the eyes are judged as "close" for two seconds, i.e. 30 frames at 15 fps (frames per second). A fatigue alarm is enabled according to the above result.

Figure 5. The process of horizontal projection: (a) eye-open image; (b) binarization of the S-channel image; (c) horizontal projection; (d) eye-close image; (e) binarization of the S-channel image; (f) horizontal projection

IV. EXPERIMENTAL STUDY

A fatigue detection system based on the above method was implemented using Visual C++. An experiment was designed to evaluate the performance of the implemented system. Four test videos of about 5~10 minutes were recorded; in two of them the subjects wear eyeglasses, and in the other two they do not. The maximum consecutive pixel values of the eyes are depicted in Figure 6; Figures 6(a) and (b) are the results without eyeglasses.

Figure 6. The fatigue detection results of four test videos: (a)/(b) without eyeglasses; (c)/(d) with eyeglasses

The proposed method is compared with the method presented by Flores et al. [19]. The results are listed in TABLE 1. For every test video, the total number of frames is marked in parentheses, and the numbers of eye-open and eye-close frames are marked below. For example, V1 has 686 frames in total, of which 493 are eye open and 193 are eye close. The correct ratio is computed for every test video, and the overall ratio is listed at the bottom of the table.
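The eye state analysis of Section III.B — horizontal projection, Kalman smoothing, the 0.45-width threshold, and the two-second rule — can be sketched end to end as follows. This is an illustrative Python sketch: the scalar filter parameters q and r and all function names are assumptions, not the paper's implementation.

```python
def horizontal_projection(binary_rows):
    """Maximum run of white (1) pixels in each row of a binary eye image."""
    values = []
    for row in binary_rows:
        best = run = 0
        for p in row:
            run = run + 1 if p else 0
            best = max(best, run)
        values.append(best)
    return values

def kalman_smooth(zs, q=0.01, r=4.0):
    """Scalar Kalman filter (random-walk model) to stabilise the per-frame
    projection values; q and r are illustrative noise parameters."""
    x, p, out = zs[0], 1.0, []
    for z in zs:
        p += q                  # predict: only the uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # correct with the measurement
        p *= 1.0 - k
        out.append(x)
    return out

def eye_state(value, roi_width, ratio=0.45):
    """'close' when the longest white run exceeds 0.45 of the ROI width."""
    return "close" if value > ratio * roi_width else "open"

def is_fatigued(states, fps=15, seconds=2.0):
    """Fatigue = eyes judged 'close' for two consecutive seconds of frames."""
    need, run = int(fps * seconds), 0
    for s in states:
        run = run + 1 if s == "close" else 0
        if run >= need:
            return True
    return False
```

Per frame, the single value plotted in Figure 6 would be max(horizontal_projection(eye_image)); the sequence of such values is smoothed by kalman_smooth, thresholded by eye_state, and the resulting state sequence is fed to is_fatigued.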


TABLE 1. THE COMPARISON OF EYE STATE DETECTION

                        Our method                Flores et al. [19]
Videos               Open   Close  Ratio(%)    Open   Close  Ratio(%)
V1 (686: 493+193)     471    180    94.9        488    171    96.1
V2 (1621: 1041+580)   974    552    94.1       1027    544    96.9
V3 (1217: 545+672)    450    599    86.2        433    574    82.7
V4 (1319: 361+958)    350    710    80.4        316    696    76.7
Overall                 4286/4843 = 88.5          4249/4843 = 87.7

The following conclusions can be drawn from TABLE 1:
• For the method presented in [19], the correct ratio for eye open is higher than that for eye close.
• For the test videos with eyeglasses (V3 and V4), our method has a higher correct ratio than the method presented in [19].
• When a subject in a test video rotates his head by about 45 degrees, our method fails to remove the eyeglasses, which causes misjudgement of the eye state.

Next, the fatigue detection is also evaluated. The results are listed in TABLE 2. The abbreviations used in the table are described below:
• EB: the number of eye blinks
• RD: the number of real dozing events
• AG: the number of alarms generated by the system
• NF (Negative False): the number of false alarms
• PF (Positive False): the number of real dozing events without an alarm
• CA (Correct Alarm): the number of real dozing events with an alarm generated
• CR (Correct Ratio): the correct percentage of fatigue alarms
• PR (Precision Rate): the precision percentage of the generated alarms

The computation of CR and PR is based on Eq. (2):

  CR = CA / RD × 100%                (2)
  PR = CA / AG × 100%

The above data are collected manually, since only a human can verify whether the subject in a test video is really dozing.
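Eq. (2) and the entries of TABLE 2 can be checked with a small helper; the function names are illustrative.

```python
def correct_ratio(ca, rd):
    """CR (%): fraction of real dozing events that triggered an alarm."""
    return 100.0 * ca / rd

def precision_rate(ca, ag):
    """PR (%): fraction of generated alarms that match real dozing."""
    return 100.0 * ca / ag
```

For example, for V3 under the method of [19], CA = 4, RD = 4, and AG = 6, so CR = 100 and PR ≈ 66.7, matching TABLE 2.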
TABLE 2. THE COMPARISON OF FATIGUE DETECTION

              V1            V2            V3            V4
Methods    Our   [19]    Our   [19]    Our   [19]    Our   [19]
EB              8            11             9             5
RD              2             6             4             3
AG           2     2      6     6       4     6       3     4
NF           0     0      0     0       0     2       0     1
PF           0     0      0     0       0     0       0     0
CA           2     2      6     6       4     4       3     3
CR (%)     100   100    100   100     100   100     100   100
PR (%)     100   100    100   100     100   66.7    100   75

According to the results listed in TABLE 2, both our method and the method presented in [19] generate alarms with a 100-percent correct ratio. However, the method presented in [19] generates false alarms when the subjects wear eyeglasses, which decreases its precision rate to 66.7 and 75 percent.

Furthermore, the execution time is also measured. The system is executed on a PC with an Intel Core2 Quad CPU Q8400 @ 2.66 GHz and 3 GB RAM. The average execution time of every step of the proposed method is measured; the results are listed in TABLE 3.

TABLE 3. THE AVERAGE EXECUTION TIME

Steps                              Elapsed time (ms)
Face detection                         15.885
Eyeglasses removal                      1.393
Eye detection and tracking              1.722
Fatigue detection                       0.439
Overall of the method                  19.439
A frame processed by the system        61.995

According to the results listed in TABLE 3, face detection is the most time-consuming step, since the whole frame must be processed. The remaining steps are limited to the ROI, so their execution time is substantially shorter; the overall execution time of the proposed method is about 19 milliseconds. The system consumes about 62 milliseconds to process a frame, including the display of the GUI (Graphical User Interface), which means it can handle 15~16 frames per second.

V. CONCLUSION

In this paper, a fatigue detection method with eyeglasses removal is proposed.
The removal of eyeglasses increases the precision of eye detection and thus reduces false alarms. According to the results of the experimental study, the correct ratio of eye state detection reaches 94 and 86 percent when the subjects do not and do wear eyeglasses, respectively. The overall correct ratio reaches 88.5 percent.

Although the proposed method seems promising, some issues remain:

1. Several threshold values are used in the method. The current settings are based on an empirical study. However, such fixed values can hardly maintain the same system performance in a practical environment. An automatic or adaptive mechanism should be designed to improve the feasibility of the method.
2. Eyeglasses removal may fail when the rotation of the driver's head exceeds 30 degrees. Therefore, installing the camera directly in front of the driver is suggested to prevent such a problem.
3. When the driver is backlit, the face and eye areas become dark. Eye area detection may then fail and cause false alarms.

ACKNOWLEDGMENT

This paper is based upon work supported by the industrial academic project, "Vehicle Safety Driving Assistant Technology Development Project", under Grant No. UT99-DTJ4-0-007.


REFERENCES

[1] P. Viola and M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, 2001, pp. I-511–I-518.
[2] C. Wu, C. Liu, and H.-Y. Shum, "Automatic Eyeglasses Removal from Face Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 3, March 2004, pp. 322-336.
[3] C. Du and G. Su, "Eyeglasses removal from facial images," Pattern Recognition Letters, vol. 26, no. 14, 2005, pp. 2215-2220.
[4] J. S. Park, Y. H. Oh, S. C. Ahn, and S. W. Lee, "Glasses Removal from Facial Image Using Recursive Error Compensation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, May 2005, pp. 805-811.
[5] L. Liu, Y. Sun, B. Yin, and C. Song, "Local Gabor Binary Pattern Random Subspace Method for Eyeglasses-face Recognition," Proceedings of the 3rd International Congress on Image and Signal Processing, Yantai, China, 2010, pp. 1892-1896.
[6] W. B. Horng, C. Y. Chen, Y. Chang, and C. H. Fan, "Driver Fatigue Detection Based on Eye Tracking and Dynamic Template Matching," Proceedings of the International Conference on Networking, Sensing & Control, Taipei, Taiwan, 2004, pp. 7-12.
[7] N. Sharma and V. K. Banga, "Development of a Drowsiness Warning System based on the Fuzzy Logic," International Journal of Computer Applications, vol. 8, no. 9, 2010, pp. 1-6.
[8] K. P. Yao, W. H. Lin, C. Y. Fang, J. M. Wang, S. L. Chang, and S. W. Chen, "Real-Time Vision-Based Driver Drowsiness/Fatigue Detection System," Proceedings of the IEEE 71st Vehicular Technology Conference, Taipei, Taiwan, 2010, pp. 1-5.
[9] D. Liu, P. Sun, Y. Q. Xiao, and Y. Yin, "Drowsiness Detection Based on Eyelid Movement," Proceedings of the Second International Workshop on ETCS, Wuhan, China, 2010, pp. 49-52.
[10] P. R. Tabrizi and R. A. Zoroofi, "Open/Closed Eye Analysis for Drowsiness Detection," Proceedings of the First Workshop on Image Processing Theory, Tools and Applications, Sousse, Tunisia, 2008, pp. 1-7.
[11] H. Z. Dong and M. Xie, "Real-Time Driver Fatigue Detection Based on Simplified Landmarks of AAM," Proceedings of the ICACIA, Chengdu, China, 2010, pp. 363-366.
[12] L. Li, M. Xie, and H. Dong, "A Method of Driving Fatigue Detection Based on Eye Location," Proceedings of the IEEE 3rd ICCSN, Xian, China, 2011, pp. 480-484.
[13] M. I. Khan and A. B. Mansoor, "Real Time Eyes Tracking and Classification for Driver Fatigue Detection," Lecture Notes in Computer Science, vol. 5112, 2008, pp. 729-738.
[14] I. Garcia, S. Bronte, L. M. Bergasa, N. Hernandez, B. Delgado, and M. Sevillano, "Vision-based Drowsiness Detector for a Realistic Driving Simulator," Proceedings of the 13th International IEEE ITSC, Funchal, Portugal, 2010, pp. 887-894.
[15] N. Alioua, A. Amine, M. Rziza, and D. Aboutajdine, "Driver's Fatigue and Drowsiness Detection to Reduce Traffic Accidents on Road," Proceedings of the 14th International Conference on CAIP, Seville, Spain, 2011, pp. 397-407.
[16] H. T. Lin, "An Automatic Glasses Removal by Using Skin Color and Exemplar-Based Inpainting Method," Master Thesis, Southern Taiwan University of Technology, CSIE, 2008, 60 pages.
[17] P. C. Yuen and C. H. Man, "Human Face Image Searching System Using Sketches," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 37, no. 4, 2007, pp. 493-504.
[18] D. Simon, "Kalman Filtering," Embedded Systems Programming, vol. 14, no. 6, June 2001, pp. 72-79.
[19] M. J. Flores, J. M. Armingol, and A. de la Escalera, "Driver Drowsiness Warning System Using Visual Information for Both Diurnal and Nocturnal Illumination Conditions," EURASIP Journal on Advances in Signal Processing, vol. 2010, 19 pages.
