Object Tracking and Face Recognition in Video Streams

Chapter 6. Conclusions

fulfilled by having integrated Faceclip, and having compared it to the original code. Because of negative results, Faceclip is not used in the current implementation. While it would have been nice to have goals G2 and G4 implemented too, the results of the current system are enough to give an indication of the effect of using object tracking in a face recognition system.

The conclusion drawn from this project is that object tracking is a good tool for improving the accuracy of face recognition in video streams. Anyone implementing face recognition for video streams should consider using object tracking as a central component.

6.1 Limitations and future work

Testing was done by comparing whether a reported face position overlaps with a known face position. It would, for example, mark a position defining a person's nose as correct, since it overlaps with the whole face.
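The overlap criterion above can be sketched as a simple bounding-box intersection test. This is not the thesis's actual evaluation code; the `(x, y, w, h)` box format, the example coordinates, and the intersection-over-union (IoU) variant shown as a stricter alternative are all assumptions for illustration. The example shows how a small "nose" box passes the pure overlap test while failing an IoU threshold:

```python
def boxes_overlap(a, b):
    """True if two axis-aligned boxes (x, y, w, h) intersect at all."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes; 0.0 if disjoint."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Hypothetical ground-truth face and a reported "nose" position inside it.
face = (100, 100, 200, 200)
nose = (180, 180, 40, 40)
print(boxes_overlap(face, nose))  # True: counted as correct by the overlap test
print(iou(face, nose))            # 0.04: would fail a typical IoU >= 0.5 check
```

Scoring with an IoU threshold instead of raw overlap would remove the kind of inaccuracy described above, at the cost of requiring tightly annotated ground-truth boxes.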
Inspecting the results shows that there are few such occasions, but it still means that the result numbers are not fully accurate.

The current object tracking implementation cannot handle a large number of false detections with the unknown PID, as it uses up too much memory.

The current way of filtering false detections easily fails: a trajectory will be accepted as long as the face detector reports a particular image patch twice within a short time span. This is particularly apparent when combining object tracking with Faceclip, as Faceclip often reports the same invalid image patch twice or more.

One type of false detection occurs when the system tracks a face properly for a while, then finds the face at the wrong position for a few frames, and then goes back to tracking it at the correct position. A solution based on continuity filtering (Nielsen, 2010) may be a way to get rid of the false intermediate positions.

It may also be useful to try identifying faces reported by the object tracker, instead of the current way of only identifying faces reported by the face detector. Doing this would increase the number of faces for potential identification.

It would be useful to test different settings, for example other values of pid_min and det_min. That may be enough to get rid of some of the false detections and identifications.

The system currently only does forward tracking.
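One minimal form of continuity filtering is to reject any position that jumps further from the last accepted position than a face could plausibly move between frames. This is only a sketch of the general idea, not the method of Nielsen (2010), whose details are not given here; the function name, the centre-point representation, and the `max_jump` threshold are assumptions:

```python
import math

def continuity_filter(centres, max_jump):
    """Keep only face centres within max_jump pixels of the previously
    accepted centre; intermediate outliers are dropped as false positions.

    centres: list of (x, y) face centres, one per frame, for one trajectory.
    """
    if not centres:
        return []
    kept = [centres[0]]
    for c in centres[1:]:
        px, py = kept[-1]
        if math.hypot(c[0] - px, c[1] - py) <= max_jump:
            kept.append(c)
    return kept

# A track that briefly jumps to a wrong position and then returns.
track = [(10, 10), (12, 11), (200, 5), (14, 12)]
print(continuity_filter(track, max_jump=20))
# → [(10, 10), (12, 11), (14, 12)]
```

A real implementation would likely also account for the frame timestamps and the face's recent velocity rather than using a fixed pixel threshold.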
As mentioned in Section ??, one possible improvement would be to also implement backward tracking.

The optional goal of detecting the direction of each face has not been implemented. A future research idea is to look into techniques for doing that.

In the Complex experiment, the object tracker failed to track some faces. As mentioned in Section 5.2, this may be due to the low frame rate. One way to test whether it is caused by the low frame rate may be to down-sample the frame rate of a video clip with a higher frame rate, and see how the object tracking is affected. If the low frame rate is the cause of the failure, getting good results when using the implementation may require a video source with a high enough frame rate.

In general, Wawo seems to be quite sensitive to which training pictures are included; they all have to be of similar size and quality. Because of this high sensitivity, using the implementation in practice may require the user to put some effort into producing training pictures of high enough, and consistent, quality.
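The frame-rate experiment suggested above could be sketched as follows: take a clip decoded into a frame sequence and keep only every n-th frame to simulate a lower frame rate. The function and its signature are hypothetical, and real video I/O (e.g. via a video library) is deliberately left out so the sampling logic stands on its own:

```python
def downsample(frames, src_fps, target_fps):
    """Simulate a lower frame rate by keeping every step-th frame.

    frames: sequence of decoded frames from a clip recorded at src_fps.
    Returns the subsequence approximating target_fps.
    """
    step = max(1, round(src_fps / target_fps))
    return frames[::step]

# E.g. a 25 fps clip reduced to roughly 5 fps keeps every 5th frame.
frames = list(range(10))  # stand-in for 10 decoded frames
print(downsample(frames, src_fps=25, target_fps=5))  # → [0, 5]
```

Running the object tracker on the original and the down-sampled sequence of the same clip would then isolate the effect of frame rate from other differences between recordings.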
