[127] C.Y. Xu and J.L. Prince, “Snakes, shapes, and gradient vector flow,” IEEE Trans. Image Processing, vol. 7, no. 3, pp. 359–369, Mar. 1998.
[128] R. Goldenberg, R. Kimmel, E. Rivlin, and M. Rudzsky, “Fast geodesic active contours,” IEEE Trans. Image Processing, vol. 10, no. 10, pp. 1467–1475, Oct. 2001.
[129] X.M. Pardo, M.J. Carreira, A. Mosquera, and D. Cabello, “A snake for CT image segmentation integrating region and edge information,” Image and Vision Computing, vol. 19, no. 7, pp. 461–475, May 2001.
[130] T.F. Chan and L.A. Vese, “Active contours without edges,” IEEE Trans. Image Processing, vol. 10, no. 2, pp. 266–277, Feb. 2001.
[131] C. Chesnaud, P. Refregier, and V. Boulet, “Statistical region snake-based segmentation adapted to different physical noise models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 11, pp. 1145–1157, Nov. 1999.
[132] S.C. Zhu and A. Yuille, “Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 9, pp. 884–900, Sept. 1996.
[133] J. Ivins and J. Porrill, “Statistical snakes: active region models,” Proc. Fifth British Machine Vision Conf. (BMVC), vol. 2, pp. 377–386, Dec. 1994.
[134] T. Abe and Y. Matsuzawa, “Multiple active contour models with application to region extraction,” Proc. 15th Int’l Conf. Pattern Recognition, vol. 1, pp. 626–630, Sept. 2000.
[135] V. Chalana, D.T. Linker, D.R. Haynor, and Y.M. Kim, “A multiple active contour model for cardiac boundary detection on echocardiographic sequences,” IEEE Trans. Medical Imaging, vol. 15, no. 3, pp. 290–298, 1996.
[136] B. Fleming, 3D Modeling & Surfacing, Morgan Kaufmann, San Francisco, California, 1999.
[137] M. Deering, “Geometry compression,” Proc. SIGGRAPH Conf., pp. 13–20, Aug. 1995.
[138] M. Eck, T. DeRose, T. Duchamp, H. Hoppe, M. Lounsbery, and W. Stuetzle, “Multiresolution analysis of arbitrary meshes,” Proc. SIGGRAPH Conf., pp. 173–182, Aug. 1995.
[139] R.-L. Hsu, A.K. Jain, and M. Tuceryan, “Multiresolution model compression using 3-D wavelets,” Proc. Asian Conf. Computer Vision, pp. 74–79, Jan. 2000.
[140] A.R. Calderbank, I. Daubechies, W. Sweldens, and B.-L. Yeo, “Lossless image compression using integer to integer wavelet transforms,” Proc. IEEE Int’l Conf. Image Processing, vol. 1, pp. 596–599, Oct. 1997.
[141] M.D. Adams and F. Kossentini, “Reversible integer-to-integer wavelet transforms for image compression: performance evaluation and analysis,” IEEE Trans. Image Processing, vol. 9, no. 6, pp. 1010–1024, June 2000.
[142] IBM Query By Image Content (QBIC).
[143] Photobook.
[144] M. Abdel-Mottaleb, N. Dimitrova, R. Desai, and J. Martino, “CONIVAS: Content-based image and video access system,” Proc. Fourth ACM Multimedia Conf., pp. 427–428, Nov. 1996.
[145] FourEyes.
[146] Virage.
[147] J.-Y. Chen, C. Taskiran, E.J. Delp, and C.A. Bouman, “ViBE: A new paradigm for video database browsing and search,” Proc. IEEE Workshop Content-Based Access of Image and Video Libraries, pp. 96–100, June 1998.
[148] S.-F. Chang, W. Chen, H. Meng, H. Sundaram, and D. Zhong, “An automated content-based video search system using visual cues,” Proc. ACM Multimedia, pp. 313–324, Nov. 1997.
[149] VisualSEEk.
[150] W.Y. Ma and B.S. Manjunath, “NeTra: A toolbox for navigating large image databases,” Proc. IEEE Int’l Conf. Image Processing, vol. 1, pp. 568–571, Oct. 1997.
[151] S. Mehrotra, Y. Rui, M. Ortega, and T.S. Huang, “Supporting content-based queries over images in MARS,” Proc. IEEE Int’l Conf. Multimedia Computing and Systems, pp. 632–633, June 1997.
[152] J. Laaksonen, M. Koskela, S. Laakso, and E. Oja, “PicSOM: content-based image retrieval with self-organizing maps,” Pattern Recognition Letters, vol. 21, no. 13–14, pp. 1199–1207, Dec. 2000.
[153] M.S. Lew, “Next-generation Web searches for visual content,” IEEE Computer, pp. 46–53, Nov. 2000.
[154] A. Vailaya, M. Figueiredo, A.K. Jain, and H.-J. Zhang, “Image classification for content-based indexing,” IEEE Trans. Image Processing, vol. 10, no. 1, pp. 117–130, Jan. 2001.
[155] C.A. Poynton, A Technical Introduction to Digital Video, John Wiley & Sons, New York, 1996.