Lien, J., T. Kanade, J. Cohn, C. C. Li, 'Detection, tracking and classification of action units in facial expression', Journal of Robotics and Autonomous Systems, 31:131–146, 2000
Lyons, M. J., J. Budynek, S. Akamatsu, 'Automatic classification of single facial images', IEEE Trans. Pattern Anal. Machine Intell., vol. 21, no. 12, pp. 1357–1362, 1999
Morimoto, C., D. Koons, A. Amir, M. Flickner, 'Pupil detection and tracking using multiple light sources', Technical report, IBM Almaden Research Center, 1998
Moriyama, T., J. Xiao, J. F. Cohn, T. Kanade, 'Meticulously detailed eye model and its application to analysis of facial image', Proceedings of IEEE SMC, pp. 580–585, 2004
Padgett, C., G. Cottrell, 'Representing face images for emotion classification', in M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, vol. 9, Cambridge, MA, MIT Press, 1997
Padgett, C., G. Cottrell, R. Adolphs, 'Categorical perception in facial emotion classification', in Proceedings of the 18th Annual Conference of the Cognitive Science Society, Hillsdale, NJ, Lawrence Erlbaum, 1996
Pantic, M., L. J. M. Rothkrantz, 'Toward an affect-sensitive multimodal human-computer interaction', Proceedings of the IEEE, vol. 91, no. 9, pp. 1370–1390, 2003
Pantic, M., L. J. M. Rothkrantz, 'Self-adaptive expert system for facial expression analysis', IEEE International Conference on Systems, Man and Cybernetics (SMC '00), pp. 73–79, October 2000
Pantic, M., L. J. M. Rothkrantz, 'Automatic analysis of facial expressions: the state of the art', IEEE Trans. PAMI, 22(12), 2000
Pantic, M., L. J. M. Rothkrantz, 'An expert system for multiple emotional classification of facial expressions', Proceedings, 11th IEEE International Conference on Tools with Artificial Intelligence, pp. 113–120, 1999
Phillips, P., H. Moon, P. Rauss, S. Rizvi, 'The FERET September 1996 database and evaluation procedure', in Proc. First Int'l Conf. on Audio and Video-based Biometric Person Authentication, pp. 12–14, Switzerland, 1997
Rosenblum, M., Y. Yacoob, L. S. Davis, 'Human expression recognition from motion using a radial basis function network architecture', IEEE Trans. Neural Networks, vol. 7, no. 5, pp. 1121–1137, 1996
Salovey, P., J. D. Mayer, 'Emotional intelligence', Imagination, Cognition, and Personality, 9(3):185–211, 1990
Samal, A., P. Iyengar, 'Automatic recognition and analysis of human faces and facial expression: a survey', Pattern Recognition, 25(1):65–77, 1992
Schweiger, R., P. Bayerl, H. Neumann, 'Neural architecture for temporal emotion classification', ADS 2004, LNAI 2068, Springer-Verlag Berlin Heidelberg, pp. 49–52, 2004
Seeger, M., 'Learning with labeled and unlabeled data', Technical report, Edinburgh University, 2001
Stathopoulou, I. O., G. A. Tsihrintzis, 'An improved neural-network-based face detection and facial expression classification system', Proceedings of IEEE SMC, pp. 666–671, 2004
Tian, Y., T. Kanade, J. F. Cohn, 'Recognizing upper face action units for facial expression analysis', in Proceedings of Conference on Computer Vision and Pattern Recognition, June 2000
Tian, Y., T. Kanade, J. F. Cohn, 'Recognizing action units for facial expression analysis', IEEE Trans. Pattern Analysis and Machine Intelligence, 23(2), February 2001
Turk, M., A. Pentland, 'Face recognition using eigenfaces', Proc. CVPR, pp. 586–591, 1991
Yacoob, Y., L. Davis, 'Computing spatio-temporal representation of human faces', in CVPR, pp. 70–75, Seattle, WA, June 1994
Yin, L., J. Loi, W. Xiong, 'Facial expression analysis based on enhanced texture and topographical structure', Proceedings of IEEE SMC, pp. 586–591, 2004
Zhang, Z., 'Feature-based facial expression recognition: sensitivity analysis and experiments with a multilayer perceptron', International Journal of Pattern Recognition and Artificial Intelligence, 13(6):893–911, 1999
Zhang, Z., M. Lyons, M. Schuster, S. Akamatsu, 'Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron', in Proc. IEEE 3rd Int'l Conf. on Automatic Face and Gesture Recognition, Nara, Japan, April 1998
Zhu, Z., Q. Ji, K. Fujimura, K. Lee, 'Combining Kalman filtering and mean shift for real time eye tracking under active IR illumination', in Proc. Int'l Conf. Pattern Recognition, Aug. 2002
Wang, X., X. Tang, 'Bayesian face recognition using Gabor features', Proceedings of the 2003 ACM SIGMM, Berkeley, California, 2003