Fusing face recognition from multiple cameras

Josh Harguess, Changbo Hu, J. K. Aggarwal
2009 Workshop on Applications of Computer Vision (WACV)
Face recognition from video has recently received much interest. However, several challenges for such a system exist, such as resolution, occlusion (from objects or self-occlusion), motion blur, and illumination. The aim of this paper is to overcome the problem of self-occlusion by observing a person from multiple cameras with uniquely different views of the person's face and fusing the recognition results in a meaningful way. Each camera may only capture a part of the face, such as the right or left half of the face. We propose a methodology that uses cylinder head models (CHMs) to track the face of a subject in multiple cameras. The problem of face recognition from video is then transformed into a still face recognition problem, which has been well studied. The recognition results are fused based on the extracted pose of the face. For instance, the recognition result from a frontal face should be weighted more heavily than the recognition result from a face with a yaw of 30°. Eigenfaces is used for still face recognition, along with the average-half-face to reduce the effect of transformation errors. Results of tracking are further aggregated to produce 100% accuracy using video taken from two cameras in our lab.
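The abstract does not give the exact fusion rule, so the following is only a rough sketch of what a pose-weighted combination of per-camera eigenface similarity scores could look like. The weight function, the linear falloff with yaw, the 45° cutoff, and all example numbers are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's exact formulation): each camera yields a
# per-subject similarity score vector from eigenface matching, plus an estimated
# head yaw from the cylinder head model tracker. Scores are combined with
# weights that decay as the face turns away from frontal.

def pose_weight(yaw_deg, max_yaw=45.0):
    """Hypothetical weight: 1.0 for a frontal face, falling linearly to 0 at max_yaw."""
    return max(0.0, 1.0 - abs(yaw_deg) / max_yaw)

def fuse_scores(camera_scores, camera_yaws):
    """Weighted average of per-camera similarity scores.

    camera_scores: list of arrays, one per camera, each of shape (num_subjects,)
    camera_yaws:   list of estimated yaw angles in degrees, one per camera
    """
    weights = np.array([pose_weight(y) for y in camera_yaws])
    if weights.sum() == 0:
        weights = np.ones_like(weights)      # fall back to uniform weights
    weights /= weights.sum()
    fused = sum(w * s for w, s in zip(weights, np.asarray(camera_scores)))
    return int(np.argmax(fused))             # index of the recognized subject

# Example: two cameras observing three enrolled subjects
scores_cam1 = np.array([0.40, 0.35, 0.25])   # near-frontal view, yaw ~5 degrees
scores_cam2 = np.array([0.30, 0.45, 0.25])   # oblique view, yaw ~30 degrees
print(fuse_scores([scores_cam1, scores_cam2], [5.0, 30.0]))
```

In this toy example the near-frontal camera dominates the fused decision, which mirrors the abstract's point that a frontal recognition result should outweigh one from a face turned 30° away.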
doi:10.1109/wacv.2009.5403055 dblp:conf/wacv/HarguessHA09