Robust talking face video verification using joint factor analysis and sparse representation on GMM mean shifted supervectors

Ming Li, Shrikanth Narayanan
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
It has previously been demonstrated that systems based on block-wise local features and Gaussian mixture models (GMMs) are well suited to video-based talking face verification, offering the best trade-off among complexity, robustness, and performance. In this paper, we propose two methods to enhance the robustness and performance of the GMM-ZTnorm baseline system. First, joint factor analysis is performed to compensate for session variability due to different recording devices, lighting conditions, facial expressions, etc. Second, the difference between the universal background model (UBM) and the maximum a posteriori (MAP) adapted model is mapped into a GMM mean shifted supervector, whose over-complete dictionary becomes more incoherent. Then, for verification, the sparse representation computed by l1-minimization with quadratic constraints is employed to model these GMM mean shifted supervectors. Experimental results show that the proposed system achieves 8.4% (group 1) and 10.5% (group 2) equal error rate on the BANCA talking face video database under the P protocol, outperforming the GMM-ZTnorm baseline by more than 20% relative error reduction.
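As a rough illustration of the two ingredients described in the abstract (not the authors' implementation), the sketch below first stacks per-component MAP-minus-UBM mean shifts into a supervector using a KL-divergence-style weight/covariance normalization (a common convention; the paper's exact scaling may differ), then recovers a sparse code over a dictionary of enrollment supervectors. An unconstrained l1-regularized least-squares solver (ISTA) stands in for the paper's l1-minimization with quadratic constraints; all function names and parameters here are assumptions:

```python
import numpy as np

def mean_shifted_supervector(ubm_means, map_means, weights, variances):
    """Stack per-component mean shifts (MAP-adapted minus UBM) into one
    long vector, scaled by sqrt(mixture weight) and inverse standard
    deviation (assumed normalization, KL-kernel style)."""
    scale = np.sqrt(weights)[:, None] / np.sqrt(variances)
    return (scale * (map_means - ubm_means)).ravel()

def sparse_code_ista(D, y, lam=0.01, n_iter=3000):
    """Sparse representation of y over dictionary D via ISTA, solving
    min_x 0.5*||D x - y||_2^2 + lam*||x||_1 -- an unconstrained proxy
    for l1-minimization with a quadratic constraint."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy usage: dictionary columns = enrollment supervectors (here random),
# test supervector coded sparsely; a verifier would then compare
# per-class reconstruction residuals ||y - D_c x_c||_2.
rng = np.random.default_rng(0)
D = rng.standard_normal((200, 12))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
x_true = np.zeros(12)
x_true[[3, 8]] = [1.0, -0.5]               # ground-truth sparse code
y = D @ x_true
x_hat = sparse_code_ista(D, y)
```

In a sparse-representation classifier, the claimed identity is accepted when the residual obtained using only the dictionary atoms of that identity is sufficiently small relative to the residuals from other identities.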
doi:10.1109/icassp.2011.5946773 dblp:conf/icassp/LiN11