Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning

Hao Zhu, Huaibo Huang, Yi Li, Aihua Zheng, Ran He
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Talking face generation aims to synthesize a face video with precise lip synchronization and smooth facial motion over the entire video, given a speech clip and a facial image. Most existing methods focus on either disentangling the information in a single image or learning temporal information between frames. However, cross-modal coherence between audio and visual information has not been well addressed during synthesis. In this paper, we propose a novel arbitrary talking face generation framework that discovers audio-visual coherence via a proposed Asymmetric Mutual Information Estimator (AMIE). In addition, we propose a Dynamic Attention (DA) block that selectively focuses on the lip region of the input image during training, to further enhance lip synchronization. Experimental results on the benchmark LRW and GRID datasets surpass state-of-the-art methods on prevalent metrics, with robust high-resolution synthesis across gender and pose variations.
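The abstract does not detail AMIE's formulation, but mutual-information estimators of this family typically build on a variational lower bound. As a minimal illustrative sketch only (not the authors' method), the code below computes the Donsker-Varadhan lower bound, which MINE-style estimators optimize, using a hand-chosen critic on toy correlated Gaussian data; the function `dv_mi_lower_bound` and the critic are assumptions for illustration:

```python
import numpy as np

def dv_mi_lower_bound(x, y, critic, rng):
    # Donsker-Varadhan lower bound on mutual information:
    #   I(X;Y) >= E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[exp(T(x,y))]
    joint_term = critic(x, y).mean()
    y_shuffled = rng.permutation(y)  # shuffling approximates product-of-marginals samples
    marginal_term = np.log(np.exp(critic(x, y_shuffled)).mean())
    return joint_term - marginal_term

rng = np.random.default_rng(0)
n = 50_000
rho = 0.8
x = rng.standard_normal(n)
y_corr = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # correlated with x
y_ind = rng.standard_normal(n)                                   # independent of x

critic = lambda a, b: 0.5 * a * b  # fixed toy critic; learned networks replace this in practice

mi_corr = dv_mi_lower_bound(x, y_corr, critic, rng)
mi_ind = dv_mi_lower_bound(x, y_ind, critic, rng)
```

With correlated inputs the bound is clearly positive, while for independent inputs it hovers at or below zero; in a trained estimator the critic would be a neural network maximizing this bound, which is the general idea behind measuring audio-visual coherence as mutual information.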
doi:10.24963/ijcai.2020/323