Image-driven re-targeting and relighting of facial expressions

Lei Zhang, Yang Wang, Sen Wang, D. Samaras, Song Zhang, Peisen Huang
Computer Graphics International 2005
Synthesis and re-targeting of facial expressions is central to facial animation and often involves significant manual work to achieve realistic expressions, owing to the difficulty of capturing high-quality expression data. Recent progress in dynamic 3D scanning allows very accurate acquisition of dense point clouds of facial geometry and texture moving at video speeds. Often the new facial expressions need to be rendered in environments where the illumination differs from the original capture conditions. In this paper we examine the problem of re-targeting captured facial motion under different illumination conditions when the information available about the face to be animated is minimal: a single input image. Given an input image of a face, a set of illumination example images (of other faces captured under different illumination), and a facial expression motion sequence, we aim to generate novel expression sequences of the input face under the lighting conditions of the illumination example images. The input image and the illumination example images can be taken under arbitrary unknown lighting. We propose two methods in which a 3D spherical harmonic basis morphable model (SHBMM) generates images under new lighting conditions with remarkable quality even when only a single image under unknown lighting is available, not only for static poses but also for dynamic sequences in which the face undergoes subtle, high-detail motion.
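To make the relighting idea concrete, the sketch below shows the standard nine-term spherical harmonic approximation of Lambertian shading (Basri and Jacobs; Ramamoorthi and Hanrahan) that SHBMM-style methods build on: given per-pixel surface normals and albedo, a face under arbitrary distant lighting is well approximated by a linear combination of nine "harmonic images," so lighting can be estimated from a single image by least squares and new images rendered with different coefficients. This is an illustrative sketch, not the authors' implementation; the function names and the per-band scale factors (absorbed here into the lighting coefficients) are assumptions.

```python
# Minimal sketch of 9-term spherical harmonic (SH) face relighting.
# Not the paper's pipeline; an illustration of the SH basis it relies on.
import numpy as np

def harmonic_images(normals, albedo):
    """Return the 9 harmonic basis images, shape (H, W, 9).

    normals: (H, W, 3) unit surface normals
    albedo:  (H, W)    per-pixel Lambertian albedo
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    # Real SH constants for bands l = 0, 1, 2
    c = [0.282095, 0.488603, 1.092548, 0.315392, 0.546274]
    b = np.stack([
        c[0] * np.ones_like(nx),     # l=0
        c[1] * ny,                   # l=1, m=-1
        c[1] * nz,                   # l=1, m=0
        c[1] * nx,                   # l=1, m=1
        c[2] * nx * ny,              # l=2, m=-2
        c[2] * ny * nz,              # l=2, m=-1
        c[3] * (3 * nz**2 - 1),      # l=2, m=0
        c[2] * nx * nz,              # l=2, m=1
        c[4] * (nx**2 - ny**2),      # l=2, m=2
    ], axis=-1)
    return albedo[..., None] * b

def estimate_lighting(image, normals, albedo):
    """Least-squares fit of 9 SH lighting coefficients to an observed image."""
    B = harmonic_images(normals, albedo).reshape(-1, 9)
    coeffs, *_ = np.linalg.lstsq(B, image.reshape(-1), rcond=None)
    return coeffs

def relight(normals, albedo, sh_coeffs):
    """Render the face under lighting described by 9 SH coefficients."""
    return harmonic_images(normals, albedo) @ sh_coeffs
```

In this simplified picture, the lighting of each illumination example image would be summarized by its fitted nine coefficients, and the input face, once its geometry and albedo are recovered (in the paper, via the morphable model), would be re-rendered frame by frame with those coefficients as the expression sequence deforms it.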
doi:10.1109/cgi.2005.1500355 dblp:conf/cgi/00020WS0H05