Example-Guided Style-Consistent Image Synthesis From Semantic Labeling

Miao Wang, Guo-Ye Yang, Ruilong Li, Run-Ze Liang, Song-Hai Zhang, Peter M. Hall, Shi-Min Hu
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Figure 1: We present a generative adversarial framework for synthesizing images from semantic label maps as well as image exemplars. Our synthetic results are photorealistic, semantically consistent with the label maps (facial expression, pose, or scene segmentation map), and style-consistent with the corresponding exemplars.

Abstract

Example-guided image synthesis aims to synthesize an image from a semantic label map and an exemplary image indicating style. We use the term "style" in this problem to refer to implicit characteristics of images: in portraits, "style" includes gender, racial identity, age, and hairstyle; in full-body pictures it includes clothing; in street scenes it refers to weather, time of day, and the like. A semantic label map in these cases indicates facial expression, full-body pose, or scene segmentation. We propose a solution to the example-guided image synthesis problem using conditional generative adversarial networks with style consistency. Our key contributions are (i) a novel style-consistency discriminator to determine whether a pair of images are consistent in style; (ii) an adaptive semantic consistency loss; and (iii) a training-data sampling strategy for synthesizing results that are style-consistent with the exemplar. We demonstrate the efficiency of our method on face, dance, and street-view synthesis tasks.
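The paper's style-consistency discriminator is learned adversarially on image pairs; as a rough, non-learned illustration of what "style consistency between a pair of images" can mean, the sketch below computes a Gram-matrix style distance (a standard proxy from the style-transfer literature) between two feature maps. The function names and the random feature maps are illustrative assumptions, not the paper's method.

```python
import numpy as np

def gram_matrix(feats):
    """Channel-wise Gram matrix of a (C, H, W) feature map,
    normalized by the number of entries."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_distance(feats_a, feats_b):
    """Frobenius distance between Gram matrices: a classic
    non-learned proxy for style (in)consistency of two images."""
    return float(np.linalg.norm(gram_matrix(feats_a) - gram_matrix(feats_b)))

# Hypothetical feature maps standing in for deep features of images.
rng = np.random.default_rng(0)
base = rng.standard_normal((8, 16, 16))
same_style = base + 0.01 * rng.standard_normal((8, 16, 16))  # near-identical style
other = rng.standard_normal((8, 16, 16))                     # unrelated style

# A style-consistent pair should score a smaller distance.
print(style_distance(base, same_style) < style_distance(base, other))
```

A learned discriminator, as in the paper, replaces this fixed distance with a network trained to separate style-consistent pairs (sampled from the same video or identity) from inconsistent ones.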
doi:10.1109/cvpr.2019.00159 dblp:conf/cvpr/0004YLLZHH19