Real-Time Hair Rendering Using Sequential Adversarial Networks [chapter]

Lingyu Wei, Liwen Hu, Vladimir Kim, Ersin Yumer, Hao Li
2018 Lecture Notes in Computer Science  
[Figure panels: reference images, reference hair models, rendering results.] Fig. 1. We propose a real-time hair rendering method. Given a reference image, we render a 3D hair model with the reference's color and lighting in real time. Faces in this paper are obfuscated to avoid copyright infringement.

Abstract. We present an adversarial network for rendering photorealistic hair as an alternative to conventional computer graphics pipelines. Our deep learning approach requires neither low-level parameter tuning nor ad hoc asset design. Our method takes a strand-based 3D hair model as input and provides intuitive user control over color and lighting through reference images. To handle the diversity of hairstyles and the complexity of their appearance, we disentangle hair structure, color, and illumination properties using a sequential GAN architecture and a semi-supervised training approach. We also introduce an intermediate edge-activation-map-to-orientation-field conversion step to ensure a successful CG-to-photoreal transition while preserving the hair structures of the original input data. Since rendering requires only a feed-forward pass through the network, it runs in real time. We demonstrate the synthesis of photorealistic hair images on a wide range of intricate hairstyles and compare our technique with state-of-the-art hair rendering methods.
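The abstract mentions converting an edge activation map into an orientation field before the photoreal stage. The paper's exact conversion is not given here, but a common way to derive a dense orientation field from a 2D activation map is via image gradients: the local strand direction is taken perpendicular to the intensity gradient, with angles folded into [0, π) since strand orientations are undirected. A minimal sketch under those assumptions (the function name and the toy stripe input are illustrative, not from the paper):

```python
import numpy as np

def edge_map_to_orientation_field(edge_map: np.ndarray) -> np.ndarray:
    """Convert a 2D edge-activation map (H, W) into a per-pixel
    orientation field of undirected strand angles in [0, pi).

    Hypothetical helper: the paper's actual conversion step may differ.
    """
    # Image gradients of the activation map.
    gy, gx = np.gradient(edge_map.astype(np.float64))
    # Assume the strand direction is perpendicular to the gradient.
    theta = np.arctan2(gy, gx) + np.pi / 2.0
    # Orientations are undirected: fold angles into [0, pi).
    return np.mod(theta, np.pi)

# Toy example: horizontal stripes vary only along rows, so the gradient
# is vertical and the recovered strand orientation is near-horizontal.
edges = np.tile(np.sin(np.linspace(0, 4 * np.pi, 32))[:, None], (1, 32))
field = edge_map_to_orientation_field(edges)
```

In a pipeline like the one described, such a field would serve as the structure-preserving intermediate representation fed to the first generator, decoupled from color and lighting.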
doi:10.1007/978-3-030-01225-0_7