A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Few-shot Neural Human Performance Rendering from Sparse RGBD Videos
2021
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
unpublished
Recent neural rendering approaches for human activities achieve remarkable view synthesis results, but they still rely on dense input views or dense training with all captured frames, leading to deployment difficulty and heavy training overhead. Moreover, existing methods become ill-posed when the input is both spatially and temporally sparse. To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal …
doi:10.24963/ijcai.2021/130
fatcat:cxxo23s4knc6ln3tkngnc2ys5u