Novel View Synthesis of Dynamic Scenes With Globally Coherent Depths From a Monocular Camera
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Figure 1: Dynamic Scene View Synthesis. (Left) A dynamic scene is captured by a monocular camera from locations V_0 to V_k; each image captures people jumping at each time step (t = 0 to t = k). (Middle) A novel view from an arbitrary location between V_0 and V_1 (denoted by an orange frame) is synthesized with the dynamic content observed at time t = k; the estimated depth at V_k is shown in the inset. (Right) For the novel view (orange frame), we can also synthesize the […]
doi:10.1109/cvpr42600.2020.00538
dblp:conf/cvpr/YoonKGPK20
fatcat:no3b2gf4gjarvnisbejyw2pgwi