
PixelSynth: Generating a 3D-Consistent Experience from a Single Image [article]

Chris Rockwell, David F. Fouhey, Justin Johnson
2021 arXiv pre-print
Recent advancements in differentiable rendering and 3D reasoning have driven exciting results in novel view synthesis from a single image. ... In addition, we show increased 3D consistency compared to alternative accumulation methods. Project website: ... We thank Angel Chang, Richard Tucker, and Noah Snavely for allowing us to share frames from their datasets, and Olivia Wiles and Ajay Jain for easily extended code. ...
arXiv:2108.05892v1 fatcat:rieqpmrkp5gupibymp2evecif4

Look Outside the Room: Synthesizing A Consistent Long-Term 3D Scene Video from A Single Image [article]

Xuanchi Ren, Xiaolong Wang
2022 arXiv pre-print
Novel view synthesis from a single image has recently attracted a lot of attention, and it has been primarily advanced by 3D deep learning and rendering techniques. ... In this paper, we propose a novel approach to synthesize a consistent long-term video given a single scene image and a trajectory of large camera motions. ... Given a single image x_1, we first generate x_2, ..., x_L in an autoregressive manner. Then, instead of only using x_L, we aggregate information from x_2, ..., x_L to generate x_{L+1} and so on. ...
arXiv:2203.09457v1 fatcat:2wktjhheifb7rcgbdf2ckyrcb4
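The rollout scheme described in the second abstract can be sketched in a few lines: generate frames autoregressively from a single starting image, conditioning each new frame on the aggregated history of all previous frames rather than only the most recent one. This is a minimal illustration only; `generate_next` is a hypothetical placeholder standing in for the paper's actual learned model.

```python
def generate_next(history, camera_pose):
    """Hypothetical placeholder generator (NOT the paper's model).

    A real implementation would render a new view conditioned on the
    aggregated history of frames and the target camera pose; here we
    just return a token recording how many frames informed the step.
    """
    return f"frame({len(history) + 1})@{camera_pose}"


def synthesize_video(x1, trajectory):
    """Autoregressively roll out frames along a camera trajectory.

    Mirrors the abstract's scheme: to produce x_{t+1}, aggregate
    information from ALL generated frames x_1..x_t, not just x_t.
    """
    frames = [x1]
    for pose in trajectory:
        # The full history (x_1..x_t) is passed in, so each new frame
        # can draw on every earlier view, improving long-term consistency.
        frames.append(generate_next(frames, pose))
    return frames


video = synthesize_video("x1", ["pose1", "pose2", "pose3"])
```

The key design point the abstract highlights is the conditioning set: passing the whole `frames` list (rather than only `frames[-1]`) is what distinguishes the aggregation strategy from a purely Markovian rollout.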