
Dynamic View Synthesis from Dynamic Monocular Video [article]

Chen Gao, Ayush Saraf, Johannes Kopf, Jia-Bin Huang
2021 arXiv   pre-print
We show extensive quantitative and qualitative results of dynamic view synthesis from casually captured videos.  ...  We present an algorithm for generating novel views at arbitrary viewpoints and any input time step given a monocular video of a dynamic scene.  ...  Conclusions: We have presented a new algorithm for dynamic view synthesis from a single monocular video.  ...
arXiv:2105.06468v1 fatcat:fvwki3byd5fu3fclqvjyun5yoe

Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [article]

Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt
2021 arXiv   pre-print
Our approach takes RGB images of a dynamic scene as input (e.g., from a monocular video recording), and creates a high-quality space-time geometry and appearance representation.  ...  We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.  ...  [84] combine neural textures with the classical graphics pipeline for novel view synthesis of static objects and monocular video re-rendering.  ... 
arXiv:2012.12247v4 fatcat:6axdyysm4bc6tev6tb3qp6mcxm
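
The snippet above describes the input and output but not the mechanism. Deformation-based dynamic NeRFs of this kind typically query a static canonical radiance field through a learned, time-conditioned bending of the sample points. A minimal sketch of that query, assuming hypothetical `deform_net` and `canonical_nerf` MLPs:

```python
import torch

def query_dynamic_nerf(points, t, deform_net, canonical_nerf):
    """Query a deformation-based dynamic NeRF at 3D sample points for time t.

    points: (N, 3) sample positions along camera rays
    t:      scalar time index, broadcast to every sample
    Both networks are hypothetical stand-ins for the learned MLPs.
    """
    t_col = torch.full((points.shape[0], 1), float(t))
    # Bend each observed-space point into the shared canonical frame;
    # all scene motion is absorbed by these offsets.
    offsets = deform_net(torch.cat([points, t_col], dim=-1))  # (N, 3)
    rgb, sigma = canonical_nerf(points + offsets)
    return rgb, sigma
```

Because the canonical field itself is time-independent, geometry and appearance are shared across all frames, which is what makes the representation compact.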

Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [article]

Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang
2021 arXiv   pre-print
We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input.  ...  We conduct a number of experiments that demonstrate our approach significantly outperforms recent monocular view synthesis methods, and show qualitative results of space-time view synthesis on a variety  ...  We present a new approach for novel view and time synthesis of dynamic scenes from monocular video input with known (or derivable) camera poses.  ...
arXiv:2011.13084v3 fatcat:obseifvswzbzxonqo74siddgbe
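
For orientation, methods in this family usually condition the radiance field on time and additionally have it predict 3D scene flow to the neighboring time steps; a cycle-consistency term on that flow is one common form of supervision. A rough sketch under those assumptions (the `field` MLP and its four outputs are hypothetical):

```python
import torch

def scene_flow_cycle_loss(field, points, t, dt=1.0):
    """Illustrative cycle-consistency term for a time-conditioned field.

    `field(x, t)` is assumed to return (rgb, sigma, flow_fwd, flow_bwd),
    where the flows are 3D offsets to times t + dt and t - dt.
    """
    _, _, flow_fwd, _ = field(points, t)
    # Advect samples forward along the predicted scene flow...
    points_next = points + flow_fwd
    _, _, _, flow_bwd_next = field(points_next, t + dt)
    # ...and ask the field at t + dt to flow them back to where they started.
    cycle_error = (points_next + flow_bwd_next - points).norm(dim=-1)
    return cycle_error.mean()
```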

D^2NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video [article]

Tianhao Wu, Fangcheng Zhong, Andrea Tagliasacchi, Forrester Cole, Cengiz Oztireli
2022 arXiv   pre-print
Given a monocular video, segmenting and decoupling dynamic objects while recovering the static environment is a widely studied problem in machine intelligence.  ...  We introduce Decoupled Dynamic Neural Radiance Field (D^2NeRF), a self-supervised approach that takes a monocular video and learns a 3D scene representation which decouples moving objects, including their  ...  Our method enables 3D scene decoupling and reconstruction from a monocular video captured with casual equipment such as a mobile phone, and can be readily extended to multi-view videos.  ...
arXiv:2205.15838v2 fatcat:fog6hztu4jfzdb4vn2iikqnxda
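
One common way to realize the static/dynamic decoupling the snippet describes is to render two radiance fields jointly and composite their densities, so every 3D point can be explained by whichever branch fits it better. A minimal sketch, with hypothetical `static_field` and `dynamic_field` networks:

```python
import torch

def composite_fields(static_field, dynamic_field, points, t):
    """Blend a static field and a time-conditioned dynamic field.

    Each field returns per-point color (N, 3) and density (N, 1).
    """
    rgb_s, sigma_s = static_field(points)
    rgb_d, sigma_d = dynamic_field(points, t)
    sigma = sigma_s + sigma_d
    # Density-weighted color: samples dominated by the dynamic branch take
    # its color, and vice versa; the epsilon avoids 0/0 in empty space.
    rgb = (sigma_s * rgb_s + sigma_d * rgb_d) / (sigma + 1e-8)
    return rgb, sigma
```

Rendering the static branch alone then yields the recovered background, which is the decoupling behavior claimed above.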

Learning Dynamic View Synthesis With Few RGBD Cameras [article]

Shengze Wang, YoungJoong Kwon, Yuan Shen, Qian Zhang, Andrei State, Jia-Bin Huang, Henry Fuchs
2022 arXiv   pre-print
There have been significant advancements in dynamic novel view synthesis in recent years.  ...  We generate feature point clouds from RGBD frames and then render them into free-viewpoint videos via a neural renderer.  ...  Introduction: Dynamic novel view synthesis is the task of using a set of input video frames to synthesize videos of the dynamic scene from novel viewpoints.  ...
arXiv:2204.10477v2 fatcat:tfortvxrwrcthkcrnbxjdslf7y
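
The pipeline in the snippet starts by lifting each RGBD frame into a 3D point cloud with the camera intrinsics before any neural rendering happens. A sketch of that unprojection step (shapes and variable names are assumptions):

```python
import numpy as np

def rgbd_to_point_cloud(rgb, depth, K):
    """Unproject an RGBD frame into a colored point cloud.

    rgb:   (H, W, 3) colors
    depth: (H, W) metric depth per pixel
    K:     (3, 3) camera intrinsics
    Returns an (H*W, 6) array of xyz + rgb.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Back-project each pixel ray and scale it by the measured depth.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    xyz = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return np.concatenate([xyz, rgb.reshape(-1, 3)], axis=-1)
```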

Neural Radiance Flow for 4D View Synthesis and Video Processing [article]

Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, Jiajun Wu
2021 arXiv   pre-print
state-of-the-art methods for spatial-temporal view synthesis.  ...  We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images.  ...  We also show 4D view synthesis results in Figure 7 on monocular video datasets from [38, 80].  ...
arXiv:2012.09790v2 fatcat:ma2n65o4xze6lonewevz5e55pa

3D Moments from Near-Duplicate Photos [article]

Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless, Janne Kontkanen
2022 arXiv   pre-print
Our system produces photorealistic space-time videos with motion parallax and scene dynamics, while plausibly recovering regions occluded in the original views.  ...  As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D.  ...  Recently, several neural rendering approaches [15, [27] [28] [29] 49, 52] have shown promising results on space-time view synthesis from monocular dynamic videos.  ... 
arXiv:2205.06255v1 fatcat:ckszn3gvfreaxpstyb443taboi

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation [article]

Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt
2021 arXiv   pre-print
We demonstrate several applications of our approach, such as human reenactment and novel view synthesis from monocular video, where we show significant improvement over the state of the art both qualitatively  ...  In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of  ...  As shown in Figure 1, our approach can be utilized in various applications, such as human motion transfer, interactive reenactment and novel view synthesis from monocular video.  ...
arXiv:2001.04947v3 fatcat:ppii2ilexze7nkejshrohlky4u

Unsupervised learning of depth estimation, camera motion prediction and dynamic object localization from video

Delong Yang, Xunyu Zhong, Dongbing Gu, Xiafu Peng, Gongliu Yang, Chaosheng Zou
2020 International Journal of Advanced Robotic Systems  
Estimating scene depth, predicting camera motion and localizing dynamic objects from monocular videos are fundamental but challenging research topics in computer vision.  ...  The supervisory signals for the training stage come from various forms of image synthesis.  ...  Until now, the depth and pose CNNs take advantage of image synthesis to construct the supervisory signal for the static scene, while the flow CNN uses view synthesis as a supervisory signal for dynamic  ...
doi:10.1177/1729881420909653 fatcat:psx7vi472bew5nrlxkzy5zfygi
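
The "image synthesis as supervisory signal" referred to above is the standard photometric reprojection setup: predicted depth and relative pose map each target pixel into a source frame, the source image is sampled there, and the reconstruction error trains the networks. A condensed sketch of the reprojection step (bilinear sampling omitted; names are assumptions):

```python
import numpy as np

def reproject_pixels(depth_t, K, T_t_to_s):
    """Map target-frame pixels into a source frame via depth and pose.

    depth_t:  (H, W) predicted depth of the target frame
    K:        (3, 3) camera intrinsics
    T_t_to_s: (4, 4) predicted relative pose, target to source camera
    Returns (H, W, 2) source-frame pixel coordinates; sampling the source
    image at them synthesizes the target view for the photometric loss.
    """
    h, w = depth_t.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Lift pixels to 3D in the target camera, then move to the source camera.
    cam = np.linalg.inv(K) @ pix * depth_t.reshape(1, -1)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    proj = K @ (T_t_to_s @ cam_h)[:3]
    uv_s = (proj[:2] / np.clip(proj[2:3], 1e-6, None)).T.reshape(h, w, 2)
    return uv_s
```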

Space-time Neural Irradiance Fields for Free-Viewpoint Video [article]

Wenqi Xian, Jia-Bin Huang, Johannes Kopf, Changil Kim
2021 arXiv   pre-print
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video. Our learned representation enables free-viewpoint rendering of the input video.  ...  We address this ambiguity by constraining the time-varying geometry of our dynamic scene representation using the scene depth estimated from video depth estimation methods, aggregating contents from individual  ...  Unlike NeRF, which only models static scenes, our focus is on creating new views from arbitrary viewpoints and times for dynamic scenes. View synthesis for videos.  ...
arXiv:2011.12950v2 fatcat:rq65vtccazgmfmovp756vqa7du
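
The depth constraint mentioned in the snippet can be read as a per-ray loss pulling the field's expected termination depth toward the monocular depth estimate for that frame. A hedged sketch (variable names are assumptions, not the paper's notation):

```python
import torch

def depth_supervision_loss(weights, z_vals, depth_prior):
    """Penalize disagreement between rendered depth and a depth prior.

    weights:     (R, S) volume-rendering weights per ray sample
    z_vals:      (R, S) sample depths along each of R rays
    depth_prior: (R,) depth from a video depth-estimation method
    """
    # Expected ray-termination depth under the rendering weights.
    rendered_depth = (weights * z_vals).sum(dim=-1)
    return torch.abs(rendered_depth - depth_prior).mean()
```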

Novel View Synthesis of Dynamic Scenes With Globally Coherent Depths From a Monocular Camera

Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, Jan Kautz
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Figure 1: Dynamic Scene View Synthesis: (Left) A dynamic scene is captured from a monocular camera from the locations V_0 to V_k.  ...  (Middle) A novel view from an arbitrary location between V_0 and V_1 (denoted as an orange frame) is synthesized with the dynamic contents observed at the time t = k.  ...  In this paper, we focus on view synthesis of dynamic scenes observed from a moving monocular camera as shown in Figure 1.  ...
doi:10.1109/cvpr42600.2020.00538 dblp:conf/cvpr/YoonKGPK20 fatcat:no3b2gf4gjarvnisbejyw2pgwi

Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera [article]

Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, Jan Kautz
2020 arXiv   pre-print
A key challenge for novel view synthesis arises from dynamic scene reconstruction, where epipolar geometry does not apply to the local motion of dynamic contents.  ...  We evaluate our method of depth estimation and view synthesis on diverse real-world dynamic scenes and show outstanding performance over existing methods.  ...  In this paper, we focus on view synthesis of dynamic scenes observed from a moving monocular camera as shown in Figure 1.  ...
arXiv:2004.01294v1 fatcat:nl5ntfmmznbtfcbpip5o6k6hki

Animatable Neural Radiance Fields from Monocular RGB Videos [article]

Jianchuan Chen, Ying Zhang, Di Kang, Xuefei Zhe, Linchao Bao, Xu Jia, Huchuan Lu
2021 arXiv   pre-print
We present animatable neural radiance fields (animatable NeRF) for detailed human avatar creation from monocular videos.  ...  In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from novel views  ...  Conclusion: In this paper, we propose to learn an animatable neural radiance field from monocular videos, which allows us to produce visually realistic novel-view synthesis results, reconstruct 3D geometry  ...
arXiv:2106.13629v2 fatcat:tg3f5jyyhbbcjhpisay5pe6b5e

DiPE: Deeper into Photometric Errors for Unsupervised Learning of Depth and Ego-motion from Monocular Videos [article]

Hualie Jiang, Laiyan Ding, Zhenglong Sun, Rui Huang
2020 arXiv   pre-print
Unsupervised learning of depth and ego-motion from unlabelled monocular videos has recently drawn great attention, as it avoids the expensive ground truth required by supervised methods.  ...  It achieves this by using the photometric errors between the target view and the views synthesized from its adjacent source views as the loss.  ...  The first method to train on monocular videos, SfM-Learner [1], adopts an additional pose CNN to estimate the relative motion between sequential views, making view synthesis attainable.  ...
arXiv:2003.01360v3 fatcat:p5toyxdmhfbh3mnvy6tgsmryfi
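
Concretely, the loss the snippet describes compares the target frame against each source frame warped into the target view; a widely used variant keeps the per-pixel minimum over source views so that pixels occluded in one source are not penalized. A minimal sketch, assuming the warped images are already given:

```python
import numpy as np

def photometric_loss(target, warped_sources):
    """Per-pixel minimum photometric (L1) error over synthesized views.

    target:         (H, W, 3) target frame
    warped_sources: list of (H, W, 3) source frames warped into the
                    target view using predicted depth and ego-motion
    """
    errors = np.stack([np.abs(target - w).mean(axis=-1)
                       for w in warped_sources])        # (V, H, W)
    # A pixel occluded in one source view is usually visible in another,
    # so the minimum suppresses those spurious errors.
    return errors.min(axis=0).mean()
```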

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans [article]

Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
2021 arXiv   pre-print
We also demonstrate the capability of our approach to reconstruct a moving person from a monocular video on the People-Snapshot dataset.  ...  This paper addresses the challenge of novel view synthesis for a human performer from a very sparse set of camera views.  ...  synthesis on monocular videos.  ... 
arXiv:2012.15838v2 fatcat:cehzk5zuwvespkjwhbgp3tnolu
Showing results 1 — 15 out of 1,637 results