95,962 Hits in 2.9 sec

Generating Long Videos of Dynamic Scenes [article]

Tim Brooks, Janne Hellsten, Miika Aittala, Ting-Chun Wang, Timo Aila, Jaakko Lehtinen, Ming-Yu Liu, Alexei A. Efros, Tero Karras
2022 arXiv   pre-print
On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes.  ...  To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics.  ...  the StyleGAN-V baseline; Tero Kuosmanen for maintaining compute infrastructure; Elisa Wallace Eventing (https://www.youtube.com/c/WallaceEventing) and Brian Kennedy (https://www.youtube.com/c/bkxc) for videos  ... 
arXiv:2206.03429v2 fatcat:jsotshqt5zd6pm24ruzcfzvd64

CRAM: Compact representation of actions in movies

Mikel Rodriguez
2010 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition  
Our method automatically generates a compact video representation of a long sequence, which features only activities of interest while preserving the general dynamics of the original video.  ...  Dynamic regions within the flow field are identified within the phase spectrum volume of the flow field.  ...  the scene dynamics of the original video.  ... 
doi:10.1109/cvpr.2010.5540030 dblp:conf/cvpr/Rodriguez10 fatcat:ivynnjscsffbjd74orffjl23vy

High Dynamic Range Video through Fusion of Exposure-Controlled Frames

Seung-Jun Youm, Won-ho Cho, Ki-Sang Hong
2005 IAPR International Workshop on Machine Vision Applications  
In this paper, we present a method for generating high dynamic range (HDR) video through fusion of exposure-controlled frames in a stationary video camera system.  ...  Introduction: The real world scene contains a wide dynamic range of illuminance (radiance) values (up to 1:500,000).  ... 
dblp:conf/mva/YoumCH05 fatcat:43cwuckwzbddpbsjd43ctrzh7m

Exploiting Long-Term Dependencies for Generating Dynamic Scene Graphs [article]

Shengyu Feng, Subarna Tripathi, Hesham Mostafa, Marcel Nassar, Somdeb Majumdar
2021 arXiv   pre-print
We show that capturing long-term dependencies is the key to effective generation of dynamic scene graphs.  ...  Compared to the task of scene graph generation from images, dynamic scene graph generation is more challenging due to the temporal dynamics of the scene and the inherent temporal fluctuations of predictions  ...  However, the task of dynamic scene graph generation from video is relatively new, more challenging, and presents several unsolved problems.  ... 
arXiv:2112.09828v1 fatcat:mhyikupbf5gnfasfpfoosddrxm

Future Video Synthesis with Object Motion Prediction [article]

Yue Wu, Rongrong Gao, Jaesik Park, Qifeng Chen
2020 arXiv   pre-print
Instead of synthesizing images directly, our approach is designed to understand the complex scene dynamics by decoupling the background scene and moving objects.  ...  We present an approach to predict future video frames given a sequence of continuous video frames in the past.  ...  Future video generation techniques can also be used to synthesize a long video by repeatedly extending the future of the video.  ... 
arXiv:2004.00542v2 fatcat:6wpozllu3nhubnl3yubdllntlu

Future Video Synthesis With Object Motion Prediction

Yue Wu, Rongrong Gao, Jaesik Park, Qifeng Chen
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Instead of synthesizing images directly, our approach is designed to understand the complex scene dynamics by decoupling the background scene and moving objects.  ...  We present an approach to predict future video frames given a sequence of continuous video frames in the past.  ...  Future video generation techniques can also be used to synthesize a long video by repeatedly extending the future of the video.  ... 
doi:10.1109/cvpr42600.2020.00558 dblp:conf/cvpr/WuGPC20 fatcat:sct5kariqncqtd7fdki5bsvmnm

Modeling video viewing behaviors for viewer state estimation

Ryo Yonetani
2012 Proceedings of the 20th ACM international conference on Multimedia - MM '12  
This model realizes statistical learning of gaze information while considering dynamic characteristics of video scenes to achieve viewer-state estimation.  ...  Human gaze behaviors when watching videos reflect their cognitive states as well as characteristics of the video scenes being watched.  ...  This work is in part supported by Grant-in-Aid for Scientific Research under the contract of 24·5573.  ... 
doi:10.1145/2393347.2396500 dblp:conf/mm/Yonetani12 fatcat:zpvuu3xodnemxkingfz52unf6q

High dynamic range video

Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
2003 ACM SIGGRAPH 2003 Papers on - SIGGRAPH '03  
Figure 1: High dynamic range video of a driving scene. Top row: Input video with alternating short and long exposures. Bottom row: High dynamic range video (tonemapped).  ...  This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame.  ...  This enables us to generate HDR video sequences as well as HDR still images of moving scenes. The result of applying our approach to a driving video can be seen in Figure 1.  ... 
doi:10.1145/1201775.882270 fatcat:ydlrwprdsfduhhf75kyhhjwppq

High dynamic range video

Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
2003 ACM Transactions on Graphics  
Figure 1: High dynamic range video of a driving scene. Top row: Input video with alternating short and long exposures. Bottom row: High dynamic range video (tonemapped).  ...  This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame.  ...  This enables us to generate HDR video sequences as well as HDR still images of moving scenes. The result of applying our approach to a driving video can be seen in Figure 1.  ... 
doi:10.1145/882262.882270 fatcat:fetsppuqoneljm6klruyx3n3qy
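The two Kang et al. records above describe assembling HDR video from frames captured with alternating short and long exposures. As a generic point of reference only, and not the paper's motion-compensated pipeline, the sketch below shows a minimal exposure-fusion step: linearize each frame, weight pixels by their distance from the clipping points, and average the per-frame radiance estimates. The function name, the assumed display gamma, and the hat-shaped weighting are illustrative choices, not details taken from the paper.

```python
import numpy as np

def merge_exposures(frames, exposure_times, gamma=2.2):
    """Fuse differently exposed frames of (roughly) the same scene into a
    relative HDR radiance estimate by weighted averaging in linear space.

    frames:          list of float arrays in [0, 1], all the same shape
    exposure_times:  matching exposure times in seconds
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        linear = img.astype(np.float64) ** gamma   # undo assumed display gamma
        weight = 1.0 - 2.0 * np.abs(img - 0.5)     # trust mid-tones, downweight clipped pixels
        num += weight * (linear / t)               # per-frame radiance estimate
        den += weight
    return num / np.clip(den, 1e-6, None)          # per-pixel radiance, relative units

# Example (hypothetical inputs): fuse one short/long pair from an alternating-exposure video.
# hdr = merge_exposures([short_frame, long_frame], [1/500, 1/30])
```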

Turning an Urban Scene Video into a Cinemagraph

Hang Yan, Yebin Liu, Yasutaka Furukawa
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
The creation of a Cinemagraph usually requires a static camera in a carefully configured scene. The task becomes challenging for a regular video with a moving camera and objects.  ...  Lastly, the algorithm applies a sequence of video processing techniques to produce a Cinemagraph movie. We have tested the proposed approach on numerous challenging real scenes.  ...  We thank NVIDIA for a generous GPU donation.  ... 
doi:10.1109/cvpr.2017.177 dblp:conf/cvpr/YanLF17 fatcat:qqypvq2ztfdmvoh323lebuzkbu

Revisiting Hierarchical Approach for Persistent Long-Term Video Prediction [article]

Wonkwang Lee, Whie Jung, Han Zhang, Ting Chen, Jing Yu Koh, Thomas Huang, Hyungsuk Yoon, Honglak Lee, Seunghoon Hong
2021 arXiv   pre-print
We evaluate our method on three challenging datasets involving car driving and human dancing, and demonstrate that it can generate complicated scene structures and motions over a very long time horizon  ...  Learning to predict the long-term future of video frames is notoriously challenging due to inherent ambiguities in the distant future and dramatic amplifications of prediction error through time.  ...  For example, we can see diverse scene dynamics such as emergence of novel vehicles (t = 54 ∼ 144), as well as transition to the novel scene (t = 144 ∼ 180). Long-term prediction results.  ... 
arXiv:2104.06697v1 fatcat:thkaq2a53fhyzedof52lzw7xtm

Interactive viewpoint video textures

Philippe Levieux, James Tompkin, Jan Kautz
2012 Proceedings of the 9th European Conference on Visual Media Production - CVMP '12  
Figure 1: We enable the viewer to spatially explore a temporally coherent dynamic scene.  ...  We demonstrate our approach on a variety of scenes with stochastic or repetitive motions, and we analyze the limits of our approach and failure-case artifacts.  ...  We use the video texture from the closest capture location, say x_l, to predict the dynamics of the scene.  ... 
doi:10.1145/2414688.2414690 dblp:conf/cvmp/LevieuxTK12 fatcat:elqrqugxrrhv5npe2hxbnowbpu

Temporal Image Fusion [article]

Francisco J. Estrada
2014 arXiv   pre-print
In particular, temporal image fusion enables the rendering of long-exposure effects on full frame-rate video, as well as the generation of arbitrarily long exposures from a sequence of images of the same scene taken over time.  ...  TIF can render long-exposure photographic effects onto full frame-rate video, generate arbitrarily long exposures for photography, enhance or suppress dynamic content, and selectively blend image regions  ... 
arXiv:1403.0087v1 fatcat:v5eqw42kmrce3axgcadbmg4jfa
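The entry above describes rendering long-exposure effects on full frame-rate video by fusing frames over time. The sketch below is only a minimal illustration of that general idea, a running temporal average over a sliding window of frames; it does not reproduce the paper's actual TIF algorithm, and the function name and default window size are made up for the example.

```python
import numpy as np

def simulated_long_exposure(frames, window=30):
    """Approximate a long-exposure look by averaging each frame with the
    preceding frames in a sliding temporal window.

    frames: array of shape [T, H, W, C] with values in [0, 1]
    """
    frames = np.asarray(frames, dtype=np.float64)
    out = np.empty_like(frames)
    for t in range(frames.shape[0]):
        start = max(0, t - window + 1)
        out[t] = frames[start:t + 1].mean(axis=0)  # static content stays sharp, motion blurs out
    return out
```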

Generative Video Transformer: Can Objects be the Words? [article]

Yi-Fu Wu, Jaesik Yoon, Sungjin Ahn
2021 arXiv   pre-print
By factoring the video into objects, our fully unsupervised model is able to learn complex spatio-temporal dynamics of multiple interacting objects in a scene and generate future frames of the video.  ...  However, applying transformers to the video domain for tasks such as long-term video generation and scene understanding has remained elusive due to the high computational complexity and the lack of natural  ...  We argue that this is a reasonable choice for videos because in the physical world, the spatiotemporal dynamics of a scene is governed mostly by the causal interaction among the objects of the scene.  ... 
arXiv:2107.09240v1 fatcat:qgi22ecflfdpdjm326a722lepe

A Perceptually Correct 3D Model for Live 3D TV

Yuichi Ohta, Itaru Kitahara, Yoshinari Kameda, Hiroyuki Ishikawa, Takayoshi Koyama
2007 2007 3DTV Conference  
3D modeling of a scene is essential for the generation of 3D video from multiple images. In this paper, we present a perceptually correct 3D modeling scheme.  ...  The quality of 3D video generated using a perceptually correct 3D model could be better than that generated using a physically correct model, when the models are constructed from images.  ...  ACKNOWLEDGMENTS This research was supported in part by the Strategic Information and Communications R&D Promotion Programme (SCOPE) of the Ministry of Internal Affairs and Communications.  ... 
doi:10.1109/3dtv.2007.4379482 fatcat:mnrzspfchffd7ez3nfx6dogk4a
Showing results 1 — 15 out of 95,962 results