1,585 Hits in 5.6 sec

Video Face Editing Using Temporal-Spatial-Smooth Warping [article]

Xiaoyan Li, Dacheng Tao
2014 arXiv   pre-print
In this paper we propose a novel temporal-spatial-smooth warping (TSSW) algorithm to effectively exploit the temporal information in two consecutive frames, as well as the spatial smoothness within each frame.  ...  Simply applying image-based warping algorithms to video-based face editing produces temporal incoherence in the synthesized videos because it is impossible to consistently localize facial features in two consecutive frames.  ...  The input of TSSW is a video containing T frames of a human face.  ... 
arXiv:1408.2380v1 fatcat:yb5foxdosnfmzdo66ggwtxf4dm

Entertaining video warping

Yingzhen Yang, Yin Zhu, Chunxiao Liu, Chengfang Song, Qunsheng Peng
2009 2009 11th IEEE International Conference on Computer-Aided Design and Computer Graphics  
We employ AdaBoost to detect the sixteen human facial feature points and implement fast face warping frame by frame while maintaining both temporal and spatial continuity of the warped video.  ...  warping of a meaningful moving part in the video, such as a human face.  ...  In this paper, we present a real-time face warping technique for video sequences that preserves both temporal and spatial continuity, which cannot be achieved by existing video warping frameworks.  ... 
doi:10.1109/cadcg.2009.5246908 dblp:conf/cadgraphics/YangZLSP09 fatcat:vkfp2rchgrg7bmwtxmvd77dtpe

Occlusion-aware Video Temporal Consistency

Chun-Han Yao, Chia-Yang Chang, Shao-Yi Chien
2017 Proceedings of the 2017 ACM on Multimedia Conference - MM '17  
Image color editing techniques such as color transfer, HDR tone mapping, dehazing, and white balance have been widely used and investigated in recent decades.  ...  In addition, we propose a video quality metric to evaluate temporal coherence.  ...  Moreover, when optimizing the objective function, it often trades spatial sharpness for temporal smoothness. Bonneel et al.  ... 
doi:10.1145/3123266.3123363 dblp:conf/mm/YaoCC17 fatcat:pgugxfv7cfbffmvleyrzihrgyu

PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering [article]

Yurui Ren and Ge Li and Yuanqi Chen and Thomas H. Li and Shan Liu
2021 arXiv   pre-print
However, many existing techniques do not provide such fine-grained controls or use indirect editing methods, i.e., mimicking the motions of other individuals.  ...  For easy use and intuitive control, semantically meaningful and fully disentangled parameters should be used as modifications.  ...  Meanwhile, failing to model the temporal correlations of videos will cause incoherent videos.  ... 
arXiv:2109.08379v1 fatcat:jikuvuylanffnoyvy6c7y5x2iu

Task-agnostic Temporally Consistent Facial Video Editing [article]

Meng Cao, Haozhi Huang, Hao Wang, Xuan Wang, Li Shen, Sheng Wang, Linchao Bao, Zhifeng Li, Jiebo Luo
2020 arXiv   pre-print
Compared with the state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.  ...  In this paper, we propose a task-agnostic temporally consistent facial video editing framework.  ...  Despite tremendous advances in facial image editing, it is still challenging to perform video-level editing because of temporal diversity.  ... 
arXiv:2007.01466v1 fatcat:ya3ka7jmlnakrkbyrdbdupxgpi

Parametric Reshaping of Portraits in Videos [article]

Xiangjun Tang, Wenxin Sun, Yong-Liang Yang, Xiaogang Jin
2022 arXiv   pre-print
In addition, we use the 3D structure of the face to correct the dense mapping to achieve temporal consistency.  ...  However, applying portrait image editing directly on portrait video frames cannot generate smooth and stable video sequences.  ... 
arXiv:2205.02538v1 fatcat:atthua3fxfghvbkx6hu2d6mv2a

Selectively de-animating video

Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi
2012 ACM Transactions on Graphics  
We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner.  ...  We demonstrate the success of our technique with a number of motion visualizations, cinemagraphs and video editing examples created from a variety of short input videos, as well as visual and numerical  ...  Second, we composite video regions spatially and temporally using graph-cuts [Boykov et al. 2001; Kwatra et al. 2003 ].  ... 
doi:10.1145/2185520.2185562 fatcat:gxv5urpyxbemvkovltvkzil3ce

Selectively de-animating video

Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi
2012 ACM Transactions on Graphics  
We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner.  ...  We demonstrate the success of our technique with a number of motion visualizations, cinemagraphs and video editing examples created from a variety of short input videos, as well as visual and numerical  ...  Second, we composite video regions spatially and temporally using graph-cuts [Boykov et al. 2001; Kwatra et al. 2003 ].  ... 
doi:10.1145/2185520.2335417 fatcat:dbzvp5ym2zgxlpqwpvodnpx5ni

A system for retargeting of streaming video

Philipp Krähenbühl, Manuel Lang, Alexander Hornung, Markus Gross
2009 ACM Transactions on Graphics  
Automatic features comprise video saliency, edge preservation at the pixel resolution, and scene cut detection to enforce bilateral temporal coherence.  ...  This allows us to retarget annotated video streams at a high quality to arbitrary aspect ratios while retaining the intended cinematographic scene composition.  ...  Copyrights of the source videos belong to The Walt Disney Company, LiberoVision and Teleclub, the Blender Foundation, and Mammoth HD, Inc.  ... 
doi:10.1145/1618452.1618472 fatcat:otibxcl635hu3jz6qyoa2c46le

A system for retargeting of streaming video

Philipp Krähenbühl, Manuel Lang, Alexander Hornung, Markus Gross
2009 ACM SIGGRAPH Asia 2009 papers on - SIGGRAPH Asia '09  
Automatic features comprise video saliency, edge preservation at the pixel resolution, and scene cut detection to enforce bilateral temporal coherence.  ...  This allows us to retarget annotated video streams at a high quality to arbitrary aspect ratios while retaining the intended cinematographic scene composition.  ...  Copyrights of the source videos belong to The Walt Disney Company, LiberoVision and Teleclub, the Blender Foundation, and Mammoth HD, Inc.  ... 
doi:10.1145/1661412.1618472 fatcat:g3zh37ii55b63hcejvcvqfgcnq

User-Assisted Video Stabilization

Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi
2014 Computer graphics forum (Print)  
Our system introduces two new modes of interaction that allow the user to improve the unsatisfactory stabilized video. First, we cluster tracks and visualize them on the warped video.  ...  These user-provided deformations reduce undesirable distortions in the video. Our algorithm then computes a stabilized video using the user-selected tracks, while respecting the user-modified regions.  ...  The stabilized output video is obtained by smoothing the individual paths both temporally and spatially.  ... 
doi:10.1111/cgf.12413 fatcat:ccnbwgxpvvccxhojlgxss5jn7a

StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN [article]

Fei Yin and Yong Zhang and Xiaodong Cun and Mingdeng Cao and Yanbo Fan and Xuan Wang and Qingyan Bai and Baoyuan Wu and Jue Wang and Yujiu Yang
2022 arXiv   pre-print
e.g., high-resolution video generation, disentangled control by driving video or audio, and flexible face editing.  ...  One-shot talking face generation aims at synthesizing a high-quality talking face video from an arbitrary portrait image, driven by a video or an audio segment.  ...  The predicted flow field is used to spatially warp the latent feature map.  ... 
arXiv:2203.04036v2 fatcat:uyo7v5gvefgbnlxzxa5gp7bafy

Video-based characters

Feng Xu, Yebin Liu, Carsten Stoll, James Tompkin, Gaurav Bharaj, Qionghai Dai, Hans-Peter Seidel, Jan Kautz, Christian Theobalt
2011 ACM SIGGRAPH 2011 papers on - SIGGRAPH '11  
Figure 1 : An animation of an actor created with our method from a multi-view video database.  ...  In the composited scene of animation and background, the synthesized character and her spatio-temporal appearance look close to lifelike.  ...  Jain et al. [2010] edited body shape in video sequences.  ... 
doi:10.1145/1964921.1964927 fatcat:ghvueurvardsvpie7uqmphtaai

Video-based characters

Feng Xu, Yebin Liu, Carsten Stoll, James Tompkin, Gaurav Bharaj, Qionghai Dai, Hans-Peter Seidel, Jan Kautz, Christian Theobalt
2011 ACM Transactions on Graphics  
Figure 1 : An animation of an actor created with our method from a multi-view video database.  ...  In the composited scene of animation and background, the synthesized character and her spatio-temporal appearance look close to lifelike.  ...  Jain et al. [2010] edited body shape in video sequences.  ... 
doi:10.1145/2010324.1964927 fatcat:h6payn5ejfcsxgmm2hjkbfc6zi

Videoshop: A new framework for spatio-temporal video editing in gradient domain

Hongcheng Wang, Ning Xu, Ramesh Raskar, Narendra Ahuja
2007 Graphical Models  
A set of gradient operators is also provided to the user for editing purposes. We evaluate our algorithm using a variety of examples of image/video or video/video pairs.  ...  , which is very different from all current video editing software; secondly, we propose using a fast and accurate 3D discrete Poisson solver which uses diagonal multigrids to solve the 3D Poisson equation  ...  used to compare gradients from different channels, c, or dimensions, ijk (spatial dimension, temporal dimension, or both) when editing videos.  ... 
doi:10.1016/j.gmod.2006.06.002 fatcat:zpgdtt3hbba6tgkvrwi5c6ltvi