
Identifying individuals in video by combining 'generative' and discriminative head models

M. Everingham, A. Zisserman
2005 Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1  
The second is to combine discriminative and 'generative' approaches for detection and recognition.  ...  Subsequent verification of the identity is obtained using the head model in a 'generative' framework.  ...  This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. We also acknowledge the support of EC project CogViSys.  ... 
doi:10.1109/iccv.2005.116 dblp:conf/iccv/EveringhamZ05 fatcat:hiqwqymjyjddvpbs6zdpszmfwm

3D-Aware Video Generation [article]

Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, Radu Timofte
2022 arXiv   pre-print
By combining neural implicit representations with a time-aware discriminator, we develop a GAN framework that synthesizes 3D video supervised only with monocular videos.  ...  Generative models have emerged as an essential building block for many image synthesis and editing tasks.  ...  HAI, and a Samsung GRO.  ... 
arXiv:2206.14797v1 fatcat:66yji7u7gvbvfmnmnkb56g7fem
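
The entry above couples neural implicit representations with a time-aware discriminator. As a rough illustration only, the PyTorch sketch below (the interface, layer sizes, and names are assumptions, not taken from the paper) shows one way a discriminator can score a pair of frames together with the time gap between them, which is the general idea behind time-aware video discrimination.

```python
# Illustrative sketch (hypothetical, not the authors' model): a "time-aware"
# video discriminator that rates a pair of frames plus the time gap between
# them, encouraging temporally consistent generation.
import torch
import torch.nn as nn

class TimeAwareDiscriminator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(128 + 1, 1)  # +1 slot for the frame-time difference

    def forward(self, frame_a, frame_b, dt):
        # frame_a, frame_b: (batch, channels, H, W); dt: (batch, 1) time gap.
        feats = self.encoder(torch.cat([frame_a, frame_b], dim=1))
        return self.head(torch.cat([feats, dt], dim=1))

d = TimeAwareDiscriminator()
logit = d(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64), torch.rand(2, 1))
```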

CONFIG: Controllable Neural Face Image Generation [article]

Marek Kowalski, Stephan J. Garbin, Virginia Estellers, Tadas Baltrušaitis, Matthew Johnson, Jamie Shotton
2020 arXiv   pre-print
Our ability to sample realistic natural images, particularly faces, has advanced by leaps and bounds in recent years, yet our ability to exert fine-tuned control over the generative process has lagged  ...  To this end we propose ConfigNet, a neural face model that allows for controlling individual aspects of output images in semantically meaningful ways and that is a significant step on the path towards  ...  Acknowledgments The authors would like to thank Nate Kushman for helpful discussions and suggestions.  ... 
arXiv:2005.02671v3 fatcat:uo6b36ktcrdv7ppwkwvm4hsgxu
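
The ConfigNet entry above describes controlling individual aspects of generated face images in semantically meaningful ways. The minimal PyTorch sketch below (factor names, dimensions, and helpers are hypothetical, not from the paper) illustrates the general recipe of a factorized latent code in which one factor is edited while the others stay fixed.

```python
# Illustrative sketch (not the authors' code): editing one semantic factor of
# a factorized latent code while holding the remaining factors fixed.
import torch

# Hypothetical per-attribute latent dimensions, e.g. expression, hair, pose.
FACTOR_DIMS = {"expression": 16, "hair_style": 16, "head_pose": 8}

def sample_latent(batch_size: int) -> dict:
    """Sample an independent Gaussian code for each semantic factor."""
    return {name: torch.randn(batch_size, dim) for name, dim in FACTOR_DIMS.items()}

def edit_factor(z: dict, factor: str, new_value: torch.Tensor) -> dict:
    """Replace a single factor (e.g. 'head_pose') and keep the rest unchanged."""
    edited = dict(z)
    edited[factor] = new_value
    return edited

def to_generator_input(z: dict) -> torch.Tensor:
    """Concatenate the per-factor codes into the vector fed to a generator."""
    return torch.cat([z[name] for name in FACTOR_DIMS], dim=1)

z = sample_latent(batch_size=4)
z_pose_edit = edit_factor(z, "head_pose", torch.randn(4, FACTOR_DIMS["head_pose"]))
# to_generator_input(z) and to_generator_input(z_pose_edit) differ only in the
# head-pose slice, so a generator trained on such a factorized code would keep
# identity and expression while changing pose.
```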

Video Generative Adversarial Networks: A Review

Nuha Aldausari, Arcot Sowmya, Nadine Marcus, Gelareh Mohammadi
2023 ACM Computing Surveys  
The paper concludes with the main challenges and limitations of the current video GAN models.  ...  While the variations of GANs models in general have been covered to some extent in several survey papers, to the best of our knowledge, this is the first paper that reviews the state-of-the-art video GANs  ...  While some studies [64, 65, 68, 95] retarget the motion of an individual body to another individual, others focus only on the head [80, 84].  ... 
doi:10.1145/3487891 fatcat:fssfwvlfsje4ddk5pk5cahdwuu

Facial Keypoint Sequence Generation from Audio [article]

Prateek Manocha, Prithwijit Guha
2020 arXiv   pre-print
Whenever we speak, our voice is accompanied by facial movements and expressions.  ...  Several recent works have shown the synthesis of highly photo-realistic videos of talking faces, but they either require a source video to drive the target face or only generate videos with a fixed head  ...  However, in talking face videos generated by these 2D methods [5, 27], the head pose remains almost fixed during talking.  ... 
arXiv:2011.01114v1 fatcat:ru3q4xmqgrgoph3tcqlgah26bq
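
The entry above concerns generating facial keypoint sequences from audio. The sketch below is only a schematic illustration of that kind of mapping (the landmark count, audio feature size, and model layout are assumptions, not the paper's method): a recurrent model turns per-frame audio features into 2D keypoint coordinates.

```python
# Illustrative sketch (hypothetical, not the paper's method): a sequence model
# mapping per-frame audio features to 2D facial keypoint coordinates.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 68  # a common facial-landmark count; an assumption here

class AudioToKeypoints(nn.Module):
    def __init__(self, audio_dim: int = 80, hidden: int = 256):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, NUM_KEYPOINTS * 2)  # (x, y) per keypoint

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, time, audio_dim), e.g. mel-spectrogram frames.
        h, _ = self.rnn(audio_feats)
        return self.proj(h).view(h.size(0), h.size(1), NUM_KEYPOINTS, 2)

model = AudioToKeypoints()
keypoints = model(torch.randn(1, 100, 80))  # -> (1, 100, 68, 2)
```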

Unsupervised Video Summarization with Attentive Conditional Generative Adversarial Networks

Xufeng He, Yang Hua, Tao Song, Zongpu Zhang, Zhengui Xue, Ruhui Ma, Neil Robertson, Haibing Guan
2019 Proceedings of the 27th ACM International Conference on Multimedia - MM '19  
With the rapid growth of video data, video summarization techniques play a key role in reducing people's efforts to explore the content of videos by generating concise but informative summaries.  ...  Specifically, the generator produces high-level weighted frame features and predicts frame-level importance scores, while the discriminator tries to distinguish between weighted frame features and raw  ...  ACKNOWLEDGMENT This work was supported in part by National NSF of China (NO. 61525204, 61732010, 61872234) and Shanghai Key Laboratory of Scalable Computing and Systems.  ... 
doi:10.1145/3343031.3351056 dblp:conf/mm/HeHSZXMRG19 fatcat:3nfjkhjpxrbjvfq4brkesxyyvi
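
The snippet above describes a generator that predicts frame-level importance scores and re-weights frame features, opposed by a discriminator that separates weighted from raw features. The PyTorch sketch below is a minimal, assumed rendering of that setup (feature sizes and module names are hypothetical, not taken from the paper).

```python
# Illustrative sketch (assumptions, not the paper's architecture): a generator
# that predicts per-frame importance scores and re-weights frame features, and
# a discriminator that separates weighted features from raw ones.
import torch
import torch.nn as nn

FEAT_DIM = 1024  # hypothetical CNN frame-feature size

class ScoreGenerator(nn.Module):
    def __init__(self, feat_dim: int = FEAT_DIM):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                    nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, frames: torch.Tensor):
        # frames: (batch, num_frames, feat_dim)
        scores = self.scorer(frames)           # (batch, num_frames, 1)
        weighted = scores * frames             # importance-weighted features
        return weighted, scores.squeeze(-1)

class FeatureDiscriminator(nn.Module):
    def __init__(self, feat_dim: int = FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, feats: torch.Tensor):
        # Score each frame feature as raw (real) vs weighted (generated),
        # then average over frames to rate the whole sequence.
        return self.net(feats).mean(dim=1)

frames = torch.randn(2, 120, FEAT_DIM)         # two videos of 120 frames each
weighted, importance = ScoreGenerator()(frames)
real_logit = FeatureDiscriminator()(frames)    # raw features
fake_logit = FeatureDiscriminator()(weighted)  # generator output
```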

Generating Long Videos of Dynamic Scenes [article]

Tim Brooks, Janne Hellsten, Miika Aittala, Ting-Chun Wang, Timo Aila, Jaakko Lehtinen, Ming-Yu Liu, Alexei A. Efros, Tero Karras
2022 arXiv   pre-print
We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time.  ...  Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence  ...  baseline; Tero Kuosmanen for maintaining compute infrastructure; Elisa Wallace Eventing (https://www.youtube.com/c/WallaceEventing) and Brian Kennedy (https://www.youtube.com/c/bkxc) for videos used to  ... 
arXiv:2206.03429v2 fatcat:jsotshqt5zd6pm24ruzcfzvd64

Generative Models for Pose Transfer [article]

Patrick Chao, Alexander Li, Gokul Swamy
2018 arXiv   pre-print
We take in a video of one person performing a sequence of actions and attempt to generate a video of another person performing the same actions.  ...  We investigate nearest neighbor and generative models for transferring pose between persons.  ...  For our experiments, we wanted to take a given video of individual B and an input video of individual A, and generate an output video of B performing the same poses as A.  ... 
arXiv:1806.09070v1 fatcat:poy3lvnk3nejvpcr4535wjikuq

V3GAN: Decomposing Background, Foreground and Motion for Video Generation [article]

Arti Keshari, Sonam Gupta, Sukhendu Das
2022 arXiv   pre-print
Video generation is a challenging task that requires modeling plausible spatial and temporal dynamics in a video.  ...  Inspired by how humans perceive a video by grouping a scene into moving and stationary components, we propose a method that decomposes the task of video generation into the synthesis of foreground, background  ...  Qualitative Evaluation: In Figure 3, we show the background, foreground, and mask generated by our model along with the generated video for the three datasets.  ... 
arXiv:2203.14074v1 fatcat:yuqsirxnffc43kxu76wkguwajy
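
The entry above decomposes video generation into foreground, background, and a mask. The sketch below shows only the generic compositing step such decompositions rely on (an assumption about the blending, not code from the paper): a soft mask blends the generated foreground over the background for each frame.

```python
# Illustrative sketch (an assumed compositing step, not code from the paper):
# combining generated foreground, background, and a soft mask into video frames.
import torch

def composite_video(foreground: torch.Tensor,
                    background: torch.Tensor,
                    mask: torch.Tensor) -> torch.Tensor:
    """Blend per-frame foreground and background with a soft mask in [0, 1].

    Hypothetical shapes: (batch, time, channels, height, width) for foreground
    and background, and (batch, time, 1, height, width) for the mask.
    """
    return mask * foreground + (1.0 - mask) * background

fg = torch.rand(1, 16, 3, 64, 64)   # moving foreground frames
bg = torch.rand(1, 1, 3, 64, 64)    # static background, broadcast over time
m = torch.rand(1, 16, 1, 64, 64)    # soft foreground mask per frame
video = composite_video(fg, bg.expand(-1, 16, -1, -1, -1), m)
```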

CortexNet: a Generic Network Family for Robust Visual Temporal Representations [article]

Alfredo Canziani, Eugenio Culurciello
2017 arXiv   pre-print
However, there is a need to identify the best strategy to employ these networks with temporal visual inputs and obtain a robust and stable representation of video data.  ...  These models have achieved super-human performance on object recognition, localisation, and detection in still images.  ...  It also explored and visualised data through the matplotlib library combined with the Jupyter Notebook interactive computational environment.  ... 
arXiv:1706.02735v2 fatcat:z6mbci4of5bepjg7tnl4bzmlbe

Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent [chapter]

Jean-Claude Martin, Laurence Devillers, Amaryllis Raouzaiou, George Caridakis, Zsófia Ruttkay, Catherine Pelachaud, Maurizio Mancini, Radek Niewiadomski, Hannes Pirker, Brigitte Krenn, Isabella Poggi, Emanuela Magno Caldognetto (+7 others)
2010 Cognitive Technologies  
This chapter is concerned with coordinating the generation of signs in multiple modalities in such an affective agent.  ...  In order to be believable, embodied conversational agents (ECAs) must show expression of emotions in a consistent and natural-looking way across modalities.  ...  A parametric model for iconic gesture generation is defined by Tepper et al. (2004) following the analysis of a video corpus.  ... 
doi:10.1007/978-3-642-15184-2_18 fatcat:zjiejgijsjd3rhuxrvknkverqu

Robust Deepfake On Unrestricted Media: Generation And Detection [article]

Trung-Nghia Le and Huy H Nguyen and Junichi Yamagishi and Isao Echizen
2022 arXiv   pre-print
., in-the-wild images and videos). Finally, it suggests a focus for future fake media research.  ...  This chapter explores the evolution of and challenges in deepfake generation and detection.  ...  Acknowledgments This research was partly supported by JSPS KAKENHI Grants (JP16H06302, JP18H04120, JP21H04907, JP20K23355, JP21K18023) and JST CREST Grants (JPMJCR20D3, JP-MJCR18A6), Japan.  ... 
arXiv:2202.06228v1 fatcat:a37q2lf7w5bcbekk5esmbx2goe

The Hippocampus Generalizes across Memories that Share Item and Context Information

Laura A Libby, Zachariah M Reagh, Nichole R Bouffard, J Daniel Ragland, Charan Ranganath
2018 Journal of Cognitive Neuroscience  
Hippocampal activity patterns discriminated between events that shared either item or context information but generalized across events that shared similar item-context associations.  ...  The current findings provide evidence that, whereas the hippocampus can reduce mnemonic interference by separating events that generalize along a single attribute dimension, overlapping hippocampal codes  ...  and a combined model containing both additive and conjunctive terms.  ... 
doi:10.1162/jocn_a_01345 pmid:30240315 pmcid:PMC7217712 fatcat:p7gztaeepjeirdd2a34cqketne

Model generation for robust object tracking based on temporally stable regions

Prithviraj Banerjee, Axel Pinz, Somnath Sengupta
2008 2008 IEEE Workshop on Motion and video Computing  
Tracking and recognition of objects in video sequences suffer from difficulties in learning appropriate object models.  ...  Our experiments demonstrate the capabilities of this novel method to build object models for people and to robustly track them, but the method is generally applicable to learning object models for any object  ...  (b), (c), (d), and (e) are the individual models detected for each frame, which combine to give the net model in (a). Figure 4: Examples of models generated over a larger period of time.  ... 
doi:10.1109/wmvc.2008.4544045 fatcat:mwhbj34osjeonfeuoqcnqtrenq

Establishing a Generalized Repertoire of Helping Behavior in Children with Autism

Sharon A Reeve, Kenneth F Reeve, Dawn Buffington Townsend, Claire L Poulson, Henry Roane
2007 Journal of Applied Behavior Analysis  
During the training condition, video models, prompting, and reinforcement were used.  ...  Additional pre- and post-intervention generalization trials showed that the frequency of helping responses also increased in the presence of novel stimuli, in a novel setting, and with a novel instructor  ...  Interobserver Agreement and Procedural Integrity All sessions were videotaped and independently scored by two individuals not involved in the present study.  ... 
doi:10.1901/jaba.2007.11-05 pmid:17471797 pmcid:PMC1868822 fatcat:fjfbuaa6jbc3tmbu3koegnqyra