6,790 Hits in 5.2 sec

TwoStreamVAN: Improving Motion Modeling in Video Generation [article]

Ximeng Sun, Huijuan Xu, Kate Saenko
2020 arXiv   pre-print
To improve motion modeling in video generation tasks, we propose a two-stream model that disentangles motion generation from content generation, called a Two-Stream Variational Adversarial Network (TwoStreamVAN)  ...  Existing methods entangle the two intrinsically different tasks of motion and content creation in a single generator network, but this approach struggles to simultaneously generate plausible motion and  ...  Figure 1: We propose a Two-Stream Variational Adversarial Network; Figures 2-3: Our Two-Stream Variational Adversarial Network learns to generate the  ... 
arXiv:1812.01037v2 fatcat:7sawoqf7ebdihnjjonwd4wh3ie
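
The snippet describes a generator split into separate content and motion streams. A minimal PyTorch sketch of that idea, assuming a static content code broadcast over time and a recurrent motion stream; all names and sizes are illustrative, not the authors' implementation:

```python
# Hedged sketch: a two-stream generator separating content from motion.
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    def __init__(self, z_dim=64, hid=128, frame_px=64 * 64):
        super().__init__()
        # Content stream: one latent -> a static scene code.
        self.content = nn.Sequential(nn.Linear(z_dim, hid), nn.ReLU())
        # Motion stream: a recurrent net unrolls a motion code over time.
        self.motion = nn.GRU(z_dim, hid, batch_first=True)
        self.decode = nn.Linear(2 * hid, frame_px)  # fuse both streams per frame

    def forward(self, z_content, z_motion):
        # z_content: (B, z_dim); z_motion: (B, T, z_dim)
        c = self.content(z_content)                      # (B, hid)
        m, _ = self.motion(z_motion)                     # (B, T, hid)
        c = c.unsqueeze(1).expand(-1, m.size(1), -1)     # repeat content per frame
        video = self.decode(torch.cat([c, m], dim=-1))   # (B, T, frame_px)
        return torch.tanh(video)

gen = TwoStreamGenerator()
vid = gen(torch.randn(2, 64), torch.randn(2, 16, 64))   # (2, 16, 4096)
```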

Adversarial Memory Networks for Action Prediction [article]

Zhiqiang Tao, Yue Bai, Handong Zhao, Sheng Li, Yu Kong, Yun Fu
2021 arXiv   pre-print
In this study, we propose adversarial memory networks (AMemNet) to generate the "full video" feature conditioned on a partial video query from two new aspects.  ...  Secondly, we develop a class-aware discriminator to guide the memory generator to deliver not only realistic but also discriminative full video features upon adversarial training.  ...  Conclusion In this paper, we presented a novel two-stream adversarial memory network (AMemNet) model for the action prediction task.  ... 
arXiv:2112.09875v1 fatcat:jhksvwsnxzfhvcrxlzzyroiyki
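
A hedged sketch of the memory-generator idea in this snippet: a partial-video feature queries a learnable memory bank via attention to produce a "full video" feature. Slot counts, dimensions, and the residual read are assumptions for illustration only:

```python
# Illustrative memory-augmented generator, not the authors' AMemNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryGenerator(nn.Module):
    def __init__(self, feat_dim=512, slots=256):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(slots, feat_dim))
        self.values = nn.Parameter(torch.randn(slots, feat_dim))

    def forward(self, partial_feat):                            # (B, feat_dim)
        attn = F.softmax(partial_feat @ self.keys.t(), dim=-1)  # (B, slots)
        read = attn @ self.values                               # (B, feat_dim)
        # Residual read: complete the partial feature with memory content.
        return partial_feat + read
```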

TwoStreamVAN: Improving Motion Modeling in Video Generation

Ximeng Sun, Huijuan Xu, Kate Saenko
2020 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)  
To improve motion modeling in video generation tasks, we propose a two-stream model that disentangles motion generation from content generation, called a Two-Stream Variational Adversarial Network (TwoStreamVAN)  ...  Existing methods entangle the two intrinsically different tasks of motion and content creation in a single generator network, but this approach struggles to simultaneously generate plausible motion and  ...  Conclusion In this paper, we propose a novel Two-Stream Variational Adversarial Network to improve motion modeling in video generation.  ... 
doi:10.1109/wacv45572.2020.9093557 dblp:conf/wacv/SunXS20 fatcat:rjwvm6iqyfh3fhgp3bpv7hxgta

Face-Focused Cross-Stream Network for Deception Detection in Videos [article]

Mingyu Ding, An Zhao, Zhiwu Lu, Tao Xiang, Ji-Rong Wen
2018 arXiv   pre-print
Specifically, for face-body multimodal learning, a novel face-focused cross-stream network (FFCSN) is proposed.  ...  It differs significantly from the popular two-stream networks in that: (a) face detection is added into the spatial stream to capture the facial expressions explicitly, and (b) correlation learning is  ...  Adversarial training involves a discriminator and a generator. In our case, the discriminator network aims to classify the inputs into two classes: real or fake.  ... 
arXiv:1812.04429v1 fatcat:d576p7wirzgkhcihsrympfjocq
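
The last fragment states the standard adversarial-training setup: a discriminator classifying real vs. fake against a generator. A minimal non-saturating GAN loss pair in PyTorch (illustrative, not this paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def d_loss(d_real_logits, d_fake_logits):
    # Discriminator: call real samples "real" (1) and fakes "fake" (0).
    real = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

def g_loss(d_fake_logits):
    # Generator: make the discriminator call fakes "real".
    return F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
```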

Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers [article]

Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li
2018 arXiv   pre-print
In this work, we develop a powerful untargeted adversarial attack for action recognition systems in both white-box and black-box settings.  ...  Drawing inspiration from image classifier attacks, we create new attacks which achieve state-of-the-art success rates on a two-stream classifier trained on the UCF-101 dataset.  ...  We create a white-box, untargeted attack for a two-stream action recognition classifier  ... 
arXiv:1811.11875v1 fatcat:a3nmutqhmrewjcnysdpbcyztpa
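
For reference, a minimal untargeted FGSM-style attack, in the spirit of the image-classifier attacks the snippet cites; this is a generic sketch, not the authors' optical-flow-space attack:

```python
import torch
import torch.nn.functional as F

def untargeted_fgsm(model, x, y, eps=8 / 255):
    # Perturb input x to *increase* the loss on the true label y.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()     # one signed-gradient step
    return x_adv.clamp(0, 1).detach()   # stay in valid pixel range
```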

Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior [article]

Hu Zhang, Linchao Zhu, Yi Zhu, Yi Yang
2020 arXiv   pre-print
Deep neural networks are known to be susceptible to adversarial noise: tiny, imperceptible perturbations.  ...  By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.  ...  Two-stream methods [26] train two separate networks: a spatial stream that takes RGB images as input and a temporal stream that takes stacked optical flow images.  ... 
arXiv:2003.07637v2 fatcat:5jncbiizara25a5yexrm4ia4h4
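
The last fragment summarizes the classic two-stream recipe: one network scores RGB frames, another scores stacked optical flow, and the class scores are fused. A minimal late-fusion sketch, with the two backbones left as placeholders:

```python
import torch
import torch.nn as nn

class TwoStreamClassifier(nn.Module):
    def __init__(self, spatial: nn.Module, temporal: nn.Module):
        super().__init__()
        self.spatial = spatial      # trained on RGB frames
        self.temporal = temporal    # trained on stacked optical flow

    def forward(self, rgb, flow):
        # Late fusion: average the two streams' class logits.
        return 0.5 * (self.spatial(rgb) + self.temporal(flow))
```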

Poking a Hole in the Wall: Efficient Censorship-Resistant Internet Communications by Parasitizing on WebRTC

Diogo Barradas, Nuno Santos, Luís E. T. Rodrigues, Vítor Nunes
2020 Conference on Computer and Communications Security  
...(e.g., Firefox) through the WebRTC video stream.  ...  To create a covert channel, a user only needs to make a video call with a trusted party located outside the censored region using a popular WebRTC streaming service, e.g., Whereby.  ...  Acknowledgments: We thank our shepherd, Nick Feamster, and the anonymous reviewers for their comments.  ... 
doi:10.1145/3372297.3417874 dblp:conf/ccs/Barradas0RN20 fatcat:ewsegootsng6jezfuwhozrguva

Adversarial Framework for Unsupervised Learning of Motion Dynamics in Videos [article]

C. Spampinato, S. Palazzo, P. D'Oro, D. Giordano, M. Shah
2019 arXiv   pre-print
Self-supervision is enforced by using motion masks produced by the generator, as a co-product of its generation process, to supervise the discriminator network in performing dense prediction.  ...  Unsupervised learning can instead leverage the vast amount of videos available on the web and is a promising solution for overcoming the existing limitations.  ...  Adversarial Framework for Video Generation and Unsupervised Motion Learning Our adversarial framework for video generation and dense prediction, VOS-GAN, is based on a GAN framework and consists of the  ... 
arXiv:1803.09092v2 fatcat:tconl7knq5af3nqlbxthvx7br4
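
A hedged sketch of the self-supervision signal described in the first fragment: the discriminator makes a per-pixel (dense) prediction that is trained against the motion mask the generator emits as a by-product. The loss form is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

def dense_selfsup_loss(disc_pixel_logits, gen_motion_mask):
    # disc_pixel_logits: (B, 1, H, W) dense prediction from the discriminator.
    # gen_motion_mask:   (B, 1, H, W) mask in [0, 1] produced by the generator.
    # The mask is detached: it supervises the discriminator, not itself.
    return F.binary_cross_entropy_with_logits(
        disc_pixel_logits, gen_motion_mask.detach())
```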

Generating Synthetic Video Sequences by Explicitly Modeling Object Motion [chapter]

S. Palazzo, C. Spampinato, P. D'Oro, D. Giordano, M. Shah
2019 Computer Vision – ECCV 2018 Workshops, Lecture Notes in Computer Science  
In this paper we propose a GAN framework for video generation that, instead, employs two latent spaces in order to structure the generative process in a more natural way: 1) a latent space to generate  ...  Recent GAN-based video generation approaches model videos as the combination of a time-independent scene component and a time-varying motion component, thus factorizing the generation problem into generating  ...  Thus, z_C can be seen as a condition for the foreground stream, similar to how conditional generative adversarial networks restrict the generation process to a specific class.  ... 
doi:10.1007/978-3-030-11012-3_37 fatcat:gkycjz3nyrem3bqn73l4o4hvra

Learning with privileged information via adversarial discriminative modality distillation [article]

Nuno C. Garcia, Pietro Morerio, Vittorio Murino
2018 arXiv   pre-print
We propose a new approach to train a hallucination network that learns to distill depth information via adversarial learning, resulting in a clean approach without several losses to balance or hyperparameters  ...  This paper presents a new approach in this direction for RGB-D vision tasks, developed within the adversarial learning and privileged information frameworks.  ...  ACKNOWLEDGMENTS The authors would like to thank Riccardo Volpi for useful discussion on adversarial training and GANs.  ... 
arXiv:1810.08437v1 fatcat:emrj23ga3ngprlp2zmtxvms3qy
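
A hedged sketch of adversarial modality distillation as the snippet describes it: a hallucination network maps RGB features toward depth features, trained so a discriminator cannot tell hallucinated from real depth features. All modules and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hallucinate = nn.Linear(512, 512)   # RGB feature -> pseudo-depth feature
discriminate = nn.Linear(512, 1)    # real-depth vs. hallucinated

def distill_step(rgb_feat, depth_feat):
    fake = hallucinate(rgb_feat)
    # Discriminator: separate real depth features from hallucinated ones.
    d_loss = (
        F.binary_cross_entropy_with_logits(
            discriminate(depth_feat), torch.ones(depth_feat.size(0), 1))
        + F.binary_cross_entropy_with_logits(
            discriminate(fake.detach()), torch.zeros(fake.size(0), 1)))
    # Hallucinator: fool the discriminator (a single adversarial objective,
    # in place of several losses to balance).
    g_loss = F.binary_cross_entropy_with_logits(
        discriminate(fake), torch.ones(fake.size(0), 1))
    return d_loss, g_loss
```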

Face Translation between Images and Videos using Identity-aware CycleGAN [article]

Zhiwu Huang, Bernhard Kratzwald, Danda Pani Paudel, Jiqing Wu, Luc Van Gool
2017 arXiv   pre-print
To address these two problems, we generalize the state-of-the-art image-to-image translation network (Cycle-Consistent Adversarial Networks) to the image-to-video/video-to-image translation context by exploiting  ...  In this problem there exist two major technical challenges: 1) designing a robust translation model between static images and dynamic videos, and 2) preserving facial identity during image-video translation  ...  Acknowledgements We would like to thank NVidia for donating the GPUs used in this work.  ... 
arXiv:1712.00971v1 fatcat:4ptvcrdbbjbgrcqt2qlopizghq
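
A minimal sketch of the two losses the title implies: CycleGAN's cycle-consistency term plus an identity-preservation term from a frozen face-embedding network. The translators, embedder, and weighting are placeholders, not the paper's networks:

```python
import torch
import torch.nn.functional as F

def cycle_identity_loss(G_ab, G_ba, embed_id, a, lam_id=1.0):
    b_hat = G_ab(a)                        # translate image -> video-frame domain
    a_rec = G_ba(b_hat)                    # translate back
    cyc = F.l1_loss(a_rec, a)              # cycle-consistency
    # Identity term: the translated face should keep the same embedding.
    idn = F.l1_loss(embed_id(b_hat), embed_id(a))
    return cyc + lam_id * idn
```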

Adversarial Semi-Supervised Multi-Domain Tracking [article]

Kourosh Meshgi, Maryam Sadat Mirzaei
2020 arXiv   pre-print
In visual tracking, the emerging features in shared layers of a multi-domain tracker, trained on various sequences, are crucial for tracking in unseen videos.  ...  By employing these features and training dedicated layers for each sequence, we build a tracker that performs exceptionally well on different types of videos.  ...  This network uses a two-stream architecture [31], with each stream having a ResNet-18 architecture; the final fully connected (FC) layer is dropped.  ... 
arXiv:2009.14635v1 fatcat:mgcali7nz5eh3ip5psdyau6rb4
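
The snippet mentions ResNet-18 streams with the final FC layer dropped; a common way to do that with torchvision (shown here as a generic recipe, not the authors' exact setup):

```python
import torch.nn as nn
from torchvision.models import resnet18

stream_a = resnet18()
stream_b = resnet18()
for net in (stream_a, stream_b):
    net.fc = nn.Identity()  # keep the 512-d features, drop the classifier head
```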

Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization [article]

Kyle Min, Jason J. Corso
2020 arXiv   pre-print
To further improve the performance, we build our network using two parallel branches which operate in an adversarial way: the first branch localizes the most salient activities of a video and the second  ...  no activity occurs (i.e. background features) from activity-related features for each video.  ...  Acknowledgement We thank Stephan Lemmer, Victoria Florence, Nathan Louis, and Christina Jung for their valuable feedback and comments. This research was, in part, supported by NIST grant 60NANB17D191.  ... 
arXiv:2007.06643v1 fatcat:lzdfu3i4kreghoa2av4x2afbui

Early Action Prediction with Generative Adversarial Networks

Dong Wang, Yuan Yuan, Qi Wang
2019 IEEE Access  
Specifically, its generator comprises two networks: a CNN for feature extraction and an LSTM for estimating the residual error between features of the partially observed videos and complete ones, and then  ...  For this purpose, the generative adversarial network is introduced for tackling the action prediction problem, which improves the recognition accuracy on partially observed videos through narrowing the feature  ...  For exploiting motion information of video sequences, Simonyan and Zisserman [11] propose a novel two-stream CNN architecture that processes motion information with a separate CNN that is fed with optical  ... 
doi:10.1109/access.2019.2904857 fatcat:oewrqk7qejbhvjgjyabmlbewje
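
A hedged sketch of the generator described in the first fragment: CNN frame features feed an LSTM that predicts the residual between the partially observed video's feature and the complete one. Shapes and names are assumptions:

```python
import torch
import torch.nn as nn

class ResidualPredictor(nn.Module):
    def __init__(self, feat_dim=512, hid=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hid, batch_first=True)
        self.head = nn.Linear(hid, feat_dim)

    def forward(self, partial_feats):
        # partial_feats: (B, T_observed, feat_dim) from a CNN backbone.
        out, _ = self.lstm(partial_feats)
        residual = self.head(out[:, -1])        # estimate the missing information
        return partial_feats[:, -1] + residual  # approximate full-video feature
```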

SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation

Xiao Wang, Chenglong Li, Bin Luo, Jin Tang
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
In this paper, we propose to generate hard positive samples via adversarial learning for visual tracking.  ...  Based on the generated hard positive samples, we train a Siamese network for visual tracking and our experiments validate the effectiveness of the introduced algorithm.  ...  positive sample generation network (PSGN), hard positive transformation network (HPTN) and a two-stream Siamese instance search network, as shown in Figure 2.  ... 
doi:10.1109/cvpr.2018.00511 dblp:conf/cvpr/WangL0T18 fatcat:ovjvqxtcozawpmwxtikgbblcy4
Showing results 1 — 15 out of 6,790 results