
Combining sampling and autoregression for motion synthesis

D. Oziem, N. Campbell, C. Dalton, D. Gibson, B. Thomas
Proceedings Computer Graphics International, 2004.  
By modelling each segment using an autoregressive process we can introduce new segments and therefore unseen motions.  ...  We present a novel approach to motion synthesis. We show that by splitting sequences into segments we can create new sequences with a similar look and feel to the original.  ...  Motion synthesis provides an efficient and cost-effective tool for the film and games industry. Thousands of background characters can be synthesised using a single extracted motion.  ... 
doi:10.1109/cgi.2004.1309255 dblp:conf/cgi/OziemCDGT04
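The snippet above describes fitting an autoregressive process to each motion segment and driving it with noise to produce unseen variations. A minimal sketch of that idea on a 1-D joint trajectory (illustrative only — the order, data, and least-squares fitting here are assumptions, not the authors' implementation):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p) coefficients to a 1-D signal x."""
    # Column k holds x[t-1-k] for each target sample y[t] = x[t], t >= p.
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.std()

def synthesize(coef, sigma, seed, n, rng):
    """Drive the fitted AR model with Gaussian noise to create a new segment."""
    out = list(seed)
    for _ in range(n):
        past = out[-1::-1][:len(coef)]  # most recent samples first
        out.append(float(np.dot(coef, past)) + sigma * rng.normal())
    return np.array(out)
```

Sampling from the fitted process, rather than replaying the data, is what yields segments that look similar to the original without repeating it.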

Transflower: probabilistic autoregressive dance generation with multimodal attention [article]

Guillermo Valle-Pérez, Gustav Eje Henter, Jonas Beskow, André Holzapfel, Pierre-Yves Oudeyer, Simon Alexanderson
2021 arXiv   pre-print
Second, we introduce the currently largest 3D dance-motion dataset, obtained with a variety of motion-capture technologies, and including both professional and casual dancers.  ...  attend over a large motion and music context are necessary to produce interesting, diverse, and realistic dance that matches the music.  ...  Here we present what we believe is the first model for autoregressive motion synthesis combining both  ... 
arXiv:2106.13871v1

MoGlow: Probabilistic and controllable motion synthesis using normalising flows [article]

Gustav Eje Henter, Simon Alexanderson, Jonas Beskow
2019 arXiv   pre-print
Objective and subjective results show that randomly-sampled motion from the proposed method attains a motion quality close to recorded motion capture for both humans and animals.  ...  Data-driven modelling and synthesis of motion is an active research area with applications that include animation, games, and social robotics.  ...  Among other results, the performance of the ablations MG-D and MG-A versus the full MG system indicates that both autoregression and data dropout are of great importance for synthesising natural motion.  ... 
arXiv:1905.06598v2

Generative Spatiotemporal Modeling Of Neutrophil Behavior [article]

Narita Pandhe, Balazs Rada, Shannon Quinn
2018 arXiv   pre-print
In this work, we propose an aggregate model that combines Generative Adversarial Networks (GANs) and Autoregressive (AR) models to predict cell motion and appearance in human neutrophils imaged by differential  ...  Cell motion and appearance have a strong correlation with cell cycle and disease progression.  ...  Fig. 7: Sample results of appearance and motion synthesis.  ... 
arXiv:1804.00393v1

Higher-order Autoregressive Models for Dynamic Textures

M. Hyndman, A. Jepson, D. J. Fleet
2007 Procedings of the British Machine Vision Conference 2007  
To overcome problems of dynamic model stability, we apply Burg's Maximum Entropy Spectral Analysis technique for parameter estimation, which is found to be reliably stable on smaller samples of training  ...  Based on earlier work the images of the sequence are interpreted as the output of a linear autoregressive process driven by white Gaussian noise.  ...  Previous work using autoregressive models for dynamic texture synthesis, [6] in particular, used a first-order dynamical model.  ... 
doi:10.5244/c.21.76 dblp:conf/bmvc/HyndmanJF07
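The entry above interprets frames as the output of a linear autoregressive process driven by white Gaussian noise, extended to higher order, with Burg's method used for stable estimation on short sequences. A rough sketch of the higher-order idea on low-dimensional latent states (e.g. PCA coefficients of each frame), using plain least squares in place of Burg's method — dimensions and function names are made up for illustration:

```python
import numpy as np

def fit_var(states, p):
    """Least-squares fit of an order-p vector autoregression.
    states: (T, d) array of per-frame latent states."""
    T, d = states.shape
    # Stack the p previous states (most recent first) as regressors.
    X = np.hstack([states[p - k - 1:T - k - 1] for k in range(p)])  # (T-p, p*d)
    Y = states[p:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)                       # (p*d, d)
    resid = Y - X @ A
    return A, np.cov(resid.T)

def rollout(A, seed, n, noise_cov, rng):
    """Synthesize n new latent states by iterating the fitted dynamics."""
    d = seed.shape[1]
    p = A.shape[0] // d
    hist = [s for s in seed]
    for _ in range(n):
        x = np.concatenate(hist[-1:-p - 1:-1])  # most recent state first
        hist.append(x @ A + rng.multivariate_normal(np.zeros(d), noise_cov))
    return np.stack(hist[len(seed):])
```

Least squares can produce unstable dynamics on small training sets, which is exactly the problem the paper addresses by switching to Burg's maximum-entropy estimator.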

MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [article]

Alexander Richard, Michael Zollhoefer, Yandong Wen, Fernando de la Torre, Yaser Sheikh
2022 arXiv   pre-print
To improve upon existing models, we propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.  ...  Our approach ensures highly accurate lip motion, while also synthesizing plausible animation of the parts of the face that are uncorrelated to the audio signal, such as eye blinks and eyebrow motion.  ...  Motion synthesis is based on an autoregressive sampling strategy of the audio-conditioned temporal model over the learnt categorical latent space.  ... 
arXiv:2104.08223v2

Realtime style transfer for unlabeled heterogeneous human motion

Shihong Xia, Congyi Wang, Jinxiang Chai, Jessica Hodgins
2015 ACM Transactions on Graphics  
We demonstrate the power of our approach by transferring stylistic human motion for a wide variety of actions, including walking, running, punching, kicking, jumping and transitions between those behaviors  ...  (top) the input motion in the "neutral" style; (bottom) the output animation in the "proud" style. Note the more energetic arm motions and jump in the stylized motion.  ...  Green represents sample poses of an input motion and red shows the three closest neighbors from the database.  ... 
doi:10.1145/2766999

Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning [article]

Ligong Han and Jian Ren and Hsin-Ying Lee and Francesco Barbieri and Kyle Olszewski and Shervin Minaee and Dimitris Metaxas and Sergey Tulyakov
2022 arXiv   pre-print
To improve video quality and consistency, we propose a new video token trained with self-learning and an improved mask-prediction algorithm for sampling video tokens.  ...  Most methods for conditional video synthesis use a single modality as the condition. This comes with major limitations.  ...  A future direction could be balancing the training set to cover enough motion patterns for the sampled frames. Diversity of Non-Autoregressive Transformer.  ... 
arXiv:2203.02573v1

Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion [article]

Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, Shiry Ginosar
2022 arXiv   pre-print
We combine the motion and speech audio of the speaker using a motion-audio cross attention transformer.  ...  We present a framework for modeling interactional communication in dyadic conversations: given multimodal inputs of a speaker, we autoregressively output multiple possibilities of corresponding listener  ...  We extend vector quantization to the domain of motion synthesis and learn a quantized space of motion in which we autoregressively predict multiple modes of perceptually realistic listener motion.  ... 
arXiv:2204.08451v1
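The last snippet of this entry describes quantizing motion into a learnt codebook and autoregressively predicting codes in that discrete space. A toy illustration of the two steps, with nearest-neighbour assignment and a bigram transition table standing in for the learned autoregressive model (all names, sizes, and the bigram prior are hypothetical simplifications):

```python
import numpy as np

def quantize(frames, codebook):
    """Assign each motion frame to its nearest codebook vector (L2)."""
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def bigram_counts(codes, k):
    """Count code-to-code transitions as a stand-in for a learned
    autoregressive prior over the quantized motion space."""
    counts = np.ones((k, k))  # add-one smoothing
    for a, b in zip(codes[:-1], codes[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sample_codes(prior, start, n, rng):
    """Autoregressively sample n further codes given a starting code."""
    out = [start]
    for _ in range(n):
        out.append(int(rng.choice(len(prior), p=prior[out[-1]])))
    return out
```

Sampling code sequences, then decoding each code back to a motion frame, is what lets such models output multiple distinct yet plausible listener responses to the same speaker input.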

Data-driven autocompletion for keyframe animation

Xinyi Zhang, Michiel van de Panne
2018 Proceedings of the 11th Annual International Conference on Motion, Interaction, and Games - MIG '18  
In this thesis, we present a data-driven autocompletion method for synthesizing animated motions from input keyframes.  ...  Keyframing is the main method used by animators to choreograph appealing motions, but the process is tedious and labor-intensive.  ...  Deep Learning for Motion Synthesis Recently, researchers have begun to exploit deep learning algorithms for motion synthesis.  ... 
doi:10.1145/3274247.3274502 dblp:conf/mig/ZhangP18

CCVS: Context-aware Controllable Video Synthesis [article]

Guillaume Le Moing and Jean Ponce and Cordelia Schmid
2021 arXiv   pre-print
by affording simple mechanisms for handling multimodal ancillary information for controlling the synthesis process (e.g., a few sample frames, an audio track, a trajectory in image space) and taking into  ...  the synthesis process on contextual information for temporal continuity and ancillary information for fine control.  ...  JP was supported in part by the Louis Vuitton/ENS chair in artificial intelligence and the Inria/NYU collaboration. We thank the reviewers for useful comments.  ... 
arXiv:2107.08037v2

DanceConv: Dance Motion Generation with Convolutional Networks

Kosmas Kritsis, Aggelos Gkiokas, Aggelos Pikrakis, Vassilis Katsouros
2022 IEEE Access  
In this paper we present a multimodal convolutional autoencoder that combines 2D skeletal and audio information by employing an attention-based feature fusion mechanism, capable of generating novel dance  ...  Based on this outcome, we train the proposed multimodal architecture with two different approaches, namely teacher-forcing and self-supervised curriculum learning, to deal with the autoregressive error  ...  ACKNOWLEDGMENT The authors would like to thank all of their colleagues at the University of Piraeus and the Athena Research Center, for their constant support.  ... 
doi:10.1109/access.2022.3169782

Cross-Conditioned Recurrent Networks for Long-Term Synthesis of Inter-Person Human Motion Interactions

Jogendra Nath Kundu, Himanshu Buckchash, Priyanka Mandikal, Rahul M V, Anirudh Jamkhandi, R. Venkatesh Babu
2020 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)  
Qualitative and quantitative evaluation on several tasks, such as Short-term motion prediction, Long-term motion synthesis and Interaction-based motion retrieval against prior state-of-the-art approaches  ...  Available attempts to use auto-regressive techniques for long-term single-person motion generation usually fail, resulting in stagnated motion or divergence to unrealistic pose patterns.  ...  This work was supported by a Wipro PhD Fellowship (Jogendra), and a project grant from Robert Bosch Centre for Cyber-Physical Systems, IISc.  ... 
doi:10.1109/wacv45572.2020.9093627 dblp:conf/wacv/KunduBMVJR20

Action-Conditioned 3D Human Motion Synthesis with Transformer VAE [article]

Mathis Petrovich, Michael J. Black, Gül Varol
2021 arXiv   pre-print
By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action.  ...  Here we learn an action-aware latent representation for human motions by training a generative variational autoencoder (VAE).  ...  The authors would like to thank Mathieu Aubry and David Picard for helpful feedback, Chuan Guo and Shihao Zou for their help with Action2Motion details.  ... 
arXiv:2104.05670v2

The StyleGestures entry to the GENEA Challenge 2020 [article]

Simon Alexanderson
2020 Zenodo  
We look forward to improving the system for future GENEA challenges, for example by pre-training the model on external data, or exploiting text features and language models.  ...  CONCLUSIONS AND FUTURE WORK We have described our entry to the GENEA challenge 2020, which closely followed our recent publication [2] .  ...  DATA PREPARATION AND TRAINING We used the supplied challenge data for model training and synthesis [3] .  ... 
doi:10.5281/zenodo.4088600