
Example-Based Facial Animation of Virtual Reality Avatars using Auto-Regressive Neural Networks

Wolfgang Paier, Anna Hilsmann, Peter Eisert
2021 IEEE Computer Graphics and Applications  
During training, our network learns the dynamics of facial expressions, which enables the replay of annotated sequences from our animation database as well as their seamless concatenation in a new order.  ...  This allows realistic synthesis of novel facial animation sequences like visual speech, but also general facial expressions, in an example-based manner.  ...  In contrast to other animation methods (e.g. [1]), the main challenge of this approach is to generate realistic facial animations (appearance as well as dynamics) from a few semantic labels that do not  ... 
doi:10.1109/mcg.2021.3068035 pmid:33755560 fatcat:wc3pzuotjnch3je5fyij6ezdlm
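
As a concreteness aid: the snippet describes auto-regressive generation, where each predicted frame is fed back as input for the next. Below is a minimal sketch of that rollout loop in latent space; the toy linear "network", the dimensions, and the one-hot labels are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, label_dim, context = 16, 4, 3

# Stand-in for trained weights: maps (context latents + label) -> next latent.
W = rng.normal(scale=0.1, size=(context * latent_dim + label_dim, latent_dim))

def step(history, label):
    """Predict the next expression latent from the last `context` frames."""
    x = np.concatenate([np.concatenate(history[-context:]), label])
    return np.tanh(x @ W)

def rollout(seed_frames, label, n_frames):
    """Replay/synthesize by feeding each prediction back in as input."""
    frames = list(seed_frames)
    for _ in range(n_frames):
        frames.append(step(frames, label))
    return np.stack(frames)

seed = [rng.normal(size=latent_dim) for _ in range(context)]
label = np.eye(label_dim)[1]  # hypothetical one-hot annotation, e.g. "smile"
print(rollout(seed, label, n_frames=30).shape)  # (33, 16)
```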

Online simulation of emotional interactive behaviors with hierarchical Gaussian process dynamical models

Nick Taubert, Andrea Christensen, Dominik Endres, Martin A. Giese
2012 Proceedings of the ACM Symposium on Applied Perception - SAP '12  
The dynamics of the state in the latent space are modeled by a Gaussian Process Dynamical Model, a probabilistic dynamical model that can learn to generate arbitrary smooth trajectories in real time.  ...  This shows that the proposed method generates highly realistic interactive movements that are almost indistinguishable from natural ones.  ...  The dominant approach for the construction of dynamic models is physics-based animation.  ... 
doi:10.1145/2338676.2338682 dblp:conf/apgv/TaubertCEG12 fatcat:owljsj4vfreindz6glnjms7wdu
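
The GPDM idea, reduced to its core: learn transitions between consecutive latent states with a Gaussian process and iterate the posterior mean to generate smooth trajectories. A minimal sketch, assuming an RBF kernel and a toy 2-D latent space; the paper's hierarchical structure and uncertainty handling are omitted.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    """RBF kernel between two sets of latent points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

rng = np.random.default_rng(1)
# Toy training trajectory in a 2-D latent space (a noisy circle).
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.normal(size=(60, 2))
X_in, X_out = X[:-1], X[1:]                      # transition pairs

K = rbf(X_in, X_in) + 1e-4 * np.eye(len(X_in))   # kernel matrix + noise
alpha = np.linalg.solve(K, X_out)                # precomputed GP weights

def gp_step(x):
    """Posterior-mean prediction of the next latent state."""
    return rbf(x[None, :], X_in) @ alpha

# Generate a smooth trajectory by iterating the mean dynamics.
x, traj = X[0], []
for _ in range(100):
    x = gp_step(x)[0]
    traj.append(x)
print(np.stack(traj).shape)  # (100, 2)
```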

Motion Deformation Style Control Technique for 3D Humanoid Character by Using MoCap Data

Ismahafezi Ismail, Mohd Shahrizal Sunar, Hoshang Kolivand
2015 Jurnal Teknologi  
This paper presents a technique to deform motion style using Motion Capture (MoCap) data in a computer animation system. By using MoCap data, natural human action styles can be deformed.  ...  Unlike existing 3D humanoid character motion editors, our method produces realistic final results and simulates new dynamic humanoid motion styles through a simple user interface control.  ...  To generate a less dynamic motion style, we reduce the action force when the 3D character moves from the ground.  ... 
doi:10.11113/jt.v78.6926 fatcat:euwffalxtbe4njn4ll4fkbbife
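
One simple way to read "reduce the action force" is attenuating each frame's deviation from the mean pose, a common style-scaling trick. The sketch below is an illustrative stand-in, not the paper's deformation method.

```python
import numpy as np

def scale_motion_style(frames, gain):
    """frames: (T, D) pose parameters; gain < 1 damps, gain > 1 exaggerates."""
    mean_pose = frames.mean(axis=0, keepdims=True)
    return mean_pose + gain * (frames - mean_pose)

rng = np.random.default_rng(2)
motion = rng.normal(size=(120, 60))          # 120 frames, 20 joints x 3 (toy)
less_dynamic = scale_motion_style(motion, 0.6)
print(less_dynamic.std() < motion.std())     # True: reduced variation
```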

Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding [article]

Mireille Fares, Michele Grimaldi, Catherine Pelachaud, Nicolas Obin
2022 arXiv   pre-print
Our model performs zero-shot multimodal style transfer driven by multimodal data from the PATS database, which contains videos of various speakers.  ...  Our system consists of: (1) a speaker style encoder network that learns to generate a fixed-dimensional speaker style embedding from a target speaker's multimodal data and (2) a sequence-to-sequence synthesis  ...  [22] propose a model for driving 3D facial animation from audio.  ... 
arXiv:2208.01917v1 fatcat:soeue2wkmrcfnokvak2y6332yq
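
The described two-part system can be summarized as: pool a target speaker's multimodal features into a fixed-size style embedding, then condition a synthesizer on that embedding at every frame. A toy sketch with linear maps standing in for the paper's encoder and sequence-to-sequence networks; all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
feat_dim, style_dim, out_dim = 32, 8, 12
W_enc = rng.normal(scale=0.1, size=(feat_dim, style_dim))
W_dec = rng.normal(scale=0.1, size=(feat_dim + style_dim, out_dim))

def style_embedding(speaker_feats):
    """(T, feat_dim) multimodal features -> fixed (style_dim,) embedding."""
    return np.tanh(speaker_feats @ W_enc).mean(axis=0)

def synthesize(content_feats, style):
    """Condition every content frame on the target speaker's style."""
    T = len(content_feats)
    x = np.concatenate([content_feats, np.tile(style, (T, 1))], axis=1)
    return x @ W_dec

style = style_embedding(rng.normal(size=(200, feat_dim)))  # unseen speaker
gestures = synthesize(rng.normal(size=(50, feat_dim)), style)
print(gestures.shape)  # (50, 12)
```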

Dynamic Online 3D Visualization Framework for Real-Time Energy Simulation Based on 3D Tiles

Bo Mao, Yifang Ban, Björn Laumert
2020 ISPRS International Journal of Geo-Information  
Finally, during the visualization process, dynamic interactions and data sources are integrated into the styling generation to support real-time visualization.  ...  Then the 3D geometry data of these city objects are combined with their simulation results as attributes, or just with object ID information, to generate Batched 3D Models (B3DM) in 3D Tiles.  ... 
doi:10.3390/ijgi9030166 fatcat:dxutu62qyrg2rndpg4waos6be4
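
On the data side, pairing object IDs with simulation attributes is what a 3D Tiles batch table captures. A minimal sketch of assembling such a table; the field names and values are hypothetical, and a real B3DM file would embed this JSON after the header, alongside the binary glTF payload.

```python
import json

# Hypothetical per-building simulation results keyed by object ID.
city_objects = [
    {"id": "bldg_001", "energy_kwh": 420.5},
    {"id": "bldg_002", "energy_kwh": 311.2},
]

# Batch table: parallel arrays, one entry per batched feature.
batch_table = {
    "objectId":  [o["id"] for o in city_objects],
    "energyKwh": [o["energy_kwh"] for o in city_objects],
}
print(json.dumps(batch_table, indent=2))
```

A client can then style each feature at render time by looking up its batch index in these arrays, which is what enables the data-driven styling the abstract describes.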

Stylized synthesis of facial speech motions

Yuru Pei, Hongbin Zha
2007 Computer Animation and Virtual Worlds  
To generalize the visyllable model from several instances, the mapping coefficient matrices are assembled into a tensor, which is decomposed into independent modes, e.g. identity and uttering styles.  ...  In this paper, we address the fundamental issues regarding the stylized dynamic modeling of visyllables. A decomposable generalized model is learnt for stylized motion synthesis.  ...  Instead, we provide an integrated framework for motion embedding and dynamic modeling, along with a generalized style mapping. Motion analysis has drawn attention for many years.  ... 
doi:10.1002/cav.186 fatcat:ttw5hxgrs5e7vpsmy6r4h5imcu
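
Assembling per-instance mapping matrices into a tensor and separating identity from style modes is classically done with a higher-order SVD. A minimal HOSVD-flavored sketch with assumed dimensions, not the paper's exact decomposition.

```python
import numpy as np

rng = np.random.default_rng(4)
n_id, n_style, coeff_dim = 5, 3, 40
T = rng.normal(size=(n_id, n_style, coeff_dim))   # stacked mapping coefficients

def unfold(tensor, mode):
    """Matricize a tensor along one mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Mode factors via SVD of each unfolding (the classic HOSVD step).
U_id, _, _    = np.linalg.svd(unfold(T, 0), full_matrices=False)
U_style, _, _ = np.linalg.svd(unfold(T, 1), full_matrices=False)

# Core tensor: project the data onto the identity and style bases.
core = np.einsum('isk,ia,sb->abk', T, U_id, U_style)

# Recombine: the mapping for identity 0 uttered in style 1.
recon = np.einsum('abk,a,b->k', core, U_id[0], U_style[1])
print(recon.shape)  # (40,)
```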

DeepCloth: Neural Garment Representation for Shape and Style Editing [article]

Zhaoqi Su and Tao Yu and Yangang Wang and Yipeng Li and Yebin Liu
2020 arXiv   pre-print
Experiments demonstrate that our method achieves state-of-the-art garment modeling results compared with previous methods.  ...  To conclude, with the proposed DeepCloth, we move a step forward toward establishing a more flexible and general 3D garment digitization framework.  ...  Also, with our proposed animation and 3D-shape inference module, DeepCloth can generate dynamic 4D garment sequences (see Fig. 1(c)), or extract the clothing shape parameters from a clothed human model  ... 
arXiv:2011.14619v1 fatcat:kz5xbalo2jcilj3wjkzdjvqpre

Real Time Physical Force-Driven Hair Animation

Ahmad Hoirul Basori, Alaa Omran Almagrabi
2018 Zenodo  
This paper proposes applying the physical forces of gravity and wind to hair animation in real time, based on the wisp model and the Verlet integration technique.  ...  character animation, which is widely used in animation, games and movies.  ...  Previous researchers have also studied hair style and employed the statistical wisp model for the hair style generation approach.  ... 
doi:10.5281/zenodo.2550644 fatcat:horwqc746fffxdi6xtzrv3rvla
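
Position Verlet with gravity and wind is concrete enough to sketch directly: integrate each strand particle, pin the root, and relax distance constraints to keep the strand intact. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

n, rest_len, dt = 10, 0.1, 1 / 60
pos = np.zeros((n, 3))
pos[:, 1] = -rest_len * np.arange(n)     # strand hangs straight down
prev = pos.copy()
gravity = np.array([0.0, -9.81, 0.0])
wind    = np.array([2.0, 0.0, 0.5])      # assumed constant wind force

def step(pos, prev):
    acc = gravity + wind                  # per-particle forces, unit mass
    new = 2 * pos - prev + acc * dt**2    # position Verlet update
    new[0] = pos[0]                       # root particle pinned to scalp
    for _ in range(5):                    # relax distance constraints
        d = new[1:] - new[:-1]
        length = np.linalg.norm(d, axis=1, keepdims=True)
        corr = 0.5 * (length - rest_len) * d / np.maximum(length, 1e-9)
        new[1:] -= corr
        new[:-1] += corr
        new[0] = pos[0]
    return new, pos

for _ in range(120):                      # simulate 2 seconds
    pos, prev = step(pos, prev)
print(pos[-1])                            # strand tip blown along the wind
```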

DeepLandscape: Adversarial Modeling of Landscape Video [article]

Elizaveta Logacheva, Roman Suvorov, Oleg Khomenko, Anton Mashikhin, Victor Lempitsky
2020 arXiv   pre-print
Our architecture extends the StyleGAN model by augmenting it with parts that allow it to model dynamic changes in a scene.  ...  Once trained, our model can be used to generate realistic time-lapse landscape videos with moving objects and time-of-day changes.  ...  It allows disentangling static appearance from dynamics, as well as the manifold of possible changes from a trajectory within it. Once trained, our model can animate a given photograph.  ... 
arXiv:2008.09655v1 fatcat:un6qlzjwvve4jmzg5smoy6x52u
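
The disentanglement described, one latent held fixed for static appearance and another moved along a trajectory for the dynamics, can be illustrated with a toy generator. This is a stand-in for the idea only; it is not StyleGAN.

```python
import numpy as np

rng = np.random.default_rng(5)
z_static = rng.normal(size=16)            # fixed landscape appearance
G = rng.normal(scale=0.1, size=(32, 64))  # toy "generator" weights

def frame(z_dyn):
    """Render one frame from the static and dynamic codes."""
    return np.tanh(np.concatenate([z_static, z_dyn]) @ G)

# Animate: move only the dynamic code along a smooth trajectory
# (e.g. clouds drifting, light changing), appearance stays fixed.
z0, z1 = rng.normal(size=16), rng.normal(size=16)
video = np.stack([frame((1 - a) * z0 + a * z1)
                  for a in np.linspace(0, 1, 24)])
print(video.shape)  # (24, 64): 24 toy "frames"
```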

Learning Motion Style Synthesis from Perceptual Observations

Lorenzo Torresani, Peggy Hackney, Christoph Bregler
2006 Neural Information Processing Systems  
between animation parameters and movement styles in perceptual space.  ...  We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and can extrapolate styles not originally included in the training data.  ... 
dblp:conf/nips/TorresaniHB06 fatcat:p6ou76sg3fdxlhs3xorejitpka
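
Style extrapolation of this kind can be pictured as fitting a direction in animation-parameter space against perceptual ratings, then pushing past the training range. A hedged sketch on synthetic data; this is not the paper's learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
n_params = 10
neutral = rng.normal(size=n_params)       # base motion parameters
true_dir = rng.normal(size=n_params)      # hidden style direction (toy)

# Training pairs: perceptual rating -> observed animation parameters.
ratings = np.linspace(0, 1, 20)
params = (neutral + ratings[:, None] * true_dir
          + 0.01 * rng.normal(size=(20, n_params)))

# Least-squares fit: params ~ intercept + rating * direction.
A = np.c_[np.ones_like(ratings), ratings]
coef, *_ = np.linalg.lstsq(A, params, rcond=None)
neutral_hat, direction = coef

# Extrapolate beyond the training range (rating 1.5 was never observed).
stylized = neutral_hat + 1.5 * direction
print(stylized.shape)  # (10,)
```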

High Resolution Acquisition, Learning and Transfer of Dynamic 3-D Facial Expressions

Yang Wang, Xiaolei Huang, Chan-Su Lee, Song Zhang, Zhiguo Li, Dimitris Samaras, Dimitris Metaxas, Ahmed Elgammal, Peisen Huang
2004 Computer Graphics Forum  
Thus new expressions can be synthesized, either as dynamic morphing between individuals, or as expression transfer from a source face to a target face, as demonstrated in a series of experiments.  ...  high-quality dynamic expression data.  ...  Using our decomposable generative model, we analyzed the motion style factor for each person using the variation of the feature point locations from the base geometry.  ... 
doi:10.1111/j.1467-8659.2004.00800.x fatcat:245aeohuf5dxbffwhi5rm5vlri
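
Expression transfer in its simplest form moves displacements relative to a base geometry from one face to another. The sketch below shows only that core arithmetic; real transfer additionally needs point correspondence and the paper's learned style factors.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pts = 100
src_base = rng.normal(size=(n_pts, 3))                    # source neutral face
src_expr = src_base + 0.05 * rng.normal(size=(n_pts, 3))  # source with expression
tgt_base = rng.normal(size=(n_pts, 3))                    # target neutral face

offsets = src_expr - src_base      # motion relative to the base geometry
tgt_expr = tgt_base + offsets      # expression transferred to the target
print(tgt_expr.shape)              # (100, 3)
```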

The influence of visual cognitive style when learning from instructional animations and static pictures

Tim N. Höffler, Helmut Prechtl, Claudia Nerdel
2010 Learning and Individual Differences  
The assumption is made that HDV benefit from their cognitive style when they have to construct a mental animation from static pictures.  ...  In a 2 × 2 design, we examined the role of visual cognitive style in two multimedia-based learning environments (text plus static pictures/animations).  ...  Acknowledgements This research was supported by a grant from Deutsche Forschungsgemeinschaft (BA 1087/4-1).  ... 
doi:10.1016/j.lindif.2010.03.001 fatcat:aipoltwxgjdufmz4kfhgr5udfy

TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style

Chaitanya Patel, Zhouyingcheng Liao, Gerard Pons-Moll
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Figure 1: We present TailorNet, a model to estimate clothing deformations with fine details from input body shape, body pose and garment style.  ...  From the left: the first two avatars show two different styles on the same shape, the following two show the same two styles on another shape, and the last two avatars illustrate that our method works  ...  In order to make animation easy, several works learn efficient approximate models from PBS compiled off-line.  ... 
doi:10.1109/cvpr42600.2020.00739 dblp:conf/cvpr/PatelLP20 fatcat:tafhrzep75hhddxu7wv4vjpf7a
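
The interface the caption describes is a function from (pose, shape, style) codes to per-vertex garment displacements. Below, a toy linear predictor stands in for TailorNet's learned decomposition; dimensions and weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
pose_dim, shape_dim, style_dim, n_verts = 72, 10, 4, 500
W = rng.normal(scale=0.01,
               size=(pose_dim + shape_dim + style_dim, n_verts * 3))

def predict_displacements(pose, shape, style):
    """Per-vertex offsets to add to a template garment mesh."""
    x = np.concatenate([pose, shape, style])
    return (x @ W).reshape(n_verts, 3)

d = predict_displacements(rng.normal(size=pose_dim),
                          rng.normal(size=shape_dim),
                          rng.normal(size=style_dim))
print(d.shape)  # (500, 3)
```

Holding the style code fixed while varying the shape code reproduces the figure's point: the same garment style draped on different bodies.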

Real-Time Synthesis of Body Movements Based on Learned Primitives [chapter]

Martin A. Giese, Albert Mukovskiy, Aee-Ni Park, Lars Omlor, Jean-Jacques E. Slotine
2009 Lecture Notes in Computer Science  
The proposed model is inspired by concepts from motor control.  ...  The learned generative model can synthesize periodic and non-periodic movements, achieving high degrees of realism with a very small number of synergies.  ...  In addition, a variety of animation systems have exploited models for central pattern generators.  ... 
doi:10.1007/978-3-642-03061-1_6 fatcat:mjwemdrnzveyvhsyx2fzw46lcy
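
Synthesis from a small number of learned primitives can be sketched as a weighted, time-shifted superposition of periodic source signals per joint, in the spirit of anechoic mixtures. Sources, weights, and delays below are toy values, not learned synergies.

```python
import numpy as np

rng = np.random.default_rng(9)
T, n_joints, n_synergies = 200, 6, 3
t = np.linspace(0, 4 * np.pi, T)

# Three periodic "synergies" at different frequencies.
sources = np.stack([np.sin((k + 1) * t) for k in range(n_synergies)])

weights = rng.normal(size=(n_joints, n_synergies))
delays  = rng.integers(0, 20, size=(n_joints, n_synergies))

# Each joint trajectory: weighted sum of time-shifted sources.
joints = np.zeros((n_joints, T))
for i in range(n_joints):
    for j in range(n_synergies):
        joints[i] += weights[i, j] * np.roll(sources[j], delays[i, j])
print(joints.shape)  # (6, 200)
```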

Simulation Guided Hair Dynamics Modeling from Video

Qing Zhang, Jing Tong, Huamin Wang, Zhigeng Pan, Ruigang Yang
2012 Computer Graphics Forum  
The refined space-time hair dynamics are consistent with video inputs and can also be used to generate novel hair animations of different hair styles.  ...  In this paper we present a hybrid approach to reconstruct hair dynamics from multi-view video sequences, captured under uncontrolled lighting conditions.  ...  Figure 7: The result of generating new hair animations using different hair styles. The static artist-made models are shown in the left column.  ... 
doi:10.1111/j.1467-8659.2012.03192.x fatcat:qfn7opf3ynbvphfs3xwrclbgcm
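
The "simulation guided by observations" idea can be caricatured as blending each simulated frame toward positions reconstructed from the video, weighted by a confidence term. A purely illustrative sketch, not the paper's optimization.

```python
import numpy as np

def guided_update(sim_pos, observed_pos, confidence=0.3):
    """Blend simulated strand points with per-frame video reconstruction."""
    return (1 - confidence) * sim_pos + confidence * observed_pos

rng = np.random.default_rng(10)
sim = rng.normal(size=(50, 3))               # simulated strand points, one frame
obs = sim + 0.1 * rng.normal(size=(50, 3))   # hypothetical video-based estimate
refined = guided_update(sim, obs)
print(refined.shape)  # (50, 3)
```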