
Pose-timeline for propagating motion edits

Tomohiko Mukai, Shigeru Kuriyama
2009 Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation - SCA '09  
We demonstrate the efficiency of our pose-timeline interface with a propagation mechanism for the timing adjustment of mutual actions and for motion synchronization with a music sequence.  ...  Our system visualizes a motion sequence on a summary timeline with editable pose-icons, and drag-and-drop operations on the timeline enable intuitive controls of temporal properties of the motion such  ...  Acknowledgements The authors would like to thank Hiroshi Yasuda for detailed discussions, and anonymous reviewers for their helpful comments.  ... 
doi:10.1145/1599470.1599485 dblp:conf/sca/MukaiK09 fatcat:aapnnn533vgx7ndat3z625zh5q

A Novel Direct Manipulation Technique for Motion-editing using a Timeline-based Interface

Natapon Pantuwong
2019 ECTI Transactions on Computer and Information Technology  
This paper presents a timeline-based motion-editing system that enables users to perform motion-editing tasks easily and quickly.  ...  In contrast with the previous work that allows only temporal editing, the proposed system provides editing functions for both geometry and temporal editing.  ...  Editing propagation is extended for pose editing.  ... 
doi:10.37936/ecti-cit.2018122.59392 fatcat:3z675odm7ncrxet3dy7bkwrhvu

Direct space-time trajectory control for visual media editing

Stephanie Santosa, Fanny Chevalier, Ravin Balakrishnan, Karan Singh
2013 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '13  
We explore the design space for using object motion trajectories to create and edit visual elements in various media across space and time.  ...  We implemented and evaluated these techniques in DirectPaint, a system for creating free-hand painting and annotation over video.  ...  Many participants really liked the colour script timeline as feedback on their edits.  ... 
doi:10.1145/2470654.2466148 dblp:conf/chi/SantosaCBS13 fatcat:hsqbdjcrsjcdxjnlarh2h4bs5q

2-3 Animation

Shigeru KURIYAMA, Tomohiko MUKAI
2010 The Journal of the Institute of Image Electronics Engineers of Japan  
Forsyth: "Generalizing motion edits with Gaussian processes", ACM Trans. Graph., Vol.28, No.1 (2009). 3) T. Mukai and S. Kuriyama: "Pose-timeline for propagating motion edits", In Symposium on Computer  ...  ., Vol.29, No.1 (2009). 5) M. Kim, K. Hyun, J. Kim, and J. Lee: "Synchronized multi-character motion editing", ACM Trans. Graph., Vol.28, No.3 (2009). 6) M. Lau, Z. Bar-Joseph, and J.  ... 
doi:10.11371/iieej.39.835 fatcat:tfxg3mokrbeqnfiyqkqljqlhbi

Visual Rhythm Prediction with Feature-Aligning Network [article]

Yutong Xie, Haiyang Wang, Yan Hao, Zihao Xu
2019 arXiv   pre-print
In our approach, we first extract features including original frames and their residuals, optical flow, scene change, and body pose.  ...  Here we observe slight misalignments between features over the timeline and assume that this is due to differences in how the features are computed.  ...  However, this rule-based method cannot distinguish the motion of the central subject from background or camera motion, and even minor camera-motion disturbance can greatly interfere with the detection  ... 
arXiv:1901.10163v1 fatcat:4554lkdugbba5kmllbp6gfwg6i


AER: Aesthetic Exploration and Refinement for Expressive Character Animation

Michael Neff, Eugene Fiume
2005 Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation - SCA '05  
Simply attending to the low-level motion control problem, particularly for physically based models, is very difficult.  ...  We demonstrate how all high-level constructions for expressive animation can be given a precise semantics that translates into a low-level motion specification that is then simulated either physically or  ...  The time planner implements elastic behaviour for the timeline, where no TElements can overlap and there can be no gaps in the timeline.  ... 
doi:10.1145/1073368.1073391 fatcat:255tdfqmjbf3vc57ao6ru2f32y

A Layered Authoring Tool for Stylized 3D animations

Jiaju Ma, Li-Yi Wei, Rubaiat Habib Kazi
2022 CHI Conference on Human Factors in Computing Systems  
The layered user interface features a timeline sequencer (c) in which users can add stylization effects as additional channels for the corresponding objects (a), either individually or as groups (b).  ...  a) Original animation (b) Stylizing (a) with arc, staging, follow through, and squash and stretch (c) Surface level authoring interface: timeline sequencer (d) Second level authoring interface: node graph  ...  ACKNOWLEDGMENTS We would like to thank the anonymous reviewers for their valuable feedback as well as support from the following people (in alphabetical order of last name): Amanda McCoy Bast, Kenji Endo  ... 
doi:10.1145/3491102.3501894 fatcat:zw2geuzifrg5nkubkro5kqwpru

Space-time sketching of character animation

Martin Guay, Rémi Ronfard, Michael Gleicher, Marie-Paule Cani
2015 ACM Transactions on Graphics  
Our dynamic models for the line's motion are entirely geometric, require no pre-existing data, and allow full artistic control.  ...  We present a space-time abstraction for the sketch-based design of character animation. It allows animators to draft a full coordinated motion using a single stroke called the space-time curve (STC).  ...  Acknowledgements We thank Antoine Begault for help with code, Estelle Charleroy with video editing and Deidre Stuffer with text editing.  ... 
doi:10.1145/2766893 fatcat:r66ekt4xzvayfhrf6ziencnfvm

Style learning and transferring for facial animation editing

Xiaohan Ma, Binh Huy Le, Zhigang Deng
2009 Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation - SCA '09  
Most current facial animation editing techniques are frame-based approaches (i.e., manually editing one keyframe every several frames), which is ineffective, time-consuming, and prone to editing inconsistency  ...  effectively applied to automate the editing of the remaining facial animation frames or to transfer editing styles between different animation sequences.  ...  It should be noted that for each of the training editing-pairs, we will generate two such feature vectors (one for the facial pose before the editing, the other for its facial pose after the editing).  ... 
doi:10.1145/1599470.1599486 dblp:conf/sca/MaLD09 fatcat:xlvvrbjx75d5dau2kg2u6mj7nu

Intuitive Facial Animation Editing Based On A Generative RNN Framework [article]

Eloïse Berson, Catherine Soladié, Nicolas Stoiber
2020 arXiv   pre-print
Our system handles different supervised or unsupervised editing scenarios such as motion filling during occlusions, expression corrections, semantic content modifications, and noise filtering.  ...  For the last decades, the concern of producing convincing facial animation has garnered great interest, which has only accelerated with the recent explosion of 3D content in both entertainment and  ...  This framework allows editing motion segments of any length at any point in the animation timeline.  ... 
arXiv:2010.05655v1 fatcat:z6udw7utqncidojlr7562htyiq

A deep learning framework for character motion synthesis and editing

Daniel Holden, Jun Saito, Taku Komura
2016 ACM Transactions on Graphics  
This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural.  ...  Once motion is generated it can be edited by performing optimization in the space of the motion manifold.  ...  Acknowledgements We thank the reviewers for the fruitful suggestions. This research is supported by Marza Animation Planet.  ... 
doi:10.1145/2897824.2925975 fatcat:tlw7vclqknawzdzsnqimtt2apy


TotalRecall: Visualization and Semi-Automatic Annotation of Very Large Audio-Visual Corpora

Rony Kubat, Philip DeCamp, Brandon Roy
2007 Proceedings of the ninth international conference on Multimodal interfaces - ICMI '07  
We introduce a system for visualizing, annotating, and analyzing very large collections of longitudinal audio and video recordings.  ...  The system, TotalRecall, is designed to address the requirements of projects like the Human Speechome Project [18], for which more than 100,000 hours of multitrack audio and video have been collected over  ...  In Figure 2, the first window displays a synchronized timeline view of the multiple audio and video channels. Users view transform data in this window, as well as view and edit annotations.  ... 
doi:10.1145/1322192.1322229 dblp:conf/icmi/KubatDR07 fatcat:wifgnnzeord4lmbiqxfjrlzd7a

Controllable high-fidelity facial performance transfer

Feng Xu, Jinxiang Chai, Yilong Liu, Xin Tong
2014 ACM Transactions on Graphics  
This paper introduces a novel facial expression transfer and editing technique for high-fidelity facial performance data.  ...  Current methods for facial expression transfer, however, are often limited to large-scale facial deformation.  ...  Acknowledgement The authors would like to thank Xing Zhao for her help in modeling and editing the "Dog" sequence. The authors also want to thank Stephen Lin for paper proofreading.  ... 
doi:10.1145/2601097.2601210 fatcat:srywyxguhnhynaymb4d4xote7m

ActionSnapping: Motion-Based Video Synchronization [chapter]

Jean-Charles Bazin, Alexander Sorkine-Hornung
2016 Lecture Notes in Computer Science  
Our approach extends the popular "snapping" tool of video editing software and allows users to automatically snap action videos together in a timeline based on their content.  ...  Video synchronization is a fundamental step for many applications in computer vision, ranging from video morphing to motion analysis.  ...  Acknowledgements We are very grateful to World Dance New York for giving us the permission to use their YouTube videos.  ... 
doi:10.1007/978-3-319-46454-1_10 fatcat:mdgpogidyfglfcorl2fyxz25nq

Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders [article]

Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, Linchao Bao
2021 arXiv   pre-print
However, splitting the latent code into two parts poses training difficulties for the VAE model.  ...  Finally, we demonstrate that our method can be readily used to generate motion sequences with user-specified motion clips on the timeline.  ...  Our model could generate a smooth motion from the edited motion-specific code. Please refer to our project page for the demonstration.  ... 
arXiv:2108.06720v1 fatcat:cjognac7gbdavkfut2qu2kichu