119 Hits in 6.8 sec

MoVi: A Large Multipurpose Motion and Video Dataset [article]

Saeed Ghorbani, Kimia Mahdaviani, Anne Thaler, Konrad Kording, Douglas James Cook, Gunnar Blohm, Nikolaus F. Troje
2020 arXiv   pre-print
Human movements are both an area of intense study and the basis of many applications such as character animation.  ...  For many applications, it is crucial to identify movements from videos or to analyze datasets of movements. Here we introduce MoVi, a new human Motion and Video dataset, which we make publicly available.  ...  We further wish to thank Viswajit Kumar for his help with post-processing the data and setting up the data repository and website.  ... 
arXiv:2003.01888v1 fatcat:r62vdwp75neejngmf6vynpb65e

AVASAG: A German Sign Language Translation System for Public Services (short paper)

Fabrizio Nunnari, Judith Bauerdiek, Lucas Bernhard, Cristina España-Bonet, Corinna Jäger, Amelie Unger, Kristoffer Waldow, Sonja Wecker, Elisabeth André, Stephan Busemann, Christian Dold, Arnulph Fuhrmann (+7 others)
2021 Machine Translation Summit  
We describe the scientific innovation points (geometry-based SL description, 3D animation and video corpus, simplified annotation scheme, motion capture strategy) and the overall translation pipeline.  ...  This paper presents an overview of AVASAG, an ongoing applied-research project developing a text-to-sign-language translation system for public services.  ...  For the animation synthesis, we use the cloud-based Charamel software VuppetMaster [18], which provides a 3D real-time rendering engine based on the WebGL standard, thus making it possible to run the avatar  ... 
dblp:conf/mtsummit/NunnariBBEJUWWA21 fatcat:yjsn3zfjwzanbfkkgipkvz6rge

AMASS: Archive of Motion Capture as Surface Shapes [article]

Naureen Mahmood, Nima Ghorbani (MPI for Intelligent Systems), Nikolaus F. Troje, Gerard Pons-Moll (MPI for Informatics), Michael J. Black
2019 arXiv   pre-print
The consistent representation of AMASS makes it readily useful for animation, visualization, and generating training data for deep learning.  ...  We achieve this using a new method, MoSh++, that converts mocap data into realistic 3D human meshes represented by a rigged body model; here we use SMPL [doi:10.1145/2816795.2818013], which is widely used  ... 
arXiv:1904.03278v1 fatcat:ryuog2nd6nc3jpuzjdxdgc7rnu

A System for Acquisition and Modelling of Ice-Hockey Stick Shape Deformation from Player Shot Videos

Kaustubha Mendhurwar, Gaurav Handa, Leixiao Zhu, Sudhir Mudur, Etienne Beauchesne, Marc LeVangie, Aiden Hallihan, Abbas Javadtalab, Tiberiu Popa
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
Our stick acquisition pipeline: a) Input images for each frame; b) Point cloud reconstruction; c) Template geometry (blue); d) Reconstructed stick bend (green); e, f) Deformed hockey stick (red).  ...  Zuffi et al. [45] have proposed a method to obtain a 3D textured animal model, given a set of images of the animal annotated with landmarks and silhouettes.  ...  Conclusions, Limitations and Future Work: In this work we presented a complete hardware setup and software pipeline for 3D acquisition of a hockey stick during a high-speed hockey shot.  ... 
doi:10.1109/cvprw50498.2020.00453 dblp:conf/cvpr/MendhurwarHZMBL20 fatcat:usn2h5fgovhjvda34o3cq3g7k4

Teaching visual storytelling for virtual production pipelines incorporating motion capture and visual effects

Gregory Bennett, Jan Kruse
2015 SIGGRAPH Asia 2015 Symposium on Education on - SA '15  
Specifically, pedagogical challenges in teaching Visual Storytelling through Motion Capture and Visual Effects are addressed, and a new pedagogical framework using three different modes of moving image  ...  Accordingly, tertiary teaching of subject areas such as cinema, animation and visual effects requires frequent adjustments to curriculum structure and pedagogy.  ...  Acknowledgements: We wish to thank Auckland University of Technology, in particular Colab, for their support.  ... 
doi:10.1145/2818498.2818516 dblp:conf/siggraph/BennettK15 fatcat:vvhu7drngjakjl2dvrzmygpnoq

Automatic Learning of Articulated Skeletons from 3D Marker Trajectories [chapter]

Edilson de Aguiar, Christian Theobalt, Hans-Peter Seidel
2006 Lecture Notes in Computer Science  
We present a novel fully-automatic approach for estimating an articulated skeleton of a moving subject and its motion from body marker trajectories that have been measured with an optical motion capture  ...  Our method does not require a priori information about the shape and proportions of the tracked subject, can be applied to arbitrary motion sequences, and renders dedicated initialization poses unnecessary  ...  Rigid Body Clustering The input to our system is raw optical MOCAP data, i.e. 3D marker trajectories that can be acquired with all commercial optical MOCAP systems available today.  ... 
doi:10.1007/11919476_49 fatcat:434buqcbnjdunmpfbhguan4bji

Simple MoCap System for Home Usage

Martin Magdin
2017 International Journal of Interactive Multimedia and Artificial Intelligence  
Generating 3D facial animation of characters is currently realized by using motion capture data (MoCap data), which is obtained by tracking facial markers on an actor/actress.  ...  In general it is a professional solution that is sophisticated and costly. This paper presents a solution with a system that is inexpensive.  ...  III. Design of System for Animation and Analysis of Motion: Previous research used data processing programs created by commercial companies.  ... 
doi:10.9781/ijimai.2017.4410 fatcat:zizcoq5hrrf7bgmctw3eeffefe

Motion Capture using 3D

R. Suwetha
2017 International Journal for Research in Applied Science and Engineering Technology  
, and using that information to animate digital character models; in many animation areas, movements are captured in 2D or 3D animation. [2][3][4] When it includes the face and fingers or captures expressions  ...  It is employed in the military, entertainment, sports, and medical fields, and for computer vision [1] and AI. It is also used nowadays in filmmaking and video games; it is done by recording the actions of human actors  ...  One of the secret ingredients that will get us there may be motion capture (MoCap), defined as a technology that allows us to record human motion with sensors and to digitally map the motion to computer-generated  ... 
doi:10.22214/ijraset.2017.4218 fatcat:xtexzok4ivep3hvpc22csqmnf4

Methodology for Building Synthetic Datasets with Virtual Humans [article]

Shubhajit Basak, Hossein Javidnia, Faisal Khan, Rachel McDonnell, Michael Schukat
2020 arXiv   pre-print
In this work, we explore a framework to synthetically generate facial data to be used as part of a toolchain to generate very large facial datasets with a high degree of control over facial and environmental  ...  In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities, providing full control over image variations such as pose,  ...  In this paper, we propose a pipeline using an open-source tool and a commercially available animation toolkit to generate photo-realistic human models and corresponding ground truths including RGB images  ... 
arXiv:2006.11757v1 fatcat:njk7mbtmszc3tjqsq3gxa6q6fm

High-Accuracy Facial Depth Models derived from 3D Synthetic Data [article]

Faisal Khan, Shubhajit Basak, Hossein Javidnia, Michael Schukat, Peter Corcoran
2020 arXiv   pre-print
In this paper, we explore how synthetically generated 3D face models can be used to construct a high accuracy ground truth for depth.  ...  Using synthetic facial animations, a dynamic facial expression or facial action data can be rendered for a sequence of image frames together with ground truth depth and additional metadata such as head  ...  A method is proposed to generate facial depth information using 3D virtual human and iClone [1] character modelling software.  ... 
arXiv:2003.06211v2 fatcat:fpx2z4dq6vh5lh6eao6h6dxxfa

Modelling craftspeople for cultural heritage: a case study

Nedjma Cadi, Nadia Magnenat-Thalmann, Danai Kaplanidi, Nikolaos Partarakis, Effie Karouzaki, Manos Zidianakis, Andreas Pattakos, Xenophon Zabulis
2022 Zenodo  
This paper is a practical description of the steps to model and animate virtual humans. The work aims to provide a methodology for creating digital humans (DHs) for cultural heritage (CH) applications.  ...  We present the process and the tasks involved in modelling, designing, and animating DHs, detailing the underlying technological background.  ...  Acknowledgements: This work has been conducted in the context of the Mingei project, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement  ... 
doi:10.5281/zenodo.6552355 fatcat:ysvdfhd255ehhk2psnbef6z6se

High-Accuracy Facial Depth Models derived from 3D Synthetic Data

Faisal Khan, Shubhajit Basak, Hossein Javidnia, Michael Schukat, Peter Corcoran
2020 2020 31st Irish Signals and Systems Conference (ISSC)  
In this paper, we explore how synthetically generated 3D face models can be used to construct a high-accuracy ground truth for depth.  ...  Using synthetic facial animations, a dynamic facial expression or facial action data can be rendered for a sequence of image frames together with ground truth depth and additional metadata such as head  ...  A method is proposed to generate facial depth information using 3D virtual human and iClone [1] character modelling software.  ... 
doi:10.1109/issc49989.2020.9180166 fatcat:c5kcgkptyjdozllh6o2ze5jzy4

Accurate 3D pose estimation from a single depth image

Mao Ye, Xianwang Wang, Ruigang Yang, Liu Ren, Marc Pollefeys
2011 2011 International Conference on Computer Vision  
The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud.  ...  This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement.  ...  It uses a data-driven approach to find a low-dimensional parametric description of human shape and pose, so that fitting the image data to the 3D model is simplified.  ... 
doi:10.1109/iccv.2011.6126310 dblp:conf/iccv/YeWYRP11 fatcat:cpjtjpw5fzhuhkscerkngj2sue

A Review of Body Measurement Using 3D Scanning

Kristijan Bartol, David Bojanic, Tomislav Petkovic, Tomislav Pribanic
2021 IEEE Access  
This paper gives a comprehensive survey of body measurement techniques, with an emphasis on 3D scanning technologies and automatic data processing pipelines.  ...  A multitude of 3D scanning methods and processing pipelines have been described in the literature, and the advent of deep learning-based processing methods has generated an increased interest in the topic  ...  [149] (2D) use a deep learning model to predict the shape and pose parameters of a SMPL model, Yan et al.  ... 
doi:10.1109/access.2021.3076595 fatcat:ditipdlro5gc3cm2zuoyvfw6hi

A Facial Rigging Survey [article]

Verónica Orvalho, Pedro Bastos, Frederic Parke, Bruno Oliveira, Xenxo Alvarez
2012 Eurographics State of the Art Reports  
Rigging is the process of setting up a group of controls to operate a 3D model, analogous to the strings of a puppet.  ...  It describes the main problems that appear when preparing a character for animation. This paper also gives an overview of the role and relationship between the rigger and the animator.  ...  We also thank José Serra and the Porto Interactive Center team for their support. Special thanks go to Jacqueline Fernandes for helping with the editing of the article.  ... 
doi:10.2312/conf/eg2012/stars/183-204 fatcat:ibo6hnsa3bddjgeaumxhkpgpc4
Showing results 1 — 15 out of 119 results