3,068 Hits in 8.1 sec

Construction of virtual video scene and its visualization during sports training

Rui Yuan, Zhendong Zhang, Pengwei Song, Jia Zhang, Long Qin
2020 IEEE Access  
In video games, motion capture technology can be used to capture human motion data to build athletes, martial artists, and other game character models [3] .  ...  style of one game to another game.  ... 
doi:10.1109/access.2020.3007897 fatcat:mjie3elecnf67pimlzw5qnnop4

A Survey on Deep Learning for Skeleton-Based Human Animation [article]

L. Mourot, L. Hoyet, F. Le Clerc, François Schnitzler
2021 arXiv   pre-print
Human character animation is often critical in entertainment content production, including video games, virtual reality or fiction films.  ...  In this article, we propose a comprehensive survey on the state-of-the-art approaches based on either deep learning or deep reinforcement learning in skeleton-based human character animation.  ...  In human animation, style transfer aims at transferring the style from one motion sequence to another whose content is retained, called hereafter style and content motion sequences, respectively.  ... 
arXiv:2110.06901v1 fatcat:abppln4rbbeufiw4z6a3wnk7oy

Deep Learning for Procedural Content Generation

Jialin Liu, Sam Snodgrass, Ahmed Khalifa, Sebastian Risi, Georgios N. Yannakakis, Julian Togelius
2020 Zenodo  
Procedural content generation in video games has a long history.  ...  This article surveys the various deep learning methods that have been applied to generate game content directly or indirectly, discusses deep learning methods that could be used for content generation  ...  portraits and anime-style faces, can be used to generate comic or video game characters and the one for landscapes can be used to generate background images for games.  ... 
doi:10.5281/zenodo.4415242 fatcat:6q4swrsefvhhde2v6mepsoagg4

Survey on Style in 3D Human Body Motion: Taxonomy, Data, Recognition and its Applications

Sarah Ribet, Hazem Wannous, Jean-Philippe Vandeborre
2019 IEEE Transactions on Affective Computing  
This paper focuses on the study of style in human body motion from 3D human body skeletal data.  ...  The meaning of the word style depends on its context. While actions have already been quite studied for a while, style in human body motion is a growing topic of interest.  ...  MOTION STYLE GENERATION In animation and video games, large datasets of actions and styles are required. Capturing all the possible combinations is a burden for actors, tedious and time-consuming.  ... 
doi:10.1109/taffc.2019.2906167 fatcat:qsq5wnke4zd6nkevp6knfmhmnq

Action2video: Generating Videos of Human 3D Actions [article]

Chuan Guo, Xinxin Zuo, Sen Wang, Xinshuang Liu, Shihao Zou, Minglun Gong, Li Cheng
2021 arXiv   pre-print
Moreover, given an additional input image of a clothed human character, an entire pipeline is proposed to extract his/her 3D detailed shape, and to render in videos the plausible motions from different  ...  We aim to tackle the interesting yet challenging problem of generating videos of diverse and natural human motions from prescribed action categories.  ...  action videos by coupling 3d game engines and probabilistic graphical models. Weng CY, Curless B, Kemelmacher-Shlizerman I (2019) Photo wake-up: 3D character animation from  ... 
arXiv:2111.06925v2 fatcat:obdpetdfqbdetonei73ndq6ckq

3D Human Motion Synthesis Based on Convolutional Neural Network

Dongsheng Zhou, Xinzhu Feng, Pengfei Yi, Xin Yang, Qiang Zhang, Xiaopeng Wei, Deyun Yang
2019 IEEE Access  
The synthesis of human motion is the virtual reproduction of real-world character actions; the authenticity of the action and its natural smoothness are especially important to the user's experience  ...  The data used in this paper are all 3D human motion data in Biovision Hierarchy (BVH) format, which can be captured by optical, inertial, mechanical or other video-based motion capture devices.  ...  Deep learning has made great achievements in static images, and has gradually expanded to time-series human behavior recognition in dynamic video [14] , [15] .  ... 
doi:10.1109/access.2019.2917609 fatcat:f6f76k265fbvffxdohy56niwdq

Automated Game Design Learning [article]

Joseph C Osborn, Adam Summerville, Michael Mateas
2017 arXiv   pre-print
While general game playing is an active field of research, the learning of game design has tended to be either a secondary goal of such research or it has been solely the domain of humans.  ...  We propose a field of research, Automated Game Design Learning (AGDL), with the direct purpose of learning game designs directly through interaction with games in the mode that most people experience games  ...  Apart from learning heuristics for a single game, transfer learning is a key area where the portable design representations learned by AGDL could be of use to GGP agents.  ... 
arXiv:1707.03333v1 fatcat:ruzmhqo7u5dcxouqexoxj5i7nu

Spatial Motion Doodles: Sketching Animation in VR Using Hand Gestures and Laban Motion Analysis

Maxime Garcia, Remi Ronfard, Marie-Paule Cani
2019 Motion, Interaction and Games on - MIG '19  
This information is transferred to an articulated character to generate an expressive 3D animation sequence.  ...  We parse the input 6D trajectories (position and orientation over time) -called spatial motion doodles -into sequences of actions and convert them into detailed character animations using a dataset of  ...  ACKNOWLEDGMENTS We would like to thank Laurence BOISSIEUX for the animations she provided for testing, for the 3d model of the garden environment and for her advice and involvement in this project.  ... 
doi:10.1145/3359566.3360061 dblp:conf/mig/GarciaRC19 fatcat:baoxi3zobvdptiildpirrya3yq

Applied Machine Learning for Games: A Graduate School Course [article]

Yilei Zeng, Aayush Shah, Jameson Thai, Michael Zyda
2021 arXiv   pre-print
The game industry is moving into an era where old-style game engines are being replaced by re-engineered systems with embedded machine learning technologies for the operation, analysis and understanding  ...  Student projects cover use-cases such as training AI-bots in gaming benchmark environments and competitions, understanding human decision patterns in gaming, and creating intelligent non-playable characters  ...  The constructed standalone model could be used to apply a reference style to any real-world input video.  ... 
arXiv:2012.01148v2 fatcat:f44ln32jnbfhrearv234ylteru

Turning to the masters

Christoph Bregler, Lorie Loeb, Erika Chuang, Hrishi Deshpande
2002 ACM Transactions on Graphics  
In this paper, we present a technique we call "cartoon capture and retargeting" which we use to track the motion from traditionally animated cartoons and retarget it onto 3-D models, 2-D drawings, and  ...  We would like to thank Craig Slagel, Steve Taylor, and Steve Anderson from Electronic Arts for providing and helping us with the 3D Otter model in Maya.  ...  We would like to thank Catherine Margerin, Greg LaSalle, and Jennifer Balducci from Rearden Steel for their invaluable help with the motion capture and retargeting tasks.  ... 
doi:10.1145/566570.566595 fatcat:acmscli4jfgavjxebgdz5zbe6i

Turning to the masters: motion capturing cartoons

Christoph Bregler, Lorie Loeb, Erika Chuang, Hrishi Deshpande
2002 ACM Transactions on Graphics  
In this paper, we present a technique we call "cartoon capture and retargeting" which we use to track the motion from traditionally animated cartoons and retarget it onto 3-D models, 2-D drawings, and  ...  We would like to thank Craig Slagel, Steve Taylor, and Steve Anderson from Electronic Arts for providing and helping us with the 3D Otter model in Maya.  ...  We would like to thank Catherine Margerin, Greg LaSalle, and Jennifer Balducci from Rearden Steel for their invaluable help with the motion capture and retargeting tasks.  ... 
doi:10.1145/566654.566595 fatcat:4rqqvozcbrgwhjivqoukp3ujpu

HATSUKI : An anime character like robot figure platform with anime-style expressions and imitation learning based action generation [article]

Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu, Tetsuya Ogata
2020 arXiv   pre-print
Hatsuki's novelty lies in aesthetic design, 2D facial expressions, and anime-style behaviors that allow it to deliver rich interaction experiences resembling anime characters.  ...  Results show our approach was successfully able to generate the actions through self-organized contexts, which shows the potential for generalizing our approach to further actions under different contexts  ...  (comics), and video games [1] , [2] .  ... 
arXiv:2003.14121v2 fatcat:ayvh35hgwrhdhavennkgyhkqqa

2021 Index IEEE Transactions on Multimedia Vol. 23

2021 IEEE Transactions on Multimedia  
Departments and other items may also be covered if they have been judged to have archival value. The Author Index contains the primary entry for each item, listed under the first author's name.  ...  -that appeared in this periodical during 2021, and items from previous years that were commented upon or corrected in 2021.  ...  Li, A New Approach for Character Recognition of Multi-Style Vehicle License Video Sequences in HEVC.  ... 
doi:10.1109/tmm.2022.3141947 fatcat:lil2nf3vd5ehbfgtslulu7y3lq

Hierarchical Style-based Networks for Motion Synthesis [article]

Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang, Trevor Darrell
2020 arXiv   pre-print
Generating diverse and natural human motion is one of the long-standing goals for creating intelligent characters in the animated world.  ...  Our proposed method learns to model the motion of human by decomposing a long-range generation task in a hierarchical manner.  ...  for player customization of action skills in video games [10, 41] .  ... 
arXiv:2008.10162v1 fatcat:yd4zkg64vzeqbf4xop7nzbfxv4

Procedural Generation of Videos to Train Deep Action Recognition Networks

Cesar Roberto de Souza, Adrien Gaidon, Yohann Cabon, Antonio Manuel Lopez
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections.  ...  We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for "Procedural Human Action Videos".  ...  To the best of our knowledge, ours is the first work to investigate virtual worlds and game engines to generate synthetic training videos for action recognition.  ... 
doi:10.1109/cvpr.2017.278 dblp:conf/cvpr/SouzaGCP17 fatcat:w3frbltuh5ecbn67gsrhw33vji
Showing results 1 — 15 out of 3,068 results