
A Survey on Deep Learning for Skeleton-Based Human Animation [article]

L. Mourot, L. Hoyet, F. Le Clerc, François Schnitzler
2021 arXiv   pre-print
First, we introduce motion data representations, the most common human motion datasets, and how basic deep models can be enhanced to foster learning of spatial and temporal patterns in motion data.  ...  In this article, we propose a comprehensive survey on the state-of-the-art approaches based on either deep learning or deep reinforcement learning in skeleton-based human character animation.  ...  Acknowledgements: This work was supported by the European Commission under the European Horizon 2020 Programme, grant number 951911 (AI4Media).  ... 
arXiv:2110.06901v1 fatcat:abppln4rbbeufiw4z6a3wnk7oy

DETECT: Deep Trajectory Clustering for Mobility-Behavior Analysis [article]

Mingxuan Yue, Yaguang Li, Haoze Yang, Ritesh Ahuja, Yao-Yi Chiang, Cyrus Shahabi
2020 arXiv   pre-print
To address these challenges, we propose an unsupervised neural approach for mobility behavior clustering, called the Deep Embedded TrajEctory ClusTering network (DETECT).  ...  In the second part, it learns a powerful representation of trajectories in the latent space of behaviors, thus enabling a clustering function (such as k-means) to be applied.  ...  Due to their condensed format, they improve learning efficiency and enable the neural layer to learn a better representation.  ... 
arXiv:2003.01351v1 fatcat:wirelxw65rg77psfm6izezb6bm
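The DETECT snippet above describes a two-stage idea: embed trajectories in a latent behavior space, then apply an off-the-shelf clustering function such as k-means. Below is a minimal sketch of that clustering stage, assuming latent codes have already been produced by some encoder; the random codes and dimensions are placeholders, not DETECT's actual architecture.

```python
# Cluster trajectory embeddings with k-means, as in the second stage the
# DETECT abstract describes. The latent codes are random stand-ins here;
# DETECT's own encoder network is not reproduced.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
latent_codes = rng.normal(size=(500, 16))   # 500 trajectories, each embedded in 16 dims

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
behavior_labels = kmeans.fit_predict(latent_codes)
print(behavior_labels[:10])                 # one mobility-behavior cluster id per trajectory
```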

Robot Motion Planning in Learned Latent Spaces

Brian Ichter, Marco Pavone
2019 IEEE Robotics and Automation Letters  
Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics.  ...  Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP,  ...  The GPUs used for this research were donated by the NVIDIA Corporation.  ... 
doi:10.1109/lra.2019.2901898 fatcat:oxt53m3nhrdgjgnmhsl7iwtdiu
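The snippet above names three learned components that mirror the primitives of sampling-based motion planning (SBMP): an autoencoder over states, a latent dynamics model, and a latent collision checker. The following PyTorch sketch only illustrates that decomposition; the dimensions and layer sizes are assumptions, not taken from the paper.

```python
# Three learned components as named in the abstract: a state autoencoder,
# a latent dynamics model, and a latent collision checker. Sizes are made up.
import torch
import torch.nn as nn

STATE_DIM, LATENT_DIM, ACTION_DIM = 12, 4, 3

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, STATE_DIM))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Predicts the next latent state from the current latent state and an action.
dynamics = nn.Sequential(nn.Linear(LATENT_DIM + ACTION_DIM, 64), nn.ReLU(),
                         nn.Linear(64, LATENT_DIM))
# Scores whether a latent state is collision-free (output close to 1.0 = free).
collision = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                          nn.Linear(64, 1), nn.Sigmoid())

x = torch.randn(8, STATE_DIM)
recon, z = AutoEncoder()(x)
z_next = dynamics(torch.cat([z, torch.randn(8, ACTION_DIM)], dim=-1))
p_free = collision(z_next)
print(recon.shape, z_next.shape, p_free.shape)
```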

Imitating by generating: deep generative models for imitation of interactive tasks [article]

Judith Bütepage, Ali Ghadirzadeh, Özge Öztimur Karadağ, Mårten Björkman, Danica Kragic
2019 arXiv   pre-print
Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner.  ...  To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner and (3) generation of robot joint  ...  Acknowledgments: This work was supported by the EU through the project socSMCs (H2020-FETPROACT-2014) and the Swedish Foundation for Strategic Research.  ... 
arXiv:1910.06031v1 fatcat:alzh4zso6fe45mr6loawtjo6hu

Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks

Judith Bütepage, Ali Ghadirzadeh, Özge Öztimur Karadağ, Mårten Björkman, Danica Kragic
2020 Frontiers in Robotics and AI  
To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot  ...  Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner.  ...  VAEs combine VI for probabilistic models with the representational power of deep neural networks.  ... 
doi:10.3389/frobt.2020.00047 pmid:33501215 pmcid:PMC7806025 fatcat:babjmehfkng2zbj3f5k7zipeqy
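The last fragment above notes that VAEs combine variational inference with the representational power of deep networks. The sketch below is a generic VAE illustrating that combination (reparameterized sampling plus an ELBO-style loss); the dimensions are arbitrary and this is not the paper's motion-embedding model.

```python
# Generic VAE: encoder producing a Gaussian over latent codes, reparameterized
# sampling, a decoder, and a negative-ELBO loss. Sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=20, z_dim=3):
        super().__init__()
        self.enc = nn.Linear(x_dim, 32)
        self.mu = nn.Linear(32, z_dim)
        self.logvar = nn.Linear(32, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(x, recon, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum")                   # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to unit Gaussian prior
    return rec + kl

model = VAE()
x = torch.randn(16, 20)
recon, mu, logvar = model(x)
print(neg_elbo(x, recon, mu, logvar).item())
```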

Robot Motion Planning in Learned Latent Spaces [article]

Brian Ichter, Marco Pavone
2018 arXiv   pre-print
Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics.  ...  Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP,  ...  The encoder network, h^enc_φ, is a deep-spatial autoencoder [7], which encourages learning important visual features by using a convolutional neural network followed by a spatial soft arg-max.  ... 
arXiv:1807.10366v2 fatcat:r26v3snl45edpfmy3wr6km3zhm
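The encoder description in the snippet above (a CNN followed by a spatial soft arg-max) can be made concrete with a small sketch: per feature channel, a softmax over pixel locations followed by the expected (x, y) coordinate. The convolutional backbone and shapes below are illustrative stand-ins, not the architecture from [7].

```python
# Spatial soft arg-max: per channel, softmax over pixel locations, then the
# expected (x, y) coordinate of that distribution, in normalized [-1, 1] space.
import torch
import torch.nn.functional as F

def spatial_soft_argmax(features):
    """features: (batch, channels, H, W) -> (batch, channels, 2) expected coords."""
    b, c, h, w = features.shape
    probs = F.softmax(features.view(b, c, h * w), dim=-1).view(b, c, h, w)
    ys = torch.linspace(-1.0, 1.0, h).view(1, 1, h, 1)
    xs = torch.linspace(-1.0, 1.0, w).view(1, 1, 1, w)
    exp_y = (probs * ys).sum(dim=(-2, -1))
    exp_x = (probs * xs).sum(dim=(-2, -1))
    return torch.stack([exp_x, exp_y], dim=-1)

conv = torch.nn.Conv2d(3, 16, kernel_size=5)                 # stand-in CNN backbone
keypoints = spatial_soft_argmax(conv(torch.randn(2, 3, 64, 64)))
print(keypoints.shape)                                       # (2, 16, 2): one 2-D point per channel
```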

NewtonianVAE: Proportional Control and Goal Identification from Pixels via Physical Latent Spaces [article]

Miguel Jaques, Michael Burke, Timothy Hospedales
2021 arXiv   pre-print
We introduce a latent dynamics learning framework that is uniquely designed to induce proportional controllability in the latent space, thus enabling the use of much simpler controllers than prior work.  ...  Learning low-dimensional latent state space dynamics models has been a powerful paradigm for enabling vision-based planning and learning for control.  ...  Imitation learning using dynamic movement primitives (DMPs): We also leverage the properties of our embedding to fit dynamic movement primitives in the structured latent space for trajectory tracking problems  ... 
arXiv:2006.01959v2 fatcat:3rzonwvthnctroi4ivgvisugky
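The claim about proportional controllability can be illustrated with a toy example: if actions perturb the latent state roughly in proportion to the latent error, a plain P-controller is enough to reach a latent goal. The dynamics, gain, and goal below are illustrative assumptions, not the NewtonianVAE model itself.

```python
# Toy illustration of proportional control in a latent space: the action is
# proportional to the latent error, and the (assumed) latent dynamics simply
# add the action to the state, so a P-controller drives the error to zero.
import numpy as np

def p_control(z_current, z_goal, gain=0.5):
    """Action proportional to the latent-space error."""
    return gain * (z_goal - z_current)

z, z_goal = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(20):
    z = z + p_control(z, z_goal)      # assumed latent dynamics: z' = z + action
print(np.round(z, 4))                 # converges toward the latent goal
```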

Supervised Learning and Reinforcement Learning of Feedback Models for Reactive Behaviors: Tactile Feedback Testbed [article]

Giovanni Sutanto, Katharina Rombach, Yevgen Chebotar, Zhe Su, Stefan Schaal, Gaurav S. Sukhatme, Franziska Meier
2020 arXiv   pre-print
Our pipeline starts by segmenting demonstrations of a complete task into motion primitives via a semi-automated segmentation algorithm.  ...  In the final phase, a sample-efficient reinforcement learning algorithm fine-tunes these feedback models for novel task settings through few real system interactions.  ...  Moreover, this research was also supported in part by the Max-Planck-Society through funding provided to Giovanni Sutanto, Katharina Rombach, Yevgen Chebotar, Zhe Su, and Stefan Schaal.  ... 
arXiv:2007.00450v1 fatcat:chzm5eduvbe2padut4kedgvfsy

Machine Learning for Data-Driven Movement Generation: a Review of the State of the Art [article]

Omid Alemi, Philippe Pasquier
2019 arXiv   pre-print
We cover topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods.  ...  In this survey, we review and analyze different aspects of building automatic movement generation systems using machine learning techniques and motion capture data.  ...  Acknowledgments: This work was funded by the Social Sciences and Humanities Research Council of Canada (SSHRC) through the Moving Stories Project, as well as the Natural Sciences and Engineering Research  ... 
arXiv:1903.08356v1 fatcat:wtqawbramvdx3kz6ffgp2sv3ja

Learning and Mining Player Motion Profiles in Physically Interactive Robogames

Ewerton Oliveira, Davide Orrù, Luca Morreale, Tiago Nascimento, Andrea Bonarini
2018 Future Internet  
Like commercial video games, the main aspect in a PIRG is to produce a sense of entertainment and pleasure that can be "consumed" by a large number of users.  ...  This is done by dealing both with the intrinsic uncertainty associated with the setting and with the agent's necessity to act in real time to support the game interaction.  ... 
doi:10.3390/fi10030022 fatcat:tfbrf4wajrhttjyumghaytf554

Intrinsically Motivated Exploration of Learned Goal Spaces

Adrien Laversanne-Finot, Alexandre Péré, Pierre-Yves Oudeyer
2021 Frontiers in Neurorobotics  
In this article we show that the goal space can be learned using deep representation learning algorithms, effectively reducing the burden of designing goal spaces.  ...  Our results pave the way to autonomous learning agents that are able to autonomously build a representation of the world and use this representation to explore the world efficiently.  ...  Experiments presented in this paper were carried out using the PlaFRIM experimental testbed, supported by Inria, CNRS (LABRI and IMB), Université de Bordeaux, Bordeaux INP and Conseil Régional d'Aquitaine  ... 
doi:10.3389/fnbot.2020.555271 pmid:33510630 pmcid:PMC7835425 fatcat:rxww7yrfcrfghifpzydvgk6dba

Neural probabilistic motor primitives for humanoid control [article]

Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, Nicolas Heess
2019 arXiv   pre-print
We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space.  ...  Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic.  ...  More similar to our work is the setting examined by , in which full-body humanoid movements were studied.  ... 
arXiv:1811.11711v2 fatcat:pulp4gc5vrdvpibthme367cufm

Information Encoding by Deep Neural Networks: What Can We Learn?

Louis ten Bosch, Lou Boves
2018 Interspeech 2018  
The recent advent of deep learning techniques in speech technology and in particular in automatic speech recognition has yielded substantial performance improvements.  ...  Two experiments investigate representations formed by auto-encoders. A third experiment investigates representations on convolutional layers that treat speech spectrograms as if they were images.  ...  Introduction: The speech technology field is being revolutionized by the application of deep learning techniques, in particular Deep Neural Nets (DNNs).  ... 
doi:10.21437/interspeech.2018-1896 dblp:conf/interspeech/BoschB18 fatcat:riqbuj5qefad5h6y54xg7km5ge
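The third experiment mentioned in the snippet above treats speech spectrograms as if they were images for convolutional layers. The sketch below only shows that input pipeline, with a random signal standing in for speech and placeholder layer sizes.

```python
# Treat a speech spectrogram as an image and feed it to a convolutional layer.
# The random signal is a stand-in for one second of speech.
import numpy as np
import torch
from scipy.signal import spectrogram

sr = 16000
signal = np.random.randn(sr)                                  # stand-in for 1 s of audio
freqs, times, spec = spectrogram(signal, fs=sr, nperseg=256)  # power spectrogram
spec_img = torch.tensor(np.log1p(spec), dtype=torch.float32)[None, None]  # (batch, channel, freq, time)

conv = torch.nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)
features = conv(spec_img)
print(spec_img.shape, features.shape)
```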

Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration [article]

Alexandre Péré, Sébastien Forestier, Olivier Sigaud, Pierre-Yves Oudeyer
2018 arXiv   pre-print
In this work, we propose to use deep representation learning algorithms to learn an adequate goal space.  ...  goal exploration happens in a second stage by sampling goals in this latent space.  ...  ACKNOWLEDGEMENT: This work was supported by Inria and by the European Commission, within the DREAM project, and has received funding from the European Union's Horizon 2020 research and innovation program  ... 
arXiv:1803.00781v3 fatcat:wlh5evfi5rdt3omwrwv6kpblda
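The two-stage scheme in the snippet above (learn a latent representation of observed outcomes, then sample exploration goals directly in that latent space) can be sketched as follows; the autoencoder, data, and goal-sampling distribution are placeholders rather than the paper's actual setup.

```python
# Stage 1: learn a latent representation of observed outcomes with a small
# autoencoder. Stage 2: sample goals directly in the learned latent space.
import torch
import torch.nn as nn

obs_dim, z_dim = 10, 2
enc = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, obs_dim))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

outcomes = torch.randn(256, obs_dim)          # stand-in for outcomes observed so far
for _ in range(200):                          # stage 1: representation learning
    recon = dec(enc(outcomes))
    loss = ((recon - outcomes) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

goals_z = torch.randn(5, z_dim)               # stage 2: sample goals in the latent space
print(dec(goals_z))                           # decoded goals the agent could try to reach
```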

A Variational Time Series Feature Extractor for Action Prediction [article]

Maxime Chaveroche, Adrien Malaisé, Francis Colas, François Charpillet, Serena Ivaldi
2018 arXiv   pre-print
Our method is based on variational autoencoders.  ...  We propose a Variational Time Series Feature Extractor (VTSFE), inspired by the VAE-DMP model of Chen et al., to be used for action recognition and prediction.  ...  [1] , which proposed a method called Dynamic Movement Primitives in Latent Space of Time-Dependent Variational Autoencoders (VAE-DMP).  ... 
arXiv:1807.02350v2 fatcat:7jfwt3q35bfg7d4xcqri2tivjq
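For readers unfamiliar with the DMP half of the VAE-DMP model cited above, the transformation system of a one-dimensional dynamic movement primitive looks roughly like the sketch below; the gains and the forcing term are illustrative choices, not values taken from the paper.

```python
# One-dimensional DMP transformation system: a critically damped spring-damper
# pulled toward the goal, perturbed by a decaying forcing term.
import numpy as np

def rollout_dmp(y0, goal, forcing, tau=1.0, alpha=25.0, beta=6.25, dt=0.01):
    """Integrate tau*ddy = alpha*(beta*(goal - y) - dy) + forcing(t) for 1 second."""
    y, dy = y0, 0.0
    traj = []
    for step in range(int(1.0 / dt)):
        t = step * dt
        ddy = (alpha * (beta * (goal - y) - dy) + forcing(t)) / tau
        dy += ddy * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)

traj = rollout_dmp(y0=0.0, goal=1.0, forcing=lambda t: 10.0 * np.exp(-5.0 * t))
print(traj[-1])  # ends close to the goal once the forcing term has decayed
```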
Showing results 1 — 15 out of 285 results.