
Deep Predictive Policy Training using Reinforcement Learning [article]

Ali Ghadirzadeh, Atsuto Maki, Danica Kragic, Mårten Björkman
2017 arXiv   pre-print
We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations.  ...  Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes.  ...  The encoder of the variational autoencoder transforms input trajectories into a normal distribution in a 5D space using a neural structure with three layers of 1000, 500 and 250 hidden units for both the  ... 
arXiv:1703.00727v1 fatcat:iuy4lllozvb5zm6hchydrshxmm
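The snippet above describes a VAE encoder with 1000, 500, and 250 hidden units mapping input trajectories to a normal distribution in a 5D latent space. A minimal numpy sketch of such an encoder (untrained random weights stand in for learned parameters; the flattened trajectory dimension of 50 timesteps × 7 joints is an assumption, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights stand in for trained parameters.
    return rng.normal(0, 0.01, (n_in, n_out)), np.zeros(n_out)

# Assumed trajectory size: 50 timesteps x 7 joints, flattened.
n_in, latent_dim = 50 * 7, 5
W1, b1 = layer(n_in, 1000)
W2, b2 = layer(1000, 500)
W3, b3 = layer(500, 250)
W_mu, b_mu = layer(250, latent_dim)
W_lv, b_lv = layer(250, latent_dim)

def encode(x):
    """Map a flattened trajectory to the mean and log-variance of a
    5D Gaussian, mirroring the 1000-500-250 encoder described above."""
    h = np.tanh(x @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    h = np.tanh(h @ W3 + b3)
    return h @ W_mu + b_mu, h @ W_lv + b_lv

def reparameterize(mu, log_var):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

traj = rng.normal(size=n_in)
mu, log_var = encode(traj)
z = reparameterize(mu, log_var)
print(z.shape)  # (5,)
```
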

Controlling Assistive Robots with Learned Latent Actions [article]

Dylan P. Losey, Krishnan Srinivasan, Ajay Mandlekar, Animesh Garg, Dorsa Sadigh
2019 arXiv   pre-print
Unfortunately, the very dexterity that makes these arms useful also makes them challenging to teleoperate: the robot has more degrees-of-freedom than the human can directly coordinate with a handheld joystick  ...  Finally, we conduct two user studies on a robotic arm to compare our latent action approach to both state-of-the-art shared autonomy baselines and a teleoperation strategy currently used by assistive arms  ...  Human teleoperating a robot arm using latent actions.  ... 
arXiv:1909.09674v3 fatcat:jbywo5kwhna2dj4cts3ezq2jfy
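The core idea of latent actions in this entry is that a low-DoF interface input is decoded, conditioned on the robot state, into a high-DoF arm command. A toy sketch under assumed dimensions (2-DoF joystick, 7-DoF arm) with a random linear decoder standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

STATE_DIM, LATENT_DIM, ACTION_DIM = 7, 2, 7  # 7-DoF arm, 2-DoF joystick

# Random weights stand in for a trained conditional decoder.
W = rng.normal(0, 0.1, (STATE_DIM + LATENT_DIM, ACTION_DIM))

def decode(state, z):
    """Map a low-dimensional joystick input z, conditioned on the current
    robot state, to a high-dimensional joint-velocity command."""
    return np.tanh(np.concatenate([state, z]) @ W)

state = np.zeros(STATE_DIM)   # current joint positions
z = np.array([1.0, 0.0])      # user pushes the joystick in one direction
action = decode(state, z)
print(action.shape)  # (7,)
```

The conditioning on state is what lets the same joystick direction mean different things in different task contexts.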

Composable Instructions and Prospection Guided Visuomotor Control for Robotic Manipulation

Quanquan Shao, Jie Hu, Weiming Wang, Yi Fang, Mingshuo Han, Jin Qi, Jin Ma
2019 International Journal of Computational Intelligence Systems  
ABSTRACT Deep neural network-based end-to-end visuomotor control for robotic manipulation has recently become a hot topic in robotics.  ...  One-hot vectors are often used for multi-task settings in this framework. However, one-hot vectors are inflexible for describing multiple tasks and transmitting human intentions.  ...  ACKNOWLEDGMENT This research is mainly supported by the Special Program for Innovation Method of the Ministry of Science and Technology of China (2018IM020100), National Natural Science Foundation of China (51775332  ... 
doi:10.2991/ijcis.d.191017.001 fatcat:pwv2yzqgpndhjjn4i5t6klzq74
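The one-hot task encoding this abstract criticizes can be sketched in a few lines; the task names and the 128-dimensional visual feature vector are illustrative assumptions:

```python
import numpy as np

TASKS = ["pick", "place", "push", "stack"]

def one_hot(task):
    """Classic one-hot task encoding; inflexible because each new task
    needs a new dimension and the vectors carry no task similarity."""
    v = np.zeros(len(TASKS))
    v[TASKS.index(task)] = 1.0
    return v

image_features = np.zeros(128)  # placeholder visual features
policy_input = np.concatenate([image_features, one_hot("push")])
print(policy_input.shape)  # (132,)
```
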

Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks [article]

Hee-Seung Moon, Jiwon Seo
2019 arXiv   pre-print
In this paper, we present a method for predicting the trajectory of a human who follows a haptic robotic guide without using sight, which is valuable for assistive robots that aid the visually impaired  ...  We apply a deep learning method based on recurrent neural networks using multimodal data: (1) human trajectory, (2) movement of the robotic guide, (3) haptic input data measured from the physical interaction  ...  In addition, a depth camera captures the human's pose, and latent vectors are extracted from the depth image using a pre-trained variational autoencoder (VAE).  ... 
arXiv:1903.01027v1 fatcat:d3vnoihad5aw3prvlewrvm4l34

AutoIncSFA and vision-based developmental learning for humanoid robots

Varun Raj Kompella, Leo Pape, Jonathan Masci, Mikhail Frank, Jurgen Schmidhuber
2011 2011 11th IEEE-RAS International Conference on Humanoid Robots  
We explain the advantages of AutoIncSFA over previous related methods, and show that the compact codes greatly facilitate the task of a reinforcement learner driving the humanoid to actively explore its  ...  Our new method AutoIncSFA learns to compactly represent such complex sensory input sequences by very few meaningful features corresponding to high-level spatio-temporal abstractions, such as: a person  ...  ACKNOWLEDGMENT The experimental paradigm used for the Human Interaction Experiment was first developed by the first author under the supervision of Dr.  ... 
doi:10.1109/humanoids.2011.6100865 dblp:conf/humanoids/KompellaPMFS11 fatcat:5l4v6qoc6rbojgooxn7bikagua
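The slow feature analysis underlying AutoIncSFA can be illustrated with its batch form: whiten the observations, then take the direction whose temporal differences have the smallest variance. A toy numpy sketch (the incremental, autoencoder-stacked version in the paper is more involved; the signal below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy input: a slowly varying latent mixed into fast noisy observations.
t = np.linspace(0, 4 * np.pi, 500)
slow = np.sin(t)
X = np.column_stack([slow + 0.1 * rng.normal(size=t.size),
                     rng.normal(size=t.size),
                     rng.normal(size=t.size)])

# 1. Whiten the observations.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
Z = Xc @ (evecs / np.sqrt(evals))

# 2. The slowest feature is the direction minimizing the variance of
#    temporal differences, i.e. the smallest eigenvector of cov(dZ).
dZ = np.diff(Z, axis=0)
_, d_evecs = np.linalg.eigh(dZ.T @ dZ / len(dZ))
slow_feature = Z @ d_evecs[:, 0]

# The extracted feature should correlate strongly with the slow latent.
corr = abs(np.corrcoef(slow_feature, slow)[0, 1])
print(round(corr, 2))
```
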

Safe Visual Navigation via Deep Learning and Novelty Detection

Charles Richter, Nicholas Roy
2017 Robotics: Science and Systems XIII  
Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior  ...  We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the  ...  ACKNOWLEDGMENTS This work was supported by ARO under the Robotics Collaborative Technology Alliance and their support is gratefully acknowledged.  ... 
doi:10.15607/rss.2017.xiii.064 dblp:conf/rss/RichterR17 fatcat:amapfytrxndurpbogjhlpri26u
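The novelty-detection pattern in this entry, i.e. trust the learned policy only on familiar inputs and revert to a safe prior otherwise, can be sketched with reconstruction error and a threshold. Here a PCA projection plays the role of the trained autoencoder, and all data and dimensions are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "autoencoder": reconstruction onto the top principal components
# of familiar data; a trained network plays this role in the paper.
train = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 32))
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:10]  # learned low-dimensional structure of familiar inputs

def reconstruction_error(x):
    coded = (x - mean) @ basis.T  # "encode"
    recon = coded @ basis + mean  # "decode"
    return np.linalg.norm(x - recon)

THRESHOLD = 1e-6 + max(reconstruction_error(x) for x in train)

def act(observation):
    """Trust the learned policy only when the input looks familiar;
    otherwise revert to a safe prior behavior."""
    if reconstruction_error(observation) > THRESHOLD:
        return "safe_prior"       # novel input: slow, conservative policy
    return "learned_policy"       # familiar input: fast learned policy

familiar = train[0]
novel = rng.normal(size=32) * 10  # far from the training subspace
print(act(familiar), act(novel))  # learned_policy safe_prior
```
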

Learning Latent Actions to Control Assistive Robots [article]

Dylan P. Losey, Hong Jun Jeon, Mengxi Li, Krishnan Srinivasan, Ajay Mandlekar, Animesh Garg, Jeannette Bohg, Dorsa Sadigh
2021 arXiv   pre-print
The robot is helping you eat dinner, and currently you want to cut a piece of tofu.  ...  These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick.  ...  Let us say you are using a one-DoF joystick to guide the robot arm along a line.  ... 
arXiv:2107.02907v2 fatcat:nw5s6roadbfdtceqwgvnhm5i2y

Behavior Self-Organization Supports Task Inference for Continual Robot Learning [article]

Muhammad Burhan Hafez, Stefan Wermter
2021 arXiv   pre-print
On the other hand, as humans, we have the ability to learn a growing set of tasks over our lifetime.  ...  Recent advances in robot learning have enabled robots to become increasingly better at mastering a predefined set of tasks.  ...  However, we use a Variational Autoencoder (VAE) [35] to learn a low-dimensional abstract representation of each frame.  ... 
arXiv:2107.04533v1 fatcat:u4fet5lhk5eizmhser2zvuol5m

Guest Editorial: Introduction to the Special Issue on Long-Term Human Motion Prediction

Luigi Palmieri, Andrey Rudenko, Jim Mainprice, Marc Hanheide, Alexandre Alahi, Achim Lilienthal, Kai O. Arras
2021 IEEE Robotics and Automation Letters  
Marc Hanheide specifically focuses on aspects of long-term robotic behaviour and human-robot interaction and adaptation. Alexandre Alahi is currently an Assistant Professor at EPFL.  ...  In all his work, he researches autonomous robots, human-robot interaction, interaction-enabling technologies, and system architectures.  ...  [item 10) in the Appendix] provide a self-contained tutorial on a conditional variational autoencoder. G. Habibi and J. P.  ... 
doi:10.1109/lra.2021.3077964 fatcat:2gzbhc3x7rgsloieovogcbo6gm

Sample-Efficient Training of Robotic Guide Using Human Path Prediction Network [article]

Hee-Seung Moon, Jiwon Seo
2020 arXiv   pre-print
We applied the proposed method to the training of a robotic guide for visually impaired people, which was designed to collect multimodal human response data and reflect such data when selecting the robot's  ...  This paper proposes a human path prediction network (HPPN) and an evolution strategy-based robot training method using virtual human movements generated by the HPPN, which compensates for this sample inefficiency  ...  Therefore, instead of using the depth data directly to train the HPPN, we extract low-dimensional latent feature vectors from the depth images using a variational autoencoder (VAE) [46] .  ... 
arXiv:2008.05054v1 fatcat:skzlgyjo7za2pakbqhs7zmqqnm

Action Anticipation by Predicting Future Dynamic Images [chapter]

Cristian Rodriguez, Basura Fernando, Hongdong Li
2019 Lecture Notes in Computer Science  
We represent human motion using Dynamic Images [1] and make use of tailored loss functions to encourage a generative model to produce accurate future motion predictions.  ...  Human action-anticipation methods predict the future action by observing only a portion of an action in progress.  ...  Acknowledgments We thank NVIDIA Corporation for the donation of the GPUs used in this work.  ... 
doi:10.1007/978-3-030-11015-4_10 fatcat:kupdi2jxbbe5jmry24fcg6co54

Action Anticipation By Predicting Future Dynamic Images [article]

Cristian Rodriguez, Basura Fernando, Hongdong Li
2018 arXiv   pre-print
We represent human motion using Dynamic Images and make use of tailored loss functions to encourage a generative model to produce accurate future motion predictions.  ...  Human action-anticipation methods predict the future action by observing only a portion of an action in progress.  ...  Furthermore, it may be possible to take advantage of different kinds of neural machines to implement the model in Equation 4, such as autoencoders [24], variational conditional autoencoders [25, 26]  ... 
arXiv:1808.00141v1 fatcat:4rzxmxnoyfdl3lyinmm2u7ngge

Monophonic Music Generation with a Given Emotion Using Conditional Variational Autoencoder

Jacek Grekow, Teodora Dimitrova-Grekow
2021 IEEE Access  
A conditional variational autoencoder using a recurrent neural network for sequence processing was used as the generative model.  ...  By implementing emotional intelligence in machines, robots are expected not only to recognize and track emotions when interacting with humans, but also to respond and behave appropriately.  ...  Thanks to such a system, in any human-machine interaction, a robot would be able to create a varied set of melodies that suit and correspond well to the current human mood.  ... 
doi:10.1109/access.2021.3113829 fatcat:nt6eczbkira33fnfkfjyne6n4u
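The conditioning mechanism of a CVAE like the one in this entry can be sketched at the decoder: a latent sample is concatenated with a one-hot emotion label before decoding, so one latent space yields emotion-conditioned output. All dimensions, emotion labels, and the 16-step/12-pitch melody format below are illustrative assumptions, and random weights stand in for a trained decoder:

```python
import numpy as np

rng = np.random.default_rng(4)

EMOTIONS = ["happy", "sad", "angry", "relaxed"]
LATENT_DIM, SEQ_LEN, N_PITCHES = 8, 16, 12

# Random weights stand in for a trained CVAE decoder.
W = rng.normal(0, 0.1, (LATENT_DIM + len(EMOTIONS), SEQ_LEN * N_PITCHES))

def generate(emotion):
    """Sample z ~ N(0, I) and decode it together with a one-hot emotion
    label, so the generative process is conditioned on the target emotion."""
    z = rng.normal(size=LATENT_DIM)
    cond = np.zeros(len(EMOTIONS))
    cond[EMOTIONS.index(emotion)] = 1.0
    logits = np.concatenate([z, cond]) @ W
    # One pitch class per timestep: argmax over the 12 pitch classes.
    return logits.reshape(SEQ_LEN, N_PITCHES).argmax(axis=1)

melody = generate("happy")
print(melody.shape)  # (16,)
```
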

Emotional Dialogue Generation Based on Conditional Variational Autoencoder and Dual Emotion Framework

Zhenrong Deng, Hongquan Lin, Wenming Huang, Rushi Lan, Xiaonan Luo, Yaguang Lin
2020 Wireless Communications and Mobile Computing  
In this paper, we propose a model based on conditional variational autoencoder and dual emotion framework (CVAE-DE) to generate emotional responses.  ...  In our model, latent variables of the conditional variational autoencoder are adopted to promote the diversity of conversation.  ...  AB20238013, ZY20198016, 2019GXNSFFA245014), and Guangxi Key Laboratory of Image and Graphic Intelligent Processing Project (No. GIIP2003).  ... 
doi:10.1155/2020/8881616 fatcat:ab7neiuoiravlfi2jztm576puu

Data-driven emotional body language generation for social robotics [article]

Mina Marmpena, Fernando Garcia, Angelica Lim, Nikolas Hemion, Thomas Wennekers
2022 arXiv   pre-print
The framework uses the Conditional Variational Autoencoder model and a sampling approach based on the geometric properties of the model's latent space to condition the generative process on targeted levels  ...  In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration, since humans attribute, and perhaps subconsciously  ...  Conditional Variational Autoencoders The Variational Autoencoder (VAE) framework [82, 83, 84] can be used to learn a posterior probability distribution that represents the unknown underlying process  ... 
arXiv:2205.00763v1 fatcat:m3ajydvkrrafli33hdvdyg5tpy
Showing results 1 — 15 out of 909 results