8,620 Hits in 3.6 sec

SKILL-IL: Disentangling Skill and Knowledge in Multitask Imitation Learning [article]

Bian Xihan, Oscar Mendez, Simon Hadfield
2022 arXiv   pre-print
In this work, we introduce a new perspective for learning transferable content in multi-task imitation learning. Humans are able to transfer skills and knowledge.  ...  These contain either the knowledge of the environmental context for the task or the generalizable skill needed to solve the task.  ...  Disentangled Representations The state of the art for learning disentangled representations is dominated by VAE approaches.  ... 
arXiv:2205.03130v2 fatcat:k4d3ooxt6vb25hufnjgu4rsqru
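The snippet above points to VAE-based methods as the dominant approach to disentangled representation learning. As a rough illustration of that family only (a minimal sketch of a generic beta-VAE objective, not the SKILL-IL model; layer sizes and the `beta` value are assumptions):

```python
# Hedged sketch: a generic beta-VAE objective, the family of methods the
# snippet refers to, NOT the SKILL-IL architecture itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=8, beta=4.0):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(z)
        # Reconstruction term plus beta-weighted KL to the unit Gaussian prior;
        # a larger beta pressures individual latent dimensions toward disentanglement.
        recon_loss = F.mse_loss(recon, x, reduction="sum") / x.shape[0]
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.shape[0]
        return recon_loss + self.beta * kl
```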

Weakly Supervised Disentangled Representation for Goal-conditioned Reinforcement Learning

Zhifeng Qian, Mingyu You, Hongjun Zhou, Bin He
2022 IEEE Robotics and Automation Letters  
visual Reinforcement Learning.  ...  In the paper, we propose a skill learning framework DR-GRL that aims to improve the sample efficiency and policy generalization by combining the Disentangled Representation learning and Goal-conditioned  ...  METHOD DR-GRL is a skill learning framework for combining disentangled representation learning with the goal-condition RL.  ... 
doi:10.1109/lra.2022.3141148 fatcat:sprl5ju4x5ftbmq76cp3c3cai4
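The DR-GRL snippet combines a disentangled representation with goal-conditioned RL. A common way such a combination is wired up (an assumption here, not necessarily the paper's exact formulation) is to measure goal-reaching progress as a distance in the learned latent space:

```python
# Hedged sketch: goal-conditioned reward computed in a learned latent space.
# `encoder` stands in for any pretrained (disentangled) state encoder;
# the reward actually used by DR-GRL may differ.
import numpy as np

def latent_goal_reward(encoder, obs, goal_obs, threshold=0.5):
    z_obs = encoder(obs)        # latent for the current observation
    z_goal = encoder(goal_obs)  # latent for the desired goal image
    dist = np.linalg.norm(z_obs - z_goal)
    # Dense shaping term plus a sparse success bonus when close enough.
    return -dist + (1.0 if dist < threshold else 0.0)
```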

Adversarial Skill Networks: Unsupervised Robot Skill Learning from Video [article]

Oier Mees, Markus Merklinger, Gabriel Kalweit, Wolfram Burgard
2020 arXiv   pre-print
Key challenges for the deployment of reinforcement learning (RL) agents in the real world are the discovery, representation and reuse of skills in the absence of a reward function.  ...  The adversarial skill-transfer loss enhances re-usability of learned skill embeddings over multiple task domains.  ...  Compared to these approaches, we take multiple tasks into account to learn a skill embedding before training a reinforcement learning agent with a self-supervised vision-based training signal.  ... 
arXiv:1910.09430v2 fatcat:755seey2lfebznbluzko6qrzdi
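The entry above mentions an adversarial skill-transfer loss meant to make skill embeddings reusable across task domains. One standard way to write such a term (a hedged sketch, not confirmed to be the exact ASN loss; `domain_head` is a hypothetical classifier over task domains) is to push the embedding toward maximizing a domain discriminator's entropy:

```python
# Hedged sketch of a domain-adversarial term on skill embeddings.
# `embedding` has shape (batch, dim); `domain_head` maps it to domain logits.
import torch
import torch.nn.functional as F

def adversarial_transfer_loss(domain_head, embedding):
    logits = domain_head(embedding)
    probs = F.softmax(logits, dim=-1)
    # Maximizing the entropy of the domain prediction (i.e. minimizing its
    # negative) encourages embeddings that carry no domain-specific information.
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    return -entropy
```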

Abstract Reasoning with Distracting Features [article]

Kecheng Zheng, Zheng-jun Zha, Wei Wei
2019 arXiv   pre-print
for predictions.  ...  Inspired by this fact, we propose feature robust abstract reasoning (FRAR) model, which consists of a reinforcement learning based teacher network to determine the sequence of training and a student network  ...  Right: Form the teacher model as a reinforcement learning problem.  ... 
arXiv:1912.00569v1 fatcat:xpcruq56sbanha2k2z4o35fbpi

Scaling simulation-to-real transfer by learning composable robot skills [article]

Ryan Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman
2018 arXiv   pre-print
In particular, we first use simulation to jointly learn a policy for a set of low-level skills, and a "skill embedding" parameterization which can be used to compose them.  ...  Later, we learn high-level policies which actuate the low-level policies via this skill embedding parameterization.  ...  Acknowledgements The authors would like to thank Angel Gonzalez Garcia, Jonathon Shen, and Chang Su for their work on the garage 2 reinforcement learning for robotics framework, on which the software for  ... 
arXiv:1809.10253v3 fatcat:32nfak3hongcdo5gvujtc5vyy4
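This entry's snippet describes jointly learning low-level skills together with a skill-embedding parameterization that high-level policies later actuate. A minimal sketch of that interface, with hypothetical module names and sizes:

```python
# Hedged sketch of a skill-embedding hierarchy: a high-level policy emits a
# latent skill vector z, and a shared low-level policy is conditioned on (obs, z).
import torch
import torch.nn as nn

class LowLevelPolicy(nn.Module):
    def __init__(self, obs_dim, latent_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + latent_dim, 128), nn.Tanh(),
                                 nn.Linear(128, act_dim))

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

class HighLevelPolicy(nn.Module):
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.Tanh(),
                                 nn.Linear(128, latent_dim))

    def forward(self, obs):
        return self.net(obs)  # a point in the pretrained skill-embedding space
```

Following the abstract, only the high-level policy is learned later; the low-level skills trained in simulation are reused through the embedding.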

Complex Skill Acquisition Through Simple Skill Imitation Learning [article]

Pranay Pasula
2020 arXiv   pre-print
of complex, hard-to-learn skills.  ...  Motivated by this line of reasoning, we propose a new algorithm that trains neural network policies on simple, easy-to-learn skills in order to cultivate latent spaces that accelerate imitation learning  ...  Training a VAE to embed and reconstruct demonstrations of these skills and subskills using (1) would generally result in an embedding space with no clear relationship between skill and subskill embedding  ... 
arXiv:2007.10281v4 fatcat:xmeromusp5bvrowwdyo7zrhclq

Representation Matters: Improving Perception and Exploration for Robotics [article]

Markus Wulfmeier, Arunkumar Byravan, Tim Hertweck, Irina Higgins, Ankush Gupta, Tejas Kulkarni, Malcolm Reynolds, Denis Teplyashin, Roland Hafner, Thomas Lampe, Martin Riedmiller
2021 arXiv   pre-print
Projecting high-dimensional environment observations into lower-dimensional structured representations can considerably improve data-efficiency for reinforcement learning in domains with limited data such  ...  learned or hand-engineered representations.  ...  ACKNOWLEDGEMENTS The authors would like to thank Yusuf Aytar, Volodymyr Mnih, Nando de Freitas, and Nicolas Heess for helpful discussion and relevant feedback for shaping our submission.  ... 
arXiv:2011.01758v2 fatcat:wsoz3m4e4ffbxkjf4taoiyfw6e
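The snippet's central claim is that projecting high-dimensional observations into a compact representation improves data efficiency for RL. A minimal sketch of that interface (an assumption; the paper itself compares several learned and hand-engineered representations):

```python
# Hedged sketch: an observation encoder placed in front of an RL agent.
import torch
import torch.nn as nn

class PixelEncoder(nn.Module):
    """Maps an image observation (C, H, W) to a compact feature vector."""
    def __init__(self, in_channels=3, feature_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.proj = nn.LazyLinear(feature_dim)  # infers the flattened size lazily

    def forward(self, obs):
        return self.proj(self.conv(obs))

# The actor and critic then consume the 50-d feature instead of raw pixels.
```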

Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping in Clutter by State Representation Learning Based on Disentanglement of a Raw Input Image [article]

Taewon Kim, Yeseong Park, Youngbin Park, Il Hong Suh
2020 arXiv   pre-print
However, typical representation learning procedures are unsuitable for extracting pertinent information for learning the grasping skill, because the visual inputs for representation learning, where a robot  ...  This enables deep RL to learn robotic grasping skills from highly varied and diverse visual inputs.  ...  It is observed that for all SRLs, grasping skills are not learned given raw input images (L0). L1 disentanglement fails to obtain substantial improvement in all three SRL models.  ... 
arXiv:2002.11903v1 fatcat:q7l25j6a7vbv5htiupombsempe

Fast Adaptation via Policy-Dynamics Value Functions [article]

Roberta Raileanu, Max Goldstein, Arthur Szlam, Rob Fergus
2020 arXiv   pre-print
An ensemble of conventional RL policies is used to gather experience on training environments, from which embeddings of both policies and environments can be learned.  ...  At test time, a few actions are sufficient to infer the environment embedding, enabling a policy to be selected by maximizing the learned value function (which requires no additional environment interaction  ...  Reinforcement learning transfer via sparse coding. In Proceedings of the 11th international conference on autonomous agents and multiagent systems, volume 1, pp. 383-390.  ... 
arXiv:2007.02879v1 fatcat:zhfr27zs3rfphftwrijjlrx57q
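The entry describes learning embeddings of both policies and environments, then picking a policy at test time by maximizing a learned value function over the inferred environment embedding. A small selection sketch under those assumptions (function and variable names are hypothetical):

```python
# Hedged sketch: selecting a pretrained policy by maximizing a learned
# value function V(policy_embedding, env_embedding) at test time.
import numpy as np

def select_policy(value_fn, policy_embeddings, env_embedding):
    # value_fn(z_pi, z_env) -> scalar estimate of expected return.
    scores = [value_fn(z_pi, env_embedding) for z_pi in policy_embeddings]
    return int(np.argmax(scores))  # index of the policy to deploy
```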

Unsupervised Skill Discovery with Bottleneck Option Learning [article]

Jaekyeom Kim, Seohong Park, Gunhee Kim
2021 arXiv   pre-print
It provides the abstraction of the skills learned with the information bottleneck framework for the options with improved stability and encouraged disentanglement.  ...  We propose a novel unsupervised skill discovery method named Information Bottleneck Option Learning (IBOL).  ...  Acknowledgements We thank the anonymous reviewers for the helpful comments. This work was supported by Samsung Advanced Institute of Technology, the ICT R&D program of MSIT/IITP  ... 
arXiv:2106.14305v1 fatcat:yd3pmk7oszdbvm3cr7pf5jdelm

Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos [article]

Ajay Kumar Tanwani, Pierre Sermanet, Andy Yan, Raghav Anand, Mariano Phielipp, Ken Goldberg
2020 arXiv   pre-print
The embeddings are iteratively segmented with a recurrent neural network for a given parametrization of the embedding space after pre-training the Siamese network.  ...  We present Motion2Vec, an algorithm that learns a deep embedding feature space from video observations by minimizing a metric learning loss in a Siamese network: images from the same action segment are  ...  Generalizing these skills to new situations requires extracting disentangled representations from observations such as the relationships between objects and the environment while being invariant to lighting  ... 
arXiv:2006.00545v1 fatcat:l7r5yhmm5jbmtckxrhi43xacuu
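The Motion2Vec snippet describes a metric-learning loss in a Siamese network that pulls frames from the same action segment together. A pairwise contrastive sketch in that spirit (an assumption; the paper's exact loss may differ):

```python
# Hedged sketch: a pairwise contrastive loss over Siamese embeddings.
# `z1`, `z2` are embedding batches; `same_segment` is a float tensor equal to 1
# when the pair comes from the same action segment and 0 otherwise.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_segment, margin=1.0):
    dist = F.pairwise_distance(z1, z2)
    pull = same_segment * dist.pow(2)                         # attract same-segment pairs
    push = (1 - same_segment) * F.relu(margin - dist).pow(2)  # repel other segments
    return (pull + push).mean()
```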

Time-Contrastive Networks: Self-Supervised Learning from Video [article]

Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine
2018 arXiv   pre-print
Reward functions obtained by following the human demonstrations under the learned representation enable efficient reinforcement learning that is practical for real-world robotic systems.  ...  We train our representations using a metric learning loss, where multiple simultaneous viewpoints of the same observation are attracted in the embedding space, while being repelled from temporal neighbors  ...  We demonstrate that this representation can be used to create a reward function for reinforcement learning of robotic skills, using only raw video demonstrations for supervision, and for direct imitation  ... 
arXiv:1704.06888v3 fatcat:mqt2bdjvobc7lidrtvrc3rtnoi
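The TCN snippet describes attracting simultaneous viewpoints of the same observation in the embedding space while repelling temporal neighbors. A minimal triplet-style sketch of that idea (names are illustrative):

```python
# Hedged sketch of the multi-view time-contrastive idea: the anchor and positive
# are the same moment seen from two cameras, the negative is a nearby moment
# taken from the same video.
import torch
import torch.nn.functional as F

def time_contrastive_loss(z_anchor, z_positive, z_negative, margin=0.2):
    d_pos = (z_anchor - z_positive).pow(2).sum(dim=-1)
    d_neg = (z_anchor - z_negative).pow(2).sum(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()
```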

Multi-Task Reinforcement Learning with Context-based Representations [article]

Shagun Sodhani, Amy Zhang, Joelle Pineau
2021 arXiv   pre-print
While this metadata can be useful for improving multi-task learning performance, effectively incorporating it can be an additional challenge.  ...  The benefit of multi-task learning over single-task learning relies on the ability to use relations across tasks to improve performance on any single task.  ...  Meta-World proposes a benchmark for meta-RL.  ...  Acknowledgements: We thank Edward Grefenstette, Tim Rocktäschel, Danielle Rothermel and Olivier Delalleau for feedback that improved this paper.  ... 
arXiv:2102.06177v2 fatcat:gmwlp2lwavhi7itxkok7dj6tdy
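The snippet discusses using task metadata (context) to improve multi-task RL. One way to act on such metadata, loosely in the spirit of context-based representations (the details here are assumptions, not the paper's exact architecture), is to let the context gate a mixture of encoders:

```python
# Hedged sketch: task metadata produces soft weights over a set of encoders,
# so related tasks can share encoder components.
import torch
import torch.nn as nn

class ContextMixtureEncoder(nn.Module):
    def __init__(self, obs_dim, ctx_dim, num_encoders=4, feat_dim=64):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(obs_dim, feat_dim) for _ in range(num_encoders)])
        self.gate = nn.Linear(ctx_dim, num_encoders)

    def forward(self, obs, ctx):
        weights = torch.softmax(self.gate(ctx), dim=-1)              # (batch, K)
        feats = torch.stack([e(obs) for e in self.encoders], dim=1)  # (batch, K, D)
        return (weights.unsqueeze(-1) * feats).sum(dim=1)            # context-weighted mix
```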

Proceedings of the First Workshop on Weakly Supervised Learning (WeaSuL) [article]

Michael A. Hedderich, Benjamin Roth, Katharina Kann, Barbara Plank, Alex Ratner, Dietrich Klakow
2021 arXiv   pre-print
for prediction.  ...  Welcome to WeaSuL 2021, the First Workshop on Weakly Supervised Learning, co-located with ICLR 2021.  ...  We proposed a new framework based on total correlation for weakly-supervised disentanglement and showed through empirical evaluations on image datasets that our model improves learning disentangled representations  ... 
arXiv:2107.03690v1 fatcat:e57s4gr4lrdp5jtw7uoqsbtknq

DAIR: Disentangled Attention Intrinsic Regularization for Safe and Efficient Bimanual Manipulation [article]

Minghao Zhang, Pingcheng Jian, Yi Wu, Huazhe Xu, Xiaolong Wang
2021 arXiv   pre-print
While previous reinforcement learning approaches primarily focus on modeling the compositionality of sub-tasks, two fundamental issues are largely ignored particularly when learning cooperative strategies  ...  To tackle these two issues, we propose a novel technique called disentangled attention, which provides an intrinsic regularization for two robots to focus on separate sub-tasks and objects.  ...  We are committed to releasing the code for our approach, the baselines, and the simulation environment.  ... 
arXiv:2106.05907v4 fatcat:cvs643jjfrgojhgr5epjvncx3i
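The DAIR snippet describes an intrinsic regularization that keeps the two arms focused on separate sub-tasks and objects. One plausible form of such a term (purely an assumption, not the paper's published loss) penalizes overlap between the two arms' attention weights:

```python
# Hedged sketch: an intrinsic penalty on overlapping attention between two arms.
# `att_left` and `att_right` are attention weights over the same set of objects,
# each summing to 1 along the last dimension.
import torch

def attention_overlap_penalty(att_left, att_right):
    # Higher overlap (both arms attending to the same objects) yields a larger
    # penalty, nudging the arms toward disjoint sub-tasks.
    return torch.minimum(att_left, att_right).sum(dim=-1).mean()
```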
Showing results 1–15 out of 8,620 results