
Continual Learning of Visual Concepts for Robots through Limited Supervision [article]

Ali Ayub, Alan R. Wagner
2021 arXiv   pre-print
Further, robots need to learn through limited data because of the scarcity of labeled data in real-world environments.  ...  For many real-world robotics applications, robots need to continually adapt and learn new concepts.  ...  In each new increment t, the robot gets a small set of labeled samples D^t = {(x_i, y_i)}_{i=1}^{N^t}, where x_i ∈ X are the visual samples (images) and y_i are their ground truth labels.  ... 
arXiv:2101.10509v1 fatcat:5tzigfgg2neqvkzah5o6skre5m

Lexicon acquisition based on object-oriented behavior learning

Shinya Takamuku, Yasutake Takahashi, Minoru Asada
2006 Advanced Robotics  
Based on a modified multi-module reinforcement learning system, the robot is able to automatically associate words to objects with various visual features based on similarities in affordances or in functions  ...  This limitation is due to the fact that categories for words are formed in a passive manner, either by teaching of caregivers or finding similarities in visual features.  ...  A learner with a low W limit associates labels more often with behavior categories, whereas a learner with a high W limit is more conservative about association.  ... 
doi:10.1163/156855306778522523 fatcat:2dkckbu2u5bt3ji5q3mr7xw6ii

Guest Editorial Introduction to the Special Section on Representation Learning for Visual Content Understanding

Jiwen Lu, Yuxin Peng, Guo-Jun Qi, Jun Yu
2020 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
Over the past years, his research interests have included multimedia analysis, machine learning, and image processing. In 2017, he received the IEEE SPS Best Paper Award.  ...  He has served as a program committee member or reviewer for top conferences and prestigious journals.  ...  To effectively utilize the limited labeled data and a large amount of unlabeled data for visual representation learning, semi-supervised learning methods usually generate the pseudo labels for the unlabeled  ... 
doi:10.1109/tcsvt.2020.3009095 fatcat:5gew2gv32zg3tfwjaavrtknr2e

Designing asynchronous online discussion environments: Recent progress and possible future directions

Fei Gao, Tianyi Zhang, Teresa Franklin
2012 British Journal of Educational Technology  
More specifically, future work should aim at (a) exploring new environments that support varied goals of learning; (b) integrating emerging technologies to address the constraints of current environments; (c) designing multi-functional environments to facilitate complex learning; and (d) developing appropriate instructional activities and strategies for these environments.  ...  Research on designing environments to achieve other learning goals is limited.  ... 
doi:10.1111/j.1467-8535.2012.01330.x fatcat:z4ewnzx6qfbklkbtro6asxivp4

Using Navigational Information to Learn Visual Representations [article]

Lizhen Zhu, Brad Wyble, James Z. Wang
2022 arXiv   pre-print
as a similarity label to drive a learning objective for self-supervised learning.  ...  The goal of this work is to exploit navigational information in a visual environment to achieve training performance that exceeds state-of-the-art self-supervised training.  ...  efficient self-supervised learning in a limited visual environment.  ... 
arXiv:2202.08114v1 fatcat:uit5rsy4nvap7gaznnkuteudye

A Programming Environment for Visual Block-Based Domain-Specific Languages

Azusa Kurihara, Akira Sasaki, Ken Wakita, Hiroshi Hosobe
2015 Procedia Computer Science  
We show that the environment is useful for novice programmers who learn basic concepts of programming and the features of Processing.  ...  In this paper, we present a programming environment for providing visual block-based domain-specific languages (visual DSLs) that are translatable into various programming languages.  ...  By choosing a type and defining a label and code, users can add a block with new features.  ... 
doi:10.1016/j.procs.2015.08.452 fatcat:6gwnrzeydjbtbnbn22mv72kaje

Deep Object-Centric Representations for Generalizable Robot Learning [article]

Coline Devin, Pieter Abbeel, Trevor Darrell, Sergey Levine
2017 arXiv   pre-print
In this paper, we propose a method where general purpose pretrained visual models serve as an object-centric prior for the perception system of a learned policy.  ...  The scope of the task-specific attention is easily adjusted by showing demonstrations with distractor objects or with diverse relevant objects.  ...  and labeling new detection data.  ... 
arXiv:1708.04225v3 fatcat:b5oscnltcnhhzm2zxfpiuwfiwy

Learning task-agnostic representation via toddler-inspired learning [article]

Kwanyoung Park, Junseok Park, Hyunseok Oh, Byoung-Tak Zhang, Youngki Lee
2021 arXiv   pre-print
Inspired by the toddler's learning procedure, we design an interactive agent that can learn and store task-agnostic visual representation while exploring and interacting with objects in the virtual environment  ...  One of the inherent limitations of current AI systems, stemming from the passive learning mechanisms (e.g., supervised learning), is that they perform well on labeled datasets but cannot deduce knowledge  ...  Motivated by [12] , we designed an environment supporting the human-like visual observation and active physical interaction with the object, to train the visual knowledge prior without any explicit labels  ... 
arXiv:2101.11221v1 fatcat:d6fhxvc22fclvcmvkndpyqmgoq

Adaptive Semantic Segmentation for Unmanned Surface Vehicle Navigation

Zhan, Xiao, Wen, Zhou, Yuan, Xiu, Zou, Xie, Li
2020 Electronics  
The experimental results show that the proposed method exhibits excellent performance with few-shot learning, is quite adaptable to a new environment, and is very efficient with limited manually labeled data  ...  The network trains itself with the refined pseudo label and the weight map. A set of experiments were designed to evaluate the proposed method.  ...  Recently, the overwhelming success of deep learning architectures has inspired a new approach to research on outdoor visual navigation [10, 11] .  ... 
doi:10.3390/electronics9020213 fatcat:jw7iyy5oizbk7o5fdnyrjgchhi

Active Reward Learning for Co-Robotic Vision Based Exploration in Bandwidth Limited Environments

Stewart Jamieson, Jonathan P. How, Yogesh Girdhar
2020 2020 IEEE International Conference on Robotics and Automation (ICRA)  
We present a novel POMDP problem formulation for a robot that must autonomously decide where to go to collect new and scientifically relevant images given a limited ability to communicate with its human  ...  We introduce a novel active reward learning strategy based on making queries to help the robot minimize path "regret" online, and evaluate it for suitability in autonomous visual exploration through simulations  ...  In bandwidth limited environments such as the deep sea, sending that many images for labelling during the span of a mission is not feasible.  ... 
doi:10.1109/icra40945.2020.9196922 fatcat:imjmswlsybdi5ihmy5qg44ic3a

Teaching Robots through Situated Interactive Dialogue and Visual Demonstrations

Jose L. Part, Oliver Lemon
2017 Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence  
The ability to quickly adapt to new environments and incorporate new knowledge is of great importance for robots operating in unstructured environments and interacting with non-expert users.  ...  We propose the development of a framework for teaching robots to perform tasks using natural language instructions, visual demonstrations and interactive dialogue.  ...  During training, each sample is accompanied by its label which can be obtained from interacting with a human tutor. • The development of algorithms for learning hierarchical task structures from natural  ... 
doi:10.24963/ijcai.2017/760 dblp:conf/ijcai/PartL17 fatcat:ruqtewm5wzbn5k52alhqd7ojvu

Urban Rhapsody: Large-scale exploration of urban soundscapes [article]

Joao Rulff, Fabio Miranda, Maryam Hosseini, Marcos Lage, Mark Cartwright, Graham Dove, Juan Bello, Claudio T. Silva
2022 arXiv   pre-print
However, the overwhelming number of noise sources in the urban environment and the scarcity of labeled data make it nearly impossible to create classification models with large enough vocabularies that  ...  To satisfy the requirements and tackle the identified challenges, we propose Urban Rhapsody, a framework that combines state-of-the-art audio representation, machine learning, and visual analytics to allow  ...  Acknowledgements We would like to thank our colleagues at CUSP (NYU) for their feedback during the development of this work.  ... 
arXiv:2205.13064v1 fatcat:6wivrrtr5few7jgpvsw7eq6r6y

Online, self-supervised vision-based terrain classification in unstructured environments

Peyman Moghadam, Wijerupage Sardha Wijesoma
2009 2009 IEEE International Conference on Systems, Man and Cybernetics  
machine-learning techniques.  ...  Color stereo vision is mostly used for UGVs, but the present stereo vision technologies and processing algorithms are limited by cameras' field of view and maximum range, which causes the vehicles to get  ...  Procopio for providing the log data used as data sets considered in this study.  ... 
doi:10.1109/icsmc.2009.5345942 dblp:conf/smc/MoghadamW09 fatcat:l2p5oqicybgbbjwqto5xp3nfo4

Environment exploration for object-based visual saliency learning

Celine Craye, David Filliat, Jean-Francois Goudou
2016 2016 IEEE International Conference on Robotics and Automation (ICRA)  
Searching for objects in an indoor environment can be drastically improved if a task-specific visual saliency is available.  ...  We describe a method to incrementally learn such an object-based visual saliency directly on a robot, using an environment exploration mechanism.  ... 
doi:10.1109/icra.2016.7487379 dblp:conf/icra/CrayeFG16 fatcat:44onrih3fnbtpdnbirhqm24pam

GAPLE: Generalizable Approaching Policy LEarning for Robotic Object Searching in Indoor Environment [article]

Xin Ye, Zhe Lin, Joon-Young Lee, Jianming Zhang, Shibin Zheng and Yezhou Yang
2019 arXiv   pre-print
We study the problem of learning a generalizable action policy for an intelligent agent to actively approach an object of interest in an indoor environment solely from its visual inputs.  ...  While scene-driven or recognition-driven visual navigation has been widely studied, prior efforts suffer severely from the limited generalization capability.  ...  With the current surge of deep reinforcement learning [1] - [3] , a joint learning method of visual recognition and planning emerges as end-to-end learning [4] , [5] .  ... 
arXiv:1809.08287v2 fatcat:3up3mviorjflpmo4eogxotjfdq