
Simultaneous Feature and Body-Part Learning for Real-Time Robot Awareness of Human Behaviors [article]

Fei Han, Xue Yang, Christopher Reardon, Yu Zhang, Hao Zhang
2017 arXiv   pre-print
information together to enable real-time robot awareness of human behaviors.  ...  In this paper, we propose a novel simultaneous Feature And Body-part Learning (FABL) approach that simultaneously identifies discriminative body parts and features, and efficiently integrates all available  ...  In this paper, we introduce a novel Feature And Body-part Learning (FABL) method to enable real-time robot awareness of human behaviors, through learning discriminative skeletal features and body parts  ... 
arXiv:1702.07474v1 fatcat:dfiquf7d4jdxjdkmmom3z2qxp4

How can humans and robots communicate better?

Mariofanna Milanova, Belinda Blevins Ghosal, Lawrence O'Gorman
2020 International Robotics & Automation Journal  
If robots can "read" gestures of a human, this will determine the success of a task, and learning will occur through trial-and-error or reinforcement learning.  ...  For example, when the human becomes more aware of whether his/her facial expressions and gestures match the intent, this expands information vertically into the Johari window pane for the human.  ...  Acknowledgements University of Arkansas at Little Rock, USA and Nokia Bell Labs, Murray Hill, New Jersey, USA.  ... 
doi:10.15406/iratj.2020.06.00214 fatcat:wmn6xifaqjgzbohj4v3dfy6ikm

Special issue on user profiling and behavior adaptation for human-robot interaction

Silvia Rossi, Dongheui Lee
2017 Pattern Recognition Letters  
The recognition of the user's status and gestures is used in [9], where the authors present a real-time approach for human-aware motion re-planning using a two-level hierarchical architecture.  ...  Indeed, for improved and natural human-robot cooperation, human users will learn how to interact with the robot but, at the same time, the robotic systems should adapt to the users.  ... 
doi:10.1016/j.patrec.2017.06.020 fatcat:3rrsxmysyfc4bm6zuxnatrat4e

Robotic Dance in Social Robotics—A Taxonomy

Hua Peng, Changle Zhou, Huosheng Hu, Fei Chao, Jing Li
2015 IEEE Transactions on Human-Machine Systems  
Robotic dance is classified into four categories: cooperative human-robot dance, imitation of human dance motions, synchronization to music, and creation of robotic choreography.  ...  Robotic dance is an important topic in the field of social robotics. Its research has vital significance for both humans and robotics.  ...  Third, it is not clear how to synthesize the motion primitives of all body parts into a coordinated dance motion of a whole body for a robot. 2) Task Model: The Learning-From-Observation (LFO) paradigm  ... 
doi:10.1109/thms.2015.2393558 fatcat:zv5yzzr275b4fld2qmxbm6zrrq

Robot's play

Andrew G. Brooks, Jesse Gray, Guy Hoffman, Andrea Lockerd, Hans Lee, Cynthia Breazeal
2004 Computers in Entertainment  
Personal robots for human entertainment form a new class of computer-based entertainment that is beginning to emerge.  ...  We describe this form of gaming and summarize our current efforts in this direction on our lifelike, expressive, autonomous humanoid robot.  ...  Some arcade games feature ultrasonic tracking of the player's body to make real-world ducking and weaving part of game-play.  ... 
doi:10.1145/1027154.1027171 fatcat:qmc3zqjxbrcebpe7k7r4xjfch4

Multimodal Emotional Understanding in Robotics [chapter]

Juanpablo Heredia, Yudith Cardinale, Irvin Dongo, Ana Aguilera, Jose Diaz-Amado
2022 Ambient Intelligence and Smart Environments  
(face, posture, body, and context features) captured through the robot's sensors; the predicted emotion triggers some robot behavior changes.  ...  This research is focused on analyzing, explaining, and arguing the usability and viability of an out-of-robot and multimodal approach for emotional robots.  ...  Although the communication time between robot and server allows a real-time (or near real-time) data exchange, the whole processing of images takes more time, making it difficult to act in real-time and  ... 
doi:10.3233/aise220020 fatcat:juxjzq2vofebneutqdngm7nhem

SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports [chapter]

Rudolph Triebel, Kai Arras, Rachid Alami, Lucas Beyer, Stefan Breuers, Raja Chatila, Mohamed Chetouani, Daniel Cremers, Vanessa Evers, Michelangelo Fiore, Hayley Hung, Omar A. Islas Ramírez (+13 others)
2016 Springer Tracts in Advanced Robotics  
, socially-aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess socio-psychological effects of normative robot behaviors.  ...  The main contributions of SPENCER are novel methods to perceive, learn, and model human social behavior and to use this knowledge to plan appropriate actions in real-time for mobile platforms.  ... 
doi:10.1007/978-3-319-27702-8_40 fatcat:v5wk5kpnnfhbthfess7l525pqy

A Basic Architecture of an Autonomous Adaptive System With Conscious-Like Function for a Humanoid Robot

Yasuo Kinouchi, Kenneth James Mackin
2018 Frontiers in Robotics and AI  
One is developing a physical robot having body, hands, and feet resembling those of human beings and being able to similarly control them.  ...  The binding problem and the basic causes of delay in Libet's experiment are also explained by capturing awareness in this manner.  ... 
doi:10.3389/frobt.2018.00030 pmid:33644117 pmcid:PMC7904312 fatcat:wbgemysslfgdbeuckbwe3dxvkm

Affective Human-Humanoid Interaction Through Cognitive Architecture [chapter]

Ignazio Infantino
2012 The Future of Humanoid Robots - Research and Applications  
Systems for the automatic analysis of human behavior should treat all human interaction channels (audio, visual, and tactile), and should analyze both verbal and non-verbal signals (words, body gestures  ...  The face and body are the elements analyzed to infer the affective state of the human, and for the recognition of identity.  ...  perception, adaptive behavior, human-robot interaction, neuroscience and machine learning.  ... 
doi:10.5772/25794 fatcat:62hsiwkmj5gbtjbhejy2ffqx6u

Detecting socially interacting groups using f-formation: A survey of taxonomy, methods, datasets, applications, challenges, and future research directions [article]

Hrishav Bakul Barua, Theint Haythi Mg, Pradip Pramanick, Chayan Sarkar
2021 arXiv   pre-print
To possess such a quality, first, a robot needs to determine the formation of the group and then determine a position for itself, which we humans do implicitly.  ...  In this article, we investigate one such social behavior for collocated robots. Imagine a group of people interacting with each other, and we want to join the group.  ...  No prior information about camera parameters or scene features is needed. The evaluation is performed on the basis of a human experience study in real-life scenes using robots and humans.  ... 
arXiv:2108.06181v2 fatcat:walfqfi55fe4fja3imr4qu6asu

Human Behavior Understanding for Robotics [chapter]

Albert Ali Salah, Javier Ruiz-del-Solar, Çetin Meriçli, Pierre-Yves Oudeyer
2012 Lecture Notes in Computer Science  
This paper discusses the scientific, technological and application challenges that arise from the mutual interaction of robotics and computational human behavior understanding.  ...  Robotic systems interacting with people in uncontrolled environments need capabilities to correctly interpret, predict and respond to human behaviors.  ...  This work is supported by INRIA project PAL, Bogaziçi University project BAP-6531, EUCogIII, and ERC EXPLORERS 240007.  ... 
doi:10.1007/978-3-642-34014-7_1 fatcat:i2fvsrkp4vfzhkzivzzjk2gwp4

Towards open and expandable cognitive AI architectures for large-scale multi-agent human-robot collaborative learning

Georgios Th. Papadopoulos, Margherita Antona, Constantine Stephanidis
2021 IEEE Access  
based framework for incarnating a multi-human multi-robot collaborative learning environment.  ...  Despite the large body of research works already reported, current key technological challenges include those of multi-agent learning and long-term autonomy.  ...  ACKNOWLEDGEMENTS The work presented in this paper was supported by the ICS-FORTH internal RTD Programme 'Ambient Intelligence and Smart Environments'.  ... 
doi:10.1109/access.2021.3080517 fatcat:nzxzaxbx2jaf5owuihlxxfizuq

Autonomously Learning to Visually Detect Where Manipulation Will Succeed [article]

Hai Nguyen, Charles C. Kemp
2012 arXiv   pre-print
Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch.  ...  After training, the robot also continued to learn in order to adapt in the event of failure.  ...  Acknowledgments We thank Aaron Bobick, Jim Rehg, and Tucker Hermans for their input. We thank Willow Garage for the use of a PR2 robot, financial support, and other assistance.  ... 
arXiv:1212.6837v1 fatcat:rmto4tokcfgatnmcwvo3yg3ixe

Design of Advanced Human–Robot Collaborative Cells for Personalized Human–Robot Collaborations

Alessandro Umbrico, Andrea Orlandini, Amedeo Cesta, Marco Faroni, Manuel Beschi, Nicola Pedrocchi, Andrea Scala, Piervincenzo Tavormina, Spyros Koukas, Andreas Zalonis, Nikos Fourtakas, Panagiotis Stylianos Kotsaris (+2 others)
2022 Applied Sciences  
This holds especially for human-robot collaboration in manufacturing, which needs continuous interaction between humans and robots.  ...  The coexistence of human and autonomous robotic agents raises several methodological and technological challenges for the design of effective, safe, and reliable control paradigms.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/app12146839 fatcat:ywwquuyq4zb4nmoupm4agiqvje

From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation [article]

Juana Valeria Hurtado, Laura Londoño, Abhinav Valada
2021 Frontiers in Robotics and AI   accepted
In order to make the presence of robots safe as well as comfortable for humans, and to facilitate their acceptance in public environments, they are often equipped with social abilities for navigation and  ...  Our proposed framework consists of two components: learning which incorporates social context into the learning process to account for safety and comfort, and relearning to detect and correct potentially  ...  In the specific case of learning socially-aware robot navigation from real-world data, robots can reproduce biased behaviors implicit in human-human interaction.  ... 
doi:10.3389/frobt.2021.650325 pmid:33842558 pmcid:PMC8024571 arXiv:2101.02647v2 fatcat:55bpyrj6fjbb3ebmswekxsvyzi