47,277 Hits in 5.3 sec

The Overlooked Classifier in Human-Object Interaction Recognition [article]

Ying Jin, Yinpeng Chen, Lijuan Wang, Jianfeng Wang, Pei Yu, Lin Liang, Jenq-Neng Hwang, Zicheng Liu
2022 arXiv   pre-print
Human-Object Interaction (HOI) recognition is challenging due to two factors: (1) significant imbalance across classes and (2) requiring multiple labels per image.  ...  Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose by a clear margin.  ...  Introduction Human-Object Interaction (HOI) recognition has drawn significant interest for its essential role in scene understanding.  ...
arXiv:2112.06392v2 fatcat:djhysbnqbjcxtnltbjatidwqra
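
The abstract above names two generic difficulties in HOI recognition: multiple labels per image and class imbalance. The snippet below is a minimal sketch of how those two issues are commonly handled in a plain multi-label image classifier (per-class sigmoid outputs plus positive-class weighting in the loss); it is not the paper's method, and the backbone, class count, and weight statistics are illustrative assumptions.

# Minimal sketch: multi-label HOI classification with class-imbalance weighting.
# Backbone choice, number of classes, and pos_weight statistics are assumptions,
# not taken from the paper above.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_HOI_CLASSES = 600  # e.g., HICO-style interaction classes (assumed)

class MultiLabelHOIClassifier(nn.Module):
    def __init__(self, num_classes=NUM_HOI_CLASSES):
        super().__init__()
        backbone = models.resnet50(weights=None)        # any image backbone works here
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, images):
        return self.backbone(images)                    # raw logits, one per HOI class

# Per-class positive weights counteract imbalance: rarer classes get larger weights.
pos_counts = torch.randint(1, 1000, (NUM_HOI_CLASSES,)).float()   # placeholder label counts
pos_weight = pos_counts.max() / pos_counts
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

model = MultiLabelHOIClassifier()
images = torch.randn(4, 3, 224, 224)                        # dummy batch
targets = (torch.rand(4, NUM_HOI_CLASSES) > 0.95).float()   # multi-hot labels
loss = criterion(model(images), targets)
loss.backward()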

Precise Affordance Annotation for Egocentric Action Video Datasets [article]

Zecheng Yu, Yifei Huang, Ryosuke Furuta, Takuma Yagi, Yusuke Goutsu, Yoichi Sato
2022 arXiv   pre-print
Object affordance is an important concept in human-object interaction, providing information on action possibilities based on human motor capacity and objects' physical properties, thus benefiting tasks such  ...  to represent the action possibilities between two objects.  ...  The affordance recognition model focuses on hands, while the mechanical action recognition model focuses on the interaction between objects.  ...
arXiv:2206.05424v1 fatcat:x7dxkjepmnhrvduzven7l32luy

Deep Contextual Attention for Human-Object Interaction Detection

Tiancai Wang, Rao Muhammad Anwer, Muhammad Haris Khan, Fahad Shahbaz Khan, Yanwei Pang, Ling Shao, Jorma Laaksonen
2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
Most existing approaches decompose the problem into object localization and interaction recognition.  ...  Despite showing progress, these approaches only rely on the appearances of humans and objects and overlook the available context information, crucial for capturing subtle interactions between them.  ...  In this problem, the aim is to detect a human, an object, and label the interaction between them.  ... 
doi:10.1109/iccv.2019.00579 dblp:conf/iccv/WangAKKP0L19 fatcat:vyijo42s4zgcvnsx7p6ebs6zmy

Deep Contextual Attention for Human-Object Interaction Detection [article]

Tiancai Wang, Rao Muhammad Anwer, Muhammad Haris Khan, Fahad Shahbaz Khan, Yanwei Pang, Ling Shao, Jorma Laaksonen
2019 arXiv   pre-print
Most existing approaches decompose the problem into object localization and interaction recognition.  ...  Despite showing progress, these approaches only rely on the appearances of humans and objects and overlook the available context information, crucial for capturing subtle interactions between them.  ...  In this problem, the aim is to detect a human, an object, and label the interaction between them.  ... 
arXiv:1910.07721v1 fatcat:rrwxa74hdbhodh7a2qsvsicvhi
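
Both versions of the entry above argue that appearance-only pipelines overlook context information when labeling human-object interactions. As a rough, generic sketch (not the authors' module), one way to bring context in is a small attention layer in which the human-object pair feature queries spatial context features; all names, tensor shapes, and the verb count below are assumptions for illustration.

# Generic sketch of appearance + context fusion via attention for HOI detection.
# Shapes and the fusion design are illustrative assumptions, not the paper's module.
import torch
import torch.nn as nn

class ContextAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 117)       # e.g., 117 interaction verbs (assumed)

    def forward(self, pair_feat, context_feats):
        # pair_feat:     (B, dim)     appearance feature of one human-object pair
        # context_feats: (B, N, dim)  features of N context regions / grid cells
        query = pair_feat.unsqueeze(1)                               # (B, 1, dim)
        context, _ = self.attn(query, context_feats, context_feats)  # attend over context
        fused = torch.cat([pair_feat, context.squeeze(1)], dim=-1)
        return self.classifier(fused)                                # interaction logits

module = ContextAttentionFusion()
logits = module(torch.randn(2, 256), torch.randn(2, 49, 256))        # dummy 7x7 context grid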

Handcrafted vs. learned representations for human action recognition

Xiantong Zhen, Ling Shao, Stephen J. Maybank, Rama Chellappa
2016 Image and Vision Computing  
Pantic, Editor-in-Chief of the Image and Vision Computing Journal for giving us the opportunity to guest edit this special issue, and the Elsevier staff, Yanhong Zhai for her great support to this special  ...  Acknowledgement We would like to thank all the authors for their contributions to this special issue, and reviewers for their timely and insightful reviews. We thank Professors J.-M. Frahm and M.  ...  Other methods The article "Using the conflict in Dempster-Shafer evidence theory as a rejection criterion in classifier output combination for 3D human action recognition" proposes a comprehensive solution  ... 
doi:10.1016/j.imavis.2016.10.002 fatcat:j4c2txj3g5glra67qvzab5mmke

A Fusion based Approach of Face Detection using Viola Jones and Skin Color Modeling Technique

Nancy Goyal, Harsh Dev
2015 International Journal of Computer Applications  
For effective interaction between machines and humans, a user-friendly and interactive interface is needed; automatic face detection and recognition provides such a solution.  ...  Human-computer interaction deals with different branches of learning that involve direct interaction between humans and machines.  ...  Selected facial features in the image can be detected, while other background objects such as trees, buildings, and bodies are overlooked.  ...
doi:10.5120/21094-3791 fatcat:ef6fkuws7jdk7hhsbt2fa7f4ce
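
The entry above fuses Viola-Jones detection with skin-color modeling. Below is a minimal sketch of one way such a fusion can be wired up with OpenCV: run the Haar cascade, then keep only detections whose region contains enough skin-colored pixels. The HSV thresholds and the 30% skin-ratio cutoff are assumptions, not the paper's exact parameters.

# Minimal sketch: fuse Viola-Jones (Haar cascade) detections with an HSV skin-color check.
# HSV thresholds and the skin-ratio cutoff are illustrative assumptions.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_with_skin_check(bgr_image, min_skin_ratio=0.3):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Rough skin range in HSV (assumed values); real systems tune or learn this.
    skin_mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    accepted = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = skin_mask[y:y + h, x:x + w]
        skin_ratio = float(np.count_nonzero(roi)) / (w * h)
        if skin_ratio >= min_skin_ratio:      # reject cascade hits with too little skin
            accepted.append((x, y, w, h))
    return accepted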

A Celebration of 'Boring' Daily Life [chapter]

2020 Architecture and Naturing Affairs  
Many engineers try to increase the accuracy of recognition, but I would like to focus on 'mistakes'. Sometimes these programs classify different objects as the same by mistake.  ...  In fact, deep learning technology faces the problem of recognizing very similar objects, as in the Chihuahua-or-Muffin problem.  ...  Careful and patient observation is needed to find the quality of fascination in 'boring' daily life.  ...
doi:10.1515/9783035622164-064 fatcat:pwmkjcgh5nbntlobp2brjork7a

De Copia [chapter]

2020 Architecture and Naturing Affairs  
Many engineers try to increase the accuracy of recognition, but I would like to focus on 'mistakes'. Sometimes these programs classify different objects as the same by mistake.  ...  In fact, deep learning technology faces the problem of recognizing very similar objects, as in the Chihuahua-or-Muffin problem.  ...  Careful and patient observation is needed to find the quality of fascination in 'boring' daily life.  ...
doi:10.1515/9783035622164-048 fatcat:hcgqxbdmerccxeixrcipahdz2q

From 3D scene geometry to human workspace

Abhinav Gupta, Scott Satkin, Alexei A. Efros, Martial Hebert
2011 CVPR 2011  
Our method builds upon the recent work in indoor scene understanding and the availability of motion capture data to create a joint space of human poses and scene geometry by modeling the physical interactions  ...  Our approach goes beyond estimating 3D scene geometry and predicts the "workspace" of a human which is represented by a data-driven vocabulary of human interactions.  ...  The authors would like to thank Varsha Hedau and David Lee for providing their results on the indoor scene dataset.  ... 
doi:10.1109/cvpr.2011.5995448 dblp:conf/cvpr/GuptaSEH11 fatcat:sd6dx3oztrenbeu5p37r3ahu6y

Mining Cross-Person Cues for Body-Part Interactiveness Learning in HOI Detection [article]

Xiaoqian Wu, Yong-Lu Li, Xinpeng Liu, Junyi Zhang, Yuzhe Wu, Cewu Lu
2022 arXiv   pre-print
Human-Object Interaction (HOI) detection plays a crucial role in activity understanding.  ...  Though interactiveness has been studied in both whole body- and part- level and facilitates the H-O pairing, previous works only focus on the target person once (i.e., in a local perspective) and overlook  ...  This work was supported by the National Key R&D Program of China (No. 2021ZD0110700), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Qi Zhi Institute, and SHEITC (2018  ... 
arXiv:2207.14192v1 fatcat:dmbhvkxy2vd37n2i6vv5gfugz4

Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation

Samar Helou, Victoria Abou-Khalil, Riccardo Iacobucci, Elie El Helou, Ken Kiyono
2021 Journal of Medical Internet Research  
Objective We aimed to facilitate human-computer and human-human interaction research in clinics by providing a computational ethnography tool: an unobtrusive automatic classifier of screen gaze and dialogue  ...  Similar to the human coder, the classifier was more accurate in fully inclusive layouts than in semi-inclusive layouts.  ...  Acknowledgments This research was supported by Japan Society for the Promotion of Science Kakenhi (grant number JP20K20244).We thank the 5 physicians who provided us with video data.  ... 
doi:10.2196/25218 pmid:33970117 fatcat:pakb7uyhh5hzlb673vor753caq

Chairs Can be Stood on: Overcoming Object Bias in Human-Object Interaction Detection [article]

Guangzhi Wang, Yangyang Guo, Yongkang Wong, Mohan Kankanhalli
2022 arXiv   pre-print
Existing work often sheds light on improving either human and object detection or interaction recognition.  ...  Detecting Human-Object Interaction (HOI) in images is an important step towards high-level visual comprehension.  ...  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.  ...
arXiv:2207.02400v1 fatcat:tr7x5zek2ne5lpqber6kspxi6a

Real-time Action Recognition by Spatiotemporal Semantic and Structural Forests

Tsz-Ho Yu, Tae-Kyun Kim, Roberto Cipolla
2010 Procedings of the British Machine Vision Conference 2010  
In the experiments using the KTH and the latest UT-Interaction datasets, we demonstrate real-time performance as well as state-of-the-art accuracy with the proposed method.  ...  We propose the kernel k-means forest classifier using PSRM to perform classification.  ...  As reported in Table 4, the proposed method marked the best accuracy in classifying the challenging realistic human-human interactions.  ...
doi:10.5244/c.24.52 dblp:conf/bmvc/YuKC10 fatcat:6olwm3mygvca5e5c6raetxkrg4
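
The entry above builds its classifier on kernel k-means over the PSRM representation. The snippet below is only a generic NumPy kernel k-means routine to illustrate that underlying clustering step; the RBF kernel, cluster count, and dummy data are assumptions, and none of the PSRM or forest machinery is reproduced.

# Generic kernel k-means sketch (illustrative only; not the paper's PSRM-based forest).
import numpy as np

def kernel_kmeans(K, n_clusters, n_iters=50, seed=0):
    """K: (n, n) precomputed kernel (Gram) matrix."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, n_clusters, size=n)

    for _ in range(n_iters):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            members = labels == c
            m = members.sum()
            if m == 0:
                dist[:, c] = np.inf        # empty cluster: never the nearest
                continue
            # ||phi(x_i) - mu_c||^2 = K_ii - 2/m * sum_{j in c} K_ij + 1/m^2 * sum_{j,l in c} K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, members].sum(axis=1) / m
                          + K[members][:, members].sum() / (m * m))
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Example with an RBF kernel on random 2-D points (dummy data).
X = np.random.default_rng(1).normal(size=(200, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
print(kernel_kmeans(K, n_clusters=3)[:10])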

KFSENet: A Key Frame-Based Skeleton Feature Estimation and Action Recognition Network for Improved Robot Vision with Face and Emotion Recognition

Dinh-Son Le, Hai-Hong Phan, Ha Huy Hung, Van-An Tran, The-Hung Nguyen, Dinh-Quan Nguyen
2022 Applied Sciences  
Moreover, our proposed framework integrates face and emotion recognition to enable social robots to engage in more personal interactions with humans.  ...  Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Not applicable.  ...
doi:10.3390/app12115455 fatcat:ilpgzrfvcrbapkve7ylzfcu3qq
Showing results 1 — 15 out of 47,277 results