Viewpoint Selection for Human Actions

Dmitry Rudoy, Lihi Zelnik-Manor
2011 International Journal of Computer Vision  
Typically (e.g., in TV broadcasts), a human producer manually selects the best view.  ...  We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume.  ...  Hence, our view selection methods performed well. Conclusion: This paper presented a method for selection of the best viewpoint for human actions.  ... 
doi:10.1007/s11263-011-0484-5 fatcat:q7cpk5hb4vch5iaryk6urdptsq

Posing to the Camera: Automatic Viewpoint Selection for Human Actions [chapter]

Dmitry Rudoy, Lihi Zelnik-Manor
2011 Lecture Notes in Computer Science  
In this paper we propose a method for evaluating the quality of a view captured by a single camera. This can be used to automate viewpoint selection.  ...  We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume.  ...  Conclusion: This paper presented a method for viewpoint quality estimation of human actions.  ... 
doi:10.1007/978-3-642-19282-1_25 fatcat:lun7si5leffrjl73oppssjhvu4

Viewpoint-Aware Action Recognition using Skeleton-Based Features from Still Images

Seong-heum Kim, Donghyeon Cho
2021 Electronics  
A real-world application for recognizing various actions was also qualitatively demonstrated.  ...  In this paper, we propose a viewpoint-aware action recognition method using skeleton-based features from static images. Our method consists of three main steps.  ...  We also appreciate Je-hyeong Kim, Heon-ki Kim at IOYS, Geun-tae Ryu and Ka-yeong Kim at IPL, and lastly Ki-san Hwang at Yujin Robot for system implementation and mobile demonstration.  ... 
doi:10.3390/electronics10091118 fatcat:gita5i46nves7jvkyrkcdz6hlu

A Multiviewpoint Outdoor Dataset for Human Action Recognition

Asanka G. Perera, Yee Wei Law, Titilayo T. Ogunwa, Javaan Chahl
2020 IEEE Transactions on Human-Machine Systems  
Owing to the articulated nature of the human body, it is challenging to detect an action from multiple viewpoints, particularly from an aerial viewpoint.  ...  However, human action recognition is still far from human-level performance.  ...  We thank Anoop Cherian for his help with our kernelized rank pooling implementation.  ... 
doi:10.1109/thms.2020.2971958 fatcat:q4gs4twbyjbsdgsc7rqc5zbfhy

Human Action Recognition from Depth Videos Using Pool of Multiple Projections with Greedy Selection

Chien-Quang LE, Sang PHAN, Thanh Duc NGO, Duy-Dinh LE, Shin'ichi SATOH, Duc Anh DUONG
2016 IEICE transactions on information and systems  
Then, we train and test action classifiers independently for each projection. To reduce the computational cost, we propose a greedy method to select a small yet robust combination of projections.  ...  Thus, a large number of projections, which may be useful for discriminating actions, are discarded.  ...  For 3D Action Pairs, the selected subset includes viewpoints: 1, 10, and 13.  ... 
doi:10.1587/transinf.2015edp7430 fatcat:bikwtan5zzenfhl64et6ecgir4
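
The snippet above mentions a greedy method for picking a small yet robust combination of projections. The following is only a minimal forward-selection sketch under assumed interfaces: the `evaluate` callable (e.g., validation accuracy of classifiers trained on a candidate subset) and the candidate identifiers are hypothetical placeholders, not the authors' implementation.

```python
def greedy_select_projections(candidates, evaluate, budget=3):
    """Greedily pick up to `budget` projections whose combination scores best.

    candidates : list of projection identifiers (e.g., viewpoint indices)
    evaluate   : callable(subset) -> score (e.g., validation accuracy of
                 action classifiers trained on that subset of projections)
    """
    selected, best_score = [], float("-inf")
    while len(selected) < budget:
        best_candidate = None
        for c in candidates:
            if c in selected:
                continue
            score = evaluate(selected + [c])
            if score > best_score:          # keep the candidate that improves most
                best_score, best_candidate = score, c
        if best_candidate is None:          # no remaining candidate improves the score
            break
        selected.append(best_candidate)
    return selected, best_score
```

With a suitable `evaluate`, a call such as `greedy_select_projections(list(range(1, 16)), evaluate, budget=3)` returns a small subset of viewpoint indices, in the spirit of the "viewpoints 1, 10, and 13" subset the snippet reports for 3D Action Pairs.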

Motion overview of human actions

Jackie Assa, Daniel Cohen-Or, I-Cheng Yeh, Tong-Yee Lee
2008 ACM SIGGRAPH Asia 2008 Papers - SIGGRAPH Asia '08  
Figure 1: Examples of poor selections from camera control algorithms which do not consider the human actions. (top row) Poor selection of viewpoint and occlusion of significant body parts for the actions.  ... 
doi:10.1145/1457515.1409068 fatcat:x6tr3igykbeqdjcgfiknlqg67y

Motion overview of human actions

Jackie Assa, Daniel Cohen-Or, I-Cheng Yeh, Tong-Yee Lee
2008 ACM Transactions on Graphics  
Figure 1: Examples of poor selections from camera control algorithms which do not consider the human actions. (top row) Poor selection of viewpoint and occlusion of significant body parts for the actions.  ... 
doi:10.1145/1409060.1409068 fatcat:hy7nrwlgzjesxlqtrasc6jbwxe

Deep Reinforcement Learning for Active Human Pose Estimation [article]

Erik Gärtner, Aleksis Pirinen, Cristian Sminchisescu
2020 arXiv   pre-print
In extensive experiments with the Panoptic multi-view setup, and for complex scenes containing multiple people, we show that our model learns to select viewpoints that yield significantly more accurate  ...  Most 3d human pose estimation methods assume that input -- be it images of a scene collected from one or several viewpoints, or from a video -- is given.  ...  As described earlier, the two types of actions are viewpoint selection and continue. We will next cover the reward functions for them. Viewpoint selection reward.  ... 
arXiv:2001.02024v2 fatcat:nmqdjxmmm5aq3abzmli3bdtogq
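
The snippet above distinguishes two action types, viewpoint selection and continue. Below is a rough, self-contained sketch of how such an episode loop might be structured; the `random_policy`, the view identifiers, and the stopping rule are invented placeholders, not the authors' learned policy or reward design.

```python
import random

def random_policy(selected_views, available_views):
    """Either pick an unseen viewpoint or issue a 'continue' (stop) action."""
    unseen = [v for v in available_views if v not in selected_views]
    if not unseen or random.random() < 0.3:
        return ("continue",)
    return ("view", random.choice(unseen))

def active_pose_episode(policy, available_views, max_views=5):
    """Collect viewpoints until the policy issues 'continue' or the budget runs out."""
    selected = []
    while len(selected) < max_views:
        action = policy(selected, available_views)
        if action[0] == "continue":
            break
        selected.append(action[1])
    return selected  # downstream: fuse these views into a 3d pose estimate

if __name__ == "__main__":
    print(active_pose_episode(random_policy, available_views=list(range(30))))
```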

Editorial: Social Responsibility—Measures and Measurement Viewpoint

Matjaz Mulej, Anita Hrast, Zdenka Zenko
2013 Systemic Practice and Action Research  
The collected articles address very different practices and actions.  ...  Measurement provides information as a basis for requisitely holistically chosen measures. But: how can one measure the level of holism as the central concept of systemic behavior?  ...  This actual situation is the action under systemic research in this collection of articles. The selected viewpoint is measures and measurement as the informative basis for action.  ... 
doi:10.1007/s11213-013-9297-5 fatcat:3scofabrwncvzlhmku2ny7wdo4

Deep Reinforcement Learning for Active Human Pose Estimation

Erik Gärtner, Aleksis Pirinen, Cristian Sminchisescu
2020 Proceedings of the AAAI Conference on Artificial Intelligence  
In extensive experiments with the Panoptic multi-view setup, and for complex scenes containing multiple people, we show that our model learns to select viewpoints that yield significantly more accurate  ...  Most 3d human pose estimation methods assume that input – be it images of a scene collected from one or several viewpoints, or from a video – is given.  ...  As described earlier, the two types of actions are viewpoint selection and continue. We will next cover the reward functions for them. Viewpoint selection reward.  ... 
doi:10.1609/aaai.v34i07.6714 fatcat:j32l3gxotffpbkgvngurnqcr5u

Viewpoints AI: Procedurally Representing and Reasoning about Gestures

Mikhail Jacob, Alexander Zook, Brian Magerko
2013 Conference of the Digital Games Research Association  
Viewpoints is a contemporary theatrical composition technique for understanding the expressive powers of gesture used to formally describe a dance performance or theatrical movement (Bogart 2005) .  ...  Toward this end, we describe our prototype for live interaction with a projected virtual agent in an interactive installation piece.  ...  ACKNOWLEDGMENTS The authors would like to thank Adam Fristoe for introducing them to the Viewpoints technique and for providing invaluable Viewpoints expertise and feedback in developing the Viewpoints  ... 
dblp:conf/digra/JacobZM13 fatcat:soe47hyxbvfhpcqphufb6ruvfm

Learning Human Pose Models from Synthesized Data for Robust RGB-D Action Recognition [article]

Jian Liu, Naveed Akhtar, Ajmal Mian
2018 arXiv   pre-print
Experiments on three benchmark cross-view human action datasets show that our algorithm outperforms existing methods by significant margins for RGB only and RGB-D action recognition.  ...  We propose Human Pose Models that represent RGB and depth images of human poses independent of clothing textures, backgrounds, lighting conditions, body shapes and camera viewpoints.  ...  The Tesla K-40 GPU used for this research was donated by the NVIDIA Corporation.  ... 
arXiv:1707.00823v2 fatcat:poz2y4vr3vbohgssiwe7qgc7lu

Evaluation of Organizational Structure in Emergency from the Viewpoint of Communication [chapter]

S. Nishida, M. Nakatani, Y. Hijikata, T. Koiso
2002 Knowledge and Technology Integration in Production and Services  
This paper focuses on evaluation of organizational structure in emergency from the communication viewpoint. The communication process in emergency is analyzed first, and the problems caused in the process  ...  model is proposed, in which human-related factors such as "competence", "duty", "responsibility" and "knowledge" are considered. Then a system to evaluate organizational structure in emergency from the viewpoint  ...  This work was partially supported by the Japan Society for the Promotion of Science under Grant-in-Aid for Creative Scientific Research (Project No. 13S0018)  ... 
doi:10.1007/978-0-387-35613-6_18 fatcat:sx35urjqxvaodkefowodczphr4

Evaluation of Organizational Structure in Emergency from the Viewpoint of Communication

Shogo Nishida, Takashi Koiso, Mie Nakatani
2000 IFAC Proceedings Volumes  
This paper focuses on evaluation of organizational structure in emergency from the communication viewpoint. The communication process in emergency is analyzed first, and the problems caused in the process  ...  model is proposed, in which human-related factors such as "competence", "duty", "responsibility" and "knowledge" are considered. Then a system to evaluate organizational structure in emergency from the viewpoint  ...  This work was partially supported by the Japan Society for the Promotion of Science under Grant-in-Aid for Creative Scientific Research (Project No. 13S0018)  ... 
doi:10.1016/s1474-6670(17)37293-2 fatcat:4xmnl53asbbwhngqlsjfpmdinm

Best Viewpoints for External Robots or Sensors Assisting Other Robots [article]

Jan Dufek, Xuesu Xiao, Robin R. Murphy
2020 arXiv   pre-print
This model will enable autonomous selection of the best possible viewpoint and path planning for the assistant robot.  ...  In this approach, viewpoints for the affordances are rated based on the psychomotor behavior of human operators and clustered into manifolds of viewpoints with the equivalent value.  ...  The model will also allow the robotic visual assistant to select a viewpoint for each action that enables direct apprehension of the affordance for that action reducing the need for high-workload deliberative  ... 
arXiv:2007.10452v1 fatcat:js22m7gp4jcttepxx6xvomecfq
Showing results 1–15 of 203,511.