







14,848 Hits in 7.9 sec

Automatic Dataset Expansion with Structured Feature Learning for Human Lying Pose Detection

Daoxun Xia, Lingjin Zhao, Fang Guo, Xi Chen
2019 · IEEE Access
INDEX TERMS: human lying pose detection, automatic dataset expansion, perspective transformation, Gibbs sampling, deep learning. … An important problem with lying pose detection is the training dataset, which hardly accounts for each possible body configuration. … The challenge in pose estimation is to perform the estimation by using structured feature learning, one for each pose. …
doi:10.1109/access.2019.2962100 · fatcat:vcwm7rs4rrgijgnqjazcff5gmq
Fulltext [PDF]: https://web.archive.org/web/20201108120744/https://ieeexplore.ieee.org/ielx7/6287639/8948470/08941100.pdf

Vision-Based Fallen Person Detection for the Elderly [article]

Markus D. Solbach, John K. Tsotsos
2017-08-15 · arXiv · pre-print
The key novelty is a human fall detector which uses a CNN-based human pose estimator in combination with stereo data to reconstruct the human pose in 3D and estimate the ground plane in 3D. … With this paper we present a new, non-invasive system for fallen people detection. Our approach uses only stereo camera data for passively sensing the environment. … This research was supported by several sources for which the authors are grateful: the NSERC Canadian Field Robotics Network (2016-0157), the Canada Research Chairs Program (950-219525), and the Natural …
arXiv:1707.07608v2 · fatcat:ijafurh7pjhj7d7qwe4szksbwi
Fulltext [PDF]: https://web.archive.org/web/20200911222705/https://arxiv.org/pdf/1707.07608v2.pdf

Vision-Based Fallen Person Detection for the Elderly

Markus D. Solbach, John K. Tsotsos
2017 · 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
The key novelty is a human fall detector which uses a CNN-based human pose estimator in combination with stereo data to reconstruct the human pose in 3D and estimate the ground plane in 3D. … With this paper we present a new, non-invasive system for fallen people detection. Our approach uses only stereo camera data for passively sensing the environment. … This research was supported by several sources for which the authors are grateful: the NSERC Canadian Field Robotics Network (2016-0157), the Canada Research Chairs Program (950-219525), and the Natural …
doi:10.1109/iccvw.2017.170 · dblp:conf/iccvw/SolbachT17 · fatcat:dsfn6toi2fformpqrhbjdvscpa
Fulltext [PDF]: https://web.archive.org/web/20200319035805/http://openaccess.thecvf.com/content_ICCV_2017_workshops/papers/w22/Solbach_Vision-Based_Fallen_Person_ICCV_2017_paper.pdf

Developing visual competencies for socially assistive robots

K. Papoutsakis, P. Padeleris, A. Ntelidakis, S. Stefanou, X. Zabulis, D. Kosmopoulos, A. A. Argyros
2013 · Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments - PETRA '13 (ACM Press)
We present the key modules of independent motion detection, object detection, body localization, person tracking, head pose estimation and action recognition, and we explain how they serve the goal of natural … In this paper, we present our approach towards developing visual competencies for socially assistive robots within the framework of the HOBBIT project. … independent motion detection, object detection, body localization, tracking, head pose estimation and human action recognition. …
doi:10.1145/2504335.2504395 · dblp:conf/petra/PapoutsakisPNSZKA13 · fatcat:hjfha7dwj5byfp4gzu3aql2tnm
Fulltext [PDF]: https://web.archive.org/web/20170706023006/http://users.ics.forth.gr/~argyros/mypapers/2013_05_petra_hobbit.pdf

In Defense of the Direct Perception of Affordances [article]

David F. Fouhey and Xiaolong Wang and Abhinav Gupta
2015-05-05 · arXiv · pre-print
The field of functional recognition or affordance estimation from images has seen a revival in recent years. … As originally proposed by Gibson, the affordances of a scene were directly perceived from the ambient light: in other words, functional properties like sittable were estimated directly from incoming pixels … The authors thank NVIDIA for GPU donations. The authors would like to thank Abhinav Shrivastava and Ishan Misra for helpful discussions. …
arXiv:1505.01085v1 · fatcat:5swwrc2h5becjmea3p6sa6fm2a
Fulltext [PDF]: https://web.archive.org/web/20200917231316/https://arxiv.org/pdf/1505.01085v1.pdf

Cascaded Models for Articulated Pose Estimation [chapter]

Benjamin Sapp, Alexander Toshev, Ben Taskar
2010 · Lecture Notes in Computer Science (Springer Berlin Heidelberg)
We address the problem of articulated human pose estimation by learning a coarse-to-fine cascade of pictorial structure models. … We propose to learn a sequence of structured models at different pose resolutions, where coarse models filter the pose space for the next level via their max-marginals. … Introduction: Pictorial structure models [1] are a popular method for human body pose estimation [2][3][4][5][6]. …
doi:10.1007/978-3-642-15552-9_30 · fatcat:uq6zouh655a2bnb74hps2yf3bu
Fulltext [PDF]: https://web.archive.org/web/20170810225050/https://homes.cs.washington.edu/~taskar/pubs/eccv10.pdf

Loss Guided Activation for Action Recognition in Still Images [article]

Lu Liu, Robby T. Tan, Shaodi You
2018-12-11 · arXiv · pre-print
One significant problem of deep-learning based human action recognition is that it can be easily misled by the presence of irrelevant objects or backgrounds. … We propose a multi-task deep learning method that jointly predicts the human action class and human location heatmap. … To name a few, HyperFace [16] jointly learns face detection, landmark localization, pose estimation and gender recognition tasks, and improves individual performances. …
arXiv:1812.04194v1 · fatcat:bhqctonywbbmxaacyvg3ndvjgq
Fulltext [PDF]: https://web.archive.org/web/20191014090310/https://arxiv.org/pdf/1812.04194v1.pdf

Human Body Poses Recognition Using Neural Networks with Data Augmentation

Ahmad al-Qerem, Zarqa University, Jordan
2019-10-15 · International Journal of Advanced Trends in Computer Science and Engineering (The World Academy of Research in Science and Engineering)
Learning rich features from objectness estimation for human lying-pose detection. Multimedia Systems, 23(4), 515-526. https://doi.org/10.1007/s00530-016-0518-5 … evaluation between dimensions for results [1]. This issue is very essential for future intelligent mapping of human body poses to the right estimation. …
doi:10.30534/ijatcse/2019/40852019 · fatcat:tmacxo4jjvgijalnufpwgclhsm
Fulltext [PDF]: https://web.archive.org/web/20220308234646/http://www.warse.org/IJATCSE/static/pdf/file/ijatcse40852019.pdf

Predicting the Location of "interactees" in Novel Human-Object Interactions [chapter]

Chao-Yeh Chen, Kristen Grauman
2015 · Lecture Notes in Computer Science (Springer International Publishing)
The result is a human interaction-informed saliency metric, which we show is valuable for both improved object detection and image retargeting applications. … Having learned the generic, action-independent connections between (1) a person's pose, gaze, and scene cues and (2) the interactee object's position and scale, our method estimates a probability distribution … Acknowledgements: This research is supported in part by a DARPA PECASE award from ONR. …
doi:10.1007/978-3-319-16865-4_23 · fatcat:z7n2q4iiz5av3otd3f4m7uxwuu
Fulltext [PDF]: https://web.archive.org/web/20150321042109/http://www.cs.utexas.edu/%7Egrauman/papers/accv_2014_interactee.pdf

Iterative Improvement of Human Pose Classification Using Guide Ontology

Kazuhiro TASHIRO, Takahiro KAWAMURA, Yuichi SEI, Hiroyuki NAKAGAWA, Yasuyuki TAHARA, Akihiko OHSUGA
2016 · IEICE Transactions on Information and Systems
We thus propose a method that refines the result of human pose estimation by a Pose Guide Ontology (PGO) and a set of energy functions. … Although advances in computer vision research have made huge contributions to image recognition, it is not enough to estimate human poses accurately. … They employed the GO as a semantic source of background knowledge, and proposed an Object Relation Network to transfer rich semantics in the GO to the detected objects and their relations in the image. …
doi:10.1587/transinf.2015edp7067 · fatcat:eu25ek6lrrd2zagqfb7zbv6lyu
Fulltext [PDF]: https://web.archive.org/web/20181031172708/https://www.jstage.jst.go.jp/article/transinf/E99.D/1/E99.D_2015EDP7067/_pdf

Recognition and 3D Localization of Pedestrian Actions from Monocular Video [article]

Jun Hayakawa, Behzad Dariush
2020-08-03 · arXiv · pre-print
The estimated pose and associated body key-points are also used as input to a network that estimates the 3D location of the pedestrian using a unique loss function. … This paper focuses on monocular pedestrian action recognition and 3D localization from an egocentric view for the purpose of predicting intention and forecasting future trajectory. … Moreover, 3D pose estimation from a monocular image [22] proposes an adversarial learning framework that can estimate the 3D human pose structures learned from the fully annotated dataset with only 2D …
arXiv:2008.01162v1 · fatcat:2fys4aevyrcxhp3fdmvjjgxosq
Fulltext [PDF]: https://web.archive.org/web/20200810225953/https://arxiv.org/pdf/2008.01162v1.pdf

Learning from Synthetic Animals [article]

Jiteng Mu, Weichao Qiu, Gregory Hager, Alan Yuille
2020-04-05 · arXiv · pre-print
Despite great success in human parsing, progress for parsing other deformable articulated objects, like animals, is still limited by the lack of labeled data. … Our synthetic dataset contains 10+ animals with diverse poses and rich ground truth, which enables us to use the multi-task learning strategy to further boost models' performance. … The authors would like to thank Chunyu Wang, Qingfu Wan, Yi Zhang for helpful discussions. …
arXiv:1912.08265v2 · fatcat:y4ykhdv5afbdjpbscstblyvrqy
Fulltext [PDF]: https://web.archive.org/web/20200408001137/https://arxiv.org/pdf/1912.08265v2.pdf

Multi-person 3D pose estimation from 3D cloud data using 3D convolutional neural networks

Manolis Vasileiadis, Christos-Savvas Bouganis, Dimitrios Tzovaras
2019 · Computer Vision and Image Understanding (Elsevier BV)
… for 3D human pose estimation from 3D data. … While, in the last few years, there has been an increasing number of research approaches towards CNN-based 2D human pose estimation from RGB images, respective work on CNN-based 3D human pose estimation … Du et al. (2016) introduced additional built-in knowledge for reconstructing the 2D pose, thus formulating an objective function to estimate the 3D pose from the detected 2D pose, while Zhou et al. (2016a …
doi:10.1016/j.cviu.2019.04.011 · fatcat:2uyj2x2zm5fktnjy2bda5mgile
Fulltext [PDF]: https://web.archive.org/web/20210428080752/https://spiral.imperial.ac.uk:8443/bitstream/10044/1/70442/2/Vasileiadis_CVIU2019.pdf

4D-OR: Semantic Scene Graphs for OR Domain Modeling [article]

Ege Özsoy, Evin Pınar Örnek, Ulrich Eck, Tobias Czempiel, Federico Tombari, Nassir Navab
2022-03-22 · arXiv · pre-print
… and object poses, and clinical roles. … Towards this goal, for the first time, we propose using semantic scene graphs (SSG) to describe and summarize the surgical scene. … For human and object pose estimation, we use the state-of-the-art methods VoxelPose [27] and Group-Free [16]. …
arXiv:2203.11937v1 · fatcat:6jl6ycvbzrbgviyt357fnimtoe
Fulltext [PDF]: https://web.archive.org/web/20220521144400/https://arxiv.org/pdf/2203.11937v1.pdf

Synthesizing Training Images for Boosting Human 3D Pose Estimation

Wenzheng Chen, Huan Wang, Yangyan Li, Hao Su, Zhenhua Wang, Changhe Tu, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen
2016 · 2016 Fourth International Conference on 3D Vision (3DV)
Human 3D pose estimation from a single image is a challenging task with numerous applications. … We present a fully automatic, scalable approach that samples the human pose space for guiding the synthesis procedure and extracts clothing textures from real images. … The synthetic images may look fake to a human, but exhibit a rich diversity of poses and appearance for pushing CNNs to learn better. …
doi:10.1109/3dv.2016.58 · dblp:conf/3dim/ChenWLSWTLCC16 · fatcat:74ascdzccjaezm43gxmbf2pfbe
Fulltext [PDF]: https://web.archive.org/web/20170809051051/http://www.cs.huji.ac.il/~danix/publications/Deep3DPose.pdf