11,746 Hits in 6.8 sec

Differentiable Hierarchical Graph Grouping for Multi-Person Pose Estimation [article]

Sheng Jin, Wentao Liu, Enze Xie, Wenhai Wang, Chen Qian, Wanli Ouyang, Ping Luo
2020-07-23 · arXiv (pre-print)
Especially, we propose a novel differentiable Hierarchical Graph Grouping (HGG) method to learn the graph grouping in the bottom-up multi-person pose estimation task. ... Multi-person pose estimation is challenging because it localizes body keypoints for multiple persons simultaneously. ... We reformulate the task of multi-person pose estimation as a graph clustering problem and present the first fully end-to-end trainable framework with grouping supervision for bottom-up multi-person pose ...
arXiv:2007.11864v1 (https://arxiv.org/abs/2007.11864v1) · fatcat:roh5uaheljgsrgfdjwa6gzj664
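The HGG snippet above recasts bottom-up pose estimation as graph clustering: detected keypoints are nodes, pairwise affinities are edges, and each person is one cluster. A minimal, hypothetical illustration of that grouping step, with made-up affinity scores and a plain thresholded union-find standing in for the paper's learned, differentiable module:

```python
# Toy sketch of keypoint grouping as graph clustering (illustrative
# only; HGG learns the grouping end-to-end, which this does not).

def group_keypoints(num_nodes, edges, threshold=0.5):
    """Union-find over edges whose affinity meets `threshold`;
    each resulting connected component is one person candidate."""
    parent = list(range(num_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j, affinity in edges:
        if affinity >= threshold:
            parent[find(i)] = find(j)

    groups = {}
    for i in range(num_nodes):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Five keypoints; strong affinities link {0,1,2} and {3,4}, while the
# weak 2-3 edge (0.1) is pruned, yielding two person clusters.
edges = [(0, 1, 0.9), (1, 2, 0.8), (3, 4, 0.95), (2, 3, 0.1)]
print(group_keypoints(5, edges))  # → [[0, 1, 2], [3, 4]]
```

Only the clustering step is shown; in a real bottom-up pipeline the affinity scores themselves would come from a learned pairwise model.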

Recent Advances in Monocular 2D and 3D Human Pose Estimation: A Deep Learning Perspective [article]

Wu Liu, Qian Bao, Yu Sun, Tao Mei
2021-04-23 · arXiv (pre-print)
2D and 3D, and the complex multi-person scenarios. ... We believe this survey will provide the readers with a deep and insightful understanding of monocular human pose estimation. ... Fig. 3: Typical framework for single-person pose estimation. Fig. 4: Typical frameworks for multi-person pose estimation. ...
arXiv:2104.11536v1 (https://arxiv.org/abs/2104.11536v1) · fatcat:tdag2jq2vjdrjekwukm5nu7l6a

Recent Advances of Monocular 2D and 3D Human Pose Estimation: A Deep Learning Perspective

Wu Liu, Tao Mei
2022-03-31 · ACM Computing Surveys (Association for Computing Machinery)
Furthermore, we analyze the solutions for challenging cases, such as the lack of data, the inherent ambiguity between 2D and 3D, and the complex multi-person scenarios. ... Especially, we provide insightful analyses of the intrinsic connections and method evolution from 2D to 3D pose estimation. ... For example, the graph partitioning-based methods in [1, 67, 71] extend the image-level bottom-up multi-person pose estimation [15, 160]. ...
doi:10.1145/3524497 (https://doi.org/10.1145/3524497) · fatcat:4pbvntngrnfp7lqhcpjmy7p2fq

Multi-person Articulated Tracking with Spatial and Temporal Embeddings [article]

Sheng Jin, Wentao Liu, Wanli Ouyang, Chen Qian
2019-03-21 · arXiv (pre-print)
We propose a unified framework for multi-person pose estimation and tracking. Our framework consists of two main components, i.e., SpatialNet and TemporalNet. ... We model the grouping procedure as a differentiable Pose-Guided Grouping (PGG) module to make the whole part detection and grouping pipeline fully end-to-end trainable. ... Related work: recent multi-person pose estimation approaches can be classified into top-down and bottom-up methods. ...
arXiv:1903.09214v1 (https://arxiv.org/abs/1903.09214v1) · fatcat:fnxhh7qqhfemnj3zhfqayj7gkq

2020 Index IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 42

2021 · IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE)
., +, TPAMI June 2020 1515-1521 LCR-Net++: Multi-Person 2D and 3D Pose Detection in Natural Images.  ...  Poullis, C., TPAMI May 2020 1132-1145 LCR-Net++: Multi-Person 2D and 3D Pose Detection in Natural Images.  ...  ., +, TPAMI March 2020 568-579 Hierarchical Gaussian Descriptors with Application to Person Re-Identification.  ... 
doi:10.1109/tpami.2020.3036557 (https://doi.org/10.1109/tpami.2020.3036557) · fatcat:3j6s2l53x5eqxnlsptsgbjeebe

Learning to Forecast Videos of Human Activity with Multi-granularity Models and Adaptive Rendering [article]

Mengyao Zhai, Jiacheng Chen, Ruizhi Deng, Lei Chen, Ligeng Zhu, Greg Mori
2017-12-05 · arXiv (pre-print)
An architecture combining a hierarchical temporal model for predicting human poses with encoder-decoder convolutional neural networks for rendering target appearances is proposed. ... Our hierarchical model captures interactions among people by adopting a dynamic group-based interaction mechanism. ... Given input frames, the poses of each person in the scene are estimated; then (b) our multi-granularity LSTM predicts future poses of each person (temporal links for LSTM nodes are omitted; the red node denotes ...
arXiv:1712.01955v1 (https://arxiv.org/abs/1712.01955v1) · fatcat:22a5mvzl55g4bdbjj367jup22e

A Comprehensive Review of Group Activity Recognition in Videos

Li-Fang Wu, Qi Wang, Meng Jian, Yu Qiao, Bo-Xuan Zhao
2021-01-11 · International Journal of Automation and Computing (Springer)
From this comprehensive literature review, readers can obtain an overview of progress in group activity recognition for future studies.  ...  Finally, we outline several challenging issues and possible directions for future research.  ...  This method depends on pose estimation. Lu et al.  ... 
doi:10.1007/s11633-020-1258-8 (https://doi.org/10.1007/s11633-020-1258-8) · fatcat:ycka4thcy5a6vghpenpthtrndi

The Center of Attention: Center-Keypoint Grouping via Attention for Multi-Person Pose Estimation [article]

Guillem Brasó, Nikita Kister, Laura Leal-Taixé
2021-10-11 · arXiv (pre-print)
Our approach uses a transformer to obtain context-aware embeddings for all detected keypoints and centers and then applies multi-head attention to directly group joints into their corresponding person ... We introduce CenterGroup, an attention-based framework to estimate human poses from a set of identity-agnostic keypoints and person center predictions in an image. ... architecture for explainable single-person pose estimation. ...
arXiv:2110.05132v1 (https://arxiv.org/abs/2110.05132v1) · fatcat:azof6anzebc2dh3377qchsreo4
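The CenterGroup snippet describes grouping as attention between keypoint and person-center embeddings. A toy sketch of that assignment rule, using hand-picked 2-D embeddings and a plain softmax over dot products in place of the paper's learned transformer and multi-head attention:

```python
# Toy sketch of center-keypoint grouping via attention weights
# (illustrative only; the embeddings and the single-head dot-product
# attention here are stand-ins, not CenterGroup's architecture).
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def assign_keypoints(keypoint_embs, center_embs):
    """Assign each keypoint to the center with the largest
    attention weight (softmax over dot-product scores)."""
    assignments = []
    for kp in keypoint_embs:
        scores = [sum(k * c for k, c in zip(kp, ctr)) for ctr in center_embs]
        weights = softmax(scores)
        assignments.append(weights.index(max(weights)))
    return assignments

centers = [[1.0, 0.0], [0.0, 1.0]]           # two person centers
keypoints = [[0.9, 0.1], [0.2, 0.8], [1.1, -0.2]]
print(assign_keypoints(keypoints, centers))  # → [0, 1, 0]
```

Because the softmax is monotone, the argmax over attention weights equals the argmax over raw scores; the weights matter when the assignment is made soft and differentiable for training.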

Deep Learning-Based Human Pose Estimation: A Survey [article]

Ce Zheng, Wenhan Wu, Chen Chen, Taojiannan Yang, Sijie Zhu, Ju Shen, Nasser Kehtarnavaz, Mubarak Shah
2022-01-23 · arXiv (pre-print)
The goal of this survey paper is to provide a comprehensive review of recent deep learning-based solutions for both 2D and 3D pose estimation via a systematic analysis and comparison of these solutions ... Furthermore, 2D and 3D human pose estimation datasets and evaluation metrics are included. ... [90] proposed a new differentiable Hierarchical Graph Grouping method to learn the human part grouping. Based on [169] and [218], Cheng et al. ...
arXiv:2012.13392v4 (https://arxiv.org/abs/2012.13392v4) · fatcat:ypnqtq3sbncr5fuujif2dhqwji

Guest Editorial Introduction to the Special Section on Intelligent Visual Content Analysis and Understanding

Hongliang Li, Lu Fang, Tianzhu Zhang
2020 · IEEE Transactions on Circuits and Systems for Video Technology (IEEE)
VID applies adversarial learning to differentiate between estimated 3-D poses and real 3-D poses to avoid implausible results.  ...  To improve the performance of 3-D human pose estimation, especially under the context of diverse viewpoints, the paper "View invariant 3D human pose estimation," by Guo et al., proposes a marvelous view-invariant  ... 
doi:10.1109/tcsvt.2020.3031416 (https://doi.org/10.1109/tcsvt.2020.3031416) · fatcat:gpwbmydqbza5lddatxcfcidwcq

DeepCap: Monocular Human Performance Capture Using Weak Supervision [article]

Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt
2020-03-18 · arXiv (pre-print)
The network architecture is based on two separate networks that disentangle the task into a pose estimation and a non-rigid surface deformation step.  ...  Our method is trained in a weakly supervised manner based on multi-view supervision completely removing the need for training data with 3D ground truth annotations.  ...  We also compare to a multi-view baseline approach (MVBL), where we use our differentiable skeleton model in an optimization framework to solve for the pose per frame using the proposed multi-view losses  ... 
arXiv:2003.08325v1 (https://arxiv.org/abs/2003.08325v1) · fatcat:3sb7icxkhvhbteftru3a6kg27y

HDNet: Human Depth Estimation for Multi-Person Camera-Space Localization [article]

Jiahao Lin, Gim Hee Lee
2020-07-17 · arXiv (pre-print)
Current works on multi-person 3D pose estimation mainly focus on the estimation of the 3D joint locations relative to the root joint and ignore the absolute locations of each pose.  ...  Our HDNet first estimates the 2D human pose with heatmaps of the joints. These estimated heatmaps serve as attention masks for pooling features from image regions corresponding to the target person.  ...  Single-person 3D pose estimation. Approaches for 3D pose estimation can be generally categorized into two groups.  ... 
arXiv:2007.08943v1 (https://arxiv.org/abs/2007.08943v1) · fatcat:6pxzh47bprennpprionh264zy4
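The HDNet snippet says estimated joint heatmaps "serve as attention masks for pooling features from image regions corresponding to the target person". A minimal sketch of such heatmap-weighted pooling, with a tiny hand-made feature map and heatmap (illustrative values, not the paper's architecture):

```python
# Toy sketch of heatmap-guided feature pooling: the joint heatmap acts
# as an attention mask, so pooling reduces to a heatmap-weighted
# average of the per-location feature vectors (illustrative only).

def masked_pool(features, heatmap):
    """Heatmap-weighted average over an H x W grid of feature vectors."""
    total = sum(sum(row) for row in heatmap)
    dim = len(features[0][0])
    pooled = [0.0] * dim
    for feat_row, heat_row in zip(features, heatmap):
        for vec, weight in zip(feat_row, heat_row):
            for d in range(dim):
                pooled[d] += weight * vec[d]
    return [p / total for p in pooled]

# 2x2 feature map with 2-D features; the heatmap concentrates weight
# on the top-left location, so pooling is dominated by its feature.
features = [[[1.0, 0.0], [0.0, 1.0]],
            [[2.0, 2.0], [0.0, 0.0]]]
heatmap = [[0.8, 0.1],
           [0.1, 0.0]]
print(masked_pool(features, heatmap))
```

Locations with near-zero heatmap response contribute almost nothing, which is what lets the pooled feature focus on the target person rather than the whole image.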

Design sparse features for age estimation using hierarchical face model

Jinli Suo, Tianfu Wu, Songchun Zhu, Shiguang Shan, Xilin Chen, Wen Gao
2008 · 8th IEEE International Conference on Automatic Face & Gesture Recognition (IEEE)
The experimental results in this paper show that designing a feature set for age estimation under the guidance of a hierarchical face model is a promising method and a flexible framework as well. ... On age estimation, this paper follows the popular regression formulation for mapping feature vectors to age labels. ... In our estimation task, we adopt a hierarchical graph model for face representation, which is composed of a set of nodes and the edges connecting them. ...
doi:10.1109/afgr.2008.4813314 (https://doi.org/10.1109/afgr.2008.4813314) · dblp:conf/fgr/SuoWZSCG08 · fatcat:yk2hrdbfx5fsnbex6b6tqt5vnu

2020 Index IEEE Transactions on Image Processing Vol. 29

2020 · IEEE Transactions on Image Processing (IEEE)
., +, TIP 2020 2344-2355 Deep Spatial Transformation for Pose-Guided Person Image Generation and Animation.  ...  ., +, TIP 2020 7245-7260 Web-Shaped Model for Head Pose Estimation: An Approach for Best Exemplar Selection.  ... 
doi:10.1109/tip.2020.3046056 (https://doi.org/10.1109/tip.2020.3046056) · fatcat:24m6k2elprf2nfmucbjzhvzk3m

IEEE Access Special Section Editorial: Advanced Data Mining Methods for Social Computing

Yongqiang Zhao, Shirui Pan, Jia Wu, Huaiyu Wan, Huizhi Liang, Haishuai Wang, Huawei Shen
2020 · IEEE Access (IEEE)
The article by Gong and Zhu, ''Person re-identification based on two-stream network with attention and pose features,'' combines the advantages of pose estimation and attention mechanism to better solve  ...  proximity information extracted from the hierarchical topological structure of the input graph.  ... 
doi:10.1109/access.2020.3043060 (https://doi.org/10.1109/access.2020.3043060) · fatcat:qbqk5f4ojvadlazhk2mc343sra
Showing results 1–15 of 11,746