A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit <a rel="external noopener" href="http://openaccess.thecvf.com:80/content_cvpr_2018/papers/Wang_Temporal_Hallucinating_for_CVPR_2018_paper.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Temporal Hallucinating for Action Recognition with Few Still Images
<span title="">2018</span>
<i title="IEEE">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</a>
</i>
Action recognition in still images has recently been promoted by deep learning. ...
As spatial and temporal features are complementary to represent different actions, we apply spatial-temporal prediction fusion to further boost performance. ...
action recognition via spatial-temporal integration. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2018.00557">doi:10.1109/cvpr.2018.00557</a>
<a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/WangZQ18.html">dblp:conf/cvpr/WangZQ18</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oor6stwdfvdt7ojmglxpcpoekm">fatcat:oor6stwdfvdt7ojmglxpcpoekm</a>
</span>
CUHK & ETHZ & SIAT Submission to ActivityNet Challenge 2016
[article]
<span title="2016-08-02">2016</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
We follow the basic pipeline of temporal segment networks and further raise the performance via a number of other techniques. ...
Additionally, we incorporate the audio as a complementary channel, extracting relevant information via a CNN applied to the spectrograms. ...
improve the training procedure, e.g. temporal pre-training, and scale jittering augmentation. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1608.00797v1">arXiv:1608.00797v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ffawius2hfcs3dwklvwwxrhpue">fatcat:ffawius2hfcs3dwklvwwxrhpue</a>
</span>
Chained Multi-stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection
[article]
<span title="2017-05-26">2017</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. ...
Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB. ...
Acknowledgements We acknowledge funding by the ERC Starting Grant VideoLearn and the Freiburg Graduate School of Robotics. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1704.00616v2">arXiv:1704.00616v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ng2ldkw2mje2fo3cq3il67p3aq">fatcat:ng2ldkw2mje2fo3cq3il67p3aq</a>
</span>
Global Temporal Representation based CNNs for Infrared Action Recognition
[article]
<span title="2019-09-18">2019</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
The experimental results show that the proposed approach outperforms the representative state-of-the-art handcrafted features and deep learning features based methods for the infrared action recognition ...
We conduct the experiments on infrared action recognition datasets InfAR and NTU RGB+D. ...
Specifically, we use the temporal CNNs originated from the OF as local temporal stream, and replace the original spatial stream with one spatial-temporal stream and one global temporal stream via learning ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.08287v1">arXiv:1909.08287v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oceuepso2nbkljjb6ossddeoba">fatcat:oceuepso2nbkljjb6ossddeoba</a>
</span>
Temporal Hockey Action Recognition via Pose and Optical Flows
[article]
<span title="2018-12-22">2018</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
A novel two-stream framework has been designed to improve action recognition accuracy for hockey using three main components. ...
A novel publicly available dataset named HARPET (Hockey Action Recognition Pose Estimation, Temporal) was created, composed of sequences of annotated actions and pose of hockey players including their ...
It also demonstrates the complementary nature of pose estimation and optical flow in improving action recognition accuracy. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1812.09533v1">arXiv:1812.09533v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yc3vxgo2wvbljfa7wf46i7sd4a">fatcat:yc3vxgo2wvbljfa7wf46i7sd4a</a>
</span>
Guest Editorial Introduction to the Special Section on Intelligent Visual Content Analysis and Understanding
<span title="">2020</span>
<i title="Institute of Electrical and Electronics Engineers (IEEE)">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/jqw2pm7kwvhchpdxpcm5ryoic4" style="color: black;">IEEE transactions on circuits and systems for video technology (Print)</a>
</i>
To explore the complementary properties between the hand-crafted shallow feature representation and deep features, "Discriminative multi-view subspace feature learning for action recognition" by Sheng ...
Combined with a sequential classifier, the full model can exploit complementary temporal-dynamics information in an action sequence, achieving much higher recognition accuracy than previous works. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tcsvt.2020.3031416">doi:10.1109/tcsvt.2020.3031416</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gpwbmydqbza5lddatxcfcidwcq">fatcat:gpwbmydqbza5lddatxcfcidwcq</a>
</span>
Cross-Model Pseudo-Labeling for Semi-Supervised Action Recognition
[article]
<span title="2022-04-18">2022</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Semi-supervised action recognition is a challenging but important task due to the high cost of data annotation. ...
We observe that, due to their different structural biases, these two models tend to learn complementary representations from the same video clips. ...
Acknowledgement We thank Nanxuan Zhao for discussion and comments about this work. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.09690v2">arXiv:2112.09690v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/27w2ncb34jhqjicxhhwao2upvi">fatcat:27w2ncb34jhqjicxhhwao2upvi</a>
</span>
Dual-AI: Dual-path Actor Interaction Learning for Group Activity Recognition
[article]
<span title="2022-04-06">2022</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Via self-supervised actor consistency in both frame and video levels, MAC-Loss can effectively distinguish individual actor representations to reduce action confusion among different actors. ...
Learning spatial-temporal relation among multiple actors is crucial for group activity recognition. Different group activities often show the diversified interactions between actors in the video. ...
Foundation of China (61876176, U1813218), the Joint Lab of CAS-HK, Guangdong NSF Project (No.2020B1515120085), the Shenzhen Research Program (RCJC20200714114557087), the Shanghai Committee of Science and ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.02148v2">arXiv:2204.02148v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/3pbjsthw6bderks4xcvydfwmle">fatcat:3pbjsthw6bderks4xcvydfwmle</a>
</span>
Collaborative Attention Mechanism for Multi-View Action Recognition
[article]
<span title="2020-11-25">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Multi-view action recognition (MVAR) leverages complementary temporal information from different views to improve the learning performance. ...
It paves a novel way to leverage attention information and enhances the multi-view representation learning. ...
Temporal Attention for Action Recognition Given an action sample and the corresponding label, the temporal attention model aims to encode the sequential input and optimize the following objective: θ * ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06599v2">arXiv:2009.06599v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gzwmxgsoebfnnlr3mnrqfgne2a">fatcat:gzwmxgsoebfnnlr3mnrqfgne2a</a>
</span>
Multi-Modal Human Action Recognition With Sub-Action Exploiting and Class-Privacy Preserved Collaborative Representation Learning
<span title="">2020</span>
<i title="Institute of Electrical and Electronics Engineers (IEEE)">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a>
</i>
The sub-action based depth motion and skeleton features are then extracted and fused. ...
It models long-range temporal structure over video sequences to better distinguish the similar actions bearing sub-action sharing phenomenon. ...
The proposed method has recognition accuracy of 87.0%, 90.7% and 91.2%, 94.2% for the schemes Time-guided+Fusion via CCA-parallel, Time-guided+Fusion via CCA-serial and Energy-guided+Fusion via CCA-parallel ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.2976496">doi:10.1109/access.2020.2976496</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/357i4iqivzhjpitbgttmjix3ri">fatcat:357i4iqivzhjpitbgttmjix3ri</a>
</span>
Recognizing Human Actions as the Evolution of Pose Estimation Maps
<span title="">2018</span>
<i title="IEEE">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition</a>
</i>
Most video-based action recognition approaches choose to extract features from the whole video to recognize actions. ...
The complementary properties between both types of images are explored by deep convolutional neural networks to predict action label. ...
[9] first used hierarchical RNNs for pose-based action recognition. Liu et al. [25] extended this idea and proposed a spatio-temporal LSTM to learn spatial and temporal domains. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2018.00127">doi:10.1109/cvpr.2018.00127</a>
<a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/LiuY18.html">dblp:conf/cvpr/LiuY18</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/i3rmf6vm4jh3zhhouxvzwi5a5i">fatcat:i3rmf6vm4jh3zhhouxvzwi5a5i</a>
</span>
Deeply-Supervised CNN Model for Action Recognition with Trainable Feature Aggregation
<span title="">2018</span>
<i title="International Joint Conferences on Artificial Intelligence Organization">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/vfwwmrihanevtjbbkti2kc3nke" style="color: black;">Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence</a>
</i>
We conduct experiments on two action recognition datasets: HMDB51 and UCF101. Results show that our model outperforms the state-of-the-art methods. ...
Moreover, we train this model in a deep supervision manner, which brings improvement in both performance and efficiency. ...
In this way, we exploit multi-level video representations in a single network, which brings improvement in both performance and efficiency for action recognition. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.24963/ijcai.2018/112">doi:10.24963/ijcai.2018/112</a>
<a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/ijcai/LiLW18.html">dblp:conf/ijcai/LiLW18</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/24pxm6ah4zfnpe7paqjbqajsz4">fatcat:24pxm6ah4zfnpe7paqjbqajsz4</a>
</span>
Self-attention based anchor proposal for skeleton-based action recognition
[article]
<span title="2021-12-17">2021</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Skeleton sequences are widely used for action recognition task due to its lightweight and compact characteristics. ...
Further ablation studies have shown the effectiveness of our proposed SAP module, which markedly improves the performance of many popular skeleton-based action recognition methods. ...
In order to extract discriminative features for skeleton-based action recognition, great effort has been made to learn patterns from the spatial configuration and temporal dynamics of joints. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.09413v1">arXiv:2112.09413v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lxvou76i4nbxndiymia5stplo4">fatcat:lxvou76i4nbxndiymia5stplo4</a>
</span>
Literature Review of Action Recognition in the Wild
[article]
<span title="2019-11-27">2019</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Action Recognition problem in the untrimmed videos is a challenging task and most of the papers have tackled this problem using hand-crafted features with shallow learning techniques and sophisticated ...
The literature review presented below on action recognition in the wild is an in-depth study of the research papers. ...
The combination of skeleton-based model and frame-based model further improves action recognition efficiency.
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.12249v1">arXiv:1911.12249v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/46qu4wtyqvhuxcomoymdd5owcm">fatcat:46qu4wtyqvhuxcomoymdd5owcm</a>
</span>
Hi-EADN: Hierarchical Excitation Aggregation and Disentanglement Frameworks for Action Recognition Based on Videos
<span title="2021-04-12">2021</span>
<i title="MDPI AG">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/nzoj5rayr5hutlurimhzyjlory" style="color: black;">Symmetry</a>
</i>
Most existing video action recognition methods mainly rely on high-level semantic information from convolutional neural networks (CNNs) but ignore the discrepancies of different information streams. ...
MFEA specifically uses long-short range motion modelling and calculates the feature-level temporal difference. ...
Abbreviations The following abbreviations are used in this manuscript: Hi-EAD Hierarchical Excitation Aggregation and Disentanglement Networks MFEA Multiple Frames Excitation Aggregation SEHD Squeeze-and-Excitation ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/sym13040662">doi:10.3390/sym13040662</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6mjobgpymbcgncus6z3za3saam">fatcat:6mjobgpymbcgncus6z3za3saam</a>
</span>
Showing results 1 — 15 out of 27,927 results