A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; the original URL is https://arxiv.org/pdf/2204.10209v1.pdf. The file type is application/pdf.
BTranspose: Bottleneck Transformers for Human Pose Estimation with Self-Supervised Pre-Training
[article]
2022-04-21
arXiv
pre-print
... apply it to the task of 2D human pose estimation. ...
We consider different backbone architectures and pre-train them using the DINO self-supervised learning method [3]; this pre-training is found to improve the overall prediction accuracy. ...
Conclusion: For the task of 2D human pose estimation, we explored a model, BTranspose, built by combining Bottleneck Transformers with the vanilla Transformer Encoder (TE). ...
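The excerpt describes the model only at a high level: a backbone produces image features that are tokenized and fed to a vanilla Transformer Encoder for keypoint prediction. As a rough illustration of that general pipeline, here is a minimal PyTorch-style sketch; the stand-in backbone, module names, dimensions, and linear heatmap head are all assumptions for illustration, not the paper's actual BTranspose configuration or its DINO pre-training setup.

```python
import torch
import torch.nn as nn

# Illustrative stand-in backbone (NOT the paper's Bottleneck Transformer):
# any module mapping images (B, 3, H, W) to feature maps (B, C, H', W') fits.
class TinyBackbone(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class PoseSketch(nn.Module):
    """Backbone features -> token sequence -> Transformer Encoder -> heatmaps."""
    def __init__(self, backbone, feat_dim=256, n_keypoints=17,
                 n_layers=4, n_heads=8):
        super().__init__()
        self.backbone = backbone
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_keypoints)  # per-token keypoint logits

    def forward(self, x):
        f = self.backbone(x)                    # (B, C, H', W') feature map
        B, C, H, W = f.shape
        tokens = f.flatten(2).transpose(1, 2)   # (B, H'*W', C) token sequence
        tokens = self.encoder(tokens)           # global self-attention over tokens
        heat = self.head(tokens)                # (B, H'*W', K)
        return heat.transpose(1, 2).reshape(B, -1, H, W)  # (B, K, H', W') heatmaps

model = PoseSketch(TinyBackbone())
out = model(torch.randn(1, 3, 256, 256))  # -> torch.Size([1, 17, 32, 32])
```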
arXiv:2204.10209v1
fatcat:zh3u2cg2orapdp5f5odsjpf5fa
Web Archive [PDF]: https://web.archive.org/web/20220607234338/https://arxiv.org/pdf/2204.10209v1.pdf
arxiv.org: https://arxiv.org/abs/2204.10209v1