212,240 Hits in 4.1 sec

Temporal Cycle-Consistency Learning [article]

Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman
<span title="2019-04-16">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Project webpage: https://sites.google.com/view/temporal-cycle-consistency.  ...  The method trains a network using temporal cycle consistency (TCC), a differentiable cycle-consistency loss that can be used to find correspondences across time in multiple videos.  ...  We name this version of cycle-consistency as the final temporal cycle consistency (TCC) method, and use this version for the rest of the experiments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.07846v1">arXiv:1904.07846v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oytx6gdzgzbmnb527jt2csbnvi">fatcat:oytx6gdzgzbmnb527jt2csbnvi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200823171133/https://arxiv.org/pdf/1904.07846v1.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.07846v1">arxiv.org</a>
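The snippet above describes TCC as a differentiable cycle-consistency loss over per-frame embeddings. A minimal NumPy sketch of the cycle-back regression idea — cycle a frame of one video through its soft nearest neighbor in another video and penalize landing away from the starting index — is shown below; function names are illustrative, not taken from the paper's code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def tcc_cycle_back(U, V, i):
    """Soft cycle-back regression penalty for frame i of U cycled through V.

    U, V: (num_frames, dim) arrays of per-frame embeddings of two videos.
    Returns the squared error between the soft "landing" index in U and i.
    """
    # Soft nearest neighbor of U[i] in V: a convex combination of V's frames.
    alpha = softmax(-np.sum((V - U[i]) ** 2, axis=1))
    v_tilde = alpha @ V
    # Cycle back: soft nearest-neighbor distribution of v_tilde over U.
    beta = softmax(-np.sum((U - v_tilde) ** 2, axis=1))
    mu = beta @ np.arange(len(U))  # expected landing frame index in U
    return (mu - i) ** 2
```

For two already-aligned sequences the cycle lands back on frame i and the penalty is near zero; training would minimize this quantity over frames and video pairs so that the learned embeddings make the cycles consistent.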

Temporal Cycle-Consistency Learning

Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman
<span title="">2019</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</a> </i> &nbsp;
Project webpage: https://sites.google.com/view/temporal-cycle-consistency.  ...  Figure 1: We present a self-supervised representation learning technique called temporal cycle consistency (TCC)  ...  We name this version of cycle-consistency as the final temporal cycle consistency (TCC) method, and use this version for the rest of the experiments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2019.00190">doi:10.1109/cvpr.2019.00190</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/DwibediATSZ19.html">dblp:conf/cvpr/DwibediATSZ19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4vytz5nx25djdmrkgljvjzsila">fatcat:4vytz5nx25djdmrkgljvjzsila</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190612070450/http://openaccess.thecvf.com/content_CVPR_2019/papers/Dwibedi_Temporal_Cycle-Consistency_Learning_CVPR_2019_paper.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2019.00190">doi.org</a>

Representation Learning via Global Temporal Alignment and Cycle-Consistency [article]

Isma Hadji, Konstantinos G. Derpanis, Allan D. Jepson
<span title="2021-05-11">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
consistency loss that verifies correspondences.  ...  We introduce a weakly supervised method for representation learning based on aligning temporal sequences (e.g., videos) of the same process (e.g., human action).  ...  Temporal Cycle Consistency (TCC) [15] learns finegrained temporal correspondences between individual video frames by imposing a soft version of cycle consistency on the individual matches. D.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.05217v1">arXiv:2105.05217v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dzuonxwag5d3pilwehfuuvuul4">fatcat:dzuonxwag5d3pilwehfuuvuul4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210513071536/https://arxiv.org/pdf/2105.05217v1.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.05217v1">arxiv.org</a>

Back to the Future: Cycle Encoding Prediction for Self-supervised Contrastive Video Representation Learning [article]

Xinyu Yang, Majid Mirmehdi, Tilo Burghardt
<span title="2021-10-24">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper we show that learning video feature spaces in which temporal cycles are maximally predictable benefits action classification.  ...  In particular, we propose a novel learning approach termed Cycle Encoding Prediction (CEP) that is able to effectively represent high-level spatio-temporal structure of unlabelled video content.  ...  The depicted 6 most fundamental cycles are considered for cycle consistency.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.07217v5">arXiv:2010.07217v5</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/iz2ffhboi5ailkgrhlqeq2sksa">fatcat:iz2ffhboi5ailkgrhlqeq2sksa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211029201616/https://arxiv.org/pdf/2010.07217v5.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.07217v5">arxiv.org</a>

Aligning Videos in Space and Time [article]

Senthil Purushwalkam, Tian Ye, Saurabh Gupta, Abhinav Gupta
<span title="2020-07-09">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Hence, we propose a novel alignment procedure that learns such correspondence in space and time via cross video cycle-consistency.  ...  Cycles that connect overlapping patches together are encouraged to score higher than cycles that connect non-overlapping patches.  ...  [62] use cycle consistency to learn how to generate images, and Wang et al. [52] use cycle consistency to learn features for correspondence over time in videos. Work from Wang et al.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.04515v1">arXiv:2007.04515v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6tdjecdhyrfzrgcblhsbydn4iy">fatcat:6tdjecdhyrfzrgcblhsbydn4iy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200829082826/https://arxiv.org/pdf/2007.04515v1.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.04515v1">arxiv.org</a>

Cycle-Contrast for Self-Supervised Video Representation Learning [article]

Quan Kong, Wenpeng Wei, Ziwei Deng, Tomoaki Yoshinaga, Tomokazu Murakami
<span title="2020-10-28">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We present Cycle-Contrastive Learning (CCL), a novel self-supervised method for learning video representation.  ...  the cycle-contrastive loss.  ...  [3] introduced a self-supervised representation learning method for video synchronization named temporal cycle consistency (TCC).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.14810v1">arXiv:2010.14810v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gu2bzblh5jdklizisb37byxboi">fatcat:gu2bzblh5jdklizisb37byxboi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201119003817/https://arxiv.org/pdf/2010.14810v1.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.14810v1">arxiv.org</a>

Learning Temporal Dynamics from Cycles in Narrated Video [article]

Dave Epstein, Jiajun Wu, Cordelia Schmid, Chen Sun
<span title="2021-09-12">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose a self-supervised solution to this problem using temporal cycle consistency jointly in vision and language, training on narrated video.  ...  We justify the design of our model with an ablation study on different configurations of the cycle consistency problem.  ...  We learn these dynamics by solving a multi-modal temporal cycle consistency problem. able to learn from large unlabeled datasets of in-the-wild action and discover transitions autonomously, to enable practical  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.02337v2">arXiv:2101.02337v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bvdmwbpfv5g25lrjwrxhgcp6se">fatcat:bvdmwbpfv5g25lrjwrxhgcp6se</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210917174712/https://arxiv.org/pdf/2101.02337v2.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.02337v2">arxiv.org</a>

Echocardiography Segmentation with Enforced Temporal Consistency [article]

Nathan Painchaud, Nicolas Duchateau, Olivier Bernard, Pierre-Marc Jodoin
<span title="2021-12-03">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
accurate and temporally consistent segmentation maps across the whole cycle.  ...  In this paper, we propose a framework to learn the 2D+time long-axis cardiac shape such that the segmented sequences can benefit from temporal and anatomical consistency constraints.  ...  iii) Wei et al. for supplying the predictions of their CLAS method on the full cycle US sequences.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.02102v1">arXiv:2112.02102v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dru4znskpzcw7ibmiylzfwfjve">fatcat:dru4znskpzcw7ibmiylzfwfjve</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211208175421/https://arxiv.org/pdf/2112.02102v1.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.02102v1">arxiv.org</a>

Recycle-GAN: Unsupervised Video Retargeting [article]

Aayush Bansal, Shugao Ma, Deva Ramanan, Yaser Sheikh
<span title="2018-08-15">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Our approach combines both spatial and temporal information along with adversarial losses for content translation and style preservation.  ...  Fig. 2: Spatial cycle consistency is not sufficient: We show two examples illustrating why spatial cycle consistency alone is not sufficient for the optimization.  ...  Cycle loss: Zhu et al. [53] use cycle consistency [51] to define a reconstruction loss when the pairs are not available.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1808.05174v1">arXiv:1808.05174v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2vychysagvf43molzagxxtvxqi">fatcat:2vychysagvf43molzagxxtvxqi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929030623/https://arxiv.org/pdf/1808.05174v1.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1808.05174v1">arxiv.org</a>
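The Recycle-GAN snippet above contrasts a purely spatial cycle loss with one that also uses temporal structure. A hedged sketch of the "recycle" loss shape follows: translate frame x_t into the target domain with G, predict the next frame there with a temporal predictor P, translate back with F, and penalize deviation from the true x_{t+1}. A single-frame predictor is used here for brevity (the paper conditions on several past frames), and all function names are illustrative:

```python
import numpy as np

def recycle_loss(xs, G, F, P):
    """Recycle loss sketch: x_t -> G -> P (predict next in Y) -> F -> compare to x_{t+1}.

    xs: list of frames in domain X; G: X->Y mapping; F: Y->X mapping;
    P: one-step temporal predictor in domain Y.
    """
    total = 0.0
    for t in range(len(xs) - 1):
        y_t = G(xs[t])       # translate frame t into domain Y
        y_next = P(y_t)      # predict the next Y-domain frame
        x_next = F(y_next)   # translate the prediction back to X
        total += float(np.sum((x_next - xs[t + 1]) ** 2))
    return total
```

With identity mappings and a predictor that exactly models the sequence dynamics, the loss is zero; any mismatch between the spatial mappings and the temporal predictor is penalized, which is what makes the temporal cycle a stronger constraint than the spatial cycle alone.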

Dynamics of temporal discrimination

Paulo Guilhardi, Russell M. Church
<span title="">2005</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ehz5ft7bcbf77ipiyqnjmuwi54" style="color: black;">Learning &amp; Behavior</a> </i> &nbsp;
They readily learned the three temporal discriminations, whether they were presented simultaneously or successively, and they rapidly adjusted their performance to new intervals when the intermediate interval  ...  The purpose of this research was to describe and explain the acquisition of temporal discriminations, transitions from one temporal interval to another, and asymptotic performance of stimulus and temporal  ...  The temporal learning consisted of an increase in response rate late in the interval and a decrease in response rate early in the interval.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3758/bf03193179">doi:10.3758/bf03193179</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/16573211">pmid:16573211</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/va5aaqmvmncrrcaki4z2jbrjqi">fatcat:va5aaqmvmncrrcaki4z2jbrjqi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170912210852/https://link.springer.com/content/pdf/10.3758%2FBF03193179.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3758/bf03193179">doi.org</a>

Recycle-GAN: Unsupervised Video Retargeting [chapter]

Aayush Bansal, Shugao Ma, Deva Ramanan, Yaser Sheikh
<span title="">2018</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
Our approach combines both spatial and temporal information along with adversarial losses for content translation and style preservation.  ...  Cycle loss: Zhu et al. [53] use cycle consistency [51] to define a reconstruction loss when the pairs are not available.  ...  [53] proposed to use the cycle-consistency constraint [51] in adversarial learning framework to deal with this problem of unpaired data, and demonstrate effective results for various tasks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01228-1_8">doi:10.1007/978-3-030-01228-1_8</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/y35ontbezngepnysioqzs6leiu">fatcat:y35ontbezngepnysioqzs6leiu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190819040624/http://openaccess.thecvf.com:80/content_ECCV_2018/papers/Aayush_Bansal_Recycle-GAN_Unsupervised_Video_ECCV_2018_paper.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-01228-1_8">doi.org</a>

Stimulus control in multiple temporal discriminations

Marcelo S. Caetano, Paulo Guilhardi, Russell M. Church
<span title="2012-03-24">2012</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ehz5ft7bcbf77ipiyqnjmuwi54" style="color: black;">Learning &amp; Behavior</a> </i> &nbsp;
Although the stimuli reliably signaled the upcoming FI, when trained in successive blocks of 60 cycles, rats rapidly adjusted performance early in the sessions on the basis of the temporal aspects of the  ...  task, and not on the basis of the stimulus presented in the current cycle.  ...  Fast temporal discrimination learning is not restricted to single intervals. Rats can rapidly learn to time different FIs that are signaled by different stimuli (Guilhardi & Church, 2005) .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3758/s13420-012-0071-9">doi:10.3758/s13420-012-0071-9</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/22447102">pmid:22447102</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jy5oaxbtgzfdjm7hxqtjy6upxi">fatcat:jy5oaxbtgzfdjm7hxqtjy6upxi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180725072811/https://link.springer.com/content/pdf/10.3758%2Fs13420-012-0071-9.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3758/s13420-012-0071-9">doi.org</a>

Accelerating the Training of Video Super-Resolution Models [article]

Lijian Lin, Xintao Wang, Zhongang Qi, Ying Shan
<span title="2022-05-17">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
It usually takes an order of magnitude more time than training their counterpart image models, leading to long research cycles.  ...  Existing VSR methods typically train models with fixed spatial and temporal sizes from beginning to end.  ...  Table 1 shows that both the spatial cycle and temporal cycle bring consistent speedup to BasicVSR with different model sizes.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2205.05069v2">arXiv:2205.05069v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gqnj2mfjdrdapbcqjwtylvvpd4">fatcat:gqnj2mfjdrdapbcqjwtylvvpd4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220525014316/https://arxiv.org/pdf/2205.05069v2.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2205.05069v2">arxiv.org</a>

Unsupervised Domain Adaptation with Temporal-Consistent Self-Training for 3D Hand-Object Joint Reconstruction [article]

Mengshi Qi, Edoardo Remelli, Mathieu Salzmann, Pascal Fua
<span title="2020-12-21">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
temporal consistency to fine-tune the domain-adapted model in a self-supervised fashion.  ...  Deep-learning solutions for hand-object 3D pose and shape estimation are now very effective when an annotated dataset is available to train them to handle the scenarios and lighting conditions they will  ...  Long-Term Temporal Consistency.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.11260v1">arXiv:2012.11260v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wzloca4avzbtfcumf26bek6dc4">fatcat:wzloca4avzbtfcumf26bek6dc4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201225063227/https://arxiv.org/pdf/2012.11260v1.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.11260v1">arxiv.org</a>

A Spatio-Temporal Appearance Representation for Video-Based Pedestrian Re-Identification

Kan Liu, Bingpeng Ma, Wei Zhang, Rui Huang
<span title="">2015</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/753trptklbb4nj6jquqadzwwdu" style="color: black;">2015 IEEE International Conference on Computer Vision (ICCV)</a> </i> &nbsp;
Particularly, given a video sequence we exploit the periodicity exhibited by a walking person to generate a spatio-temporal body-action model, which consists of a series of body-action units corresponding  ...  Fisher vectors are learned and extracted from individual body-action units and concatenated into the final representation of the walking person.  ...  For each walking cycle, we divide the chunk of video data both spatially and temporally.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccv.2015.434">doi:10.1109/iccv.2015.434</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/iccv/LiuMZH15.html">dblp:conf/iccv/LiuMZH15</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vxpltuy5kjepbhct54omstbht4">fatcat:vxpltuy5kjepbhct54omstbht4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20160128085757/http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Liu_A_Spatio-Temporal_Appearance_ICCV_2015_paper.pdf">Web Archive [PDF]</a> &nbsp; <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccv.2015.434">doi.org</a>
Showing results 1 &mdash; 15 out of 212,240 results