2,684 Hits in 4.6 sec

Forecasting Characteristic 3D Poses of Human Actions [article]

Christian Diller, Thomas Funkhouser, Angela Dai
<span title="2022-03-08">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose the task of forecasting characteristic 3d poses: from a short sequence observation of a person, predict a future 3d pose of that person in a likely action-defining, characteristic pose -- for  ...  Prior work on human motion prediction estimates future poses at fixed time intervals.  ...  Acknowledgements This project is funded by the Bavarian State Ministry of Science and the Arts and coordinated by the Bavarian Research Institute for Digital Transformation (bidt).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.15079v3">arXiv:2011.15079v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pcnbanunmzgbldeiwv6l47wmhi">fatcat:pcnbanunmzgbldeiwv6l47wmhi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220310002906/https://arxiv.org/pdf/2011.15079v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/48/52/4852e7325d089011bdf728004f2bf16c8222446b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.15079v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Space-Time-Separable Graph Convolutional Network for Pose Forecasting [article]

Theodoros Sofianos, Alessio Sampieri, Luca Franco, Fabio Galasso
<span title="2021-10-09">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Here we propose a novel Space-Time-Separable Graph Convolutional Network (STS-GCN) for pose forecasting.  ...  Human pose forecasting is a complex structured-data sequence-modelling task, which has received increasing attention, also due to numerous potential applications.  ...  Acknowledgements The authors wish to acknowledge Panasonic for partially supporting this work and the project of the Italian Ministry of Education, Universities and Research (MIUR) "Dipartimenti di Eccellenza  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.04573v1">arXiv:2110.04573v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ms2wf7xv5zcsrkmtucn2ggbox4">fatcat:ms2wf7xv5zcsrkmtucn2ggbox4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211012144417/https://arxiv.org/pdf/2110.04573v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ba/3e/ba3e0779233f83ceee0e0864dcf5cc307a646140.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.04573v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Generating Person Images with Appearance-aware Pose Stylizer

Siyu Huang, Haoyi Xiong, Zhi-Qi Cheng, Qingzhong Wang, Xingran Zhou, Bihan Wen, Jun Huan, Dejing Dou
<span title="">2020</span> <i title="International Joint Conferences on Artificial Intelligence Organization"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/vfwwmrihanevtjbbkti2kc3nke" style="color: black;">Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence</a> </i> &nbsp;
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.  ...  The framework is highly flexible and controllable by effectively decoupling various complex person image factors in the encoding phase, followed by re-coupling them in the decoding phase.  ...  We note that such a mode decoupling is significant for the decoding phase.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.24963/ijcai.2020/87">doi:10.24963/ijcai.2020/87</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/ijcai/HuangXCWZWHD20.html">dblp:conf/ijcai/HuangXCWZWHD20</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jotq2thvone2jfxng4jvpl4j7e">fatcat:jotq2thvone2jfxng4jvpl4j7e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201106203630/https://www.ijcai.org/Proceedings/2020/0087.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d2/20/d220f7e04c47f57d9e8e1f8f711f320fc61d302b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.24963/ijcai.2020/87"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Generating Person Images with Appearance-aware Pose Stylizer [article]

Siyu Huang, Haoyi Xiong, Zhi-Qi Cheng, Qingzhong Wang, Xingran Zhou, Bihan Wen, Jun Huan, Dejing Dou
<span title="2020-07-17">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.  ...  The framework is highly flexible and controllable by effectively decoupling various complex person image factors in the encoding phase, followed by re-coupling them in the decoding phase.  ...  We note that such a mode decoupling is significant for the decoding phase.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.09077v1">arXiv:2007.09077v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dpuyxvvkcrav5mwrol3mnuvdtq">fatcat:dpuyxvvkcrav5mwrol3mnuvdtq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200729102920/https://arxiv.org/pdf/2007.09077v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.09077v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [article]

Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
<span title="2022-04-09">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
... form unified representations of the complex environment.  ...  It also leverages a memory decoupling loss to keep the memory cells from learning redundant features.  ...  However, the state transition pathway of LSTM memory cells may not be optimal for spatiotemporal predictive learning, as this task requires different focuses on the learned representations in many aspects  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.09504v4">arXiv:2103.09504v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/al5ij37d3nhj7nynglu7rod5k4">fatcat:al5ij37d3nhj7nynglu7rod5k4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220422201412/https://arxiv.org/pdf/2103.09504v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d8/04/d8041b594e52d1ef27288c7fd5ecd2f846d1c768.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.09504v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Disentangling Physical Dynamics from Unknown Factors for Unsupervised Video Prediction [article]

Vincent Le Guen, Nicolas Thome
<span title="2020-03-16">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Finally, we show that PhyDNet presents interesting features for dealing with missing data and long-term forecasting.  ...  A second contribution is to propose a new recurrent physical cell (PhyCell), inspired from data assimilation techniques, for performing PDE-constrained prediction in latent space.  ...  necessary for accurate forecasting.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01460v2">arXiv:2003.01460v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fviqde6rczf4hjxxnb2tzf53uq">fatcat:fviqde6rczf4hjxxnb2tzf53uq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200320214633/https://arxiv.org/pdf/2003.01460v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01460v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Structural-RNN: Deep Learning on Spatio-Temporal Graphs

Ashesh Jain, Amir R. Zamir, Silvio Savarese, Ashutosh Saxena
<span title="">2016</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</a> </i> &nbsp;
... problems (human pose modeling and forecasting, human-object interaction, and driver decision making), and show significant  ...  For a new spatio-temporal problem in hand, all a practitioner needs to do is  ...  Fully connected deep structured networks. arXiv:1503.02351, 2015.  ...  Structured learning with deep networks for 3d human pose estimation.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2016.573">doi:10.1109/cvpr.2016.573</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/JainZSS16.html">dblp:conf/cvpr/JainZSS16</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/n7lshpornvdbxkqbf7gus5pihy">fatcat:n7lshpornvdbxkqbf7gus5pihy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20161118071118/http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Jain_Structural-RNN_Deep_Learning_CVPR_2016_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/af/96/af9651083626e8b842ff35b3e6272559ccfd8707.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2016.573"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets [article]

Irtiza Hasan, Francesco Setti, Theodore Tsesmelis, Vasileios Belagiannis, Sikandar Amin, Alessio Del Bue, Marco Cristani, Fabio Galasso
<span title="2019-10-15">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We argue that people trajectory and head pose forecasting can be modelled as a joint problem.  ...  MX-LSTM predicts future pedestrians location and head pose, increasing the standard capabilities of the current approaches on long-term trajectory forecasting.  ...  Table 7 , further validates the fact that 8 frames are sufficient for the LSTM approach to learn the representation of the trajectory.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.02000v2">arXiv:1901.02000v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6on6vvsribdlbmw6cdvlpqvfhi">fatcat:6on6vvsribdlbmw6cdvlpqvfhi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200930073342/https://arxiv.org/pdf/1901.02000v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e7/91/e7917c7ee9390bd4a77ac693d3926805de171d83.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.02000v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Complex sequential understanding through the awareness of spatial and temporal concepts

Bo Pang, Kaiwen Zha, Hanwen Cao, Jiajun Tang, Minghui Yu, Cewu Lu
<span title="2020-04-27">2020</span> <i title="Springer Science and Business Media LLC"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/v66j35cgxvajrnw3y4tkpw4ine" style="color: black;">Nature Machine Intelligence</a> </i> &nbsp;
Current neural networks attempt to learn spatial and temporal information as a whole, limiting their ability to represent large-scale spatial representations over long-range sequences.  ...  Here, we introduce a new modeling strategy called Semi-Coupled Structure (SCS), which consists of deep neural networks that decouple the learning of complex spatial and temporal concepts.  ...  Taking action recognition as an example, r_s can be human poses in a single frame that are unrelated to temporal information but useful for action understanding, and r_t can be the estimates of optical  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1038/s42256-020-0168-3">doi:10.1038/s42256-020-0168-3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dyrxp3pqvvb3tmrjv2yig5egru">fatcat:dyrxp3pqvvb3tmrjv2yig5egru</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200603050432/https://arxiv.org/pdf/2006.00212v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1038/s42256-020-0168-3"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> nature.com </button> </a>

TridentNet: A Conditional Generative Model for Dynamic Trajectory Generation [article]

David Paz, Hengyuan Zhang, Henrik I. Christensen
<span title="2022-03-26">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To address these shortcomings, we introduce an approach that leverages lightweight map representations, explicitly enforces geometric constraints, and learns feasible trajectories using a conditional  ...  While end-to-end models are geared towards solving the scalability constraints from HD maps, they do not generalize to different vehicles and sensor configurations.  ...  For example, [5] and [7] use coarse map representations and raw camera data to learn a control action for the next time step.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.06374v4">arXiv:2101.06374v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/uwqplk6rdjholhd34h4eqrx354">fatcat:uwqplk6rdjholhd34h4eqrx354</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210419220205/https://arxiv.org/pdf/2101.06374v3.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d5/aa/d5aa886958e4d848d99df1267d94aac3e72f6d03.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.06374v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Spatio-temporal Transformer for 3D Human Motion Prediction [article]

Emre Aksan, Manuel Kaufmann, Peng Cao, Otmar Hilliges
<span title="2021-11-29">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The proposed model learns high dimensional embeddings for skeletal joints and how to compose a temporally coherent pose via a decoupled temporal and spatial self-attention mechanism.  ...  We propose a novel Transformer-based architecture for the task of generative modelling of 3D human motion.  ...  We thank the NVIDIA Corporation for the donation of GPUs used in this work.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.08692v3">arXiv:2004.08692v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u2iyk3ij6jc3rnnaxxr5mnoho4">fatcat:u2iyk3ij6jc3rnnaxxr5mnoho4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200928084250/https://arxiv.org/pdf/2004.08692v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/9f/84/9f84b06c0686543444668f1b63436011b9c7fa84.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.08692v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Structural-RNN: Deep Learning on Spatio-Temporal Graphs [article]

Ashesh Jain, Amir R. Zamir, Silvio Savarese, Ashutosh Saxena
<span title="2016-04-11">2016</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and the sequence-learning success of Recurrent Neural Networks (RNNs).  ...  The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, show improvement over the state of the art by a large margin.  ...  Tompson et al. [58] jointly train CNN and MRF for human pose estimation. Chen et al. [7] use a similar approach for image classification with general MRF.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1511.05298v3">arXiv:1511.05298v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xu6kbnabjvhuxfmtwj2y6zhxoa">fatcat:xu6kbnabjvhuxfmtwj2y6zhxoa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200905062429/https://arxiv.org/pdf/1511.05298v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/77/11/7711869156b293f87fe0992766184dd3263f1864.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1511.05298v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Semi-supervised Deep Generative Model for Human Body Analysis [chapter]

Rodrigo de Bem, Arnab Ghosh, Thalaiyasingam Ajanthan, Ondrej Miksik, N. Siddharth, Philip Torr
<span title="">2019</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/mih2xhsfhzgnlbqdp7mxi3yrna" style="color: black;">Landolt-Börnstein - Group III Condensed Matter</a> </i> &nbsp;
Deep generative modelling for human body analysis is an emerging problem with many interesting applications.  ...  In this work, we adopt a structured semi-supervised approach and present a deep generative model for human body analysis where the body pose and the visual appearance are disentangled in the latent space  ...  [38] proposed a hybrid VAE-GAN architecture for forecasting future poses in a video.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-11012-3_38">doi:10.1007/978-3-030-11012-3_38</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7xzwoudk3zhrled4durvztnkzu">fatcat:7xzwoudk3zhrled4durvztnkzu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200709213406/https://openaccess.thecvf.com/content_ECCVW_2018/papers/11130/de_A_Semi-supervised_Deep_Generative_Modelfor_Human_Body_Analysis_ECCVW_2018_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/26/d0/26d06d00bdacb4864399b9c701ab00f30f5b468d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-11012-3_38"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Contextually Plausible and Diverse 3D Human Motion Prediction [article]

Sadegh Aliakbarian, Fatemeh Sadat Saleh, Lars Petersson, Stephen Gould, Mathieu Salzmann
<span title="2020-12-05">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We tackle the task of diverse 3D human motion prediction, that is, forecasting multiple plausible future 3D poses given a sequence of observed 3D poses.  ...  In this paper, we address both of these problems by developing a new variational framework that accounts for both diversity and context of the generated future motion.  ...  Introduction Human motion prediction is the task of forecasting plausible 3D human motion continuation(s) given a sequence of past 3D human poses.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.08521v4">arXiv:1912.08521v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vq66fl3s6zdclbtwygs42cflcu">fatcat:vq66fl3s6zdclbtwygs42cflcu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200810042221/https://arxiv.org/pdf/1912.08521v3.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.08521v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Learning unknown ODE models with Gaussian processes [article]

Markus Heinonen, Cagatay Yildiz, Henrik Mannerström, Jukka Intosalmi, Harri Lähdesmäki
<span title="2018-03-12">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
However, for many complex systems it is practically impossible to determine the equations or interactions governing the underlying dynamics.  ...  We propose to learn non-linear, unknown differential functions from state observations using Gaussian process vector fields within the exact ODE formalism.  ...  Finally, npODE can accurately predict five poses, and still retains adequate performance on remaining poses, except for pose 2.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1803.04303v1">arXiv:1803.04303v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/un52xko66jfxfkavxjwzci63ie">fatcat:un52xko66jfxfkavxjwzci63ie</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200824170205/https://arxiv.org/pdf/1803.04303v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/07/7b/077bcf0f4ebe04f7e6dc8465b427201d9c0650f6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1803.04303v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1–15 of 2,684