
Autoregressive Models for Sequences of Graphs

Daniele Zambon, Daniele Grattarola, Lorenzo Livi, Cesare Alippi
2019 2019 International Joint Conference on Neural Networks (IJCNN)  
This paper proposes an autoregressive (AR) model for sequences of graphs, which generalises traditional AR models.  ...  A first novelty consists in formalising the AR model for a very general family of graphs, characterised by a variable topology, and attributes associated with nodes and edges.  ...  Autoregressive Model for a GGP Due to the lack of basic mathematical operators in the vast family of graphs we consider here, the generalisation of model (1) to account for graph data is non-trivial.  ... 
doi:10.1109/ijcnn.2019.8852131 dblp:conf/ijcnn/ZambonGLA19 fatcat:5oxhw4cjmbdozhkx4g7yp6wyiq
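
For context, the classical model being generalised is plausibly the vector AR(p) process; the reconstruction below is a hedged reading of the snippet, since the paper's exact "model (1)" is not shown here:

    x_t = \sum_{i=1}^{p} \Phi_i \, x_{t-i} + \varepsilon_t

Because sums and scalar products are not defined on attributed graphs with variable topology, the graph version cannot simply add noise to a weighted combination of past values; it must instead take the form

    g_t = H(g_{t-1}, \ldots, g_{t-p}; \eta_t)

for some graph-to-graph map H and graph-valued noise \eta_t, which is the non-triviality the snippet refers to.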

FNetAR: Mixing Tokens with Autoregressive Fourier Transforms [article]

Tim Lou, Michael Park, Mohammad Ramezanali, Vincent Tang
2021 arXiv   pre-print
The autoregressive Fourier transform could likely be used for parameter reduction on most Transformer-based time-series prediction models.  ...  only half the number of self-attention layers, thus providing further evidence for the superfluity of deep neural networks with heavily compounded attention mechanisms.  ...  The generated graph matrices W^{(n)}_{ij}(x), being the only source of token mixing in the Transformer, act as a bottleneck for the flow of information between sequence elements i and j.  ... 
arXiv:2107.10932v1 fatcat:m4hgxfs7qrg7hfbdcigaxjq4k4
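
As a rough illustration, here is a minimal NumPy sketch of one plausible reading of an "autoregressive Fourier transform": the DFT used for token mixing is restricted to the prefix at each position, so no token mixes with future positions. The function name and shapes are illustrative assumptions, not the FNetAR reference implementation:

    import numpy as np

    def causal_fourier_mix(x):
        """Mix tokens causally: x is (seq_len, d_model), real-valued."""
        seq_len, _ = x.shape
        out = np.empty_like(x)
        for n in range(1, seq_len + 1):
            # DFT over tokens 0..n-1 only; keep the newest coefficient,
            # so position n-1 never sees positions >= n.
            out[n - 1] = np.fft.fft(x[:n], axis=0).real[n - 1]
        return out

This is O(N^2) as written; the point is only the causal masking of the token-mixing operation, which plays the role of the W^{(n)}_{ij}(x) matrices mentioned above.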

Non-autoregressive electron flow generation for reaction prediction [article]

Hangrui Bi, Hengyi Wang, Chence Shi, Jian Tang
2021 arXiv   pre-print
Our model achieves an order of magnitude lower inference latency, with state-of-the-art top-1 accuracy and comparable performance on top-K sampling.  ...  These autoregressive generation methods impose an arbitrary ordering on the outputs and prevent parallel decoding during inference.  ...  of product graphs. MEGAN (Sacha et al., 2020) models chemical reactions as a sequence of graph edits, and learns to predict the sequence autoregressively.  ... 
arXiv:2012.12124v2 fatcat:m4kqoilwavgjhm5jysfsvmnizy

Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation [article]

Xiaohui Chen, Xu Han, Jiajing Hu, Francisco J. R. Ruiz, Liping Liu
2021 arXiv   pre-print
However, the likelihood of a graph under the autoregressive model is intractable, as there are numerous sequences leading to the given graph; this makes maximum likelihood estimation challenging.  ...  A graph generative model defines a distribution over graphs. One type of generative model is constructed by autoregressive neural networks, which sequentially add nodes and edges to generate a graph.  ...  Acknowledgements We thank Yujia Li for his insightful comments, and the anonymous reviewers for their constructive feedback. The work was supported by NSF 1850358 and NSF 1908617.  ... 
arXiv:2106.06189v2 fatcat:rxymwg6dzvfbbfbew33zygctci
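
The intractability claim can be made concrete with a worked equation. Writing \Pi(G) for the set of node orderings (generation sequences) that produce G, the marginal likelihood is

    p(G) = \sum_{\pi \in \Pi(G)} p(G, \pi),

and |\Pi(G)| can grow as fast as n! in the number of nodes n, so summing out the ordering exactly is infeasible for all but the smallest graphs; this is why maximum likelihood estimation is challenging for such autoregressive graph models.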

TSNAT: Two-Step Non-Autoregressive Transformer Models for Speech Recognition [article]

Zhengkun Tian, Jiangyan Yi, Jianhua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen, Xuefei Liu
2021 arXiv   pre-print
On the other hand, most NAR models are difficult to train and slow to converge.  ...  Non-autoregressive (NAR) models remove the temporal dependency between output tokens and can predict all output tokens in as little as one step.  ...  (b) illustrates an output probability graph of the non-autoregressive model. The first red circle in each column marks the end-of-sentence token <EOS>.  ... 
arXiv:2104.01522v1 fatcat:sprlomng4rea7en7hxqgx64z3a

HDMapGen: A Hierarchical Graph Generative Model of High Definition Maps [article]

Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, Dragomir Anguelov
2021 arXiv   pre-print
In this work, we explore several autoregressive models using different data representations, including sequence, plain graph, and hierarchical graph.  ...  We propose HDMapGen, a hierarchical graph generation model capable of producing high-quality and diverse HD maps through a coarse-to-fine approach.  ...  We explore three autoregressive generative models for sequence, plain graph, and hierarchical graph data representations and evaluate generation quality, diversity, scalability, and efficiency.  ... 
arXiv:2106.14880v1 fatcat:ja63umgdxnhw7ky5wc6xmhydbq

Pose Transformers (POTR): Human Motion Prediction with Non-Autoregressive Transformers [article]

Angel Martínez-González, Michael Villamizar, Jean-Marc Odobez
2021 arXiv   pre-print
We propose to leverage Transformer architectures for non-autoregressive human motion prediction.  ...  In that context, our contributions are fourfold: (i) we frame human motion prediction as a sequence-to-sequence problem and propose a non-autoregressive Transformer to infer the sequences of poses in parallel  ...  Most neural-network-based models for sequence-to-sequence modelling use autoregressive decoding: generating entries in the sequence one at a time, conditioned on previously predicted elements.  ... 
arXiv:2109.07531v1 fatcat:ifkuwbmp3ndjrl2epdxauytt3e
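
The decoding contrast the snippet draws can be sketched in a few lines of Python; step_fn and batch_fn are hypothetical stand-ins for a trained decoder, not the POTR API:

    def decode_autoregressive(step_fn, seed, horizon):
        # One element at a time; each new output is conditioned on all
        # previously generated elements (sequential, cannot parallelise).
        history = list(seed)
        for _ in range(horizon):
            history.append(step_fn(history))
        return history[len(seed):]

    def decode_non_autoregressive(batch_fn, seed, horizon):
        # All future elements in one forward pass, as a non-autoregressive
        # decoder does; latency no longer scales with the horizon.
        return batch_fn(seed, horizon)

The non-autoregressive variant trades conditioning on previous predictions for parallelism, which is exactly the trade-off these motion-prediction papers study.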

Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT [article]

Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang
2021 arXiv   pre-print
When the prediction of a token does not rely on other tokens, all tokens in the sequence can be predicted in parallel.  ...  However, because the decoder predicts text tokens (such as characters or words) in an autoregressive manner, it is difficult for an AED model to predict all tokens in parallel.  ...  The authors are grateful to the anonymous reviewers for their invaluable comments, which improved the completeness and readability of this paper.  ... 
arXiv:2102.07594v6 fatcat:6rtjstjwb5bhxg7bdks5ik6qxe
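
The parallelism condition stated above is just a conditional-independence factorisation. For an input x and output tokens y_1, ..., y_L:

    p_{AR}(y \mid x)  = \prod_{i=1}^{L} p(y_i \mid y_{<i}, x)    (sequential decoding)
    p_{NAR}(y \mid x) = \prod_{i=1}^{L} p(y_i \mid x)            (parallel decoding)

In the autoregressive case each factor depends on earlier tokens, forcing left-to-right decoding; in the non-autoregressive case the factors are independent given x, so all L tokens can be predicted in one step.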

Multitask Non-Autoregressive Model for Human Motion Prediction [article]

Bin Li, Jian Tian, Zhongfei Zhang, Hailin Feng, Xi Li
2020 arXiv   pre-print
Our approach is evaluated on the Human3.6M and CMU-Mocap benchmarks and outperforms state-of-the-art autoregressive methods.  ...  Hence, a novel Non-AuToregressive model (NAT) is proposed with a complete non-autoregressive decoding scheme, as well as a context encoder and a positional encoding module.  ...  In general, these inertial motion models are based on either sequential autoregression or sequence-to-sequence encoder-decoder learning, which generates a sequence of future human skeletons in a recurrent  ... 
arXiv:2007.06426v1 fatcat:i7j7cmmp6zcu5btcqaslsthqbi

Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement [article]

Alireza Mohammadshahi, James Henderson
2020 arXiv   pre-print
We propose the Recursive Non-autoregressive Graph-to-Graph Transformer architecture (RNGTr) for the iterative refinement of arbitrary graphs through the recursive application of a non-autoregressive Graph-to-Graph  ...  over the new state-of-the-art results achieved by SynTr, significantly improving the state-of-the-art for all corpora tested.  ...  We also thank Lesly Miculicich, other members of the Idiap NLU group, the anonymous reviewers, and Yue Zhang for helpful discussions and suggestions.  ... 
arXiv:2003.13118v2 fatcat:oop5sq47uzdohi7dtnovjvrf3u
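
The recursive refinement described above amounts to repeatedly re-predicting the whole graph conditioned on the previous prediction. A minimal sketch, with rng_transformer and init_parser as hypothetical callables rather than the released RNGTr interface:

    def refine_parse(rng_transformer, init_parser, sentence, max_iters=3):
        # Start from an initial (possibly empty or off-the-shelf) parse.
        graph = init_parser(sentence)
        for _ in range(max_iters):
            # Non-autoregressive step: all arcs are re-predicted in
            # parallel, conditioned on the sentence and the full graph.
            new_graph = rng_transformer(sentence, graph)
            if new_graph == graph:  # fixed point: refinement converged
                break
            graph = new_graph
        return graph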

Benchmarking deep generative models for diverse antibody sequence design [article]

Igor Melnyk, Payel Das, Vijil Chenthamarakshan, Aurelie Lozano
2021 arXiv   pre-print
Here we consider three recently proposed deep generative frameworks for protein design: (AR) the sequence-based autoregressive generative model, (GVP) the precise structure-based graph neural network,  ...  We benchmark these models on the task of computational antibody sequence design, which demands designing sequences with high diversity for functional implication.  ...  For example, [7] used a generative model for protein sequence design given a target structure, represented as a graph over the residues.  ... 
arXiv:2111.06801v1 fatcat:pj3ceyetjfccte4nbz3albrvwq

Research on Expected Return Based on Time Series Model: Taking Kweichow Moutai Co., Ltd. as an Example

Wennuan Fang
2020 Financial Forum  
The model can be used to estimate the expected earnings of enterprises and provides a reference for enterprise value evaluation.  ...  At the same time, by fitting a simple linear regression model and an Autoregressive Integrated Moving Average model, the free cash flow is predicted, and finally an ARIMA(1,2,2) model is obtained.  ...  the error rate of the autoregressive moving average model is low, only 6.09%, making it a good fit for predicting the free cash flow of Kweichow Moutai Company.  ... 
doi:10.18282/ff.v9i2.858 fatcat:u7xggbhmkvfpranbnwnbuc6mou
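
As an illustration of the kind of fit the paper describes, here is a minimal Python sketch using statsmodels; the cash-flow series is synthetic stand-in data, and the ARIMA(1,2,2) order is taken from the abstract:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    # Twice-integrated noise: a crude stand-in for a trending cash-flow series.
    cash_flow = np.cumsum(np.cumsum(rng.normal(5.0, 1.0, size=40)))

    model = ARIMA(cash_flow, order=(1, 2, 2))  # (p, d, q) = (1, 2, 2)
    fitted = model.fit()
    print(fitted.forecast(steps=5))  # expected free cash flow, 5 steps ahead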

Adjustment of sampling locations in rail-geometry datasets: Using dynamic programming and nonlinear filtering

Masako Kamiyama, Tomoyuki Higuchi
2005 Systems and Computers in Japan  
Also, to simplify the search in the identification step, they devised a search procedure for the parameters of the autoregressive (AR) model.  ...  evaluation function of a number sequence representing data points.  ...  Fig. 3 (characterizing the differences ẽ_{1:T} as a Gaussian white noise sequence). Remodeling the difference sequence: introduction of an autoregressive (AR) model. Assume that the difference component  ... 
doi:10.1002/scj.20313 fatcat:vimd35udpze3zd2njtldadrfxy

Forecasting in One-Dimensional and Generalized Integrated Autoregressive Bilinear Time Series Models

JF Ofo
2013 Global Journal of Mathematical Sciences  
In this paper, forecasts from the one-dimensional integrated autoregressive bilinear model are compared with forecasts from the generalized integrated autoregressive bilinear model.  ...  We describe the method for estimating these models and producing the forecasts. It is also pointed out that, for this class of non-linear time series models, it is possible to obtain optimal forecasts.  ...  Figure 2 shows the graph of forecasts from the generalized model, while Figure 3 shows the graph of forecasts from the one-dimensional model.  ... 
doi:10.4314/gjmas.v12i1.4 fatcat:2kyqbcsp55auhkufljlsognkom
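
For readers unfamiliar with the model class: a bilinear time series model BL(p, q, P, Q) extends the ARMA form with cross terms between past values and past innovations, typically written as

    X_t = \sum_{i=1}^{p} a_i X_{t-i} + \sum_{j=1}^{q} b_j e_{t-j}
          + \sum_{i=1}^{P} \sum_{j=1}^{Q} c_{ij} X_{t-i} e_{t-j} + e_t,

where e_t is white noise; the c_{ij} X_{t-i} e_{t-j} terms are what make the model non-linear, and the "integrated" variants in the paper apply this form to a differenced series.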

Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement

Alireza Mohammadshahi, James Henderson
2021 Transactions of the Association for Computational Linguistics  
We propose the Recursive Non-autoregressive Graph-to-Graph Transformer architecture (RNGTr) for the iterative refinement of arbitrary graphs through the recursive application of a non-autoregressive Graph-to-Graph  ...  over the new state-of-the-art results achieved by SynTr, significantly improving the state-of-the-art for all corpora tested.  ...  We also thank Lesly Miculicich, other members of the Idiap NLU group, the anonymous reviewers, and Yue Zhang for helpful discussions and suggestions.  ... 
doi:10.1162/tacl_a_00358 fatcat:r5skd7tnkjfvhpch42vencvvqy