A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. File type: application/pdf.
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
arXiv pre-print (article), 2020
We develop an effective method for generating adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values. This setting includes the recently proposed deep probabilistic autoregressive forecasting models, which estimate the probability distribution of a time series given its past and achieve state-of-the-art results in a diverse set of application domains. The key technical challenge we address is effectively differentiating through the
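The core idea in the abstract — differentiating an attack objective through a model whose output is a distribution, by working with samples drawn from it — can be illustrated with a deliberately tiny toy model. The sketch below is a hypothetical stand-in, not the paper's method or model: the "forecaster" is a fixed linear map producing a Gaussian over the next value, sampling uses the reparameterization trick so gradients flow through the Monte-Carlo samples, and the attack is a simple sign-gradient ascent under an L∞ perturbation budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy forecaster: given past values x, output a Gaussian
# over the next value with mean mu(x) = w . x and fixed scale sigma.
w = np.array([0.5, 0.3, 0.2])
sigma = 0.1

def forecast_samples(x, n=1000):
    # Reparameterization trick: z = mu(x) + sigma * eps, eps ~ N(0, 1),
    # so samples are a differentiable function of the input x.
    eps = rng.standard_normal(n)
    return w @ x + sigma * eps

def attack(x, eps_budget=0.1, steps=20, lr=0.02):
    # Push the *expected* forecast up while staying within an L-inf ball
    # of radius eps_budget around the clean input.
    # For this linear toy model, d/dx E[mu(x) + sigma*eps] = w exactly,
    # so the Monte-Carlo gradient estimate reduces to w itself.
    x_adv = x.copy()
    for _ in range(steps):
        grad = w                                   # gradient through the samples
        x_adv = x_adv + lr * np.sign(grad)         # sign-gradient ascent step
        x_adv = np.clip(x_adv, x - eps_budget, x + eps_budget)  # project
    return x_adv

x = np.array([1.0, 1.0, 1.0])
x_adv = attack(x)
print(forecast_samples(x).mean(), forecast_samples(x_adv).mean())
```

In a real probabilistic autoregressive model the gradient is not available in closed form; it would be estimated through the sampled forecast trajectories (e.g. via automatic differentiation over reparameterized draws), which is precisely the differentiation-through-sampling challenge the abstract refers to.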
arXiv:2003.03778v1
fatcat:nibk7reftnab5ibqnulfv6mqqu