26 Hits in 7.4 sec

Explainable Abstract Trains Dataset [article]

Manuel de Sousa Ribeiro, Ludwig Krippahl, Joao Leite
2020 arXiv   pre-print
The Explainable Abstract Trains Dataset is an image dataset containing simplified representations of trains.  ...  The dataset is accompanied by an ontology that conceptualizes and classifies the depicted trains based on their visual characteristics, allowing for a precise understanding of how each train was labeled  ...  In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings, 2014.  ... 
arXiv:2012.12115v1 fatcat:deixf5bzajf2rnktjk36q2rwhm

Score-Based Generative Classifiers [article]

Roland S. Zimmermann, Lukas Schott, Yang Song, Benjamin A. Dunn, David A. Klindt
2021 arXiv   pre-print
Generative models have been used as adversarially robust classifiers on simple datasets such as MNIST, but this robustness has not been observed on more complex datasets like CIFAR-10.  ...  Nevertheless, we find that these models are only slightly, if at all, more robust than discriminative baseline models on out-of-distribution tasks based on common image corruptions.  ...  In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. [45] Sasha  ... 
arXiv:2110.00473v2 fatcat:6yer6cgkxnbf7kg2m2xmxjtre4
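
Since several results in this list hinge on the idea of a generative classifier, a brief illustration may help: classify by Bayes' rule, picking the class whose learned density (times its prior) assigns the input the highest likelihood. The code below is a minimal sketch with Gaussian class-conditional densities on toy data; the Gaussian choice and all names are illustrative assumptions, not the score-based models used by Zimmermann et al.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes drawn from different Gaussians (illustrative only).
X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(200, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(200, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 200 + [1] * 200)

def fit_gaussian(samples):
    """Fit a class-conditional Gaussian by its sample mean and covariance."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    return mu, cov

def log_density(x, mu, cov):
    """Log N(x; mu, cov) for a single point x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

# One density per class plus the class priors.
params = {c: fit_gaussian(X[y == c]) for c in (0, 1)}
log_priors = {c: np.log((y == c).mean()) for c in (0, 1)}

def generative_classify(x):
    """Bayes decision rule: argmax_c  log p(x | c) + log p(c)."""
    scores = {c: log_density(x, *params[c]) + log_priors[c] for c in params}
    return max(scores, key=scores.get)

print(generative_classify(np.array([-1.5, 0.3])))   # close to class 0's mean
print(generative_classify(np.array([+2.5, -0.1])))  # close to class 1's mean
```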

PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function [article]

Manaar Alam, Shubhajit Datta, Debdeep Mukhopadhyay, Arijit Mondal, Partha Pratim Chakrabarti
2021 arXiv   pre-print
The security of Deep Learning classifiers is a critical field of study because of the existence of adversarial attacks.  ...  Such attacks usually rely on the principle of transferability, where an adversarial example crafted on a surrogate classifier tends to mislead the target classifier trained on the same dataset even if  ...  In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.  ... 
arXiv:2112.04948v1 fatcat:i7ab4hvgprcgvdpnowrlwmaiwa
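
The transferability principle cited in the PARL abstract is straightforward to demonstrate in miniature: craft an adversarial example against a surrogate model and check whether it also fools a separately trained target model. Below is a hedged sketch using FGSM-style perturbations on two toy logistic-regression models; it illustrates the general phenomenon only and is not the pairwise adversarially robust loss proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification task (illustrative).
X = rng.normal(size=(400, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

def train_logreg(features, labels, seed, steps=500, lr=0.1):
    """Train a small logistic-regression model by batch gradient descent."""
    w = np.random.default_rng(seed).normal(scale=0.01, size=features.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w)))
        w -= lr * features.T @ (p - labels) / len(labels)
    return w

w_surrogate = train_logreg(X, y, seed=0)              # attacker's model
w_target = train_logreg(X[:300], y[:300], seed=1)     # victim, trained separately

def fgsm(x, label, w, eps=0.5):
    """FGSM-style perturbation: step in the sign of the input gradient of the loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    grad = (p - label) * w            # d(logistic loss)/dx for this model
    return x + eps * np.sign(grad)

# Craft adversarial examples on the surrogate, then measure transfer to the target.
X_adv = np.array([fgsm(x, lab, w_surrogate) for x, lab in zip(X, y)])
accuracy = lambda inputs, w: (((inputs @ w) > 0).astype(float) == y).mean()
print("target accuracy on clean inputs:      ", accuracy(X, w_target))
print("target accuracy on transferred inputs:", accuracy(X_adv, w_target))
```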

Variational Neural Machine Translation with Normalizing Flows [article]

Hendra Setiawan, Matthias Sperber, Udhay Nallasamy, Matthias Paulik
2020 arXiv   pre-print
Unfortunately, learning informative latent variables is non-trivial, as the latent space can be prohibitively large, and the latent codes are prone to be ignored by many translation models at training  ...  Variational Neural Machine Translation (VNMT) is an attractive framework for modeling the generation of target translations, conditioned not only on the source sentence but also on some latent random variables  ...  In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.  ... 
arXiv:2005.13978v1 fatcat:7ysqtt4kgzgm3l4kddct7byihy
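
The normalizing-flow machinery mentioned in the VNMT abstract rests on the change-of-variables formula: push a sample from a simple base distribution through an invertible map and correct its density by the log-determinant of the Jacobian. The following is a minimal sketch of a single affine flow layer under assumed parameter names; it is not the paper's variational posterior.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4

# Parameters of one affine flow layer  x = z * exp(log_scale) + shift  (assumed names).
log_scale = rng.normal(scale=0.1, size=dim)
shift = rng.normal(scale=0.1, size=dim)

def base_log_prob(z):
    """Log density of the standard-normal base distribution."""
    return -0.5 * np.sum(z ** 2 + np.log(2 * np.pi), axis=-1)

def flow_forward(z):
    """Invertible transform plus log|det Jacobian| (diagonal Jacobian here)."""
    return z * np.exp(log_scale) + shift, np.sum(log_scale)

def flow_log_prob(x):
    """Change of variables: log p(x) = log p_base(z) - log|det J|, with z the inverse image."""
    z = (x - shift) * np.exp(-log_scale)
    return base_log_prob(z) - np.sum(log_scale)

# Sample through the flow and check that the density bookkeeping is consistent.
z0 = rng.normal(size=(3, dim))
x, log_det = flow_forward(z0)
print("log p(x) via inverse:     ", flow_log_prob(x))
print("log p(x) via forward pass:", base_log_prob(z0) - log_det)
```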

Invertible generative models for inverse problems: mitigating representation error and dataset bias [article]

Muhammad Asim, Max Daniels, Oscar Leong, Ali Ahmed, Paul Hand
2020 arXiv   pre-print
We additionally compare performance for compressive sensing to unlearned methods, such as the deep decoder, and we establish theoretical bounds on expected recovery error in the case of a linear invertible  ...  In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors at inverse problems such as denoising, compressive  ...  (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Liu, Z., Luo, P., Wang, X., and Tang, X.  ... 
arXiv:1905.11672v4 fatcat:hgpfoh6frfa4thyxvhmqjzqomi
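
The phrase "zero representation error by design" refers to invertible priors G for which every signal x equals G(z) for some latent z, so an inverse problem can be attacked by searching latent space for a z whose measurements A G(z) match the observation y. The sketch below runs that latent-space gradient descent with a deliberately trivial invertible G (an orthogonal linear map) so it stays self-contained; the paper itself uses invertible neural networks, which is what makes the latent search informative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 64, 20                                  # signal dimension, number of measurements

# Stand-in invertible "generator": an orthogonal linear map (toy assumption).
G, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = rng.normal(size=(m, n)) / np.sqrt(m)       # compressive measurement matrix

# Ground-truth signal and its noisy compressed measurements.
x_true = G @ rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)

# Recover by gradient descent over the latent code z, minimizing 0.5 * ||A G(z) - y||^2.
z = np.zeros(n)
lr = 0.05
for _ in range(2000):
    residual = A @ (G @ z) - y
    z -= lr * (G.T @ (A.T @ residual))

x_hat = G @ z
# With an unstructured toy prior only the part of x_true seen by A is recovered;
# a learned invertible network is what makes this latent search informative.
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```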

Variational Tracking and Prediction with Generative Disentangled State-Space Models [article]

Adnan Akhundov, Maximilian Soelch, Justin Bayer, Patrick van der Smagt
2019 arXiv   pre-print
Generative and inference models are jointly learned from observations only.  ...  Tracking performance is increased significantly over prior art.  ...  In: 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. URL: http://arxiv.org/abs/1312.6114.  ... 
arXiv:1910.06205v1 fatcat:7radjf2djnfh7copebflvopid4
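
For context on the state-space models referenced in this abstract: they factorize a sequence into a latent state that evolves over time and an emission that renders each observation. The sketch below shows only the generative half as a toy linear-Gaussian rollout; learning both the generative and inference networks from observations, which is the paper's subject, is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
state_dim, obs_dim, T = 2, 3, 10

# Toy linear-Gaussian state-space model:
#   z_t = A z_{t-1} + process noise,   x_t = C z_t + observation noise.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                     # position/velocity-style dynamics
C = rng.normal(size=(obs_dim, state_dim))      # emission matrix
process_std, obs_std = 0.05, 0.1

def rollout(steps):
    """Sample a latent trajectory and the observation sequence it emits."""
    z = rng.normal(size=state_dim)
    states, observations = [], []
    for _ in range(steps):
        z = A @ z + process_std * rng.normal(size=state_dim)
        states.append(z)
        observations.append(C @ z + obs_std * rng.normal(size=obs_dim))
    return np.array(states), np.array(observations)

latents, obs = rollout(T)
print("latent trajectory shape:   ", latents.shape)   # (T, state_dim)
print("observation sequence shape:", obs.shape)       # (T, obs_dim)
```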

PDFNet: Pointwise Dense Flow Network for Urban-Scene Segmentation [article]

Venkata Satya Sai Ajay Daliparthi
2021 arXiv   pre-print
Moreover, our method achieves considerable performance in classifying out-of-training-distribution samples, evaluated on the Cityscapes to KITTI setting.  ...  The extensive experiments on Cityscapes and CamVid benchmarks demonstrate that our method significantly outperforms baselines in capturing small classes and in few-data regimes.  ...  In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. [62] Sergey Ioffe and Christian Szegedy.  ... 
arXiv:2109.10083v1 fatcat:4zzppv7iuzelxiqu2i6h5fdiey

What to Pre-Train on? Efficient Intermediate Task Selection [article]

Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych
2021 arXiv   pre-print
Our results show that efficient embedding-based methods that rely solely on the respective datasets outperform computationally expensive few-shot fine-tuning approaches.  ...  similar sequential fine-tuning gains can be achieved in adapter settings, and subsequently consolidate previously proposed methods that efficiently identify beneficial tasks for intermediate transfer learning  ...  We thank Leonardo Ribeiro and the anonymous reviewers for insightful feedback and suggestions on a draft of this paper.  ... 
arXiv:2104.08247v2 fatcat:4ljcfshev5f3tmgugrrrkh3s4m
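
The "efficient embedding-based methods" in this abstract reduce, at their simplest, to representing each candidate dataset by an aggregate embedding and ranking candidates by similarity to the target task instead of fine-tuning on every candidate. A minimal sketch of that ranking step follows, with random vectors standing in for real sentence embeddings; the dataset names and the cosine-similarity choice are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 32

# Stand-ins for sentence embeddings of each candidate dataset; in practice these
# would come from running an encoder over a sample of each dataset's examples.
candidates = {
    "candidate_A": rng.normal(size=(500, dim)),
    "candidate_B": rng.normal(size=(500, dim)),
    "candidate_C": rng.normal(size=(500, dim)),
}
target = rng.normal(size=(200, dim))           # embeddings of the target task

def dataset_embedding(embeddings):
    """Represent a whole dataset by the mean of its example embeddings."""
    return embeddings.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank candidate intermediate tasks by embedding similarity to the target task.
target_vec = dataset_embedding(target)
ranking = sorted(
    ((name, cosine(dataset_embedding(emb), target_vec)) for name, emb in candidates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranking:
    print(f"{name}: {score:+.3f}")
```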

A Survey of Deep Active Learning [article]

Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B. Gupta, Xiaojiang Chen, Xin Wang
2021 arXiv   pre-print
Deep learning (DL) is data-hungry and requires a large supply of data to optimize its massive number of parameters, so that the model learns how to extract high-quality features.  ...  A natural question is whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. Therefore, deep active learning (DAL) has emerged.  ...  In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. [115] Andreas Kirsch, Joost van Amersfoort, and Yarin Gal  ... 
arXiv:2009.00236v2 fatcat:zuk2doushzhlfaufcyhoktxj7e
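
The loop behind the "DAL" framing in this survey is: train on a small labeled pool, score the unlabeled pool by an informativeness measure, query labels for the top-scoring items, and repeat. The sketch below shows one acquisition round using entropy-based uncertainty sampling with a placeholder model; the scoring function and batch size are illustrative, and concrete DAL methods differ mainly in how they score.

```python
import numpy as np

rng = np.random.default_rng(6)

def predict_proba(inputs):
    """Placeholder model: random weights stand in for a network trained on the labeled pool."""
    logits = inputs @ rng.normal(size=(inputs.shape[1], 3))   # 3 classes
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def entropy(probs):
    """Predictive entropy: a standard uncertainty score for active learning."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

# One acquisition round: score the unlabeled pool, query the most uncertain items.
unlabeled_pool = rng.normal(size=(1000, 16))
scores = entropy(predict_proba(unlabeled_pool))
query_batch = np.argsort(scores)[-10:]          # indices to send to an annotator

print("queried indices:", query_batch)
print("their entropy scores:", np.round(scores[query_batch], 3))
```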

Neural Network Module Decomposition and Recomposition [article]

Hiroaki Kingetsu, Kenichi Kobayashi, Taiji Suzuki
2021 arXiv   pre-print
In contrast to existing studies based on reusing models that involve retraining, such as a transfer learning model, the proposed method does not require retraining and has wide applicability as it can  ...  To extract modules, we designed a learning method and a loss function to maximize shared weights among modules.  ...  Canada, April 14-16, 2014, Conference Track Proceedings.  ...  A review of modularization techniques in artificial neural networks.  ... 
arXiv:2112.13208v1 fatcat:zcccxh6nmrberacsaj3b5ct4vi

Towards Robust Explanations for Deep Neural Networks

Ann-Kathrin Dombrowski, Christopher J. Anders, Klaus-Robert Müller, Pan Kessel
2021 Pattern Recognition  
We develop a unified theoretical framework for deriving bounds on the maximal manipulability of a model.  ...  Explanation methods shed light on the decision process of black-box classifiers such as deep neural networks. But their usefulness can be compromised because they are susceptible to manipulations.  ...  In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings, 2014. [3] Matthew D. Zeiler and Rob Fergus.  ... 
doi:10.1016/j.patcog.2021.108194 fatcat:qv77e5cilzfudkd2dcce3fkgsy
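
The explanation methods whose manipulability this paper bounds are typically gradient-based saliency maps: the explanation of an input is (a function of) the gradient of the model output with respect to that input, which is precisely what small input manipulations can exploit. Below is a minimal sketch that computes a plain gradient saliency map for a tiny hand-written two-layer network; the architecture and weights are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
in_dim, hidden = 8, 16

# A tiny two-layer network f(x) = w2 . tanh(W1 x) with made-up weights.
W1 = rng.normal(size=(hidden, in_dim))
w2 = rng.normal(size=hidden)

def forward(x):
    return float(w2 @ np.tanh(W1 @ x))

def saliency(x):
    """Gradient explanation: d f / d x via the chain rule through the hidden layer."""
    h = np.tanh(W1 @ x)
    return W1.T @ (w2 * (1.0 - h ** 2))         # (1 - tanh^2) is tanh's derivative

x = rng.normal(size=in_dim)
grad = saliency(x)
print("model output:", round(forward(x), 4))
print("saliency map:", np.round(grad, 3))

# Finite-difference check that the analytic gradient is correct.
eps = 1e-5
fd = np.array([
    (forward(x + eps * np.eye(in_dim)[i]) - forward(x - eps * np.eye(in_dim)[i])) / (2 * eps)
    for i in range(in_dim)
])
print("max deviation from finite differences:", float(np.max(np.abs(fd - grad))))
```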

Improving Neural Topic Models using Knowledge Distillation [article]

Alexander Hoyle, Pranav Goel, Philip Resnik
2020 arXiv   pre-print
In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. Thomas K. Landauer and Susan T. Dumais. 1997.  ...  In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.  ... 
arXiv:2010.02377v1 fatcat:xm7rr7hw7nc6nngheo4x5t6ncu

When are Non-Parametric Methods Robust? [article]

Robi Bhattacharjee, Kamalika Chaudhuri
2020 arXiv   pre-print
In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Wang, Y., Jha, S., and Chaudhuri, K.  ...  In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.  ...  Therefore one must have cardinality at least |S|/2, which implies the same about |S_r|. Proof.  ... 
arXiv:2003.06121v2 fatcat:jz2a2oadoffxtg4ph7hsqq6vc4

Transforming Gaussian Processes With Normalizing Flows [article]

Juan Maroñas, Oliver Hamelijnck, Jeremias Knoblauch, Theodoros Damoulas
2021 arXiv   pre-print
The resulting algorithm's computational and inferential performance is excellent, and we demonstrate this on a range of data sets.  ...  Inspired by the growing body of work on Normalizing Flows, we enlarge this class of priors through a parametric invertible transformation that can be made input-dependent.  ...  In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.  ... 
arXiv:2011.01596v2 fatcat:njvhgcnpjbfo3oeojxg4icryzu
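
The construction referenced here, enlarging a GP prior through a parametric invertible transformation, can be previewed without any GP library: draw samples from a plain GP prior and push them elementwise through a monotone invertible map, yielding, for instance, a positivity-constrained prior. The sketch below does exactly that with an RBF kernel; the kernel hyperparameters and the softplus map are assumptions, and the input-dependent flows of the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(8)

# Inputs and an RBF (squared-exponential) covariance for the base GP prior.
x = np.linspace(0.0, 5.0, 50)
lengthscale, variance = 1.0, 1.0
K = variance * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale ** 2)
K += 1e-8 * np.eye(len(x))                      # jitter for numerical stability

# Draw three samples from the base GP prior f ~ N(0, K).
L = np.linalg.cholesky(K)
f_base = L @ rng.normal(size=(len(x), 3))

def softplus(z):
    """Monotone, invertible map onto the positive reals."""
    return np.log1p(np.exp(z))

# Transformed prior: push each GP sample elementwise through the invertible map.
f_transformed = softplus(f_base)

print("base sample range:       ", float(f_base.min()), float(f_base.max()))
print("transformed sample range:", float(f_transformed.min()), float(f_transformed.max()))
```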

Beyond Robustness: Resilience Verification of Tree-Based Classifiers [article]

Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi, Salvatore Orlando
2021 arXiv   pre-print
In this paper we criticize the robustness measure traditionally employed to assess the performance of machine learning models deployed in adversarial settings.  ...  To mitigate the limitations of robustness, we introduce a new measure called resilience and we focus on its verification.  ...  CoRR, abs/2004.03295, 2020.  ...  Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. [5] Stefano Calzavara, Claudio Lucchese, and Gabriele Tolomei.  ... 
arXiv:2112.02705v1 fatcat:ahw6lbkf7fbnnlubo7z5zuq4wy
Showing results 1 — 15 out of 26 results