42 Hits in 5.3 sec

Deep Learning under Privileged Information Using Heteroscedastic Dropout [article]

John Lambert, Ozan Sener, Silvio Savarese
2018 arXiv   pre-print
We propose to use a heteroscedastic dropout (i.e. dropout with a varying variance) and make the variance of the dropout a function of privileged information.  ...  This is what the Learning Under Privileged Information (LUPI) paradigm endeavors to model by utilizing extra knowledge only available during training.  ...  Learning Language under Privileged Visual Information: Using images as privileged information to learn language is not new. Chrupala et al.  ... 
arXiv:1805.11614v1 fatcat:o2xkrwsu75aqvndjs7jxzz63sm

Deep Learning Under Privileged Information Using Heteroscedastic Dropout

John Lambert, Ozan Sener, Silvio Savarese
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
We propose to use a heteroscedastic dropout (i.e. dropout with a varying variance) and make the variance of the dropout a function of privileged information.  ...  This is what the Learning Under Privileged Information (LUPI) paradigm endeavors to model by utilizing extra knowledge only available during training.  ...  Acknowledgements We thank Alessandro Achille for his help on comparison with information dropout.  ... 
doi:10.1109/cvpr.2018.00926 dblp:conf/cvpr/LambertSS18 fatcat:gs5cnyautrh2rpluxtumfuwe2u
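
The core mechanism described by the two records above is compact enough to sketch. Below is a minimal PyTorch rendering, assuming a training loop that supplies privileged features x_priv alongside each example; the module and variable names are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn as nn

class HeteroscedasticDropout(nn.Module):
    """Multiplicative Gaussian dropout whose variance is predicted
    from privileged information, available only during training."""
    def __init__(self, priv_dim, feat_dim):
        super().__init__()
        # Maps privileged features to a per-unit noise scale sigma(x*) >= 0.
        self.sigma_net = nn.Sequential(
            nn.Linear(priv_dim, feat_dim),
            nn.Softplus(),
        )

    def forward(self, h, x_priv=None):
        if self.training and x_priv is not None:
            sigma = self.sigma_net(x_priv)      # (batch, feat_dim)
            eps = torch.randn_like(h)           # unit Gaussian noise
            return h * (1.0 + sigma * eps)      # mean-preserving perturbation
        return h                                # no privileged info: identity
```

At test time the privileged input is absent and the layer reduces to the identity, which matches the LUPI constraint that the extra knowledge may influence training only.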

Disease gene prediction with privileged information and heteroscedastic dropout

Juan Shu, Yu Li, Sheng Wang, Bowei Xi, Jianzhu Ma
2021 Bioinformatics  
Results In this work, we propose a graph neural network (GNN) version of the Learning under Privileged Information paradigm to predict new disease gene associations.  ...  Availability and implementation Our method is realized with Python 3.7 and PyTorch 1.5.0, and the method and data are freely available at: https://github.com/juanshu30/Disease-Gene-Prioritization-with-Privileged-Information-and-Heteroscedastic-Dropout  ...  Privileged information and heteroscedastic Gaussian dropout Privileged features represent features that are often hard to collect in practice, so that sometimes we only have this kind of feature  ... 
doi:10.1093/bioinformatics/btab310 pmid:34252957 pmcid:PMC8275341 fatcat:4itlkcdsjzauxhqbzhv5vndh54
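
The authors' implementation is at the GitHub URL above. Purely for orientation, a GNN variant of the same mechanism might look like the sketch below, which assumes PyTorch Geometric is installed and reuses the heteroscedastic layer idea from the previous entry; the class and argument names are hypothetical.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric is installed

class LUPIGeneGNN(nn.Module):
    """Two-layer GCN whose hidden units get Gaussian dropout with a
    variance driven by privileged node features (training time only)."""
    def __init__(self, in_dim, hid_dim, priv_dim, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, n_classes)
        self.sigma_net = nn.Sequential(nn.Linear(priv_dim, hid_dim), nn.Softplus())

    def forward(self, x, edge_index, x_priv=None):
        h = torch.relu(self.conv1(x, edge_index))
        if self.training and x_priv is not None:
            # Privileged per-node features modulate the dropout variance.
            h = h * (1.0 + self.sigma_net(x_priv) * torch.randn_like(h))
        return self.conv2(h, edge_index)
```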

Transfer and Marginalize: Explaining Away Label Noise with Privileged Information [article]

Mark Collier, Rodolphe Jenatton, Efi Kokiopoulou, Jesse Berent
2022 arXiv   pre-print
We argue that privileged information is useful for explaining away label noise, thereby reducing the harmful impact of noisy labels.  ...  Our method, TRAM (TRansfer and Marginalize), has minimal training time overhead and has the same test-time cost as not using privileged information.  ...  Method: TRAM We consider learning under privileged information (Vapnik & Vashist, 2009), LUPI.  ... 
arXiv:2202.09244v2 fatcat:r354np5aszeozag377snymryca
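
Reading the abstract, the two-path structure can be sketched as follows: a training-only head that also sees privileged information can absorb label noise, while the test-time path ignores it and adds no inference cost. The sketch shows only the forward structure, not the paper's transfer step, and all names are our own.

```python
import torch
import torch.nn as nn

class TRAMStyleNet(nn.Module):
    """Shared trunk with two heads: a training-only head that also sees
    privileged information (so it can explain away label noise) and a
    clean head that is all that runs at test time."""
    def __init__(self, in_dim, hid_dim, priv_dim, n_classes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.clean_head = nn.Linear(hid_dim, n_classes)
        self.noise_head = nn.Linear(hid_dim + priv_dim, n_classes)

    def forward(self, x, x_priv=None):
        h = self.trunk(x)
        if self.training and x_priv is not None:
            # Training: logits may depend on privileged info, which can
            # absorb annotator noise instead of corrupting the trunk.
            return self.noise_head(torch.cat([h, x_priv], dim=-1))
        return self.clean_head(h)  # test-time cost: same as a no-PI model
```

In the method itself, the representation trained with privileged information is transferred and the test-time head is fitted without it; this sketch only illustrates why inference costs the same as a PI-free model.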

A neural network strategy for supervised classification via the Learning Under Privileged Information paradigm

Ludovica Sacco, Dino Ienco, Roberto Interdonato
2021 Sistemi Evoluti per Basi di Dati  
In order to exploit such additional information, the Learning Using Privileged Information (LUPI) paradigm has been proposed, based on the use of the teacher role in the learning process.  ...  In this work, we apply this paradigm in the context of neural networks, by proposing a LUPI-based deep learning architecture able to exploit a larger set of attributes at training time, with the aim to  ...  A LUPI framework for CNNs and RNNs is offered in [16], where a heteroscedastic dropout is used and the privileged information is represented by the variance of the dropout.  ... 
dblp:conf/sebd/SaccoII21 fatcat:mgowfodoxrdixlqaw2djotsbjm

Correlated Input-Dependent Label Noise in Large-Scale Image Classification [article]

Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, Jesse Berent
2021 arXiv   pre-print
Our method is simple to use, and we provide an implementation that is a drop-in replacement for the final fully-connected layer in a deep classifier.  ...  We take a principled probabilistic approach to modelling input-dependent, also known as heteroscedastic, label noise in these datasets.  ...  Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.  ... 
arXiv:2105.10305v1 fatcat:r75nlz5ymbdyddwkjurmxhzq5q
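
The "drop-in replacement for the final fully-connected layer" can be approximated as a logit layer with input-dependent Gaussian noise whose softmax output is Monte-Carlo averaged. The paper models low-rank correlated noise across classes; the diagonal sketch below is a simplification, and the names are ours.

```python
import torch
import torch.nn as nn

class HetHead(nn.Module):
    """Final-layer replacement: logits receive input-dependent Gaussian
    noise, and the softmax is Monte-Carlo averaged. Diagonal noise only;
    the paper additionally models low-rank correlations across classes."""
    def __init__(self, feat_dim, n_classes, n_samples=10, temperature=1.0):
        super().__init__()
        self.mu = nn.Linear(feat_dim, n_classes)
        self.log_sigma = nn.Linear(feat_dim, n_classes)
        self.n_samples, self.temperature = n_samples, temperature

    def forward(self, h):
        mu, sigma = self.mu(h), self.log_sigma(h).exp()
        eps = torch.randn(self.n_samples, *mu.shape, device=h.device)
        logits = (mu + sigma * eps) / self.temperature
        return logits.softmax(dim=-1).mean(dim=0)  # averaged class probabilities
```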

A review of uncertainty quantification in deep learning: Techniques, applications and challenges

Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U. Rajendra Acharya, Vladimir Makarenkov, Saeid Nahavandi
2021 Information Fusion  
This study reviews recent advances in UQ methods used in deep learning, investigates the application of these methods in reinforcement learning, and highlights fundamental research challenges and directions  ...  Bayesian approximation and ensemble learning techniques are two widely-used types of uncertainty quantification (UQ) methods.  ...  The privileged information is then used in heteroscedastic dropout to estimate the uncertainty function  ... 
doi:10.1016/j.inffus.2021.05.008 fatcat:yschhguyxbfntftj6jv4dgywxm
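
One Bayesian-approximation technique the review covers, MC dropout (the "Dropout as a Bayesian approximation" paper cited in the previous entry), fits in a few lines. The sketch assumes a classifier whose only stochastic layers are nn.Dropout.

```python
import torch
import torch.nn as nn

def enable_dropout(model: nn.Module):
    """Switch only Dropout layers to train mode, leaving e.g. batch norm frozen."""
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Average stochastic forward passes to get predictive probabilities
    and a per-class spread that serves as an uncertainty signal."""
    model.eval()
    enable_dropout(model)  # dropout stays stochastic at test time
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)
```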

Epistemic Neural Networks [article]

Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, Benjamin Van Roy
2022 arXiv   pre-print
We introduce the epistemic neural network (ENN) as an interface for models that represent uncertainty as required to generate useful joint predictions.  ...  We demonstrate this efficacy across synthetic data, ImageNet, and some reinforcement learning tasks. As part of this effort we open-source experiment code.  ...  We also extend a special thanks to Mark Collier, Mike Dusenberry, Jeremiah Luh, Balaji Lakshminarayanan and the rest of the Google Brain reliable deep learning team for their help in integrating our research  ... 
arXiv:2107.08924v5 fatcat:gaiua6m5vbckvbpmsv2vysxd2u
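
The ENN interface amounts to a network that takes an extra epistemic index z; variation of the output across samples of z expresses epistemic uncertainty and yields joint predictions. A toy sketch, with all sizes and names invented here:

```python
import torch
import torch.nn as nn

class TinyENN(nn.Module):
    """Epistemic neural network in miniature: outputs depend on an
    epistemic index z, and disagreement across z gives joint predictions."""
    def __init__(self, in_dim, hid_dim, n_classes, z_dim=8):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim + z_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, n_classes),
        )

    def forward(self, x, z):
        zb = z.expand(x.shape[0], -1)  # z: (1, z_dim), shared across the batch
        return self.net(torch.cat([x, zb], dim=-1))

# Joint prediction: stack logits for one batch over several sampled
# indices and inspect how much they disagree.
# enn = TinyENN(16, 64, 3)
# x = torch.randn(4, 16)
# logits = torch.stack([enn(x, torch.randn(1, 8)) for _ in range(10)])
```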

Privileged Pooling: Better Sample Efficiency Through Supervised Attention [article]

Andres C. Rodriguez, Stefano D'Aronco, Konrad Schindler, Jan Dirk Wegner
2021 arXiv   pre-print
We propose a scheme for supervised image classification that uses privileged information, in the form of keypoint annotations for the training data, to learn strong models from small and/or biased training  ...  In experiments with three different animal species datasets, we show that deep networks with privileged pooling can use small training sets more efficiently and generalize better.  ...  Heteroscedastic dropout (h-dropout) [19] highlights how learning under privileged information can be implemented via a dropout regularization.  ... 
arXiv:2003.09168v3 fatcat:jhx4xjupjnb6ri4jn3fukeb2b4
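
The abstract suggests attention-weighted pooling whose attention map is supervised by keypoint annotations at training time. A minimal sketch under that reading; the loss choice and all names are our assumptions, and keypoint_map is taken to be a (B, 1, H, W) heatmap in [0, 1].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedAttentionPool(nn.Module):
    """Attention-weighted spatial pooling; during training the attention
    map can be supervised with keypoint heatmaps (privileged info)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats, keypoint_map=None):
        a = self.attn(feats)                            # (B,1,H,W) logits
        w = torch.softmax(a.flatten(2), dim=-1).view_as(a)
        pooled = (feats * w).sum(dim=(2, 3))            # (B,C) pooled features
        aux_loss = None
        if self.training and keypoint_map is not None:
            # Privileged supervision: pull attention toward the keypoints.
            aux_loss = F.binary_cross_entropy_with_logits(a, keypoint_map)
        return pooled, aux_loss
```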

Neural Network Models for Empirical Finance

Hector F. Calvo-Pardo, Tullio Mancini, Jose Olmo
2020 Journal of Risk and Financial Management  
We also review other features of machine learning methods, such as the selection of hyperparameters, the role of the architecture of a deep neural network for model prediction, or the importance of using  ...  This paper presents an overview of the procedures that are involved in prediction with machine learning models with special emphasis on deep learning.  ... 
doi:10.3390/jrfm13110265 fatcat:pxeaxmv6nngxfekzo2qsrktw5q

Advances in artificial neural networks, machine learning and computational intelligence

Luca Oneto, Kerstin Bunte, Frank-Michael Schleif
2019 Neurocomputing  
intervals to the setting of privileged information, i.e. potentially relevant information is available for training purposes only, but cannot be used for the prediction  ...  Dropout and DropConnect are useful methods to prevent multilayer neural networks from overfitting.  ...  To show the benefits of the approach, authors use a challenging data set where the dynamics of the underlying system exhibit both operational phase shifts and heteroscedastic noise.  ... 
doi:10.1016/j.neucom.2019.01.081 fatcat:2uyl25ojxbf2tpdoiepsog2q74

Robust Asymmetric Learning in POMDPs [article]

Andrew Warrington and J. Wilder Lavington and Adam Ścibior and Mark Schmidt and Frank Wood
2021 arXiv   pre-print
unsafe, under partial information.  ...  We derive an objective to instead train the expert to maximize the expected reward of the imitating agent policy, and use it to construct an efficient algorithm, adaptive asymmetric DAgger (A2D), that  ...  Deep learning under privileged information using heteroscedastic dropout. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8886-8895, 2018.  ... 
arXiv:2012.15566v3 fatcat:etbg3phqnvgdtfcm2ctbawhane
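
The asymmetric core (an expert conditioned on the full state, an agent conditioned on its partial observation) reduces to a KL imitation term; A2D's contribution is additionally adapting the expert toward the agent's expected reward, which this fragment does not attempt to show. A hedged sketch:

```python
import torch.nn.functional as F

def asymmetric_imitation_loss(expert_logits, agent_logits):
    """KL(expert || agent) over action distributions: the expert
    conditions on the full (privileged) state, the agent only on
    its partial observation."""
    target = expert_logits.detach().softmax(dim=-1)  # expert is not updated here
    return F.kl_div(agent_logits.log_softmax(dim=-1), target,
                    reduction='batchmean')
```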

BiGAN: LncRNA-disease association prediction based on bidirectional generative adversarial network

Qiang Yang, Xiaokun Li
2021 BMC Bioinformatics  
LncRNA-disease association prediction is very useful for understanding pathogenesis, diagnosis, and prevention of diseases, and is helpful for labelling relevant biological information.  ...  In recent years, heteroscedastic dropout has been one of the best regularization techniques for controlling deep neural networks to absorb privileged information.  ...  In the past decade, deep learning has become one of the most popular subjects in scientific research. Many deep learning models have been created by scholars and applied in various fields.  ... 
doi:10.1186/s12859-021-04273-7 fatcat:3uw3ldpdxjhldmw2rsg4ti54tu
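
In outline, a bidirectional GAN trains an encoder E(x) -> z and a generator G(z) -> x adversarially, so that real pairs (x, E(x)) become indistinguishable from generated pairs (G(z), z); the paper applies this to lncRNA-disease association data. A generic sketch with arbitrary sizes:

```python
import torch
import torch.nn as nn

class TinyBiGAN(nn.Module):
    """Encoder E: x -> z, generator G: z -> x, and a discriminator D
    over joint (x, z) pairs; E and G are trained to fool D."""
    def __init__(self, x_dim, z_dim, hid=128):
        super().__init__()
        self.E = nn.Sequential(nn.Linear(x_dim, hid), nn.ReLU(), nn.Linear(hid, z_dim))
        self.G = nn.Sequential(nn.Linear(z_dim, hid), nn.ReLU(), nn.Linear(hid, x_dim))
        self.D = nn.Sequential(nn.Linear(x_dim + z_dim, hid), nn.ReLU(), nn.Linear(hid, 1))

    def disc_logit(self, x, z):
        # High logit: D believes (x, z) is a real pair (x, E(x)).
        return self.D(torch.cat([x, z], dim=-1))

# Discriminator loss per batch: BCE pushing D(x, E(x)) -> 1 and
# D(G(z), z) -> 0; the encoder and generator minimise the reverse objective.
```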

From Dependence to Causation [article]

David Lopez-Paz
2016 arXiv   pre-print
Machine learning is the science of discovering statistical dependencies in data, and the use of those dependencies to perform predictions.  ...  Third, we discover causal structures in convolutional neural network features using our algorithms.  ...  Section 5.4.4 explores the use of RCCA in Vapnik's learning using privileged information setup.  ... 
arXiv:1607.03300v1 fatcat:img5m23n5ncx5mfejgqkjft2ua

High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning [article]

Antoine Grosnit, Rasul Tutunov, Alexandre Max Maraval, Ryan-Rhys Griffiths, Alexander I. Cowen-Rivers, Lin Yang, Lin Zhu, Wenlong Lyu, Zhitang Chen, Jun Wang, Jan Peters, Haitham Bou-Ammar
2021 arXiv   pre-print
By adapting ideas from deep metric learning, we use label guidance from the blackbox function to structure the VAE latent space, facilitating the Gaussian process fit and yielding improved BO performance  ...  We introduce a method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional and structured input spaces.  ...  High dimensional Bayesian optimization using dropout.  ... 
arXiv:2106.03609v3 fatcat:hzy5d6iawfhdna2qpbx3w57mx4
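
Reduced to a single iteration and ignoring the deep-metric-learning shaping of the latent space, the recipe is: fit a GP surrogate on (latent code, objective value) pairs, optimise an acquisition over latents, decode the winner. The sketch assumes user-supplied encode and decode callables and uses scikit-learn's GP.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def latent_bo_step(encode, decode, X_obs, y_obs, rng, n_cand=256):
    """One BO iteration in a VAE latent space: fit a GP surrogate on
    (latent code, objective value) pairs, score candidates sampled from
    the VAE prior with a UCB acquisition, and decode the winner."""
    Z = np.stack([encode(x) for x in X_obs])       # latent codes seen so far
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(Z, y_obs)
    cand = rng.normal(size=(n_cand, Z.shape[1]))   # N(0, I) = the VAE prior
    mu, sd = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + 1.96 * sd)]       # upper confidence bound
    return decode(z_next)                          # next point to evaluate
```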
Showing results 1 — 15 out of 42 results