7,588 Hits in 7.0 sec

A Group-Theoretic Framework for Data Augmentation [article]

Shuxiao Chen, Edgar Dobriban, Jane H Lee
2020 arXiv   pre-print
Data augmentation is a widely used trick when training deep neural networks: in addition to the original data, properly transformed data are also added to the training set.  ...  We show data augmentation is equivalent to an averaging operation over the orbits of a certain group that keeps the data distribution approximately invariant.  ...  We thank Jialin Mao for participating in several meetings and for helpful references.  ... 
arXiv:1907.10905v4 fatcat:kzmi3hcmczd3xdqqj4f3s5k5rq
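
The orbit-averaging equivalence stated in this snippet can be written in one line; a minimal sketch, assuming a finite group $G$ and our own notation (the paper treats more general groups):

```latex
% Data augmentation as orbit averaging: for a group G acting on inputs,
% the augmented loss at (x, y) is the average of the original loss over
% the orbit of x. A finite G is assumed here for simplicity.
\[
  \bar{\ell}(\theta; x, y) = \frac{1}{|G|} \sum_{g \in G} \ell(\theta; g \cdot x, y),
\]
% which matches the original objective whenever the data distribution is
% (approximately) invariant under G.
```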

Conditional variance penalties and domain shift robustness

Christina Heinze-Deml, Nicolai Meinshausen
2020 Machine Learning  
changes such as changes in movement and posture.  ...  We group observations if they share the same class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the conditional variance of the prediction or the loss if we  ...  The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.  ... 
doi:10.1007/s10994-020-05924-1 fatcat:6u2j4yvvvrfgxc57c4qrmzfkkq
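
A hedged sketch of the grouping-and-penalty scheme the snippet describes: predictions for observations sharing the same $(Y,\mathrm{ID})$ pair are pushed toward each other by penalizing their conditional variance. The PyTorch framing and all names are illustrative assumptions, not the authors' code:

```python
import torch

def conditional_variance_penalty(logits, group_ids):
    """Penalize prediction variance within groups sharing a (Y, ID) pair.

    logits: (n, c) model outputs; group_ids: (n,) integer group labels,
    one per distinct (Y, ID) combination. Illustrative sketch only.
    """
    penalty = logits.new_zeros(())
    for g in group_ids.unique():
        members = logits[group_ids == g]
        if len(members) > 1:
            # variance of predictions across same-(Y, ID) observations
            penalty = penalty + members.var(dim=0, unbiased=True).sum()
    return penalty

# Total loss would combine this with the usual supervised term, e.g.
# loss = F.cross_entropy(logits, y) + lam * conditional_variance_penalty(logits, gid)
```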

A high-bias, low-variance introduction to Machine Learning for physicists [article]

Pankaj Mehta, Marin Bukov, Ching-Hao Wang, Alexandre G.R. Day, Clint Richardson, Charles K. Fisher, David J. Schwab
2019 arXiv   pre-print
Topics covered in the review include ensemble models, deep learning and neural networks, clustering and data visualization, energy-based models (including MaxEnt models and Restricted Boltzmann Machines  ...  more advanced topics in both supervised and unsupervised learning.  ...  This prevents overfitting by reducing correlations among neurons and reducing the variance in a method similar in spirit to ensemble methods.  ... 
arXiv:1803.08823v2 fatcat:vmtp62jyvjfxhpidpdcozfnza4
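
The mechanism described at the end of the snippet (reducing correlations among neurons, with ensemble-like variance reduction) reads like the review's discussion of dropout; a minimal inverted-dropout sketch, with the rate and framing as assumptions:

```python
import torch

def dropout(x: torch.Tensor, p: float = 0.5, training: bool = True) -> torch.Tensor:
    """Zero each unit independently with probability p, rescaling the
    survivors so the expected activation is unchanged (inverted dropout).
    Each random mask effectively trains a different sub-network, which is
    why the technique is 'similar in spirit to ensemble methods'."""
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1.0 - p)
```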

On the Bias-Variance Tradeoff: Textbooks Need an Update [article]

Brady Neal
2019 arXiv   pre-print
We observe a similar phenomenon beyond supervised learning, with a set of deep reinforcement learning experiments.  ...  We argue that textbook and lecture revisions are in order to convey this nuanced modern understanding of the bias-variance tradeoff.  ...  We thank NVIDIA for donating a DGX-1 computer used in this work.  ... 
arXiv:1912.08286v1 fatcat:amclswbmezdk7iazf2g2bwccd4

Analysis of variance--why it is more important than ever

Andrew Gelman
2005 Annals of Statistics  
Discussion of "Analysis of variance--why it is more important than ever" by A. Gelman [math.ST/0504499]  ...  We thank Hal Stern for help with the linear model formulation; John Nelder, Donald Rubin, Iven Van Mechelen and the editors and referees for helpful comments; and Alan Edelman for the data used in Section  ...  ., Gelman, Carlin, Stern and Rubin (1995)] or data augmentation [see Albert and Chib (1993) and Liu (2002)].  ... 
doi:10.1214/009053604000001048 fatcat:drlm4sjvxbg7rayqzzngsc67qy

Inflation/Output Variance Trade-Offs and Optimal Monetary Policy

Jeffrey C. Fuhrer
1997 Journal of Money, Credit and Banking  
This means the Phillips Curve is statistically valid for describing the Thai inflation only in some cases and sub-samples.  ...  The New Keynesian Phillips Curve which contains the natural rate and the individual behavior, in contrast, holds true for Thailand.  ...  augmented model and the NKPC.  ... 
doi:10.2307/2953676 fatcat:tmtmax6stvd7vkrj7l2qsengqi

Variance Ranking for Multi-Classed Imbalanced Datasets: A Case Study of One-Versus-All

Solomon H. Ebenuwa, Mhd Saeed Sharif, Ameer Al-Nemrat, Ali H. Al-Bayatti, Nasser Alalwan, Ahmed Ibrahim Alzahrani, Osama Alfarraj
2019 Symmetry  
This paper employs the variance ranking technique to deal with the real-world class imbalance problem. We augmented this technique using one-versus-all re-coding of the multi-classed datasets.  ...  In predictions, there are always majority and minority classes, and in most cases it is difficult to capture the members of item belonging to the minority classes.  ...  We now use data in every aspect of our life, from education to health, security, transportation, and beyond.  ... 
doi:10.3390/sym11121504 fatcat:lwl5jbo6hjhxdawnrs545z6lka
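
The one-versus-all re-coding mentioned in the abstract converts one multi-class imbalanced problem into a set of binary ones, each of which can then be handled by the (binary) variance ranking technique. A minimal sketch of the re-coding step only; names are our assumptions:

```python
import numpy as np

def one_versus_all_labels(y: np.ndarray) -> dict[int, np.ndarray]:
    """Re-code a multi-class label vector into one binary vector per class.

    Each class c in turn becomes the positive class (1) against the rest
    (0), so binary imbalance techniques such as variance ranking can be
    applied per class. Illustrative sketch, not the authors' code.
    """
    return {int(c): (y == c).astype(int) for c in np.unique(y)}

# Example: y = [0, 2, 1, 2] -> {0: [1,0,0,0], 1: [0,0,1,0], 2: [0,1,0,1]}
```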

An Exploration of Consistency Learning with Data Augmentation

Connor Shorten, Taghi M. Khoshgoftaar
2022 Proceedings of the ... International Florida Artificial Intelligence Research Society Conference  
Readers will understand the practice of adding a Consistency Loss to improve Robustness in Deep Learning.  ...  However, Supervised Learning still fails to be Robust, making different predictions for original and augmented data points.  ...  Contrastive Learning (Huang et al. 2021) with Augmented Data is designed to improve Robustness in Deep Learning. Robustness is one of the largest outstanding limitations of Deep Learning and  ... 
doi:10.32473/flairs.v35i.130669 fatcat:ymy2wxjknjbrhljxghmk7apscq
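
A hedged sketch of the consistency-loss pattern this entry surveys: a supervised term plus a penalty for making different predictions on an example and its augmented counterpart. The choice of augmentation, divergence, and weight here are assumptions, not the paper's prescription:

```python
import torch
import torch.nn.functional as F

def consistency_training_loss(model, x, y, augment, lam: float = 1.0):
    """Supervised loss plus a penalty for disagreeing on original vs.
    augmented inputs. `augment` and `lam` are illustrative choices."""
    logits = model(x)
    logits_aug = model(augment(x))
    supervised = F.cross_entropy(logits, y)
    # KL divergence between the two predictive distributions serves as
    # the consistency term (other distances are common too).
    consistency = F.kl_div(
        F.log_softmax(logits_aug, dim=-1),
        F.softmax(logits, dim=-1),
        reduction="batchmean",
    )
    return supervised + lam * consistency
```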

A Kernel Theory of Modern Data Augmentation [article]

Tri Dao, Albert Gu, Alexander J. Ratner, Virginia Smith, Christopher De Sa, Christopher Ré
2019 arXiv   pre-print
These frameworks both serve to illustrate the ways in which data augmentation affects the downstream learning model, and the resulting analyses provide novel connections between prior work in invariant  ...  In this paper, we seek to establish a theoretical framework for understanding data augmentation.  ...  resources to data-hungry deep learning models.  ... 
arXiv:1803.06084v2 fatcat:mkj36zcauvg27ian4vcccmrx6y

Last Layer Marginal Likelihood for Invariance Learning [article]

Pola Schwöbel, Martin Jørgensen, Sebastian W. Ober, Mark van der Wilk
2022 arXiv   pre-print
Data augmentation is often used to incorporate inductive biases into models. Traditionally, these are hand-crafted and tuned with cross validation.  ...  We show partial success on standard benchmarks, in the low-data regime and on a medical imaging dataset by designing a custom optimisation routine.  ...  In modern machine learning pipelines invariances are achieved through data augmentation.  ... 
arXiv:2106.07512v2 fatcat:5ojyoa62kra6jlw4hoaqnwqema

On the Benefits of Invariance in Neural Networks [article]

Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, Benjamin Bloem-Reddy
2020 arXiv   pre-print
In this work, we analyze the benefits and limitations of two widely used approaches in deep learning in the presence of invariance: data augmentation and feature averaging.  ...  Many real world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning.  ...  Training Behavior of DA and FA In Section 3, we showed that feature averaging reduces variance in both function outputs and gradient steps when compared to data augmentation.  ... 
arXiv:2005.00178v1 fatcat:45lmcynbjnertgapp6x2ok2yu4
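
The feature averaging (FA) side of the comparison in this snippet can be sketched directly: where data augmentation samples one transform per training step, FA averages the model's outputs over the whole transform set, removing that sampling variance from both outputs and gradients. A minimal sketch; the transform set is an assumption:

```python
import torch

def feature_average(model, x, transforms):
    """Average model outputs over a fixed set of input transformations.

    `transforms` is an illustrative iterable of callables (e.g. shifts or
    flips); averaging over all of them, rather than sampling one as data
    augmentation does, is the variance reduction the snippet refers to.
    """
    outputs = [model(t(x)) for t in transforms]
    return torch.stack(outputs, dim=0).mean(dim=0)
```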

Finite Versus Infinite Neural Networks: an Empirical Study [article]

Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein
2020 arXiv   pre-print
ensembled finite networks have reduced posterior variance and behave more similarly to infinite networks; weight decay and the use of a large learning rate break the correspondence between finite and  ...  Our experiments additionally motivate an improved layer-wise scaling for weight decay which improves generalization in finite-width networks.  ...  Acknowledgments and Disclosure of Funding We thank Yasaman Bahri and Ethan Dyer for discussions and feedback on the project.  ... 
arXiv:2007.15801v2 fatcat:6ervrlzxybgeteh4cpdytu3w2q
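
The ensembling finding quoted above (reduced posterior variance, behavior closer to infinite networks) refers to plain prediction averaging over independently trained finite networks; a minimal sketch, not the paper's evaluation code:

```python
import torch

def ensemble_predict(models, x):
    """Average predictions over independently trained network copies.

    Averaging over random initializations reduces the variance of the
    predictions, which the snippet reports moves finite networks closer
    to their infinite-width counterparts. Sketch only.
    """
    with torch.no_grad():
        preds = torch.stack([m(x) for m in models], dim=0)
    return preds.mean(dim=0)
```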

Towards Deep Cellular Phenotyping in Placental Histology [article]

Michael Ferlaino, Craig A. Glastonbury, Carolina Motta-Mejia, Manu Vatish, Ingrid Granne, Stephen Kennedy, Cecilia M. Lindgren, Christoffer Nellåker
2018 arXiv   pre-print
Furthermore, we learn deep embeddings encoding phenotypic knowledge that are capable of both stratifying five distinct cell populations and learning intraclass phenotypic variance.  ...  In this work, we present an open sourced, computationally tractable deep learning pipeline to analyse placenta histology at the level of the cell.  ...  With larger training sets, we expect our model (and any given deep CNN) to become invariant to this, with learned representations being more specific to precise VAS cellular morphology.  ... 
arXiv:1804.03270v2 fatcat:2nf3tgk2g5do7hsrvl2n7zf47u

Learning in the Machine: To Share or Not to Share? [article]

Jordan Ott, Erik Linstead, Nicholas LaHaye, Pierre Baldi
2019 arXiv   pre-print
Under the assumption of translationally augmented data, Free Convolutional Networks learn translationally invariant representations that yield an approximate form of weight sharing.  ...  Furthermore, Free Convolutional Networks match the performance observed in standard architectures when trained using properly translated data (akin to video).  ...  This result indicates that only translationally augmented training data allows FCNs to learn translationally invariant representations.  ... 
arXiv:1909.11483v2 fatcat:62jvw6jnajgzvf6xal7mu7xsym
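
"Translationally augmented data" here means training inputs presented at random spatial offsets; a minimal sketch using circular shifts for self-containment (real pipelines usually pad and crop instead), with the shift range as an assumption:

```python
import torch

def random_translate(x: torch.Tensor, max_shift: int = 4) -> torch.Tensor:
    """Randomly shift an image batch (n, c, h, w) along both spatial axes.

    Circular shifts via torch.roll keep the sketch self-contained; the
    shift range is an arbitrary choice. Training on such shifted copies
    is what lets weight-free architectures learn translation invariance.
    """
    dh = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    dw = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(x, shifts=(dh, dw), dims=(2, 3))
```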

GAN Augmentation: Augmenting Training Data using Generative Adversarial Networks [article]

Christopher Bowles, Liang Chen, Ricardo Guerrero, Paul Bentley, Roger Gunn, Alexander Hammers, David Alexander Dickie, Maria Valdés Hernández, Joanna Wardlaw, Daniel Rueckert
2018 arXiv   pre-print
One of the biggest issues facing the use of machine learning in medical imaging is the lack of availability of large, labelled datasets.  ...  The limited amount of training data can inhibit the performance of supervised machine learning algorithms which often need very large quantities of data on which to train to avoid overfitting.  ...  Introduction Data augmentation is commonly used by many deep learning approaches in the presence of limited training data.  ... 
arXiv:1810.10863v1 fatcat:547jbgy4ubh77amr4jaebsazxa
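
The GAN augmentation recipe the snippet describes boils down to mixing generator samples into the labelled training pool; a hedged sketch of that generic pattern in a classification framing for simplicity (the paper's setting is medical imaging), where `generator`, its `latent_dim` attribute, and the fixed `label` are assumptions:

```python
import torch

def augment_with_gan(real_x, real_y, generator, label, n_synth: int):
    """Append n_synth generator samples, tagged with a class label, to a
    real training batch. `generator` and `latent_dim` are hypothetical
    names; shapes of real and synthetic samples must match. Sketch only.
    """
    z = torch.randn(n_synth, generator.latent_dim)
    synth_x = generator(z)
    synth_y = torch.full((n_synth,), label, dtype=real_y.dtype)
    return torch.cat([real_x, synth_x]), torch.cat([real_y, synth_y])
```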
Showing results 1 — 15 out of 7,588 results