Measuring Robustness to Natural Distribution Shifts in Image Classification
[article]
2020
arXiv
pre-print
We study how robust current ImageNet models are to distribution shifts arising from natural variations in datasets. ...
Moreover, most current techniques provide no robustness to the natural distribution shifts in our testbed. ...
Acknowledgements We would like to thank Logan Engstrom, Justin Gilmer, Moritz Hardt, Daniel Kang, Jerry Li, Percy Liang, Nelson Liu, John Miller, Preetum Nakkiran, Rebecca Roelofs, Aman Sinha, Jacob Steinhardt ...
arXiv:2007.00644v2
fatcat:ef6s3w4ignam7a2cvbvkps6ixm
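The testbed this entry describes boils down to comparing a model's accuracy on the standard test set with its accuracy on a naturally shifted one. A minimal sketch in PyTorch, assuming standard dataloaders for both test sets (the paper's full protocol, spanning 204 models and fitted accuracy trends, is far more involved):

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total

def robustness_gap(model, standard_loader, shifted_loader):
    # e.g. ImageNet validation set vs. a natural shift such as ImageNetV2
    acc_std = accuracy(model, standard_loader)
    acc_shift = accuracy(model, shifted_loader)
    return acc_std, acc_shift, acc_std - acc_shift
```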
Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)
[article]
2022
arXiv
pre-print
Contrastively trained image-text models such as CLIP, ALIGN, and BASIC have demonstrated unprecedented robustness to multiple challenging natural distribution shifts. ...
Since these image-text models differ from previous training approaches in several ways, an important question is what causes the large robustness gains. ...
Acknowledgements We would like to thank Wieland Brendel, Nicholas Carlini, Yair Carmon, Rahim Entezari, Tatsunori Hashimoto, Jong Wook Kim, Hongseok Namkoong, Alec Radford, and Rohan Taori for valuable ...
arXiv:2205.01397v1
fatcat:qrjrygxpwzhupcm26x4omhplbe
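Zero-shot classification is the mechanism behind these robustness evaluations: class names are embedded as text prompts and images are matched to them by cosine similarity. A hedged sketch, assuming a CLIP-style interface with `encode_image`/`encode_text` and a matching `tokenizer`; exact loading code depends on the library used:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_predict(model, tokenizer, images, class_names):
    prompts = [f"a photo of a {c}" for c in class_names]
    text_feats = F.normalize(model.encode_text(tokenizer(prompts)), dim=-1)
    img_feats = F.normalize(model.encode_image(images), dim=-1)
    logits = img_feats @ text_feats.T   # cosine similarity per class
    return logits.argmax(dim=-1)        # predicted class indices
```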
CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters
[article]
2022
arXiv
pre-print
shifts in image data. ...
In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of dominantly used 3x3 convolution filter kernels. ...
A benchmark for distribution shifts that arise in real-world applications is provided in [34], and [35] measured the robustness of 204 ImageNet1k models to natural distribution shifts. ...
arXiv:2203.15331v2
fatcat:rs3xn665tvcmbmnbvccxcgqgvq
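Collecting the 3x3 kernels whose distributions the paper studies takes only a few lines. A sketch using a torchvision ResNet-18 as the trained CNN (any model with 3x3 convolutions works; a recent torchvision with the weights API is assumed):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
kernels = [
    m.weight.detach().reshape(-1, 9)    # one row per 3x3 filter
    for m in model.modules()
    if isinstance(m, nn.Conv2d) and m.kernel_size == (3, 3)
]
kernels = torch.cat(kernels)            # (num_filters, 9)
print(kernels.shape, kernels.mean().item(), kernels.std().item())
```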
Natural Adversarial Examples
[article]
2021
arXiv
pre-print
However, we find that improvements to computer vision architectures provide a promising path towards robust models. ...
We also curate an adversarial out-of-distribution detection dataset called ImageNet-O, which is the first out-of-distribution detection dataset created for ImageNet models. ...
Robustness to Shifted Input Distributions. ...
arXiv:1907.07174v4
fatcat:ep7vjwxw5ve7tlfzgdbizs2k24
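For the ImageNet-O detection task mentioned above, a common baseline scores inputs by their maximum softmax probability: confidently classified images are treated as in-distribution, low-confidence ones as out-of-distribution candidates. A minimal sketch (MSP is a standard baseline, not necessarily the paper's strongest detector):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_scores(model, images):
    # high score = looks in-distribution; low score = OOD candidate
    return F.softmax(model(images), dim=1).max(dim=1).values
```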
Measuring Robustness in Deep Learning Based Compressive Sensing
[article]
2021
arXiv
pre-print
sensitive to small, yet adversarially-selected perturbations, (ii) may perform poorly under distribution shifts, and (iii) may fail to recover small but important features in an image. ...
In order to understand the sensitivity to such perturbations, in this work, we measure the robustness of different approaches for image reconstruction including trained and un-trained neural networks as ...
“Measuring robustness to natural distribution shifts in image classification”. In: Advances in Neural Information Processing Systems (NeurIPS), 2020. ...
arXiv:2102.06103v2
fatcat:57p7hrq2abeqbb4x2ufntj2274
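The robustness measurement in (i) asks how much a reconstruction changes when the measurements are slightly perturbed. A hedged sketch using a random perturbation (the paper also considers adversarially optimized ones); `reconstruct` stands in for any trained or un-trained reconstruction network:

```python
import torch

def reconstruction_sensitivity(reconstruct, A, x, eps=1e-2):
    y = A @ x                             # clean measurements
    e = torch.randn_like(y)
    e = eps * y.norm() * e / e.norm()     # perturbation of norm eps * ||y||
    x_hat, x_hat_pert = reconstruct(y), reconstruct(y + e)
    return ((x_hat_pert - x_hat).norm() / x_hat.norm()).item()
```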
Using Synthetic Corruptions to Measure Robustness to Natural Distribution Shifts
[article]
2021
arXiv
pre-print
Synthetic corruptions gathered into a benchmark are frequently used to measure neural network robustness to distribution shifts. ...
However, robustness to synthetic corruption benchmarks is not always predictive of robustness to distribution shifts encountered in real-world applications. ...
Using synthetic corruptions to measure robustness to natural distribution shifts. ...
arXiv:2107.12052v2
fatcat:ryeohenmqbda7fj7aclpd6uiya
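A synthetic-corruption score of the kind these benchmarks aggregate can be computed by corrupting inputs at several severities and recording the accuracy at each. A toy sketch with Gaussian noise (real benchmarks such as ImageNet-C average over many corruption types; images assumed in [0, 1]):

```python
import torch

@torch.no_grad()
def corrupted_accuracy(model, images, labels, sigmas=(0.02, 0.04, 0.08)):
    accs = []
    for sigma in sigmas:                  # increasing severity
        noisy = (images + sigma * torch.randn_like(images)).clamp(0, 1)
        accs.append((model(noisy).argmax(dim=1) == labels).float().mean().item())
    return accs
```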
The Cells Out of Sample (COOS) dataset and benchmarks for measuring out-of-sample generalization of image classifiers
[article]
2020
arXiv
pre-print
These baselines highlight the challenges of covariate shifts in image data, and establish metrics for improving the generalization capacity of image classifiers. ...
Microscopy images provide a standardized way to measure the generalization capacity of image classifiers, as we can image the same classes of objects under increasingly divergent, but controlled factors ...
Here, we sought to create a standardized dataset for measuring the robustness of image classifiers under various degrees of covariate shift. ...
arXiv:1906.07282v3
fatcat:uupktyzfffafvjx4kmvqwdkrqy
BREEDS: Benchmarks for Subpopulation Shift
[article]
2020
arXiv
pre-print
Finally, we utilize these benchmarks to measure the sensitivity of standard model architectures as well as the effectiveness of off-the-shelf train-time robustness interventions. ...
We develop a methodology for assessing the robustness of models to subpopulation shift---specifically, their ability to generalize to novel data subpopulations that were not observed during training. ...
Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. ...
arXiv:2008.04859v1
fatcat:5pl5t2dscrexvbypfulb4uf7kq
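The BREEDS methodology builds its benchmarks from a class hierarchy: each superclass keeps disjoint subclasses for training and testing, so good test accuracy requires generalizing to unseen subpopulations. A hedged sketch with a placeholder hierarchy (BREEDS derives its superclasses from the ImageNet/WordNet hierarchy):

```python
import random

def subpopulation_split(hierarchy, seed=0):
    rng = random.Random(seed)
    source, target = {}, {}
    for superclass, subclasses in hierarchy.items():
        subs = list(subclasses)
        rng.shuffle(subs)
        half = len(subs) // 2
        source[superclass] = subs[:half]   # subpopulations seen in training
        target[superclass] = subs[half:]   # only seen at test time
    return source, target

# placeholder hierarchy for illustration only
hierarchy = {
    "dog": ["beagle", "husky", "collie", "pug"],
    "cat": ["siamese", "persian", "tabby", "sphynx"],
}
source, target = subpopulation_split(hierarchy)
```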
Human uncertainty makes classification more robust
[article]
2019
arXiv
pre-print
out-of-training-distribution test datasets, and confers robustness to adversarial attacks. ...
The classification performance of deep neural networks has begun to asymptote at near-perfect levels. ...
On natural-image classification benchmarks, state-of-the-art convolutional neural network (CNN) models have been said to equal or even surpass human performance, as measured in terms of "top ...
arXiv:1908.07086v1
fatcat:jekrkkriyvh3diistypiekqhmq
Human Uncertainty Makes Classification More Robust
2019
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
out-of-training-distribution test datasets, and confers robustness to adversarial attacks. ...
The classification performance of deep neural networks has begun to asymptote at near-perfect levels. ...
On natural-image classification benchmarks, state-of-the-art convolutional neural network (CNN) models have been said to equal or even surpass human performance, as measured in terms of "top ...
doi:10.1109/iccv.2019.00971
dblp:conf/iccv/PetersonBGR19
fatcat:igfrwkgcwrfirk6yz7bgog7vxi
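The core idea in both records above is to train against the full distribution of human labels rather than a one-hot target. A minimal sketch of the soft-label cross-entropy this implies, assuming per-image human vote frequencies in the style of the paper's CIFAR-10H annotations:

```python
import torch.nn.functional as F

def soft_label_loss(logits, soft_labels):
    # cross-entropy against the human label distribution instead of one-hot
    return -(soft_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```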
A Closer Look at Domain Shift for Deep Learning in Histopathology
[article]
2019
arXiv
pre-print
Domain shift is a significant problem in histopathology. ...
The results show how learning is heavily influenced by the preparation of training data, and that the latent representation used to do classification is sensitive to changes in data distribution, especially ...
We are also able to demonstrate a correlation between the representation shift and the drop in classification accuracy on images from a new domain. ...
arXiv:1909.11575v2
fatcat:a4ycqalmejgnhoxi2fywfh6mey
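One simple instantiation of the "representation shift" the abstract correlates with accuracy drops is the distance between average feature embeddings of the two domains. A hedged sketch (the paper's exact measure may differ; `features` is any penultimate-layer embedding function):

```python
import torch

@torch.no_grad()
def representation_shift(features, source_images, target_images):
    mu_src = features(source_images).mean(dim=0)
    mu_tgt = features(target_images).mean(dim=0)
    return (mu_src - mu_tgt).norm().item()
```

Correlating this quantity with the accuracy drop on the target domain reproduces the kind of analysis the abstract describes.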
Towards robust cellular image classification: theoretical foundations for wide-angle scattering pattern analysis
2010
Biomedical Optics Express
when using standard image classification methods. ...
Clinical analysis of light scattering from cellular organelle distributions can help identify disease and predict a patient's response to treatment. ...
Acknowledgements This work was made possible by financial support from the Natural Sciences and Engineering Research Council (NSERC) and the Canadian Institute for Photonic Innovations (CIPI). ...
doi:10.1364/boe.1.001225
pmid:21258544
pmcid:PMC3018092
fatcat:t6l6jhv2sbaavgcbvlt5ijbl74
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
[article]
2021
arXiv
pre-print
We find improvements in artificial robustness benchmarks can transfer to real-world distribution shifts, contrary to claims in prior work. ...
We find that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work. ...
Networks are robust to some natural distribution shifts but are substantially more sensitive to the geographic shift. Here data augmentation hardly helps. ...
arXiv:2006.16241v3
fatcat:pjn4cxnb3rbnfputwmkyhua2ku
Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data
[article]
2021
arXiv
pre-print
This assumption doesn't generally hold true in a natural setting. Usually, the deployment data is subject to various types of distributional shifts. ...
The drop in a model's performance is proportional to the magnitude of this shift in the dataset's distribution. ...
Acknowledgments We would like to thank Mars Rover Manipal for providing the necessary resources for our research. ...
arXiv:2111.04665v2
fatcat:ixcrpwagarf5vg32qr2qq34xji
Do Image Classifiers Generalize Across Time?
[article]
2019
arXiv
pre-print
We study the robustness of image classifiers to temporal perturbations derived from videos. ...
Additionally, we evaluate three detection models and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points. ...
Acknowledgements We thank Rohan Taori for providing models trained for robustness to image corruptions, and Pavel Tokmakov for his help with training detection models on ImageNet-Vid. ...
arXiv:1906.02168v3
fatcat:trmemkurqndffb7i553eaknfma
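A temporal-perturbation check of the kind this entry describes counts a prediction as robust only if the classifier is also correct on neighboring frames of the same video (a pm-k style criterion). A minimal sketch:

```python
import torch

@torch.no_grad()
def temporally_robust(model, frames, label):
    # frames: (2k+1, C, H, W) window of consecutive frames around an anchor
    preds = model(frames).argmax(dim=1)
    return bool((preds == label).all())
```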
Showing results 1–15 of 101,400.