Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness
[article]
2021
arXiv
pre-print
Adversarial attacks have been shown to be highly effective at degrading the performance of deep neural networks (DNNs). The most prominent defense is adversarial training, a method for learning a robust model. Nevertheless, adversarial training does not make DNNs immune to adversarial perturbations. We propose a novel solution by adopting the recently suggested Predictive Normalized Maximum Likelihood. Specifically, our defense performs adversarial targeted attacks according to different hypotheses, where each hypothesis assumes a specific label for the test sample. Then, by comparing the hypothesis probabilities, we predict the label. Our refinement process corresponds to recent findings on adversarial subspace properties. We extensively evaluate our approach on 16 adversarial attack benchmarks using ResNet-50, WideResNet-28, and a 2-layer ConvNet trained with ImageNet, CIFAR10, and MNIST, showing a significant improvement of up to 5.7%, 3.7%, and 0.6% respectively.
arXiv:2109.01945v1
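The hypothesis-testing procedure this abstract describes is simple enough to sketch. Below is a minimal outline of the idea in Python; the model, the targeted_attack callable, and all parameter names are illustrative assumptions, not the authors' released implementation.

    import torch
    import torch.nn.functional as F

    def pnml_predict(model, x, num_classes, targeted_attack):
        """Refine a prediction by testing every label hypothesis.

        For each candidate label y, perturb x *toward* y with a targeted
        attack and record the probability the model then assigns to y;
        normalizing these per-hypothesis probabilities gives the
        pNML-style prediction the abstract describes.
        """
        probs = torch.zeros(num_classes)
        for y in range(num_classes):
            target = torch.tensor([y])
            x_adv = targeted_attack(model, x, target)     # move x toward class y
            with torch.no_grad():
                p = F.softmax(model(x_adv), dim=1)[0, y]  # probability of hypothesis y
            probs[y] = p
        return probs / probs.sum()                        # normalized hypothesis scores

The loop runs one targeted attack per class, so the refinement costs num_classes attack passes per test sample.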
Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
[article]
2021
arXiv
pre-print
Bibas et al. (2019b) and Bibas and Feder (2021) showed the pNML solution for linear regression. ...
The pNML was previously developed for linear regression (Bibas et al., 2019b) and was evaluated empirically for DNNs (Fu and Levine, 2021; Bibas et al., 2019a). ...
arXiv:2110.09246v1
Balancing Specialization, Generalization, and Compression for Detection and Tracking
[article]
2019
arXiv
pre-print
We propose a method for specializing deep detectors and trackers to restricted settings. Our approach is designed with the following goals in mind: (a) improving accuracy in restricted domains; (b) preventing overfitting to new domains and forgetting of generalized capabilities; (c) aggressive model compression and acceleration. To this end, we propose a novel loss that balances compression and acceleration of a deep learning model against loss of generalization capabilities. We apply our method to the existing tracker and detector models. We report detection results on the VIRAT and CAVIAR data sets. These results show our method to offer unprecedented compression rates along with improved detection. We apply our loss for tracker compression at test time, as it processes each video. Our tests on the OTB2015 benchmark show that applying compression during test time actually improves tracking performance.
arXiv:1909.11348v1
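The "novel loss" is described only at a high level here, so the following is a hedged sketch of what such a composite objective can look like: a target-domain task loss, a distillation term that guards against forgetting the general teacher, and a sparsity term that enables compression. The weighting scheme and the specific terms are assumptions for illustration, not the paper's formulation.

    import numpy as np

    def balanced_loss(task_loss, student_logits, teacher_logits, weights,
                      lam_distill=0.5, lam_sparse=1e-4):
        """Composite objective: specialize, don't forget, stay compressible."""
        # Distillation: cross-entropy against the general teacher's soft labels,
        # discouraging overfitting to the restricted domain.
        t = np.exp(teacher_logits - teacher_logits.max())
        t /= t.sum()
        s = np.exp(student_logits - student_logits.max())
        s /= s.sum()
        distill = -(t * np.log(s + 1e-12)).sum()
        # L1 sparsity: pushes weights toward zero so the model can be pruned.
        sparsity = sum(np.abs(w).sum() for w in weights)
        return task_loss + lam_distill * distill + lam_sparse * sparsity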
Learning Rotation Invariant Features for Cryogenic Electron Microscopy Image Reconstruction
[article]
2021
arXiv
pre-print
Cryo-Electron Microscopy (Cryo-EM) is a Nobel prize-winning technology for determining the 3D structure of particles at near-atomic resolution. A fundamental step in recovering the 3D single-particle structure is to align its 2D projections; thus, the construction of a canonical representation with a fixed rotation angle is required. Most approaches use discrete clustering, which fails to capture the continuous nature of image rotation; others suffer from low-quality image reconstruction. We propose a novel method that leverages recent developments in generative adversarial networks. We introduce an encoder-decoder with a rotation angle classifier. In addition, we utilize a discriminator on the decoder output to minimize the reconstruction error. We demonstrate our approach on the Cryo-EM 5HDB and rotated MNIST datasets, showing substantial improvement over recent methods.
arXiv:2101.03549v1
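As a rough illustration of the architecture this abstract outlines (an encoder-decoder, a rotation-angle classifier on the latent code, and a discriminator on the reconstruction), here is a hedged PyTorch sketch; the layer sizes, the number of angle bins, and the fully-connected layers are placeholder assumptions, not the paper's architecture.

    import torch.nn as nn

    class RotationAutoencoder(nn.Module):
        def __init__(self, img_dim=64 * 64, code_dim=128, angle_bins=36):
            super().__init__()
            self.encoder = nn.Sequential(nn.Flatten(),
                                         nn.Linear(img_dim, code_dim), nn.ReLU())
            self.angle_head = nn.Linear(code_dim, angle_bins)  # rotation classifier
            self.decoder = nn.Sequential(nn.Linear(code_dim, img_dim), nn.Sigmoid())

        def forward(self, x):
            z = self.encoder(x)                # latent code for the particle image
            angle_logits = self.angle_head(z)  # continuous rotation, binned
            recon = self.decoder(z)            # canonical fixed-angle reconstruction
            return recon, angle_logits

    # Adversarial critic on decoder outputs, used to sharpen reconstructions.
    discriminator = nn.Sequential(nn.Linear(64 * 64, 1))

Training would combine a reconstruction loss, a classification loss on the angle bins, and the usual GAN losses through the discriminator.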
A New Look at an Old Problem: A Universal Learning Approach to Linear Regression
[article]
2019
arXiv
pre-print
Linear regression is a classical paradigm in statistics. A new look at it is provided via the lens of universal learning. In applying universal learning to linear regression, the hypothesis class represents the label y ∈ R as a linear combination of the feature vector, x^T θ with x ∈ R^M, within a Gaussian error. The Predictive Normalized Maximum Likelihood (pNML) solution for universal learning of individual data can be expressed analytically in this case, as well as its associated learnability measure. Interestingly, the situation where the number of parameters M may even be larger than the number of training samples N can be examined. As expected, in this case learnability cannot be attained in every situation; nevertheless, if the test vector resides mostly in a subspace spanned by the eigenvectors associated with the large eigenvalues of the empirical correlation matrix of the training data, linear regression can generalize despite the fact that it uses an "over-parametrized" model. We demonstrate the results with a simulation of fitting a polynomial to data with a possibly large polynomial degree.
arXiv:1905.04708v1
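The abstract states that the pNML and its learnability measure have an analytic form here. As a numerical sketch, assuming the regret takes the form log(1 + xᵀ(XᵀX)⁻¹x) that later entries on this page also cite for the under-parameterized case (treat the exact constants as a recollection of the cited works, not a quotation), the polynomial-fitting demonstration looks roughly like:

    import numpy as np

    def pnml_regret(X, x):
        # Learnability measure: small when the test feature vector lies in the
        # subspace spanned by well-represented directions of the training data.
        P = np.linalg.pinv(X.T @ X)   # pinv also covers the over-parameterized case
        return np.log(1.0 + x @ P @ x)

    deg = 8
    t_train = np.linspace(-1, 1, 20)
    X = np.vander(t_train, deg)        # polynomial features of the training data
    for t in (0.0, 0.9, 1.5):          # interpolation vs. extrapolation
        x = np.vander([t], deg)[0]
        print(f"t={t:+.1f}  regret={pnml_regret(X, x):.3f}")

Regret grows sharply for the extrapolated test point, matching the eigenvector condition quoted above.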
Deep pNML: Predictive Normalized Maximum Likelihood for Deep Neural Networks
[article]
2020
arXiv
pre-print
The pNML has been derived for several model classes in related works, such as the barrier (or 1-D perceptron) model (Fogel and Feder, 2018b) and, in Bibas et al. (2019), the linear regression problem ...
arXiv:1904.12286v2
Distribution Free Uncertainty for the Minimum Norm Solution of Over-parameterized Linear Regression
[article]
2021
arXiv
pre-print
Bibas et al. (2019b) derived the pNML regret for under-parameterized linear regression. Theorem 2 (Bibas et al. (2019b)). ...
These works dealt with the 1D barrier (Fogel and Feder, 2018), linear regression (Bibas et al., 2019b), and the last layer of DNNs (Bibas et al., 2019a). ...
arXiv:2102.07181v2
Universal Supervised Learning for Individual Data
[article]
2018
arXiv
pre-print
Acknowledgments: Koby Bibas is acknowledged for discussions and for implementing and analyzing the pNML in various problems, from linear regression to deep neural networks. ...
The joint work with Koby appears in Bibas et al. (2018a,b). We also acknowledge the discussion with Amichai Painsky regarding section 4, and the related work Painsky and Feder (2018). ...
These works are reported in Bibas et al. (2018a,b). In our opinion, it will be interesting to find under what "local" conditions on the model class the pNML regret is small. ...
arXiv:1812.09520v1
INDUSTRY 4.0 AND PRODUCTIVITY: AN EXPLORATORY CASE STUDY IN THE TURKISH WHITE GOODS SECTOR
2021
Verimlilik Dergisi
Indeed, many reports and academic studies likewise refer to Industry 4.0 as a brand (Glas and Kleeman, 2016; Huchler, 2017; Bíba, 2018; Germany Trade and Invest, 2018; Kheyfets and Chernova, 2019 ...
X, an SME, has no department dedicated to technology management, but the company receives support from its sister company, which develops Industry 4.0 solutions. ...
doi:10.51551/verimlilik.988466
A Collection of Devotional Objects from Excavations at the Cemetery of St. Barbara's Church in the Old Town of Częstochowa
2018
Acta Universitatis Lodziensis Folia Archaeologica
, a woman's skull ...
At this point, I would like to thank Dr. ...
R S N S M V [S M Q] L [I] V [B] - IHS VADE RETRO SATANO NON SVADE MIHI VANA SVNT MALA QVE LIBAS IPSE VENENA BIBAS ['Step back, Satan; never tempt me with vanities. What you offer is evil; drink the poison yourself.']. ...
doi:10.18778/0208-6034.33.11
The morphology and phonology of metathesis in Amarasi
2017
Morphology
VαVαC# (U-form → M-form): nima → niim 'five'; n-biba → n-biib 'massages'; ʔbeba → ʔbeeb 'palm leaves'; n-nena → n-neen 'hears'; n-sosa → n-soos 'buys'; na-tona → na-toon 'tells'; n-nuka → n-nuuk ...
ʔnenuʔ → ʔnen~nenuʔ 'turn'; kberoʔ → kber~beroʔ 'move'; msena → msen~sena 'full, satiated'; thoe → tho~hoe 'inundate, bless'; ʔroo → ʔro~roo 'far, distant'; maʔfenaʔ → maʔfen~fenaʔ 'heavy'; taikobi → taikob~kobi ...
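The surface pattern visible in these snippets can be shown with a toy function: in the quoted U-form/M-form pairs, a stem-final ...V₁CV₂ sequence surfaces as ...V₁V₁C (nima → niim, n-sosa → n-soos). This sketches only the assimilated pattern of the quoted forms; the paper's actual analysis of Amarasi metathesis covers far more than this.

    def m_form(u_form):
        # Toy rule for the quoted forms: the final CV metathesizes to VC and
        # the vowel surfaces as a copy of the preceding one (nima -> niim).
        return u_form[:-2] + u_form[-3] + u_form[-2]

    for w in ("nima", "n-biba", "n-nena", "n-sosa", "na-tona"):
        print(w, "->", m_form(w))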
doi:10.1007/s11525-017-9314-y