6,801 Hits in 3.0 sec

Do Neural Networks Compress Manifolds Optimally? [article]

Sourbh Bhadane, Aaron B. Wagner, Johannes Ballé
2022 arXiv   pre-print
Artificial Neural-Network-based (ANN-based) lossy compressors have recently obtained striking results on several sources.  ...  In contrast, we determine the optimal entropy-distortion tradeoffs for two low-dimensional manifolds with circular structure and show that state-of-the-art ANN-based compressors fail to optimally compress  ...  The analysis and synthesis transforms are fully-connected feedforward neural networks with 2 hidden layers containing 100 neurons each.  ... 
arXiv:2205.08518v1 fatcat:rh7mel342fdyjixq2fs5od2kpi
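The snippet only describes the transforms at a high level (fully-connected analysis/synthesis networks with two hidden layers of 100 neurons). The sketch below is a minimal PyTorch rendering of such a pair on a toy circular-manifold source; the activation choice, bottleneck size, and rounding-as-quantization step are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class AnalysisTransform(nn.Module):
    """Maps a source vector to a low-dimensional latent (encoder side)."""
    def __init__(self, dim_in, dim_latent, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_latent),
        )
    def forward(self, x):
        return self.net(x)

class SynthesisTransform(nn.Module):
    """Maps a (quantized) latent back to the source space (decoder side)."""
    def __init__(self, dim_latent, dim_out, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_latent, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_out),
        )
    def forward(self, y):
        return self.net(y)

# Toy source: points on a circle in R^2, i.e. a 1-D manifold with circular structure.
theta = 2 * torch.pi * torch.rand(1024, 1)
x = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)

enc, dec = AnalysisTransform(2, 1), SynthesisTransform(1, 2)
x_hat = dec(torch.round(enc(x)))             # rounding stands in for quantization
print(torch.mean((x - x_hat) ** 2).item())   # distortion before any training
```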

Optimal Stable Nonlinear Approximation [article]

Albert Cohen, Ronald DeVore, Guergana Petrova, Przemyslaw Wojtaszczyk
2020 arXiv   pre-print
A measure of optimal performance, called stable manifold widths, for approximating a model class K in a Banach space X by stable manifold methods is introduced.  ...  The effects of requiring stability in the settings of deep learning and compressed sensing are discussed.  ...  This is discussed in §6 for compressed sensing and neural network approximation.  ... 
arXiv:2009.09907v1 fatcat:wbkxyqw6cffgnit2ybulrjfpfa

Extrapolative Bayesian Optimization with Gaussian Process and Neural Network Ensemble Surrogate Models

Yee-Fun Lim, Chee Koon Ng, U.S. Vaitesswar, Kedar Hippalgaonkar
2021 Advanced Intelligent Systems  
Herein, various surrogate models for BO, including GPs and neural network ensembles (NNEs), are investigated.  ...  Two materials datasets of different complexity and with different properties are used to compare the performance of GP and NNE: the first is the compressive strength of concrete (8 inputs and 1 target), and  ...  [30] In yet another implementation, Abolhasani et al. utilized randomly generated neural network architectures to produce an ensemble of neural networks as the surrogate model.  ... 
doi:10.1002/aisy.202100101 fatcat:tnmlfhgsyven5lgjywk7oexpdq
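A rough sketch of the neural-network-ensemble surrogate idea mentioned in the snippet: train several small regressors on the same data and use the spread of their predictions as an uncertainty estimate for an acquisition function. The synthetic 8-input data, network sizes, and UCB acquisition below are placeholders chosen for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for a materials dataset: 8 input features, 1 target
# (mirroring the "8 inputs and 1 target" concrete-strength setup in the snippet).
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 8))
y = X @ rng.uniform(size=8) + 0.1 * rng.standard_normal(200)

# Ensemble of small MLPs; varying the seed (and, optionally, the architecture)
# yields a spread of predictions usable as a BO uncertainty estimate.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32, 32), random_state=seed,
                 max_iter=2000).fit(X, y)
    for seed in range(5)
]

X_query = rng.uniform(size=(10, 8))
preds = np.stack([m.predict(X_query) for m in ensemble])   # shape (5, 10)
mu, sigma = preds.mean(axis=0), preds.std(axis=0)

# Simple upper-confidence-bound acquisition over the candidate pool.
ucb = mu + 1.96 * sigma
print("next candidate index:", int(np.argmax(ucb)))
```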

Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization [article]

Avishek Joey Bose, Parham Aarabi
2018 arXiv   pre-print
In this paper, we propose a novel strategy to craft adversarial examples by solving a constrained optimization problem using an adversarial generator network.  ...  In a different experiment, also on 300-W, we demonstrate the robustness of our attack to a JPEG compression based defense: a typical JPEG compression level of 75% reduces the effectiveness of our attack from  ...  One theory for a JPEG based defense is that adversarial examples lie off the data manifold on which neural networks are so successful, and by using JPEG compression the adversarial examples are projected  ... 
arXiv:1805.12302v1 fatcat:a4sy3gswbjb23kc6h55qrj5qkq
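The JPEG-based defense described in the snippet amounts to re-encoding the input at a moderate quality level before it reaches the detector. A minimal sketch, assuming PIL for the JPEG round-trip; the function name and the random "image" are hypothetical, and in practice the re-encoded array would be passed to the face detector under attack.

```python
import io
import numpy as np
from PIL import Image

def jpeg_defense(image_array, quality=75):
    """Re-encode an image through JPEG at the given quality.

    The intuition sketched in the snippet: adversarial perturbations tend to
    lie off the natural-image manifold, and lossy JPEG re-encoding pushes the
    input back toward it before the detector processes it.
    """
    img = Image.fromarray(image_array.astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

# Toy usage on a random array standing in for an adversarial input image.
x_adv = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
x_def = jpeg_defense(x_adv, quality=75)
print(x_def.shape, x_def.dtype)
```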

Overcoming Catastrophic Forgetting via Direction-Constrained Optimization [article]

Yunfei Teng, Anna Choromanska, Murray Campbell, Songtao Lu, Parikshit Ram, Lior Horesh
2022 arXiv   pre-print
This paper studies a new design of the optimization algorithm for training deep learning models with a fixed architecture of the classification network in a continual learning framework.  ...  Furthermore, in order to control the memory growth as the number of tasks increases, we propose a memory-efficient version of our algorithm called compressed DCO (DCO-COMP) that allocates a memory of fixed  ...  ., network parameters) that correspond to good performance of the network on all encountered tasks determine a common manifold of plausible solutions for all these optimization problems.  ... 
arXiv:2011.12581v2 fatcat:vvdhj72d7vggljvoyzhdh3w5wy

ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs [article]

Christopher A. Strong, Sydney M. Katz, Anthony L. Corso, Mykel J. Kochenderfer
2022 arXiv   pre-print
We demonstrate the versatility of the optimizer in analyzing networks by projecting onto the range of a generative adversarial network and visualizing the differences between a compressed and uncompressed  ...  Using ZoPE, we observe a 25× speedup on property 1 of the ACAS Xu neural network verification benchmark compared to several state-of-the-art verifiers, and an 85× speedup on a set of linear optimization  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, any NASA entity, or the National Science  ... 
arXiv:2106.05325v2 fatcat:psvcsb42xjbw5at4nm5fp6kwfm

Optimizing Grouped Convolutions on Edge Devices

Perry Gibson, Jose Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
2020 2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP)  
When deploying a deep neural network on constrained hardware, it is possible to replace the network's standard convolutions with grouped convolutions.  ...  However, current implementations of grouped convolutions in modern deep learning frameworks are far from performing optimally in terms of speed.  ...  The opinions expressed and arguments employed herein do not necessarily reflect the official views of these funding bodies.  ... 
doi:10.1109/asap49362.2020.00039 dblp:conf/asap/GibsonCTCOS20 fatcat:gt2mb2n3sbgj7hbddbbsqkn5je
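Replacing a standard convolution with a grouped convolution, as the abstract describes, is a one-line change in most frameworks; the grouping splits the channels into independent subsets and cuts parameters and FLOPs roughly by the group count. A small PyTorch illustration (the layer sizes and group count are arbitrary examples, not the paper's configurations):

```python
import torch
import torch.nn as nn

# Standard convolution: every output channel sees every input channel.
standard = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1)

# Grouped convolution: channels are split into g independent groups,
# reducing weights and FLOPs by roughly a factor of g (here g = 8).
grouped = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3,
                    padding=1, groups=8)

x = torch.randn(1, 64, 32, 32)
print(standard(x).shape, grouped(x).shape)   # same output shape

count = lambda m: sum(p.numel() for p in m.parameters())
print("params:", count(standard), "vs", count(grouped))
```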

Optimizing Grouped Convolutions on Edge Devices [article]

Perry Gibson, José Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
2020 arXiv   pre-print
When deploying a deep neural network on constrained hardware, it is possible to replace the network's standard convolutions with grouped convolutions.  ...  However, current implementations of grouped convolutions in modern deep learning frameworks are far from performing optimally in terms of speed.  ...  • We evaluate the performance of GSPC using different network models on the CPU of several edge devices. • We compare GSPC against implementations of grouped convolutions present in widely used deep learning  ... 
arXiv:2006.09791v1 fatcat:prpnrh3y2fevnclcz32flkcjmq

Instance-Optimal Compressed Sensing via Posterior Sampling [article]

Ajil Jalal and Sushrut Karmalkar and Alexandros G. Dimakis and Eric Price
2021 arXiv   pre-print
We show, for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees.  ...  We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors  ...  Learned D-AMP: Principled neural network based compressive image recovery. In Advances in Neural Information Processing Systems, pp. 1772-1783, 2017. Mosser, L., Dubrule, O., and Blunt, M. J.  ... 
arXiv:2106.11438v1 fatcat:n6tqdezb7zgrnal36iqznydy64
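To make the "posterior sampling estimator" concrete: the paper considers general (e.g., learned generative) priors, where sampling requires approximate methods, but for a standard Gaussian prior the posterior under Gaussian measurements is available in closed form. The sketch below uses that special case purely as an illustration of the measurement model y = Ax + noise and of drawing a reconstruction from p(x | y); none of the dimensions or noise levels come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 50, 20, 0.05        # signal dim, number of measurements, noise level

# Gaussian prior on the signal (a stand-in for the known prior in the paper).
x_true = rng.standard_normal(n)

# Gaussian measurement matrix and noisy linear measurements.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + sigma * rng.standard_normal(m)

# For an N(0, I) prior the posterior p(x | y) is Gaussian in closed form,
# so "posterior sampling" reduces to drawing from N(mu, Sigma).
Sigma = np.linalg.inv(np.eye(n) + (A.T @ A) / sigma**2)
mu = Sigma @ (A.T @ y) / sigma**2
x_sample = rng.multivariate_normal(mu, Sigma)

print("relative reconstruction error:",
      np.linalg.norm(x_sample - x_true) / np.linalg.norm(x_true))
```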

An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks [article]

Qianxiao Li, Shuji Hao
2018 arXiv   pre-print
This allows one to characterize necessary conditions for optimality and develop training algorithms that do not rely on gradients with respect to the trainable parameters.  ...  The developed methods are applied to train, in a rather principled way, neural networks with weights that are constrained to take values in a discrete set.  ...  manifolds must contain an optimal solution, if one exists.  ... 
arXiv:1803.01299v2 fatcat:5rhxonq5sjfcngkj7y3wozs53q

Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks [article]

Elahe Arani, Fahad Sarfraz, Bahram Zonooz
2020 arXiv   pre-print
However, there seems to be an inherent trade-off between optimizing the model for accuracy and robustness.  ...  We demonstrate the effectiveness of the proposed approach across different datasets and network architectures.  ...  Model complexity The magnitudes of the weights of neural networks can provide an estimate of the model's complexity.  ... 
arXiv:2008.07015v2 fatcat:avwvknudcvcfhmfek7fw7ntyqu
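The snippet's remark that weight magnitudes give a rough estimate of model complexity can be made concrete with a one-liner; the network below is an arbitrary placeholder, and the global L2 norm is just one of several magnitude-based proxies one might compute.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# A crude complexity proxy in the spirit of the snippet: the overall magnitude
# of the network's weights, here the global L2 norm of all parameters.
weight_norm = torch.sqrt(sum((p ** 2).sum() for p in model.parameters()))
print(f"global weight norm: {weight_norm.item():.3f}")
```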

Optimal Approximation with Sparsely Connected Deep Neural Networks [article]

Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
2018 arXiv   pre-print
Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks providing close-to-optimal approximation rates.  ...  Specifically, all function classes that are optimally approximated by a general class of representation systems (so-called affine systems) can be approximated by deep neural networks with minimal connectivity  ...  are also optimally representable by neural networks.  ... 
arXiv:1705.01714v4 fatcat:xyht6tpa2rehhdozqhth2e6wui

Model compression as constrained optimization, with application to neural nets. Part V: combining compressions [article]

Miguel Á. Carreira-Perpiñán, Yerlan Idelbayev
2021 arXiv   pre-print
Experimentally with deep neural nets, we observe that 1) we can find significantly better models in the error-compression space, indicating that different compression types have complementary benefits,  ...  We formulate this generally as a problem of optimizing the loss but where the weights are constrained to equal an additive combination of separately compressed parts; and we give an algorithm to learn  ...  A basic issue is the representation ability of the compression: given an optimal point in model space (the weight parameters for a neural net), which manifold or subset of this space can be compressed  ... 
arXiv:2107.04380v1 fatcat:yxfpaug3uzam7lhlhaaxebpomm
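The constraint described in the snippet, that the weights equal an additive combination of separately compressed parts, can be illustrated on a single weight matrix. The particular pair of compressions below (a low-rank factor plus a coarsely quantized residual) is an assumed example for illustration; the paper's combinations and its learning algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))          # toy stand-in for a trained weight matrix

# Part 1: a rank-r factorization of W.
r = 8
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Part 2: a coarsely quantized version of the residual.
step = 0.5
W_quant = step * np.round((W - W_lowrank) / step)

# The constrained model would require its weights to equal this additive combination.
W_hat = W_lowrank + W_quant
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```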

Reverse engineering learned optimizers reveals known and novel mechanisms [article]

Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein
2021 arXiv   pre-print
Learned optimizers are algorithms that can themselves be trained to solve optimization problems.  ...  Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.  ...  a fully connected neural network on the two moons dataset, and (d) training a convolutional neural network on the MNIST dataset.  ... 
arXiv:2011.02159v2 fatcat:n7ciopm3nvfm5ome3ngv3za5qm

Latent Space Arc Therapy Optimization [article]

Noah Bice, Mohamad Fakhreddine, Ruiqi Li, Dan Nguyen, Christopher Kabat, Pamela Myers, Niko Papanikolaou, Neil Kirby
2021 arXiv   pre-print
Traditionally, heuristics such as fluence-map-optimization-informed segment initialization use locally optimal solutions to begin the search of the full arc therapy plan space from a reasonable starting  ...  Volumetric modulated arc therapy planning is a challenging problem in high-dimensional, non-convex optimization.  ...  The canonical example of unsupervised learning with neural networks is deep autoencoders (AE) [18].  ... 
arXiv:2106.05846v1 fatcat:hgdepv3q2jhz3fux7cyirx3v5q
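A minimal sketch of the latent-space optimization pattern suggested by the title and the autoencoder remark: instead of searching the full high-dimensional plan space, optimize a low-dimensional latent code through a trained decoder. The decoder architecture and the quadratic surrogate objective below are hypothetical placeholders, not the paper's dose model or plan representation.

```python
import torch
import torch.nn as nn

# A small decoder standing in for the trained generative half of an autoencoder.
decoder = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 32))

# Some differentiable plan-quality surrogate; here just a toy quadratic objective.
target = torch.randn(32)
def objective(plan):
    return ((plan - target) ** 2).mean()

# Optimize in the low-dimensional latent space rather than over the full plan space.
z = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = objective(decoder(z))
    loss.backward()
    opt.step()
print("final surrogate loss:", loss.item())
```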
Showing results 1 — 15 out of 6,801 results