
Coreset-Based Neural Network Compression [article]

Abhimanyu Dubey, Moitreya Chatterjee, Narendra Ahuja
2018 arXiv   pre-print
We propose a novel Convolutional Neural Network (CNN) compression algorithm based on coreset representations of filters. ... Additionally, these compressed networks, when fine-tuned, successfully generalize to other domains as well. ... space of convolutional filter weights and sample activations to reduce neural network size, using the long-established concept of coresets, coupled with an activation-based pooling technique. ...
arXiv:1807.09810v1
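The entry above compresses a CNN by representing a layer's filters with a coreset. Below is a minimal sketch of the general idea, using plain k-means over flattened kernels as the summarization step; the function name and the clustering choice are illustrative assumptions, and the paper's activation-based pooling is omitted.

```python
# Illustrative sketch: compress a conv layer's filters by keeping a small
# "coreset" of representative filters (k-means centroids) and remapping
# every original filter to its nearest representative.
import numpy as np

def filter_coreset(filters, k, n_iter=20, seed=0):
    """filters: (num_filters, fan_in) flattened conv kernels."""
    rng = np.random.default_rng(seed)
    centers = filters[rng.choice(len(filters), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each filter to its nearest representative.
        d = ((filters[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        # Recompute representatives as cluster means.
        for j in range(k):
            members = filters[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, assign  # k filters plus an index map replace the layer

filters = np.random.randn(64, 3 * 3 * 16)   # 64 kernels, 3x3x16 each
coreset, assign = filter_coreset(filters, k=16)
print(coreset.shape, assign.shape)           # (16, 144) (64,)
```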

Data-Independent Structured Pruning of Neural Networks via Coresets [article]

Ben Mussay, Daniel Feldman, Samson Zhou, Vladimir Braverman, Margarita Osadchy
2020 arXiv   pre-print
Model compression is crucial for deployment of neural networks on devices with limited computational and memory resources. ... Our method is based on the coreset framework; it approximates the output of a layer of neurons/filters by a coreset of neurons/filters in the previous layer and discards the rest. ... Data-independent methods [33, 18, 50, 20] compress neural networks based on the weights' characteristics. ...
arXiv:2008.08316v1
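The entry above approximates a layer's output by a coreset of neurons in the previous layer. A hedged sketch of the data-independent flavor follows, using importance sampling proportional to |w_i| with unbiased reweighting; the names and the sampling rule are assumptions, and the paper's per-activation-function guarantees are not reproduced.

```python
# Minimal sketch of data-independent neuron selection: keep a coreset of
# incoming neurons sampled with probability proportional to |w_i| and
# reweight so the pre-activation is preserved in expectation.
import numpy as np

def neuron_coreset(w, m, seed=0):
    """w: incoming weights of one neuron; m: coreset sample size."""
    rng = np.random.default_rng(seed)
    p = np.abs(w) / np.abs(w).sum()          # sampling distribution
    idx = rng.choice(len(w), size=m, p=p)     # sample with replacement
    w_new = np.zeros_like(w)
    for i in idx:                             # unbiased reweighting
        w_new[i] += w[i] / (m * p[i])
    keep = np.nonzero(w_new)[0]               # surviving neurons
    return keep, w_new[keep]

w = np.random.randn(512)
x = np.random.randn(512)
keep, w_c = neuron_coreset(w, m=64)
print(w @ x, w_c @ x[keep])                   # close in expectation
```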

Data-Independent Neural Pruning via Coresets [article]

Ben Mussay, Margarita Osadchy, Vladimir Braverman, Samson Zhou, Dan Feldman
2020 arXiv   pre-print
Model compression has become a central research topic, as it is crucial for the deployment of neural networks on devices with limited computational and memory resources. ... Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. ... We provide theoretical compression rates for some of the most popular neural activation functions, summarized in Table 1. ... Our compression algorithm is based on a data summarization ...
arXiv:1907.04018v3

Deep Active Learning over the Long Tail [article]

Yonatan Geifman, Ran El-Yaniv
2017 arXiv   pre-print
This paper is concerned with pool-based active learning for deep neural networks. ... Motivated by coreset dataset compression ideas, we present a novel active learning algorithm that queries consecutive points from the pool using farthest-first traversals in the space of neural activation ... Of course, by adopting the coreset view, it would be very interesting to prove approximation guarantees for neural networks using existing or new techniques. ...
arXiv:1711.00941v1
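Farthest-first traversal is the greedy k-center rule, so it can be sketched concretely. In the sketch below, the embedding matrix is a stand-in for the neural activations the paper queries in; `farthest_first` and its arguments are hypothetical names.

```python
# Sketch of the farthest-first (greedy k-center) query rule: repeatedly
# pick the pool point whose nearest labeled point, measured in some
# embedding space, is farthest away.
import numpy as np

def farthest_first(embeddings, labeled_idx, budget):
    dist = np.full(len(embeddings), np.inf)
    for i in labeled_idx:                      # distance to labeled set
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[i], axis=1))
    picked = []
    for _ in range(budget):
        j = int(dist.argmax())                 # farthest point so far
        picked.append(j)
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[j], axis=1))
    return picked

emb = np.random.randn(1000, 32)                # stand-in activations
queries = farthest_first(emb, labeled_idx=[0, 1, 2], budget=10)
print(queries)
```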

A Unified Approach to Coreset Learning [article]

Alaa Maalouf and Gilad Eini and Ben Mussay and Dan Feldman and Margarita Osadchy
2021 arXiv   pre-print
Furthermore, our approach applied to deep network pruning provides the first coreset for a full deep network, i.e., it compresses the whole network at once rather than layer by layer or via a similar divide-and-conquer ... To address these limitations, we propose a generic, learning-based algorithm for the construction of coresets. ... summarize the input training set, and (b) model compression, i.e., learning a coreset of all training parameters of a deep neural network at once (useful for model pruning). ...
arXiv:2111.03044v1
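As a loose illustration of "learning" a coreset rather than sampling one, the sketch below treats per-point weights as free parameters fitted so that the weighted loss tracks the full-data loss at random probe parameter vectors, then keeps the top-k weights. This is an assumption-laden toy, not the paper's end-to-end construction.

```python
# Hedged sketch of a learning-based coreset: fit per-point weights so
# the weighted loss agrees with the full-data loss across random probe
# parameter settings, then keep the k largest weights.
import numpy as np

def learn_coreset(per_point_losses, k):
    """per_point_losses: (num_probes, n) loss of each point at each probe."""
    L = per_point_losses
    y = L.mean(axis=1)                       # full-data loss per probe
    w, *_ = np.linalg.lstsq(L, y, rcond=None)
    w = np.clip(w, 0.0, None)                # coreset weights are nonnegative
    keep = np.argsort(w)[-k:]                # top-k points form the coreset
    return keep, w[keep]

probes = np.abs(np.random.randn(200, 1000))  # synthetic per-point losses
keep, w = learn_coreset(probes, k=50)
print(keep.shape, w.min() >= 0)
```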

Data Summarization via Bilevel Optimization [article]

Zalán Borsos, Mojmír Mutný, Marco Tagliasacchi, Andreas Krause
2021 arXiv   pre-print
In contrast to existing approaches, our framework does not require model-specific adaptations and applies to any twice-differentiable model, including neural networks. ... In this work, we propose a generic coreset construction framework that formulates coreset selection as a cardinality-constrained bilevel optimization problem. ... For building small coresets (< 1000) for neural networks, we find that construction via the Neural Tangent Kernel proxy is more effective; the experiments in the following sections concern ...
arXiv:2109.12534v1
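The bilevel formulation can be made concrete with a model whose inner problem is closed form. The sketch below replaces the paper's NTK proxy and implicit-gradient machinery with exhaustive greedy forward selection over a ridge-regression inner solver; all names and the greedy simplification are assumptions.

```python
# Cardinality-constrained bilevel selection, toy version: greedily add
# the point whose inclusion, after refitting the inner ridge regression
# on the coreset, most reduces the loss on the full dataset.
import numpy as np

def fit(X, y, lam=1e-3):                       # inner problem (closed form)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def full_loss(theta, X, y):                    # outer objective
    return ((X @ theta - y) ** 2).mean()

def bilevel_coreset(X, y, budget):
    core = []
    for _ in range(budget):
        best_j, best_loss = None, np.inf
        for j in range(len(X)):
            if j in core:
                continue
            theta = fit(X[core + [j]], y[core + [j]])
            loss = full_loss(theta, X, y)
            if loss < best_loss:
                best_j, best_loss = j, loss
        core.append(best_j)
    return core

X, y = np.random.randn(200, 5), np.random.randn(200)
print(bilevel_coreset(X, y, budget=10))
```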

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds [article]

Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
2019 arXiv   pre-print
We present an efficient coreset-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's ... properties of neural networks. ... We presented a coreset-based neural network compression algorithm for compressing the parameters of a trained fully-connected neural network in a manner that approximately preserves the network's ...
arXiv:1804.05345v6
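A hedged sketch of the sensitivity-sampling flavor of this idea for a single neuron: score each incoming edge by its average empirical contribution on a small data sample, sample edges proportionally, and reweight so the pre-activation stays unbiased. This reduces the paper's analytic sensitivity bounds to an empirical proxy, and all names are illustrative.

```python
# Edge sparsification for one neuron via importance sampling on an
# empirical sensitivity score |w_i * x_i| averaged over a data sample.
import numpy as np

def sparsify_neuron(w, X_sample, m, seed=0):
    rng = np.random.default_rng(seed)
    contrib = np.abs(w * X_sample).mean(axis=0)      # empirical sensitivity
    p = contrib / contrib.sum()
    idx = rng.choice(len(w), size=m, p=p)
    w_sparse = np.zeros_like(w)
    for i in idx:
        w_sparse[i] += w[i] / (m * p[i])             # unbiased reweighting
    return w_sparse

w = np.random.randn(256)
X = np.abs(np.random.randn(32, 256))                  # small data sample
w_s = sparsify_neuron(w, X, m=32)
print((w_s != 0).sum(), w @ X[0], w_s @ X[0])         # sparse, similar output
```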

Improving and Understanding Variational Continual Learning [article]

Siddharth Swaroop, Cuong V. Nguyen, Thang D. Bui, Richard E. Turner
2019 arXiv   pre-print
The improvements are achieved by establishing a new best-practice approach to mean-field variational Bayesian neural networks. We then look at the solutions in detail. ... Indeed, VCL with coresets is such a mixed approach. Similarly, Progress & Compress [18] uses concepts from Progressive Neural Networks [17] and EWC [10]. ... Although some of the previous inference-based approaches are closely related to approximate inference in a Bayesian neural network, we optimise the variational objective function. ...
arXiv:1905.02099v1

Seeker: Synergizing Mobile and Energy Harvesting Wearable Sensors for Human Activity Recognition [article]

Cyan Subhra Mishra, Jack Sampson, Mahmut Taylan Kandemir, Vijaykrishnan Narayanan
2022 arXiv   pre-print
However, the computation and power demands of Deep Neural Network (DNN)-based inference pose significant challenges for nodes in an energy-harvesting wireless sensor network (EH-WSN). ... Further, for those inferences left unfinished because of harvested-energy constraints, it leverages an activity-aware coreset (AAC) construction to efficiently communicate compact features to the host device ... However, since clustering-based coreset construction is more expensive than DP-means-based coreset construction, it is not always possible to build a recoverable coreset at the edge. 2) Activity ...
arXiv:2204.13106v1
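The snippet above contrasts clustering-based construction with a cheaper DP-means-based one. Below is a minimal single-pass DP-means-style sketch (threshold lambda, running-mean centers); the names are illustrative and this is not Seeker's exact AAC procedure.

```python
# DP-means-style streaming summarization: a point spawns a new coreset
# center whenever it is farther than lambda from every existing center;
# otherwise the nearest center absorbs it via a running mean and count.
import numpy as np

def dp_means_coreset(stream, lam):
    centers, counts = [], []
    for x in stream:
        if centers:
            d = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(d))
        if not centers or d[j] > lam:
            centers.append(x.astype(float))    # spawn a new center
            counts.append(1)
        else:                                  # fold x into nearest center
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]
    return np.array(centers), np.array(counts)

stream = np.random.randn(500, 8)
C, n = dp_means_coreset(stream, lam=3.0)
print(len(C), n.sum())                         # few weighted representatives
```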

A Novel Sequential Coreset Method for Gradient Descent Algorithms [article]

Jiawei Huang, Ruomin Huang, Wenjie Liu, Nikolaos M. Freris, Hu Ding
2021 arXiv   pre-print
Coresets are a popular data compression technique that has been extensively studied. ... In this paper, based on the "locality" property of gradient descent algorithms, we propose a new framework, termed "sequential coreset", which effectively avoids these obstacles. ... Coresets for robust training of deep neural networks against noisy labels. ...
arXiv:2112.02504v1

Coresets via Bilevel Optimization for Continual Learning and Streaming [article]

Zalán Borsos, Mojmír Mutný, Andreas Krause
2020 arXiv   pre-print
We show how our framework can efficiently generate coresets for deep neural networks, and demonstrate its empirical benefits in continual learning and in streaming settings. ... In this work, we propose a novel coreset construction via cardinality-constrained bilevel optimization. ... While our proposed coreset framework is generally applicable to any twice-differentiable model, we showcase it for the challenging case of deep neural networks. ...
arXiv:2006.03875v2

Class Incremental Online Streaming Learning [article]

Soumya Banerjee, Vinay Kumar Verma, Toufiq Parag, Maneesh Singh, Vinay P. Namboodiri
2021 arXiv   pre-print
A wide variety of methods have been developed to enable lifelong learning in conventional deep neural networks. ... as a stream of data with the following restrictions: (i) each instance arrives one at a time and can be seen only once, and (ii) the input data violates the i.i.d. assumption, i.e., there can be a class-based ... Bayesian neural networks [30] are discriminative models, which extend standard deep neural networks with Bayesian inference. ...
arXiv:2110.10741v1

Bayesian Structure Adaptation for Continual Learning [article]

Abhishek Kumar, Sunabha Chatterjee, Piyush Rai
2020 arXiv   pre-print
We present a novel Bayesian approach to continual learning based on learning the structure of deep neural networks, addressing the shortcomings of both these approaches. ... Two notable directions among the recent advances in continual learning with neural networks are (i) variational-Bayes-based regularization by learning priors from previous tasks, and (ii) learning the ... by another deep neural network with parameters φ). ...
arXiv:1912.03624v2

Wasserstein Measure Coresets [article]

Sebastian Claici and Aude Genevay and Justin Solomon
2020 arXiv   pre-print
We address this oversight by introducing Wasserstein measure coresets, an extension of coresets which by definition takes generalization into account. ... Classical coresets, however, neglect the underlying data distribution, which is often continuous. ... , 2005), and neural network compression (Baykal et al., 2018). ...
arXiv:1805.07412v2
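The definition can be stated compactly. The notation below (mu for the data distribution, nu for the coreset measure, F for the loss class) is my paraphrase of the abstract; the duality step is the standard Kantorovich-Rubinstein theorem.

```latex
% A measure coreset: a discrete measure \nu supported on n points with
\sup_{f \in \mathcal{F}} \Big| \mathbb{E}_{x \sim \mu} f(x) - \mathbb{E}_{x \sim \nu} f(x) \Big| \le \varepsilon .
% For 1-Lipschitz losses, Kantorovich--Rubinstein duality gives
\sup_{\|f\|_{\mathrm{Lip}} \le 1} \big| \mathbb{E}_{\mu} f - \mathbb{E}_{\nu} f \big| = W_1(\mu, \nu) ,
% so constructing the coreset reduces to minimizing Wasserstein distance:
\min_{\nu \,:\, |\mathrm{supp}(\nu)| = n} W_1(\mu, \nu) .
```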

Continual Model-Based Reinforcement Learning with Hypernetworks [article]

Yizhou Huang, Kevin Xie, Homanga Bharadhwaj, Florian Shkurti
2021 arXiv   pre-print
... and performs competitively with baselines that remember an ever-increasing coreset of past experience. ... Effective planning in model-based reinforcement learning (MBRL) and model-predictive control (MPC) relies on the accuracy of the learned dynamics model. ... HyperCRL also outperforms other continual-learning baselines, whether regularization-based (SI, EWC) or replay-based (Coreset). ...
arXiv:2009.11997v2