
Dataset Pruning: Reducing Training Data by Examining Generalization Influence [article]

Shuo Yang, Zeke Xie, Hanyu Peng, Min Xu, Mingming Sun, Ping Li
2022 arXiv   pre-print
To answer these, we propose dataset pruning, an optimization-based sample selection method that can (1) examine the influence of removing a particular set of training samples on the model's generalization  ...  The empirically observed generalization gap of dataset pruning is substantially consistent with our theoretical expectations.  ...  Few-shot learning aims at improving the performance given limited training data, while dataset pruning aims at reducing the training data without hurting the performance much.  ... 
arXiv:2205.09329v1 fatcat:wa6fvwj4kfgbvgy2z2wb2o5yf4
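
The abstract only names the idea; as a loose, hypothetical illustration of influence-based sample selection (not the authors' constrained-optimization formulation), one could score each training sample by a per-sample gradient-norm proxy and keep only the most influential fraction. All names and the logistic-regression setting below are assumptions.

```python
import numpy as np

def influence_scores(X, y, w):
    """Proxy influence of each sample: the norm of its per-sample logistic-loss
    gradient at the current parameters w (a crude stand-in for the paper's
    generalization-influence measure)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
    grads = (p - y)[:, None] * X           # per-sample gradient of the log-loss
    return np.linalg.norm(grads, axis=1)

def prune_dataset(X, y, w, keep_fraction=0.5):
    """Drop the lowest-influence samples, keeping the fraction whose removal
    would be expected to change generalization the most."""
    scores = influence_scores(X, y, w)
    keep = np.argsort(scores)[::-1][: int(keep_fraction * len(y))]
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] > 0).astype(float)
w = rng.normal(size=20)                    # stand-in for trained model weights
X_small, y_small = prune_dataset(X, y, w)
print(X_small.shape)                       # (500, 20)
```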

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning

Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
2021 Pattern Recognition  
Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming to not sacrifice performance.  ...  We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks.  ...  Each generated 2D dataset consists of 1000 training samples per class.  ... 
doi:10.1016/j.patcog.2021.107899 fatcat:tgdzfm37p5b2tkpcqfpgb5u2ra
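
A minimal sketch of the kind of criterion-driven filter pruning the abstract describes, assuming a simple activation-magnitude score as a stand-in for the paper's Layer-wise Relevance Propagation criterion; the function names and toy model are hypothetical.

```python
import torch
import torch.nn as nn

def filter_scores(model, loader, layer, n_batches=10):
    """Accumulate a per-filter importance score for `layer` (a Conv2d) by
    summing absolute activations over a few batches. Only a crude proxy for
    the LRP criterion, which propagates relevance from the output back to
    each filter."""
    scores = torch.zeros(layer.out_channels)
    acts = []
    hook = layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    model.eval()
    with torch.no_grad():
        for i, (x, _) in enumerate(loader):
            if i >= n_batches:
                break
            model(x)
            scores += acts.pop().abs().sum(dim=(0, 2, 3))
    hook.remove()
    return scores

def prune_filters(layer, scores, prune_ratio=0.25):
    """Zero out the filters with the lowest importance scores."""
    k = int(prune_ratio * layer.out_channels)
    drop = torch.argsort(scores)[:k]
    with torch.no_grad():
        layer.weight[drop] = 0.0
        if layer.bias is not None:
            layer.bias[drop] = 0.0

# toy usage with a random "dataset"
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
loader = [(torch.randn(8, 3, 32, 32), torch.zeros(8)) for _ in range(2)]
prune_filters(model[0], filter_scores(model, loader, model[0], n_batches=2))
```

In the actual method the score comes from a relevance backward pass rather than raw activations; the sketch only shows where such a criterion plugs into a pruning loop.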

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning [article]

Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
2020 arXiv   pre-print
Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming to not sacrifice performance.  ...  We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks.  ...  Each generated 2D dataset consists of 1000 training samples per class.  ... 
arXiv:1912.08881v2 fatcat:igbedbkz5vb7ldcfluk7qkbxle

Weight Pruning and Uncertainty in Radio Galaxy Classification [article]

Devina Mohan, Anna Scaife
2021 arXiv   pre-print
We also examine the effect of principled data augmentation and find that it improves upon the baseline but does not compensate for the observed effect fully.  ...  and that this pruning increases the predictive uncertainty in the model.  ...  We examine the effect of principled data augmentation and find that the cold posterior effect observed in our work reduces slightly with data augmentation.  ... 
arXiv:2111.11654v2 fatcat:j764hbsbyze3tiw7yzt6f4aepu

Task dependent Deep LDA pruning of neural networks [article]

Qing Tian, Tal Arbel, James J. Clark
2020 arXiv   pre-print
Moreover, we examine our approach's potential in network architecture search for specific tasks and analyze the influence of our pruning on model robustness to noises and adversarial attacks.  ...  Experimental results on datasets of generic objects (ImageNet, CIFAR100) as well as domain specific tasks (Adience, and LFWA) illustrate our framework's superior performance over state-of-the-art pruning  ...  For fair comparison, adversarial examples are generated against a third ResNet50 model trained with the same data.  ... 
arXiv:1803.08134v6 fatcat:mck6iyjs5bbr7afgpokwkoq72e

Pruning Classification Rules with Reference Vector Selection Methods [chapter]

Karol Grudziński, Marek Grochowski, Włodzisław Duch
2010 Lecture Notes in Computer Science  
Training two classifiers, the C4.5 decision tree and the Non-Nested Generalized Exemplars (NNGE) covering algorithm, on datasets that have been reduced earlier with the EkP instance compressor leads to  ...  Similar results have been observed with other popular instance filters used for data pruning.  ...  In the first experiment the influence of data pruning on classification generalization has been examined (Tables 2 and 3 ).  ... 
doi:10.1007/978-3-642-13208-7_44 fatcat:gh73d42n7jesloff2goj754cmq
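
As a hedged illustration of instance filtering ahead of classifier training, the sketch below uses an edited-nearest-neighbour rule (a standard instance filter of the kind the paper compares, not the EkP compressor itself); the data and names are made up.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def edited_nearest_neighbours(X, y, k=3):
    """Drop training instances whose label disagrees with the majority of
    their k nearest neighbours, then train the final classifier on the rest."""
    knn = KNeighborsClassifier(n_neighbors=k + 1).fit(X, y)
    # the nearest neighbour of each point is itself, so ask for k+1 and skip it
    neigh = knn.kneighbors(X, return_distance=False)[:, 1:]
    majority = np.array([np.bincount(y[idx]).argmax() for idx in neigh])
    keep = majority == y
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)
X_red, y_red = edited_nearest_neighbours(X, y)
tree = DecisionTreeClassifier().fit(X_red, y_red)   # C4.5-style tree on the pruned set
print(len(y_red), "of", len(y), "instances kept")
```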

PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition [article]

Cheng-I Jeff Lai, Yang Zhang, Alexander H. Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David Cox, James Glass
2021 arXiv   pre-print
PARP is inspired by our surprising observation that subnetworks pruned for pre-training tasks need merely a slight adjustment to achieve a sizeable performance boost in downstream ASR tasks.  ...  baseline pruning methods.  ...  Because of the small dataset size, empirical risk minimization generally does not yield good results. Speech SSL instead assumes there is a much larger unannotated dataset x ∈ D 0 .  ... 
arXiv:2106.05933v2 fatcat:tu3zghjiird23nxhja33euzufy
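
A toy sketch of the prune-adjust-re-prune cycle the abstract hints at, assuming plain magnitude pruning and a single gradient step; it omits the downstream ASR fine-tuning that PARP actually interleaves, and all names are hypothetical.

```python
import torch

def magnitude_mask(weights, sparsity):
    """Binary mask that zeroes the smallest-magnitude `sparsity` fraction."""
    k = max(1, int(sparsity * weights.numel()))
    threshold = weights.abs().flatten().kthvalue(k).values
    return (weights.abs() > threshold).float()

def parp_step(weights, grad, lr=1e-3, sparsity=0.7):
    """One adjust-and-re-prune step: update *all* weights (so previously
    zeroed entries may regrow), then recompute and re-apply the mask."""
    weights = weights - lr * grad              # adjust, ignoring the old mask
    mask = magnitude_mask(weights, sparsity)   # re-prune
    return weights * mask, mask

w = torch.randn(256, 512)
w, m = parp_step(w, torch.randn_like(w))
print(round(float((w == 0).float().mean()), 2))   # roughly the target sparsity, ~0.7
```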

Pruning CNN's with linear filter ensembles [article]

Csanád Sándor, Szabolcs Pável, Lehel Csató
2020 arXiv   pre-print
To counter the limitation imposed by the network size, we use pruning to reduce the network size and – implicitly – the number of floating point operations (FLOPs).  ...  We evaluate our method on a fully connected network, as well as on the ResNet architecture trained on the CIFAR-10 dataset.  ...  Besides the synthetic toy data, we also evaluate our pruning method on ResNet [6] architectures trained on the CIFAR-10 [9] dataset.  ... 
arXiv:2001.08142v2 fatcat:poilc32tlrgg3igcrc35xrq6he
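
The claim that removing filters implicitly removes FLOPs follows directly from the convolution cost formula; a back-of-the-envelope check with made-up layer sizes:

```python
def conv_flops(in_ch, out_ch, k, out_h, out_w):
    """Multiply-accumulate count of a k x k convolution producing an
    out_h x out_w feature map (bias and activation ignored)."""
    return in_ch * out_ch * k * k * out_h * out_w

full = conv_flops(64, 128, 3, 32, 32)      # hypothetical layer
pruned = conv_flops(64, 96, 3, 32, 32)     # 25% of its filters removed
print(full, pruned, 1 - pruned / full)     # FLOPs fall by the same 25%
```

Pruning the filters of one layer also shrinks the input channels of the next, so the savings compound across consecutive layers.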

Pruned Graph Scattering Transforms

Vassilis N. Ioannidis, Siheng Chen, Georgios B. Giannakis
2020 International Conference on Learning Representations  
The present work addresses some limitations of GSTs by introducing a novel so-termed pruned (p)GST approach.  ...  The resultant pruning algorithm is guided by a graph-spectrum-inspired criterion, and retains informative scattering features on-the-fly while bypassing the exponential complexity associated with GSTs.  ...  This work was supported by Mitsubishi Electric Research Laboratories, the Doctoral Dissertation Fellowship of the University of Minnesota, and the NSF grants 171141, and 1500713.  ... 
dblp:conf/iclr/IoannidisCG20 fatcat:ah6uxn3mjban5ny3rc75mnjite

Feature Selection for Multivariate Time Series via Network Pruning [article]

Kang Gu, Soroush Vosoughi, Temiloluwa Prioleau
2021 arXiv   pre-print
In recent years, there has been an ever increasing amount of multivariate time series (MTS) data in various domains, typically generated by a large family of sensors such as wearable devices.  ...  We evaluated the proposed NFS model on four real-world MTS datasets and found that it achieves comparable results with state-of-the-art methods while providing the benefit of feature selection.  ...  The evaluation on four real-world MTS datasets with varying dimensionality has shown the flexibility and effectiveness of NFS on various downstream networks.  ... 
arXiv:2102.06024v3 fatcat:mmezxaslvnf3jln4ywptfcvb2e
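
A generic sketch of pruning-based feature selection, assuming feature importance is read off the L1 norms of the first-layer weights; this is not the exact NFS architecture, and the model below is a placeholder.

```python
import torch
import torch.nn as nn

def select_features(first_layer: nn.Linear, n_keep: int):
    """Score each input feature (e.g. one MTS sensor channel) by the L1 norm
    of its outgoing first-layer weights and keep the top n_keep."""
    scores = first_layer.weight.abs().sum(dim=0)     # one score per input feature
    return torch.argsort(scores, descending=True)[:n_keep]

net = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 2))
kept = select_features(net[0], n_keep=10)            # indices of retained features
print(kept)
```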

Intelligent Intuitionistic Fuzzy with Elephant Swarm Behaviour Based Rule Pruning for Early Detection of Alzheimer in Heterogeneous Multidomain Datasets

2019 International journal of recent technology and engineering  
The objective of this proposed work is to handle the hesitancy of uncertainty by introducing an intelligent intuitionistic fuzzy classifier, which inhibits irrelevant rule generation by acquiring the knowledge  ...  Using elephant swarm search behavior, the rules generated by intuitionistic fuzzy are fine-tuned to avoid the overfitting problem and thus eliminate the irrelevant rules.  ...  Optimized Rule Pruning of Intuitionistic Fuzzy Rules using the Elephant Search Algorithm: one of the general techniques in rule-based classifiers is rule pruning, whose task is to reduce the size of the discovered  ... 
doi:10.35940/ijrte.d9472.118419 fatcat:vjjeg6c7pzah7aeovzak6p5wdi

Functionality-Oriented Convolutional Filter Pruning [article]

Zhuwei Qin, Fuxun Yu, Chenchen Liu, Xiang Chen
2019 arXiv   pre-print
Such an interpretable pruning approach not only offers outstanding computation cost optimization over previous filter pruning methods, but also interprets the filter pruning process.  ...  As significant redundancies are inevitably present in such a structure, many works have proposed pruning the convolutional filters to reduce computation cost.  ...  Acknowledgments This work was supported in part by NSF CNS1717775.  ... 
arXiv:1810.07322v2 fatcat:r4prx7hknve2tho2yqn2rrcbx4
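
One simple notion of filter redundancy, offered only as a hypothetical stand-in for the paper's functionality-oriented analysis: flag filters that are nearly parallel to another filter in the same layer.

```python
import torch

def redundant_filters(conv_weight, threshold=0.95):
    """Flag filters whose flattened weights have cosine similarity above
    `threshold` with an earlier filter; such near-duplicates are natural
    pruning candidates."""
    flat = conv_weight.flatten(1)                       # (out_channels, rest)
    flat = flat / flat.norm(dim=1, keepdim=True)
    sim = flat @ flat.t()                               # pairwise cosine similarity
    return [i for i in range(sim.size(0))
            if any(sim[i, j] > threshold for j in range(i))]

w = torch.randn(16, 3, 3, 3)
w[5] = 1.01 * w[2]                                      # plant a near-duplicate filter
print(redundant_filters(w))                             # [5] (the planted duplicate)
```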

Pruning via Iterative Ranking of Sensitivity Statistics [article]

Stijn Verdenius, Maarten Stol, Patrick Forré
2020 arXiv   pre-print
That is, while already providing the computational benefits of pruning in the training process from the start.  ...  However, in this work we show that by applying the sensitivity criterion iteratively in smaller steps - still before training - we can improve its performance without difficult implementation.  ...  Acknowledgments and Disclosure of Funding This work was supported, funded and supervised by, (a) the University of Amsterdam, in cooperation with (b) the Amsterdam-based company BrainCreators B.V.  ... 
arXiv:2006.00896v2 fatcat:gg6jgkujn5dr5cm4d2blic2tvq
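
A before-training, iterative sensitivity-pruning sketch in the spirit described here (pruning in smaller steps rather than one shot); the saliency |w · dL/dw|, the toy network, and all names below are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def iterative_sensitivity_prune(model, loss_fn, x, y, target_sparsity=0.9, steps=5):
    """At each step, rank the surviving weights by |w * dL/dw| on one batch
    and remove only a small slice, repeating until the target sparsity."""
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    per_step = 1 - (1 - target_sparsity) ** (1 / steps)   # fraction removed per step
    for _ in range(steps):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if n not in masks:
                continue
            saliency = (p * p.grad).abs() * masks[n]       # zero for already-pruned weights
            k = int(per_step * int(masks[n].sum().item()))
            if k < 1:
                continue
            threshold = saliency[masks[n].bool()].kthvalue(k).values
            masks[n][saliency <= threshold] = 0.0
            with torch.no_grad():
                p.mul_(masks[n])                           # keep weights consistent with the mask
    return masks

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
masks = iterative_sensitivity_prune(net, nn.CrossEntropyLoss(), x, y)
print(sum(int(m.sum()) for m in masks.values()), "weights survive")
```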

Pruning variable selection ensembles [article]

Chunxia Zhang, Yilei Wu, Mu Zhu
2017 arXiv   pre-print
By taking stability selection (abbreviated as StabSel) as an example, some experiments are conducted with both simulated and real-world data to examine the performance of the novel algorithm.  ...  Experimental results demonstrate that pruned StabSel generally achieves higher selection accuracy and lower false discovery rates than StabSel and several other benchmark methods.  ...  In the generation stage, it applies the base learner lasso (or randomized lasso) repeatedly to subsamples randomly drawn from the training data.  ... 
arXiv:1704.08265v1 fatcat:z5xm5hm65vgybn4rqajogvs5pe
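
The generation stage described in the last snippet is standard stability selection; a minimal sketch of that baseline (plain lasso on random half-subsamples, thresholded selection frequencies), with made-up data and parameters:

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_subsamples=50, threshold=0.6, seed=0):
    """Fit a lasso on many random half-subsamples and keep the variables
    selected in at least `threshold` of the runs."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        counts += coef != 0
    return np.where(counts / n_subsamples >= threshold)[0]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=200)
print(stability_selection(X, y))   # typically the informative columns [0 1 2]
```

The pruned variant proposed in the paper then discards part of this ensemble before aggregation, which the sketch does not attempt.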

Beyond neural scaling laws: beating power law scaling via data pruning [article]

Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari S. Morcos
2022 arXiv   pre-print
a high-quality data pruning metric that ranks the order in which training examples should be discarded to achieve any pruned dataset size.  ...  Overall, our work suggests that the discovery of good data-pruning metrics may provide a viable path forward to substantially improved neural scaling laws, thereby reducing the resource costs of modern  ...  We therefore examined whether data-pruning can be effective for both reducing the amount of fine-tuning data and the amount of pre-training data.  ... 
arXiv:2206.14486v2 fatcat:p35rlxom2beyvnc4sbqjbxabxu
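
A hedged example of what a data pruning metric can look like: rank examples by distance to their class centroid in some embedding space and keep a fixed fraction. This is only in the spirit of the paper's prototype-based scores, not its exact metric; the data is synthetic.

```python
import numpy as np

def prune_by_metric(embeddings, labels, keep_fraction, keep_hard=True):
    """Score each example by its distance to the class centroid and keep
    either the hardest (far from centroid) or easiest fraction."""
    scores = np.empty(len(labels))
    for c in np.unique(labels):
        idx = labels == c
        centroid = embeddings[idx].mean(axis=0)
        scores[idx] = np.linalg.norm(embeddings[idx] - centroid, axis=1)
    order = np.argsort(scores)
    if keep_hard:
        order = order[::-1]
    return np.sort(order[: int(keep_fraction * len(labels))])

rng = np.random.default_rng(0)
emb = rng.normal(size=(600, 32))
lab = rng.integers(0, 3, size=600)
keep = prune_by_metric(emb, lab, keep_fraction=0.3)
print(keep.shape)   # (180,)
```

The paper's headline finding is that the best choice flips with scale: keep the easy examples when data is scarce and the hard ones when it is abundant.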
Showing results 1 — 15 out of 11,511 results