1,227 Hits in 2.8 sec

A Lightweight Neural Network for Inferring ECG and Diagnosing Cardiovascular Diseases from PPG [article]

Yuenan Li, Xin Tian, Qiang Zhu, Min Wu
2021 arXiv   pre-print
A neural network is designed to jointly infer ECG and diagnose cardiovascular diseases (CVDs) from photoplethysmogram (PPG).  ...  We analyze the latent connection between PPG and ECG as well as the CVD-related features of PPG learned by the neural network, aiming to obtain clinical insights from data.  ...  After pruning the kernels at the first convolutional layer and the ECG generation layer of the full network, we replaced the cascaded FEMs and FTMs with the recursive ones and then fine-tuned the network  ... 
arXiv:2012.04949v2 fatcat:als777jgybhu5ik2tthh5ugep4

Binary Stochastic Filtering: feature selection and beyond [article]

Andrii Trelin, Aleš Procházka
2020 arXiv   pre-print
Furthermore, the method is easily generalizable for neuron pruning and selection of regions of importance for spectral data.  ...  Although such regularization is frequently used in neural networks to achieve sparsity of weights or unit activations, it is unclear how it can be employed in the feature selection problem.  ...  Figure 2: Visualization of pruning with BSF. Neurons and BSF units are drawn in circles and squares respectively. Weights of BSF are shown as saturation of squares fill.  ... 
arXiv:2007.03920v1 fatcat:idq2dnefpjabvoa4jej2w3oiti
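
The entry above describes learnable stochastic filtering units placed in front of features or neurons so that a sparsity penalty can drive unimportant ones to zero. As a rough illustration of that gating idea (a deterministic simplification in PyTorch, not the paper's actual Binary Stochastic Filtering unit), one might write:

```python
# Minimal sketch of a learnable feature gate with a sparsity penalty.
# This is a simplified, deterministic stand-in for a stochastic filtering unit.
import torch
import torch.nn as nn

class FeatureGate(nn.Module):
    """One learnable gate per input feature; a penalty on the gate values
    pushes unimportant features toward zero so they can be dropped later."""
    def __init__(self, num_features: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.logits)        # values in (0, 1), one per feature
        return x * gates

    def sparsity_penalty(self) -> torch.Tensor:
        return torch.sigmoid(self.logits).sum()   # encourages gates -> 0

# usage sketch: loss = task_loss + lam * gate.sparsity_penalty(), then keep
# only features whose learned gate exceeds a threshold after training.
```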

Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks [article]

Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy
2020 arXiv   pre-print
The enormous inference cost of deep neural networks can be scaled down by network compression. Pruning is one of the predominant approaches used for deep network compression.  ...  The proposed technique gets rid of the additional retraining cycles by pruning the least important channels in a structured fashion at fixed intervals during the actual training phase.  ...  Conclusion Convolutional Neural Networks are crucial for many computer vision tasks and require energy efficient implementation for low-resource settings.  ... 
arXiv:2002.09958v2 fatcat:msp267opdzbhzcwezphnwtlu4m
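
The snippet mentions pruning the least important channels in a structured fashion at fixed intervals during training. A minimal sketch of that schedule, using a plain L1-norm proxy in place of the paper's feature relevance score (so the scoring function here is an assumption, not the authors' method), could look like:

```python
# Gradual channel pruning during training: every `interval` epochs, zero out
# the lowest-scoring output channels of each Conv2d layer.  The score below is
# a simple L1-norm proxy, not the feature relevance score from the paper.
import torch
import torch.nn as nn

def prune_channels(conv: nn.Conv2d, fraction: float) -> None:
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # per output channel
    n_prune = int(fraction * scores.numel())
    if n_prune == 0:
        return
    idx = torch.argsort(scores)[:n_prune]                    # least important
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0

# inside a training loop (names `model`, `epoch`, `interval` are placeholders):
# if epoch % interval == 0:
#     for m in model.modules():
#         if isinstance(m, nn.Conv2d):
#             prune_channels(m, fraction=0.05)
```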

An Overview of Neural Network Compression [article]

James O'Neill
2020 arXiv   pre-print
Thus, in recent years there has been a resurgence in model compression techniques, particularly for deep convolutional neural networks and self-attention-based networks such as the Transformer.  ...  Hence, this paper provides a timely overview of both old and current compression techniques for deep neural networks, including pruning, quantization, tensor decomposition, knowledge distillation and combinations  ...  The parameters induced by using a VI-based least-squares objective are sparse, improving the generalizability of the student network.  ... 
arXiv:2006.03669v2 fatcat:u2p6gvwhobh53hfjxawzclw7fq
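
Since the survey covers quantization alongside pruning and decomposition, a small worked example of the simplest variant, uniform post-training quantization of a weight tensor, may help make the idea concrete (generic illustration, not a method from the survey):

```python
# Uniform 8-bit quantize/dequantize of a weight tensor (post-training style).
import torch

def quantize_dequantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)        # step size of the grid
    zero_point = qmin - torch.round(w.min() / scale)   # maps w.min() to qmin
    q = torch.clamp(torch.round(w / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale                    # back to float

w = torch.randn(64, 128)
w_q = quantize_dequantize(w)
print((w - w_q).abs().max())   # error is bounded by roughly half a step
```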

Taxonomy of Saliency Metrics for Channel Pruning [article]

Kaveena Persand, Andrew Anderson, David Gregg
2021 arXiv   pre-print
We find that some of our constructed metrics can outperform the best existing state-of-the-art metrics for convolutional neural network channel pruning.  ...  Pruning unimportant parameters can allow deep neural networks (DNNs) to reduce their heavy computation and memory requirements.  ...  However, for convolutional neural networks, there is one level of pruning granularity, channel pruning, where there is a direct relationship between sub-blocks of the weight tensor and sub-blocks of the  ... 
arXiv:1906.04675v2 fatcat:lszdqa2fczfg7pcqch63oaypzi
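
A saliency metric, in the sense used by this entry, is just a rule that ranks channels by estimated importance. Two generic weight-based examples and the ranking overlap between them are sketched below (these baselines are illustrative; the constructed metrics studied in the paper are not reproduced here):

```python
# Two simple channel-saliency metrics (L1 and L2 norms of each filter) and a
# check of how much their "prune first" rankings agree.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)
w = conv.weight.detach()                       # shape (32, 16, 3, 3)

l1 = w.abs().sum(dim=(1, 2, 3))                # saliency per output channel
l2 = (w ** 2).sum(dim=(1, 2, 3)).sqrt()

rank_l1 = torch.argsort(l1)                    # least to most salient
rank_l2 = torch.argsort(l2)

k = 8                                          # candidate channels to prune
overlap = len(set(rank_l1[:k].tolist()) & set(rank_l2[:k].tolist()))
print(f"{overlap}/{k} of the lowest-saliency channels agree between metrics")
```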

BayesNAS: A Bayesian Approach for Neural Architecture Search [article]

Hongpeng Zhou, Minghao Yang, Jun Wang, Wei Pan
2019 arXiv   pre-print
Unlike other NAS methods, we train the over-parameterized network for only one epoch and then update the architecture.  ...  As a byproduct, our approach can be applied directly to compress convolutional neural networks by enforcing structural sparsity, which achieves extremely sparse networks without accuracy deterioration.  ...  The proposed algorithm is generic for the weights in fully connected and convolutional neural networks. The training algorithm is indexed by iteration t. Each iteration contains several epochs.  ... 
arXiv:1905.04919v2 fatcat:ni2qu2cujvbhrnid5wlyoajuvu
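
The compression byproduct mentioned here comes from enforcing structural sparsity on the weights. A common non-Bayesian way to express that is a group-lasso penalty over each convolutional output channel; the sketch below shows only that generic penalty, not the hierarchical sparse prior actually used by BayesNAS:

```python
# Group-lasso penalty that pushes whole convolutional channels toward zero.
import torch
import torch.nn as nn

def group_lasso(model: nn.Module) -> torch.Tensor:
    """Sum of L2 norms of each Conv2d output channel's weights."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            groups = m.weight.view(m.weight.shape[0], -1)  # one group per channel
            penalty = penalty + groups.norm(dim=1).sum()
    return penalty

# usage sketch: loss = task_loss + lam * group_lasso(model); channels whose
# group norm collapses to ~0 can be removed after training.
```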

Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon [article]

Xin Dong, Shangyu Chen, Sinno Jialin Pan
2017 arXiv   pre-print
How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems.  ...  In this paper, we propose a new layer-wise pruning method for deep neural networks.  ...  Acknowledgements This work is supported by NTU Singapore Nanyang Assistant Professorship (NAP) grant M4081532.020, Singapore MOE AcRF Tier-2 grant MOE2016-T2-2-060, and Singapore MOE AcRF Tier-1 grant  ... 
arXiv:1705.07565v2 fatcat:gagmkieo5bbftgbxsdc2u7y27m
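
The pruning criterion in this line of work goes back to Optimal Brain Surgeon, which scores a weight by s_q = w_q^2 / (2 [H^{-1}]_{qq}) and compensates the remaining weights after removal. The NumPy sketch below only illustrates those classic quantities on a toy linear layer; the paper's layer-wise formulation with its own layer-local Hessian is not reproduced:

```python
# Classic Optimal Brain Surgeon saliency and compensating update (toy example).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))              # inputs to a small linear layer
w = rng.normal(size=8)                     # the layer's weights

H = X.T @ X / len(X) + 1e-4 * np.eye(8)    # Gauss-Newton-style layer Hessian
H_inv = np.linalg.inv(H)

saliency = w ** 2 / (2.0 * np.diag(H_inv))
q = int(np.argmin(saliency))               # cheapest weight to remove

delta = -w[q] / H_inv[q, q] * H_inv[:, q]  # OBS update for all weights
w_pruned = w + delta                       # entry q is driven to zero
print(q, w_pruned[q])
```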

CirCNN

Caiwen Ding, Geng Yuan, Xiaolong Ma, Yipeng Zhang, Jian Tang, Qinru Qiu, Xue Lin, Bo Yuan, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu (+4 others)
2017 Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture - MICRO-50 '17  
Large-scale deep neural networks (DNNs) are both compute and memory intensive.  ...  Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning; 2) the increased training complexity; and 3) the lack of rigorous guarantee  ...  waste for small-scale neural networks and additional chips for large-scale ones.  ... 
doi:10.1145/3123939.3124552 dblp:conf/micro/DingLWLLZWQBYMZ17 fatcat:yghqzgu65feuzjujvhvx2penie
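
CirCNN's compression comes from constraining weight blocks to be circulant, so each n-by-n block is stored as a single length-n vector and multiplied via FFTs in O(n log n). A small NumPy check of that equivalence (illustrative only, not the paper's hardware design):

```python
# A circulant weight block times a vector equals a circular convolution,
# which the FFT computes from just the block's first column.
import numpy as np

n = 8
c = np.random.randn(n)                     # first column defines the block
x = np.random.randn(n)

C = np.stack([np.roll(c, k) for k in range(n)], axis=1)    # explicit circulant
y_dense = C @ x

y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real   # O(n log n) version
print(np.allclose(y_dense, y_fft))         # True: same result, n weights stored
```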

Taxonomy of Saliency Metrics for Channel Pruning

Kaveena Persand, Andrew Anderson, David Gregg
2021 IEEE Access  
We find that some of our constructed metrics can outperform the best existing state-of-the-art metrics for convolutional neural network channel pruning.  ...  Pruning unimportant parameters can allow deep neural networks (DNNs) to reduce their heavy computation and memory requirements.  ...  However, for convolutional neural networks, there is one level of pruning granularity, channel pruning, where there is a direct relationship between sub-blocks of the weight tensor and sub-blocks of the  ... 
doi:10.1109/access.2021.3108545 fatcat:x6qbdcetujfg5jzdvpn4ivqy6e

Reducing Parameters of Neural Networks via Recursive Tensor Approximation

Kyuahn Kwon, Jaeyong Chung
2022 Electronics  
Large-scale neural networks have attracted much attention for surprising results in various cognitive tasks such as object detection and image classification.  ...  This process factorizes a given network, yielding a deeper, less dense, and weight-shared network with good initial weights, which can be fine-tuned by gradient descent.  ...  Acknowledgments: The authors also would like to thank the reviewers and editors for their reviews of this research. Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/electronics11020214 fatcat:jgyhnb4punftbi5qcv4i3ndebu
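
The factorization idea can be seen, in its simplest matrix form, as replacing one dense layer by two thinner ones via a truncated SVD. The paper applies a recursive tensor decomposition; the sketch below only shows the basic matrix case (all sizes are arbitrary example values):

```python
# Replace an m-by-n dense layer with a rank-r factorization: m*n parameters
# become r*(m + n), at the cost of an approximation error that grows as r shrinks.
import numpy as np

m, n, r = 256, 512, 32
W = np.random.randn(m, n)

U, S, Vt = np.linalg.svd(W, full_matrices=False)
W1 = U[:, :r] * S[:r]                      # (m, r): second, widening layer
W2 = Vt[:r, :]                             # (r, n): first, narrowing layer

x = np.random.randn(n)
err = np.linalg.norm(W @ x - W1 @ (W2 @ x)) / np.linalg.norm(W @ x)
print(f"relative error {err:.3f}, params {m * n} -> {r * (m + n)}")
```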

AI in Game Playing: Sokoban Solver [article]

Anand Venkatesan, Atishay Jain, Rakesh Grewal
2018 arXiv   pre-print
Games serve as a good breeding ground for trying and testing these algorithms in a sandbox with simpler constraints in comparison to real life.  ...  In this project, we aim to develop an AI agent that can solve the classical Japanese game of Sokoban using various algorithms and heuristics and compare their performances through standard metrics.  ...  a convolutional neural network.  ... 
arXiv:1807.00049v1 fatcat:iy7uno3kijhd7hkudasmyoi3xm

Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [article]

Malena Reiners and Kathrin Klamroth and Michael Stiglmayr
2020 arXiv   pre-print
Overparameterization and overfitting are common concerns when designing and training deep neural networks; they are often counteracted by pruning and regularization strategies.  ...  Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.  ...  This work has been partially supported by EFRE (European fund for regional development) project EFRE-0400216.  ... 
arXiv:2008.13590v1 fatcat:6yaagh7adbhrdlsw53t67uajxu
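
The two competing objectives named in this entry, task accuracy and weight sparsity, are often combined in practice by a simple weighted sum; the paper goes beyond this and treats them as a genuine multiobjective problem, so the sketch below is only the baseline scalarization (the trade-off weight `lam` is an arbitrary assumed value):

```python
# Baseline scalarization of accuracy vs. sparsity: cross-entropy plus a
# weighted L1 penalty over all weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
criterion = nn.CrossEntropyLoss()
lam = 1e-4                                  # trade-off weight (assumed)

x = torch.randn(32, 20)
y = torch.randint(0, 10, (32,))

task_loss = criterion(model(x), y)
l1_loss = sum(p.abs().sum() for p in model.parameters())
loss = task_loss + lam * l1_loss            # single scalarized objective
loss.backward()
```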

Gradual Channel Pruning While Training Using Feature Relevance Scores for Convolutional Neural Networks

Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy
2020 IEEE Access  
The enormous inference cost of deep neural networks can be mitigated by network compression. Pruning connections is one of the predominant approaches used for network compression.  ...  The proposed technique eliminates the need for additional retraining by pruning the least important channels in a structured manner at fixed intervals during the regular training phase.  ...  CONCLUSION Convolutional Neural Networks are crucial for many computer vision tasks and require energy efficient implementation for low-resource settings.  ... 
doi:10.1109/access.2020.3024992 fatcat:xub7ht2agnhtxl7i6wtjjwzfwa

AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results [article]

Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng (+73 others)
2020 arXiv   pre-print
The goal is to devise a network that reduces one or several aspects such as runtime, parameter count, FLOPs, activations, and memory consumption while at least maintaining the PSNR of MSRResNet.  ...  This paper reviews the AIM 2020 challenge on efficient single image super-resolution with a focus on the proposed solutions and results.  ...  Acknowledgements We thank the AIM 2020 sponsors: HUAWEI, MediaTek, Google, NVIDIA, Qualcomm, and Computer Vision Lab (CVL) ETH Zurich. Appendix A: Teams and affiliations  ... 
arXiv:2009.06943v1 fatcat:2s7k5wsgsjgo5flnqaby26cn64

A Recursive Ensemble Learning Approach with Noisy Labels or Unlabeled Data

Yuchen Wang, Yang Yang, Yun-Xia Liu, Anil Anthony Bharath
2019 IEEE Access  
Meanwhile, we provide guidelines for how to choose the most suitable among many candidate neural networks, together with a convenient pruning strategy.  ...  INDEX TERMS Noisy labels, pruning strategy, semi-supervised learning, ensemble learning, deep learning, neural networks.  ...  Du et al. [18] also explored the question of "how many samples are needed to learn a convolutional neural network?"  ... 
doi:10.1109/access.2019.2904403 fatcat:43s3pigfbvcbvdqe7sg66szkny
Showing results 1 — 15 out of 1,227 results