
Dense Pruning of Pointwise Convolutions in the Frequency Domain [article]

Mark Buckler, Neil Adit, Yuwei Hu, Zhiru Zhang, Adrian Sampson
2021 arXiv   pre-print
They are seemingly incompatible: the vast majority of operations in depthwise separable CNNs are in pointwise convolutional layers, but pointwise layers use 1x1 kernels, which do not benefit from frequency  ...  Our key insights are that 1) pointwise convolutions commute with frequency transformation and thus can be computed in the frequency domain without modification, 2) each channel within a given layer has  ...  Methodology: This section describes our technique for computing pointwise convolutions in a DCT-based frequency space, how pruning can be applied in the frequency domain, and then how this pruning can be  ...
arXiv:2109.07707v1 fatcat:2hlka5xx4bh2ncije3onbtjjky
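The commutation claim above is easy to check numerically: a 1x1 (pointwise) convolution is a per-pixel linear mix of channels, so it commutes with any linear transform applied over the spatial dimensions, such as a 2D DCT. A minimal NumPy/SciPy sketch; the tensor shapes and the orthonormal DCT normalization are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
C_in, C_out, H, W = 8, 16, 12, 12

x = rng.standard_normal((C_in, H, W))      # input feature map
w = rng.standard_normal((C_out, C_in))     # 1x1 pointwise kernel

def pointwise(x, w):
    # 1x1 convolution = channel mixing at every spatial location
    return np.einsum('oc,chw->ohw', w, x)

def dct2_spatial(x):
    # 2D DCT over the spatial axes, applied independently per channel
    return dctn(x, axes=(-2, -1), norm='ortho')

# Pointwise conv then DCT ...
a = dct2_spatial(pointwise(x, w))
# ... equals DCT then pointwise conv.
b = pointwise(dct2_spatial(x), w)

print(np.allclose(a, b))   # True: the two operations commute
```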

To prune, or not to prune: exploring the efficacy of pruning for model compression [article]

Michael Zhu, Suyog Gupta
2017 arXiv   pre-print
We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint.  ...  ., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size.  ...  Acknowledgments: The authors thank Huizhong Chen, Volodymyr Kysenko, David Chen, SukHwan Lim, Raziel Alvarez, and Thang Luong for helpful discussions.  ...
arXiv:1710.01878v2 fatcat:kzsphmwc4rdvdmubyhnlpvplcy
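The paper's automated gradual pruning ramps sparsity from an initial value to a final target along a cubic schedule while training continues. A small sketch of that schedule; step counts and sparsity targets below are chosen only for illustration:

```python
def gradual_sparsity(step, s_init=0.0, s_final=0.9,
                     begin_step=0, end_step=10000):
    """Cubic sparsity ramp in the spirit of Zhu & Gupta (2017):
    sparsity rises quickly at first, then flattens out as it
    approaches the final target."""
    if step <= begin_step:
        return s_init
    if step >= end_step:
        return s_final
    progress = (step - begin_step) / (end_step - begin_step)
    return s_final + (s_init - s_final) * (1.0 - progress) ** 3

for step in (0, 2500, 5000, 7500, 10000):
    print(step, round(gradual_sparsity(step), 3))
```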

Taxonomy of Saliency Metrics for Channel Pruning [article]

Kaveena Persand, Andrew Anderson, David Gregg
2021 arXiv   pre-print
We find that some of our constructed metrics can outperform the best existing state-of-the-art metrics for convolutional neural network channel pruning.  ...  We perform an in-depth experimental investigation of more than 300 saliency metrics.  ...  Acknowledgement: This work was supported by a Science Foundation Ireland grant. This work was also supported, in part, by Arm Research.  ...
arXiv:1906.04675v2 fatcat:lszdqa2fczfg7pcqch63oaypzi

Taxonomy of Saliency Metrics for Channel Pruning

Kaveena Persand, Andrew Anderson, David Gregg
2021 IEEE Access  
We find that some of our constructed metrics can outperform the best existing state-of-the-art metrics for convolutional neural network channel pruning.  ...  In particular, we demonstrate the importance of reduction and scaling when pruning groups of weights. We also propose a novel scaling method based on the number of weights transitively removed.  ...  ACKNOWLEDGEMENT: This work was supported by Science Foundation Ireland grant 13/RC/2094 to Lero - The Irish Software Research Centre. This work was also partly supported by Arm Research.  ...
doi:10.1109/access.2021.3108545 fatcat:x6qbdcetujfg5jzdvpn4ivqy6e
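In channel pruning, a saliency metric scores each output channel and the lowest-scoring channels are removed whole. As a baseline illustration (an L1-norm metric, not one of the paper's constructed metrics), a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# Convolution weights laid out as (out_channels, in_channels, kH, kW)
weights = rng.standard_normal((32, 16, 3, 3))

def l1_channel_saliency(w):
    # Reduce each output channel's weights to a single score
    return np.abs(w).reshape(w.shape[0], -1).sum(axis=1)

def prune_channels(w, fraction):
    saliency = l1_channel_saliency(w)
    n_keep = int(round(w.shape[0] * (1.0 - fraction)))
    keep = np.sort(np.argsort(saliency)[::-1][:n_keep])  # highest-saliency channels
    return w[keep], keep

pruned, kept_idx = prune_channels(weights, fraction=0.5)
print(pruned.shape)   # (16, 16, 3, 3)
```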

Faster Pedestrian Recognition Using Deformable Part Models

Alessandro Preziosi, Antonio Prioletti, Luca Castangia
2016 Zenodo  
These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation.  ...  Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location.  ...  Speed: Computing convolutions in the frequency domain as in [6] makes our algorithm faster than most state-of-the-art DPM implementations, without any loss in precision.  ...
doi:10.5281/zenodo.1126758 fatcat:2hqfv7ci3bckvg3bnxbp57lj3i
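The speed-up the snippet refers to rests on the convolution theorem: convolution in the spatial domain becomes elementwise multiplication in the frequency domain. A small NumPy check of that equivalence for circular convolution; the DPM-specific padding and filter handling are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 32, 32
image  = rng.standard_normal((H, W))
kernel = rng.standard_normal((H, W))   # zero-padded filter, same size as the image

# Direct circular convolution, O(H^2 * W^2)
def circular_conv(a, b):
    out = np.zeros_like(a)
    for dy in range(a.shape[0]):
        for dx in range(a.shape[1]):
            out += a[dy, dx] * np.roll(np.roll(b, dy, axis=0), dx, axis=1)
    return out

# FFT-based convolution, O(H * W * log(H * W))
fft_conv = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real

print(np.allclose(circular_conv(image, kernel), fft_conv))   # True
```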

Real-time Denoising and Dereverberation with Tiny Recurrent U-Net [article]

Hyeong-Seok Choi, Sungjin Park, Jie Hwan Lee, Hoon Heo, Dongsuk Jeon, Kyogu Lee
2021 arXiv   pre-print
In addition, we combine the small-sized model with a new masking method called phase-aware β-sigmoid mask, which enables simultaneous denoising and dereverberation.  ...  The number of parameters of state-of-the-art models, however, is often too large to be deployed on devices for real-world applications.  ...  Each 1D-CNN block is a sequence of pointwise convolution and depthwise convolution similar to [9], except the first layer, which uses the standard convolution operation without a preceding pointwise  ...
arXiv:2102.03207v3 fatcat:u45c2d5lqrajvixt4hftemv5fq
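A 1D block built from a pointwise convolution followed by a depthwise convolution, as described in the snippet, can be sketched in a few lines of PyTorch. Channel counts, kernel size, and the activation are illustrative assumptions, not TRU-Net's exact configuration:

```python
import torch
import torch.nn as nn

class PointwiseDepthwiseBlock(nn.Module):
    """Pointwise conv (mixes channels) followed by depthwise conv
    (filters each channel independently along time)."""
    def __init__(self, in_ch, out_ch, kernel_size=5):
        super().__init__()
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.depthwise = nn.Conv1d(out_ch, out_ch, kernel_size,
                                   padding=kernel_size // 2, groups=out_ch)
        self.act = nn.ReLU()

    def forward(self, x):             # x: (batch, in_ch, time)
        return self.act(self.depthwise(self.pointwise(x)))

x = torch.randn(2, 64, 100)
print(PointwiseDepthwiseBlock(64, 128)(x).shape)   # torch.Size([2, 128, 100])
```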

MobileStyleGAN: A Lightweight Convolutional Neural Network for High-Fidelity Image Synthesis [article]

Sergei Belousov
2021 arXiv   pre-print
In recent years, the use of Generative Adversarial Networks (GANs) has become very popular in generative image modeling.  ...  In our work, we focus on the performance optimization of style-based generative models.  ...  Due to the linearity of the convolution operator, the result of the sequentially applied depthwise and pixelwise convolutions is equal to the result of the applied dense convolution: w_dense = w_dw * ...
arXiv:2104.04767v2 fatcat:thyedvq2vvce5jtuwsdtwd2znm
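The fusion identity quoted above (a depthwise convolution followed by a pointwise convolution equals one dense convolution whose kernel is their product) can be verified numerically. A PyTorch sketch under assumed shapes; the paper states the identity for its style-modulated weights:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
C_in, C_out, K = 4, 8, 3
x    = torch.randn(1, C_in, 16, 16)
w_dw = torch.randn(C_in, 1, K, K)        # depthwise kernel, one filter per channel
w_pw = torch.randn(C_out, C_in, 1, 1)    # pointwise (1x1) kernel

# Sequential: depthwise conv, then pointwise conv
y_seq = F.conv2d(F.conv2d(x, w_dw, padding=K // 2, groups=C_in), w_pw)

# Fused dense kernel: w_dense[o, c] = w_pw[o, c] * w_dw[c]
w_dense = w_pw * w_dw.view(1, C_in, K, K)       # -> (C_out, C_in, K, K)
y_dense = F.conv2d(x, w_dense, padding=K // 2)

print(torch.allclose(y_seq, y_dense, atol=1e-5))   # True
```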

A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis [article]

Damien Ferbach, Christos Tsirigotis, Gauthier Gidel, Avishek Bose
2022 arXiv   pre-print
Recent work by demonstrates that the SLTH can also be extended to translation equivariant networks – i.e. CNNs – with the same level of overparametrization as needed for SLTs in dense networks.  ...  In this paper, we generalize the SLTH to functions that preserve the action of the group G – i.e.  ...  In addition, the authors thank Riashat Islam, Manuel Del Verme, Mandana Samiei, and Andjela Mladenovic for their generous sharing of computational resources.  ... 
arXiv:2206.04270v1 fatcat:m5ngdmsy5zbnde6okzivnaypna

Exposing Hardware Building Blocks to Machine Learning Frameworks [article]

Yash Akhauri
2020 arXiv   pre-print
In this thesis, we explore how niche domains can benefit vastly if we look at neurons as a unique boolean function of the form f: B^I → B^O, where B = {0,1}.  ...  Fundamentally, realizing such topologies on hardware asserts a strict limit on the 'fan-in' bits of a neuron due to the doubling of permutations possible with every increment in input bit-length.  ...  magnitude of weights to prune a dense network.  ...
arXiv:2004.05898v1 fatcat:g5a5fly4szfkdlw5kppysx2kia
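Treating a neuron as a boolean function f: B^I → B^O makes the fan-in limit concrete: its lookup table has 2^I rows, so every extra input bit doubles the table. A small Python sketch that tabulates a thresholded binary neuron; the weights and threshold are arbitrary:

```python
from itertools import product

def binary_neuron(bits, weights, threshold):
    """A neuron with binary inputs/output: weighted sum then threshold."""
    return int(sum(w * b for w, b in zip(weights, bits)) >= threshold)

def truth_table(weights, threshold):
    fan_in = len(weights)
    # 2**fan_in entries: the LUT doubles with every extra input bit
    return {bits: binary_neuron(bits, weights, threshold)
            for bits in product((0, 1), repeat=fan_in)}

table = truth_table(weights=(2, -1, 1), threshold=1)
for bits, out in table.items():
    print(bits, '->', out)
print('LUT size:', len(table))   # 2**3 = 8 rows
```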

AugShuffleNet: Improve ShuffleNetV2 via More Information Communication [article]

Longqing Ye
2022 arXiv   pre-print
Evaluated on the CIFAR-10 and CIFAR-100 datasets, AugShuffleNet consistently outperforms ShuffleNetV2 in terms of accuracy, with lower computational cost and fewer parameters.  ...  Based on ShuffleNetV2, we build a more powerful and efficient model family, termed AugShuffleNets, by introducing a higher frequency of cross-layer information communication for better model performance  ...  MobileNetV1 [8] utilizes depth-wise convolution and pointwise convolution to construct a lightweight model for mobile platforms.  ...
arXiv:2203.06589v1 fatcat:ohdkaek45neyxgrbisl2q2t44m
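The saving from swapping a standard convolution for a depthwise + pointwise pair is easy to quantify: a KxK standard convolution has K*K*C_in*C_out weights, while the separable pair has K*K*C_in + C_in*C_out. A quick sketch with arbitrary example sizes:

```python
def conv_params(c_in, c_out, k):
    standard  = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out   # depthwise + pointwise
    return standard, separable

std, sep = conv_params(c_in=128, c_out=256, k=3)
print(std, sep, f'{std / sep:.1f}x fewer parameters')
# 294912 33920 8.7x fewer parameters
```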

Granular Motor State Monitoring of Free Living Parkinson's Disease Patients via Deep Learning [article]

Kamer A. Yuksel, Jann Goschenhofer, Hridya V. Varma, Urban Fietzek, Franz M.J. Pfister
2019 arXiv   pre-print
We introduce a novel network architecture, a post-training scheme and a custom loss function that accounts for label noise to improve the results of our previous work in this domain and to establish a  ...  In this work, we propose the use of a wrist-worn smart-watch, which is equipped with 3D motion sensors, for estimating the motor fluctuation severity of PD patients in a free-living environment.  ...  In this work, we have only applied the first two phases, continuing sparse training on the 75%-pruned connections in each convolutional layer after the initial convergence of the dense network.  ...
arXiv:1911.06913v2 fatcat:qwrv6cnyhvehlaut7vqfpuaksi
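The sparse-training phase described above keeps a fixed pruning mask and continues optimizing only the surviving weights. A minimal PyTorch sketch of re-applying such a 75% magnitude mask after each update; the layer shape and training data are placeholders, not the paper's setup:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv1d(16, 32, kernel_size=3)

# Build a fixed mask that zeroes the 75% smallest-magnitude weights
with torch.no_grad():
    w = conv.weight
    threshold = w.abs().flatten().kthvalue(int(0.75 * w.numel())).values
    mask = (w.abs() > threshold).float()
    w.mul_(mask)

opt = torch.optim.SGD(conv.parameters(), lr=1e-2)
x, target = torch.randn(8, 16, 50), torch.randn(8, 32, 48)

for _ in range(3):                       # a few sparse training steps
    loss = nn.functional.mse_loss(conv(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        conv.weight.mul_(mask)           # pruned connections stay at zero

print(float((conv.weight == 0).float().mean()))   # ~0.75
```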

Weight Pruning via Adaptive Sparsity Loss [article]

George Retsinas, Athena Elafrou, Georgios Goumas, Petros Maragos
2020 arXiv   pre-print
Pruning neural networks has regained interest in recent years as a means to compress state-of-the-art deep neural networks and enable their deployment on resource-constrained devices.  ...  Key to our end-to-end network pruning approach is the formulation of an intuitive and easy-to-implement adaptive sparsity loss that is used to explicitly control sparsity during training, enabling efficient  ...  Many architectural improvements have been devised to improve the cost efficiency of CNNs either by replacing a costly convolutional layer with a set of cheaper convolutions (e.g. pointwise or grouped convolutions  ... 
arXiv:2006.02768v1 fatcat:4ujkdudgdvf2thkivopj2gxopq
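The paper's adaptive sparsity loss is not reproduced here; as a simpler point of reference, the classic way to steer weights toward sparsity through the training objective is an L1 penalty added to the task loss. A hedged PyTorch sketch with an arbitrary toy model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 64), torch.randint(0, 10, (256,))

l1_weight = 1e-3                       # strength of the sparsity-inducing penalty
for _ in range(5):
    task_loss = nn.functional.cross_entropy(model(x), y)
    l1_loss = sum(p.abs().mean() for p in model.parameters())
    loss = task_loss + l1_weight * l1_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fraction of near-zero weights after training (a crude sparsity readout)
flat = torch.cat([p.detach().flatten() for p in model.parameters()])
print(float((flat.abs() < 1e-3).float().mean()))
```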

Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators [article]

Marko Stamenovic, Nils L. Westhausen, Li-Chia Yang, Carl Jensen, Alex Pawlicki
2021 arXiv   pre-print
Although efficient speech enhancement is an active area of research, our work is the first to apply block pruning to SE and the first to address SE model compression in the context of microNPUs.  ...  Using weight pruning, we show that we are able to compress an already compact model's memory footprint by a factor of 42x, from 3.7 MB to 87 kB, while losing only 0.1 dB SDR in performance.  ...  Acknowledgments and Disclosure of Funding: This work was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2177/1 - Project  ...
arXiv:2111.02351v2 fatcat:3sfua4wchbbfxkdxklwu4akzfi
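Block pruning, unlike unstructured weight pruning, zeroes whole contiguous blocks of a weight matrix so the resulting sparsity maps onto an accelerator's compute tiles. A NumPy sketch of magnitude-based block pruning; the block size and target sparsity are arbitrary, not the paper's settings:

```python
import numpy as np

def block_prune(w, block=(4, 4), sparsity=0.75):
    """Zero the lowest-L2-norm blocks of a 2D weight matrix."""
    rows, cols = w.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0
    # View the matrix as a grid of (br x bc) blocks and score each block
    blocks = w.reshape(rows // br, br, cols // bc, bc)
    norms = np.linalg.norm(blocks, axis=(1, 3))          # (rows//br, cols//bc)
    k = int(sparsity * norms.size)
    cutoff = np.sort(norms, axis=None)[k]                # k-th smallest block norm
    mask = (norms >= cutoff)[:, None, :, None]           # broadcast back to weights
    return (blocks * mask).reshape(rows, cols)

rng = np.random.default_rng(3)
w = rng.standard_normal((64, 128))
pruned = block_prune(w)
print(float((pruned == 0).mean()))   # ~0.75
```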

Finding Storage- and Compute-Efficient Convolutional Neural Networks

Daniel Becking, Simon Wiedemann, Klaus-Robert Müller
2020 Zenodo  
Convolutional neural networks (CNNs) have taken the spotlight in a variety of machine learning applications.  ...  Here, a λ-operator balances the entropy constraint and thus the compression gain of the resulting network. We validated the effectiveness of EC2T in a variety of experiments.  ...  Figure 2.2: Kernel and filter domain of a standard convolution with K = 3. Figure 2.3: Kernel and filter domain of a pointwise convolution.  ...
doi:10.5281/zenodo.5501151 fatcat:zjh4kngadrgtdgzniphqrvfndq
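EC2T trains ternary networks under an entropy constraint; that constrained assignment is not reproduced here. As a simpler reference point, the sketch below ternarizes a weight tensor to {-α, 0, +α} with a magnitude threshold (the 0.7·mean heuristic follows ternary weight networks, not EC2T):

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Quantize weights to {-alpha, 0, +alpha} using a magnitude threshold."""
    delta = delta_scale * np.mean(np.abs(w))          # threshold
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

rng = np.random.default_rng(4)
w = rng.standard_normal((256, 256))
q = ternarize(w)
print(sorted(set(np.round(q.flatten(), 4))))   # three levels: [-alpha, 0, +alpha]
print(float((q == 0).mean()))                  # fraction of zeroed weights
```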

MARS: Multi-macro Architecture SRAM CIM-Based Accelerator with Co-designed Compressed Neural Networks [article]

Syuan-Hao Sie, Jye-Luen Lee, Yi-Ren Chen, Chih-Cheng Lu, Chih-Cheng Hsieh, Meng-Fan Chang, Kea-Tiong Tang
2020 arXiv   pre-print
Convolutional neural networks (CNNs) play a key role in deep learning applications.  ...  However, the large storage overheads and the substantial computation cost of CNNs are problematic in hardware accelerators.  ...  MobileNet [4] uses depthwise convolution and pointwise convolution to reduce the number of parameters in networks.  ... 
arXiv:2010.12861v1 fatcat:wevemb5vsbbtrdzip4p5pedeja
Showing results 1 — 15 out of 238 results