6,371 Hits in 4.5 sec

On Compressing Deep Models by Low Rank and Sparse Decomposition

Xiyu Yu, Tongliang Liu, Xinchao Wang, Dacheng Tao
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Low-rank approximation and pruning for sparse structures play a vital role in many compression works. However, weight filters tend to be both low-rank and sparse.  ...  Here we propose a unified framework integrating the low-rank and sparse decomposition of weight matrices with feature map reconstruction.  ...  Acknowledgement: This research was supported by Australian Research Council Projects FT-130101457, DP-140102164, LP-150100671.  ... 
doi:10.1109/cvpr.2017.15 dblp:conf/cvpr/YuLWT17 fatcat:3cykddhnvjfhzi43jvutl2buja
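
The Yu et al. entry above describes splitting a weight matrix into a low-rank term plus a sparse term. Below is a minimal NumPy sketch of that idea, not the authors' exact algorithm (their framework also couples the decomposition with feature-map reconstruction); the one-shot split via truncated SVD followed by magnitude thresholding of the residual, and the `keep_ratio` parameter, are illustrative assumptions.

```python
import numpy as np

def low_rank_plus_sparse(W, rank, keep_ratio=0.05):
    """Approximate W ~= L + S with a rank-`rank` L and a sparse residual S.

    One-shot heuristic: L is the truncated SVD of W; S keeps only the
    largest-magnitude entries of the residual W - L.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]      # best rank-`rank` approximation

    R = W - L
    k = max(1, int(keep_ratio * R.size))             # number of residual entries to keep
    thresh = np.sort(np.abs(R), axis=None)[-k]
    S = np.where(np.abs(R) >= thresh, R, 0.0)        # hard-threshold the residual
    return L, S

# Toy usage on a random "weight matrix"
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
L, S = low_rank_plus_sparse(W, rank=32, keep_ratio=0.05)
print("relative error:", np.linalg.norm(W - (L + S)) / np.linalg.norm(W))
```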

Convolutional neural networks compression with low rank and sparse tensor decompositions [article]

Pavel Kaloshin
2020 arXiv   pre-print
The motivation for such approximation is based on the assumption that low-rank and sparse terms allow eliminating two different types of redundancy and thus yield a better compression rate.  ...  Namely, we propose to approximate the convolutional layer weight with a tensor, which can be represented as a sum of low-rank and sparse components.  ...  Low rank and sparse decomposition: Matrix or tensor decomposition can be thought of as an approach to decrease redundancy in the representation by projecting it onto some restricted domain; for low-rank decomposition  ... 
arXiv:2006.06443v1 fatcat:jervtncmu5bsvittktzabmo6b4
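
The Kaloshin entry treats a convolutional weight tensor as a sum of low-rank and sparse components and argues this yields a better compression rate. The sketch below is not the paper's tensor formulation; it only illustrates the compression-rate arithmetic under the simplifying assumption that the 4-D kernel is reshaped into a matrix, with the layer shape, rank, and sparse budget chosen arbitrarily.

```python
import numpy as np

def compression_rate(shape, rank, nnz_sparse):
    """Parameters of a dense conv kernel vs. a rank-`rank` matrix factorization
    plus `nnz_sparse` stored sparse entries (value + flat index)."""
    c_out, c_in, kh, kw = shape
    dense = c_out * c_in * kh * kw
    rows, cols = c_out, c_in * kh * kw            # kernel reshaped to a matrix
    low_rank = rank * (rows + cols)               # two thin factor matrices
    sparse = 2 * nnz_sparse                       # crude storage cost for the sparse term
    return dense / (low_rank + sparse)

shape = (128, 64, 3, 3)                           # hypothetical conv layer
print(f"compression rate ~ {compression_rate(shape, rank=16, nnz_sparse=2000):.1f}x")
```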

PSM-nets: Compressing Neural Networks with Product of Sparse Matrices

Luc Giffon, Stephane Ayache, Hachem Kadri, Thierry Artieres, Ronan Sicre
2021 2021 International Joint Conference on Neural Networks (IJCNN)  
Among the many approaches proposed to tackle this problem, low-rank tensor decompositions are largely investigated to compress deep neural networks.  ...  Such techniques rely on a low-rank assumption of the layer weight tensors that does not always hold in practice.  ...  ACKNOWLEDGMENT This work was funded in part by the French national research agency (grant number ANR16-CE23-0006). This work was performed using HPC resources from GENCI-IDRIS (Grant 2020-AD011011766)  ... 
doi:10.1109/ijcnn52387.2021.9533408 fatcat:kuwm25vnf5antg5pr3ecu7z6zi
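
PSM-nets replace a dense layer weight with a product of sparse matrices. As a rough illustration only, the sketch below alternates least-squares updates with hard thresholding; this heuristic, the inner dimension, and the sparsity level are my own assumptions, not the training procedure from the paper.

```python
import numpy as np

def hard_threshold(M, keep_ratio):
    """Keep only the largest-magnitude entries of M."""
    k = max(1, int(keep_ratio * M.size))
    t = np.sort(np.abs(M), axis=None)[-k]
    return np.where(np.abs(M) >= t, M, 0.0)

def product_of_sparse(W, inner_dim, keep_ratio=0.2, iters=20):
    """Approximate W ~= A @ B with sparse A (m x d) and sparse B (d x n)."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((W.shape[0], inner_dim)) * 0.1
    B = rng.standard_normal((inner_dim, W.shape[1])) * 0.1
    for _ in range(iters):
        # Update one factor by least squares, then re-sparsify it; alternate.
        B = hard_threshold(np.linalg.lstsq(A, W, rcond=None)[0], keep_ratio)
        A = hard_threshold(np.linalg.lstsq(B.T, W.T, rcond=None)[0].T, keep_ratio)
    return A, B

W = np.random.default_rng(1).standard_normal((128, 256))
A, B = product_of_sparse(W, inner_dim=64)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```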

Compressing by Learning in a Low-Rank and Sparse Decomposition Form

Kailing Guo, Xiaona Xie, Xiangmin Xu, Xiaofen Xing
2019 IEEE Access  
Most existing low-rank or sparse compression methods compress the networks by approximating pre-trained models.  ...  Experiments on several common datasets demonstrate our model is superior to other network compression methods based on low-rankness or sparsity.  ...  The work [22] approximates a pre-trained network by low-rank and sparse decomposition for network compression, but it requires the sparsity rate to be set tediously, layer by layer.  ... 
doi:10.1109/access.2019.2947846 fatcat:ejlmjaip7bddjjvcqerhj5pryi
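
The Guo et al. entry emphasizes learning the network directly in a low-rank-plus-sparse form rather than approximating a pre-trained model. A hedged PyTorch sketch of one way to parameterize a linear layer that way follows; the module name, initialization, and the L1 penalty weight are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class LowRankSparseLinear(nn.Module):
    """Linear layer whose weight is parameterized as U @ V^T + S."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.01)
        self.V = nn.Parameter(torch.randn(in_features, rank) * 0.01)
        self.S = nn.Parameter(torch.zeros(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def weight(self):
        return self.U @ self.V.t() + self.S

    def forward(self, x):
        return x @ self.weight().t() + self.bias

# Toy training step: task loss plus an L1 penalty pushing S toward sparsity.
layer = LowRankSparseLinear(512, 256, rank=16)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, target = torch.randn(32, 512), torch.randn(32, 256)
loss = nn.functional.mse_loss(layer(x), target) + 1e-4 * layer.S.abs().sum()
loss.backward()
opt.step()
```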

Literature Review of Deep Network Compression

Ali Alqahtani, Xianghua Xie, Mark W. Jones
2021 Informatics  
In this paper, we present an overview of popular methods and review recent works on compressing and accelerating deep neural networks.  ...  We consider not only pruning methods but also quantization and low-rank factorization methods.  ...  Low-rank factorization has been utilized for model compression and acceleration to achieve further speedup and obtain small CNN models. Rigamonti et al.  ... 
doi:10.3390/informatics8040077 fatcat:u2dzzibapnf2dbjdqkvgl3pztu
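
Since this review covers quantization alongside pruning and low-rank factorization, here is a minimal sketch of symmetric per-tensor int8 weight quantization, one of the simplest variants of the quantization methods it surveys. The helper names and the 8-bit, per-tensor choice are illustrative; practical schemes are usually per-channel and calibrated.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = max(np.abs(w).max() / 127.0, 1e-12)   # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((64, 128)).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```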

MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression [article]

Jie Zhang, Xiaolong Wang, Dawei Li, Shalini Ghosh, Abhishek Kolagunda, Yalin Wang
2019 arXiv   pre-print
State-of-the-art deep model compression methods exploit low-rank approximation and sparsity pruning to remove redundant parameters from a learned hidden layer.  ...  Extensive experiments on large-scale convolutional neural networks demonstrate that MICIK is superior to state-of-the-art model compression approaches with 16X parameter reduction on VGG-16 and 6X on  ...  We first review the basic low-rank and sparse decomposition for a single layer.  ... 
arXiv:1902.00918v1 fatcat:c2t4t7qshrcrhh4t4ivrxpbkem

The Compression Techniques Applied on Deep Learning Model

Haoyuan He, Lingxuan Huang, Zisen Huang, Tiantian Yang
2022 Highlights in Science Engineering and Technology  
In this paper, a low-rank decomposition algorithm is evaluated based on sparse parameters and rank, using the extended BIC for tuning-parameter selection.  ...  This paper reviews deep learning-based deep neural network compression techniques and introduces the key operational points of knowledge extraction and network model on the learning performance of Resolution-Aware  ...  The sparse and low-rank hyperspectral model is given by formula (1): Y = S A^T + X + ε. The sparse and low-rank unmixing problem is given by formula (2). Further understanding the  ... 
doi:10.54097/hset.v4i.920 fatcat:sgysa7rdzbh65hlb4pgi4bnwbq
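
The He et al. snippet cites a sparse-and-low-rank hyperspectral model Y = S A^T + X + ε; formula (2) is cut off in the snippet, so it is not reproduced here. Purely to illustrate the shape of such an observation model, the NumPy construction below uses my own reading of the symbols (S as sparse coefficients, A as a spectral dictionary, X as a low-rank term, ε as noise), which the snippet itself does not define.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands, n_atoms, rank = 1000, 200, 50, 5

A = rng.standard_normal((n_bands, n_atoms))            # dictionary / endmembers (assumed role)
S = rng.standard_normal((n_pixels, n_atoms))
S[rng.random(S.shape) > 0.1] = 0.0                      # ~10% nonzeros -> sparse coefficients
X = rng.standard_normal((n_pixels, rank)) @ rng.standard_normal((rank, n_bands))  # low-rank term
eps = 0.01 * rng.standard_normal((n_pixels, n_bands))   # noise

Y = S @ A.T + X + eps                                   # observation model Y = S A^T + X + eps
print(Y.shape, np.linalg.matrix_rank(X))
```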

Foreword to the Special Issue on Hyperspectral Remote Sensing and Imaging Spectroscopy

S. Prasad, W. Liao, M. He, J. Chanussot
2018 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing  
Wang et al. present an approach to hyperspectral image restoration based on total variation regularized low-rank tensor decomposition.  ...  In Zhang et al., image fusion of multispectral and hyperspectral imagery is undertaken via a spatial-spectral graph-regularized low-rank tensor decomposition.  ... 
doi:10.1109/jstars.2018.2820938 fatcat:pqu6zhrl3rc3tm7tqpi4p4t34m

Recent Advances in Efficient Computation of Deep Convolutional Neural Networks [article]

Jian Cheng, Peisong Wang, Gang Li, Qinghao Hu, Hanqing Lu
2018 arXiv   pre-print
Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design and hardware  ...  As for hardware implementation of deep neural networks, a number of accelerators based on FPGA/ASIC have been proposed in recent years.  ...  To further reduce complexity, [80] proposed a Block-Term Decomposition (BTD) method based on low-rank and group sparse decomposition.  ... 
arXiv:1802.00939v2 fatcat:5mchdjcrc5czhgracihs4jvbmq
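
The BTD method cited above combines a block-term (low-rank) decomposition with group sparsity. The block-term part is involved; the sketch below illustrates only the group-sparse side, under the common convention (my assumption here, not spelled out in the snippet) that a "group" is one output filter of a conv kernel, zeroed out when its l2 norm is among the smallest.

```python
import numpy as np

def prune_filters_by_group_norm(kernel, keep_fraction=0.5):
    """Zero out whole output filters (groups) with the smallest l2 norms.

    kernel: array of shape (c_out, c_in, kh, kw).
    """
    norms = np.linalg.norm(kernel.reshape(kernel.shape[0], -1), axis=1)
    n_keep = max(1, int(keep_fraction * kernel.shape[0]))
    keep = np.argsort(norms)[-n_keep:]                 # indices of the strongest filters
    mask = np.zeros(kernel.shape[0], dtype=bool)
    mask[keep] = True
    pruned = kernel.copy()
    pruned[~mask] = 0.0                                # whole groups set to zero
    return pruned, mask

K = np.random.default_rng(0).standard_normal((64, 32, 3, 3))
pruned, mask = prune_filters_by_group_norm(K, keep_fraction=0.5)
print("filters kept:", mask.sum(), "of", K.shape[0])
```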

Model Compression and Acceleration for Deep Neural Networks: The Principles, Progress, and Challenges

Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang
2018 IEEE Signal Processing Magazine  
Acknowledgments We would like to thank the reviewers and broader community for their feedback on this survey.  ...  This research is supported by National Science Foundation of China, grant number 61401169. The corresponding author of this article is Pan Zhou.  ...  redundant and noncritical ones. 2) Low-rank factorization: Low-rank factorization-based techniques use matrix/tensor decomposition to estimate the informative parameters of the deep convolutional neural  ... 
doi:10.1109/msp.2017.2765695 fatcat:fztx3axyxfeadebs5snmid2o24

Tensor Methods for Generating Compact Uncertainty Quantification and Deep Learning Models [article]

Chunfeng Cui, Cole Hawkins, Zheng Zhang
2019 arXiv   pre-print
By exploiting possible low-rank tensor factorization, many high-dimensional model-based or data-driven problems can be solved to facilitate decision making or machine learning.  ...  To enable the deployment of deep learning on resource-constrained hardware platforms, tensor methods can be used to significantly compress an over-parameterized neural network model or directly train a  ...  • Low-rank Compression [46]: one can also compress the weight matrix or convolution filters by low-rank matrix or tensor decomposition.  ... 
arXiv:1908.07699v1 fatcat:vq4grphvtfan5m2np3stsinp6m
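
The Cui et al. entry points to compressing convolution filters by low-rank tensor decomposition. The following is a rough HOSVD-style Tucker-2 truncation of a 4-D kernel along its channel modes, offered as an assumption-laden sketch rather than the paper's method; real compression pipelines typically choose ranks per layer and fine-tune afterwards.

```python
import numpy as np

def tucker2_truncate(K, rank_out, rank_in):
    """Truncate a conv kernel (c_out, c_in, kh, kw) along its two channel modes."""
    c_out, c_in, kh, kw = K.shape
    # Factor matrices from truncated SVDs of the two channel-mode unfoldings.
    U_out = np.linalg.svd(K.reshape(c_out, -1), full_matrices=False)[0][:, :rank_out]
    U_in = np.linalg.svd(K.transpose(1, 0, 2, 3).reshape(c_in, -1),
                         full_matrices=False)[0][:, :rank_in]
    # Core tensor and the reconstructed (approximated) kernel.
    G = np.einsum('oihw,or,is->rshw', K, U_out, U_in)
    K_hat = np.einsum('rshw,or,is->oihw', G, U_out, U_in)
    params = rank_out * c_out + rank_in * c_in + rank_out * rank_in * kh * kw
    return K_hat, params

K = np.random.default_rng(0).standard_normal((128, 64, 3, 3))
K_hat, params = tucker2_truncate(K, rank_out=32, rank_in=16)
print("rel. error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K),
      "compression:", K.size / params)
```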

A Survey of Model Compression and Acceleration for Deep Neural Networks [article]

Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang
2020 arXiv   pre-print
Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance.  ...  However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements  ...  Table II compares different low-rank models against their baselines on ILSVRC-2012, reporting Top-5 accuracy, speed-up, and compression rate (the AlexNet baseline is listed at 80.03% Top-5 accuracy).  ... 
arXiv:1710.09282v9 fatcat:frwedew2gfe3rjif5ds75jqay4

SlimNets: An Exploration of Deep Model Compression and Acceleration [article]

Ini Oguntola, Subby Olubeko, Christopher Sweeney
2018 arXiv   pre-print
This work evaluates and compares three distinct methods for deep model compression and acceleration: weight pruning, low-rank factorization, and knowledge distillation.  ...  With increased focus on deploying deep neural networks on resource-constrained devices like smartphones, there has been a push to evaluate why these models are so resource-hungry and how they can be made  ...  ACKNOWLEDGMENTS We would like to thank the 6.883 staff at MIT for their instrumental feedback on this project.  ... 
arXiv:1808.00496v1 fatcat:cmcfyslxiza4nehd3beqmulwli
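
SlimNets compares pruning, low-rank factorization, and knowledge distillation; the first two are sketched above. For distillation, here is a minimal version of the usual softened-softmax loss in PyTorch; the temperature and mixing weight below are arbitrary placeholders, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target KL term at temperature T plus the usual hard-label term."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels).item())
```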

2019 Index IEEE Transactions on Computational Imaging Vol. 5

2019 IEEE Transactions on Computational Imaging  
..., TCI Sept. 2019 395-408 Efficient Dynamic Parallel MRI Reconstruction for the Low-Rank Plus Sparse Model.  ...  Lou, Y., +, TCI Sept. 2019 437-449 Efficient Dynamic Parallel MRI Reconstruction for the Low-Rank Plus Sparse Model.  ... 
doi:10.1109/tci.2019.2959176 fatcat:g7nuyesverg2xbjwbzuyp6ovyy

2020 Index IEEE Transactions on Image Processing Vol. 29

2020 IEEE Transactions on Image Processing  
..., TIP 2020 565-578 Hyperspectral Images Denoising via Nonconvex Regularized Low-Rank and Sparse Matrix Decomposition.  ... 
doi:10.1109/tip.2020.3046056 fatcat:24m6k2elprf2nfmucbjzhvzk3m
Showing results 1 — 15 out of 6,371 results