
Exploring Structural Sparsity in Neural Image Compression [article]

Shanzhi Yin, Chao Li, Wen Tan, Youneng Bao, Yongsheng Liang, Wei Liu
2022 arXiv   pre-print
In this paper, we explore structural sparsity in neural image compression networks to obtain real-time acceleration without any specialized hardware design or algorithm.  ...  Neural image compression has reached or outperformed traditional methods (such as JPEG, BPG, WebP).  ...  As far as we know, this may be the first work exploring structural sparsity in neural image compression for real-time inference acceleration.  ...
arXiv:2202.04595v4 fatcat:onlhyxmyffdwhknhvw6hzcmsjy

Survey on Deep Learning-Based Point Cloud Compression

Maurice Quach, Jiahao Pang, Dong Tian, Giuseppe Valenzise, Frederic Dufaux
2022 Frontiers in Signal Processing  
Point clouds are becoming essential in key applications with advances in capture technologies leading to large volumes of data. Compression is thus essential for storage and transmission.  ...  Current open questions in point cloud compression, existing solutions and perspectives are identified and discussed.  ...
doi:10.3389/frsip.2022.846972 doaj:efaf611e79344f78ab943340b1e56141 fatcat:umnadvlgz5ep5bfnxr2w3uvrqe

Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication [article]

Guy Isely, Christopher J. Hillar, Friedrich T. Sommer
2010 arXiv   pre-print
The new algorithm can explain how neural populations in the brain that receive subsampled input through fiber bottlenecks are able to form coherent response properties.  ...  We verify that the new algorithm performs efficient data compression on par with the recent method of compressive sampling.  ...  To explore the performance of ACS on natural images we train ACS models on compressed image patches from whitened natural images.  ... 
arXiv:1011.0241v1 fatcat:jkqbu2yg3zduhjegopuq2pdbhe

AMC: AutoML for Model Compression and Acceleration on Mobile Devices [article]

Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, Song Han
2019 arXiv   pre-print
In this paper, we propose AutoML for Model Compression (AMC), which leverages reinforcement learning to provide the model compression policy.  ...  Model compression is a critical technique to efficiently deploy neural network models on mobile devices, which have limited computation resources and tight power budgets.  ...  The CIFAR dataset consists of 50k training and 10k testing 32 × 32 tiny images in ten classes. We split the training images into 45k/5k train/validation sets.  ...
arXiv:1802.03494v4 fatcat:o5mbywco6bhgfpdpzlfr3rhsqe

AMC: AutoML for Model Compression and Acceleration on Mobile Devices [chapter]

Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, Song Han
2018 Lecture Notes in Computer Science  
Reward = -Error × log(FLOP); agent: DDPG; action: compress with sparsity ratio a_t (e.g. 50%); embedding s_t = [N, C, H, W, i, ...]  ...  We achieved state-of-the-art model compression results in a fully automated way without any human effort.  ...  As the layers in deep neural networks are correlated in an unknown way, determining the compression policy is highly non-trivial.  ...
doi:10.1007/978-3-030-01234-2_48 fatcat:2rmjdaogf5ap7fg2n3mf5jpnbi
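The figure text quoted in this entry gives AMC's reward as -Error × log(FLOP). A minimal sketch of that signal, assuming a scalar error rate and FLOP count (the function name `amc_reward` is mine, not from the paper):

```python
import math

def amc_reward(error, flops):
    """AMC-style reward: lower error and fewer FLOPs both raise the reward.
    The log keeps the FLOP term from dominating across layers or models of
    very different sizes."""
    return -error * math.log(flops)

# A cheaper model with the same error earns a higher (less negative) reward.
print(amc_reward(0.1, 1e9) > amc_reward(0.1, 2e9))  # True
```

The DDPG agent in the paper then picks per-layer sparsity ratios to maximize this reward under a FLOP or latency constraint.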

Improving Deep Neural Network Sparsity through Decorrelation Regularization

Xiaotian Zhu, Wengang Zhou, Houqiang Li
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
To this end, many works are dedicated to compressing deep neural networks.  ...  Adding group LASSO regularization is one of the most effective model compression methods since it generates structured sparse networks.  ...  Deep neural network compression was first explored by [Han et al., 2015]. They find that removing most of the DNN parameters according to their magnitude does not affect network performance.  ...
doi:10.24963/ijcai.2018/453 dblp:conf/ijcai/ZhuZL18 fatcat:nvkuvyjuwbfkveytejuurrg5hq
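The magnitude-based pruning of [Han et al., 2015] mentioned in this entry can be sketched as follows (a minimal unstructured-pruning illustration over a flat weight list; the helper name `magnitude_prune` is mine):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Unstructured pruning in the style of [Han et al., 2015]: rank parameters
    by |w| and remove the smallest `sparsity` fraction.
    """
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

print(magnitude_prune([0.9, -0.02, 0.5, 0.01, -1.3], sparsity=0.4))
# [0.9, 0.0, 0.5, 0.0, -1.3]
```

Group LASSO, by contrast, drives whole groups (e.g. entire filters or channels) to zero during training, which is why it yields the structured sparsity the entry highlights.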

Only Train Once: A One-Shot Neural Network Training And Pruning Framework [article]

Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu
2021 arXiv   pre-print
Structured pruning is a commonly used technique in deploying deep neural networks (DNNs) onto resource-constrained devices.  ...  optimization problem and propose a novel optimization algorithm, Half-Space Stochastic Projected Gradient (HSPG), to solve it, which outperforms the standard proximal methods on group sparsity exploration  ...  We now intuitively illustrate the strength of HSPG on group sparsity exploration.  ... 
arXiv:2107.07467v2 fatcat:cbsetynjo5cu3ojulf7azddlz4

Neural Sparse Representation for Image Restoration [article]

Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Yun Fu, Ding Liu, Thomas S. Huang
2020 arXiv   pre-print
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks, including image super-resolution, image denoising, and image compression artifacts removal  ...  Our method structurally enforces sparsity constraints upon hidden neurons.  ...  Deep convolutional neural networks for image restoration extend the sparse coding based methods with repeatedly cascaded structures.  ... 
arXiv:2006.04357v1 fatcat:pwvwi2jv2bf6zaoguli4stwbg4

Neural Network Compression Via Sparse Optimization [article]

Tianyi Chen, Bo Ji, Yixin Shi, Tianyu Ding, Biyi Fang, Sheng Yi, Xiao Tu
2020 arXiv   pre-print
application to model compression is rarely well explored.  ...  The compression of deep neural networks (DNNs) to reduce inference cost becomes increasingly important to meet realistic deployment requirements of various applications.  ...  We first compress VGG16 on CIFAR10, which contains 50,000 training images of size 32 × 32 and 10,000 test images.  ...
arXiv:2011.04868v2 fatcat:iojhwxhda5gi5ipoz2zo66hxxu

Computation on Sparse Neural Networks: an Inspiration for Future Hardware [article]

Fei Sun, Minghai Qin, Tianyun Zhang, Liu Liu, Yen-Kuang Chen, Yuan Xie
2020 arXiv   pre-print
We observe that the search for the sparse structure can be a general methodology for high-quality model exploration, in addition to a strategy for high-efficiency model execution.  ...  Neural network models are widely used in solving many challenging problems, such as computer vision, personalized recommendation, and natural language processing.  ...  With a much lower achievable sparsity level in sparse neural networks, early sparse accelerators integrate element-wise weight sparsity through compressed storage and skipping computation on zero weights.  ...
arXiv:2004.11946v1 fatcat:2lnbtmi4grb65nxcxab4kz6pvy

Minimizing Area and Energy of Deep Learning Hardware Design Using Collective Low Precision and Structured Compression [article]

Shihui Yin, Gaurav Srivastava, Shreyas K. Venkataramanaiah, Chaitali Chakrabarti, Visar Berisha, Jae-sun Seo
2018 arXiv   pre-print
However, combining various sparsity structures with binarized or very-low-precision (2-3 bit) neural networks has not been comprehensively explored.  ...  Deep learning algorithms have shown tremendous success in many recognition tasks; however, these algorithms typically include a deep neural network (DNN) structure and a large number of parameters, which  ...  Deep neural networks (DNNs) have seen great success in many cognitive applications such as image classification [1] [2] and speech recognition [3] .  ...
arXiv:1804.07370v1 fatcat:hirsopx7czbexffugk6a3iixmm

Mining the Weights Knowledge for Optimizing Neural Network Structures [article]

Mengqiao Han, Xiabi Liu, Zhaoyang Hai, Xin Duan
2021 arXiv   pre-print
Knowledge embedded in the weights of the artificial neural network can be used to improve the network structure, such as in network compression.  ...  Inspired by how learning works in the mammalian brain, we mine the knowledge contained in the weights of the neural network toward automatic architecture learning in this paper.  ...  Neural Architecture Search. NAS was proposed to explore the space of potential models automatically to optimize the structure in conjunction with the weights [17] . As highlighted by Xie et al.  ... 
arXiv:2110.05954v1 fatcat:whn263b6nvhpvcfv4jxhx5pdhe

Auto Deep Compression by Reinforcement Learning Based Actor-Critic Structure [article]

Hamed Hakkak
2018 arXiv   pre-print
However, conventional compression techniques rely on hand-crafted features [2,3,12] and require exploring a large design space in terms of size, speed, and accuracy.  ...  Model compression is an effective technique for deploying neural network models on devices with limited computation and low power.  ...  In this paper, this is done by introducing a methodology for learning reduced model structures using reinforcement learning.  ...
arXiv:1807.02886v1 fatcat:jw66hxc3zjfefml3asyfpx3q5y

Self-Supervised Generative Adversarial Compression

Chong Yu, Jeff Pool
2020 Neural Information Processing Systems  
Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work compressing generative adversarial networks (GANs).  ...  In this paper, we show that standard model compression techniques, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods.  ...  In contrast, the fine-grained compression strategy works well for all tasks we explored, even when constrained to a structured 2:4 pattern.  ...
dblp:conf/nips/YuP20 fatcat:bo7ta4xyzbfdxjnbrs7y3d66iy
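The 2:4 pattern mentioned in this entry keeps at most two nonzero values in every group of four consecutive weights, which is the structured sparsity form that recent GPU hardware can accelerate. A minimal sketch over a flat weight list (the helper name `enforce_2of4` is mine):

```python
def enforce_2of4(weights):
    """2:4 structured sparsity: in each group of 4 consecutive weights,
    keep the 2 largest-magnitude values and zero the other 2."""
    out = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude entries in this group.
        keep = sorted(range(len(group)),
                      key=lambda j: abs(group[j]), reverse=True)[:2]
        out.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return out

print(enforce_2of4([1.0, -3.0, 0.5, 2.0]))  # [0.0, -3.0, 0.0, 2.0]
```

Because the 50% sparsity is evenly distributed in fixed-size groups, the pattern is dense enough to preserve accuracy in many tasks while still being hardware-friendly.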

Compressing deep quaternion neural networks with targeted regularization [article]

Riccardo Vecchi, Simone Scardapane, Danilo Comminiello, Aurelio Uncini
2020 arXiv   pre-print
In recent years, hyper-complex deep networks (e.g., quaternion-based) have received increasing interest with applications ranging from image reconstruction to 3D audio processing.  ...  To this end, we investigate two extensions of ℓ_1 and structured regularization to the quaternion domain.  ...  The resulting quaternion-valued neural networks (QVNNs) have been successfully applied to, among others, image classification [1, 4] , image coloring and forensics [5] , natural language processing  ... 
arXiv:1907.11546v2 fatcat:lsl25qguxzavtnsbzf6q7jvohm
Showing results 1 — 15 out of 6,384 results