
Natural Compression for Distributed Deep Learning [article]

Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtarik
2020 arXiv   pre-print
Modern deep learning models are often trained in parallel over a collection of distributed machines to reduce training time.  ...  For applications requiring more aggressive compression, we generalize NC to natural dithering, which we prove is exponentially better than the common random dithering technique.  ...
arXiv:1905.10988v2 fatcat:cwk3l74cfjag5nt6v6nqlleuci
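The natural compression (NC) operator named in this abstract amounts to randomized, unbiased rounding of each gradient entry to a nearby power of two. Below is a minimal NumPy sketch of that rounding rule; the function name and the Monte Carlo check are illustrative assumptions, not the authors' code.

```python
import numpy as np

def natural_compression(x, rng):
    """Round each entry to one of the two nearest powers of two, unbiasedly.

    Illustrative sketch of the power-of-two rounding idea (hypothetical helper,
    not the paper's implementation).
    """
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    mag = np.abs(x[nz])
    low = 2.0 ** np.floor(np.log2(mag))      # nearest power of two at or below |x|
    p_up = (mag - low) / low                 # rounding up with this probability keeps the mean exact
    rounded = np.where(rng.random(mag.shape) < p_up, 2.0 * low, low)
    out[nz] = np.sign(x[nz]) * rounded
    return out

rng = np.random.default_rng(0)
g = rng.standard_normal(5)
est = np.mean([natural_compression(g, rng) for _ in range(20000)], axis=0)
print(np.round(g, 3))    # original "gradient"
print(np.round(est, 3))  # average of compressed copies, should track g closely
```

Because each rounded value has the original entry as its expectation, averaging many compressed copies recovers the input, which is the property that makes such operators usable inside distributed SGD.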

Deep learning for sequence modelling : applications in natural languages and distributed compressive sensing [article]

Hamid Palangi
2017
We present a deep learning approach to distributed compressive sensing and show that it addresses the above three questions and is almost as fast as greedy methods during reconstruction.  ...  The underlying data in many machine learning tasks have a sequential nature.  ...  We proposed a deep learning approach to distributed compressive sensing.  ... 
doi:10.14288/1.0343522 fatcat:q2rjegqaqjegnj7akwjk5cgxia

Reducing the Representation Error of GAN Image Priors Using the Deep Decoder [article]

Max Daniels, Paul Hand, Reinhard Heckel
2020 arXiv   pre-print
For compressive sensing and image superresolution, our hybrid model exhibits consistently higher PSNRs than both the GAN priors and Deep Decoder separately, both on in-distribution and out-of-distribution  ...  The deep decoder is an underparameterized and most importantly unlearned natural signal model similar to the Deep Image Prior.  ...  For example, a deep decoder may smooth out regions of an image where there should be edges, as the Deep Decoder does not know from learning that edges may be natural.  ... 
arXiv:2001.08747v1 fatcat:s5gw32xf4zdbtbidu5hjwpn7uu
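This entry and the next both cast recovery as fitting an image model to compressive measurements y = A x. The toy PyTorch sketch below shows the unlearned-decoder variant (Deep Image Prior / Deep Decoder style), where a small randomly initialized decoder is fit directly to the measurements; the network size, dimensions, and step count are assumptions chosen only for illustration.

```python
import torch

torch.manual_seed(0)
n, m = 64 * 64, 1000                      # signal and measurement dimensions (toy sizes)
A = torch.randn(m, n) / m ** 0.5          # random Gaussian measurement matrix
x_true = torch.rand(n)                    # unknown "image", flattened
y = A @ x_true                            # compressive measurements

decoder = torch.nn.Sequential(            # small, untrained decoder acting as the prior
    torch.nn.Linear(32, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, n), torch.nn.Sigmoid(),
)
z = torch.randn(32)                       # fixed random input code
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = torch.norm(A @ decoder(z) - y) ** 2   # fit the decoder output to the measurements
    loss.backward()
    opt.step()

x_hat = decoder(z).detach()
print("relative error:", (torch.norm(x_hat - x_true) / torch.norm(x_true)).item())
```

Swapping the optimization target from the decoder's weights to the latent code of a fixed pretrained generator gives the GAN-prior variant these papers compare against.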

Invertible generative models for inverse problems: mitigating representation error and dataset bias [article]

Muhammad Asim, Max Daniels, Oscar Leong, Ali Ahmed, Paul Hand
2020 arXiv   pre-print
better reconstructions than GAN priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images.  ...  We additionally compare performance for compressive sensing to unlearned methods, such as the deep decoder, and we establish theoretical bounds on expected recovery error in the case of a linear invertible  ...  For compressive sensing on in-distribution images, invertible priors can have lower recovery errors than Deep Decoder, GANs with low dimensional latent representations, and Lasso, across a wide range of  ... 
arXiv:1905.11672v4 fatcat:hgpfoh6frfa4thyxvhmqjzqomi

State-of-the-art Techniques in Deep Edge Intelligence [article]

Ahnaf Hannan Lodhi, Barış Akgün, Öznur Özkasap
2020 arXiv   pre-print
The major research avenues in DEI have been consolidated under Federated Learning, Distributed Computation, Compression Schemes and Conditional Computation.  ...  However, the centralization of computational resources and the need for data aggregation have long been limiting factors in the democratization of Deep Learning applications.  ...  Deep Gradient Compression (DGC) [43] reports that 99.9% of the gradient exchange in distributed SGD is redundant, and successfully employs gradient compression for distributed training.  ...
arXiv:2008.00824v3 fatcat:aofzt6tfbzhvdhkuqlp6njvh6e
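The redundancy figure quoted above is what motivates gradient sparsification: each worker transmits only its largest-magnitude gradient entries and accumulates the rest locally for later rounds. A rough NumPy sketch of top-k sparsification with local accumulation, in the spirit of (but not reproducing) DGC; the 0.1% ratio and the function name are assumptions.

```python
import numpy as np

def sparsify_topk(grad, residual, k_ratio=0.001):
    """Keep only the largest-magnitude entries; accumulate the rest locally.

    Illustrative error-feedback sketch, not DGC's full recipe (which also uses
    momentum correction and warm-up).
    """
    acc = grad + residual                          # add previously unsent gradient mass
    k = max(1, int(k_ratio * acc.size))
    idx = np.argpartition(np.abs(acc), -k)[-k:]    # indices of the top-k magnitudes
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]                         # what actually gets communicated
    new_residual = acc - sparse                    # kept locally for the next step
    return sparse, new_residual

residual = np.zeros(10_000)
for step in range(3):
    g = np.random.randn(10_000)
    sent, residual = sparsify_topk(g, residual)
    print(step, "nonzeros sent:", np.count_nonzero(sent))
```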

Deep Metric Learning with Data Summarization [chapter]

Wenlin Wang, Changyou Chen, Wenlin Chen, Piyush Rai, Lawrence Carin
2016 Lecture Notes in Computer Science  
We present Deep Stochastic Neighbor Compression (DSNC), a framework to compress training data for instance-based methods (such as k-nearest neighbors).  ...  In particular, compressing the data in a deep feature space makes DSNC robust against label noise and issues such as within-class multi-modal distributions.  ...  As a result, we also penalize the distribution of the compressed samples to encourage a multi-modal distribution for each label.  ... 
doi:10.1007/978-3-319-46128-1_49 fatcat:yksr2yobkrg2dgcrh7pw5lfktm
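As a loose illustration of the data-summarization idea, the sketch below compresses each class to a handful of prototypes with plain k-means in a fixed feature space and then classifies by nearest prototype. DSNC itself learns the deep feature space and the compressed samples jointly, which this toy version does not attempt; all names and sizes here are assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, rng=np.random.default_rng(0)):
    """Plain Lloyd's k-means, standing in for a learned per-class summarization."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

# Toy data: two classes in a stand-in "deep feature" space.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(500, 8))
X1 = rng.normal(3.0, 1.0, size=(500, 8))

# Compress each class to 5 prototypes, then classify queries by nearest prototype.
protos = np.vstack([kmeans(X0, 5), kmeans(X1, 5)])
proto_labels = np.array([0] * 5 + [1] * 5)

query = rng.normal(3.0, 1.0, size=(20, 8))
pred = proto_labels[np.argmin(((query[:, None] - protos[None]) ** 2).sum(-1), axis=1)]
print(pred)   # queries drawn near class 1 should mostly be labeled 1
```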

EdgeAI: A Vision for Deep Learning in IoT Era [article]

Kartikeya Bhardwaj, Naveen Suda, Radu Marculescu
2019 arXiv   pre-print
The significant computational requirements of deep learning present a major bottleneck for its large-scale adoption on hardware-constrained IoT devices.  ...  distributed inference.  ...  Deep learning has indeed pushed the frontiers of progress for many computer vision, speech recognition, and natural language processing applications.  ...
arXiv:1910.10356v1 fatcat:6df62csanbcldaf5q6y47wymt4

Quality Assessment of Deep-Learning-Based Image Compression

Giuseppe Valenzise, Andrei Purica, Vedad Hulusic, Marco Cagnazzo
2018 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP)  
In particular, images compressed at low bitrate appear more natural than JPEG 2000 coded pictures, according to a no-reference naturalness measure.  ...  We also show experimentally that the PSNR metric is to be avoided when evaluating the visual quality of deep-learning-based methods, as their artifacts have different characteristics from those of DCT  ...  deep-learning compressed images; iii) we assess the naturalness of deep-learning compressed images, using an opinion- and distortion-unaware metric.  ...
doi:10.1109/mmsp.2018.8547064 dblp:conf/mmsp/ValenzisePHC18 fatcat:5tgexyrz4zeulitynymnysoxui
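For context on why PSNR can mislead here: it is a purely pixel-wise fidelity score, so the smooth, plausible-looking artifacts of learned codecs and the blocky artifacts of DCT-based codecs are scored on the same mean-squared-error scale regardless of how natural they look. A minimal implementation of the standard definition (not taken from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
deg = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(round(psnr(ref, deg), 2), "dB")
```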

Smart Data: Where the Big Data Meets the Semantics

Trong H. Duong, Hong Q. Nguyen, Geun S. Jo
2017 Computational Intelligence and Neuroscience  
For the variety, integrating heterogeneous data sources requires effective methods for providing well-defined ontologies and natural language processing.  ...  The approach in the article "N-Gram-Based Text Compression" by V. H. Nguyen et al. presents an efficient method for compressing texts (in Vietnamese) by using n-gram dictionaries.  ...  Neural network algorithms offer advantages for deep learning and exploit the whole, rather than parts, of the data.  ...
doi:10.1155/2017/6925138 pmid:28337216 pmcid:PMC5346392 fatcat:hwnpux4xjjgirohrkwvlvbibqe
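A toy sketch of the general n-gram dictionary idea, working on English characters for brevity: frequent n-grams are collected into a dictionary and replaced by short references. The cited article targets Vietnamese text with its own dictionary construction, so the names and parameters below are illustrative only.

```python
from collections import Counter

def build_ngram_dictionary(text, n=3, top=50):
    """Collect the most frequent character n-grams as a toy dictionary."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def compress(text, dictionary):
    """Greedy left-to-right substitution of dictionary n-grams by index tokens."""
    out, i = [], 0
    while i < len(text):
        for idx, g in enumerate(dictionary):
            if text.startswith(g, i):
                out.append(("D", idx))          # dictionary reference
                i += len(g)
                break
        else:
            out.append(("L", text[i]))          # literal character
            i += 1
    return out

def decompress(tokens, dictionary):
    return "".join(dictionary[v] if t == "D" else v for t, v in tokens)

sample = "the quick brown fox jumps over the lazy dog " * 20
d = build_ngram_dictionary(sample)
tokens = compress(sample, d)
assert decompress(tokens, d) == sample
print(len(sample), "chars ->", len(tokens), "tokens")
```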

End-to-end lossless compression of high precision depth maps guided by pseudo-residual [article]

Yuyang Wu, Wei Gao
2022 arXiv   pre-print
Utilizing the widespread deep learning environment, we propose an end-to-end learning-based lossless compression method for high-precision depth maps.  ...  We leverage the concept of pseudo-residual to guide the generation of the distribution for the residual and avoid introducing context models.  ...  For non-learned lossless data compression methods, we use ZLIB, GZIP, BZ2 and LZMA. For non-learned lossless image compression methods, we use BPG, PNG, AVIF, WEBP and FLIF.  ...
arXiv:2201.03195v1 fatcat:haaknkzb2bgujplpb3vqk6v26y

New Directions in Distributed Deep Learning: Bringing the Network at Forefront of IoT Design [article]

Kartikeya Bhardwaj, Wei Chen, Radu Marculescu
2020 arXiv   pre-print
iii) Lack of network-aware deep learning algorithms for distributed inference across multiple IoT devices.  ...  We then provide a unified view targeting three research directions that naturally emerge from the above challenges: (1) Federated learning for training deep networks, (2) Data-independent deployment of  ...  Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.  ... 
arXiv:2008.10805v1 fatcat:kwdkpkt2rjcltbphdlraulqncq

On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence [article]

Yi Ma and Doris Tsao and Heung-Yeung Shum
2022 arXiv   pre-print
We believe the two principles are the cornerstones for the emergence of Intelligence, artificial or natural.  ...  More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, that unifies and explains the evolution of modern deep networks  ...  for data distributions with low-dimensional supports.  ... 
arXiv:2207.04630v3 fatcat:yaazh2ok2fdklpv5l3kakdon2u

Homomorphic Parameter Compression for Distributed Deep Learning Training [article]

Jaehee Jang and Byungook Na and Sungroh Yoon
2017 arXiv   pre-print
Distributed training of deep neural networks has received significant research interest, and its major approaches include implementations on multiple GPUs and clusters.  ...  Parallelization can dramatically improve the efficiency of training deep and complicated models with large-scale data.  ...  Under current software and hardware constraints, DNN training demands a massive amount of processing time [1], naturally leading to the need for distributed deep learning [2, 3, 4,  ...
arXiv:1711.10123v1 fatcat:s5f3dgrbjjdeldnlnf5h52impq

DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression [article]

Enmao Diao, Jie Ding, Vahid Tarokh
2019 arXiv   pre-print
To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning.  ...  We propose a new architecture for distributed image compression from a group of distributed data sources.  ...  Image compression with Deep Learning There exist a variety of classical codecs for lossy image compression.  ... 
arXiv:1903.09887v3 fatcat:pggenmvw65cvvinu5fo2eh4wmy

CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization

Frederick Tung, Greg Mori
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Our proposed CLIP-Q method (Compression Learning by In-Parallel Pruning-Quantization) compresses AlexNet by 51-fold, GoogLeNet by 10-fold, and ResNet-50 by 15-fold, while preserving the uncompressed network  ...  However, modern deep networks contain millions of learned weights; a more efficient utilization of computation resources would assist in a variety of deployment scenarios, from embedded platforms with  ...  Acknowledgements This work was supported by the Natural Sciences and Engineering Research Council of Canada.  ...
doi:10.1109/cvpr.2018.00821 dblp:conf/cvpr/TungM18 fatcat:ooq2o22m7badzn5ch2fik35j6i
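A generic sketch of combining the two compression steps CLIP-Q couples: magnitude pruning followed by weight sharing via 1-D k-means. CLIP-Q learns these decisions in parallel with fine-tuning, so this post-hoc version only illustrates the ingredients; the keep ratio, cluster count, and function name are arbitrary assumptions.

```python
import numpy as np

def prune_and_quantize(weights, keep_ratio=0.1, n_clusters=16, iters=25):
    """Magnitude-prune a weight tensor, then quantize survivors to shared values."""
    flat = weights.ravel().copy()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(np.abs(flat), -k)[-k]
    mask = np.abs(flat) >= thresh                       # survivors of pruning
    survivors = flat[mask]

    # 1-D k-means to find the shared weight values (codebook).
    centers = np.linspace(survivors.min(), survivors.max(), n_clusters)
    for _ in range(iters):
        assign = np.argmin(np.abs(survivors[:, None] - centers[None]), axis=1)
        for j in range(n_clusters):
            if np.any(assign == j):
                centers[j] = survivors[assign == j].mean()

    quantized = np.zeros_like(flat)
    quantized[mask] = centers[assign]                   # snap survivors to codebook entries
    return quantized.reshape(weights.shape), mask.reshape(weights.shape), centers

W = np.random.randn(256, 256).astype(np.float32)
Wq, mask, codebook = prune_and_quantize(W)
print("kept fraction:", mask.mean(), "distinct values:", len(np.unique(Wq[mask])))
```

Storing only the mask, the codebook, and per-survivor cluster indices is what yields the multi-fold size reductions this class of methods reports.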
Showing results 1 — 15 out of 85,642