
Learning a Single Tucker Decomposition Network for Lossy Image Compression with Multiple Bits-Per-Pixel Rates [article]

Jianrui Cai, Zisheng Cao, Lei Zhang
2018 arXiv   pre-print
However, existing CNN-based LIC methods can usually only train a network for a specific bits-per-pixel (bpp) rate.  ...  In this paper, we propose to learn a single CNN which can perform LIC at multiple bpp rates.  ...  ACKNOWLEDGMENT We gratefully acknowledge the support from NVIDIA Corporation for providing the Titan X GPU used in this research.  ... 
arXiv:1807.03470v1 fatcat:ozqw3wnytfazdhvcl6flgwlrx4
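The TDNet snippet above names the technique but not its mechanics. As a rough illustration of the underlying idea, a truncated Tucker decomposition (here via plain higher-order SVD) trades reconstruction error against storage by choosing the multilinear ranks. This is a generic NumPy sketch, not the paper's TDNet; all function names are ours.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along `mode` (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along `mode`."""
    Tm = np.moveaxis(T, mode, 0)
    return np.moveaxis(np.tensordot(M, Tm, axes=(1, 0)), 0, mode)

def tucker_hosvd(T, ranks):
    """Truncated higher-order SVD: core tensor + one factor matrix per mode."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)   # project onto the leading subspace
    return core, factors

def tucker_reconstruct(core, factors):
    """Expand the core back to the full-size tensor."""
    T = core
    for m, U in enumerate(factors):
        T = mode_product(T, U, m)
    return T
```

Lowering the ranks shrinks the core (fewer stored coefficients, i.e. a lower rate) at the price of higher reconstruction error, which is the rate-distortion knob the entries above exploit.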

A Flexible Lossy Depth Video Coding Scheme Based on Low-rank Tensor Modelling and HEVC Intra Prediction for Free Viewpoint Video [article]

Mansi Sharma, Santosh Kumar
2021 arXiv   pre-print
In this paper, we introduce a novel low-complexity scheme for depth video compression based on low-rank tensor decomposition and HEVC intra coding.  ...  Further, compression of the factor matrices with HEVC intra prediction supports an arbitrary target accuracy through flexible adjustment of the bitrate, tensor decomposition ranks, and quantization parameters.  ...  The CNN-based LIC approach of [62] removes these limitations by proposing an effective Tucker Decomposition Network (TDNet), which can adjust multiple bits-per-pixel rates of the latent image representation within  ... 
arXiv:2104.04678v1 fatcat:qjadbghnqzckjcjkxilqp5tbve

A block-based inter-band predictor using multilayer propagation neural network for hyperspectral image compression [article]

Rui Dusselaar, Manoranjan Paul
2019 arXiv   pre-print
The algorithm also departs from traditional compression methods that encode images pixel by pixel: the compression process only encodes the weight and bias vectors of the BIP-MLPNN, which require few  ...  In this paper, a block-based inter-band predictor (BIP) with a multilayer propagation neural network model (MLPNN) is presented within a completely new framework.  ...  This indicates that the BIP-MLPNN can achieve a relatively higher compression ratio at a very low bit rate.  ... 
arXiv:1902.04191v1 fatcat:xbort7aarbepxd23z2p42si77m

Comprehensive review of hyperspectral image compression algorithms

Yaman Dua, Vinod Kumar, Ravi Shankar Singh
2020 Optical Engineering: The Journal of SPIE  
Storage of these large images is a critical issue that is handled by compression techniques.  ...  , multitemporal-based, and learning-based algorithms.  ...  The network is trained with data different from the original image, with a predefined constant learning rate.  ... 
doi:10.1117/1.oe.59.9.090902 fatcat:7tn2yfduzreanbmufpmu5cyzpu

Table of contents

2020 IEEE Transactions on Image Processing  
Gao 3442 Learning a Single Tucker Decomposition Network for Lossy Image Compression With Multiple Bits-per-Pixel Rates  ...  Kim 710 RYF-Net: Deep Fusion Network for Single Image Haze Removal  ...  A. Dudhane and S.  ... 
doi:10.1109/tip.2019.2940373 fatcat:i7hktzn4wrfz5dhq7hj75u6esa

2020 Index IEEE Transactions on Image Processing Vol. 29

2020 IEEE Transactions on Image Processing  
., +, TIP 2020 8842-8854 Learning a Single Tucker Decomposition Network for Lossy Image Compression With Multiple Bits-per-Pixel Rates.  ... 
doi:10.1109/tip.2020.3046056 fatcat:24m6k2elprf2nfmucbjzhvzk3m

Lossy source coding

T. Berger, J.D. Gibson
1998 IEEE Transactions on Information Theory  
Index Terms—Data compression, image coding, speech coding, rate distortion theory, signal coding, source coding with a fidelity criterion, video coding.  ...  Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory.  ...  Video Compression: Transform-based methods have been a dominant force in image compression at rates below 2 bits/pixel for over 30 years.  ... 
doi:10.1109/18.720552 fatcat:ncecrqlz5beybaxodcipbcm3fq
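Rate-distortion theory, the subject of the entry above, has one closed form worth recalling: the rate-distortion function of a memoryless Gaussian source under squared-error distortion, R(D) = ½ log₂(σ²/D). A minimal sketch (the formula is classical; the function name is ours):

```python
import math

def gaussian_rate_distortion(sigma2, D):
    """R(D) = 0.5 * log2(sigma^2 / D) bits/sample for 0 < D < sigma^2,
    and 0 for D >= sigma^2 (that distortion is achievable at zero rate)."""
    return 0.5 * math.log2(sigma2 / D) if D < sigma2 else 0.0
```

Inverting it, D = σ² · 2^(−2R): each extra bit per sample quarters the achievable mean-squared distortion, the idealized version of the bitrate-versus-quality trade-off the coding schemes above navigate.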

Learning of Graph Compressed Dictionaries for Sparse Representation Classification

Farshad Nourbakhsh, Eric Granger
2016 Proceedings of the 5th International Conference on Pattern Recognition Applications and Methods  
In this paper, the Graph-Compressed Dictionary Learning (GCDL) technique is proposed to learn compact auxiliary dictionaries for SRC.  ...  GCDL is based on matrix factorization and maintains a high level of accuracy with compressed dictionaries because it exploits structural information to represent intra-class variations.  ...  Experiments conducted with a high compression rate produce better accuracy.  ... 
doi:10.5220/0005710403090316 dblp:conf/icpram/NourbakhshG16 fatcat:vpnv6fc245ciditfzje2vost3y

Neural Joint Source-Channel Coding [article]

Kristy Choi, Kedar Tatwawadi, Aditya Grover, Tsachy Weissman, Stefano Ermon
2019 arXiv   pre-print
By adding noise into the latent codes to simulate the channel during training, we learn to both compress and error-correct given a fixed bit-length and computational budget.  ...  However, this decomposition can fall short in the finite bit-length regime, as it requires non-trivial tuning of hand-crafted codes and assumes infinite computational power for decoding.  ...  Acknowledgements We are thankful to Neal Jean, Daniel Levy, Rui Shu, and Jiaming Song for insightful discussions and feedback on early drafts.  ... 
arXiv:1811.07557v3 fatcat:xoft5lj7bndwnj5v3xz5dwf5km
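The "adding noise into the latent codes to simulate the channel" idea above can be illustrated for the simplest case: a binary symmetric channel that flips each latent bit independently during training. A hedged NumPy sketch, not the paper's actual training pipeline (the function name is ours):

```python
import numpy as np

def bsc(bits, flip_prob, rng):
    """Binary symmetric channel: flip each bit independently with
    probability flip_prob. `bits` is an array of 0s and 1s."""
    flips = rng.random(bits.shape) < flip_prob
    return np.where(flips, 1 - bits, bits)
```

In a joint source-channel setup, the decoder would be trained on `bsc(latent_code, p, rng)` rather than on the clean code, so robustness to channel errors is learned rather than hand-designed.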

Efficient Visual Recognition with Deep Neural Networks: A Survey on Recent Advances and New Directions [article]

Yang Wu, Dingheng Wang, Xiaotong Lu, Fan Yang, Guoqi Li, Weisheng Dong, Jianbo Shi
2021 arXiv   pre-print
Deep neural networks (DNNs) have greatly boosted performance on many concrete tasks, with the help of large amounts of training data and new, powerful computation resources.  ...  This paper attempts to provide a systematic summary via a comprehensive survey which can serve as a valuable reference and inspire both researchers and practitioners who work on visual recognition problems.  ...  (a) MNIST: 70,000 handwritten digits. (b) CIFAR-10: 60,000 images in 10 classes with 6,000 images per class. (c) CIFAR-100: 60,000 images in 100 classes with 600 images per class.  ... 
arXiv:2108.13055v2 fatcat:nf3lymdbvzgl7otl7gjkk5qitq

Efficient image compression and decompression algorithms for OCR systems

Boban Arizanovic, Vladan Vuckovic
2018 Facta universitatis - series Electronics and Energetics  
Image compression and decompression methods are compared with the JBIG2 and JPEG2000 image compression standards.  ...  This paper presents efficient new image compression and decompression methods for document images, intended for use in the pre-processing stage of an OCR system designed for the needs of the "Nikola Tesla  ...  Compression standards for bi-level images: bi-level images are represented using only 1 bit per pixel. This bit denotes a black or white color and has the value 0 or 1 depending on the color.  ... 
doi:10.2298/fuee1803461a fatcat:gwwhg7gwlvhwzctbnffsu7u3fa
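The 1-bit-per-pixel representation of bi-level images described above can be demonstrated directly: packing eight binary pixels into each byte gives an 8× reduction over one byte per pixel, even before any entropy coding such as JBIG2's. A minimal NumPy sketch (function names are ours):

```python
import numpy as np

def pack_bilevel(img):
    """Pack a 0/1 image into bytes, 8 pixels per byte."""
    return np.packbits(img.astype(np.uint8)), img.shape

def unpack_bilevel(packed, shape):
    """Inverse of pack_bilevel: recover the original 0/1 image."""
    n = int(np.prod(shape))
    return np.unpackbits(packed)[:n].reshape(shape)
```

The round trip is lossless; standards like JBIG2 then compress the packed bitstream further with context modelling and arithmetic coding.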

Table of contents

2020 IEEE Transactions on Image Processing  
Zhang 3596 Learning a Single Tucker Decomposition Network for Lossy Image Compression With Multiple Bits-per-Pixel Rates  ...  Mishiba 4232 Optimized Sensing Matrix for Single Pixel Multi-Resolution Compressive Spectral Imaging  ... 
doi:10.1109/tip.2019.2940372 fatcat:h23ul2rqazbstcho46uv3lunku

Edge Intelligence: Architectures, Challenges, and Applications [article]

Dianlei Xu, Tong Li, Yong Li, Xiang Su, Sasu Tarkoma, Tao Jiang, Jon Crowcroft, Pan Hui
2020 arXiv   pre-print
We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems  ...  Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in locations close to where data is captured, based on artificial intelligence.  ...  [table excerpt] ... compression rate 7.9× smaller, lossy; [232] CNN, compressive sensing, training efficiency 6× faster, improved; [233] NIN, network pruning, on-device customisation 1.24× faster, 3%, lossy; [234] VGG  ... 
arXiv:2003.12172v2 fatcat:xbrylsvb7bey5idirunacux6pe

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis [article]

Tal Ben-Nun, Torsten Hoefler
2018 arXiv   pre-print
We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning.  ...  Based on those approaches, we extrapolate potential directions for parallelism in deep learning.  ...  The authors achieved a compression ratio (which also includes 32-bit fixed point quantization) of 846-2,871× for a non-convolutional DNN.  ... 
arXiv:1802.09941v2 fatcat:ne2wiplln5eavjvjwf5to7nwsu

Space-Time Window Reconstruction In Parallel High Performance Numeric Simulations. Application For Cfd (Phd Thesis)

Alin Anton, Ioan Cretu
2011 Zenodo  
This thesis proposes a new concept for dealing with large-scale numerical simulation data.  ...  Supercomputing today is like riding a barouche with horses that travel orders of magnitude faster than the storage; long-distance runs add hills and valleys to the landscape; high performance computing  ...  Considering 8 bits per pixel to be the common-practice data rate for visualisation, he proposes a 4:1 compression ratio.  ... 
doi:10.5281/zenodo.15938 fatcat:xh4i6ig7qvfkjapq76phiadpje