Neural Network based Image Compression for Memory Consumption in Cloud Environment
Indian Journal of Science and Technology
Background/Objectives: The main aim of this hybrid image compression method is to provide good picture quality and a better compression ratio while removing block artifacts from the reconstructed image. Methods/Statistical analysis: To compress an image with the proposed algorithm, images are first digitized. Different transformations are then applied to the digital information of the image; this method uses wavelet transformations (the Haar and Daubechies wavelets). The resulting transformation coefficients are quantized to the nearest integer values, with vector quantization playing an important role in this step. After quantization, the coefficients are encoded using a compression encoding technique; Huffman encoding is used for compressing tablet images and tablet-strip images. Huffman coding is derived from the exact frequencies of the source symbols, and the output of Huffman's algorithm is a variable-length code table. The encoded source symbols stored in this table are then transferred through the channel for decoding. Findings: Since unsupervised neural-network learning algorithms are incorporated into this method, picture quality is improved and the problem of block artifacts is removed. Conclusion/Application: Since cloud computing provides elastic services, high performance, and scalable large-scale data storage, image files are compressed and stored using this hybrid compression algorithm to facilitate long-term storage and efficient transmission and to enhance the performance of recent compression algorithms. The compressed and reconstructed images are evaluated using measures such as CR (Compression Ratio) and PSNR (Peak Signal-to-Noise Ratio). The results show that the proposed algorithm performs better than traditional methods.
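The pipeline described above (wavelet transform, quantization to nearest integers, Huffman encoding, and evaluation by PSNR and CR) can be sketched in miniature on a one-dimensional signal. This is an illustrative sketch only, not the paper's implementation: a single-level Haar transform stands in for the hybrid wavelet stage, the sample row of pixel values is hypothetical, and the neural-network and vector-quantization components are omitted.

```python
import heapq
import math
from collections import Counter

def haar_1d(signal):
    # One level of the 1-D Haar transform: pairwise averages and differences.
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + diff

def inverse_haar_1d(coeffs):
    # Invert the Haar step: a + d and a - d recover each pixel pair.
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out.extend([a + d, a - d])
    return out

def huffman_code(symbols):
    # Build a variable-length prefix code from exact symbol frequencies,
    # as the abstract describes for Huffman's algorithm.
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

def psnr(original, reconstructed, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB against an 8-bit peak value.
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# Hypothetical "image row"; a real input would be a 2-D pixel array.
row = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = haar_1d(row)
quantized = [round(c) for c in coeffs]       # quantize to nearest integer
codebook = huffman_code(quantized)
bits = "".join(codebook[c] for c in quantized)
reconstructed = inverse_haar_1d(quantized)
cr = (len(row) * 8) / len(bits)              # 8 bits/pixel vs. coded bits
print(f"PSNR = {psnr(row, reconstructed):.1f} dB, CR = {cr:.2f}")
```

The only loss here comes from rounding the coefficients, so PSNR stays high while the Huffman stage shortens the bitstream; the full method would add vector quantization and the unsupervised neural-network stage on top of this skeleton.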