Memory-efficient lossless video compression using temporally extended JPEG-LS and on-line compression
The use of temporal predictors in lossless video coders plays a significant role in compression gain, but it comes at the cost of a significant memory requirement, since at least one frame must be buffered for residue calculation. This work proposes an improvement to the standard JPEG-LS based lossless video coding algorithm that requires a very small amount of memory compared to the regular approach while keeping the computational complexity low. To obtain higher compression, a combination of spatial and temporal predictor models is used, where the appropriate mode is selected adaptively on a per-pixel basis. Using only one reference frame, the context-based temporal coder performs mode selection and prediction-error calculation with already reconstructed pixels. This eliminates the overhead of transmitting the coding mode to the decoder. The storage needed for the single reference frame is further reduced by applying on-line lossy compression to that frame; the relevant pixels are then obtained by partial on-the-fly decompression. The combination of temporally extended context-based prediction and on-line compression achieves a significant gain in compression ratio over standard frame-by-frame JPEG-LS video coding while keeping the memory requirement low, making it suitable as a lightweight lossless video coder for embedded systems.
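The per-pixel adaptive spatial/temporal prediction can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the function names and the simple causal-neighbour error criterion used for mode selection are assumptions. The key property it demonstrates is that the mode decision depends only on already-reconstructed pixels, so the decoder can repeat the same decision and no mode flags need to be transmitted.

```python
import numpy as np

def med_predict(a, b, c):
    """JPEG-LS median edge detector (MED) spatial predictor.
    a = left, b = above, c = above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def _causal(frame, y, x):
    """Causal neighbours (left, above, above-left); zero outside the frame."""
    a = int(frame[y, x - 1]) if x > 0 else 0
    b = int(frame[y - 1, x]) if y > 0 else 0
    c = int(frame[y - 1, x - 1]) if x > 0 and y > 0 else 0
    return a, b, c

def _mode(rec, ref, y, x):
    """Pick spatial vs temporal mode from reconstructed pixels only.
    Criterion (an assumption): which predictor did better on the
    already-coded left and above neighbours."""
    s_cost = t_cost = 0
    for ny, nx in ((y, x - 1), (y - 1, x)):
        if ny < 0 or nx < 0:
            continue
        a, b, c = _causal(rec, ny, nx)
        s_cost += abs(int(rec[ny, nx]) - med_predict(a, b, c))
        t_cost += abs(int(rec[ny, nx]) - int(ref[ny, nx]))
    return "temporal" if t_cost < s_cost else "spatial"

def encode(cur, ref):
    """Residue frame from per-pixel adaptive prediction (lossless, so the
    encoder's reconstructed pixels equal the original causal pixels)."""
    h, w = cur.shape
    res = np.zeros((h, w), dtype=np.int16)
    for y in range(h):
        for x in range(w):
            if _mode(cur, ref, y, x) == "temporal":
                p = int(ref[y, x])
            else:
                p = med_predict(*_causal(cur, y, x))
            res[y, x] = int(cur[y, x]) - p
    return res

def decode(res, ref):
    """Mirror of encode(): the same mode rule is re-derived from the
    progressively reconstructed frame, with no transmitted mode bits."""
    h, w = res.shape
    rec = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if _mode(rec, ref, y, x) == "temporal":
                p = int(ref[y, x])
            else:
                p = med_predict(*_causal(rec, y, x))
            rec[y, x] = p + int(res[y, x])
    return rec
```

In a full coder the residues would then go to the JPEG-LS context modeller and Golomb coder; here the sketch stops at the residue frame, which round-trips exactly through `decode`.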
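The on-line compression of the reference-frame buffer can be illustrated with a deliberately simple stand-in codec. The abstract does not specify the codec, so the right-shift quantiser below is purely an assumption chosen because it allows any single pixel to be reconstructed on the fly without decoding the rest of the buffer; a real implementation would also bit-pack the quantised values to realise the memory saving.

```python
import numpy as np

def compress_ref(frame, shift=2):
    """Lossy on-line compression of the reference frame: drop the `shift`
    low bits of each pixel. (Stand-in for the paper's unspecified codec;
    in practice the (8 - shift)-bit values would be bit-packed.)"""
    return (frame >> shift).astype(np.uint8)

def fetch_pixel(buf, y, x, shift=2):
    """Partial on-the-fly decompression: reconstruct only the pixel needed
    for temporal prediction, restored to the centre of its bin."""
    half = (1 << (shift - 1)) if shift else 0
    return (int(buf[y, x]) << shift) | half
```

With `shift=2` the buffered reference costs 6 bits per pixel instead of 8, and every reconstructed pixel is within 2 grey levels of the original, which bounds the extra temporal-prediction residue the lossy buffer introduces.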