AN APPROACH TO VISION-BASED COMPRESSION
Zybin, Kis Ya
unpublished
The rapid growth of multimedia content demands improvements in telecommunication technology. Simply enlarging bandwidth requires additional physical channels, better equipment, and greater energy consumption. Sophisticated methods of video compression are therefore highly desirable to reduce network load. Efficient storage of multimedia and transmission of the data over WANs is a pressing task for telecommunication engineering. This paper considers second-generation coding for multimedia data transmission. This approach exploits properties of the human visual system (HVS) in the coding strategy in order to achieve high compression ratios while maintaining acceptable image quality. The proposed feature-based approach promises a substantial improvement in compression rate. Additionally, the method enables flexible adaptation of the data transfer to the capabilities of the target device.

Introduction. Nowadays there are two major trends in mass-consumer multimedia that influence telecommunication development. The first is the rising quality of modern multimedia content. The resolution of photo sensors and the achievable frame rates are improved yearly by the main hardware vendors; the recent camera standards of 4K (3840×2160) and 8K (7680×4320) UHDTV imply uncompressed frame sizes approaching 100 MB at 8K. With growing frame rates (60 fps and more), this leads to ever-increasing requirements on communication channel capacity. Another trend is the widening availability of Internet access, driven by mobile devices and the growth of the global telecommunication market. Sharing multimedia of any kind, including TV and personal data, is highly valued by customers. Cloud services are developing rapidly, and less and less data is stored locally. Internet-based media devices have joined the stack of home devices, and Internet Protocol Television (IPTV) services offer cable-TV-like service over home Internet connections. Every day customers listen to and view content on the Internet, watch TV, hold video conferences, and share video in social networks, and all of these tasks require multimedia streaming. Upcoming 3D TV and 360° TV standards require significantly larger volumes of data to be transmitted. For example, if higher-resolution or stereoscopic video were introduced (stereoscopy is achieved through 120 fps video synchronized with shuttered glasses, with 60 fps delivered to each eye), the communication service should provide this capability without essential investment.
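The channel-capacity figures quoted above can be checked with a back-of-the-envelope calculation. The sketch below (not from the paper) assumes 8-bit RGB, i.e. 3 bytes per pixel with no chroma subsampling:

```python
# Estimate of uncompressed video storage and bandwidth requirements,
# assuming 3 bytes per pixel (8-bit RGB, no chroma subsampling).

def raw_frame_bytes(width: int, height: int, bytes_per_pixel: int = 3) -> int:
    """Size of one uncompressed frame in bytes."""
    return width * height * bytes_per_pixel

def raw_bitrate_gbps(width: int, height: int, fps: int) -> float:
    """Uncompressed bitrate in gigabits per second."""
    return raw_frame_bytes(width, height) * 8 * fps / 1e9

# 8K UHDTV: one frame is roughly 100 MB, as stated in the text.
frame_8k = raw_frame_bytes(7680, 4320)         # 99,532,800 bytes (~99.5 MB)
rate_8k_60 = raw_bitrate_gbps(7680, 4320, 60)  # ~47.8 Gbit/s uncompressed
```

At 60 fps, uncompressed 8K video would saturate even a multi-gigabit link many times over, which is why compression is unavoidable for such content.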
Additionally, modern electronic devices typically consume a great deal of power, which makes them expensive over time and wasteful of energy resources. One of the traditional approaches to big-data storage and transmission is data compression. Compression is the process of eliminating or reducing data redundancy, which began with Shannon's pioneering work on information theory. The state-of-the-art JPEG2000 and MPEG-4 AVC/H.264 standards are two examples of coding efficiency: a video compression standard can reduce memory requirements by tens of times, and a typical MPEG-4 lossy video compression achieves a compression factor between 20 and 200. Until approximately 1980, the majority of image coding methods relied on techniques from classical information theory (Huffman coding, LZW compression) to exploit the redundancy in images. These techniques were pixel-based and made no use of the information contained within the image itself, and the compression ratios they obtained were moderate, at around 2:1. Even with a lossy technique, such as the discrete cosine transform (DCT), a higher ratio (greater than 30:1) could be achieved only at the expense of image quality. Attempts have recently been made to develop new image-compression techniques that considerably outperform these first-generation image coding techniques in compression ratio. These methods attempt to identify features within the image and use those features to achieve compression.
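As an illustration of the DCT-based lossy coding mentioned above, the following sketch (a generic textbook construction, not the paper's method) transforms an 8×8 block, discards high-frequency coefficients, and reconstructs an approximation. Keeping 9 of 64 coefficients corresponds to a nominal ~7:1 ratio on this block; real codecs add quantization and entropy coding on top of this idea.

```python
import numpy as np

N = 8
# Orthonormal 1D DCT-II basis matrix: C @ C.T == identity, so the
# 2D transform and its inverse are simple matrix products.
C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

# A smooth synthetic block (horizontal gradient), typical of regions
# where the DCT compacts energy into a few low-frequency coefficients.
block = np.tile(np.linspace(0.0, 255.0, N), (N, 1))

coeffs = C @ block @ C.T           # forward 2D DCT
mask = np.zeros((N, N))
mask[:3, :3] = 1.0                 # keep only 9 low-frequency coefficients
recon = C.T @ (coeffs * mask) @ C  # inverse 2D DCT of the truncated block

err = np.abs(recon - block).max()  # small for smooth content
```

For smooth image regions the discarded high-frequency coefficients carry little energy, so the reconstruction error stays small; it is on busy, textured regions that such truncation costs visible quality, which is exactly the trade-off the first-generation techniques faced.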