1,400,198 Hits in 2.8 sec

Data Compression [chapter]

Khalid Sayood
2003 Encyclopedia of Information Systems  
Lossless compression preserves all the information in the data being compressed, and the reconstruction is identical to the original data.  ...  The data to be compressed is divided into blocks, and the data in each block is transformed to a set of coefficients.  ...  Another excellent resource on the Internet is the data compression page maintained by Mark Nelson at  ... 
doi:10.1016/b0-12-227240-4/00029-0 fatcat:rcaxa3zuuvamjn7mf4zljqzcc4

Data Compression [chapter]

Shashi Shekhar, Hui Xiong
2008 Encyclopedia of GIS  
The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density.  ...  Data compression has important application in the areas of file storage and distributed systems.  ...  the umbrella of data compression.  ... 
doi:10.1007/978-0-387-35973-1_239 fatcat:n2zi47ejyfecdfvmk3b2d6cxgi

Critical Data Compression [article]

John Scoville
2011 arXiv   pre-print
This results in compressed data similar to the original.  ...  A new approach to data compression is developed and applied to multimedia content.  ...  The entropy limit for data compression established by Shannon applies to the exact ('lossless') compression of any type of data.  ... 
arXiv:1112.5493v1 fatcat:m5g6gm6qc5bjrk2uslhekcz46i

Bicriteria data compression [chapter]

Andrea Farruggia, Paolo Ferragina, Antonio Frangioni, Rossano Venturini
2013 Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms  
We address this goal in three stages: (i) we introduce the novel Bicriteria LZ77-Parsing problem which formalizes in a principled way what data-compressors have traditionally approached by means of heuristics  ...  The goal is to determine an LZ77 parsing which minimizes the space occupancy in bits of the compressed file, provided that the decompression time is bounded by T.  ...  Unfortunately these bounds are unacceptable in practice, because n² ≈ 2⁶⁴ just for one GB of data to be compressed.  ... 
doi:10.1137/1.9781611973402.115 dblp:conf/soda/FarruggiaFFV14 fatcat:yo2tu5lte5cejmprgdjuhzp3ui
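The greedy flavor of LZ77 parsing that such bicriteria schemes refine can be sketched in a few lines. The Python below is a didactic, naive O(n²) parser and decoder, not the shortest-path bicriteria algorithm from the paper above; all names are illustrative.

```python
# Greedy LZ77 parse: at each position emit the longest match into the
# already-seen prefix as an (offset, length) pair, else a literal byte.
def lz77_parse(data: bytes):
    phrases, i, n = [], 0, len(data)
    while i < n:
        best_len, best_off = 0, 0
        for j in range(i):  # naive search over all earlier start positions
            length = 0
            while i + length < n and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len > 0:
            phrases.append((best_off, best_len))
            i += best_len
        else:
            phrases.append(data[i:i + 1])  # literal
            i += 1
    return phrases

def lz77_decode(phrases) -> bytes:
    out = bytearray()
    for p in phrases:
        if isinstance(p, bytes):
            out += p
        else:
            off, length = p
            for _ in range(length):       # byte-wise copy handles
                out.append(out[-off])     # self-overlapping matches
    return bytes(out)
```

A production parser would use hash chains or a suffix structure for the match search; the bicriteria parser instead models all parsings as paths in a weighted DAG and optimizes space under a time bound.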

Image data compression

Mark A. Goldberg
1997 Journal of digital imaging  
The critical question is whether lossy data compression is clinically acceptable.  ...  The practical implications of data compression and the important considerations in choosing a compression scheme will also be discussed.  ... 
doi:10.1007/bf03168640 pmid:9268822 pmcid:PMC3452831 fatcat:ebpapcfw5reshf6rg4db5366qi

Bicriteria data compression [article]

Andrea Farruggia, Paolo Ferragina, Antonio Frangioni, Rossano Venturini
2013 arXiv   pre-print
compressors which achieve effective compression ratio and very efficient decompression speed.  ...  advent of massive datasets (and the consequent design of high-performing distributed storage systems) have reignited the interest of the scientific and engineering community towards the design of lossless data  ...  Unfortunately these bounds are unacceptable in practice, because n² ≈ 2⁶⁴ just for one GB of data to be compressed.  ... 
arXiv:1307.3872v1 fatcat:o4ju7xl62vg6toombavk47term

Data Compression for Climate Data

2016 Supercomputing Frontiers and Innovations  
Due to its convenience, the inverse compression ratio, that is, 1 divided by the compression ratio, will also be used at some points in the paper; it indicates the fraction to which data can be compressed  ...  One promising approach is to reduce the amount of data that is stored. In this paper, we take a look at the impact of compression on performance and costs of high performance systems.  ...  Supercomputing Frontiers and Innovations We thank Intel Corporation for funding our Intel Parallel Computing Center to integrate enhanced adaptive compression into Lustre.  ... 
doi:10.14529/jsfi160105 fatcat:tvlgjmcvsjettoed34axwzd2ae
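The two ratio conventions mentioned in the abstract are easy to illustrate. A small Python sketch using zlib as a stand-in compressor (the paper's Lustre/adaptive-compression setup is not modeled here):

```python
import zlib

def ratios(data: bytes):
    """Return (compression ratio, inverse compression ratio)."""
    compressed = zlib.compress(data)
    cr = len(data) / len(compressed)   # compression ratio: original / compressed
    inv = len(compressed) / len(data)  # inverse ratio: fraction data shrinks to
    return cr, inv
```

By construction the two values multiply to 1; highly redundant records, like repetitive climate-model output, give a large ratio and a small inverse ratio.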

Data compression for sequencing data

Sebastian Deorowicz, Szymon Grabowski
2013 Algorithms for Molecular Biology  
Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression.  ...  Finally, we go back to the question "why compression" and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology.  ...  Data compression in brief Compression techniques are the traditional means of handling huge data.  ... 
doi:10.1186/1748-7188-8-25 pmid:24252160 pmcid:PMC3868316 fatcat:tywll675gzcyblyammlkl62zmy

Data compression in cosmology: A compressed likelihood for Planck data

Heather Prince, Jo Dunkley
2019 Physical Review D  
We apply the massively optimized parameter estimation and data compression technique (MOPED) to the public Planck 2015 temperature likelihood, reducing the dimensions of the data space to one number per  ...  We present CosMOPED, a lightweight and convenient compressed likelihood code implemented in Python.  ...  These compressed data can then be used for parameter inference. Tegmark et al.  ... 
doi:10.1103/physrevd.100.083502 fatcat:zxeczbpvebggpgsurzswj4nkz4

Memory-efficient sensor data compression

Yury Vladimirovich Shevchuk, Sergey Anatoljevich Romanenko
2022 Program systems theory and applications  
We treat scalar data compression in sensor network nodes in streaming mode (compressing data points as they arrive, no pre-compression buffering).  ...  We provide a comparison of known and experimental compression algorithms on 75 sensor data sources.  ...  The large maximum compression ratios are achieved for constant data sources thanks to the streaming compression mode.  ... 
doi:10.25209/2079-3316-2022-13-2-35-63 fatcat:yddgco3jcbcmveyya474i4lb5a
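A common way to realize the streaming, bufferless mode described above is delta coding with variable-length integers. The sketch below is illustrative only (it assumes 64-bit integer samples and is not one of the paper's algorithms); it also shows why constant sources reach the largest ratios — every sample after the first costs one byte.

```python
def zigzag(n: int) -> int:
    # interleave signed deltas onto non-negative ints (assumes 64-bit range)
    return (n << 1) ^ (n >> 63)

def varint(n: int) -> bytes:
    # base-128 encoding: 7 payload bits per byte, high bit = "more follows"
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

class StreamingDeltaEncoder:
    """Emit each sample as the varint of its zigzag-coded delta,
    one point at a time with no pre-compression buffering."""
    def __init__(self):
        self.prev = 0
    def encode(self, sample: int) -> bytes:
        delta = sample - self.prev
        self.prev = sample
        return varint(zigzag(delta))
```

For a constant sensor every delta is zero, so each point encodes to the single byte 0x00.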

Mixing Strategies in Data Compression

Christopher Mattern
2012 2012 Data Compression Conference  
We propose geometric weighting as a novel method to combine multiple models in data compression. Our results reveal the rationale behind PAQ-weighting and generalize it to a non-binary alphabet.  ...  All of these algorithms belong to the class of statistical data compression algorithms, which share a common structure: The compressor consists of a model and a coder; and it processes the data (a string  ...  Combining multiple models in data compression is highly successful in practice, but more research in this area is needed.  ... 
doi:10.1109/dcc.2012.40 dblp:conf/dcc/Mattern12 fatcat:lumahgcpcrbtjctnozsfg6rizy
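The geometric weighting idea — mixing model probabilities multiplicatively, p(x) ∝ ∏ᵢ pᵢ(x)^wᵢ, then renormalizing over the alphabet — can be sketched directly. This toy Python version over a shared alphabet is illustrative only, not the paper's PAQ-style online weight update:

```python
import math

def geometric_mix(dists, weights):
    # p(x) proportional to the weighted geometric mean of the model
    # probabilities, renormalized over the shared alphabet
    scores = {x: math.exp(sum(w * math.log(d[x]) for d, w in zip(dists, weights)))
              for x in dists[0]}
    z = sum(scores.values())
    return {x: s / z for x, s in scores.items()}
```

With equal weights this reduces to a normalized geometric mean, so a symbol favored by one model and treated as neutral by another still ends up favored in the mixture.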

Pointwise redundancy in lossy data compression and universal lossy data compression

I. Kontoyiannis
2000 IEEE Transactions on Information Theory  
We characterize the achievable pointwise redundancy rates for lossy data compression at a fixed distortion level.  ...  Our approach is based on showing that the compression performance of an arbitrary sequence of codes is essentially bounded below by the performance of Shannon's random code.  ...  Introduction Broadly speaking, the objective of lossy data compression is to find efficient approximate representations for relatively large amounts of data.  ... 
doi:10.1109/18.817514 fatcat:nj4tycao3nahnejyxtl3ucaxum

Compression as Data Transformation

Kiem-Phong Vo
2007 2007 Data Compression Conference (DCC'07)  
Conventional compression techniques exploit general redundancy features in data to compress them.  ...  For example, Huffman or Lempel-Ziv techniques compress data by statistical modeling or string matching, while the Burrows-Wheeler Transform simply sorts data by context to improve compressibility.  ... 
doi:10.1109/dcc.2007.23 dblp:conf/dcc/Vo07 fatcat:d5hlsnkrt5aerap4a42oqtj67q
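The context-sorting behavior of the Burrows-Wheeler Transform mentioned in the abstract can be shown in a few lines. A naive rotation-sort sketch (real implementations build a suffix array instead of materializing all rotations):

```python
def bwt(s: str, sentinel: str = "\0") -> str:
    # Burrows-Wheeler Transform: append a unique sentinel, sort all
    # rotations lexicographically, and take the last column. Sorting
    # groups characters that share a right-context, so the output tends
    # to contain long runs that simple local models compress well.
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)
```

For "banana" the transform yields "annb\0aa": the three a's that precede n/n/sentinel cluster together, which is exactly the compressibility gain the abstract refers to.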

Xampling: Analog Data Compression

Moshe Mishali, Yonina C. Eldar
2010 2010 Data Compression Conference  
This allows compression together with the sampling stage.  ...  In order to break through the Nyquist barrier so as to compress the signals in the sampling process, one has to combine classic methods from sampling theory together with recent developments in compressed  ...  Data compression exploits redundancy in the input signal in order to reduce the storage volume of the signal.  ... 
doi:10.1109/dcc.2010.39 dblp:conf/dcc/MishaliE10 fatcat:5qt2ykzxdbasreyur72tvwxnce

Pattern-Based Data Compression [chapter]

Ángel Kuri, José Galaviz
2004 Lecture Notes in Computer Science  
Most modern lossless data compression techniques used today are based on dictionaries.  ...  If some string of data being compressed matches a portion previously seen, then that string is included in the dictionary and its reference is included every time it appears.  ...  By identifying frequent patterns we can proceed to include such patterns in a dictionary, achieving data compression. 2 Pattern-based Data Compression.  ... 
doi:10.1007/978-3-540-24694-7_1 fatcat:5yultvkfmfgi7fanf7d2vjbrty
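The dictionary-growing scheme the abstract describes — add newly seen strings to a dictionary and emit a reference whenever one recurs — is the LZ78/LZW family. A minimal LZW sketch (illustrative: byte-oriented, unbounded dictionary, codes left as plain integers):

```python
def lzw_compress(data: bytes):
    # Dictionary starts with all single bytes; emit the index of the
    # longest known prefix, then extend the dictionary by one symbol.
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = bytearray(w)
    for k in codes[1:]:
        # k may reference the entry being built (the classic cScSc case)
        entry = dictionary[k] if k in dictionary else w + w[:1]
        out += entry
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return bytes(out)
```

On repetitive input the code stream is shorter than the input because recurring patterns collapse to single dictionary references.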