Topic

Data compression ratio

About: Data compression ratio is a research topic. Over its lifetime, 7,635 publications have been published within this topic, receiving 96,183 citations.
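
For readers new to the topic: compression ratio is conventionally defined as uncompressed size divided by compressed size, so larger is better (some papers report the inverse, compressed/uncompressed). A minimal sketch, using Python's standard-library zlib as a stand-in compressor; the function name and test inputs are illustrative choices:

```python
import zlib

def compression_ratio(data: bytes) -> float:
    """Uncompressed size divided by compressed size (larger is better).
    Note that some papers report the inverse, compressed/uncompressed."""
    compressed = zlib.compress(data, level=9)
    return len(data) / len(compressed)

# Repetitive input compresses far better than short, varied text.
print(compression_ratio(b"abc" * 1000))   # high ratio
print(compression_ratio(b"The quick brown fox jumps over the lazy dog. "))
# low ratio; on short inputs, format overhead can even push it below 1
```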


Papers
Journal Article
TL;DR: The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
Abstract: A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.

5,844 citations
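
To make the sequential, sliding-window parsing idea concrete, here is a greedy LZ77-style factorization sketch. It is a simplified illustration, not the paper's exact code construction; the function names, the window and match-length limits, and the (offset, length, literal) triple format are all arbitrary choices:

```python
def lz77_parse(data: bytes, window: int = 4096, max_len: int = 255):
    """Greedy sliding-window factorization into (offset, length, literal) triples."""
    i, out = 0, []
    while i < len(data):
        best_off = best_len = 0
        for j in range(max(0, i - window), i):
            k = 0
            # Matches may run past position i (overlap into the lookahead).
            while i + k < len(data) and k < max_len and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        lit = data[i + best_len] if i + best_len < len(data) else None
        out.append((best_off, best_len, lit))
        i += best_len + 1
    return out

def lz77_unparse(triples) -> bytes:
    """Reverse the factorization, byte by byte (handles overlapping copies)."""
    out = bytearray()
    for off, length, lit in triples:
        for _ in range(length):
            out.append(out[-off])
        if lit is not None:
            out.append(lit)
    return bytes(out)

msg = b"abracadabra, abracadabra"
assert lz77_unparse(lz77_parse(msg)) == msg
```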

01 Jan 1994
TL;DR: A block-sorting, lossless data compression algorithm is described, together with an implementation of that algorithm, and the implementation's performance is compared with widely available data compressors running on the same hardware.
Abstract: We describe a block-sorting, lossless data compression algorithm, and our implementation of that algorithm. We compare the performance of our implementation with widely available data compressors running on the same hardware. The algorithm works by applying a reversible transformation to a block of input …

2,753 citations
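
A naive sketch of the block-sorting (Burrows-Wheeler) transform at the heart of the algorithm. Production implementations use suffix arrays rather than sorting explicit rotations, and follow the transform with move-to-front and entropy coding; this sketch, with function names of my choosing, shows only the reversible transformation itself:

```python
def bwt(s: bytes) -> tuple[bytes, int]:
    """Sort all rotations of s; return (last column, row index of the original).
    O(n^2 log n) for clarity; real implementations use suffix arrays."""
    n = len(s)
    rotations = sorted(range(n), key=lambda i: s[i:] + s[:i])
    last = bytes(s[(i - 1) % n] for i in rotations)
    return last, rotations.index(0)

def ibwt(last: bytes, idx: int) -> bytes:
    """Invert the transform by repeatedly prepending and re-sorting columns."""
    n = len(last)
    table = [b""] * n
    for _ in range(n):
        table = sorted(last[i:i + 1] + table[i] for i in range(n))
    return table[idx]

data = b"banana"
last, idx = bwt(data)          # last == b"nnbaaa", idx == 3
assert ibwt(last, idx) == data  # the transformation is reversible
```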

Journal Article
TL;DR: LOCO-I is a low-complexity projection of the universal context modeling paradigm, matching its modeling unit to a simple coding unit; its simple fixed context model approaches the capability of more complex universal techniques for capturing high-order dependencies.
Abstract: LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar to or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.

1,668 citations
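
The Golomb-type codes mentioned above can be illustrated with their power-of-two special case (Rice codes): a unary quotient followed by k remainder bits. This sketch shows only the code family, not JPEG-LS's context modeling, its residual remapping, or its adaptive rule for choosing k; the function names are mine:

```python
def rice_encode(n: int, k: int) -> str:
    """Rice code (Golomb code with m = 2**k): unary quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                      # length of the unary prefix
    r = int(bits[q + 1 : q + 1 + k], 2) if k else 0
    return (q << k) | r

# Small values get short codewords; JPEG-LS adapts k to the local
# magnitude of (remapped) prediction residuals.
for n in (0, 1, 5, 20):
    code = rice_encode(n, k=2)
    assert rice_decode(code, k=2) == n
    print(n, code)
```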

Journal Article
TL;DR: If pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression.
Abstract: A novel theory is introduced for analyzing image compression methods that are based on compression of wavelet decompositions. This theory precisely relates (a) the rate of decay in the error between the original image and the compressed image as the size of the compressed image representation increases (i.e., as the amount of compression decreases) to (b) the smoothness of the image in certain smoothness classes called Besov spaces. Within this theory, the error incurred by the quantization of wavelet transform coefficients is explained. Several compression algorithms based on piecewise constant approximations are analyzed in some detail. It is shown that, if pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression. Based on previous experimental research it is argued that in most instances the error incurred in image compression should be measured in the integral sense instead of the mean-square sense.

1,038 citations
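
The nonlinear approximation idea behind the theory (keep the largest-magnitude wavelet coefficients, discard the rest, and study how the error decays) can be sketched with a one-level Haar transform. This is an illustrative toy, not the paper's analysis: the 1-D test signal, the coefficient budget, the single decomposition level, and the use of an L2 error measure are all arbitrary choices here:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar transform along the last axis."""
    a = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)   # averages (coarse)
    d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)   # details (fine)
    return np.concatenate([a, d], axis=-1)

def ihaar_1d(y):
    """Invert one Haar level."""
    n = y.shape[-1] // 2
    a, d = y[..., :n], y[..., n:]
    x = np.empty(y.shape)
    x[..., ::2] = (a + d) / np.sqrt(2)
    x[..., 1::2] = (a - d) / np.sqrt(2)
    return x

# "Compress" by keeping only the largest-magnitude coefficients (nonlinear
# approximation); the decay of the error as the budget grows is what the
# paper ties to Besov-space smoothness.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(256))          # a rough 1-D stand-in signal
c = haar_1d(x)
keep = 32
thresh = np.sort(np.abs(c))[-keep]
c_small = np.where(np.abs(c) >= thresh, c, 0.0)  # zero out small coefficients
err = np.linalg.norm(x - ihaar_1d(c_small))
print(f"kept {keep}/{len(c)} coefficients, L2 error {err:.3f}")
```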


Network Information
Related Topics (5)
Topic                          Papers    Citations    Relatedness
Feature extraction             111.8K    2.1M         82%
Image processing               229.9K    3.5M         82%
Image segmentation             79.6K     1.8M         82%
Convolutional neural network   74.7K     2M           81%
Deep learning                  79.8K     2.1M         81%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    28
2022    101
2021    149
2020    185
2019    215
2018    195