Topic

Lossless compression

About: Lossless compression is a research topic. Over its lifetime, 13,218 publications have been published within this topic, receiving 199,941 citations.


Papers
Proceedings ArticleDOI
10 Dec 2002
TL;DR: A prediction-based conditional entropy coder which utilizes static portions of the host as side-information improves the compression efficiency, and thus the lossless data embedding capacity.
Abstract: We present a novel reversible (lossless) data hiding (embedding) technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known LSB (least significant bit) modification is proposed as the data embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion, and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes static portions of the host as side-information improves the compression efficiency, and thus the lossless data embedding capacity.

1,126 citations
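The additional operating points on the capacity-distortion curve come from choosing how many low-order bits per sample to overwrite. Below is a minimal NumPy sketch of generalized LSB substitution at level L; it is my own illustration under that reading, not the authors' code, and it omits the reversible step in which the replaced residuals are first compressed and carried inside the payload.

import numpy as np

def glsb_embed(host, payload_bits, L):
    # Replace the L least significant bits of every host sample with payload bits.
    assert payload_bits.size == host.size * L, "payload must fill all L-bit slots"
    packed = payload_bits.reshape(host.size, L) @ (1 << np.arange(L))  # L bits -> one integer per sample
    return ((host >> L) << L) | packed.astype(host.dtype).reshape(host.shape)

def glsb_extract(stego, L):
    # Read the embedded bits back out of the L lowest bit planes.
    vals = (stego & ((1 << L) - 1)).ravel()
    return ((vals[:, None] >> np.arange(L)) & 1).astype(np.uint8).ravel()

host = np.array([[200, 117], [53, 89]], dtype=np.uint16)
payload = np.random.randint(0, 2, host.size * 2, dtype=np.uint8)
stego = glsb_embed(host, payload, L=2)               # embed at level L = 2
assert np.array_equal(glsb_extract(stego, L=2), payload)

Raising L trades distortion for capacity, which is exactly the family of operating points the abstract describes.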

Journal ArticleDOI
TL;DR: CALIC obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature, and it can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics that afflicts direct estimation of conditional error probabilities.
Abstract: We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts.

1,099 citations
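The nonlinear predictor in CALIC is the gradient-adjusted predictor (GAP). A sketch of its core decision rule follows, written from the commonly cited description of the codec (8-bit grayscale assumed; the context formation, quantization, and error-feedback stages are omitted):

def gap_predict(W, N, NW, NE, WW, NN, NNE):
    # Causal neighbours of the current pixel: W = west, N = north, etc.
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # estimated horizontal gradient
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # estimated vertical gradient
    if dv - dh > 80:        # sharp horizontal edge: predict from the west pixel
        return W
    if dh - dv > 80:        # sharp vertical edge: predict from the north pixel
        return N
    pred = (W + N) / 2 + (NE - NW) / 4              # smooth-region prediction
    if dv - dh > 32:        # weak horizontal edge: pull the estimate toward W
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:      # weak vertical edge: pull the estimate toward N
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return int(pred)

The prediction is then corrected by the context-conditioned expected error mentioned in the abstract before the residual is entropy coded.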

Journal ArticleDOI
TL;DR: In this paper, a generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve.
Abstract: We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity.

1,058 citations
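Loosely stated (my paraphrase, not a formula from the paper), the lossless embedding capacity is the raw capacity of the overwritten bits minus the rate spent describing them for exact recovery, which is why better conditional compression directly raises capacity:

% Rough rate balance for compression-based reversible embedding over N samples
% at embedding level L; B are the replaced residuals, S the unaltered host
% samples used as side information. Paraphrase, not a formula from the paper.
C_{\text{lossless}} \approx N L - R(B \mid S), \qquad R(B \mid S) \gtrsim H(B \mid S)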

Journal ArticleDOI
TL;DR: If pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression.
Abstract: A novel theory is introduced for analyzing image compression methods that are based on compression of wavelet decompositions. This theory precisely relates (a) the rate of decay in the error between the original image and the compressed image as the size of the compressed image representation increases (i.e., as the amount of compression decreases) to (b) the smoothness of the image in certain smoothness classes called Besov spaces. Within this theory, the error incurred by the quantization of wavelet transform coefficients is explained. Several compression algorithms based on piecewise constant approximations are analyzed in some detail. It is shown that, if pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression. Based on previous experimental research it is argued that in most instances the error incurred in image compression should be measured in the integral sense instead of the mean-square sense.

1,038 citations
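The flavor of the rate-smoothness relation, stated loosely in the style of classical n-term wavelet approximation theory over Besov spaces (constants and exact hypotheses omitted; consult the paper for the precise statement):

% For an image f on the unit square lying in the Besov space B^{\alpha}_{q}(L^{q})
% with 1/q = \alpha/2 + 1/p, the best n-term wavelet approximation f_n obeys
\| f - f_n \|_{L^{p}} \;\le\; C \, n^{-\alpha/2} \, |f|_{B^{\alpha}_{q}(L^{q})}

The exponent alpha/2 is the decay rate the abstract refers to: smoother images (larger alpha) admit faster error decay as the compressed representation grows.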

Journal ArticleDOI
TL;DR: In this article, the authors present an intensive discussion of two distributed source coding (DSC) techniques, namely Slepian-Wolf coding and Wyner-Ziv coding, building on the result that separate encoding is as efficient as joint encoding for lossless compression of correlated sources.
Abstract: In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies; it relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies for sensor networks is distributed source coding (DSC), which refers to the compression of multiple correlated sensor outputs by sensors that do not communicate with each other. DSC allows a many-to-one video coding paradigm that effectively swaps encoder-decoder complexity with respect to conventional video coding, thereby representing a fundamental concept shift in video processing. This article presents an intensive discussion of two DSC techniques, namely Slepian-Wolf coding and Wyner-Ziv coding. Slepian and Wolf showed theoretically that separate encoding is as efficient as joint encoding for lossless compression of correlated sources.

819 citations
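For reference, the Slepian-Wolf theorem makes the abstract's claim precise: two correlated sources X and Y can be encoded separately and decoded jointly without loss at any rate pair inside the region

% Slepian-Wolf admissible rate region for lossless distributed coding
R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y)

whose sum-rate bound H(X, Y) matches what a joint encoder would need. Wyner-Ziv coding extends this to lossy coding with side information available only at the decoder.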


Network Information

Related Topics (5)

Feature extraction: 111.8K papers, 2.1M citations, 90% related
Image processing: 229.9K papers, 3.5M citations, 89% related
Convolutional neural network: 74.7K papers, 2M citations, 87% related
Deep learning: 79.8K papers, 2.1M citations, 86% related
Artificial neural network: 207K papers, 4.5M citations, 85% related
Performance

Metrics: number of papers in the topic in previous years

Year    Papers
2023    299
2022    673
2021    372
2020    435
2019    511
2018    500