Proceedings ArticleDOI

A survey on lossless image compression methods

TL;DR: This work provides a detailed analysis of state-of-the-art algorithms used for lossless compression of images and, based on that analysis, gives future research directions to new researchers.
Abstract: In this paper we describe some important state-of-the-art algorithms used for lossless compression of images. These algorithms are broadly classified as prediction-based methods and transform-based methods. The motivation behind this work is to provide a detailed analysis of such algorithms and, based on that analysis, to give future research directions to new researchers.
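
For readers new to the area, the prediction-based idea the survey classifies can be illustrated with a minimal Python sketch (not taken from the paper; the left-neighbour predictor used here is deliberately simplistic): the encoder predicts each pixel from already-coded pixels and stores only the residual, which the decoder inverts exactly.

```python
# Minimal sketch (not from the paper) of prediction-based lossless coding: predict
# each pixel from an already-decoded causal neighbour, entropy code only the
# residual, and let the decoder invert the prediction exactly.
import numpy as np

def predict_left(img: np.ndarray) -> np.ndarray:
    """Predict every pixel by its left neighbour; the first column is predicted as 0."""
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]
    return pred

def encode(img: np.ndarray) -> np.ndarray:
    # These residuals are what an entropy coder (e.g. arithmetic coding) would compress.
    return img.astype(np.int32) - predict_left(img).astype(np.int32)

def decode(residual: np.ndarray) -> np.ndarray:
    out = np.zeros_like(residual)
    for c in range(residual.shape[1]):
        left = out[:, c - 1] if c > 0 else 0
        out[:, c] = left + residual[:, c]
    return out

img = np.random.randint(0, 256, (4, 5))
assert np.array_equal(decode(encode(img)), img)  # lossless round trip
```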
Citations
Journal ArticleDOI
TL;DR: This survey characterizes the benefits and shortcomings of recent image compression algorithms for WMSNs, provides an open research issue for each compression method, and discusses its potential for WMSNs.

86 citations


Cites methods from "A survey on lossless image compress..."

  • ...Figure 1 Classification of image compression techniques: prediction-based techniques [12], transform-based techniques and multi-resolution-based techniques [13]....


Proceedings Article
01 Jan 2000
TL;DR: In this article, a single-pass adaptive algorithm that uses context classification and multiple linear predictors, locally optimized on a pixel-by-pixel basis, is proposed to obtain a compression ratio comparable to CALIC while improving on some images.
Abstract: In the past years, there have been several improvements in lossless image compression. All the recently proposed state-of-the-art lossless image compressors can be roughly divided into two categories: single- and double-pass compressors. Linear prediction is rarely used in the first category, while TMW [7], a state-of-the-art double-pass image compressor, relies on linear prediction for its performance. We propose a single-pass adaptive algorithm that uses context classification and multiple linear predictors, locally optimized on a pixel-by-pixel basis. Locality is also exploited in the entropy coding of the prediction error. The results we obtained on a test set of several standard images are encouraging. On average, our ALPC obtains a compression ratio comparable to CALIC [20] while improving on some images.
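
As a rough illustration of the idea behind locally optimized linear prediction (a simplified sketch, not ALPC's actual context classification; the window size and neighbour set are assumptions), predictor weights can be fitted by least squares over a causal window of already-coded pixels:

```python
# Simplified sketch of locally optimized linear prediction: fit weights by least
# squares over already-coded pixels near (r, c), then predict the current pixel
# from its West, North and North-West neighbours.
import numpy as np

def causal_features(img, r, c):
    # West, North and North-West neighbours of pixel (r, c).
    return np.array([img[r, c - 1], img[r - 1, c], img[r - 1, c - 1]], dtype=float)

def adaptive_predict(img: np.ndarray, r: int, c: int, win: int = 6) -> float:
    X, y = [], []
    for rr in range(max(1, r - win), r + 1):
        for cc in range(1, img.shape[1]):
            if rr == r and cc >= c:
                break                       # use only causal (already decoded) pixels
            X.append(causal_features(img, rr, cc))
            y.append(float(img[rr, cc]))
    if len(X) < 3:
        return float(img[r, c - 1])         # too little history: fall back to the West neighbour
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return float(causal_features(img, r, c) @ w)
```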

55 citations

Journal ArticleDOI
TL;DR: This article surveys the state of the art in WAN optimization, also known as WAN acceleration, and illustrates how these acceleration techniques can improve application performance, mitigate the impact of latency and loss, and minimize bandwidth consumption.
Abstract: Applications, deployed over a wide area network (WAN) which may connect across metropolitan, regional or national boundaries, suffer performance degradation owing to unavoidable natural characteristics of WANs such as high latency and high packet loss rate. WAN optimization, also known as WAN acceleration, aims to accelerate a broad range of applications and protocols over a WAN. In this paper, we provide a survey on the state of the art of WAN optimization or WAN acceleration techniques, and illustrate how these acceleration techniques can improve application performance, mitigate the impact of latency and loss, and minimize bandwidth consumption. We begin by reviewing the obstacles in efficiently delivering applications over a WAN. Furthermore, we provide a comprehensive survey of the most recent content delivery acceleration techniques in WANs from the networking and optimization point of view. Finally, we discuss major WAN optimization techniques which have been incorporated in widely deployed WAN acceleration products - multiple optimization techniques are leveraged by a single WAN accelerator to improve application performance in general.

34 citations

Journal ArticleDOI
TL;DR: This paper proposes a new gradient-based tracking and adapting technique that outperforms some existing methods, aiming to design an efficient, highly adaptive predictor that can be incorporated in the modeling step of image compression systems.
Abstract: In lossless image compression, many prediction methods have been proposed to achieve a better compression performance/complexity trade-off. In this paper, we concentrate on some well-known and widely used low-complexity algorithms exploited in many modern compression systems, including MED, GAP, Graham, Ljpeg, DARC, and GBSW. This paper proposes a new gradient-based tracking and adapting technique that outperforms some existing methods. This paper aims to design an efficient, highly adaptive predictor that can be incorporated in the modeling step of image compression systems. This claim is proved by testing the proposed method on a wide variety of images with different characteristics. Six special sets of images, including face, sport, texture, sea, text, and medical, constitute our dataset.
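
For reference, the MED predictor mentioned above (the median edge detector used in JPEG-LS/LOCO-I) is simple enough to show in a few lines; the sketch below assumes the usual naming of the West, North and North-West neighbours as a, b and c:

```python
# Sketch of the MED (median edge detector) predictor from JPEG-LS/LOCO-I, one of
# the low-complexity predictors compared in the paper.
def med_predict(a: int, b: int, c: int) -> int:
    if c >= max(a, b):
        return min(a, b)   # edge detected: pick the neighbour on the other side
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar (gradient) prediction
```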

16 citations


Cites background or methods from "A survey on lossless image compress..."

  • ...Table 1 Summarized information about predictors:

                  Ljpeg   MED   GAP             PRV [14]   DARC       Graham   GBSW            NEW
    NANP          –       3     7               3          3          3        10              19
    NPFP          –       3     4               3          2          2        2               2
    NAV           –       0     2               0          2          2        4               0
    NC            1       3     6               2          0          2        4               11
    L-NL          L       NL    NL              NL         NL         NL       NL              NL
    Switching     No      Yes   Yes             Yes        No         Yes      No              No
    FW-BW         BW      FW    FW              FW         FW         FW       FW              FW
    Adaptivity    None    Low   Moderate–high   Low        Moderate   Low      Moderate–high   High

    NANP = number of all needed pixels, NPFP = number of pixels in final predictor, NAV = number of auxiliary variables, NC = number of clauses, L–NL = linear or non-linear, FW–BW = forward or backward....


  • ...Ljpeg MED GAP PRV [14] DARC Graham GBSW NEW...


  • ...Most predictive data compression techniques include two major steps [14]: Modeling (prediction): In the first step, essential function is prediction of unknown values....


  • ...MED, PRV [14], and Graham classify as low level of adaptivity because they use some specific configuration in some conditions, for example three constant configuration in MED....


Journal ArticleDOI
TL;DR: This article describes several techniques, and their properties, for representing data in the multi-scale cell hierarchy of a discrete global grid system (DGGS) or in the multi-scale hierarchy of a customized wavelet transform (A3H), so as to be applicable to the Digital Earth framework.
Abstract: Digital Earth frameworks deal with data sets of different types collected from various sources. To effectively store, retrieve, and transmit these data sets, efficient multi-scale data representati...

14 citations


Cites methods from "A survey on lossless image compress..."

  • ...Features in LIDAR data sets can be detected and vectorized using different techniques [3, 39, 31, 8], although imagery data alone can also be used to extract vector data sets [31, 9, 47, 10, 30, 5]....


References
Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
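
A toy sketch of the ordered bit-plane (successive refinement) transmission principle behind SPIHT's embedded bit stream is given below; it omits the set-partitioning lists and entropy coding that give SPIHT its efficiency, and the 8-bit-plane assumption is ours:

```python
# Toy illustration of ordered bit-plane transmission: coefficients are refined
# from the most significant bit plane downwards, so decoding any prefix of the
# stream still yields the best approximation available for that budget.
import numpy as np

def bitplane_encode(coeffs: np.ndarray, planes: int = 8):
    signs = np.sign(coeffs)
    mags = np.abs(coeffs).astype(int)
    stream = [((mags >> p) & 1).tolist() for p in reversed(range(planes))]  # MSB plane first
    return signs, stream

def bitplane_decode(signs, stream, planes: int = 8):
    mags = np.zeros(len(signs), dtype=int)
    for i, bits in enumerate(stream):        # stopping early still yields a coarse image
        mags |= np.array(bits) << (planes - 1 - i)
    return signs * mags

coeffs = np.array([93, -41, 7, 0, 120])
signs, stream = bitplane_encode(coeffs)
assert np.array_equal(bitplane_decode(signs, stream), coeffs)
```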

5,890 citations

Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.
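
The zerotree idea in concept (2) can be sketched as a recursive significance test; the parent/child layout below is a hypothetical toy hierarchy, not Shapiro's actual scan order:

```python
# Sketch of the zerotree significance test: a node is the root of a zerotree at
# threshold T if it and all of its descendants across finer scales are
# insignificant (|value| < T), so the whole subtree can be coded with one symbol.
def is_zerotree(tree: dict, node, threshold: float) -> bool:
    value, children = tree[node]             # each entry: (coefficient, child ids)
    if abs(value) >= threshold:
        return False
    return all(is_zerotree(tree, ch, threshold) for ch in children)

# Tiny hypothetical parent/child hierarchy across three scales.
tree = {
    "LL":   (40.0, ["HL1"]),
    "HL1":  (3.0, ["HL2a", "HL2b"]),
    "HL2a": (1.0, []),
    "HL2b": (-2.0, []),
}
print(is_zerotree(tree, "HL1", 4.0))  # True: the whole subtree is below threshold 4
print(is_zerotree(tree, "LL", 4.0))   # False: the root itself is significant
```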

5,559 citations


"A survey on lossless image compress..." refers background in this paper

  • ...Embedded Zero tree Wavelet (EZW) Embedded zero tree (EZW) algorithm is a very simple and effective algorithm [9]....


Journal ArticleDOI
TL;DR: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT), capable of modeling the spatially varying visual masking phenomenon.
Abstract: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.
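
The "optimized truncation" part of EBCOT can be illustrated with a simplified Lagrangian selection of a truncation point per code block; the rate/distortion values below are hypothetical, and real EBCOT restricts candidates to the convex hull of each block's rate-distortion curve:

```python
# Simplified sketch of EBCOT-style optimized truncation: every code block offers
# candidate truncation points (R, D), and for a given Lagrange multiplier lmbda
# the encoder keeps the point that minimizes D + lmbda * R.
def best_truncation(points, lmbda):
    """points: list of (rate_in_bytes, distortion) per candidate truncation point."""
    return min(points, key=lambda rd: rd[1] + lmbda * rd[0])

block_points = [(0, 100.0), (50, 40.0), (120, 15.0), (300, 5.0)]
print(best_truncation(block_points, lmbda=0.2))  # -> (120, 15.0) at this rate-distortion trade-off
```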

1,933 citations

Journal ArticleDOI
TL;DR: It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.
Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed, the JPEG2000. It is not only intended to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards can either not address efficiently or in many cases cannot address at all. Lossless and lossy compression, embedded lossy to lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit-errors and region-of-interest coding, are some representative features. It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.

1,485 citations

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT), capable of modeling the spatially varying visual masking phenomenon.
Abstract: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich feature set, including resolution and SNR scalability together with a random access property. The algorithm has modest complexity and is extremely well suited to applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.

1,479 citations


"A survey on lossless image compress..." refers methods in this paper

  • ...Embedded Block Coding with Optimized Truncation (EBCOT) The EBCOT algorithm [11] uses a wavelet transform to generate the sub band coefficients which are then quantized and coded....
