
Image compression

About: Image compression is a research topic. Over its lifetime, 23,088 publications have been published within this topic, receiving 369,162 citations.


Papers

Journal ArticleDOI: 10.1109/TIP.2003.819861
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.


Topics: Image quality (61%), Subjective video quality (56%), Human visual system model (56%)

30,333 Citations
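The structural similarity index described in the abstract compares luminance, contrast, and structure between a reference and a distorted image. The sketch below illustrates the core formula using whole-image statistics; note that the published index is computed over a sliding 11×11 Gaussian window and then averaged, so this global variant is an illustration of the idea, not the reference implementation.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified, global-statistics SSIM between two images.

    The paper's index is computed locally over sliding windows and
    averaged; here we use whole-image statistics to keep the formula
    visible. C1 and C2 are the stabilizing constants from the paper.
    """
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den
```

Identical images score exactly 1.0; any luminance shift or structural change pulls the score below 1, which is what makes the index usable for ranking compressed images against a reference.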


Journal ArticleDOI: 10.1109/TCOM.1983.1095851
Peter J. Burt, Edward H. Adelson
Abstract: We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a lowpass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.


Topics: Image compression (65%), Image processing (65%), Image texture (65%)

6,550 Citations
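The loop the abstract describes (subtract a low-pass copy, keep the low-entropy difference image, recurse on the low-pass image) can be sketched as follows. This is a minimal illustration: a 2×2 box average stands in for the paper's 5-tap generating kernel, and quantization of the difference images is omitted.

```python
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Sketch of the Burt-Adelson pyramid. Each level stores the
    difference between the image and an upsampled low-pass copy; the
    final entry is the coarsest low-pass residual."""
    current = np.asarray(img, dtype=np.float64)
    pyramid = []
    for _ in range(levels):
        h2 = current.shape[0] // 2 * 2
        w2 = current.shape[1] // 2 * 2
        current = current[:h2, :w2]
        # low-pass + downsample: average each 2x2 block
        low = current.reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))
        # upsample the low-pass copy and subtract: the difference image
        # has low variance and entropy, hence the net data compression
        up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
        pyramid.append(current - up)
        current = low
    pyramid.append(current)
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid: upsample and add back each difference image."""
    current = pyramid[-1]
    for diff in reversed(pyramid[:-1]):
        up = np.repeat(np.repeat(current, 2, axis=0), 2, axis=1)
        current = up + diff
    return current
```

Without quantization the pyramid is exactly invertible; the compression in the actual codec comes from quantizing the low-variance difference images and storing the low-pass residual at reduced sample density.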


Journal ArticleDOI: 10.1109/76.499834
Amir Said, William A. Pearlman
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.


Topics: Set partitioning in hierarchical trees (67%), Data compression (56%), Entropy encoding (56%)

5,812 Citations
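One of the principles the abstract names, ordered bit-plane transmission, can be shown in isolation. The sketch below sends coefficient magnitudes most-significant bit plane first, so truncating the stream at any plane still yields a valid, progressively refined reconstruction. It deliberately omits the set-partitioning trees and zerotree exploitation that give SPIHT its efficiency; this is only the embedded-ordering idea.

```python
import numpy as np

def bitplane_encode(coeffs, n_planes=4):
    """Encode signed integer coefficients as sign info plus a list of
    bit planes, most significant first."""
    c = np.asarray(coeffs, dtype=np.int64)
    signs = np.sign(c)
    mags = np.abs(c)
    top = int(np.floor(np.log2(mags.max()))) if mags.max() > 0 else 0
    planes = []
    for n in range(top, max(top - n_planes, -1), -1):
        planes.append(((mags >> n) & 1).tolist())  # one bit per coefficient
    return signs, top, planes

def bitplane_decode(signs, top, planes):
    """Rebuild magnitudes from however many planes were received."""
    mags = np.zeros(len(signs), dtype=np.int64)
    n = top
    for plane in planes:
        mags |= np.asarray(plane, dtype=np.int64) << n
        n -= 1
    return signs * mags
```

Decoding all planes is lossless; decoding a prefix zeroes the low-order bits, which is exactly the embedded property that lets an EZW/SPIHT stream be cut at any target bit rate.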


Open access Book
David Taubman, Michael W. Marcellin
30 Nov 2001
Abstract: This is nothing less than a totally essential reference for engineers and researchers in any field of work that involves the use of compressed imagery. Beginning with a thorough and up-to-date overview of the fundamentals of image compression, the authors move on to provide a complete description of the JPEG2000 standard. They then devote space to the implementation and exploitation of that standard. The final section describes other key image compression systems. This work has specific applications for those involved in the development of software and hardware solutions for multimedia, internet, and medical imaging applications.


Topics: Image compression (56%)

2,938 Citations


Open access Journal ArticleDOI: 10.1109/83.862633
S.G. Chang, Bin Yu, Martin Vetterli
Abstract: The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, and thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only if bitrate were an additional concern to denoising.


  • Fig. 5. Thresholding for the Laplacian prior: (a) the optimal soft-threshold, the BayesShrink threshold, and the optimal hard-threshold compared against the standard deviation; (b) their corresponding risks.
  • Fig. 6. Thresholding for the generalized Gaussian prior: (a) the closed-form approximation compared with the optimal threshold for several shape parameters, with the standard deviation on the horizontal axis; (b) the optimal risks and the approximation.
  • Fig. 7. Schematic for compression-based denoising. Denoising is achieved in the wavelet transform domain by lossy compression, which involves the design of parameters relating to the zero-zone width, the number of quantization levels, and the quantization bin width.
  • Fig. 8. Illustrating the quantizer.
  • Fig. 10. Comparing the performance of the various methods on goldhill with noise standard deviation 20. (a) Original. (b) Noisy image. (c) OracleShrink. (d) SureShrink. (e) BayesShrink. (f) BayesShrink followed by MDLQ compression.
  • (+6 more figures)

Topics: Image compression (59%), Data compression (57%), Lossy compression (57%)

2,707 Citations
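The abstract's adaptive threshold has a simple closed form: for each subband, T = sigma^2 / sigma_x, where sigma is the noise standard deviation and sigma_x the estimated signal standard deviation under the generalized Gaussian prior. A minimal sketch of that threshold and the soft-thresholding rule follows; the wavelet transform itself is omitted, and the median-based noise estimate is the standard robust estimator applied to the finest detail subband.

```python
import numpy as np

def bayes_shrink_threshold(detail_coeffs, noise_sigma=None):
    """BayesShrink subband threshold T = sigma^2 / sigma_x.

    sigma_x is estimated from the observed coefficient power minus the
    noise power; when the subband looks like pure noise, return a
    threshold large enough to zero the whole subband.
    """
    d = np.asarray(detail_coeffs, dtype=np.float64).ravel()
    if noise_sigma is None:
        # robust noise estimate from the finest detail subband
        noise_sigma = np.median(np.abs(d)) / 0.6745
    sigma_y2 = np.mean(d ** 2)  # observed coefficient power
    sigma_x = np.sqrt(max(sigma_y2 - noise_sigma ** 2, 0.0))
    if sigma_x == 0.0:
        return np.abs(d).max()
    return noise_sigma ** 2 / sigma_x

def soft_threshold(coeffs, t):
    """Soft-thresholding: shrink every coefficient toward zero by t."""
    c = np.asarray(coeffs, dtype=np.float64)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

The paper's analogy then follows directly: the zero-zone of a compression quantizer plays the same role as t in soft_threshold, which is why a coder with a suitably chosen zero-zone denoises as it compresses.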


Performance
Metrics
No. of papers in the topic in previous years
Year | Papers
2022 | 12
2021 | 555
2020 | 653
2019 | 765
2018 | 674
2017 | 758

Top Attributes


Topic's top 5 most impactful authors

  • Chin-Chen Chang: 65 papers, 1.2K citations
  • C.-C. Jay Kuo: 40 papers, 477 citations
  • Touradj Ebrahimi: 38 papers, 2.5K citations
  • Vladimir V. Lukin: 37 papers, 523 citations
  • Feng Wu: 28 papers, 773 citations

Network Information
Related Topics (5)

  • Data compression: 43.6K papers, 756.5K citations (94% related)
  • Feature detection (computer vision): 25.6K papers, 516.7K citations (93% related)
  • Image restoration: 23.4K papers, 509.5K citations (93% related)
  • Edge detection: 25.5K papers, 486.4K citations (92% related)
  • Feature extraction: 111.8K papers, 2.1M citations (92% related)