Author

# P. Mathieu

Bio: P. Mathieu is an academic researcher at the Centre national de la recherche scientifique (CNRS). The author has contributed to research on wavelets and the wavelet transform, has an h-index of 6, and has co-authored 11 publications receiving 4,261 citations.

##### Papers


TL;DR: A scheme for image compression that takes into account psychovisual features in both the space and frequency domains is proposed, and the wavelet transform is shown to be particularly well adapted to progressive transmission.

Abstract: A scheme for image compression that takes into account psychovisual features in both the space and frequency domains is proposed. This method involves two steps. First, a wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps constant the number of pixels required to describe the image. Second, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise-shaping bit-allocation procedure is proposed which assumes that details at high resolution are less visible to the human eye. To allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.

3,925 citations
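The pyramidal decomposition described above, a separable transform along rows and columns that leaves the total coefficient count equal to the pixel count, can be sketched in a few lines of numpy. This is an illustrative one-level orthonormal Haar analysis, not the biorthogonal filter bank used in the paper; function and variable names are made up for the example.

```python
import numpy as np

def haar2d_level(img):
    """One level of a separable 2D analysis: lowpass/highpass along rows,
    then along columns. Orthonormal Haar is used here for simplicity; the
    paper uses biorthogonal filters, but the bookkeeping is the same."""
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row highpass
    ll = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)      # approximation
    lh = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)      # horizontal detail
    hl = (d[0::2, :] + d[1::2, :]) / np.sqrt(2)      # vertical detail
    hh = (d[0::2, :] - d[1::2, :]) / np.sqrt(2)      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(64.0).reshape(8, 8)
ll, lh, hl, hh = haar2d_level(img)
# the subbands together use exactly as many coefficients as the image has pixels
assert ll.size + lh.size + hl.size + hh.size == img.size
```

Recursing on the `ll` subband yields the multiscale pyramid; an orthonormal analysis also preserves energy, which is what makes bit allocation across subbands tractable.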


TL;DR: This work introduces a new image coding scheme using lattice vector quantization of wavelet coefficients and investigates the case of Laplacian sources, where surfaces of equal probability are spheres (pyramids) for the L^1 metric, for arbitrary lattices.

Abstract: This work introduces a new image coding scheme using lattice vector quantization. The proposed method involves two steps: a biorthogonal wavelet transform of the image, and lattice vector quantization of the wavelet coefficients. To obtain a compromise between minimum distortion and bit rate, the lattice must be suitably truncated and scaled. To meet this goal, one needs to know how many lattice points lie within the truncated area. We investigate the case of Laplacian sources, where surfaces of equal probability are spheres for the L^1 metric (pyramids), for arbitrary lattices. We give explicit generating functions for the codebook sizes of the most useful lattices, such as Z^n, D_n, E_8, and Λ_16.

163 citations
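The counting problem mentioned in the abstract (how many lattice points lie on an L^1 sphere, i.e. a pyramid, of a given radius) has a simple closed form for the Z^n lattice, which a brute-force scan can confirm. This is a minimal sketch for Z^n only; the paper's generating functions cover the other lattices as well, and the function names here are illustrative.

```python
from math import comb
from itertools import product

def pyramid_points(n, r):
    """Number of points of Z^n with L1 norm exactly r: choose k nonzero
    coordinates, a sign for each, and a composition of r into k parts."""
    return sum(2**k * comb(n, k) * comb(r - 1, k - 1)
               for k in range(1, min(n, r) + 1))

def brute(n, r):
    """Exhaustive count over the cube [-r, r]^n, for checking."""
    return sum(1 for x in product(range(-r, r + 1), repeat=n)
               if sum(abs(v) for v in x) == r)

assert pyramid_points(2, 1) == 4          # the four points ±e1, ±e2
assert pyramid_points(3, 4) == brute(3, 4)
```

Summing these counts over radii up to a truncation limit gives the codebook size the abstract refers to, which in turn fixes the bit rate of the quantizer.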


03 Apr 1990

TL;DR: A two-step scheme for image compression that takes into account psychovisual features in the space and frequency domains is proposed, along with a progressive transmission scheme to which the wavelet transform is shown to be particularly well adapted.

Abstract: A two-step scheme for image compression that takes into account psychovisual features in the space and frequency domains is proposed. A wavelet transform is first used to obtain a set of orthonormal subclasses of images; the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps the number of pixels required to describe the image constant. Second, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise-shaping bit-allocation procedure which assumes that details at high resolution are less visible to the human eye is proposed. To allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. The wavelet transform is particularly well adapted to progressive transmission.

160 citations


23 Mar 1992

TL;DR: The case of Laplacian sources, where surfaces of equal probability are spheres for the L^1 metric (pyramids), is investigated for arbitrary lattices, and explicit generating functions for the codebook sizes of the most useful lattices are given.

Abstract: The image coding scheme involves two steps: a biorthogonal wavelet transform of the image and pyramidal lattice vector quantization of the wavelet coefficients. To obtain a compromise between minimum distortion and bit rate, one must truncate and scale the lattice suitably. To meet this goal, one needs to know how many lattice points lie within the truncated area. The case of Laplacian sources, where surfaces of equal probability are spheres for the L^1 metric (pyramids), is investigated for arbitrary lattices. Explicit generating functions for the codebook sizes of the most useful lattices are given.

31 citations


09 May 1995

TL;DR: Results on simulated and real images, at data rates of 1.5 to 2 bits/sample, have confirmed the expected performance of the BGAVQ algorithm.

Abstract: This paper proposes an adaptive vector quantization scheme designed for spaceborne raw SAR data compression. This approach is based on the fact that spaceborne raw data are Gaussian distributed, independent, and quite stationary over an interval (in both azimuth and range) which depends on SAR system parameters. Block gain adaptive vector quantization (BGAVQ) is a generalization of the block adaptive quantization (BAQ) algorithm to vectors. It operates as a set of optimum vector quantizers (designed by the LBG algorithm) with different gain settings. The adaptation is particularly efficient since, for a fixed compression ratio, the same codebook is used for any spaceborne SAR data. Results on simulated and real images, at data rates of 1.5 to 2 bits/sample, have confirmed the expected performance of the BGAVQ algorithm.

24 citations
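The gain-adaptation idea behind BGAVQ (normalize each quasi-stationary block by its estimated standard deviation, then reuse one fixed codebook designed for unit-variance Gaussian data) can be illustrated with a scalar 2-bit sketch. The actual method uses vector codebooks designed with the LBG algorithm; the code below is a simplified stand-in with hypothetical names, using Lloyd-Max levels for a unit Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-bit Lloyd-Max reproduction levels for a unit-variance Gaussian source.
codebook = np.array([-1.510, -0.4528, 0.4528, 1.510])

def quantize_block(block):
    """Estimate the block gain (std), quantize the normalized samples with
    the fixed unit-Gaussian codebook, and rescale by the gain."""
    gain = block.std() + 1e-12
    idx = np.argmin(np.abs(block[:, None] / gain - codebook[None, :]), axis=1)
    return gain * codebook[idx]

x = rng.normal(0.0, 5.0, size=128)   # one quasi-stationary block, arbitrary gain
xq = quantize_block(x)
snr = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
```

Because the normalized samples are (approximately) unit Gaussian regardless of the block gain, the same codebook serves every block, which is the property the abstract highlights.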

##### Cited by


01 Jan 1998

TL;DR: A book-length tour of wavelet theory, from Fourier analysis, frames, and wavelet bases to approximation, estimation, and transform coding.

Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations


TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.

Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only a small loss in performance, by omitting arithmetic entropy coding of the bit stream.

5,890 citations
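The "partial ordering by magnitude with ordered bit-plane transmission" principle can be sketched independently of the zerotree machinery: coefficients become significant at successively halved power-of-two thresholds, so the largest ones are described first and the stream can be truncated at any point. A toy sketch (names are illustrative, and the set-partitioning sort itself is omitted):

```python
import numpy as np

def significance_passes(coeffs):
    """Group coefficient indices by the power-of-two threshold at which
    they first become significant, largest threshold first."""
    mags = np.abs(coeffs)
    T = 2 ** int(np.floor(np.log2(mags.max())))
    out = []
    while T >= 1:
        newly = np.flatnonzero((mags >= T) & (mags < 2 * T))
        out.append((T, newly.tolist()))
        T //= 2
    return out

c = np.array([63, -34, 5, 2, -9, 17])
passes = significance_passes(c)
# the first pass (T=32) locates the two largest-magnitude coefficients
```

Decoding the first few passes already reconstructs the most energetic coefficients, which is why such embedded streams degrade gracefully when truncated.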


TL;DR: A scheme for image compression that takes into account psychovisual features in both the space and frequency domains is proposed, and the wavelet transform is shown to be particularly well adapted to progressive transmission.

Abstract: A scheme for image compression that takes into account psychovisual features in both the space and frequency domains is proposed. This method involves two steps. First, a wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps constant the number of pixels required to describe the image. Second, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise-shaping bit-allocation procedure is proposed which assumes that details at high resolution are less visible to the human eye. To allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.

3,925 citations


CNET

TL;DR: A simple, nonrigorous, synthetic view of wavelet theory is presented for both review and tutorial purposes, covering nonstationary signal analysis, scale versus frequency, wavelet analysis and synthesis, scalograms, wavelet frames and orthonormal bases, the discrete-time case, and applications of wavelets in signal processing.

Abstract: A simple, nonrigorous, synthetic view of wavelet theory is presented for both review and tutorial purposes. The discussion includes nonstationary signal analysis, scale versus frequency, wavelet analysis and synthesis, scalograms, wavelet frames and orthonormal bases, the discrete-time case, and applications of wavelets in signal processing. The main definitions and properties of wavelet transforms are covered, and connections among the various fields where results have been developed are shown.

2,945 citations
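A scalogram, one of the tools the tutorial covers, is the squared magnitude of a continuous wavelet transform sampled on a grid of scales. The sketch below is a naive discrete approximation with a Morlet-like wavelet; all names and parameter choices are illustrative, not from the paper.

```python
import numpy as np

def cwt_scalogram(x, scales, w0=6.0):
    """Naive discrete approximation of a CWT with a Morlet-like wavelet
    (complex exponential under a Gaussian envelope); rows of the output
    are |W(a, b)|^2 for each scale a."""
    out = np.empty((len(scales), len(x)))
    for i, a in enumerate(scales):
        t = np.arange(-4 * a, 4 * a + 1)
        psi = np.exp(1j * w0 * t / a) * np.exp(-(t / a) ** 2 / 2) / np.sqrt(a)
        out[i] = np.abs(np.convolve(x, psi, mode="same")) ** 2
    return out

n = 512
sig = np.sin(2 * np.pi * 0.05 * np.arange(n))   # 0.05 cycles/sample tone
S = cwt_scalogram(sig, scales=[4, 8, 16, 32])
# the scale whose center frequency w0/(2*pi*a) is nearest 0.05 responds most
```

For a pure tone, the energy concentrates in the scale row whose center frequency matches the tone, which is the scale-versus-frequency correspondence the tutorial emphasizes.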


TL;DR: An adaptive, data-driven threshold for image denoising via wavelet soft-thresholding is derived in a Bayesian framework; the prior on the wavelet coefficients is the generalized Gaussian distribution widely used in image processing applications.

Abstract: The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only when bit rate is a concern in addition to denoising.

2,917 citations
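The BayesShrink rule described above reduces, per subband, to the soft threshold T = σ_n²/σ_x, where σ_x is estimated from the observed subband variance. A minimal sketch under the stated GGD prior (here a Laplacian, a GGD special case), with the noise level passed in rather than estimated; the names are illustrative.

```python
import numpy as np

def bayes_shrink(subband, sigma_n):
    """Soft-threshold a subband with T = sigma_n^2 / sigma_x, where
    sigma_x^2 is estimated as max(var(y) - sigma_n^2, 0). In practice
    sigma_n itself is estimated from the finest subband (e.g. a median
    absolute deviation estimate); here it is supplied directly."""
    var_y = np.mean(subband ** 2)
    sigma_x = np.sqrt(max(var_y - sigma_n ** 2, 1e-12))
    T = sigma_n ** 2 / sigma_x
    return np.sign(subband) * np.maximum(np.abs(subband) - T, 0.0)

rng = np.random.default_rng(1)
clean = rng.laplace(0.0, 1.0, size=4096)          # GGD-like coefficients
noisy = clean + rng.normal(0.0, 1.0, size=4096)   # additive Gaussian noise
den = bayes_shrink(noisy, sigma_n=1.0)
# the data-driven threshold lowers the MSE relative to the noisy input
```

Note the analogy the abstract draws: the threshold T plays the same role as the zero-zone of a quantizer, which is what links the denoising rule to compression.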