
Showing papers by "William A. Pearlman published in 1997"


Proceedings ArticleDOI
25 Mar 1997
TL;DR: Although there is no motion estimation or compensation in the 3D SPIHT, it performs measurably and visually better than MPEG-2, which employs complicated motion estimation and compensation.
Abstract: The SPIHT (set partitioning in hierarchical trees) algorithm by Said and Pearlman (see IEEE Trans. on Circuits and Systems for Video Technology, no.6, p.243-250, 1996) is known to have produced some of the best results in still image coding. It is a fully embedded wavelet coding algorithm with precise rate control and low complexity. We present an application of the SPIHT algorithm to video sequences, using three-dimensional (3D) wavelet decompositions and 3D spatio-temporal dependence trees. A full 3D-SPIHT encoder/decoder is implemented in software and is compared against MPEG-2 in parallel simulations. Although there is no motion estimation or compensation in the 3D SPIHT, it performs measurably and visually better than MPEG-2, which employs complicated motion estimation and compensation.
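
As a concrete illustration of the decomposition step, the following sketch performs one level of a separable 3D (spatio-temporal) wavelet transform in Python. The Haar filter and all names here are illustrative assumptions, not the paper's actual filter bank; the real coder would then apply SPIHT set partitioning across the resulting spatio-temporal subband trees.

import numpy as np

def haar_split(x, axis):
    """One level of orthonormal Haar analysis along one axis: (low, high)."""
    even = [slice(None)] * x.ndim
    odd = [slice(None)] * x.ndim
    even[axis] = slice(0, None, 2)
    odd[axis] = slice(1, None, 2)
    a = x[tuple(even)].astype(float)
    b = x[tuple(odd)].astype(float)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def dwt3d_level(video):
    """video: array of shape (T, H, W) with even dimensions.
    Returns the 8 spatio-temporal subbands of one decomposition level."""
    lo_t, hi_t = haar_split(video, axis=0)          # temporal filtering
    subbands = {}
    for tname, tband in (("L", lo_t), ("H", hi_t)):
        lo_r, hi_r = haar_split(tband, axis=1)      # vertical filtering
        for rname, rband in (("L", lo_r), ("H", hi_r)):
            lo_c, hi_c = haar_split(rband, axis=2)  # horizontal filtering
            subbands[tname + rname + "L"] = lo_c
            subbands[tname + rname + "H"] = hi_c
    return subbands

# Example: a 16-frame, 32x32 synthetic clip; LLL holds the coarse approximation.
clip = np.random.rand(16, 32, 32)
bands = dwt3d_level(clip)
print(sorted(bands), bands["LLL"].shape)  # -> 8 subbands, each (8, 16, 16)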

333 citations


Proceedings ArticleDOI
29 Jun 1997
TL;DR: In this paper, a low-complexity entropy-coding method is proposed for coding waveform signals. It is based on the combination of two schemes: (1) an alphabet partitioning method to reduce the complexity of the entropy-coding process; (2) a new recursive set-partitioning entropy-coding process that achieves rates smaller than first-order entropy even with fast adaptive Huffman codecs.
Abstract: We propose a new low-complexity entropy-coding method to be used for coding waveform signals. It is based on the combination of two schemes: (1) an alphabet partitioning method to reduce the complexity of the entropy-coding process; (2) a new recursive set partitioning entropy-coding process that achieves rates smaller than first order entropy even with fast Huffman adaptive codecs. Numerical results with its application for lossy and lossless image compression show the efficacy of the new method, comparable to the best known methods.
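
To illustrate the first scheme, the sketch below shows one common form of alphabet partitioning (the exact partition used in the paper is not specified in this abstract, so the power-of-two sets here are an assumption): each value is reduced to a small set index, which is all the adaptive entropy coder sees, plus raw refinement bits that locate the value inside its set.

def partition(value):
    """Return (set_index, extra_bits, n_extra_bits) for value >= 0."""
    if value == 0:
        return 0, 0, 0
    k = value.bit_length()            # set k holds values 2^(k-1) .. 2^k - 1
    return k, value - (1 << (k - 1)), k - 1

def unpartition(set_index, extra_bits):
    if set_index == 0:
        return 0
    return (1 << (set_index - 1)) + extra_bits

for v in (0, 1, 5, 200):
    s, e, n = partition(v)
    assert unpartition(s, e) == v
    print(v, "-> set", s, "+", n, "raw bits")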

43 citations


Proceedings ArticleDOI
10 Jan 1997
TL;DR: Numerical results with the application of a new low-complexity entropy-coding method for lossy and lossless image compression show the efficacy of the new method, comparable to the best known methods.
Abstract: We propose a new low-complexity entropy-coding method to be used for coding waveform signals. It is based on the combination of two schemes: (1) an alphabet partitioning method to reduce the complexity of the entropy-coding process; (2) a new recursive set partitioning entropy-coding process that achieves rates smaller than first order entropy even with fast Huffman adaptive codecs. Numerical results with its application for lossy and lossless image compression show the efficacy of the new method, comparable to the best known methods.

35 citations


Patent
07 Feb 1997
TL;DR: In this patent, a data compression method, system and program code are provided which optimize entropy-coding by reducing complexity through alphabet partitioning, and then employing sample-group partitioning in order to maximize data compression on groups of source numbers.
Abstract: A data compression method, system and program code are provided which optimize entropy-coding by reducing complexity through alphabet partitioning, and then employing sample-group partitioning in order to maximize data compression on groups of source numbers. The approach is to employ probabilities to renumber source numbers so that smaller numbers correspond to more probable source numbers. This is followed by group partitioning of the stream of resultant numbers into at least two groups, for example, defining regions of an image, ranges of time, or a linked list of data elements related by spatial, temporal or spatio-temporal dependence. A maximum number (Nm) is found in a group of numbers of the at least two groups and then entropy-coded. A recursive entropy encoding of the numbers of the group is then employed using the maximum number Nm. The process is repeated for each partitioned group. Decoding of the resultant codewords involves the inverse process. Transformation and/or quantization may all be employed in combination with the group-partitioning entropy encoding.
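
A rough sketch of the group-partitioning step described above, in Python; the recursion depth, the group-size threshold, and the 16-bit placeholder for the entropy-coded maximum are simplifying assumptions rather than the patent's actual codeword format. The key property it shows is that an all-zero or small-maximum group terminates cheaply, while the members of any group fit in bit_length(Nm) raw bits.

def encode_group(samples, emit):
    """emit(value, nbits) appends nbits of value to the bitstream."""
    nm = max(samples)
    emit(nm, 16)                       # placeholder: Nm would be entropy-coded
    if nm == 0:
        return                         # an all-zero group costs almost nothing
    nbits = nm.bit_length()
    if len(samples) <= 4:              # small group: code members directly
        for s in samples:
            emit(s, nbits)
    else:                              # otherwise recurse on halves, whose own
        mid = len(samples) // 2        # maxima may be much smaller than Nm
        encode_group(samples[:mid], emit)
        encode_group(samples[mid:], emit)

bits = []
encode_group([0, 0, 3, 1, 0, 0, 0, 9], lambda v, n: bits.append((v, n)))
print(bits)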

32 citations


Proceedings ArticleDOI
15 Aug 1997
TL;DR: This paper focuses on two recent low complexity algorithms for image compression which exploit data characteristics very efficiently and explains how these recent algorithms utilize these principles.
Abstract: Optimal performance in a data compression scheme is very hard to obtain without inordinate computational complexity. However, there are possibilities of obtaining high performance with low complexity by proper exploitation of the characteristics of the data. Here, we shall concentrate on two recent low-complexity algorithms for image compression which exploit data characteristics very efficiently. We shall try to discover principles for realizing high performance with low complexity and explain how these recent algorithms utilize these principles. We shall also present image compression results with actual codecs which realize the promise of high compression with low complexity.

9 citations


Proceedings ArticleDOI
26 Oct 1997
TL;DR: This paper improves the results obtained by Polyak and Pearlman, presenting filters comparable in compression performance to, and faster than, the biorthogonal 10/18 filters, and combines the approach with lifting and prediction schemes similar to the ones discussed by Said and Pearlman.
Abstract: In a previous paper, we presented a method to design perfect reconstruction filters using arbitrary lowpass filter kernels and presented fast filters with compression performance surpassing the well-known 9/7 biorthogonal filters. This paper improves the results obtained by Polyak and Pearlman (see Proc. IEEE International Conference on Image Processing, Santa Barbara, CA, vol.1, p.660-63, 1997), presenting filters comparable in compression performance to, and faster than, the biorthogonal 10/18 filters. Furthermore, we combine our approach with the lifting and prediction schemes similar to the ones discussed by Said and Pearlman (see IEEE Trans. on Image Processing, vol.5, p.1303-10, 1996) in deriving the S+P filters and later extended by Sweldens (see Appl. Comput. Harm. Anal., vol.3, no.2, p.186-200, 1996), thus obtaining integer-to-integer transforms whose performance is comparable to that of the S+P filters. At this stage, our algorithms are, however, considerably slower. In any case, it seems that the flexibility of our method shows some promise to serve as a basis for finding new integer-to-integer filters.
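
For context, the sketch below shows the classic S transform, the integer-to-integer lifting step that underlies the S+P filters mentioned above; the prediction ("P") stage is omitted, and this is textbook material rather than the new filters derived in this paper. Floor division makes the forward map exactly invertible on integers.

def s_transform(x):
    """Forward S transform of an even-length integer sequence."""
    low  = [(a + b) >> 1 for a, b in zip(x[0::2], x[1::2])]  # truncated average
    high = [a - b        for a, b in zip(x[0::2], x[1::2])]  # difference
    return low, high

def inverse_s_transform(low, high):
    x = []
    for l, h in zip(low, high):
        b = l - (h >> 1)        # exact inverse thanks to floor division
        x.extend((b + h, b))
    return x

x = [10, 7, 3, 3, 120, 119, 0, 1]
low, high = s_transform(x)
assert inverse_s_transform(low, high) == x
print(low, high)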

3 citations


01 Jan 1997
TL;DR: In this paper, a low-complexity entropy-coding method is proposed for coding waveform signals. It is based on the combination of two schemes: (1) an alphabet partitioning method to reduce the complexity of the entropy-coding process; (2) a new recursive set-partitioning entropy-coding process that achieves rates smaller than first-order entropy even with fast adaptive Huffman codecs.
Abstract: We propose a new low-complexity entropy-coding method to be used for coding waveform signals. It is based on the combination of two schemes: (1) an alphabet partitioning method to reduce the complexity of the entropy-coding process; (2) a new recursive set partitioning entropy-coding process that achieves rates smaller than first order entropy even with fast Huffman adaptive codecs. Numerical results with its application for lossy and lossless image compression show the efficacy of the new method, comparable to the best known methods.

3 citations


Proceedings ArticleDOI
10 Jan 1997
TL;DR: The rate constrained block matching algorithm (RC-BMA), introduced in this paper, jointly minimizes DFD variance and the entropy or conditional entropy of motion vectors when determining the motion vectors in low-rate video coding applications, where the contribution of the motion vector rate to the overall coding rate might be significant.
Abstract: The rate constrained block matching algorithm (RC-BMA), introduced in this paper, jointly minimizes DFD variance and the entropy or conditional entropy of motion vectors when determining the motion vectors in low-rate video coding applications, where the contribution of the motion vector rate to the overall coding rate might be significant. The motion vector rate versus DFD variance performance of RC-BMA employing size KxK blocks is shown to be superior to that of the conventional minimum distortion block matching algorithm (MD-BMA) employing size 2Kx2K blocks. Constraining the entropy or conditional entropy of motion vectors in RC-BMA results in smoother and more organized motion vector fields than those produced by MD-BMA. The motion vector rate of RC-BMA can also be fine-tuned to a desired level for each frame by adjusting a single parameter.
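
The sketch below illustrates the general rate-constrained matching idea in Lagrangian form; the cost D + lambda*R and the crude differential bit-cost model mv_rate are illustrative stand-ins for the paper's entropy and conditional-entropy constraints, not its actual formulation.

import numpy as np

def mv_rate(mv, pred):
    """Crude bit-cost model: longer differential vectors cost more bits."""
    dx, dy = mv[0] - pred[0], mv[1] - pred[1]
    return (abs(dx) + abs(dy)).bit_length() * 2 + 2

def rc_block_match(cur, ref, top, left, K=8, search=4, lam=10.0, pred=(0, 0)):
    """Best vector for the KxK block at (top, left) under cost D + lam*R."""
    block = cur[top:top+K, left:left+K].astype(float)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + K > ref.shape[0] or x + K > ref.shape[1]:
                continue
            dfd = block - ref[y:y+K, x:x+K]          # displaced frame difference
            cost = float(np.sum(dfd * dfd)) + lam * mv_rate((dx, dy), pred)
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best, best_cost

ref = (np.random.rand(64, 64) * 255).round()
cur = np.roll(ref, (1, 2), axis=(0, 1))       # frame shifted down 1, right 2
print(rc_block_match(cur, ref, 16, 16))       # -> ((-2, -1), 60.0)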

2 citations