Author

William A. Pearlman

Bio: William A. Pearlman is an academic researcher from Rensselaer Polytechnic Institute. The author has contributed to research in topics: Data compression & Set partitioning in hierarchical trees. The author has an h-index of 36 and has co-authored 202 publications receiving 12,924 citations. Previous affiliations of William A. Pearlman include Texas A&M University & University of Wisconsin-Madison.


Papers
Proceedings ArticleDOI
31 May 1998
TL;DR: This work reviews the development of tree-based embedded image coders, their use with different image transformations, the reasons for their effectiveness and low complexity, and their flexibility and incorporation into state-of-the-art compression systems.
Abstract: We first review the development of tree-based embedded image coders. We then explore their use with different image transformations, the reasons for their effectiveness and low complexity, and their flexibility and incorporation into state-of-the-art compression systems.

5 citations

Proceedings ArticleDOI
29 Mar 1999
TL;DR: The fast mmLZ is implemented, and the results show an improvement in compression of around 5% over LZW on the Canterbury Corpus (Arnold and Bell, 1997), with little extra computational cost.
Abstract: Summary form only given. One of the most popular encoders in the literature is LZ78, proposed by Ziv and Lempel (1978). We establish a recursive way to find the longest $m$-tuple match, proving a theorem that shows how to obtain a longest $(m+1)$-tuple match from the longest $m$-tuple match: an $(m+1)$-tuple match is the concatenation of the first $m-1$ words of the $m$-tuple match with the next longest double match. Therefore, the longest $(m+1)$-tuple match can be found using the $m$-tuple match and a procedure to compute the longest double match. Our theorem is as follows. Let $A$ be a source alphabet, let $A^*$ be the set of all finite strings over $A$, and let $D \subset A^*$ be such that if $x \in D$ then all prefixes of $x$ belong to $D$. Let $u$ denote a one-sided infinite sequence. If $b_1^m$ is the longest $m$-tuple match in $u$ with respect to $D$, then there is a longest $(m+1)$-tuple match $\hat{b}_1^{m+1}$ such that $\hat{b}_i = b_i$ for all $i \in \{1,\ldots,m-1\}$. We implemented the fast mmLZ, and the results show an improvement in compression of around 5% over LZW on the Canterbury Corpus (Arnold and Bell, 1997), with little extra computational cost.
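
The recursion in this theorem translates directly into code. Below is a minimal Python sketch, assuming the dictionary D is a prefix-closed set of strings; the helper names (longest_match, longest_double_match, extend_match) are illustrative rather than taken from the paper, and the double-match search is a brute-force stand-in for the paper's fast procedure.

def longest_match(u, pos, D):
    """Longest prefix of u[pos:] that is a word in the prefix-closed dictionary D."""
    best = ""
    for end in range(pos + 1, len(u) + 1):
        w = u[pos:end]
        if w in D:
            best = w
        else:
            break  # prefix closure: once a prefix fails, all longer strings fail
    return best

def longest_double_match(u, pos, D):
    """Longest pair (w1, w2) of dictionary words with w1 + w2 a prefix of u[pos:]."""
    best = ("", "")
    w1 = longest_match(u, pos, D)
    # By prefix closure, every feasible first word is a prefix of w1.
    for k in range(len(w1), 0, -1):
        cand1 = u[pos:pos + k]
        cand2 = longest_match(u, pos + k, D)
        if k + len(cand2) > len(best[0]) + len(best[1]):
            best = (cand1, cand2)
    return best

def extend_match(u, pos, words, D):
    """Given a longest m-tuple match `words` starting at pos, build a longest
    (m+1)-tuple match per the theorem: keep the first m-1 words and replace
    the last word by the longest double match starting where it began."""
    prefix = words[:-1]
    start = pos + sum(len(w) for w in prefix)
    w1, w2 = longest_double_match(u, start, D)
    return prefix + [w1, w2]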

5 citations

Proceedings ArticleDOI
10 Dec 1986
TL;DR: The minimum-error, minimum-correlation (MEMC) filter is applied to the problem of recovering space-invariant, blurred images with additive noise and, as the theory predicts, the images restored through the MEMC filters are sharper and clearer than their minimum mean-squared error counterparts, but slightly noisier in appearance.
Abstract: We treat the linear estimation problem with two simultaneous, competing objectives: minimum mean-squared error and minimum error-signal correlation. The latter objective minimizes the signal component in the error and maximizes the correlation of the estimator with the signal. The problem is solved, both for the scalar and stationary random process cases, as an optimal trade-off which produces a slightly higher mean-squared error and a much larger reduction in error-signal correlation over that of the minimum mean-squared error single-objective solution. The optimal trade-off solution, which we call the minimum-error, minimum-correlation (MEMC) filter, is then applied to the problem of recovering space-invariant, blurred images with additive noise. As the theory predicts, the images restored through the MEMC filters are sharper and clearer than their minimum mean-squared error (Wiener) filter counterparts, but slightly noisier in appearance. Most viewers prefer the MEMC restorations to the Wiener ones, despite the noisier appearance.
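
The scalar case makes the trade-off easy to see in numbers. The sketch below is a hypothetical illustration, not the paper's derivation: for the model y = x + n and the estimator xhat = a*y, it tabulates the mean-squared error and the error-signal correlation E[ex] as the gain a moves from the Wiener (minimum-MSE) value toward unity.

def stats(a, sx2, sn2):
    """Return (mean-squared error, error-signal correlation E[e*x]) for xhat = a*y,
    where y = x + n, with signal power sx2 and uncorrelated noise power sn2."""
    # e = xhat - x = (a-1)*x + a*n
    mse = (a - 1) ** 2 * sx2 + a ** 2 * sn2
    corr = (a - 1) * sx2          # signal component left in the error
    return mse, corr

sx2, sn2 = 1.0, 0.25
a_mmse = sx2 / (sx2 + sn2)        # Wiener (minimum-MSE) gain: 0.8
for a in (a_mmse, 0.9, 1.0):      # sweep from the Wiener gain toward unit gain
    mse, corr = stats(a, sx2, sn2)
    print(f"a={a:.2f}  mse={mse:.3f}  E[ex]={corr:+.3f}")
# a=0.80: lowest MSE (0.200) but error correlated with the signal (-0.200)
# a=1.00: zero error-signal correlation at the cost of higher MSE (0.250),
# the slight-MSE-for-large-correlation-reduction trade-off described above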

5 citations

Proceedings ArticleDOI
25 Mar 2008
TL;DR: An extension of the SPECK algorithm is presented that deals with vector samples and is used to encode groups of successive spectral bands extracted from the original hyperspectral image block; comparisons show the proposed encoding method is very competitive, especially at small bit rates.
Abstract: We discuss the use of lattice vector quantizers in conjunction with a quadtree-based sorting algorithm for the compression of multidimensional data sets, as encountered, for example, when dealing with hyperspectral imagery. An extension of the SPECK algorithm is presented that deals with vector samples and is used to encode a group of successive spectral bands extracted from the original hyperspectral image block. We evaluate the importance of codebook choice by showing that a dictionary that better matches the characteristics of the source during the sorting pass has as big an influence on performance as the use of a transform in the spectral direction. Finally, we provide comparisons against state-of-the-art encoders, both 2D and 3D, showing the proposed encoding method is very competitive, especially at small bit rates.
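
The encoding step at the heart of a lattice vector quantizer is finding the nearest lattice point. The paper's pairing of lattice codebooks with a SPECK-style sorting pass is beyond a snippet, but the sketch below shows the standard Conway-Sloane rounding rule for the D_n lattice (integer vectors with even coordinate sum), a common choice for such codebooks; the data layout in the usage lines is hypothetical.

import numpy as np

def nearest_Dn(x):
    """Nearest point of the D_n lattice (integer vectors with even coordinate
    sum) to x, via the classic Conway-Sloane rounding rule."""
    f = np.rint(x)                      # round every coordinate
    if int(f.sum()) % 2 == 0:
        return f.astype(int)
    # Parity is odd: re-round the coordinate with the largest rounding error
    # in the other direction, which costs the least extra distortion.
    k = int(np.argmax(np.abs(x - f)))
    f[k] += 1.0 if x[k] > f[k] else -1.0
    return f.astype(int)

# Quantize a block of 4-band spectral vectors (hypothetical data layout).
rng = np.random.default_rng(0)
vectors = rng.normal(scale=2.0, size=(8, 4))
codes = np.array([nearest_Dn(v) for v in vectors])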

5 citations

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A low-rate video coding scheme which uses pruned tree-structured vector quantization (PTSVQ) is introduced, which achieves very good performance by exploiting the spatial and the temporal correlations of the images, as well as the regional statistics characteristic of the motion-compensated frame difference signal.
Abstract: A low-rate video coding scheme which uses pruned tree-structured vector quantization (PTSVQ) is introduced. Designed by optimally growing the tree to a large rate and then pruning back to lower rates, PTSVQ is both a multirate and a variable rate coding technique. Since it has more flexibility than full-search VQ and fixed-rate tree-structured VQ, it is able to perform better. Furthermore, the tree structure reduces the computational complexity. Incorporated with the motion compensation and the quadtree decomposition techniques, the system achieves very good performance by exploiting the spatial and the temporal correlations of the images, as well as the regional statistics characteristic of the motion-compensated frame difference signal.
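
Encoding with a tree-structured VQ is a greedy descent rather than a full-codebook search, which is where the complexity reduction comes from: cost grows with tree depth, not codebook size. The sketch below illustrates this on a hand-built toy tree; it is not the paper's codebook or training procedure, and pruning is simulated by simply making one subtree a leaf, which yields the shorter, variable-rate index.

import numpy as np

class TSVQNode:
    """Node of a (possibly pruned) binary tree-structured VQ codebook."""
    def __init__(self, centroid, left=None, right=None):
        self.centroid = np.asarray(centroid, dtype=float)
        self.left, self.right = left, right   # both None for a leaf

def tsvq_encode(x, node, path=""):
    """Greedy descent: at each node pick the nearer child centroid.
    Returns (binary path string, reproduction vector); a pruned subtree
    terminates earlier, giving a shorter (variable-rate) index."""
    if node.left is None and node.right is None:
        return path, node.centroid
    dl = np.sum((x - node.left.centroid) ** 2)
    dr = np.sum((x - node.right.centroid) ** 2)
    if dl <= dr:
        return tsvq_encode(x, node.left, path + "0")
    return tsvq_encode(x, node.right, path + "1")

# Tiny hand-built 2D codebook: the right subtree is pruned to a leaf,
# so vectors routed right cost only 1 bit.
root = TSVQNode([0, 0],
                left=TSVQNode([-1, 0],
                              left=TSVQNode([-2, 1]),
                              right=TSVQNode([-1, -1])),
                right=TSVQNode([2, 0]))
bits, rec = tsvq_encode(np.array([1.7, 0.3]), root)   # -> "1", [2, 0]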

5 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is compared against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
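
The structural similarity index itself has a compact closed form. The sketch below applies it with global image statistics; the published index computes the same formula over local (11x11 Gaussian) windows and averages the resulting map, so this is an illustration of the formula, not a replacement for the authors' MATLAB implementation.

import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM using global image statistics, with the standard
    stabilizing constants C1 = (K1*L)**2 and C2 = (K2*L)**2 for dynamic range L."""
    x = x.astype(float)
    y = y.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()   # covariance of the two images
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))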

40,609 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: A textbook treatment of wavelet signal processing, spanning Fourier analysis, time-frequency representations, frames, wavelet and wavelet packet bases, local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
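
The embedding principle, ordered bit-plane transmission driven by significance tests, can be sketched without the set-partitioning trees. The skeleton below is illustrative, not the authors' code: it runs sorting and refinement passes over a flat coefficient list, omitting the spatial-orientation-tree partitioning that gives SPIHT its efficiency.

import numpy as np

def spiht_bitplanes(coeffs):
    """Skeleton of the embedded bit-plane loop (sorting + refinement).
    Truncating the returned bit stream at any point still decodes to a
    coarser approximation, which is the embedding property."""
    coeffs = np.asarray(coeffs, dtype=int)
    n = int(np.floor(np.log2(np.abs(coeffs).max())))   # top bit plane
    sig, stream = [], []
    while n >= 0:
        newly = []
        for i in range(len(coeffs)):                   # sorting pass
            if i in sig:
                continue
            is_sig = abs(coeffs[i]) >= (1 << n)
            stream.append(int(is_sig))                 # significance bit
            if is_sig:
                stream.append(int(coeffs[i] < 0))      # sign bit
                newly.append(i)
        for i in sig:                                  # refinement pass
            stream.append((abs(coeffs[i]) >> n) & 1)   # nth magnitude bit
        sig += newly                                   # defer to next plane
        n -= 1
    return stream

bits = spiht_bitplanes([34, -9, 3, 0, 21, -2, 5, 1])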

5,890 citations

Journal ArticleDOI
J. M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression, which is achieved via adaptive arithmetic coding.
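
Concept (2), the zerotree, can be made concrete with a small sketch. The classification below labels a coefficient POS/NEG if significant at the current threshold, ZTR (zerotree root) if it and all its descendants are insignificant (one symbol then codes the whole tree), and IZ (isolated zero) otherwise; the parent-child map is a hypothetical stand-in for the real subband tree, where each coarse-scale coefficient has four children at the next finer scale.

def descendants(i, children):
    """All coefficients below i in the subband tree."""
    out = list(children.get(i, []))
    for j in children.get(i, []):
        out += descendants(j, children)
    return out

def ezw_symbol(i, coeffs, children, T):
    """Dominant-pass symbol for coefficient i at threshold T."""
    if abs(coeffs[i]) >= T:
        return "POS" if coeffs[i] > 0 else "NEG"
    if all(abs(coeffs[j]) < T for j in descendants(i, children)):
        return "ZTR"   # one symbol stands in for the whole insignificant tree
    return "IZ"

coeffs = {0: 3, 1: -40, 2: 2, 3: 1}      # toy coefficient values by index
children = {0: [1, 2], 2: [3]}           # toy parent-child structure
print(ezw_symbol(0, coeffs, children, 32))  # 'IZ': descendant 1 is significant
print(ezw_symbol(2, coeffs, children, 32))  # 'ZTR': 2 and 3 both below 32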

5,559 citations