Author

William A. Pearlman

Bio: William A. Pearlman is an academic researcher from Rensselaer Polytechnic Institute. His research focuses on data compression and set partitioning in hierarchical trees (SPIHT). He has an h-index of 36 and has co-authored 202 publications receiving 12,924 citations. Previous affiliations of William A. Pearlman include Texas A&M University and the University of Wisconsin-Madison.


Papers
Journal ArticleDOI
TL;DR: A new fast method for multiresolution pyramid decomposition of signals and images allows any low-pass filter to be used, enabling selection of the best filters for set partitioning in hierarchical trees (SPIHT) image compression at each support size.
Abstract: We propose a new fast method with great potential for multiresolution pyramid decomposition of signals and images. The method allows unusual flexibility in choosing a filter for any task involving multiresolution analysis and synthesis: any low-pass filter can be used for the multiresolution filtering. This enabled us to choose the best filters for set partitioning in hierarchical trees (SPIHT) image compression for the corresponding support sizes. The compression results for our seven-tap filters are better than those of the 9/7 wavelet filters and approximately the same as those of the 10/18 filters, while our seven-tap filters are faster than the 10/18 filters.
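As a rough illustration of the pyramid decomposition idea described above, the sketch below runs one low-pass/high-pass split per level with a user-supplied low-pass filter. The 7-tap coefficients and the quadrature-mirror high-pass construction are placeholders of our own, not the filters or the fast method from the paper.

```python
# Illustrative sketch only: one level of a two-channel pyramid decomposition
# with a user-chosen low-pass filter. The paper's construction of matching
# synthesis filters is not reproduced; the QMF-style high-pass below is a
# common textbook choice, not necessarily the authors' method.
import numpy as np

def analysis_level(signal, lowpass):
    """Split `signal` into downsampled approximation and detail bands."""
    lowpass = np.asarray(lowpass, dtype=float)
    # High-pass via the alternating-sign (quadrature mirror) relation.
    highpass = lowpass[::-1] * (-1.0) ** np.arange(lowpass.size)
    approx = np.convolve(signal, lowpass, mode="same")[::2]   # low band, decimated by 2
    detail = np.convolve(signal, highpass, mode="same")[::2]  # high band, decimated by 2
    return approx, detail

def pyramid(signal, lowpass, levels=3):
    """Recursively decompose the approximation band, as in a wavelet pyramid."""
    bands = []
    current = np.asarray(signal, dtype=float)
    for _ in range(levels):
        current, detail = analysis_level(current, lowpass)
        bands.append(detail)
    bands.append(current)  # coarsest approximation last
    return bands

# Example with hypothetical 7-tap low-pass coefficients (placeholders).
taps = np.array([-0.05, 0.05, 0.3, 0.4, 0.3, 0.05, -0.05])
bands = pyramid(np.sin(np.linspace(0, 8 * np.pi, 256)), taps, levels=3)
print([b.size for b in bands])
```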

8 citations

Proceedings ArticleDOI
22 Mar 2006
TL;DR: SPIHT and block-wise SPIHT algorithms in which a full depth-first search is used to agglomerate significant bits at each bitplane, minimizing final memory usage without paying additional overhead cost.
Abstract: This paper presents SPIHT and block-wise SPIHT algorithms in which a full depth-first search (DFS) is used to agglomerate significant bits at each bitplane. Search strategies used for SPIHT to date are more or less based on a breadth-first search. The aim of this work is to minimize the final memory usage without paying additional overhead cost. DFS also brings benefits such as resolution scalability and a random-access decodable bitstream.
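To illustrate the depth-first idea in isolation, the sketch below performs a depth-first significance scan over a dyadic coefficient quadtree at a single bitplane threshold. It is a toy illustration under our own simplifying assumptions, not the block-wise SPIHT codec of the paper: list management, sign bits, and refinement passes are omitted.

```python
# Minimal sketch of a depth-first significance scan over a quadtree of
# coefficients, emitting all bits of a branch at one bitplane before moving
# on. Not the authors' codec; the tree rule here is a simplified dyadic
# parent-child relation rather than the full spatial-orientation tree.
import numpy as np

def children(r, c, shape):
    """Children of (r, c) in a dyadic quadtree, clipped to the array bounds."""
    rows, cols = shape
    kids = [(2 * r, 2 * c), (2 * r, 2 * c + 1), (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
    return [(i, j) for i, j in kids if i < rows and j < cols and (i, j) != (r, c)]

def dfs_significance(coeffs, r, c, threshold, out_bits):
    """Emit one significance bit per node, descending depth-first."""
    out_bits.append(1 if abs(coeffs[r, c]) >= threshold else 0)
    for i, j in children(r, c, coeffs.shape):
        dfs_significance(coeffs, i, j, threshold, out_bits)

coeffs = np.random.randn(8, 8) * 16
bits = []
dfs_significance(coeffs, 0, 0, threshold=8, out_bits=bits)
print(len(bits), bits[:16])
```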

8 citations

Journal ArticleDOI
TL;DR: The sequence of discrete Gabor (1946) basis functions with periodic kernel, under a certain inner product on the space of N-periodic discrete functions, is shown to satisfy the circular stationarity (CS) condition; the theory of decomposition upon CS vector sequences is then applied to produce a fast algorithm for computing the Gabor coefficients.
Abstract: Certain vector sequences in Hermitian or Hilbert spaces can be orthogonalized by a Fourier transform. In the finite-dimensional case, the discrete Fourier transform (DFT) accomplishes the orthogonalization. The property of a vector sequence that allows its orthogonalization by the DFT, called circular stationarity (CS), is discussed in this paper. Applying the DFT to a given CS vector sequence results in an orthogonal vector sequence with the same span as the original one. To obtain the coefficients of the decomposition of a vector upon a particular nonorthogonal CS vector sequence, the decomposition is first found upon the equivalent DFT-orthogonalized sequence, and the required coefficients are then recovered through the DFT. It is shown that the sequence of discrete Gabor (1946) basis functions with periodic kernel, with a certain inner product on the space of N-periodic discrete functions, satisfies the CS condition. The theory of decomposition upon CS vector sequences is then applied to the Gabor basis functions to produce a fast algorithm for calculation of the Gabor coefficients.
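The following numerical sketch illustrates the circular stationarity property with the simplest example we could construct: circular shifts of one generating vector, whose Gram matrix is circulant, become mutually orthogonal after a DFT across the sequence index. The example is ours and does not reproduce the paper's Gabor-coefficient algorithm.

```python
# Sketch, not from the paper: circular shifts of one vector have a circulant
# Gram matrix (a simple instance of circular stationarity), so taking the DFT
# across the sequence index yields mutually orthogonal vectors with the same
# span. Verified numerically below.
import numpy as np

N = 8
g = np.random.randn(N)                            # generating vector
V = np.stack([np.roll(g, k) for k in range(N)])   # rows: v_0, ..., v_{N-1}
gram = V @ V.T                                    # circulant: depends on (k - l) mod N

F = np.fft.fft(np.eye(N))                         # DFT matrix F[m, k] = exp(-2j*pi*m*k/N)
W = F @ V                                         # u_m = sum_k F[m, k] * v_k
gram_dft = W @ W.conj().T
print(np.allclose(gram_dft, np.diag(np.diag(gram_dft))))   # True: orthogonal rows
```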

8 citations

Proceedings ArticleDOI
17 May 2004
TL;DR: A novel and computationally inexpensive analytic mean square error (MSE) distortion-rate (D-R) estimator for SPIHT that generates a nearly exact D-R function for the two- and three-dimensional SPIHT algorithms.
Abstract: In this paper a novel and computationally inexpensive analytic mean square error (MSE) distortion-rate (D-R) estimator for SPIHT is presented, which generates a nearly exact D-R function for the 2D and 3D SPIHT algorithms. The analytical formula is derived from the observations that, for any bitplane coder, the slope of the D-R curve is constant within each bitplane level, and that the slope decreases by a factor proportional to the bitplane level. An application of the derived results is presented for 2D SPIHT transmission over a binary symmetric channel (BSC) with Reed-Solomon (RS) forward error correction (FEC) codes. Utilizing our D-R estimate, we employ unequal error protection (UEP) and equal error protection (EEP) to minimize the end-to-end MSE distortion in the transform domain. UEP yields a significant performance gain relative to EEP only when the average number of parity bits for a group of packets is constrained. When both the source rate and the channel code rate are varied under a bit-budget constraint, optimal UEP yields only a slight improvement over optimal EEP. A major contribution of this paper is the simple and extremely accurate analytical D-R model, which potentially improves upon pre-existing methodologies and applications that rely on an accurate and computationally inexpensive D-R estimate. Another important contribution is that the optimum EEP, which requires almost no header information and can easily be computed using our method, is only slightly worse than the optimum UEP.
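The sketch below shows what such a piecewise-linear D-R model might look like: a constant slope within each bitplane pass and a fixed slope reduction between passes. The reduction factor of 4 and all of the numbers are our own assumptions for illustration, not values taken from the paper.

```python
# Illustrative piecewise-linear D-R model: constant slope within a bitplane
# pass, slope shrinking by a fixed factor between passes. The factor of 4
# (distortion ~ threshold^2, threshold halving per plane) is our assumption.
def dr_estimate(d0, bits_per_plane, slope0=1.0, slope_ratio=4.0):
    """Return (rate, distortion) breakpoints of the estimated D-R curve."""
    rate, dist = 0.0, float(d0)
    points = [(rate, dist)]
    slope = slope0
    for bits in bits_per_plane:
        rate += bits
        dist = max(dist - slope * bits, 0.0)
        points.append((rate, dist))
        slope /= slope_ratio          # slope drops at each finer bitplane
    return points

# Hypothetical numbers: initial MSE of 500 and bits spent per bitplane pass.
for r, d in dr_estimate(500.0, [200, 450, 900, 1800]):
    print(f"rate={r:6.0f} bits   est. MSE={d:8.2f}")
```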

8 citations

Proceedings ArticleDOI
18 May 2008
TL;DR: Enhanced versions of the LVQ-SPECK algorithm are presented that further demonstrate the codec's good performance on multidimensional data sets; comparisons with other state-of-the-art codecs applied to the same task show how competitive the proposed extensions are.
Abstract: Enhanced versions of the LVQ-SPECK algorithm are presented that further demonstrate the codec's good performance when dealing with multidimensional data sets. The two alternatives can be considered stepwise improvements over the original codec. First, an extended-range option is implemented, so that the number of spectral bands encoded simultaneously is a multiple of (instead of equal to) the codeword dimension; this by itself provides much better rate allocation among the different bands. Second, a discrete wavelet transform is applied over the spectral dimension, generating a wavelet packet decomposition of the original dataset; the energy compaction of the transform yields a substantial increase in coding performance. We provide results for the two four-dimensional codebooks investigated, namely shell-1 and shell-2 of the D4 lattice, further supporting the observation that the latter is better suited to this setting than the former. Finally, we compare with other state-of-the-art codecs applied to the same task and show how competitive our proposed extensions are.
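For readers unfamiliar with the codebooks mentioned above, the sketch below enumerates shell-1 of the D4 lattice (its 24 minimum-norm vectors) and uses it for nearest-neighbour quantization of a 4-component vector. This illustrates only the codebook; the LVQ-SPECK scaling, significance testing, and bitplane coding are not shown.

```python
# Sketch only: the first shell of the D4 lattice (24 vectors with two entries
# equal to +/-1 and two zeros) used as a vector-quantization codebook for
# groups of 4 spectral components.
from itertools import combinations, product
import numpy as np

def d4_shell1():
    """All 24 minimum-norm vectors of the D4 lattice (squared norm 2)."""
    codewords = []
    for positions in combinations(range(4), 2):      # which two entries are nonzero
        for signs in product((-1, 1), repeat=2):
            v = np.zeros(4)
            v[list(positions)] = signs
            codewords.append(v)
    return np.array(codewords)

def quantize(vector, codebook):
    """Return the codeword closest in Euclidean distance to `vector`."""
    distances = np.linalg.norm(codebook - vector, axis=1)
    return codebook[np.argmin(distances)]

shell1 = d4_shell1()
print(len(shell1))                                            # 24 codewords
print(quantize(np.array([0.9, -1.1, 0.1, 0.0]), shell1))      # -> [ 1. -1.  0.  0.]
```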

7 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is compared with both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
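A minimal single-window version of the structural similarity index is sketched below. The published index is computed over an 11x11 Gaussian-weighted sliding window and averaged over the image; that windowing is omitted here, so the numbers are only indicative.

```python
# Minimal single-window sketch of the structural similarity (SSIM) index;
# the reference implementation uses a Gaussian-weighted sliding window and
# averages the local scores, which is omitted here.
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """SSIM computed over the whole image as a single window."""
    x = x.astype(float)
    y = y.astype(float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

ref = np.random.rand(64, 64) * 255
noisy = np.clip(ref + np.random.randn(64, 64) * 10, 0, 255)
print(ssim_global(ref, ref), ssim_global(ref, noisy))   # 1.0, then a value below 1
```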

40,609 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at first: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: A textbook tour of wavelet signal processing, from the Fourier and discrete transforms through frames, wavelet bases, wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
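The embedded, bitplane-ordered nature of the bit stream can be illustrated with the toy sketch below, which emits coefficient bits from the most significant bitplane downward and reconstructs from a truncated prefix. It omits the set-partitioning trees and entropy coding that make SPIHT efficient, and the coefficients are made-up integers.

```python
# Toy sketch of bitplane-ordered (embedded) transmission: most significant
# bitplane first, so truncating the bit budget at any point still keeps the
# largest-magnitude information. Set partitioning and entropy coding omitted.
import numpy as np

def bitplane_stream(coeffs):
    """Yield (plane, index, bit) triples from the most significant plane down."""
    mags = np.abs(coeffs).astype(int)
    num_planes = int(mags.max()).bit_length()
    for plane in range(num_planes - 1, -1, -1):
        for idx, m in enumerate(mags):
            yield plane, idx, (m >> plane) & 1

coeffs = np.array([37, -5, 12, 2, -20, 1])
stream = list(bitplane_stream(coeffs))

# Reconstruct from only the first k bits to see the progressive property.
k = 10
approx = np.zeros(len(coeffs), dtype=int)
for plane, idx, bit in stream[:k]:
    approx[idx] |= bit << plane
print(approx * np.sign(coeffs))   # coarse approximation of the coefficients
```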

5,890 citations

Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.
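Concept (2), the zerotree, can be sketched as a single predicate: a coefficient is a zerotree root at threshold T when it and all of its descendants in the subband tree are insignificant. The code below shows only that test under our own simplified quadtree parent-child rule; EZW's dominant/subordinate passes and adaptive arithmetic coder are not reproduced.

```python
# Hedged illustration of the zerotree test: a node is a zerotree root at a
# threshold if it and every descendant in the (simplified) subband quadtree
# are insignificant. Not a full EZW coder.
import numpy as np

def descendants(r, c, shape):
    """All descendants of (r, c) under a dyadic parent-child quadtree rule."""
    rows, cols = shape
    stack = [(2 * r, 2 * c), (2 * r, 2 * c + 1), (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
    result = []
    while stack:
        i, j = stack.pop()
        if i < rows and j < cols and (i, j) != (r, c):
            result.append((i, j))
            stack.extend([(2 * i, 2 * j), (2 * i, 2 * j + 1),
                          (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)])
    return result

def is_zerotree_root(coeffs, r, c, threshold):
    """True if (r, c) and all of its descendants are insignificant at threshold."""
    if abs(coeffs[r, c]) >= threshold:
        return False
    return all(abs(coeffs[i, j]) < threshold for i, j in descendants(r, c, coeffs.shape))

coeffs = np.random.randn(16, 16) * 4
print(is_zerotree_root(coeffs, 2, 3, threshold=32))   # likely True for these magnitudes
```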

5,559 citations