Author

William A. Pearlman

Bio: William A. Pearlman is an academic researcher from Rensselaer Polytechnic Institute. The author has contributed to research in topics including data compression and set partitioning in hierarchical trees. The author has an h-index of 36 and has co-authored 202 publications receiving 12,924 citations. Previous affiliations of William A. Pearlman include Texas A&M University and the University of Wisconsin-Madison.


Papers
Proceedings ArticleDOI
22 Mar 1999
TL;DR: The optimal space-frequency localization of the wavelet transform is exploited in embedded zerotree wavelet coding, which has been refined to give the best performance in the SPIHT (set partitioning in hierarchical trees) algorithm for lossy compression and S+P (S-transform and prediction) for lossless compression.
Abstract: The wavelet transform is known to yield among the most effective and computationally efficient techniques for image compression. Its optimal space-frequency localization is exploited in embedded zerotree wavelet coding, which has been refined to give the best performance in the SPIHT (set partitioning in hierarchical trees) algorithm for lossy compression and S+P (S-transform and prediction) for lossless compression. Using the multi-resolution property of the wavelet transform, one can also have progressive transmission for preliminary inspection, where the criterion for progressiveness can be either fidelity or resolution. The three important points of wavelet-based compression algorithms are: (1) partial ordering of transformed magnitudes, with order transmission using subset partitioning; (2) refinement bit transmission using ordered bit planes; and (3) use of the self-similarity of the transform coefficients across different scales.
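The three points above map directly onto a bit-plane coding loop. Below is a minimal illustrative Python sketch (not the authors' code; the coefficient values and pass count are made up) of points (1) and (2): a sorting pass that emits newly significant coefficients in partial magnitude order, and a refinement pass that emits the next magnitude bit of coefficients already known to be significant. Point (3), self-similarity across scales, is what zerotree and SPIHT set structures add on top of this loop.

```python
# Hypothetical sketch of embedded bit-plane coding of wavelet coefficients.
import math

def bitplane_passes(coeffs, num_planes=4):
    # Start at the largest power-of-two threshold not exceeding max |c|.
    T = 2 ** int(math.floor(math.log2(max(abs(c) for c in coeffs))))
    significant = set()
    for _ in range(num_planes):
        old = set(significant)
        sorting = []                           # (1) partial ordering pass
        for i, c in enumerate(coeffs):
            if i not in significant and abs(c) >= T:
                significant.add(i)
                sorting.append((i, c < 0))     # newly significant: index + sign
        # (2) refinement pass: next bit of previously significant coefficients
        refinement = [(abs(coeffs[i]) // T) & 1 for i in sorted(old)]
        yield T, sorting, refinement
        T //= 2

for T, s, r in bitplane_passes([34, -7, 3, 19, -1, 8]):
    print(T, s, r)
```

Truncating the output after any pass still yields the best approximation the transmitted planes allow, which is the embedding property the abstract describes.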

2 citations

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A computationally inexpensive analytic mean square error (MSE) distortion-rate estimator for progressive wavelet coders, which generates an exact distortion-rate (D-R) function for the 2-D and 3-D SPIHT algorithms, is utilized and can be used to increase image and video quality in CDMA systems.
Abstract: In this paper we consider the problem of faded wireless image and video transmission schemes under energy transmission constraints. An example of such a constrained system experiencing Rayleigh fading is code division multiple access (CDMA), where power control is used. A constraint is imposed on the average energy transmitted per image or image sequence. A computationally inexpensive analytic mean square error (MSE) distortion-rate estimator for progressive wavelet coders, which generates an exact distortion-rate (D-R) function for the 2-D and 3-D SPIHT algorithms, is utilized from our previous research. Using the D-R function, the expected mean square error (MSE) at the receiver is minimized by optimally assigning energy and parity to each transmission block among the packets. A gain of 1.09 dB relative to optimal equal error protection (EEP) is obtained by varying both energy and parity among the transmission blocks. The results can be used to increase image and video quality in CDMA systems.
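As a rough illustration of the optimization the abstract describes, the sketch below (a hedged toy, not the paper's algorithm; the weights and the halving loss model are assumptions) spends a fixed energy/parity budget one unit at a time on whichever transmission block currently buys the largest drop in expected MSE, the greedy marginal-gain shape that a convex D-R function makes optimal.

```python
# Hypothetical greedy budget allocation driven by a D-R importance model.
def greedy_allocation(marginal_gain, budget, n_blocks):
    """marginal_gain(block, units_already_given) -> expected-MSE reduction
    of one more energy/parity unit spent on that block."""
    units = [0] * n_blocks
    for _ in range(budget):
        best = max(range(n_blocks), key=lambda b: marginal_gain(b, units[b]))
        units[best] += 1
    return units

# Assumed model: each extra unit halves a block's residual loss cost, and
# earlier blocks of a progressive (embedded) stream matter more.
weights = [8.0, 4.0, 2.0, 1.0]                 # illustrative D-R importance
gain = lambda b, u: weights[b] * (0.5 ** u)
print(greedy_allocation(gain, budget=10, n_blocks=4))   # -> [4, 3, 2, 1]
```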

1 citation

Proceedings ArticleDOI
TL;DR: Added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.
Abstract: We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly, one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.
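A compact sketch of the decoding side of this idea, assuming the pywt package and its wavedec2/waverec2 coefficient layout (the paper itself works inside an EZW bitstream, not pywt): reconstructing at a reduced resolution simply drops the finest detail subbands before the inverse transform, which is why one layered bitstream can serve several spatial resolutions.

```python
import numpy as np
import pywt  # assumed available; any 2-D DWT library would do

img = np.random.rand(256, 256)
coeffs = pywt.wavedec2(img, 'bior4.4', level=4)

def decode_at_resolution(coeffs, drop_levels, wavelet='bior4.4'):
    """Reconstruct from one coefficient pyramid at reduced resolution by
    discarding the `drop_levels` finest detail-subband levels."""
    kept = coeffs[:-drop_levels] if drop_levels else coeffs
    return pywt.waverec2(kept, wavelet)

print(decode_at_resolution(coeffs, 0).shape)   # (256, 256) full resolution
print(decode_at_resolution(coeffs, 1).shape)   # (128, 128)
print(decode_at_resolution(coeffs, 2).shape)   # (64, 64)
```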

1 citation

Proceedings ArticleDOI
04 Oct 1992
Abstract: The theory of stationary processes is applied to homogeneous functional sequences. Examples of time and frequency shifts and dilations are considered. It is proved that the functional sequences are homogeneous (stationary) and therefore correspond to a one-dimensional stationary vector field. The spectral properties of these sequences are explored. It is also shown that the Gabor expansion corresponds to a two-dimensional homogeneous vector field. The spectral properties of the Gabor (1946) expansion are considered, and the formula for the coefficients of the Gabor decomposition is derived.
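For reference, a hedged sketch of the expansion in question, in the notation most texts use (the paper's own symbols may differ). The Gabor expansion writes a signal as a doubly indexed sum over time- and frequency-shifted copies of a window g, which is precisely the two-dimensional shift structure the abstract identifies with a two-dimensional homogeneous field:

```latex
f(t) \;=\; \sum_{m,n \in \mathbb{Z}} c_{mn}\, g(t - nT)\, e^{\,i 2\pi m F t},
\qquad
c_{mn} \;=\; \int f(t)\, \gamma^{*}(t - nT)\, e^{-i 2\pi m F t}\, dt,
```

where T and F are the time and frequency lattice steps and γ is a dual (biorthogonal) window for g.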

1 citation

Journal ArticleDOI
TL;DR: The average of the first error-free run length (FEFRL) is proposed as a simpler performance metric; on binary symmetric channels (BSC) it is obtained in closed form, which greatly simplifies performance optimization.
Abstract: We investigate and prove the relationships among several commonly used and new performance metrics for embedded image bit streams transmitted over noisy channels. The average of the first error-free run length (FEFRL) is proposed as a simpler performance metric. On binary symmetric channels (BSC), the average FEFRL is obtained in closed form, which greatly simplifies performance optimization. Simulation results justify the merit of the proposed technique.
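A one-line illustration of why the BSC case admits a closed form (under the textbook definition of a run; the paper's exact setup may differ): with independent bit errors of probability p, the first error-free run length L is geometrically distributed, so

```latex
P(L = k) \;=\; (1-p)^{k}\, p, \qquad
\mathbb{E}[L] \;=\; \sum_{k=0}^{\infty} k\,(1-p)^{k}\, p \;=\; \frac{1-p}{p}
```

for an unbounded stream. For an embedded bitstream, this expectation directly measures how long a prefix decodes cleanly, which is what makes it a natural optimization target.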

1 citation


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
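A compact sketch of the index on a pair of image patches, following the published SSIM formula with the conventional constants K1 = 0.01, K2 = 0.03 and dynamic range L = 255 (the authoritative MATLAB code is at the URL above; the windowing and pooling of the full method are omitted here):

```python
import numpy as np

def ssim_patch(x, y, L=255, K1=0.01, K2=0.03):
    """SSIM between two equally sized patches (no windowing/pooling)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

a = np.random.randint(0, 256, (8, 8)).astype(float)
print(ssim_patch(a, a))                          # identical patches -> 1.0
print(ssim_patch(a, np.clip(a + 20, 0, 255)))    # luminance shift -> < 1.0
```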

40,609 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first hearing, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: An introductory tour of wavelet signal processing, from the Fourier kingdom and time-frequency analysis through wavelet bases, wavelet packets, and local cosine bases to approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
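The set-significance test at the heart of set partitioning is simple enough to show in a few lines. The toy Python sketch below is hypothetical, not the published SPIHT code (real SPIHT maintains separate lists of insignificant sets, insignificant pixels, and significant pixels), but it makes the key point: a whole spatial-orientation tree that is insignificant against the current threshold costs a single 0 bit.

```python
# Hypothetical set-significance test over a nested coefficient tree.
def set_significant(tree, T):
    """tree: (value, [children]) tuples; True if any node magnitude >= T."""
    value, children = tree
    return abs(value) >= T or any(set_significant(c, T) for c in children)

# Tiny spatial-orientation tree: a root coefficient with two descendants.
tree = (3, [(-18, []), (5, [(2, [])])])
for T in (32, 16, 8, 4):
    print(T, set_significant(tree, T))  # one decision bit per set per pass
```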

5,890 citations

Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression achieved via adaptive arithmetic coding.
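Concept (2), zerotree prediction, can be illustrated with the four classic EZW symbols. The sketch below is a hedged toy (flat descendant list, one threshold; the real algorithm scans dominant and subordinate passes over a full coefficient tree and arithmetic-codes the symbols):

```python
# Hypothetical EZW symbol classification at one threshold T.
def ezw_symbol(value, descendants, T):
    if abs(value) >= T:
        return 'POS' if value > 0 else 'NEG'   # significant, with sign
    if all(abs(d) < T for d in descendants):
        return 'ZTR'    # zerotree root: entire subtree is insignificant
    return 'IZ'         # isolated zero: some descendant is significant

print(ezw_symbol(40, [3, -2], 32))     # POS
print(ezw_symbol(-5, [1, 0], 32))      # ZTR: one symbol covers the subtree
print(ezw_symbol(-5, [44, 0], 32))     # IZ
```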

5,559 citations