Author

William A. Pearlman

Bio: William A. Pearlman is an academic researcher from Rensselaer Polytechnic Institute. The author has contributed to research in topics: Data compression & Set partitioning in hierarchical trees (SPIHT). The author has an h-index of 36 and has co-authored 202 publications receiving 12,924 citations. Previous affiliations of William A. Pearlman include Texas A&M University & University of Wisconsin-Madison.


Papers
Proceedings ArticleDOI
29 Mar 2005
TL;DR: A very fast, low-complexity algorithm for resolution-scalable and random-access decoding is presented, which outperforms the LTW of Oliver et al. (2003) by up to two times in encoding and up to seven times in decoding.
Abstract: Summary form only given. A very fast, low complexity algorithm for resolution-scalable and random access decoding is presented. The algorithm avoids the multiple passes of bit-plane coding for speed improvement. The decrease in dynamic range of wavelet coefficient magnitudes is efficiently coded. The hierarchical dynamic range coding naturally enables resolution-scalable representation of a wavelet transformed image. The method predicts the dynamic range of energy in each subset based on the dynamic range of energy of a parent set. Speed improvement over SPIHT is up to two times in encoding, and up to four times in decoding. The loss of quality is very small. Our method outperforms the LTW of Oliver et al. (2003) by up to two times in encoding and up to seven times in decoding.
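As a rough illustration of the idea (a hypothetical sketch, not the authors' algorithm): because partitioning a set of wavelet coefficients can only shrink its maximum magnitude, each subset's dynamic range (bit depth) can be coded as a small non-negative decrement from its parent's, which is what makes the hierarchical representation cheap and naturally resolution-scalable.

# A minimal sketch of hierarchical dynamic-range coding: the bit depth of
# each wavelet-coefficient block is emitted as a non-negative decrement
# from its parent block's bit depth. All names here are illustrative.
import numpy as np

def bit_depth(block):
    """Number of bits needed for the largest magnitude in the block."""
    return int(np.max(np.abs(block))).bit_length()

def encode_ranges(block, parent_depth, out):
    """Recursively emit (parent_depth - depth) for each quadtree subset."""
    d = bit_depth(block)
    out.append(parent_depth - d)          # small non-negative symbol
    h, w = block.shape
    if h > 1 and w > 1 and d > 0:         # stop at 1x1 or all-zero sets
        for sub in (block[:h//2, :w//2], block[:h//2, w//2:],
                    block[h//2:, :w//2], block[h//2:, w//2:]):
            encode_ranges(sub, d, out)

coeffs = np.random.randint(-255, 256, size=(8, 8))
symbols = []
encode_ranges(coeffs, bit_depth(coeffs), symbols)
print(symbols)  # mostly small values, hence cheap to entropy-code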

2 citations

Proceedings ArticleDOI
01 Nov 2002
TL;DR: The initial test indicates that ERC-SPIHT gives excellent results in noisy channel conditions and is shown to have superior performance over MPEG-2 with FEC when communicated over a military satellite channel.
Abstract: Error Resilient and Error Concealment 3-D SPIHT (ERC-SPIHT) is a joint source channel coder developed to improve the overall performance against channel bit errors without requiring automatic-repeat-request (ARQ). The objective of this research is to test and validate the properties of two competing video compression algorithms in a wireless environment. The property focused on is error resiliency to the noise inherent in wireless data communication. ERC-SPIHT and MPEG-2 with forward error correction (FEC) are currently undergoing tests over a satellite communication link. The initial test indicates that ERC-SPIHT gives excellent results in noisy channel conditions and is shown to have superior performance over MPEG-2 with FEC when communicated over a military satellite channel.

2 citations

Proceedings ArticleDOI
10 Jan 1997
TL;DR: The rate-constrained block matching algorithm (RC-BMA), introduced in this paper, jointly minimizes DFD variance and the entropy or conditional entropy of the motion vectors, targeting low-rate video coding applications where the contribution of the motion-vector rate to the overall coding rate may be significant.
Abstract: The rate-constrained block matching algorithm (RC-BMA), introduced in this paper, jointly minimizes DFD variance and the entropy or conditional entropy of motion vectors when determining the motion vectors in low-rate video coding applications where the contribution of the motion vector rate to the overall coding rate might be significant. The motion vector rate versus DFD variance performance of RC-BMA employing size KxK blocks is shown to be superior to that of the conventional minimum distortion block matching algorithm (MD-BMA) employing size 2Kx2K blocks. Constraining the entropy or conditional entropy of motion vectors in RC-BMA results in smoother and more organized motion vector fields than those output by MD-BMA. The motion vector rate of RC-BMA can also be fine-tuned to a desired level for each frame by adjusting a single parameter.
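A minimal sketch of the rate-constrained matching idea (hypothetical: SSD stands in for DFD variance, and a toy vector-length model stands in for the entropy term): the selected vector minimizes a Lagrangian cost J = D + lambda*R, and lambda is the single parameter that trades motion-vector rate against prediction error.

# Illustrative rate-constrained block matching, not the paper's code.
import numpy as np

def mv_rate(dx, dy):
    """Toy rate model: larger vectors cost more bits (a stand-in for the
    entropy or conditional-entropy term used in RC-BMA)."""
    return abs(dx).bit_length() + abs(dy).bit_length() + 2

def rc_block_match(cur, ref, x, y, K=8, search=4, lam=10.0):
    """Rate-constrained best motion vector for the KxK block at (x, y)."""
    block = cur[y:y+K, x:x+K].astype(np.float64)
    best = (0, 0, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + K > ref.shape[0] or xx + K > ref.shape[1]:
                continue
            cand = ref[yy:yy+K, xx:xx+K].astype(np.float64)
            D = np.sum((block - cand) ** 2)        # prediction error energy
            J = D + lam * mv_rate(dx, dy)          # Lagrangian cost
            if J < best[2]:
                best = (dx, dy, J)
    return best[:2]

cur = np.random.randint(0, 256, (32, 32))
ref = np.roll(cur, (1, 2), axis=(0, 1))            # known shift of (1, 2)
print(rc_block_match(cur, ref, 8, 8))              # recovers (2, 1) = (dx, dy)

Raising lam biases the search toward short, smooth vector fields at the price of a larger residual, which mirrors the paper's single tuning parameter.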

2 citations

Proceedings ArticleDOI
01 Nov 1989
TL;DR: In this paper, an image pyramid incorporating properties of the human visual system is developed and used for compressing images, which is done in two stages: in the first stage quadrature mirror filters (QMFs) are used to decompose the image; in the second stage directional "dome" filters are applied to the subbands generated by the QMFs.
Abstract: An image pyramid incorporating properties of the human visual system is developed and is used for compressing images. Generation of the pyramid is done in two stages: in the first stage quadrature mirror filters (QMFs) are used to decompose the image; in the second stage directional "dome" filters are applied to the subbands generated by the QMFs. The dome filter is designed such that its impulse response function resembles the receptive field of the human cortical cell. Perfect reconstruction is possible by simply interpolating, filtering and adding the various subbands. Optimal quantization of the oriented pyramid components is done based on sensitivity of the visual system. Simulation results are presented for the "lena" image.
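A minimal sketch of the first stage's perfect-reconstruction property (illustrative only: Haar filters stand in for the paper's QMFs, and the second-stage directional dome filters are omitted): analysis splits the signal into downsampled low and high bands, and synthesis interpolates, filters, and adds them back.

# Two-band subband split with Haar filters; synthesis recovers the input
# exactly, illustrating the interpolate-filter-add reconstruction path.
import numpy as np

def analyze(x):
    p = x.reshape(-1, 2)
    lo = (p[:, 0] + p[:, 1]) / np.sqrt(2)   # lowpass band, downsampled
    hi = (p[:, 0] - p[:, 1]) / np.sqrt(2)   # highpass band, downsampled
    return lo, hi

def synthesize(lo, hi):
    y = np.empty(2 * lo.size)
    y[0::2] = (lo + hi) / np.sqrt(2)         # upsample + filter + add
    y[1::2] = (lo - hi) / np.sqrt(2)
    return y

x = np.random.randn(16)
lo, hi = analyze(x)
print(np.allclose(synthesize(lo, hi), x))    # True: perfect reconstruction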

2 citations

Proceedings ArticleDOI
20 Nov 2001
TL;DR: The need for ARQ is eliminated by making the 3-D SPIHT bitstream more robust and resistant to channel errors, and the reconstructed video is shown to be superior to that of MPEG-2, with the margin of superiority growing substantially as the channel becomes noisier.
Abstract: Compressed video bitstreams require protection from channel errors in a wireless channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and its real-time capability in compression of video. A forward-error-correcting (FEC) channel code (RCPC) combined with a single ARQ (automatic repeat request) proved to be an effective means for protecting the bitstream. In this paper, the need for ARQ is eliminated by making the 3-D SPIHT bitstream more robust and resistant to channel errors. Packetization of the bitstream, and reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness, is demonstrated and combined with channel coding not only to protect the integrity of the packets, but also to allow detection of packet decoding failures, so that only the cleanly recovered packets are reconstructed. In extensive comparative tests, the reconstructed video is shown to be superior to that of MPEG-2, with the margin of superiority growing substantially as the channel becomes noisier.
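As a toy illustration of the error-detection side of this design (a hypothetical sketch, not the paper's packetizer, with a CRC-32 standing in for the error-detecting role of the channel code): the bitstream is split into packets, each carrying a checksum, and the decoder keeps only the packets that verify cleanly.

# Packetize an embedded bitstream, detect packet decoding failures, and
# reconstruct from the clean packets only. Payload size is illustrative.
import zlib

PAYLOAD = 200  # bytes per packet

def packetize(bitstream: bytes):
    pkts = []
    for i in range(0, len(bitstream), PAYLOAD):
        chunk = bitstream[i:i+PAYLOAD]
        crc = zlib.crc32(chunk).to_bytes(4, "big")
        pkts.append(chunk + crc)                     # payload + checksum
    return pkts

def depacketize(pkts):
    """Keep only packets whose CRC checks; corrupt packets are dropped,
    mimicking 'reconstruct only the cleanly recovered packets'."""
    good = []
    for p in pkts:
        chunk, crc = p[:-4], p[-4:]
        if zlib.crc32(chunk).to_bytes(4, "big") == crc:
            good.append(chunk)
    return b"".join(good)

stream = bytes(range(256)) * 4
pkts = packetize(stream)
pkts[1] = b"\x00" + pkts[1][1:]                      # simulate a channel error
print(len(depacketize(pkts)), "of", len(stream), "bytes recovered")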

2 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
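The index itself is compact. Below is a single-window sketch (the published method computes SSIM over local Gaussian-weighted 11x11 windows and averages the local scores, which is omitted here; the constants follow the paper's defaults K1=0.01, K2=0.03 for 8-bit dynamic range L=255).

# Single-window SSIM between two images of equal size.
import numpy as np

def ssim_global(x, y, L=255.0):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2      # stabilizing constants
    mx, my = x.mean(), y.mean()                    # luminance terms
    vx, vy = x.var(), y.var()                      # contrast terms
    cov = ((x - mx) * (y - my)).mean()             # structure term
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

img = np.random.rand(64, 64) * 255
noisy = img + np.random.randn(64, 64) * 10
print(ssim_global(img, img))    # 1.0 for identical images
print(ssim_global(img, noisy))  # < 1.0 under distortion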

40,609 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: An introduction to a Transient World and an Approximation Tour of Wavelet Packet and Local Cosine Bases.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
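To make the first principle concrete, here is a minimal sketch (illustrative, not the SPIHT codec) of the magnitude significance test that drives set partitioning: a set is significant at bit plane n when its largest coefficient magnitude reaches 2^n, and scanning planes from the top down yields the partial ordering by magnitude that is transmitted bit plane by bit plane.

# Bit-plane significance testing over a set of wavelet coefficients.
import numpy as np

def significant(coeff_set, n):
    """A set is significant at plane n if any magnitude reaches 2**n.
    An insignificant set costs one bit; a significant one is partitioned
    into subsets and retested."""
    return np.max(np.abs(coeff_set)) >= (1 << n)

coeffs = np.array([3, -9, 40, -2, 7, 1])
n_max = int(np.max(np.abs(coeffs))).bit_length() - 1   # top plane: 2**5 = 32
for n in range(n_max, -1, -1):
    mask = np.abs(coeffs) >= (1 << n)
    print(f"plane {n}: significant at indices {np.flatnonzero(mask)}")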

5,890 citations

Journal ArticleDOI
J.M. Shapiro1
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression, which is achieved via adaptive arithmetic coding.
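As a small illustration of concept (3) (a hypothetical sketch, not Shapiro's coder): successive-approximation quantization halves the threshold each pass and refines every coefficient by at most one decision per pass, so truncating the decision stream early still leaves a coarser but valid reconstruction, which is the essence of embedded coding.

# Successive-approximation refinement of a coefficient vector.
import numpy as np

def successive_approximation(c, passes):
    T = 2.0 ** np.floor(np.log2(np.max(np.abs(c))))   # initial threshold
    recon = np.zeros_like(c, dtype=float)
    for _ in range(passes):
        for i, v in enumerate(c):
            if abs(v) - abs(recon[i]) >= T:           # one refinement decision
                recon[i] += np.sign(v) * T
        T /= 2.0                                      # halve the threshold
    return recon

c = np.array([57.0, -29.0, 6.0, -3.0])
for p in (1, 2, 5):
    print(p, successive_approximation(c, p))  # error shrinks as passes grow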

5,559 citations