scispace - formally typeset
Author

William A. Pearlman

Bio: William A. Pearlman is an academic researcher from Rensselaer Polytechnic Institute. His research focuses on data compression and set partitioning in hierarchical trees (SPIHT). He has an h-index of 36 and has co-authored 202 publications receiving 12,924 citations. His previous affiliations include Texas A&M University and the University of Wisconsin-Madison.


Papers
Proceedings ArticleDOI
07 Dec 1998
TL;DR: A 3D integer wavelet packet transform is described that allows implicit bit shifting of wavelet coefficients to approximate a 3D unitary transformation to achieve good lossy coding performance.
Abstract: We examine progressive lossy-to-lossless compression of medical volumetric data using three-dimensional (3D) integer wavelet packet transforms and set partitioning in hierarchical trees (SPIHT). To achieve good lossy coding performance, we describe a 3D integer wavelet packet transform that allows implicit bit shifting of wavelet coefficients to approximate a 3D unitary transformation. We also address context modeling for efficient entropy coding within the SPIHT framework. Both lossy and lossless coding performance improve on previously reported results.
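The lossless pathway above depends on an integer-to-integer wavelet transform. A minimal one-level sketch using the 5/3 lifting filter, a common reversible choice, may make the idea concrete; it is not the paper's packet transform or bit-shifting scheme, and circular boundary extension is assumed for brevity:

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the reversible 5/3 integer wavelet via lifting.

    All arithmetic is integer shifts, so the transform is exactly
    invertible -- the property lossless coding needs. Circular
    boundary handling via np.roll is an illustrative simplification.
    """
    s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = d - ((s + np.roll(s, -1)) >> 1)      # predict: detail from even neighbors
    s = s + ((np.roll(d, 1) + d + 2) >> 2)   # update: preserve the running mean
    return s, d

def lifting_53_inverse(s, d):
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    s = s - ((np.roll(d, 1) + d + 2) >> 2)
    d = d + ((s + np.roll(s, -1)) >> 1)
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x
```

Because every rounding in the forward pass is repeated exactly in the inverse, integer inputs round-trip bit-for-bit; the "implicit bit shifting" the abstract mentions addresses the separate problem of making such integer filters approximate a unitary transform for good lossy performance.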

17 citations

Proceedings ArticleDOI
27 Mar 2007
TL;DR: 4D-SBHP efficiently encodes 4D image data by the exploitation of the dependencies in all dimensions, while enabling progressive SNR and resolution decompression, and achieves better compression performance on 4-D medical images when compared with 3-D volumetric compression schemes.
Abstract: This paper proposes a low-complexity wavelet-based method for progressive lossy-to-lossless compression of four-dimensional (4-D) medical images. The subband block hierarchical partitioning (SBHP) algorithm is modified and extended to four dimensions and applied to every code block independently. The resultant algorithm, 4D-SBHP, efficiently encodes 4D image data by exploiting the dependencies in all dimensions, while enabling progressive SNR and resolution decompression. The resolution-scalable and lossy-to-lossless performances are empirically investigated. The experimental results show that our 4-D scheme achieves better compression performance on 4-D medical images than 3-D volumetric compression schemes.

17 citations

Proceedings ArticleDOI
07 Oct 2001
TL;DR: This paper proposes several low complexity algorithmic modifications to the SPIHT (set partitioning in hierarchical trees) image coding method of Said and Pearlman (1996).
Abstract: This paper proposes several low-complexity algorithmic modifications to the SPIHT (set partitioning in hierarchical trees) image coding method of Said and Pearlman (1996). The modifications exploit universal traits common to real-world images. Approximately 1-2% compression gain (bit-rate reduction for a given mean squared error) has been obtained for the images in our test suite by incorporating all of the proposed modifications into SPIHT.

16 citations

Journal ArticleDOI
TL;DR: A novel and computationally inexpensive analytic mean square error (MSE) distortion-rate (D-R) estimator for SPIHT is presented, which generates a nearly exact D-R function for the 2D and 3D SPIHT algorithms.
Abstract: In this letter, a novel and computationally inexpensive analytic mean square error (MSE) distortion-rate (D-R) estimator for SPIHT, which generates a nearly exact D-R function for the two- and three-dimensional SPIHT algorithms, is presented. Utilizing our D-R estimate, we employ unequal and equal error protection in order to minimize the end-to-end MSE distortion of the transform domain. A major contribution of this letter is the simple and extremely accurate analytical D-R model, which potentially improves upon pre-existing methodologies and applications that rely on an accurate and computationally inexpensive D-R estimate.
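The letter's closed-form estimator is not reproduced in the abstract, but the quantity it models, the MSE remaining after each bit plane of a successive-approximation coder, can be measured directly. The sketch below computes that operational curve; the function name, midpoint reconstruction, and plane count are illustrative choices, not the letter's method:

```python
import numpy as np

def bitplane_distortion_curve(coeffs, num_planes=8):
    """Measured MSE after decoding down to each bit plane.

    This is not the letter's analytic estimator; it computes the
    operational D-R behaviour of successive-approximation quantization
    that such an estimator aims to predict in closed form.
    """
    c = coeffs.astype(np.float64).ravel()
    tmax = 2.0 ** np.floor(np.log2(np.abs(c).max()))  # top threshold
    curve = []
    for p in range(num_planes):
        step = tmax / 2.0 ** p
        # reconstruct significant coefficients at their interval midpoints
        q = np.sign(c) * (np.floor(np.abs(c) / step) + 0.5) * step
        q[np.abs(c) < step] = 0.0  # still-insignificant coefficients decode to 0
        curve.append((p, np.mean((c - q) ** 2)))
    return curve
```

An analytic model of this curve, as in the letter, avoids having to decode at many truncation points when allocating rate or error protection.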

16 citations

Proceedings ArticleDOI
01 Aug 2005
TL;DR: In this article, an information-theoretic approach is used to determine the amount of information that may be safely transferred over a steganographic channel with a passive adversary, where the channel transition probabilities and a detection function are combined.
Abstract: An information-theoretic approach is used to determine the amount of information that may be safely transferred over a steganographic channel with a passive adversary. A steganographic channel, or stego-channel, is a pair consisting of the channel transition probabilities and a detection function. When a message is sent, it first encounters a distortion (due to the channel), then is subject to inspection by a passive adversary (using the detection function). This paper presents results on the amount of information that may be transferred over an arbitrary stego-channel with vanishing probabilities of error and detection.

16 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, which can be applied to both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
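The structural similarity index combines luminance, contrast, and structure comparisons of the two images. A sketch of the formula applied once over whole images; the published index averages it over local sliding windows (typically Gaussian-weighted), and the constants k1, k2 and data_range below are the commonly used defaults, assumed here:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """The SSIM formula computed over whole images (single window).

    c1 and c2 stabilize the ratios when means or variances are near
    zero. Local windowed averaging, as in the published index, is
    omitted to keep the formula itself visible.
    """
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # cross-covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1; any distortion lowers the index, with structural (covariance-reducing) distortions penalized more than uniform shifts.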

40,609 citations

Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: A textbook tour of wavelet analysis, from Fourier and time-frequency foundations to wavelet bases, wavelet packet and local cosine bases, approximation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
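The set-partitioning principle the abstract describes, a single bit disposing of an entire insignificant set, can be shown in miniature. This sketch uses binary splitting of a 1D array rather than SPIHT's spatial-orientation trees and hierarchical lists, so it illustrates the sorting-pass idea only:

```python
import numpy as np

def sort_pass(c, idx, n, bits):
    """Emit significance bits for index set idx at threshold 2**n.

    An insignificant set costs one '0' bit no matter its size; a
    significant set is split and tested recursively until individual
    significant coefficients (and their signs) are located.
    """
    significant = int(np.max(np.abs(c[idx])) >= 2 ** n)
    bits.append(significant)
    if significant:
        if len(idx) == 1:
            bits.append(int(c[idx[0]] < 0))  # sign of the coefficient just found
        else:
            mid = len(idx) // 2
            sort_pass(c, idx[:mid], n, bits)  # recurse into the two halves
            sort_pass(c, idx[mid:], n, bits)
    return bits
```

A decoder running the same recursion on the received bits reproduces the partial ordering exactly, which is why the sorting information needs no side channel.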

5,890 citations

Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression, achieved via adaptive arithmetic coding.
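Concept (3), successive-approximation quantization, is what makes the bit stream embedded. A small sketch of the refinement mechanics, with the function name and test values illustrative, and entropy coding and the zerotree prediction omitted:

```python
import numpy as np

def saq_passes(c, num_passes):
    """Successive-approximation quantization of a coefficient array.

    Each pass halves the threshold T: newly significant coefficients
    are placed at the midpoint of [T, 2T), and already-significant ones
    receive one refinement step of T/2 toward their true value.
    Truncating after any pass still yields a usable reconstruction,
    which is the embedded property.
    """
    T = 2.0 ** np.floor(np.log2(np.abs(c).max()))
    recon = np.zeros_like(c, dtype=np.float64)
    out = []
    for _ in range(num_passes):
        newly = (np.abs(c) >= T) & (recon == 0)
        recon[newly] = np.sign(c[newly]) * 1.5 * T
        refine = (recon != 0) & ~newly
        # refinement bit: is the true magnitude above or below the estimate?
        recon[refine] += np.where(np.abs(c[refine]) >= np.abs(recon[refine]),
                                  np.sign(recon[refine]) * T / 2,
                                  -np.sign(recon[refine]) * T / 2)
        T /= 2.0
        out.append(recon.copy())
    return out
```

Each pass halves every significant coefficient's uncertainty interval, so the maximum reconstruction error shrinks monotonically as more of the stream is decoded.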

5,559 citations