Author

William A. Pearlman

Bio: William A. Pearlman is an academic researcher from Rensselaer Polytechnic Institute. The author has contributed to research in topics: Data compression & Set partitioning in hierarchical trees. The author has an h-index of 36 and has co-authored 202 publications receiving 12,924 citations. Previous affiliations of William A. Pearlman include Texas A&M University & University of Wisconsin-Madison.


Papers
Proceedings ArticleDOI
22 Jun 2004
TL;DR: This work reduces the computational requirements of the additive noise steganalysis presented by Harmsen and Pearlman by considering the histogram between pairs of channels in RGB images, which is shown to offer computational savings of approximately two orders of magnitude while only slightly decreasing classification accuracy.
Abstract: This work reduces the computational requirements of the additive noise steganalysis presented by Harmsen and Pearlman. The additive noise model assumes that the stegoimage is created by adding a pseudo-noise to a coverimage. This addition predictably alters the joint histogram of the image. In color images it has been shown that this alteration can be detected using a three-dimensional Fast Fourier Transform (FFT) of the histogram. As the computation of this transform is typically very intensive, a method to reduce the required processing is desirable. By considering the histogram between pairs of channels in RGB images, three separate two-dimensional FFTs are used in place of the original three-dimensional FFT. This method is shown to offer computational savings of approximately two orders of magnitude while only slightly decreasing classification accuracy.
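As a rough illustration of the channel-pair idea, here is a minimal sketch assuming numpy. The center-of-mass summary of the histogram characteristic function (HCF) is borrowed from the Harmsen-Pearlman line of work; the function name and feature choice are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def pairwise_hcf_features(rgb):
    """Steganalysis features from 2D channel-pair histograms (sketch).

    Replaces the single 3D FFT of the full RGB joint histogram with
    three 2D FFTs over the (R,G), (R,B), (G,B) pair histograms.
    rgb is an HxWx3 uint8 image; returns one center-of-mass feature
    per channel pair.
    """
    feats = []
    for a, b in [(0, 1), (0, 2), (1, 2)]:
        # 256x256 joint histogram of the two color channels
        h, _, _ = np.histogram2d(rgb[..., a].ravel(),
                                 rgb[..., b].ravel(),
                                 bins=256, range=[[0, 256], [0, 256]])
        # Histogram characteristic function: |DFT| of the histogram.
        # Additive i.i.d. stego noise multiplies the HCF by the noise's
        # characteristic function, shifting mass toward low frequencies.
        hcf = np.abs(np.fft.fft2(h))[:129, :129]   # non-redundant part
        k = np.arange(129)
        ki, kj = np.meshgrid(k, k, indexing="ij")
        feats.append(((ki + kj) * hcf).sum() / (2.0 * hcf.sum()))
    return np.array(feats)
```

Three 256x256 FFTs cost on the order of 3(256^2)log(256^2) operations versus (256^3)log(256^3) for the 3D transform, a ratio of roughly 100, consistent with the two orders of magnitude reported above.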

20 citations

Proceedings ArticleDOI
09 Jan 1998
TL;DR: Simulation shows that 3D SPIHT with reduced coding latency still achieves coding result comparable to MPEG-2, and exhibits more uniform PSNR fluctuations.
Abstract: In this paper, a modification of 3D SPIHT, the three-dimensional extension of the 2D SPIHT still-image coder to image sequences, is presented. By introducing an unbalanced tree structure, it allows more flexibility in choosing the number of frames to be processed at one time. Simulation shows that 3D SPIHT with reduced coding latency still achieves coding results comparable to MPEG-2 and exhibits more uniform PSNR fluctuations. In addition, an extension to color video coding is accomplished without explicit rate allocation and can be applied to any color-plane representation.

20 citations

Journal ArticleDOI
TL;DR: It is concluded that in a medium rate range below 1 bit/pel/frame, where reconstructions for hybrid transform/DPCM may be unsatisfactory, there is enough margin for improvement to consider more sophisticated coding schemes.
Abstract: We seek to evaluate the efficiency of hybrid transform/DPCM interframe image coding relative to an optimal scheme that minimizes the mean-squared error in encoding a stationary Gaussian image sequence. The stationary assumption leads us to use the asymptotically optimal discrete Fourier transform (DFT) on the full frame of an image. We encode an actual image sequence with full-frame DFT/DPCM at several rates and compare it to previous interframe coding results with the same sequence. We also encode a single frame at these same rates using a full-frame DFT to demonstrate the inherent coding gains of interframe transform/DPCM over intraframe coding. We then generate a pseudorandom image sequence with precise Gauss-Markov statistics and encode it by hybrid full-frame DFT/DPCM at various rates. We compare the signal-to-noise ratios (SNRs) of these reconstructions to the optimal ones calculated from the rate-distortion function. We conclude that in a medium rate range below 1 bit/pel/frame, where reconstructions for hybrid transform/DPCM may be unsatisfactory, there is enough margin for improvement to consider more sophisticated coding schemes.
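To make the hybrid scheme concrete, here is a minimal closed-loop transform/DPCM sketch along the temporal axis, assuming numpy; the 2D FFT stands in for the full-frame DFT, and the quantizer step and prediction leak factor are illustrative parameters, not values from the paper.

```python
import numpy as np

def dft_dpcm_encode(frames, step=8.0, leak=0.95):
    """Hybrid full-frame DFT/DPCM along the temporal axis (sketch).

    Each frame gets a full-frame 2D DFT; every transform coefficient
    is then DPCM-coded across frames: predict from the previous
    frame's reconstructed coefficient, quantize the residual uniformly.
    """
    pred = np.zeros(frames[0].shape, dtype=complex)
    symbols, recon_frames = [], []
    for f in frames:
        coeff = np.fft.fft2(f)
        resid = coeff - leak * pred
        q = np.round(resid / step)           # quantized residual symbols
        rec = leak * pred + q * step         # decoder's reconstruction
        symbols.append(q)
        recon_frames.append(np.fft.ifft2(rec).real)
        pred = rec                           # closed-loop prediction
    return symbols, recon_frames
```

Per-frame SNRs against the originals can then be computed from recon_frames, mimicking the paper's comparisons against the rate-distortion bound.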

19 citations

Proceedings ArticleDOI
18 Mar 2005
TL;DR: In this article, the scalable three-dimensional set partitioned embedded block coder (3D-SPECK), an embedded, block-based, wavelet transform coding algorithm of low complexity, is proposed for hyperspectral image compression.
Abstract: Here we propose the scalable three-dimensional set partitioned embedded block coder (3D-SPECK), an embedded, block-based, wavelet transform coding algorithm of low complexity for hyperspectral image compression. Scalable 3D-SPECK supports both SNR-progressive and resolution-progressive coding. After the wavelet transform, 3D-SPECK treats each subband as a coding block. To generate an SNR-scalable bitstream, the stream is organized so that same-indexed bit planes are grouped together across coding blocks and subbands, with higher bit planes preceding lower ones. To generate a resolution-scalable bitstream, each subband is encoded separately into its own sub-bitstream, and rate is allocated among the sub-bitstreams. To decode the image sequence to a particular resolution level at a given rate, each subband is encoded at a higher rate so that each sub-bitstream can be truncated to its assigned rate. Resolution-scalable 3D-SPECK is well suited to image-server applications. Results show that scalable 3D-SPECK provides excellent performance on hyperspectral image compression.
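The two stream organizations can be shown with a toy sketch: given each subband's embedded sub-bitstream already split into per-bitplane chunks, SNR ordering interleaves same-indexed chunks across subbands, while resolution ordering concatenates each subband's chunks truncated to an allocated budget. The chunk data and function names below are illustrative.

```python
from itertools import zip_longest

def snr_scalable(chunks_per_subband):
    """Interleave same-indexed bitplane chunks across all subbands,
    most significant bitplanes first (SNR-progressive ordering)."""
    stream = []
    for plane in zip_longest(*chunks_per_subband, fillvalue=b""):
        stream.extend(plane)
    return b"".join(stream)

def resolution_scalable(chunks_per_subband, byte_budgets):
    """Concatenate each subband's embedded sub-bitstream, truncated
    to its allocated byte budget (resolution-progressive ordering)."""
    stream = []
    for chunks, budget in zip(chunks_per_subband, byte_budgets):
        stream.append(b"".join(chunks)[:budget])
    return b"".join(stream)

# Toy chunks: two subbands, three bitplanes each (illustrative data).
sb0 = [b"A2", b"A1", b"A0"]   # subband 0: bitplane 2 (MSB) .. 0
sb1 = [b"B2", b"B1", b"B0"]   # subband 1
print(snr_scalable([sb0, sb1]))                 # b'A2B2A1B1A0B0'
print(resolution_scalable([sb0, sb1], [4, 2]))  # b'A2A1B2'
```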

18 citations

Proceedings ArticleDOI
10 Sep 2000
TL;DR: This paper describes a low-memory cache efficient hybrid block coder for images in which an image subband decomposition is partitioned into a combination of spatial blocks and subband blocks, which are independently coded.
Abstract: This paper describes a low-memory, cache-efficient hybrid block coder (HBC) for images, in which an image subband decomposition is partitioned into a combination of spatial blocks and subband blocks that are coded independently. Spatial blocks contain hierarchical trees spanning subband levels and are each encoded with the SPIHT algorithm; subband blocks contain a block of coefficients from within a single subband and are each encoded with the SPECK algorithm. The decomposition may have a dyadic or a wavelet-packet structure. Rate is allocated among the sub-bitstreams produced for the blocks, and the sub-bitstreams are packetized, as sketched below. The partitioning structure supports resolution embedding, so the final bitstream may be progressive in fidelity or in resolution.
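A minimal sketch of the truncate-and-packetize step, assuming Python; the (block id, length) header layout is an illustrative assumption, not the coder's actual packet syntax.

```python
import struct

def packetize(block_streams, budgets):
    """Packetize independently coded block sub-bitstreams (sketch).

    Each embedded sub-bitstream is truncated to its allocated budget
    and prefixed with a (block_id, length) header so a decoder can
    locate any block without parsing the others.
    """
    out = bytearray()
    for block_id, (bits, budget) in enumerate(zip(block_streams, budgets)):
        payload = bits[:budget]          # embedded: any prefix decodes
        out += struct.pack(">HI", block_id, len(payload))
        out += payload
    return bytes(out)

def depacketize(stream):
    """Recover {block_id: payload} from the packetized stream."""
    blocks, pos = {}, 0
    while pos < len(stream):
        block_id, n = struct.unpack_from(">HI", stream, pos)
        pos += 6
        blocks[block_id] = stream[pos:pos + n]
        pos += n
    return blocks
```

Because each sub-bitstream is embedded, any prefix of it decodes to a coarser version of its block, which is what makes independent truncation safe.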

18 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index for image quality assessment is proposed, based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
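For reference, SSIM combines luminance, contrast, and structure comparisons; below is a single-window sketch of the standard formula with the usual constants. The released implementation instead averages local SSIM values over sliding Gaussian-weighted windows, so this global variant is only a simplification.

```python
import numpy as np

def ssim_global(x, y, L=255):
    """Single-window SSIM over whole images (sketch).

    SSIM(x, y) = (2*mx*my + C1)(2*sxy + C2) /
                 ((mx^2 + my^2 + C1)(sx^2 + sy^2 + C2))
    with the usual constants C1 = (0.01*L)^2 and C2 = (0.03*L)^2.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()   # cross-covariance
    return ((2 * mx * my + C1) * (2 * sxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```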

40,609 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: A textbook tour of wavelet signal processing, from the Fourier kingdom and time-frequency analysis through wavelet bases, wavelet packets, and local cosine bases to approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
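The sorting pass described above rests on a per-bitplane significance test applied to sets of coefficients linked by the spatial orientation tree. A minimal sketch follows, assuming numpy; it omits the LL-band root grouping and the list bookkeeping (LIP/LIS/LSP) of full SPIHT, and the names are illustrative.

```python
import numpy as np

def descendants(shape, r, c):
    """All descendants of (r, c) in the spatial orientation tree:
    offspring are the 2x2 block at (2r, 2c), recursively. Sketch only;
    full SPIHT treats the LL-band roots specially."""
    h, w = shape
    for rr, cc in [(2*r, 2*c), (2*r, 2*c+1), (2*r+1, 2*c), (2*r+1, 2*c+1)]:
        if (rr, cc) == (r, c):          # only happens at (0, 0)
            continue
        if rr < h and cc < w:
            yield (rr, cc)
            yield from descendants(shape, rr, cc)

def significant(coeffs, coords, n):
    """S_n significance test: does the set hold any |c| >= 2^n?
    SPIHT codes one bit per such test and splits a set into its
    offspring only when the test says 'significant'."""
    return any(abs(coeffs[r, c]) >= (1 << n) for r, c in coords)

coeffs = np.array([[34, -2,  5,  1],
                   [ 0,  3, -1,  2],
                   [ 7, -9,  0,  1],
                   [ 2,  1,  4, -3]])
n = int(np.log2(np.abs(coeffs).max()))          # top bitplane, here 5
print(significant(coeffs, descendants(coeffs.shape, 0, 1), n))  # False
```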

5,890 citations

Journal ArticleDOI
J.M. Shapiro1
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.
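Concept (3), successive-approximation quantization, is what makes the stream truncatable at any point. A minimal single-coefficient sketch follows; real EZW interleaves dominant and subordinate passes over all coefficients, and the names here are illustrative.

```python
def successive_approx(mag, n_max):
    """Successive-approximation quantization of one magnitude (sketch).

    Emits bitplanes MSB-first. After ANY prefix of the bits the
    decoder holds a valid reconstruction (midpoint of the remaining
    uncertainty interval): the embedded property that lets the stream
    be truncated at an arbitrary point.
    """
    bits, approx, recons = [], 0, []
    for n in range(n_max, -1, -1):
        bit = (mag >> n) & 1
        bits.append(bit)
        approx |= bit << n
        # Decoder-side estimate: known high bits plus half of the
        # still-uncoded range below bitplane n.
        recons.append(approx + (1 << n) // 2)
    return bits, recons

bits, recons = successive_approx(41, 5)   # 41 = 0b101001
print(bits)    # [1, 0, 1, 0, 0, 1]
print(recons)  # [48, 40, 44, 42, 41, 41] -- steadily tighter estimates
```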

5,559 citations