Author

William A. Pearlman

Bio: William A. Pearlman is an academic researcher from Rensselaer Polytechnic Institute. The author has contributed to research in topics: Data compression & Set partitioning in hierarchical trees. The author has an h-index of 36 and has co-authored 202 publications receiving 12,924 citations. Previous affiliations of William A. Pearlman include Texas A&M University & University of Wisconsin-Madison.


Papers
Proceedings ArticleDOI
09 Oct 1994
TL;DR: An algorithm for estimating and coding the texture model parameters is presented, and it is shown that the suggested algorithm yields high quality reconstructions at low bit rates.
Abstract: A novel approach for coding textured images is presented. The texture field is assumed to be a realization of a regular homogeneous random field, which can have a mixed spectral distribution. On the basis of a 2D Wold-like decomposition, the field is represented as a sum of a purely indeterministic field, a harmonic field, and a countable number of evanescent fields. We present an algorithm for estimating and coding the texture model parameters, and show that the suggested algorithm yields high-quality reconstructions at low bit rates. The model and the resulting coding algorithm are seen to be applicable to a wide variety of texture types found in natural images.
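In such a Wold-like model the harmonic component shows up as sharp peaks in the 2-D power spectrum. As a rough illustration (not the estimation algorithm of the paper, which also handles evanescent components), one can keep the strongest periodogram bins as the harmonic part and treat the remainder as the indeterministic residual; the NumPy sketch below does exactly that, with the hypothetical parameter n_peaks controlling how many peaks are kept.

```python
import numpy as np

def split_harmonic_indeterministic(texture, n_peaks=8):
    """Crude Wold-like split: keep the strongest periodogram bins as the
    'harmonic' part and leave the rest as the indeterministic residual.
    Illustrative only; the paper's estimator is far more careful."""
    x = texture - texture.mean()                  # remove the DC component
    F = np.fft.fft2(x)
    mag = np.abs(F)
    # indices of the strongest spectral bins (taken in conjugate pairs)
    idx = np.argsort(mag, axis=None)[-2 * n_peaks:]
    mask = np.zeros(F.shape, dtype=bool)
    mask.flat[idx] = True
    harmonic = np.real(np.fft.ifft2(np.where(mask, F, 0)))
    indeterministic = x - harmonic
    return harmonic, indeterministic

# usage (hypothetical): harmonic, residual = split_harmonic_indeterministic(patch)
```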

4 citations

Proceedings ArticleDOI
24 Oct 2004
TL;DR: An improved method of pixel 'range' coding is proposed to deal with the problem of extracting regions of compressed images with the pixel values within a pre-defined range without having to decompress the whole image.
Abstract: Technical imaging applications, such as coding "images" of digital elevation maps, require extracting regions of compressed images whose pixel values lie within a pre-defined range, without having to decompress the whole image. Previously, we introduced a class of nonlinear transforms, which are small modifications of linear transforms, to facilitate a search for regions with pixel values below (above) a given 'threshold' without incurring any penalty in coding efficiency. However, coding efficiency had to be somewhat compromised when searching for regions with a given pixel 'range', especially at high coding rates. In this paper, we propose an improved method of pixel 'range' coding that deals with this problem. Results show significant improvements in coding efficiency.
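The nonlinear transforms themselves are specific to the coder, but the goal, finding all regions whose pixels fall in a range [lo, hi] without decoding the whole image, can be illustrated with a simple per-block min/max index built at encode time: any block whose interval misses the query range is skipped. This is only an analogy to the idea, with hypothetical helper names, not the transform proposed in the paper.

```python
import numpy as np

def build_block_index(image, block=32):
    """Per-block (min, max) summary, stored alongside the compressed data."""
    h, w = image.shape
    index = {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = image[r:r + block, c:c + block]
            index[(r, c)] = (int(tile.min()), int(tile.max()))
    return index

def blocks_in_range(index, lo, hi):
    """Return only the blocks that can contain pixels in [lo, hi];
    all other blocks never need to be decoded."""
    return [pos for pos, (bmin, bmax) in index.items()
            if bmax >= lo and bmin <= hi]

# usage (hypothetical): candidates = blocks_in_range(build_block_index(dem), 1200, 1500)
```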

3 citations

Proceedings ArticleDOI
26 Oct 1997
TL;DR: This paper improves on the results of Polyak and Pearlman, presenting filters that are comparable in compression performance to, and faster than, the biorthogonal 10/18 filters, and combines the approach with lifting and prediction schemes similar to those discussed by Said and Pearlman.
Abstract: In a previous paper, we presented a method to design perfect reconstruction filters using arbitrary lowpass filter kernels and presented fast filters with compression performance surpassing the well-known 9/7 biorthogonal filters. This paper improves on the results obtained by Polyak and Pearlman (see Proc. IEEE International Conference on Image Processing, Santa Barbara, CA, vol.1, p.660-63, 1997), presenting filters that are comparable in compression performance to, and faster than, the biorthogonal 10/18 filters. Furthermore, we combine our approach with lifting and prediction schemes similar to the ones discussed by Said and Pearlman (see IEEE Trans. on Image Processing, vol.5, p.1303-10, 1996) in deriving the S+P filters, later extended by Sweldens (see Appl. Comput. Harm. Anal., vol.3, no.2, p.186-200, 1996), thus obtaining integer-to-integer transforms whose performance is comparable to that of the S+P filters. At this stage, our algorithms are, however, considerably slower. In any case, the flexibility of our method shows some promise as a basis for finding new integer-to-integer filters.
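For orientation, the kind of integer-to-integer lifting step referred to here can be illustrated with the classic S transform, the averaging/differencing core on which the S+P family builds: the forward step produces integer outputs and the inverse recovers the input exactly. This is the textbook transform, shown as a minimal sketch, not one of the new filters derived in the paper.

```python
import numpy as np

def s_transform_forward(x):
    """Integer S transform of an even-length integer signal."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = even - odd                # detail (difference) signal
    s = odd + d // 2              # smooth signal = floor((even + odd) / 2)
    return s, d

def s_transform_inverse(s, d):
    """Exact integer reconstruction from (s, d)."""
    odd = s - d // 2
    even = d + odd
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# lossless round trip
sig = np.array([7, 3, 10, 10, -2, 5], dtype=np.int64)
assert np.array_equal(s_transform_inverse(*s_transform_forward(sig)), sig)
```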

3 citations

Proceedings ArticleDOI
07 Jan 2002
TL;DR: A new method for partitioning the wavelet coefficients into spatio-temporal blocks is proposed to obtain higher error resilience and to support error concealment; it also gives higher coding performance in noiseless channels than the conventional method of grouping contiguous trees.
Abstract: This paper presents an embedded video compression scheme with error resilience and error concealment using the three-dimensional SPIHT (3-D SPIHT) algorithm. We use a new method for partitioning the wavelet coefficients into spatio-temporal (s-t) blocks to obtain higher error resilience and to support error concealment. Instead of grouping adjacent coefficients, we group coefficients at fixed intervals in the lowest subband to obtain interleaved trees. Each sub-block then corresponds to a group of full frames within the image sequence, because each group of interleaved trees has coefficients dispersed over the entire frame group. We then separate the stream into fixed-length packets and encode each one with a channel code. Experiments show that the proposed method gives higher error resilience in noisy channels, since the decoded coefficients affected by an early decoding error are spread out over the whole frame area along the sequence, and the lost coefficients can be concealed with the surrounding coefficients even if some of the substreams are missing entirely. In addition, the proposed method gives higher coding performance in noiseless channels than the conventional method of grouping contiguous trees.
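The essential change relative to conventional 3-D SPIHT packetization is how the tree roots in the lowest subband are grouped: taking roots at a fixed stride disperses each packet's trees over the whole frame group, whereas contiguous grouping concentrates them. A toy sketch of the two groupings, with hypothetical counts of roots and packets, is shown below.

```python
import numpy as np

def contiguous_groups(n_roots, n_groups):
    """Conventional grouping: adjacent tree roots go into the same packet."""
    return np.array_split(np.arange(n_roots), n_groups)

def interleaved_groups(n_roots, n_groups):
    """Interleaved grouping: roots taken at a fixed interval, so every
    packet's trees are dispersed over the whole lowest subband."""
    roots = np.arange(n_roots)
    return [roots[g::n_groups] for g in range(n_groups)]

# 16 tree roots split into 4 packets:
#   contiguous  -> [0 1 2 3], [4 5 6 7], [8 9 10 11], [12 13 14 15]
#   interleaved -> [0 4 8 12], [1 5 9 13], [2 6 10 14], [3 7 11 15]
print(contiguous_groups(16, 4))
print(interleaved_groups(16, 4))
```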

3 citations

Proceedings ArticleDOI
04 Jan 2002
TL;DR: Experimental results show that the proposed motion-compensated two-link chain coding technique for encoding 2-D binary shape sequences in object-based video coding outperforms the CAE technique used in the MPEG-4 verification model.
Abstract: In this paper, we present a motion-compensated two-link chain coding technique to efficiently encode 2-D binary shape sequences for object-based video coding. The technique consists of a contour motion estimation and compensation algorithm and a two-link chain coding algorithm. The object contour is defined on a 6-connected contour lattice for a smoother contour representation. The contour in the current frame is first predicted by global motion and local motion based on the decoded contour in the previous frame; it is then segmented into motion success segments, which can be predicted by the global or local motion, and motion failure segments, which cannot. For each motion failure segment, a two-link chain code, which uses one symbol to represent two consecutive contour links, followed by an arithmetic coder is proposed for efficient coding. Each motion success segment can be represented by its motion vector and length. For contour motion estimation and compensation, besides the translational motion model, an affine global motion model is proposed and investigated for complex global motion. We test the performance of the proposed technique on several MPEG-4 shape test sequences. The experimental results show that the proposed scheme outperforms the CAE technique used in the MPEG-4 verification model.
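The 'two-link' idea, using one symbol for two consecutive contour links so an arithmetic coder can exploit their joint statistics, can be sketched independently of the 6-connected lattice and the motion compensation. The fragment below assumes an ordinary 8-direction chain code, which is a simplification of the paper's representation.

```python
from collections import Counter

def to_two_link_symbols(links, n_dirs=8):
    """Pair consecutive chain links (each a direction in 0..n_dirs-1) into
    single symbols; an odd trailing link is mapped to its own symbol range."""
    symbols = [links[i] * n_dirs + links[i + 1]
               for i in range(0, len(links) - 1, 2)]
    if len(links) % 2:
        symbols.append(n_dirs * n_dirs + links[-1])   # unpaired final link
    return symbols

# The symbol histogram would then drive an arithmetic coder.
chain = [0, 0, 1, 0, 7, 7, 0, 1, 1]
symbols = to_two_link_symbols(chain)
print(symbols)            # [0, 8, 63, 1, 65]
print(Counter(symbols))
```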

3 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information; its promise is demonstrated through comparison with subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
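The structural similarity index itself is a closed-form expression over local means, variances and covariance. A minimal single-window version is sketched below; the published index computes this statistic over local Gaussian-weighted windows and averages it over the image, so this is a simplification rather than the reference implementation.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM; the published index averages this statistic
    over local Gaussian-weighted windows instead of the whole image."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    numerator = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    denominator = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return numerator / denominator

# identical images give SSIM = 1.0
img = np.random.randint(0, 256, (64, 64))
print(ssim_global(img, img))
```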

40,609 citations

Journal ArticleDOI

[...]

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at school, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1998
TL;DR: An introduction to wavelet-based signal processing, covering Fourier and time-frequency analysis, frames, wavelet and wavelet packet bases, local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
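The 'partial ordering by magnitude with a set partitioning sorting algorithm' amounts to repeatedly asking whether any coefficient in a set exceeds the current bit-plane threshold 2^n, splitting sets that do and spending a single bit on sets that do not. The sketch below strips away the spatial orientation trees, the LIP/LIS/LSP lists and the refinement pass that give SPIHT its efficiency, so it illustrates only the significance test and bit-plane loop, not the actual coder.

```python
import numpy as np

def significant(coeffs, index_set, n):
    """SPIHT-style significance test: is any |c| >= 2**n in the set?"""
    return bool(np.any(np.abs(coeffs[list(index_set)]) >= 2 ** n))

def sorting_pass_sketch(coeffs):
    """Toy sorting pass on a flat coefficient array: walks bit planes from
    the most significant down, splitting sets that test significant."""
    n = int(np.floor(np.log2(np.max(np.abs(coeffs)))))
    bits = []
    while n >= 0:
        # start each bit plane from the single set of all indices
        stack = [list(range(len(coeffs)))]
        while stack:
            s = stack.pop()
            sig = significant(coeffs, s, n)
            bits.append(int(sig))                   # one bit per set tested
            if sig and len(s) > 1:
                half = len(s) // 2
                stack.extend([s[:half], s[half:]])  # partition and recurse
        n -= 1
    return bits

print(sorting_pass_sketch(np.array([34, -3, 7, 0, 12, -25, 1, 2])))
```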

5,890 citations

Journal ArticleDOI
J.M. Shapiro1
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.
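Concept (2), predicting the absence of significant information across scales, is what the zerotree symbol captures: during a dominant pass each coefficient is labelled positive or negative significant, zerotree root (insignificant with all descendants insignificant), or isolated zero. A minimal classifier for a single coefficient, assuming a caller-supplied list of its descendant values, might look like the following; it is a sketch of the symbol logic only, not of the full EZW coder.

```python
def ezw_symbol(value, descendants, threshold):
    """Classify one wavelet coefficient for an EZW dominant pass.
    `descendants` holds the coefficient's descendants across finer scales."""
    if abs(value) >= threshold:
        return "POS" if value > 0 else "NEG"
    if all(abs(d) < threshold for d in descendants):
        return "ZTR"     # zerotree root: the whole tree can be skipped
    return "IZ"          # isolated zero: insignificant, but a descendant is not

# e.g. with threshold 32 (successive-approximation passes halve it each time)
print(ezw_symbol(41, [3, -5, 2, 0], 32))    # POS
print(ezw_symbol(-6, [1, 2, -3, 4], 32))    # ZTR
print(ezw_symbol(-6, [40, 2, -3, 4], 32))   # IZ
```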

5,559 citations