
Showing papers by William A. Pearlman published in 2011


Book
30 Dec 2011
TL;DR: This book contains a wealth of illustrations, step-by-step descriptions of algorithms, examples and practice problems, which make it an ideal textbook for senior undergraduate and graduate students, as well as a useful self-study tool for researchers and professionals.
Abstract: With clear and easy-to-understand explanations, this book covers the fundamental concepts and coding methods of signal compression, whilst still retaining technical depth and rigor. It contains a wealth of illustrations, step-by-step descriptions of algorithms, examples and practice problems, which make it an ideal textbook for senior undergraduate and graduate students, as well as a useful self-study tool for researchers and professionals. Principles of lossless compression are covered, as are various entropy coding techniques, including Huffman coding, arithmetic coding and Lempel-Ziv coding. Scalar and vector quantization and trellis coding are thoroughly explained, and a full chapter is devoted to mathematical transformations including the KLT, DCT and wavelet transforms. The workings of transform and subband/wavelet coding systems, including JPEG2000 and SBHP image compression and H.264/AVC video compression, are explained and a unique chapter is provided on set partition coding, shedding new light on SPIHT, SPECK, EZW and related methods.

53 citations
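
As a small illustration of one of the entropy coding techniques the book covers, the sketch below builds a Huffman code for a toy source. It is not taken from the book; the symbol counts and the heap-based construction are just the standard textbook algorithm.

```python
import heapq
from collections import Counter

def huffman_code(symbol_counts):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    # Each heap entry: (weight, tie_breaker, {symbol: code_so_far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(symbol_counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: one symbol, one-bit code
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)      # two lightest subtrees
        w2, _, t2 = heapq.heappop(heap)
        # Prefix '0' to codes in the lighter subtree, '1' to the other
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

counts = Counter("abracadabra")              # toy source
print(huffman_code(counts))                  # e.g. {'a': '0', 'b': '110', ...}
```

Frequent symbols receive short codewords and rare symbols long ones, which is the basic idea behind all the entropy coding methods listed in the abstract.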


Journal ArticleDOI
TL;DR: This paper presents lossy compression algorithms that build on a state-of-the-art codec by incorporating a lattice vector quantizer codebook, thereby allowing it to process multiple samples at a time, and shows that they constitute a viable option for the compression of volumetric datasets with large amounts of data.
Abstract: This paper presents lossy compression algorithms that build on a state-of-the-art codec, the Set Partitioned Embedded Block Coder (SPECK), by incorporating a lattice vector quantizer codebook, thereby allowing it to process multiple samples at a time. In our tests, we employ scenes derived from standard AVIRIS hyperspectral images, which possess 224 spectral bands. The first proposed method, LVQ-SPECK, uses a lattice vector quantizer-based codebook in the spectral direction to encode a number of consecutive bands equal to the codeword dimension. It is shown that the choice of orientation codebook used in the encoding greatly influences the performance results. In fact, even though the method does not make use of a 3-D discrete wavelet transform, in some cases it produces results that are comparable to those of other state-of-the-art 3-D codecs. The second proposed algorithm, DWP-SPECK, incorporates the 1-D discrete wavelet transform in the spectral direction, producing a discrete wavelet packet decomposition, and simultaneously encodes a larger number of spectral bands. This method yields performance results that are comparable or superior to those attained by other 3-D wavelet coding algorithms such as 3D-SPECK and JPEG2000 (in its multi-component version). We also examine a novel method for reducing the number of codewords used during the refinement pass which, for most codebooks, provides a reduction in rate while following the same encoding path as the original methods, thereby improving their performance. We show that it is possible to separate the original codebook into two distinct classes and to use a flag when sending refinement information to indicate to which class the information belongs. In summary, the results obtained by our proposed methods show that they constitute a viable option for the compression of volumetric datasets with large amounts of data.

13 citations
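
The paper above builds on a lattice vector quantizer codebook. As background only (the authors' actual codebooks and orientations are not reproduced here), the sketch below shows the classic Conway-Sloane rule for finding the nearest point of the D_n lattice, the kind of nearest-lattice-point step a lattice vector quantizer performs. The 4-sample input vector is an invented example.

```python
import numpy as np

def nearest_Dn(x):
    """Nearest point of the D_n lattice (integer vectors with even coordinate sum)
    to a real vector x, via the classic Conway-Sloane rounding rule."""
    x = np.asarray(x, dtype=float)
    f = np.rint(x)                       # round every coordinate to the nearest integer
    if int(f.sum()) % 2 == 0:
        return f                         # even coordinate sum: already a D_n point
    # Odd sum: re-round the coordinate with the largest rounding error the other way
    k = np.argmax(np.abs(x - f))
    f[k] += 1.0 if x[k] > f[k] else -1.0
    return f

# Example: quantize a 4-sample vector (e.g. one pixel across 4 consecutive spectral bands)
print(nearest_Dn([0.6, 1.3, 0.2, 2.6]))  # -> [0. 1. 0. 3.]
```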



Book ChapterDOI
01 Oct 2011

1 citation



Book ChapterDOI
01 Oct 2011
TL;DR: In this chapter, the authors make a modest start toward understanding how to compress realistic sources by presenting the theory and practice of quantization and coding of sources of independent and identically distributed random variables.
Abstract:
Introduction: In normal circumstances, lossless compression reduces file sizes by roughly a factor of 2, sometimes a little more and sometimes a little less. Often it is acceptable, and even necessary, to tolerate some loss or distortion between the original and its reproduction. In such cases, much greater compression becomes possible. For example, the highest-quality JPEG-compressed images and MP3 audio are compressed about 6 or 7 to 1. The objective is to minimize the distortion, as measured by some criterion, for a given rate in bits per sample or, equivalently, to minimize the rate for a given level of distortion. In this chapter, we make a modest start toward understanding how to compress realistic sources by presenting the theory and practice of quantization and coding of sources of independent and identically distributed random variables. Later in the chapter, we shall explain some aspects of optimal lossy compression, so that we can assess how well our methods perform compared to what is theoretically possible.
Quantization: The sources of data that we recognize as digital are discrete in value or amplitude, and these values are represented by a finite number of bits. The set of these discrete values is a reduction from a much larger set of possible values, because of the limitations of our computers and systems in precision, storage, and transmission speed. We therefore accept the general model of our data source as continuous in value. The discretization process is called quantization.
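
To make the discretization idea concrete, here is a minimal sketch of the simplest quantizer of this kind, a uniform (midtread) scalar quantizer. It is an illustrative example rather than code from the chapter; the step size and sample values are arbitrary.

```python
import numpy as np

def uniform_quantize(x, step):
    """Uniform (midtread) scalar quantizer: map each sample to the nearest
    multiple of `step`. Returns integer indices and the reconstructed values."""
    indices = np.rint(np.asarray(x, dtype=float) / step).astype(int)  # encoder output
    reconstruction = indices * step                                   # decoder output
    return indices, reconstruction

samples = np.array([0.07, -0.42, 1.88, 0.5001])
idx, rec = uniform_quantize(samples, step=0.25)
print(idx)                             # [ 0 -2  8  2]
print(rec)                             # [ 0.   -0.5   2.    0.5 ]
print(np.max(np.abs(samples - rec)))   # 0.12, bounded by step/2 = 0.125
```

Only the integer indices need to be entropy coded and transmitted; the reconstruction error per sample is at most half the step size, which is the rate-versus-distortion trade-off the chapter analyzes.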

Book ChapterDOI
01 Oct 2011
TL;DR: In this chapter, the authors introduce the concept of distributed source coding (DSC), present the conditions under which DSC is ideally efficient, and discuss some practical schemes that attempt to realize rate savings in the DSC paradigm.
Abstract:
In this chapter, we introduce the concept that correlated sources need not be encoded jointly to achieve greater efficiency than encoding them independently. In fact, if they are encoded independently and decoded jointly, it is theoretically possible under certain conditions to achieve the same efficiency as when they are encoded jointly. Such a method for coding correlated sources is called distributed source coding (DSC). Figure 14.1 depicts the paradigm of DSC with independent encoding and joint decoding. In certain applications, such as sensor networks and mobile communications, circuit complexity and power drain are too burdensome to be tolerated at the transmission side. DSC shifts complexity and power consumption from the transmission side to the receiver side, where they can be more easily handled and tolerated. This chapter presents the conditions under which DSC is ideally efficient and discusses some practical schemes that attempt to realize rate savings in the DSC paradigm. There has been a plethora of recent work on this subject, so an encyclopedic account is impractical and ill-advised in a textbook. The goal here is to explain the principles clearly and elucidate them with a few examples.
Slepian-Wolf coding for lossless compression: Consider two correlated, discrete scalar sources X and Y. Theoretically, these sources can be encoded independently without loss using H(X) and H(Y) bits, respectively, where H(X) and H(Y) are the entropies of these sources. However, if encoded jointly, both sources can be reconstructed perfectly using only H(X, Y) bits, the joint entropy of the two sources.
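
As a numerical illustration of the Slepian-Wolf bound described above, the sketch below computes H(X), H(Y), and H(X, Y) for an assumed toy joint distribution of two correlated binary sources; the probabilities are invented for illustration and do not come from the chapter.

```python
from math import log2

# Toy joint distribution of two correlated binary sources X and Y
# (illustrative numbers, not from the chapter).
p = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

def H(dist):
    """Entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# Marginal distributions of X and Y
px = {x: sum(q for (a, _), q in p.items() if a == x) for x in (0, 1)}
py = {y: sum(q for (_, b), q in p.items() if b == y) for y in (0, 1)}

print(f"H(X)   = {H(px):.3f} bits")    # 1.000
print(f"H(Y)   = {H(py):.3f} bits")    # 1.000
print(f"H(X,Y) = {H(p):.3f} bits")     # about 1.469

# Separate coding needs H(X) + H(Y) = 2 bits per source pair. Slepian-Wolf
# says independent encoding with joint decoding can approach H(X,Y), about
# 1.47 bits, e.g. R_Y = H(Y) = 1 and R_X = H(X|Y) = H(X,Y) - H(Y), about 0.47.
```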