
Showing papers by "William A. Pearlman published in 2005"


Proceedings ArticleDOI
01 Jan 2005
TL;DR: A very fast, low complexity algorithm for resolution scalable and random access decoding is presented that avoids the multiple passes of bit-plane coding for speed improvement.
Abstract: A very fast, low complexity algorithm for resolution scalable and random access decoding is presented. The algorithm avoids the multiple passes of bit-plane coding for speed improvement. The decrease in the dynamic range of wavelet coefficient magnitudes is efficiently coded. The hierarchical dynamic range coding naturally enables a resolution scalable representation of a wavelet transformed image.
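As a rough illustration of the hierarchical dynamic-range idea, the following Python sketch codes each subset's dynamic range as a decrement from its parent's and then emits coefficients in fixed-length form at the leaves, with no bit-plane passes. This is a hypothetical rendering, not the authors' implementation: the quadtree partition, unary-style decrement symbols, and power-of-two block dimensions are all assumptions.

```python
import numpy as np

def dyn_range(block):
    """Bits needed to represent the largest coefficient magnitude."""
    return int(np.abs(block).max()).bit_length()

def encode_block(block, parent_range, out):
    """Code this block's dynamic range as a decrement from its parent's,
    then recurse on quadrants or emit coefficients directly
    (fixed length, no bit-plane passes; sign bits omitted for brevity)."""
    r = dyn_range(block)
    out.append(parent_range - r)   # small non-negative symbol, cheap to code
    if r == 0:
        return                     # an all-zero set costs a single symbol
    if block.size <= 4:
        out.extend(int(abs(c)) for c in block.flat)   # r bits per value
        return
    h, w = block.shape
    for quad in (block[:h//2, :w//2], block[:h//2, w//2:],
                 block[h//2:, :w//2], block[h//2:, w//2:]):
        encode_block(quad, r, out)

# Example: out = []; coeffs = np.random.randint(-255, 256, (8, 8))
# encode_block(coeffs, dyn_range(coeffs), out)
```

Because ranges shrink monotonically down the hierarchy, the decrements are small and compress well, which is what lets the scheme skip bit-plane scanning entirely.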

20 citations


Proceedings ArticleDOI
18 Mar 2005
TL;DR: In this article, a scalable three-dimensional set partitioned embedded block (3D-SPECK) is proposed for hyperspectral image compression, which is an embedded, block-based, wavelet transform coding algorithm of low complexity.
Abstract: Here we propose scalable three-dimensional set partitioned embedded block (3D-SPECK), an embedded, block-based, wavelet transform coding algorithm of low complexity for hyperspectral image compression. Scalable 3D-SPECK supports both SNR and resolution progressive coding. After the wavelet transform, 3D-SPECK treats each subband as a coding block. To generate an SNR scalable bitstream, the stream is organized so that same-indexed bit planes are grouped across coding blocks and subbands, with higher bit planes preceding lower ones. To generate resolution scalable bitstreams, each subband is encoded separately into a sub-bitstream, and rate is allocated amongst the sub-bitstreams produced for each block. To decode the image sequence to a particular resolution level at a given rate, each subband must be encoded at a higher rate so that its sub-bitstream can be truncated to the assigned rate. Resolution scalable 3D-SPECK is well suited to image-server applications. Results show that scalable 3D-SPECK provides excellent performance on hyperspectral image compression.
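The SNR-progressive organization described above, with same-indexed bit planes grouped across coding blocks and the most significant planes first, can be sketched in a few lines of Python. The data layout here is an assumption for illustration, not the authors' code:

```python
def snr_scalable_stream(block_planes):
    """block_planes[b][p]: coded bytes for bit plane p of coding block b,
    with p = 0 the most significant plane. The SNR-progressive stream
    emits plane p of every block before plane p + 1 of any block."""
    depth = max(len(planes) for planes in block_planes)
    pieces = []
    for p in range(depth):            # most significant plane first
        for planes in block_planes:
            if p < len(planes):
                pieces.append(planes[p])
    return b"".join(pieces)
```

Truncating such a stream at any point keeps, for every block, its highest bit planes, which is exactly what makes the bitstream quality scalable.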

18 citations


Proceedings ArticleDOI
01 Aug 2005
TL;DR: In this article, an information-theoretic approach is used to determine the amount of information that may be safely transferred over a steganographic channel with a passive adversary, where a steganographic channel is a pair consisting of channel transition probabilities and a detection function.
Abstract: An information-theoretic approach is used to determine the amount of information that may be safely transferred over a steganographic channel with a passive adversary. A steganographic channel, or stego-channel is a pair consisting of the channel transition probabilities and a detection function. When a message is sent, it first encounters a distortion (due to the channel), then is subject to inspection by a passive adversary (using the detection function). This paper presents results on the amount of information that may be transferred over an arbitrary stego-channel with vanishing probabilities of error and detection.
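In standard information-theoretic notation (assumed here; the paper's exact formalism may differ), the setup can be written as follows:

```latex
% A stego-channel pairs a noisy channel with a detection rule:
%   (W, d),  with  W(y \mid x)  the channel transition probabilities
%   and  d : \mathcal{Y}^n \to \{0, 1\}  the passive adversary's detector.
% A rate R is safely achievable if there exist (2^{nR}, n) codes whose
% error probability and detection probability both vanish:
\[
  P_e^{(n)} \longrightarrow 0
  \qquad \text{and} \qquad
  \Pr\bigl\{ d(Y^n) = 1 \bigr\} \longrightarrow 0
  \quad \text{as } n \to \infty .
\]
```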

16 citations


Journal ArticleDOI
TL;DR: Simulations show that the multilayered protection of 3-D SPIHT outperforms single-layer protection methods, providing higher average PSNRs and lower PSNR variances.

11 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: An alternative interpretation of the so-called Occam filter is provided and it is argued that optimal denoising is achieved at the corresponding critical encoding rate rather than at the encoding rates suggested by other compression-based denoisers.
Abstract: In this paper, we elaborate on denoising schemes based on lossy compression. First, we provide an alternative interpretation of the so-called Occam filter and relate it to the complexity-regularized denoising schemes in the literature. Next, we discuss the 'critical distortion' of a noisy source and argue that optimal denoising is achieved at the corresponding critical encoding rate rather than at the encoding rates suggested by other compression-based denoisers. Finally, we discuss the so-called 'indirect rate distortion problem'. We focus particularly on high bit-rate encoding of noisy sources, show that lossless compression of a denoised source is often very wasteful of bits, and suggest a simple way of determining an appropriate bit rate for compressing a denoised source economically while retaining its initial denoised quality.
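For reference, the indirect (remote) rate-distortion problem mentioned above is conventionally stated as follows, where the encoder observes only a noisy version Y of the source X. The notation is the standard textbook one, assumed here rather than taken from the paper:

```latex
\[
  R_{X|Y}(D) \;=\;
  \min_{\substack{p(\hat{x} \mid y)\,:\\ \mathbb{E}\, d(X, \hat{X}) \le D}}
  I(Y; \hat{X})
\]
% The encoder sees only Y (e.g. Y = X + N), yet distortion is measured
% against the clean source X -- which is exactly the denoising situation.
```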

9 citations


Proceedings ArticleDOI
14 Mar 2005
TL;DR: A simple, practical method to estimate the optimal bit rate online is proposed and theoretically justified; the resulting scheme provides embedded lossy and lossless performance competitive with the best results published so far in the literature.
Abstract: We propose an integrated, wavelet-based, two-stage coding scheme for lossy, near-lossless, and lossless compression of medical volumetric data. The method determines the bit rate of the lossy layer during encoding, without any iteration. It is in the spirit of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given pixel-wise maximum error bound. We focus on selecting the optimum bit rate for the lossy coder so as to minimize the total (lossy plus residual) bit rate in the near-lossless and lossless cases. We propose a simple and practical method to estimate the optimal bit rate online and provide a theoretical justification for it. Experimental results show that the proposed scheme provides embedded lossy and lossless performance competitive with the best results published so far in the literature, with the added feature of near-lossless coding.
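The residual stage follows the classic near-lossless construction: quantizing the residual with step 2δ+1 bounds every pixel error by δ. A minimal Python sketch of that step, with hypothetical helper names and the entropy coding omitted:

```python
import numpy as np

def quantize_residual(original, lossy_recon, delta):
    """Quantize the residual with step 2*delta + 1 so that every
    reconstructed pixel differs from the original by at most delta."""
    residual = original.astype(np.int64) - lossy_recon.astype(np.int64)
    step = 2 * delta + 1
    return (residual + delta) // step      # floor division; arithmetic-code this

def reconstruct(lossy_recon, q, delta):
    """Decoder side: add back the dequantized residual."""
    step = 2 * delta + 1
    return lossy_recon.astype(np.int64) + q * step
```

With delta = 0 the residual is coded exactly, which gives the lossless case of the same two-stage scheme.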

7 citations


Proceedings ArticleDOI
25 May 2005
TL;DR: This work proposes resolution progressive Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), an embedded wavelet based algorithm for hyperspectral image compression that also supports random Region-Of-Interest (ROI) access.
Abstract: We propose resolution progressive Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), an embedded wavelet-based algorithm for hyperspectral image compression that also supports random Region-Of-Interest (ROI) access. For a hyperspectral image sequence, an integer wavelet transform is applied along all three dimensions. The transformed image sequence exhibits a hierarchical pyramidal structure. Each subband is treated as a code block, and the algorithm encodes each code block separately to generate an embedded sub-bitstream. The sub-bitstream for each subband is SNR progressive, and the overall bitstream for the whole sequence is resolution progressive. Rate is allocated amongst the sub-bitstreams produced for each block: the full number of bits available is devoted to a given scale, and only partial decoding is needed for resolutions below full scale. The overall bitstream supports lossy-to-lossless hyperspectral image compression. Applying resolution scalable 3D-SPECK independently to each 3D coefficient tree generates an embedded bitstream that supports random ROI access. Given an ROI, the decoder identifies the relevant trees and reconstructs only that region. Because identification is done at the decoder side, only one embedded bitstream needs to be encoded, and different users at the decoding or transmission end can each select and decode their own region of interest. The structure of hyperspectral images reveals spectral responses that make them ideal candidates for compression by 3D-SPECK. Results show that the proposed algorithm has excellent performance on hyperspectral image compression.
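The ROI mechanism relies on each coefficient tree rooted at the coarsest scale governing a fixed spatial footprint. A simplified 2D Python sketch of mapping an ROI to the tree roots whose sub-bitstreams must be decoded (the 3D case adds the spectral axis; names and layout are hypothetical):

```python
def roi_tree_roots(roi, levels):
    """Map a full-resolution ROI (x0, y0, x1, y1) to coefficient-tree
    roots at the coarsest scale. After `levels` wavelet decompositions,
    each root at (row, col) covers a (2**levels x 2**levels) spatial
    footprint, so only intersecting trees need decoding."""
    x0, y0, x1, y1 = roi
    f = 1 << levels                       # spatial footprint of one root
    return [(row, col)
            for row in range(y0 // f, (y1 - 1) // f + 1)
            for col in range(x0 // f, (x1 - 1) // f + 1)]

# roi_tree_roots((100, 40, 180, 90), levels=4)
# -> roots covering x in [96, 192) and y in [32, 96)
```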

7 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: Experimental results support the idea that a higher-degree zerotree coder has more coding power, and explain why the well-known SPIHT algorithm can code a wider range of zerotrees than EZW.
Abstract: A degree-k zerotree model is presented in order to quantify the coding power of zerotrees in wavelet-based image coding. Based on the model, the coding behaviors of modern zerotree-based image coders are clearly explained. We also explain why the well-known SPIHT algorithm can code a wider range of zerotrees than EZW. Experimental results support our idea that a higher-degree zerotree coder has more coding power.
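Under one natural reading of the model (assumed here, since the abstract does not spell out the definition), a degree-k zerotree is one in which every node at depth k or more below the root is insignificant; EZW's zerotree-root symbol then codes degree-0 trees, while SPIHT's type-A and type-B sets correspond to degrees 1 and 2. A small Python sketch of classifying a tree under that assumed definition:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: float
    children: list = field(default_factory=list)

def zerotree_degree(root, threshold):
    """Smallest k such that every node at depth >= k below the root is
    insignificant (|value| < threshold). Degree 0 matches EZW's zerotree
    root; SPIHT's type-A and type-B sets correspond to degrees 1 and 2."""
    def deepest_significant(node, depth):
        d = depth if abs(node.value) >= threshold else -1
        for child in node.children:
            d = max(d, deepest_significant(child, depth + 1))
        return d
    return deepest_significant(root, 0) + 1
```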

6 citations


Proceedings ArticleDOI
TL;DR: Experimental results with 3D-SPIHT on video sequences show that the presented bi-section idea gives substantial speed improvement with minimal bit overhead.
Abstract: For faster random access to a target image block, a bi-section idea is applied to link image blocks. Conventional methods configure the blocks in a linearly linked way, for which the block seek time depends entirely on the location of the block in the compressed bitstream. Here the block linkage information is configured so that binary search is possible, giving a worst-case block seek time of log2(n) for n blocks. Experimental results with 3D-SPIHT on video sequences show that the presented idea gives substantial speed improvement with minimal bit overhead.
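The seek procedure is essentially a binary search driven by offsets stored in the block headers. A schematic Python version, where each probe models following one stored midpoint offset (the exact linkage layout in the paper may differ):

```python
def seek_block(offsets, k):
    """Locate block k by binary descent. Each probe stands for following
    one midpoint offset recorded in the linkage information, so at most
    ceil(log2(n)) links are dereferenced, versus up to n - 1 hops for a
    conventional linear chain."""
    lo, hi, hops = 0, len(offsets) - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        hops += 1                  # one stored-link dereference
        if k <= mid:
            hi = mid
        else:
            lo = mid + 1
    return offsets[lo], hops

# With n = 1024 blocks, seek_block never needs more than 10 hops.
```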

6 citations


01 Jan 2005
TL;DR: A resolution scalable and random accessible image coding algorithm, PROGRES (Progressive Resolution Decompression), is designed based on predictive dynamic range coding of wavelet coefficients, without bit-plane coding; in related work, the superior coding efficiency of SPIHT over EZW is explained through its ability to code higher-order zerotrees.
Abstract: Modern wavelet-based image compression methods provide not only high compression performance, but also the capability to support various features, such as quality (SNR) scalability, resolution scalability, and region-of-interest encoding and decoding. Quality scalability is commonly achieved via bit-plane coding, which also helps to improve compression, since neighboring bits provide convenient and powerful contexts for entropy coding. However, in many important applications (e.g., digital cameras), the images always need a pre-defined high quality, and any extra effort required for quality scalability is wasted. Furthermore, for compressing a very large image source, low time complexity is often the most desirable characteristic of an image coding algorithm. In this thesis, a resolution scalable and random accessible image coding algorithm, PROGRES (Progressive Resolution Decompression), is designed based on predictive dynamic range coding of wavelet coefficients, without bit-plane coding. Avoiding bit-plane coding leads to considerable speed improvement without compromising coding efficiency. The algorithm is designed and implemented for both 2D and 3D image sources. Experiments show that the suggested coding model lessens the computational burden of bit-plane based image coding, in both encoding and decoding time. The PROGRES algorithm, combined with the presented fast random access decoding method having O(log2 n) block seek time, is suitable for browsing a very large image bitstream: it can seek the requested part of the code-stream very quickly, and then decode it up to the desired resolution at high speed. In related work, we introduce the concept of higher order zerotrees in modern wavelet-based coders and quantify their relative coding power. By analyzing two famous zerotree-based image coders, EZW and SPIHT, we explain the superior coding efficiency of SPIHT through its ability to code higher-order zerotrees than EZW. We also calculate the bit savings of SPIHT compared to EZW within this framework.

2 citations


Proceedings ArticleDOI
29 Mar 2005
TL;DR: A very fast, low complexity algorithm for resolution-scalable and random access decoding is presented, which is faster than the LTW of Oliver et al. (2003) by up to two times in encoding and up to seven times in decoding.
Abstract: A very fast, low complexity algorithm for resolution-scalable and random access decoding is presented. The algorithm avoids the multiple passes of bit-plane coding for speed improvement. The decrease in dynamic range of wavelet coefficient magnitudes is efficiently coded. The hierarchical dynamic range coding naturally enables a resolution-scalable representation of a wavelet transformed image. The method predicts the dynamic range of energy in each subset from the dynamic range of energy of its parent set. Speed improvement over SPIHT is up to two times in encoding and up to four times in decoding, with very small loss of quality. Our method is faster than the LTW of Oliver et al. (2003) by up to two times in encoding and up to seven times in decoding.