
Showing papers by "William A. Pearlman published in 1999"


Journal ArticleDOI
01 Aug 1999
TL;DR: A new adaptive windowing algorithm is proposed for speckle noise suppression which solves the problem of window size associated with the local statistics adaptive filters and is applied to both a simulated SAR image and an ERS-1 SAR image.
Abstract: Speckle noise usually occurs in synthetic aperture radar (SAR) images owing to coherent processing of SAR data. The most well-known image-domain speckle filters are the adaptive filters using local statistics such as the mean and standard deviation. The local statistics filters adapt the filter coefficients based on data within a fixed running window. In these schemes, depending on the window size, there exists a trade-off between the extent of speckle noise suppression and the capability of preserving fine details. The authors propose a new adaptive windowing algorithm for speckle noise suppression which solves the problem of window size associated with the local statistics adaptive filters. In the algorithm, the window size is automatically adjusted depending on regional characteristics to suppress speckle noise as much as possible while preserving fine details. In homogeneous regions, speckle suppression is strengthened by successively increasing the window size; in fine-detail regions, edges and textures are preserved by successively reducing it. The fixed-window filtering schemes and the proposed one are applied to both a simulated SAR image and an ERS-1 SAR image to demonstrate the excellent speckle-suppression performance of the proposed adaptive windowing algorithm.
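To make the local-statistics idea concrete, here is a minimal sketch (Python/NumPy) of a Lee-type filter whose window grows per pixel while the local coefficient of variation stays near an assumed speckle level. The window sizes and the `cv_noise` threshold are illustrative assumptions, not the authors' parameters or stopping rule.

```python
import numpy as np

def adaptive_window_lee(img, sizes=(3, 5, 7, 9), cv_noise=0.25):
    """Lee-type local-statistics filter with a per-pixel window size.

    The window is grown while the local coefficient of variation stays
    near the assumed speckle level (homogeneous region) and is kept
    small where it rises (edges, texture).  Sizes and cv_noise are
    illustrative, not the paper's values.
    """
    img = np.asarray(img, dtype=float)
    pad = max(sizes) // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            for s in sizes:                      # grow the window
                r = s // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                mean, std = win.mean(), win.std()
                cv = std / (mean + 1e-12)
                if cv > cv_noise:                # detail region: stop growing
                    break
            # standard Lee weighting from the chosen window's statistics
            k = max(0.0, 1.0 - cv_noise ** 2 / (cv ** 2 + 1e-12))
            out[i, j] = mean + k * (padded[i + pad, j + pad] - mean)
    return out
```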

113 citations


Proceedings ArticleDOI
18 Oct 1999
TL;DR: An application of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm to volumetric medical images, using a 3D wavelet decomposition and a 3D spatial dependence tree.
Abstract: This paper focuses on lossless compression methods for three-dimensional (3D) volumetric medical images that operate on 3D reversible integer wavelet transforms. We offer an application of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm to volumetric medical images, using a 3D wavelet decomposition and a 3D spatial dependence tree. The wavelet decomposition is accomplished with integer wavelet filters implemented with the lifting method, where careful scaling and truncations keep the integer precision small and the transform unitary. We have tested our encoder on volumetric medical images using different integer filters and different coding unit sizes. Coding unit sizes of 16 and 8 slices save considerable memory and coding delay compared with the full-sequence coding units used in previous works. Results show that, even with these small coding units, our algorithm with certain filters performs as well as, and sometimes better than, previous coding systems using 3D integer wavelet transforms on volumetric medical images in lossless coding.
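As a pointer to how a lifting-based integer wavelet step works, here is a minimal one-level sketch of the S (integer Haar) transform; it is only one member of the filter family the paper tests, and the paper's scaling choices for keeping the transform near-unitary are not reproduced here.

```python
import numpy as np

def s_transform_1d(x):
    """One level of the integer S (Haar-like) transform via lifting:
    predict gives the difference (high band), update the rounded
    average (low band)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even                     # predict step: high-pass detail
    s = even + (d >> 1)                # update step: rounded mean, low-pass
    return s, d

def inverse_s_transform_1d(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

The rounding inside the lifting steps is what makes the mapping integer-to-integer and exactly invertible, which is the property the lossless coder relies on.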

52 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: The SPIHT image compression algorithm is modified for application to large images with limited processor memory and encoding and decoding of the spatial blocks can be done in parallel for real-time video compression.
Abstract: The SPIHT image compression algorithm is modified for application to large images with limited processor memory. The subband decomposition coefficients are partitioned into small tree-preserving spatial blocks, which are each independently coded using the SPIHT algorithm. The bitstreams for the spatial blocks are assembled into a single final bitstream through one of two packetization schemes. The final bitstream can be embedded in fidelity at a small expense in rate, and SPIHT encoding and decoding of the spatial blocks can be done in parallel for real-time video compression.
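The sketch below illustrates what a "tree-preserving spatial block" means for a Mallat-ordered dyadic decomposition: a small root block in the coarsest band together with the co-located, progressively larger blocks in the finer subbands, so that each unit can be handed to its own SPIHT encoder. The layout assumptions (square image, `block`-sized roots, a hypothetical `spiht_encode()` consumer) are illustrative, and the exact SPIHT parent-child offsets (2x2 grouping in the LL band) are not modeled.

```python
import numpy as np

def extract_tree_block(coeffs, levels, bi, bj, block=2):
    """Collect the coefficients forming one tree-preserving spatial
    block: a block x block root area in the coarsest band plus the
    co-located, doubling-in-size areas in each finer detail subband.

    `coeffs` is a single 2-D array holding a Mallat-ordered dyadic
    decomposition with `levels` levels; (bi, bj) indexes the root block.
    A hypothetical spiht_encode() would then code each unit independently.
    """
    pieces = []
    for lev in range(levels, 0, -1):          # coarsest level first
        size = block * 2 ** (levels - lev)    # block size at this level
        i0, j0 = bi * size, bj * size
        off = coeffs.shape[0] >> lev          # detail subbands start here
        if lev == levels:                     # LL root block, taken once
            pieces.append(coeffs[i0:i0 + size, j0:j0 + size])
        for oi, oj in ((0, off), (off, 0), (off, off)):   # HL, LH, HH
            pieces.append(coeffs[oi + i0:oi + i0 + size,
                                 oj + j0:oj + j0 + size])
    return pieces
```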

32 citations


Journal ArticleDOI
TL;DR: A three-dimensional extension of the set partitioning in hierarchical trees (SPIHT) algorithm is utilized, cascading the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel.

20 citations


Journal ArticleDOI
TL;DR: The variable-length constrained storage tree-structured vector quantization (VLCS-TSVQ) algorithm presented in this paper utilizes the codebook-sharing-by-multiple-vector-sources concept of CSVQ to greedily grow an unbalanced tree-structured residual vector quantizer with constrained storage.
Abstract: Constrained storage vector quantization (CSVQ), introduced by Chan and Gersho (1990, 1991), allows for the stagewise design of balanced tree-structured residual vector quantization codebooks with low encoding and storage complexities. On the other hand, it has been established by Makhoul et al. (1985), Riskin et al. (1991), and Mahesh et al. (IEEE Trans. Inform. Theory, vol. 41, pp. 917-930, 1995) that a variable-length tree-structured vector quantizer (VLTSVQ) yields better coding performance than a balanced tree-structured vector quantizer and may even outperform a full-search vector quantizer due to the nonuniform distribution of rate among the subsets of its input space. The variable-length constrained storage tree-structured vector quantization (VLCS-TSVQ) algorithm presented in this paper utilizes the codebook-sharing-by-multiple-vector-sources concept of CSVQ to greedily grow an unbalanced tree-structured residual vector quantizer with constrained storage. It is demonstrated by simulations on test sets from various synthetic one-dimensional (1-D) sources and real-world images that the performance of VLCS-TSVQ, whose codebook storage complexity varies linearly with rate, can come very close to the performance of the greedy-growth VLTSVQ of Riskin et al. and Mahesh et al. The dramatically reduced size of the overall codebook allows the transmission of the code vector probabilities as side information for source-adaptive entropy coding.
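For readers unfamiliar with greedy tree growth, the following sketch grows an unbalanced two-way tree by repeatedly splitting the leaf whose split buys the largest drop in total squared error, each split costing roughly one extra bit. It illustrates the generic greedy-growth idea of Riskin et al., not the paper's stagewise codebook sharing or storage constraint; `two_means` and `greedy_grow_tsvq` are hypothetical names.

```python
import numpy as np

def two_means(data, iters=10):
    """Tiny 2-codeword Lloyd design used when splitting a leaf."""
    rng = np.random.default_rng(0)
    cb = data[rng.choice(len(data), 2, replace=False)].astype(float)
    for _ in range(iters):
        d = ((data[:, None, :] - cb[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for k in range(2):
            if np.any(lab == k):
                cb[k] = data[lab == k].mean(0)
    return cb, lab

def greedy_grow_tsvq(data, max_leaves=8):
    """Greedily grow an unbalanced tree: split the leaf giving the
    largest drop in total squared error (the totals, not averages,
    already weight each leaf by how many training vectors reach it)."""
    leaves = [np.asarray(data, dtype=float)]   # each leaf keeps its training vectors
    while len(leaves) < max_leaves:
        best, best_gain, best_split = None, -np.inf, None
        for idx, vecs in enumerate(leaves):
            if len(vecs) < 2:
                continue
            d_before = ((vecs - vecs.mean(0)) ** 2).sum()
            cb, lab = two_means(vecs)
            d_after = sum(((vecs[lab == k] - cb[k]) ** 2).sum() for k in range(2))
            gain = d_before - d_after          # distortion drop for ~1 extra bit
            if gain > best_gain:
                best, best_gain, best_split = idx, gain, (cb, lab)
        if best is None:
            break
        _, lab = best_split
        vecs = leaves.pop(best)
        leaves.extend([vecs[lab == 0], vecs[lab == 1]])
    return leaves
```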

15 citations


Proceedings ArticleDOI
18 Oct 1999
TL;DR: This paper replaces the DCT with the Lapped Orthogonal Transform (LOT) in conjunction with the AGP, the first time the LOT and AGP have been combined in a coding method, and presents the principles of the LOT-based AGP image codec (LOT-AGP), which may provide a new direction for the implementation of image compression.
Abstract: Transform coding has been a focus of research in image compression. In previous research, the Amplitude and Group Partitioning (AGP) coding scheme was shown to be a low-complexity, high-performance algorithm, clearly one of the state-of-the-art transform coding techniques. However, AGP had previously been used with the Discrete Cosine Transform (DCT) and the discrete wavelet transform. In this paper, a different transform, the Lapped Orthogonal Transform (LOT), replaces the DCT in conjunction with the AGP. This is the first time the LOT and AGP have been combined in a coding method. The definition and design of the LOT are discussed. An objective metric of transform performance, the coding gain, is calculated for both the DCT and the LOT; the LOT has slightly higher coding gain than the DCT. The principles of the LOT-based AGP image codec (LOT-AGP) are presented, and a complete codec, encoder and decoder, is implemented in software. The performance of LOT-AGP is compared with other block transform coding schemes, the baseline JPEG codec and the DCT-based AGP image codec (DCT-AGP), by both objective and subjective evaluation. The peak signal-to-noise ratio (PSNR) is calculated for these three coding schemes. The two AGP codecs are much better than the JPEG codec in PSNR, by about 1.7 dB to 3 dB depending on bit rate, and differ from each other only slightly. Visually, LOT-AGP gives the best reconstructed images among the three at all bit rates. In addition, the coding results of two other state-of-the-art progressive image codecs are cited for further comparison: the Set Partitioning in Hierarchical Trees (SPIHT) algorithm with a dyadic wavelet transform, and Tran and Nguyen's method with the generalized LOT. The AGP coding and adaptive Huffman entropy coding of LOT-AGP are less complex, and their memory usage is smaller, than in these two progressive codecs. Comparing these three codecs, i.e. LOT-AGP and the two progressive codecs, shows only small differences in PSNR: SPIHT has about 1 dB higher PSNR than LOT-AGP and Tran and Nguyen's method for the test image Lena, while for the test image Barbara the PSNR of LOT-AGP is about 0.5 dB higher than that of SPIHT and 0.5 dB lower than that of Tran and Nguyen's method. This low-complexity, high-performance codec may provide a new direction for the implementation of image compression.
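The coding-gain comparison mentioned above can be reproduced in outline with the standard formula for a unit-variance first-order Gauss-Markov (AR(1)) source: the source variance divided by the geometric mean of the transform-coefficient variances. The sketch below evaluates it for the 8-point DCT; a LOT basis (e.g., an 8x16 design) would be plugged in the same way, with rows longer than the block. The value rho = 0.95 is a conventional assumption, not necessarily the one used in the paper.

```python
import numpy as np

def coding_gain(basis, rho=0.95):
    """Transform coding gain for a unit-variance AR(1) source:
    sigma_x^2 divided by the geometric mean of the coefficient
    variances h_k^T R h_k, with the analysis vectors h_k in the rows
    of `basis` (length N for a block transform, 2N for the LOT)."""
    n = basis.shape[1]
    R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    var = np.diag(basis @ R @ basis.T)          # coefficient variances
    return 1.0 / np.exp(np.log(var).mean())     # sigma_x^2 = 1 here

# Orthonormal 8-point DCT-II matrix, rows = basis vectors
N = 8
k = np.arange(N)[:, None]
n = np.arange(N)[None, :]
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0] /= np.sqrt(2.0)

print(10 * np.log10(coding_gain(C)))            # DCT coding gain in dB
```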

9 citations


Proceedings ArticleDOI
29 Mar 1999
TL;DR: The fast mmLZ is implemented, and the results show an improvement in compression of around 5% over LZW on the Canterbury Corpus (Arnold and Bell, 1997) with little extra computational cost.
Abstract: Summary form only given. One of the most popular encoders in the literature is LZ78, proposed by Ziv and Lempel (1978). We establish a recursive way to find the longest m-tuple match by proving a theorem that shows how to obtain a longest (m+1)-tuple match from a longest m-tuple match: an (m+1)-tuple match is the concatenation of the first m-1 words of the m-tuple match with the next longest double match. Therefore, the longest (m+1)-tuple match can be found using the m-tuple match and a procedure to compute the longest double match. The theorem is as follows. Let A be a source alphabet, let A* be the set of all finite strings over A, and let D ⊂ A* be such that if x ∈ D then all prefixes of x belong to D. Let u denote a one-sided infinite sequence. If b_1, ..., b_m is a longest m-tuple match in u with respect to D, then there is a longest (m+1)-tuple match b̂_1, ..., b̂_{m+1} such that b̂_i = b_i for all i in {1, ..., m-1}. We implemented the fast mmLZ, and the results show an improvement in compression of around 5% over LZW on the Canterbury Corpus (Arnold and Bell, 1997) with little extra computational cost.
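Below is a minimal sketch of the recursion the theorem licenses, using a plain Python set for the prefix-closed dictionary D; the paper's fast mmLZ would instead keep D in the LZ78 trie, which is what makes it fast. Function names here are illustrative.

```python
def longest_single_match(D, u, start=0):
    """Length of the longest prefix of u[start:] that is a word of the
    prefix-closed dictionary D; prefix closure lets us stop at the
    first length that fails."""
    k = 0
    while start + k < len(u) and u[start:start + k + 1] in D:
        k += 1
    return k

def longest_double_match(D, u, start=0):
    """Lengths (l1, l2) of a longest concatenation of two dictionary
    words that is a prefix of u[start:]."""
    best, l1 = (0, 0), 1
    while start + l1 <= len(u) and u[start:start + l1] in D:
        l2 = longest_single_match(D, u, start + l1)
        if l1 + l2 > sum(best):
            best = (l1, l2)
        l1 += 1
    return best

def longest_multi_match(D, u, m, start=0):
    """Word lengths of a longest m-tuple match of u[start:], built as in
    the theorem: keep all but the last word of a longest (m-1)-tuple
    match and replace that last word by a longest double match."""
    if m == 1:
        return [longest_single_match(D, u, start)]
    head = longest_multi_match(D, u, m - 1, start)[:-1]
    return head + list(longest_double_match(D, u, start + sum(head)))

# Hypothetical example with a prefix-closed dictionary:
# D = {"a", "ab", "abb", "b", "ba"};  u = "abbab"
# longest_multi_match(D, u, 2)  ->  [3, 2]   ("abb" + "ab")
```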

5 citations


Proceedings ArticleDOI
22 Mar 1999
TL;DR: The optimum space-spatial frequency localization property of this transform is utilized in embedded zerotree wavelet coding, which has been refined to produce its best performance in the SPIHT (set partitioning in hierarchical trees) algorithm for lossy compression and in S+P (S-transform and prediction) for lossless compression.
Abstract: The wavelet transform is known to provide the most effective and computationally efficient technique for image compression. The optimum space-spatial frequency localization property of this transform is utilized in embedded zerotree wavelet coding, which has been refined to produce its best performance in the SPIHT (set partitioning in hierarchical trees) algorithm for lossy compression and in S+P (S-transform and prediction) for lossless compression. Using the multi-resolution property of the wavelet transform, one can also have progressive transmission for preliminary inspection, where the criterion for progressiveness can be either fidelity or resolution. The three important points of wavelet-based compression algorithms are: (1) partial ordering of transformed magnitudes with order transmission using subset partitioning, (2) refinement-bit transmission using ordered bit planes, and (3) use of the self-similarity of the transform coefficients across scales.
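The first two points can be seen in a stripped-down bit-plane coder. The sketch below (assuming nonzero integer-magnitude coefficients) emits, per pass, significance bits for the still-insignificant coefficients and refinement bits for the already-significant ones; real SPIHT replaces the exhaustive significance scan with set partitioning along the self-similar spatial-orientation trees of point (3), which is where its efficiency comes from.

```python
import numpy as np

def bitplane_passes(coeffs, num_planes=6):
    """Points (1) and (2) in miniature: per bit plane, one significance
    bit for each still-insignificant coefficient (sorting information)
    and one refinement bit for each previously significant one."""
    mag = np.abs(np.asarray(coeffs)).astype(np.int64).ravel()
    top = int(np.floor(np.log2(mag.max())))       # most significant plane
    significant = np.zeros(mag.size, dtype=bool)
    bits = []
    for p in range(top, max(top - num_planes, -1), -1):
        T = 1 << p
        previously = significant.copy()
        # sorting/significance pass over coefficients not yet significant
        bits.extend((mag[~previously] >= T).astype(int).tolist())
        significant |= mag >= T
        # refinement pass: next magnitude bit of previously significant coeffs
        bits.extend(((mag[previously] >> p) & 1).astype(int).tolist())
    return bits
```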

2 citations

