Proceedings ArticleDOI

Fast and efficient spatial scalable image compression using wavelet lower trees

TL;DR: A new image compression algorithm is proposed based on the efficient construction of wavelet coefficient lower trees; it presents state-of-the-art compression performance, while its temporal complexity is lower than that of other wavelet coders, such as SPIHT and JPEG 2000.
Abstract: A new image compression algorithm is proposed based on the efficient construction of wavelet coefficient lower trees. This lower-tree wavelet (LTW) encoder presents state-of-the-art compression performance, while its temporal complexity is lower than that of other wavelet coders, such as SPIHT and JPEG 2000. This fast execution is achieved by means of a simple two-pass coding and one-pass decoding algorithm. Moreover, its computation does not need additional lists or complex data structures, so there is no memory overhead. A formal description of the algorithm is provided, so that an implementation can be performed straightforwardly. The results show that the codec works faster than SPIHT and JPEG 2000, with better performance in terms of the rate-distortion metric.
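The lower-tree idea in the abstract can be sketched in a few lines. This is a minimal illustration under simplified assumptions (a square coefficient array, the usual dyadic parent-child relation, no special LL-band handling); the function name and layout are ours, not the paper's reference implementation.

```python
def label_lower_trees(coeffs, threshold):
    """First coding pass, sketched: mark every coefficient whose value
    and whole descendant subtree fall below the quantization threshold,
    so a second pass can code each pruned subtree with one symbol.
    Parent (i, j) has children (2i, 2j) .. (2i+1, 2j+1)."""
    size = len(coeffs)
    lower = [[False] * size for _ in range(size)]
    # scan from the finest level toward the root so children are labeled first
    for i in range(size - 1, -1, -1):
        for j in range(size - 1, -1, -1):
            if abs(coeffs[i][j]) >= threshold:
                continue                      # significant: never in a lower tree
            if 2 * i >= size or 2 * j >= size:
                lower[i][j] = True            # finest level: no descendants
            else:
                kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1),
                        (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
                # the DC coefficient would be its own child in this
                # simplified indexing, so skip that degenerate pair
                lower[i][j] = all(lower[a][b]
                                  for a, b in kids if (a, b) != (i, j))
    return lower
```

A second pass would then emit a single lower-tree symbol per marked subtree root and code the remaining coefficients individually, which is how the scheme avoids bitplane passes and coefficient lists.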
Citations
01 Jan 2010
TL;DR: This paper presents a systematic analysis of a variety of different ad hoc network topologies in terms of node placement, node mobility and routing protocols through several simulated scenarios.
Abstract: In this paper we examine the behavior of ad hoc networks through simulations, using different routing protocols and various topologies. We examine the difference in performance, using a CBR application with packets of different sizes across a variety of topologies, showing the impact node placement has on network performance. We show that the choice of routing protocol plays an important role in network performance. We also quantify node mobility effects by looking into both static and fully mobile configurations. Our paper presents a systematic analysis of a variety of different ad hoc network topologies in terms of node placement, node mobility and routing protocols through several simulated scenarios.

58 citations

Journal ArticleDOI
TL;DR: A new image compression algorithm is proposed based on the efficient construction of wavelet coefficient lower trees; it presents state-of-the-art compression performance, whereas its complexity is lower than that of other wavelet coders, such as SPIHT and JPEG 2000.
Abstract: In this paper, a new image compression algorithm is proposed based on the efficient construction of wavelet coefficient lower trees. The main contribution of the proposed lower-tree wavelet (LTW) encoder is the utilization of coefficient trees, not only as an efficient method of grouping coefficients, but also as a fast way of coding them. Thus, it presents state-of-the-art compression performance, whereas its complexity is lower than that of other wavelet coders, such as SPIHT and JPEG 2000. Fast execution is achieved by means of a simple two-pass coding and one-pass decoding algorithm. Moreover, its computation does not require additional lists or complex data structures, so there is no memory overhead. A formal description of the algorithm is provided, while reference software is also given. Numerical results show that our codec works faster than SPIHT and JPEG 2000 (up to three times faster than SPIHT and fifteen times faster than JPEG 2000), with similar coding efficiency.

48 citations

Posted Content
TL;DR: It is proposed that proper selection of the mother wavelet, based on the nature of the images, improves both quality and compression ratio remarkably, and the suggested enhanced run-length encoding technique provides better results than RLE.
Abstract: In image compression, the researchers’ aim is to reduce the number of bits required to represent an image by removing the spatial and spectral redundancies. Recently, the discrete wavelet transform and wavelet packets have emerged as popular techniques for image compression. The wavelet transform is one of the major processing components of image compression, and the result of the compression changes with the basis and tap of the wavelet used. It is proposed that proper selection of the mother wavelet, based on the nature of the images, improves both quality and compression ratio remarkably. We suggest a novel technique based on the wavelet packet best tree, selected by threshold entropy, with enhanced run-length encoding. This method reduces the time complexity of wavelet packet decomposition, as the complete tree is not decomposed. Our algorithm selects the sub-bands that include significant information based on threshold entropy. The suggested enhanced run-length encoding technique provides better results than RLE. The results, when compared with JPEG-2000, prove to be better.
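The run-length stage can be illustrated with plain RLE, the baseline the abstract claims to improve on; the enhanced variant is not specified in this summary, so this sketch (names ours) shows only the baseline:

```python
def rle_encode(symbols):
    """Plain run-length encoding: collapse each run of equal symbols
    into a (symbol, run_length) pair."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((s, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Inverse of rle_encode: expand (symbol, run_length) pairs."""
    return [s for s, n in runs for _ in range(n)]
```

RLE pays off on the long zero runs produced by thresholded wavelet sub-bands, which is why it pairs naturally with the entropy-based sub-band selection described above.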

46 citations


Cites background from "Fast and efficient spatial scalable..."

  • ...Wavelets, however, are ill suited to represent oscillatory patterns [13, 14]....


Proceedings ArticleDOI
28 Mar 2006
TL;DR: BCWT eliminates several major bottlenecks of existing wavelet-tree-based codecs, namely tree-scanning, bitplane coding and dynamic lists management, and provides desirable features such as low complexity, low memory usage, and resolution scalability.
Abstract: A new approach of backward coding of wavelet trees (BCWT) is presented. Contrary to the common "forward" coding of wavelet trees from the highest level (lowest resolution), the new approach starts coding from the lowest level and goes backward by building a map of maximum quantization levels of descendants. BCWT eliminates several major bottlenecks of existing wavelet-tree-based codecs, namely tree-scanning, bitplane coding and dynamic lists management. Compared to SPIHT, BCWT encodes and decodes up to eight times faster without sacrificing PSNR. At the same time, BCWT provides desirable features such as low complexity, low memory usage, and resolution scalability.
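The backward map the abstract describes can be sketched as a bottom-up maximum over descendant quantization levels. Indexing is simplified (square array, dyadic parent-child relation) and the names are ours, not the BCWT reference code:

```python
def qlevel(c):
    """Quantization level of a coefficient: index of its highest set bit
    (0 for a zero coefficient)."""
    return abs(c).bit_length()

def build_dmax(coeffs):
    """Backward pass, sketched: for each coefficient, record the maximum
    quantization level over all of its descendants, visiting parents from
    the finest level toward the root. This is the map that lets BCWT skip
    forward tree scanning and bitplane passes."""
    size = len(coeffs)
    dmax = [[0] * size for _ in range(size)]
    # children (2i, 2j) .. (2i+1, 2j+1) are always visited before (i, j)
    for i in range(size // 2 - 1, -1, -1):
        for j in range(size // 2 - 1, -1, -1):
            for ci, cj in ((2 * i, 2 * j), (2 * i, 2 * j + 1),
                           (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)):
                if (ci, cj) == (i, j):
                    continue  # DC coefficient would be its own child here
                dmax[i][j] = max(dmax[i][j], qlevel(coeffs[ci][cj]),
                                 dmax[ci][cj])
    return dmax
```

With this map available, the encoder knows at each node how many bits any descendant needs, so it can emit codes in a single backward sweep with no tree rescanning or dynamic list management.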

32 citations


Cites background from "Fast and efficient spatial scalable..."

  • ...At the same time, BCWT provides desirable features such as low complexity, low memory usage, and resolution scalability....


Journal ArticleDOI
TL;DR: An image coding algorithm, progressive resolution coding (PROGRES), for a high-speed resolution scalable decoding is proposed, designed based on a prediction of the decaying dynamic ranges of wavelet subbands, which validates the suitability of the proposed method to very large scale image encoding and decoding.
Abstract: An image coding algorithm, progressive resolution coding (PROGRES), for a high-speed resolution scalable decoding is proposed. The algorithm is designed based on a prediction of the decaying dynamic ranges of wavelet subbands. Most interestingly, because of the syntactic relationship between two coders, the proposed method costs an amount of bits very similar to that used by uncoded (i.e., not entropy coded) SPIHT. The algorithm bypasses bit-plane coding and complicated list processing of SPIHT in order to obtain a considerable speed improvement, giving up quality scalability, but without compromising coding efficiency. Since each tree of coefficients is separately coded, where the root of the tree corresponds to the coefficient in LL subband, the algorithm is easily extensible to random access decoding. The algorithm is designed and implemented for both 2D and 3D wavelet subbands. Experiments show that the decoding speeds of the proposed coding model are four times and nine times faster than uncoded 2D-SPIHT and 3D-SPIHT, respectively, with almost the same decoded quality. The higher decoding speed gain in a larger image source validates the suitability of the proposed method to very large scale image encoding and decoding. In the Appendix, we explain the syntactic relationship of the proposed PROGRES method to uncoded SPIHT, and demonstrate that, in the lossless case, the bits sent to the codestream for each algorithm are identical, except that they are sent in different order.

30 citations


Cites background from "Fast and efficient spatial scalable..."

  • ...Furthermore, there are increasingly common applications where one may not be able to afford the additional temporal and computational complexity required for rate scalability....


  • ...Now, the coded information for the tree with two resolution scales will be where , , , are root coefficients of each subtree....


  • ...Therefore, it is a good idea to predict the dynamic range of each subtree based on the dynamic range of a parent tree, as shown in Fig....


References
Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function psi(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.
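The pyramidal decomposition can be illustrated with the simplest filter pair. Mallat's scheme uses general quadrature mirror filters; this sketch substitutes unnormalized Haar-style averages and differences for clarity (names ours):

```python
def pyramid(signal, levels):
    """Mallat-style pyramid, sketched: at each level, the detail
    coefficients capture the difference of information between the
    resolutions 2^(j+1) and 2^j, and the coarse averages are then
    decomposed again."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        avg = [(approx[2 * k] + approx[2 * k + 1]) / 2
               for k in range(len(approx) // 2)]
        dif = [(approx[2 * k] - approx[2 * k + 1]) / 2
               for k in range(len(approx) // 2)]
        details.append(dif)   # finest details first
        approx = avg
    return approx, details
```

The original signal is exactly recoverable from the final approximation plus all detail levels, which is what makes the representation suitable for compression: most detail coefficients of natural images are near zero.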

20,028 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.

5,890 citations


"Fast and efficient spatial scalable..." refers background in this paper

  • ...According to the image size and the bit-rate, it is able to encode an image up to 8.5 times faster than JASPER and 2.5 times faster than SPIHT....


  • ...Table 3 shows that our algorithm greatly outperforms SPIHT and JASPER in terms of execution time....


  • ...Its compression performance is within the state-of-the-art, outperforming the typically used algorithms (SPIHT is improved by 0.2-0.4 dB, and JPEG 2000 with Lena by 0.35 dB on average)....


  • ...Lena coding and decoding (codec vs. rate):

                     Lena coding                      Lena decoding
    rate     SPIHT  JASPER/JPEG 2000   LTW    SPIHT  JASPER/JPEG 2000   LTW
    2        210.4       278.5         92.4   217.0       108.8         85.0
    1        119.4       256.1         62.4   132.7        72.3         47.1
    0.5       72.3       238.2         46.7    90.7        51.4         27.1
    0.25      48.7       223.4         38.9    69.9        38.1         17.4
    0.125     36.8       211.3         34.7    59.7        31.1         12.3...


  • ...Notice that only SPIHT and JASPER have been compared to LTW, because compiled versions of the remaining coders have not been released....


Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression, which is achieved via adaptive arithmetic coding.
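The dominant-pass classification behind concept (2) can be sketched as follows; the dyadic indexing is simplified (no special LL-band handling) and the helper names are ours, not Shapiro's:

```python
def ezw_symbol(coeffs, i, j, threshold):
    """EZW dominant-pass symbol for coefficient (i, j): POS/NEG if
    significant against the current threshold, ZTR if it heads a
    zerotree (itself and all descendants insignificant), IZ (isolated
    zero) otherwise."""
    c = coeffs[i][j]
    if abs(c) >= threshold:
        return 'POS' if c > 0 else 'NEG'
    return 'ZTR' if _all_insignificant(coeffs, i, j, threshold) else 'IZ'

def _all_insignificant(coeffs, i, j, threshold):
    """True if (i, j) and its whole descendant subtree lie below threshold."""
    if abs(coeffs[i][j]) >= threshold:
        return False
    size = len(coeffs)
    if 2 * i >= size or 2 * j >= size:      # finest level: no children
        return True
    return all(_all_insignificant(coeffs, ci, cj, threshold)
               for ci, cj in ((2 * i, 2 * j), (2 * i, 2 * j + 1),
                              (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1))
               if (ci, cj) != (i, j))       # DC coeff would be its own child
```

The payoff is that one ZTR symbol stands in for an entire insignificant subtree, which is where EZW's cross-scale prediction saves bits.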

5,559 citations


"Fast and efficient spatial scalable..." refers background in this paper

  • ...One of the first efficient wavelet image coders reported in the literature is the EZW [3]....


  • ...The Embedded Zero-Tree Wavelet coder (EZW) can be considered the first wavelet image coder that broke that trend....


Journal ArticleDOI
TL;DR: A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed and it is shown that the wavelet transform is particularly well adapted to progressive transmission.
Abstract: A scheme for image compression that takes into account psychovisual features in both the space and frequency domains is proposed. This method involves two steps. First, a wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps constant the number of pixels required to describe the image. Second, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise-shaping bit allocation procedure which assumes that details at high resolution are less visible to the human eye is proposed. In order to allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.

3,925 citations


"Fast and efficient spatial scalable..." refers methods in this paper

  • ...The most commonly used dyadic decomposition in image compression is the hierarchical wavelet subband transform [8], therefore an element c_{i,j} ∈ C is called a transform coefficient....


Journal ArticleDOI
Wim Sweldens
TL;DR: In this paper, a lifting scheme is proposed for constructing compactly supported wavelets with compactly supported duals, which can also speed up the fast wavelet transform and is shown to be useful in the construction of wavelets using interpolating scaling functions.
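The lifting idea can be sketched with the simplest predict/update pair. Sweldens' construction is far more general; this Haar-like example, with names of our choosing, only shows the mechanics:

```python
def lift_forward(x):
    """One lifting step: split into even/odd samples, predict each odd
    sample from its even neighbor, then update the even samples so the
    approximation preserves the pairwise mean."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def lift_inverse(approx, detail):
    """Invert by running the same steps in reverse with signs flipped."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Because every step is trivially invertible, lifting computes the transform in place with fewer operations than direct filter convolution, which is the speed-up the summary refers to.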

2,322 citations