
Journal ArticleDOI

Enhancing LTW image encoder with perceptual coding and GPU-optimized 2D-DWT transform

23 Aug 2013-EURASIP Journal on Advances in Signal Processing (Springer International Publishing)-Vol. 2013, Iss: 1, pp 141

TL;DR: This work proposes an optimization of the E_LTW encoder with the aim of increasing its R/D performance through perceptual encoding techniques and reducing the encoding time by means of a graphics processing unit-optimized version of the two-dimensional discrete wavelet transform.

Abstract: When optimizing a wavelet image coder, the two main targets are to (1) improve its rate-distortion (R/D) performance and (2) reduce the coding time. In general, the encoding engine is mainly responsible for R/D performance, and it is usually more complex than the decoding part. A large number of works on R/D or complexity optimizations can be found, but only a few tackle increasing R/D performance while reducing the computational cost at the same time, as Kakadu, an optimized version of JPEG2000, does. In this work we propose an optimization of the E_LTW encoder with the aim of increasing its R/D performance through perceptual encoding techniques and reducing the encoding time by means of a graphics processing unit-optimized version of the two-dimensional discrete wavelet transform. The results show that, in both performance dimensions, our enhanced encoder achieves good results compared with the Kakadu and SPIHT encoders, reaching speedups of 6 times with respect to the original E_LTW encoder.

Topics: Encoder (58%), Discrete wavelet transform (53%), JPEG 2000 (51%), Wavelet (51%), Set partitioning in hierarchical trees (50%)



Citations


Journal ArticleDOI
TL;DR: This contribution covers the topics of the special issue titled ‘Hardware Implementation of Machine Vision Systems’, including FPGA, GPU, embedded-system, and multicore implementations for image analysis tasks such as edge detection, segmentation, pattern recognition, and object recognition/interpretation.
Abstract: This contribution focuses on the different topics covered by the special issue titled ‘Hardware Implementation of Machine Vision Systems’, including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis tasks such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.

5 citations


Cites methods from "Enhancing LTW image encoder with pe..."

  • ...In the article entitled ‘Enhancing LTW image encoder with perceptual coding and GPU-optimized 2D-DWT transform’ [10] by Miguel Martínez-Rach et al....



References

Journal ArticleDOI
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
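The structural-similarity idea described above can be sketched with a single-window variant. The paper's actual index uses a local sliding window; the function name `ssim_global`, the whole-image statistics, and the default `K1`/`K2` constants here are illustrative assumptions, not the reference implementation:

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Simplified SSIM computed over whole images in one window.

    The published index averages this quantity over local sliding
    windows; this global variant only illustrates the formula."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()            # luminance terms
    vx, vy = x.var(), y.var()              # contrast terms
    cov = ((x - mx) * (y - my)).mean()     # structure term
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

By construction the index equals 1 for identical images and decreases as structural differences grow.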

30,333 citations


Journal ArticleDOI
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximations of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function ψ(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. The wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.
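One level of the pyramidal decomposition described above can be illustrated with the simplest filter pair, the Haar wavelet; the paper's construction uses general quadrature mirror filters, so `haar_dwt2d_level` is an assumed minimal stand-in, not the method itself:

```python
import numpy as np

def haar_dwt2d_level(img):
    """One level of a 2D Haar DWT: filter the rows, then the columns,
    producing the LL, LH, HL, HH subbands at half resolution.

    Haar averages/differences stand in for the quadrature mirror
    filter convolutions used in the general pyramidal algorithm."""
    img = img.astype(np.float64)
    # Row transform: pairwise average (low-pass) and difference (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform of each half yields the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

Recursing on the LL subband gives the multiresolution pyramid: the detail subbands at each level carry exactly the information lost between resolutions 2^(j+1) and 2^j.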

19,033 citations


Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.

5,812 citations


"Enhancing LTW image encoder with pe..." refers background or methods in this paper

  • ...For the coding stage, if the absolute value of a coefficient and all its descendants (considering the classic quad-tree structure from [12]) is lower than a threshold value (2), the entire tree is encoded with a single symbol, which we call the LOWER symbol (indicating that all the coefficients in the tree are lower than 2 and so form a lower-tree)....


  • ...Wavelet transforms have shown good performance for image compression; therefore, many state-of-the-art image codecs, including the JPEG2000 image coding standard, use the Discrete Wavelet Transform (DWT) [9,12]....


  • ...SPIHT [12], an advanced version of EZW, processes the wavelet coefficient trees more efficiently by partitioning the coefficients depending on their significance....

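The lower-tree test quoted in the first bullet can be sketched as a recursive significance check over a quad-tree of wavelet coefficients. The coefficient layout (children of (i, j) at (2i+di, 2j+dj)) and the name `is_lower_tree` are assumptions for illustration, not the E_LTW implementation:

```python
import numpy as np

def is_lower_tree(coeffs, i, j, threshold):
    """Return True when |coeffs[i, j]| and every quad-tree descendant
    are below the threshold, so the whole tree could be coded with a
    single LOWER symbol.

    Assumed layout: coeffs is a square wavelet-coefficient array in
    which the children of coefficient (i, j) sit at (2i+di, 2j+dj);
    the LL root (0, 0) is excluded from this check."""
    n = coeffs.shape[0]
    if i >= n or j >= n:                    # outside the array: vacuously lower
        return True
    if abs(coeffs[i, j]) >= threshold:      # a significant coefficient breaks the tree
        return False
    return all(is_lower_tree(coeffs, 2 * i + di, 2 * j + dj, threshold)
               for di in (0, 1) for dj in (0, 1))
```

An encoder would emit one LOWER symbol when this check succeeds and otherwise descend into the subtree, which is what makes lower-trees cheap to code.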


Book
30 Nov 2001
TL;DR: This work has specific applications for those involved in the development of software and hardware solutions for multimedia, internet, and medical imaging applications.
Abstract: This is nothing less than a totally essential reference for engineers and researchers in any field of work that involves the use of compressed imagery. Beginning with a thorough and up-to-date overview of the fundamentals of image compression, the authors move on to provide a complete description of the JPEG2000 standard. They then devote space to the implementation and exploitation of that standard. The final section describes other key image compression systems. This work has specific applications for those involved in the development of software and hardware solutions for multimedia, internet, and medical imaging applications.

2,938 citations


Journal ArticleDOI
Wim Sweldens1
Abstract: We present the lifting scheme, a new idea for constructing compactly supported wavelets with compactly supported duals. The lifting scheme uses a simple relationship between all multiresolution analyses with the same scaling function. It isolates the degrees of freedom remaining after fixing the biorthogonality relations. Then one has full control over these degrees of freedom to custom design the wavelet for a particular application. The lifting scheme can also speed up the fast wavelet transform. We illustrate the use of the lifting scheme in the construction of wavelets with interpolating scaling functions.
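The lifting idea described above can be shown with its simplest instance: a Haar step built from a predict stage and an update stage. The function and the even/odd signal layout are illustrative, not Sweldens' general construction:

```python
def haar_lifting(signal):
    """One Haar wavelet level via lifting.

    Split the signal into even/odd samples, predict each odd sample
    from its even neighbor (detail = prediction error), then update
    the evens so they carry the pairwise averages (approximation)."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]           # predict step
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update step
    return approx, detail
```

Because each stage is trivially invertible (undo the update, then the predict), the inverse transform falls out for free, which is one reason lifting speeds up the fast wavelet transform.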

2,261 citations