
Showing papers on "JPEG 2000 published in 2002"


01 Jan 2002
TL;DR: The JPEG2000 standard as discussed by the authors is an International Standard (ISO 15444 | ITU-T Recommendation T.800) that is being issued in six parts; Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000.
Abstract: In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG2000, has resulted in a comprehensive standard (ISO 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2–6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications. © 2002 Elsevier Science B.V. All rights reserved.

664 citations


Journal ArticleDOI
TL;DR: Although the JPEG 2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications.
Abstract: In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG 2000, has resulted in a comprehensive standard (ISO 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2–6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG 2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG 2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications.

528 citations


Journal ArticleDOI
01 Mar 2002
TL;DR: A novel steganographic method based on the joint photographic experts group (JPEG) format that has a larger message capacity than Jpeg-Jsteg while keeping acceptable stego-image quality.
Abstract: In this paper, a novel steganographic method based on the joint photographic experts group (JPEG) format is proposed. The proposed method modifies the quantization table first. Next, the secret message is hidden in the cover-image by modifying the middle-frequency quantized DCT coefficients. Finally, a JPEG stego-image is generated. JPEG is a standard image format widely used on the Internet, so a stego-image attracts no suspicion if JPEG images are used for data hiding. We compare our method with the JPEG hiding tool Jpeg-Jsteg. The experimental results show that the proposed method has a larger message capacity than Jpeg-Jsteg and that the quality of its stego-images is acceptable. In addition, our method has the same security level as Jpeg-Jsteg.
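
As a concrete illustration of the embedding step, the sketch below hides message bits by LSB replacement in a fixed set of middle-frequency positions of a quantized 8×8 DCT block. It is a minimal sketch only: the paper's modified quantization table is omitted, and MID_FREQ is a hypothetical position list, not the paper's actual band selection.

```python
import numpy as np

# Hypothetical set of middle-frequency positions in an 8x8 quantized-DCT
# block; the paper's exact band selection is not reproduced here.
MID_FREQ = [(1, 2), (2, 1), (2, 2), (1, 3), (3, 1), (2, 3), (3, 2), (3, 3)]

def embed_bits(qdct_block, bits):
    """Overwrite the LSB of selected quantized DCT coefficients with message bits."""
    block = qdct_block.copy()
    for (u, v), bit in zip(MID_FREQ, bits):
        block[u, v] = (int(block[u, v]) & ~1) | bit
    return block

def extract_bits(qdct_block, n):
    """Read the message bits back from the same coefficient positions."""
    return [int(qdct_block[u, v]) & 1 for (u, v) in MID_FREQ[:n]]
```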

366 citations


Journal ArticleDOI
TL;DR: An architecture that performs the forward and inverse discrete wavelet transform (DWT) using a lifting-based scheme for the set of seven filters proposed in JPEG2000 using an architecture consisting of two row processors, two column processors, and two memory modules.
Abstract: We propose an architecture that performs the forward and inverse discrete wavelet transform (DWT) using a lifting-based scheme for the set of seven filters proposed in JPEG2000. The architecture consists of two row processors, two column processors, and two memory modules. Each processor contains two adders, one multiplier, and one shifter. The precision of the multipliers and adders has been determined using extensive simulation. Each memory module consists of four banks in order to support the high computational bandwidth. The architecture has been designed to generate an output every cycle for the JPEG2000 default filters. The schedules have been generated by hand and the corresponding timings listed. Finally, the architecture has been implemented in behavioral VHDL. The estimated area of the proposed architecture in 0.18-μm technology is 2.8 mm², and the estimated frequency of operation is 200 MHz.
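
The predict/update structure that such row and column processors implement is compact. Below is a minimal sketch of the reversible 5/3 lifting steps from JPEG2000 Part 1 for an even-length 1D signal with symmetric boundary extension; it illustrates the arithmetic only, not the paper's hardware scheduling or memory banking.

```python
import numpy as np

def dwt53_1d(x):
    """One level of the reversible 5/3 lifting DWT (JPEG2000 Part 1) for an
    even-length signal, using symmetric boundary extension."""
    x = x.astype(np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict: high-pass d[n] = odd[n] - floor((even[n] + even[n+1]) / 2)
    d = odd - ((even + np.roll(even, -1)) >> 1)
    d[-1] = odd[-1] - even[-1]              # mirror at the right edge
    # Update: low-pass s[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4)
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    s[0] = even[0] + ((2 * d[0] + 2) >> 2)  # mirror at the left edge
    return s, d
```

The inverse transform runs the same two steps in reverse order with the signs flipped, which is what makes lifting attractive for unified forward/inverse hardware.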

350 citations


Journal ArticleDOI
07 Nov 2002
TL;DR: A tutorial-style review of the new JPEG2000, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards is provided.
Abstract: JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG) working under the auspices of the International Standards Organization. Although the new standard does offer superior compression performance to JPEG, JPEG2000 provides a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work, exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet.

275 citations


Journal ArticleDOI
TL;DR: The results show that the choice of the “best” standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, has recently reached the International Standard (IS) status. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper provides a comparison of JPEG 2000 with JPEG-LS and MPEG-4 VTC, in addition to older but widely used solutions, such as JPEG and PNG, and well established algorithms, such as SPIHT. Lossless compression efficiency, fixed and progressive lossy rate-distortion performance, as well as complexity and robustness to transmission errors, are evaluated. Region of Interest coding is also discussed and its behavior evaluated. Finally, the set of provided functionalities of each standard is also evaluated. In addition, the principles behind each algorithm are briefly described. The results show that the choice of the “best” standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.

149 citations


Journal ArticleDOI
TL;DR: An overview of these quantization methods is provided, including generalized uniform scalar dead-zone quantization and trellis coded quantization (TCQ).
Abstract: Quantization is instrumental in enabling the rich feature set of JPEG 2000. Several quantization options are provided within JPEG 2000. Part I of the standard includes only uniform scalar dead-zone quantization, while Part II allows both generalized uniform scalar dead-zone quantization and trellis coded quantization (TCQ). In this paper, an overview of these quantization methods is provided. Issues that arise when each of these methods is employed are discussed as well.
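
The Part I quantizer itself is simple to state. The following is a minimal sketch of uniform scalar dead-zone quantization with step size delta (the zero bin is twice as wide as the others) and mid-bin dequantization; the reconstruction bias r = 0.5 is a common choice, and the decoder is free to pick another.

```python
import numpy as np

def deadzone_quantize(y, delta):
    """Uniform scalar dead-zone quantization (JPEG 2000 Part I form):
    the zero bin spans (-delta, delta), twice the width of every other bin."""
    return np.sign(y) * np.floor(np.abs(y) / delta)

def deadzone_dequantize(q, delta, r=0.5):
    """Reconstruct at a fraction r into the bin; q == 0 maps back to 0."""
    return np.sign(q) * (np.abs(q) + r) * delta * (q != 0)
```

Coefficients smaller in magnitude than delta are zeroed outright, which is where most of the rate saving comes from.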

97 citations


Journal ArticleDOI
TL;DR: It is shown that the visual tool sets in JPEG 2000 are much richer than what is achievable in JPEG, where only spatially invariant frequency weighting can be exploited.
Abstract: The human visual system plays a key role in the final perceived quality of the compressed images. It is therefore desirable to allow system designers and users to take advantage of the current knowledge of visual perception and models in a compression system. In this paper, we review the various tools in JPEG 2000 that allow its users to exploit many properties of the human visual system such as spatial frequency sensitivity, color sensitivity, and the visual masking effects. We show that the visual tool sets in JPEG 2000 are much richer than what is achievable in JPEG, where only spatially invariant frequency weighting can be exploited. As a result, the visually optimized JPEG 2000 images can usually have much better visual quality than the visually optimized JPEG images at the same bit rates. Some visual comparisons between different visual optimization tools, as well as some visual comparisons between JPEG 2000 and JPEG, will be shown.

90 citations


Journal ArticleDOI
TL;DR: The results in this paper show that the Maxshift method can be used to greatly increase the compression efficiency by lowering the quality of the background and that it also makes it possible to receive the ROI before the background, when transmitting the image.
Abstract: This paper describes the functionality in the JPEG 2000 Part 1 standard, for encoding images with predefined regions of interest (ROI) of arbitrary shape. The method described is called the Maxshift method. This method is based on scaling of the wavelet coefficients after the wavelet transformation and quantization. By sufficiently scaling the wavelet coefficients used to reconstruct the ROI, all the information pertaining to the ROI is placed before the information pertaining to the rest of the image (background), in the codestream. By varying the quantization of the image and by truncation of the codestream, different quality for the ROI and for the background can be obtained. A description is also given of how the wavelet coefficients that are used to reconstruct the ROI (ROI mask) can be found. Since the decoder uses only the number of significant bitplanes for each wavelet coefficient to determine whether it should be scaled back, an arbitrary set of wavelet coefficients can be scaled on the encoder side. This means that there is no need to encode or send the shape of the ROI. This paper also describes how this can be used to further enhance the ROI functionality. The results in this paper show that the Maxshift method can be used to greatly increase the compression efficiency by lowering the quality of the background and that it also makes it possible to receive the ROI before the background, when transmitting the image.
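
The shift-and-restore idea can be sketched in a few lines, assuming integer wavelet coefficients (e.g., from the reversible 5/3 transform); quantization and codestream ordering are omitted.

```python
import numpy as np

def maxshift_encode(coeffs, roi_mask):
    """Scale ROI coefficients up by s bitplanes, where 2**s exceeds the
    largest background magnitude, so every ROI bitplane precedes the
    background in the embedded codestream."""
    s = int(np.abs(coeffs[~roi_mask]).max()).bit_length()
    out = coeffs.astype(np.int64).copy()
    out[roi_mask] <<= s
    return out, s

def maxshift_decode(coeffs, s):
    """No ROI shape is needed: any coefficient at or above bitplane s
    must belong to the ROI and is simply scaled back down."""
    out = coeffs.copy()
    roi = np.abs(out) >= (1 << s)
    out[roi] >>= s
    return out
```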

89 citations


Proceedings ArticleDOI
28 Oct 2002
TL;DR: An efficient VLSI architecture is proposed to provide a variety of hardware implementations for improving and possibly minimizing the critical path and memory requirements of lifting-based discrete wavelet transforms by flipping conventional lifting structures.
Abstract: Using the lifting scheme to construct VLSI architectures for discrete wavelet transforms outperforms using convolution in many aspects, such as computation complexity and boundary extension. Nevertheless, the critical path of the lifting scheme is potentially longer than that of convolution. Although pipelining can reduce the critical path, it will prolong the latency and require more registers for a 1D architecture, as well as a larger memory for a 2D line-based architecture. In this paper, an efficient VLSI architecture is proposed to provide a variety of hardware implementations for improving and possibly minimizing the critical path and memory requirements of lifting-based discrete wavelet transforms by flipping conventional lifting structures. Case studies of the JPEG2000 default filter and an integer filter show the efficiency of the proposed flipping structure.

85 citations


Journal ArticleDOI
TL;DR: The proposed method is based on a seamless integration of the two schemes without compromising their desirable features and makes feasible the deployment of the merits of a BPCS steganography technique in a practical scenario where images are compressed before being transmitted over the network.
Abstract: This letter presents a steganography method based on a JPEG2000 lossy compression scheme and bit-plane complexity segmentation (BPCS) steganography. It overcomes the lack of robustness of bit-plane-based steganography methods with respect to lossy compression of a dummy image: a critical shortcoming that has hampered deployment in a practical scenario. The proposed method is based on a seamless integration of the two schemes without compromising their desirable features and makes feasible the deployment of the merits of a BPCS steganography technique in a practical scenario where images are compressed before being transmitted over the network. Embedding rates of around 15% of the compressed image size were achieved for pre-embedding 1.0-bpp compressed images with no noticeable degradation in image quality.
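
The BPCS side of the scheme rests on a bit-plane complexity measure; the sketch below implements the usual border-count definition. The 0.3 threshold mentioned in the comment is a typical value from the BPCS literature, not one taken from this letter.

```python
import numpy as np

def bpcs_complexity(block):
    """Border complexity of a binary block: count 0-1 transitions along rows
    and columns, normalized by the maximum possible count (BPCS measure)."""
    h = np.count_nonzero(block[:, 1:] != block[:, :-1])  # horizontal borders
    v = np.count_nonzero(block[1:, :] != block[:-1, :])  # vertical borders
    rows, cols = block.shape
    max_borders = rows * (cols - 1) + cols * (rows - 1)
    return (h + v) / max_borders

# Blocks whose complexity exceeds a threshold (typically around 0.3) look
# noise-like and can carry embedded data without perceptible change.
```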

Proceedings Article
01 Jan 2002
TL;DR: The experimental results show that there are a number of parameters that control the effectiveness of ROI coding, the most important being the size and number of regions of interest, code-block size, and target bit rate.
Abstract: This paper details work undertaken on the application of JPEG 2000, the recent ISO/ITU-T image compression standard based on wavelet technology, to region of interest (ROI) coding. The paper briefly outlines the JPEG 2000 encoding algorithm and explains how the packet structure of the JPEG 2000 bit-stream enables an encoded image to be decoded in a variety of ways dependent upon the application. The three methods by which ROI coding can be achieved in JPEG 2000 (tiling; coefficient scaling; and codeblock selection) are then outlined and their relative performance empirically investigated. The experimental results show that there are a number of parameters that control the effectiveness of ROI coding, the most important being the size and number of regions of interest, code-block size, and target bit rate. Finally, some initial results are presented on the application of ROI coding to face images.

Journal ArticleDOI
S. Lawson1, J. Zhu1
TL;DR: This paper aims in tutorial form to introduce the DWT, to illustrate its link with filters and filterbanks and to illustrate how it may be used as part of an image coding algorithm.
Abstract: The demand for higher and higher quality images transmitted quickly over the Internet has led to a strong need to develop better algorithms for the filtering and coding of such images. The introduction of the JPEG2000 compression standard has meant that for the first time the discrete wavelet transform (DWT) is to be used for the decomposition and reconstruction of images together with an efficient coding scheme. The use of wavelets implies the use of subband coding in which the image is iteratively decomposed into high- and low-frequency bands. Thus there is a need for filter pairs at both the analysis and synthesis stages. This paper aims in tutorial form to introduce the DWT, to illustrate its link with filters and filterbanks and to illustrate how it may be used as part of an image coding algorithm. It concludes with a look at the qualitative differences between images coded using JPEG2000 and those coded using the existing JPEG standard.
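
As a companion to the tutorial, here is a minimal sketch of one level of separable subband analysis: filter each row and column with a low-pass/high-pass pair, then downsample by two. The Haar pair is used purely for brevity; JPEG2000 itself specifies the 9/7 and 5/3 filter pairs, and subband naming conventions vary.

```python
import numpy as np

# Haar analysis pair, chosen only for brevity; JPEG2000 uses the
# 9/7 (lossy) and 5/3 (lossless) filter pairs instead.
H0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass
H1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass

def split(x):
    """Filter then downsample by two: one level of 1D subband analysis."""
    return np.convolve(x, H0)[1::2], np.convolve(x, H1)[1::2]

def dwt2d(img):
    """Separable one-level decomposition into LL, LH, HL, HH subbands."""
    lo_r, hi_r = zip(*(split(row) for row in img))            # horizontal pass
    ll, lh = zip(*(split(col) for col in np.array(lo_r).T))   # vertical pass
    hl, hh = zip(*(split(col) for col in np.array(hi_r).T))
    return tuple(np.array(b).T for b in (ll, lh, hl, hh))
```

Iterating the same split on the LL band yields the multi-resolution pyramid used by the coder.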

Proceedings ArticleDOI
09 Dec 2002
TL;DR: A new JPEG-compliant solution under the proposed framework but with different ECC and watermarking methods is introduced, to demonstrate the practicability of the method.
Abstract: We have introduced a robust and secure digital signature solution for multimedia content authentication, by integrating content feature extraction, error correction coding (ECC), watermarking, and cryptographic hashing into a unified framework. We have successfully applied it to JPEG2000 as well as generic wavelet transform based applications. In this paper, we shall introduce a new JPEG-compliant solution under our proposed framework but with different ECC and watermarking methods. System security analysis as well as system robustness evaluation will also be given to further demonstrate the practicability of our method.

Journal ArticleDOI
TL;DR: An online preprocessing technique is proposed, which, although very simple, is able to provide significant improvements in the compression ratio of the images that it targets and shows a good robustness on other images.
Abstract: This article addresses the problem of improving the efficiency of lossless compression of images with sparse histograms. An online preprocessing technique is proposed, which, although very simple, is able to provide significant improvements in the compression ratio of the images that it targets and shows a good robustness on other images.
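
In its simplest offline form, the idea is histogram packing: remap the sparse set of intensities that actually occur onto a contiguous range before lossless coding, and transmit the small inverse table. The two-pass sketch below conveys the idea; the paper's contribution is an online variant, which is not reproduced here.

```python
import numpy as np

def pack_histogram(img):
    """Map the K distinct values occurring in the image onto 0..K-1.
    Returns the packed image plus the table needed to invert the mapping."""
    values = np.unique(img)                 # sorted values present in the image
    packed = np.searchsorted(values, img)   # rank of each pixel's value
    return packed.astype(img.dtype), values

# Inversion at the decoder is a table lookup: original = values[packed]
```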

Proceedings ArticleDOI
04 Jan 2002
TL;DR: A close look at the runtime performance of the intra-component transform employed in the reference implementations of the JPEG2000 image coding standard and proposes two simple techniques that dramatically reduce the number of cache misses and cut column filtering runtime by a factor of 10.
Abstract: In this paper, we take a close look at the runtime performance of the intra-component transform employed in the reference implementations of the JPEG2000 image coding standard. Typically, wavelet lifting is used to obtain a wavelet decomposition of the source image in a computationally efficient way. However, so far no attention has been paid to the impact of the CPU's memory cache on the overall performance. We propose two simple techniques that dramatically reduce the number of cache misses and cut column filtering runtime by a factor of 10. Theoretical estimates as well as experimental results on a number of hardware platforms show the effectiveness of our approach.
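
The flavor of such a fix can be shown in a few lines: instead of lifting one column at a time (striding through row-major memory), process a strip of columns together so that each fetched cache line is fully used. This is a minimal sketch of one vertical predict step on an integer image; edge rows, the update step, and the paper's exact techniques are omitted.

```python
import numpy as np

def vertical_predict_strips(img, strip=32):
    """In-place 5/3 predict step on odd rows, 'strip' columns at a time,
    so the row-major layout is traversed cache line by cache line."""
    h, w = img.shape
    for j in range(0, w, strip):
        cols = img[:, j:j + strip]  # a view; updates write through to img
        # odd rows minus the floor-average of their even neighbours
        cols[1:-1:2] -= (cols[0:-2:2] + cols[2::2]) >> 1
```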

Journal ArticleDOI
TL;DR: Experiments indicate that the proposed approach significantly outperforms current compression techniques used in commercial karyotyping systems and JPEG-2000 compression, which does not provide the desirable support for lossless compression of arbitrary ROIs.
Abstract: This paper proposes a new method for chromosome image compression based on an important characteristic of these images: the regions of interest (ROIs) to cytogeneticists for evaluation and diagnosis are well determined and segmented. Such information is utilized to advantage in our compression algorithm, which combines lossless compression of chromosome ROIs with lossy-to-lossless coding of the remaining image parts. This is accomplished by first performing a differential operation on chromosome ROIs for decorrelation, followed by critically sampled integer wavelet transforms on these regions and the remaining image parts. The well-known set partitioning in hierarchical trees (SPIHT) (Said and Pearlman, 1996) algorithm is modified to generate separate embedded bit streams for both chromosome ROIs and the rest of the image that allow continuous lossy-to-lossless compression of both (although lossless compression of the former is commonly used in practice). Experiments on two sets of sample chromosome spread and karyotype images indicate that the proposed approach significantly outperforms current compression techniques used in commercial karyotyping systems and JPEG-2000 compression, which does not provide the desirable support for lossless compression of arbitrary ROIs.

Proceedings ArticleDOI
07 Apr 2002
TL;DR: A fast watermarking method that applies to JPEG images that manipulates the quantized DCT coefficients directly and can be implemented in real time is presented.
Abstract: In this paper, we present a fast watermarking method that applies to JPEG images. JPEG is a standard image format supported by virtually all available image software applications. Our method manipulates the quantized DCT coefficients directly and can be implemented in real time. We use texture masking to embed a stronger watermark signal in certain texture areas. Watermark robustness to JPEG compression, additive Gaussian noise and image cropping attacks is studied with the proposed system. The relationship between watermark robustness and watermark position is also studied. A description of the method is included and results are presented along with a discussion of possible improvements.

Patent
Ricardo L. de Queiroz1
22 Jul 2002
TL;DR: In this article, a unique hashing function is derived from a first section of image data contained in the JPEG compressed image, such that any change subsequently made to that section yields a different hash; a signature string based on the hash is embedded into a next section of the image data.
Abstract: A system and method for authentication of JPEG image data enables the recipient to ascertain whether the received image file originated from a known identified source or whether the contents of the file have been altered in some fashion prior to receipt. A unique hashing function is derived from a first section of image data contained in the JPEG compressed image, in such a way that any changes subsequently made to that section are reflected in a different hash being derived from it; a signature string based on this hash is embedded into a next section of the image data. Since the embedding of a previous section's integrity-checking number is done without modifying the JPEG bit stream, any JPEG decoder can thereafter properly decode the image.
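
A toy version of the chaining idea is given below, with SHA-256 standing in for the patent's unspecified hashing function and plain concatenation standing in for the marker-based embedding that leaves the JPEG bit stream decodable.

```python
import hashlib

def chain_sections(sections):
    """Append to each section the digest of the (already chained) previous
    section, so altering section i breaks verification of section i+1."""
    out = [sections[0]]
    for cur in sections[1:]:
        out.append(cur + hashlib.sha256(out[-1]).digest())
    return out

def verify_chain(chained):
    """Recompute each digest and compare it with the embedded one."""
    return all(
        cur[-32:] == hashlib.sha256(prev).digest()
        for prev, cur in zip(chained, chained[1:])
    )
```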

Proceedings ArticleDOI
S. Chatterjee1, C.D. Brooks
07 Nov 2002
TL;DR: An optimized DWT implementation is presented that exhibits speedups of up to 4× over the DWT in a JPEG 2000 reference implementation, and demonstrates significant performance improvements of the DWT over a baseline implementation.
Abstract: The discrete wavelet transform (DWT), the technology at the heart of the JPEG 2000 image compression system, operates on user-definable tiles of the image, as opposed to fixed-size blocks of the image as does the discrete cosine transform (DCT) used in JPEG. This difference reduces artificial blocking effects but can severely stress the memory system. We examine the interaction of the DWT and the memory hierarchy, modify the structure of the DWT computation and the layout of the image data to improve cache and translation lookaside buffer (TLB) locality, and demonstrate significant performance improvements of the DWT over a baseline implementation. Our optimized DWT implementation exhibits speedups of up to 4× over the DWT in a JPEG 2000 reference implementation.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: A combined JPEG-2000 and spectral correlation method for hyperspectral image compression is presented and shows promising results.
Abstract: In this paper, a combined JPEG-2000 and spectral correlation method for hyperspectral image compression is presented. This compression scheme shows promising results. Various spectral decorrelation techniques are also compared. Decorrelation using the Karhunen-Loeve transform performs best in terms of PSNR gain, but since it is computationally expensive, it offers little advantage over the discrete cosine transform.

Patent
Tooru Suino1
27 Dec 2002
TL;DR: In this article, an image processing apparatus and method for decompressing compressed image data is described, which is obtained by dividing an original image into blocks and compressing each block, and a smoothing unit is used to control the smoothing effect applied to the image based on distance from a block boundary and on an edge amount.
Abstract: An image processing apparatus and method for decompressing compressed image data is described. In one embodiment, the image processing apparatus decompresses compressed image data that is obtained by dividing an original image into blocks and compressing each block. The apparatus may comprise a decompression unit to decompress the compressed image data to provide an image which is a collection of the respective blocks, and a smoothing unit to perform a smoothing operation on the decompressed image to control the smoothing effect applied to the image based on distance from a block boundary and based on an edge amount.

Journal ArticleDOI
TL;DR: The authors found that compression ratios as high as 20:1 can be utilized without affecting lesion detectability, and that significant differences between the original and the compressed CR images were not recognized up to a compression ratio of 50:1 within a confidence level of 99%.
Abstract: The efficient compression of radiographic images is of importance for improved storage and network utilization in support of picture archiving and communication systems (PACS) applications. The DICOM Working Group 4 adopted JPEG2000 as an additional compression standard in Supplement 61 over the existing JPEG. The wavelet-based JPEG2000 can achieve higher compression ratios with less distortion than the Discrete Cosine Transform (DCT)-based JPEG algorithm. However, the degradation of JPEG2000-compressed computed radiography (CR) chest images has not been tested comprehensively in a clinical setting. The authors evaluated the diagnostic quality of JPEG2000-compressed CR chest images with compression ratios from 5:1 to 200:1. An ROC (receiver operating characteristic) analysis and a t test were performed to ascertain clinical performance using the JPEG2000-compressed images. The authors found that compression ratios as high as 20:1 can be utilized without affecting lesion detectability. Significant differences between the original and the compressed CR images were not recognized up to a compression ratio of 50:1 within a confidence level of 99%.

Jin Li1
01 Jan 2002
TL;DR: The mathematics in the coding engine of JPEG 2000, a state-of-the-art image compression system, is reviewed, focusing in depth on the transform, entropy coding and bitstream assembler modules.
Abstract: We briefly review the mathematics in the coding engine of JPEG 2000, a state-of-the-art image compression system. We focus in depth on the transform, entropy coding and bitstream assembler modules. Our goal is to pass the readers a good understanding of the modern scalable image compression technologies without being swarmed by the details.

Proceedings ArticleDOI
07 Apr 2002
TL;DR: The performance of various feature extraction and classification tasks is measured on hyperspectral images coded using the JPEG-2000 Standard and suggests that one need not limit remote sensing systems to lossless compression only, since many common classification tools perform reliably on images compressed to very low bit rates.
Abstract: We present results quantifying the exploitability of compressed remote sensing imagery. The performance of various feature extraction and classification tasks is measured on hyperspectral images coded using the JPEG-2000 Standard. Spectral decorrelation is performed using the Karhunen-Loeve transform and the 9-7 wavelet transform as part of the JPEG-2000 process. The quantitative performance of supervised, unsupervised, and hybrid classification tasks is reported as a function of the compressed bit rate for each spectral decorrelation scheme. The tasks examined are shown to perform with 99% accuracy at rates as low as 0.125 bits/pixel/band. This suggests that one need not limit remote sensing systems to lossless compression only, since many common classification tools perform reliably on images compressed to very low bit rates.
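
The spectral decorrelation step is standard enough to sketch: compute the band covariance of a (bands, rows, cols) cube and project onto its eigenvectors. This is the textbook KLT construction, not the exact pipeline used in the experiments.

```python
import numpy as np

def klt_bands(cube):
    """Decorrelate a (bands, rows, cols) cube along the spectral axis by
    projecting onto the eigenvectors of the band-covariance matrix."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    _, vecs = np.linalg.eigh(np.cov(x))  # eigenvectors of the (bands x bands) covariance
    y = vecs.T @ (x - mean)              # decorrelated spectral components
    return y.reshape(bands, rows, cols), vecs, mean

# Inverse at the decoder: x = vecs @ y.reshape(bands, -1) + mean
```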

Proceedings ArticleDOI
07 Aug 2002
TL;DR: Two new accelerating schemes are proposed and applied to the prototyping design, which turns out to be powerful enough to meet the computational requirements of the most advanced digital still cameras.
Abstract: Embedded block coding with optimized truncation (EBCOT) is the entropy coding algorithm adopted by the new still image compression standard JPEG 2000. It is composed of a multi-pass fractional bit-plane context scanning along with an arithmetic coding procedure. GPPs (general-purpose processors) and DSPs fail to accelerate this kind of bit-level operation, which has been shown to occupy most of the computational time of a JPEG 2000 system. In this paper, two new accelerating schemes are proposed and applied to our prototyping design, which turns out to be powerful enough to meet the computational requirements of the most advanced digital still cameras.
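
The bit-level nature of the bottleneck is visible from the scan pattern alone: each coding pass visits every sample of a code-block bitplane in stripes of four rows, column by column, as sketched below. The context modeling and MQ arithmetic coding performed at each visited position, which the proposed accelerators target, are omitted.

```python
def ebcot_scan(height, width):
    """Yield (row, col) positions in EBCOT's stripe-oriented scan order:
    stripes of four rows, with columns visited top to bottom within each stripe."""
    for top in range(0, height, 4):
        for col in range(width):
            for row in range(top, min(top + 4, height)):
                yield row, col
```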

Proceedings ArticleDOI
10 Dec 2002
TL;DR: An efficient codec architecture for context-based adaptive arithmetic coding is proposed, which exhibits low cost, low latency, and high throughput rate and can be programmed for supporting multiple standards such as JPEG, JPEG2000, JBIG, andJBIG2 standards.
Abstract: For the next generation of image compression standards, context-based arithmetic coding is adopted to improve the compression rate. An efficient, high-throughput codec design is strongly required for handling high-resolution images. We propose an efficient codec architecture for context-based adaptive arithmetic coding, which exhibits low cost, low latency, and a high throughput rate. In addition, it can be programmed to support multiple standards such as JPEG, JPEG2000, JBIG, and JBIG2. It exploits a three-stage pipeline architecture. Based on parallel leading-zero detection and bit-stuffing handling, symbols can be encoded and decoded within one cycle, so the throughput rate can be as high as the codec operating clock rate. In 0.35-μm 1P4M CMOS technology, both the encoding and decoding rates can reach 185 Msymbols/sec. The AC codec costs only 12 K gates and an 860 μm × 860 μm layout area. These performances meet the requirements of high-resolution real-time applications.

Patent
18 Oct 2002
TL;DR: In this article, a method and apparatus for transporting portions of a codestream over a communications mechanism is described, which comprises sending a request over a network and receiving tile-parts of a JPEG 2000 compliant codestREAM from the network as a return type as part of a response to the request.
Abstract: A method and apparatus for transporting portions of a codestream over a communications mechanism is described. In one embodiment, the method comprises sending a request over a network and receiving tile-parts of a JPEG 2000 compliant codestream from the network as a return type as part of a response to the request.

Proceedings ArticleDOI
T. Masuzaki1, Hiroshi Tsutsui1, Tomonori Izumi1, Takao Onoye1, Y. Nakamura1 
07 Aug 2002
TL;DR: The proposed scheme successfully reduces the computational cost and working memory size of the process down to 29% and 13%, respectively, compared to a conventional approach in the case of 1/16 compression, and hence is suitable for use in embedded systems.
Abstract: A novel rate control scheme is proposed, dedicated to JPEG2000 image coding. By predicting the bitrate of the coded data and updating the prediction adaptively, the proposed scheme can be executed in parallel with code-block coding steps such as coefficient bit modeling and arithmetic coding. The proposed scheme successfully reduces the computational cost and working memory size of the process down to 29% and 13%, respectively, compared to a conventional approach in the case of 1/16 compression, and hence is suitable for use in embedded systems.

Journal ArticleDOI
TL;DR: This work addresses the problems of evaluating the execution speed of a wavelet engine on a modern DSP, and describes two implementations, based on the lifting scheme and the filter bank scheme, respectively, and presents experimental results on code profiling.
Abstract: We develop wavelet engines on a digital signal processors (DSP) platform, the target application being image and intraframe video compression by means of the forthcoming JPEG2000 and Motion-JPEG2000 standards. We describe two implementations, based on the lifting scheme and the filter bank scheme, respectively, and we present experimental results on code profiling. In particular, we address the following problems: (1) evaluating the execution speed of a wavelet engine on a modern DSP; (2) comparing the actual execution speed of the lifting scheme and the filter bank scheme with the theoretical results; (3) using the on-board direct memory access (DMA) to possibly optimize the execution speed. The results allow to assess the performance of a modern DSP in the image coding task, as well as to compare the lifting and filter bank performance in a realistic application scenario. Finally, guidelines for optimizing the code efficiency are provided by investigating the possible use of the on-board DMA.