
Showing papers on "JPEG 2000 published in 2004"


Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and its performance is demonstrated through comparison with both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
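For reference, the index combines luminance, contrast and structure comparisons within local windows. Below is a minimal single-window sketch of that computation, using the commonly cited constants (K1 = 0.01, K2 = 0.03, dynamic range L = 255); the names are illustrative, and the full algorithm additionally applies windowing and pooling over the whole image.

```python
import numpy as np

def ssim_index(x, y, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM between two equal-sized grayscale patches."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2            # stabilizing constants
    mu_x, mu_y = x.mean(), y.mean()                  # luminance terms
    var_x, var_y = x.var(), y.var()                  # contrast terms
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()        # structure term
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den

# Toy check: identical patches score 1.0, a noisy copy scores lower.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(8, 8))
noisy = np.clip(ref + rng.normal(0, 10, size=(8, 8)), 0, 255)
print(ssim_index(ref, ref), ssim_index(ref, noisy))
```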

40,609 citations


Journal ArticleDOI
TL;DR: A full- and no-reference blur metric as well as a full-reference ringing metric are presented, based on an analysis of the edges and adjacent regions in an image and have very low computational complexity.
Abstract: We present a full- and no-reference blur metric as well as a full-reference ringing metric. These metrics are based on an analysis of the edges and adjacent regions in an image and have very low computational complexity. As blur and ringing are typical artifacts of wavelet compression, the metrics are then applied to JPEG2000 coded images. Their perceptual significance is corroborated through a number of subjective experiments. The results show that the proposed metrics perform well over a wide range of image content and distortion levels. Potential applications include source coding optimization and network resource management.
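The published metrics are defined through an analysis of edges and their adjacent regions; as a rough illustration of that family of measures, the sketch below estimates blur as the average width of horizontal intensity edges. It is a simplified stand-in under its own assumptions (gradient threshold, row-wise scan), not the authors' exact formulation.

```python
import numpy as np

def edge_width_blur(img, grad_thresh=20):
    """Rough no-reference blur estimate: mean width of horizontal edges.
    An edge is declared where the horizontal gradient exceeds `grad_thresh`;
    its width is the length of the monotonic ramp around that position.
    Larger mean width suggests a blurrier image. Simplified sketch only."""
    img = np.asarray(img, dtype=np.float64)
    widths = []
    for row in img:
        grad = np.diff(row)
        for i in np.where(np.abs(grad) > grad_thresh)[0]:
            left = i
            while left > 0 and (row[left] - row[left - 1]) * grad[i] > 0:
                left -= 1
            right = i + 1
            while right < len(row) - 1 and (row[right + 1] - row[right]) * grad[i] > 0:
                right += 1
            widths.append(right - left)
    return float(np.mean(widths)) if widths else 0.0

# A step edge measures narrow; the same edge after a 5-tap blur measures wider.
sharp = np.tile(np.r_[np.zeros(16), 255 * np.ones(16)], (8, 1))
blurred = np.array([np.convolve(r, np.ones(5) / 5, mode="same") for r in sharp])
print(edge_width_blur(sharp), edge_width_blur(blurred))
```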

526 citations


Book
18 Oct 2004
TL;DR: This book is a guide to the data compression techniques underlying JPEG 2000, covering the standard itself, its coding algorithms, and VLSI architectures for the discrete wavelet transform and for JPEG 2000.
Abstract: Contents: Preface; 1. Introduction to Data Compression; 2. Source Coding Algorithms; 3. JPEG - Still Image Compression Standard; 4. Introduction to Discrete Wavelet Transform; 5. VLSI Architectures for Discrete Wavelet Transforms; 6. JPEG 2000 Standard; 7. Coding Algorithms in JPEG 2000; 8. Code Stream Organization and File Format; 9. VLSI Architectures for JPEG 2000; 10. Beyond Part 1 of JPEG 2000; Index; About the Authors.

347 citations


30 Mar 2004

131 citations


01 Jan 2004
TL;DR: In this paper, 3D-SPECK, an embedded, block-based wavelet transform coding algorithm of low complexity, is proposed for 3D volumetric image data.
Abstract: We propose an embedded, block-based wavelet transform image coding algorithm of low complexity. The Set Partitioned Embedded bloCK (SPECK) algorithm is modified and extended to three dimensions. The resultant algorithm, three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), efficiently encodes 3D volumetric image data by exploiting the dependencies in all dimensions. 3D-SPECK generates an embedded bit stream and therefore provides progressive transmission. We describe two implementations of this coding algorithm, one using an integer wavelet transform and the other a floating point wavelet transform; the former enables lossy and lossless decompression from the same bit stream, while the latter achieves better lossy compression performance. A wavelet packet structure and coefficient scaling are used to make the integer filter transform approximately unitary. The structure of hyperspectral images reveals spectral responses that make them ideal candidates for compression by 3D-SPECK. We demonstrate that 3D-SPECK, a wavelet domain compression algorithm, can preserve spectral profiles well. Compared with the lossless version of the benchmark JPEG2000 (multi-component), the 3D-SPECK lossless algorithm produces an average 3.0% decrease in compressed file size for Airborne Visible Infrared Imaging Spectrometer images, the typical hyperspectral imagery. We also conduct comparisons of the lossy implementation with other state-of-the-art algorithms such as three-Dimensional Set Partitioning In Hierarchical Trees (3D-SPIHT) and JPEG2000. We conclude that this algorithm, in addition to being very flexible, retains all the desirable features of these algorithms and is highly competitive with 3D-SPIHT and better than JPEG2000 in compression efficiency.
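The core operation in SPECK-style coders is a significance test of a coefficient set against a bit-plane threshold, followed by recursive splitting of significant sets. The sketch below illustrates that test and an octree split for a 3-D block; it is a toy fragment with made-up data, not the full 3D-SPECK codec.

```python
import numpy as np

def is_significant(block, n):
    """A set is significant at bit-plane n if any |coefficient| >= 2**n."""
    return np.max(np.abs(block)) >= (1 << n)

def split_octants(block):
    """Octree split of a 3-D block, the 3-D analogue of SPECK's quadtree split."""
    z, y, x = (s // 2 for s in block.shape)
    parts = [block[:z, :y, :x], block[:z, :y, x:], block[:z, y:, :x], block[:z, y:, x:],
             block[z:, :y, :x], block[z:, :y, x:], block[z:, y:, :x], block[z:, y:, x:]]
    return [p for p in parts if p.size]

# Toy wavelet block: small coefficients plus one strong low-frequency value.
rng = np.random.default_rng(1)
coeffs = rng.normal(0, 15, size=(8, 8, 8)).astype(np.int64)
coeffs[0, 0, 0] = 200
n_max = int(np.floor(np.log2(np.max(np.abs(coeffs)))))
for n in range(n_max, n_max - 3, -1):
    sig = [is_significant(p, n) for p in split_octants(coeffs)]
    print(f"bit-plane {n}: {sum(sig)}/8 octants significant")
```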

118 citations


Proceedings ArticleDOI
27 Feb 2004
TL;DR: In this paper, a comparative study of the rate-distortion performance of Motion-JPEG2000 and H.264/AVC using a representative set of video material is performed.
Abstract: Recently, two new international image and video coding standards have been released: the wavelet-based JPEG2000 standard designed basically for compressing still images, and H.264/AVC, the newest generic standard for video coding. As part of the JPEG2000 suite, Motion-JPEG2000 extends JPEG2000 to a range of applications originally associated with a pure video coding standard like H.264/AVC. However, currently little is known about the relative performance of Motion-JPEG2000 and H.264/AVC in terms of coding efficiency on their overlapping domain of target applications requiring the random access of individual pictures. In this paper, we report on a comparative study of the rate-distortion performance of Motion-JPEG2000 and H.264/AVC using a representative set of video material. Our experimental coding results indicate that H.264/AVC performs surprisingly well on individually coded pictures in comparison to the highly sophisticated still image compression technology of JPEG2000. In addition to the rate-distortion analysis, we also provide a brief comparison of the evaluated coding algorithms in terms of complexity and functionality.

106 citations


Book
18 Oct 2004

100 citations


Proceedings ArticleDOI
05 Apr 2004
TL;DR: A semi-fragile watermarking scheme is presented which embeds a watermark in the quantized DCT domain; it is tolerant to JPEG compression down to a pre-determined lowest quality factor, but is sensitive to all other malicious attacks, in either the spatial or transform domains.
Abstract: With the increasing popularity of JPEG images, a need arises to devise effective watermarking techniques which consider JPEG compression as an acceptable manipulation. In this paper, we present a semi-fragile watermarking scheme which embeds a watermark in the quantized DCT domain. It is tolerant to JPEG compression to a pre-determined lowest quality factor, but is sensitive to all other malicious attacks, either in spatial or transform domains. Feature codes are extracted based on the relative sign and magnitudes of coefficients, and these are invariant due to an important property of JPEG compression. The employment of a nine-neighborhood mechanism ensures that non-deterministic block-wise dependence is achieved. Analysis and experimental results are provided to support the effectiveness of the scheme.
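The feature codes rely on relationships between quantized DCT coefficients that survive further JPEG quantization. The fragment below shows one hedged way such an invariant bit could be derived from a coefficient pair; the construction and values are illustrative, not the paper's exact feature extraction.

```python
import numpy as np

def feature_bit(coeff_a, coeff_b, q_step):
    """Illustrative invariant bit from a pair of DCT coefficients: after both
    are quantized with the same step, the ordering of the quantized values
    tends to survive a later, coarser requantization. Sketch only; not the
    paper's exact feature extraction."""
    qa = np.round(coeff_a / q_step)
    qb = np.round(coeff_b / q_step)
    return 1 if qa >= qb else 0

# Two mid-frequency coefficients from one 8x8 block (hypothetical values).
a, b = 37.0, -12.0
print(feature_bit(a, b, q_step=10))   # 1: a dominates b
print(feature_bit(a, b, q_step=20))   # still 1 after coarser quantization
```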

95 citations


Proceedings ArticleDOI
27 Jun 2004
TL;DR: This work proposes a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise, and has been successfully applied to many commonly used images, thus demonstrating its generality.
Abstract: Recently, among various data hiding techniques, a new subset, lossless data hiding, has drawn tremendous interest. Most existing lossless data hiding algorithms are, however, fragile in the sense that they can be defeated when compression or other small alteration is applied to the marked image. The method of C. De Vleeschouwer et al. (see IEEE Trans. Multimedia, vol.5, p.97-105, 2003) is the only existing semi-fragile lossless data hiding technique (also referred to as robust lossless data hiding), which is robust against high quality JPEG compression. We first point out that this technique has a fatal problem: salt-and-pepper noise caused by using modulo 256 addition. We then propose a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise. This technique has been successfully applied to many commonly used images (including medical images, more than 1000 images in the CorelDRAW database, and JPEG2000 test images), thus demonstrating its generality. The experimental results show that the visual quality, payload and robustness are acceptable. In addition to medical and law enforcement fields, it has been applied to authenticate losslessly compressed JPEG2000 images.
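The salt-and-pepper problem comes from modulo-256 addition on 8-bit pixels: adding a small value to a near-saturated pixel wraps it to the opposite end of the range. A minimal numeric illustration:

```python
import numpy as np

pixels = np.array([3, 120, 250], dtype=np.uint8)         # dark, mid, bright samples
shift = 10

wrapped = (pixels.astype(np.int32) + shift) % 256         # modulo-256 addition
clipped = np.clip(pixels.astype(np.int32) + shift, 0, 255)

print(wrapped)   # [ 13 130   4] -> the bright pixel wraps to near-black (pepper)
print(clipped)   # [ 13 130 255] -> clipping avoids the wrap but is not invertible
```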

86 citations


Journal ArticleDOI
TL;DR: A novel high capacity data hiding method based on JPEG that can achieve an impressively high embedding capacity of around 20% of the compressed image size with little noticeable degradation of image quality is proposed.
Abstract: JPEG is the most popular file format for digital images. However, up to the present time, very few data hiding techniques have taken the JPEG format into account. In this paper, we shall propose a novel high capacity data hiding method based on JPEG. The proposed method employs a capacity table to estimate the number of bits that can be hidden in each DCT component so that significant distortions in the stego-image can be avoided. The capacity table is derived from the JPEG default quantization table and the Human Visual System (HVS). Then, the adaptive least-significant bit (LSB) substitution technique is employed to process each quantized DCT coefficient. The proposed data hiding method enables us to control the level of embedding capacity by using a capacity factor. According to our experimental results, our new scheme can achieve an impressively high embedding capacity of around 20% of the compressed image size with little noticeable degradation of image quality.
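As a hedged sketch of the embedding step, the fragment below performs adaptive LSB substitution on quantized DCT coefficients driven by a per-coefficient capacity table; the capacity values and coefficients here are placeholders, and the HVS-based derivation of the real capacity table is omitted.

```python
import numpy as np

def embed_bits(quant_coeffs, capacity, bitstream):
    """Replace the `capacity[i]` least-significant bits of each quantized DCT
    coefficient's magnitude with message bits. Entries of 0 skip a coefficient.
    Sketch only: sign handling and capacity derivation are simplified."""
    coeffs = quant_coeffs.copy()
    pos = 0
    for i, cap in enumerate(capacity):
        if cap == 0 or pos >= len(bitstream):
            continue
        bits = bitstream[pos:pos + cap]
        pos += len(bits)
        value = int("".join(map(str, bits)), 2)
        mag = abs(int(coeffs[i]))
        mag = (mag >> len(bits) << len(bits)) | value     # substitute the low bits
        coeffs[i] = mag if coeffs[i] >= 0 else -mag
    return coeffs, pos

# Hypothetical strip of quantized coefficients and a made-up capacity table.
quant = np.array([-26, 3, 1, 0, 2, -1, 0, 0])
cap = [0, 2, 1, 0, 2, 1, 0, 0]            # DC untouched; more bits mid-frequency
payload = [1, 0, 1, 1, 1, 0]
stego, used = embed_bits(quant, cap, payload)
print(stego, "bits embedded:", used)
```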

85 citations


Proceedings ArticleDOI
04 May 2004
TL;DR: The proposed Wyner-Ziv coder allows independent encoding of each view with low-complexity cameras, and performs centralized decoding with side information from additional views, and yields superior compression performance at low bit-rates.
Abstract: We address the problem of compression for large camera arrays, and propose a distributed solution based on Wyner-Ziv coding. The proposed scheme allows independent encoding of each view with low-complexity cameras, and performs centralized decoding with side information from additional views. Experimental results are given for two light field data sets. The performance of the proposed scheme is compared with independently coding each view using JPEG2000 and a shape-adaptive JPEG-like coder. The Wyner-Ziv coder yields superior compression performance at low bit-rates. In addition, there is a great reduction in encoder complexity when compared to JPEG2000.

Proceedings ArticleDOI
29 Nov 2004
TL;DR: An energy efficient JPEG 2000 image transmission system over point-to-point wireless sensor networks is proposed by jointly adjusting the source coding schemes, channel coding rates, and transmitter power levels in an optimal way.
Abstract: We propose an energy efficient JPEG 2000 image transmission system over point-to-point wireless sensor networks. The objective is to minimize the overall processing and transmission energy consumption with the expected end-to-end QoS guarantee, which is achieved by jointly adjusting the source coding schemes, channel coding rates, and transmitter power levels in an optimal way. The advantages of the proposed system lie in three aspects: adaptivity, optimality, and low complexity. Based on the characteristics of the image content, the estimated channel conditions, and the distortion constraint, the proposed low-complexity joint source channel coding and power control algorithm adjusts the coding and transmission strategies adaptively, which can approximate the optimal solution with a tight bound.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: It is shown that the MOS predictions by the proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images.
Abstract: This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
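PSNR, used as the baseline objective measure in this comparison, is computed in the standard way for 8-bit images; the helper below is the usual textbook definition rather than anything specific to the described tool.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    ref = reference.astype(np.float64)
    dst = distorted.astype(np.float64)
    mse = np.mean((ref - dst) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(f"{psnr(img, noisy):.2f} dB")
```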

Journal ArticleDOI
TL;DR: A survey of palette reordering methods is provided, and it is concluded that the pairwise merging heuristic proposed by Memon et al. is the most effective, but also the most computationally demanding.
Abstract: Palette reordering is a well-known and very effective approach for improving the compression of color-indexed images. In this paper, we provide a survey of palette reordering methods, and we give experimental results comparing the ability of seven of them to improve the compression efficiency of JPEG-LS and lossless JPEG 2000. We concluded that the pairwise merging heuristic proposed by Memon et al. is the most effective, but also the most computationally demanding. Moreover, we found that the second most effective method is a modified version of Zeng's reordering technique, which was 3%-5% worse than pairwise merging, but much faster.

Proceedings ArticleDOI
24 Oct 2004
TL;DR: The proposed encryption method works with any standard cipher, incurs no storage overhead, introduces negligible computational cost, and maintains all the desirable properties of the original JPEG 2000 codestream, such as error resilience and scalability.
Abstract: This paper presents a compliant encryption method for JPEG 2000 codestreams such that the encryption process does not introduce superfluous JPEG2000 markers in the protected codestream, i.e., the protected codestream preserves the syntax of the original codestream. The proposed encryption method works with any standard cipher, incurs no storage overhead, introduces negligible computational cost, and maintains all the desirable properties of the original JPEG 2000 codestream, such as error resilience and scalability.
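The JPEG 2000 syntax reserves the byte 0xFF followed by a byte greater than 0x8F as a marker, so a syntax-preserving encryptor must keep such pairs (and a trailing 0xFF) out of encrypted packet bodies. The sketch below checks that constraint and shows one possible iterate-until-compliant strategy; the strategy and the toy cipher are assumptions for illustration, not the paper's method.

```python
import os

def has_forbidden_marker(data: bytes) -> bool:
    """True if `data` contains 0xFF followed by a byte > 0x8F (which JPEG 2000
    syntax would read as a marker) or ends with a dangling 0xFF."""
    for i in range(len(data) - 1):
        if data[i] == 0xFF and data[i + 1] > 0x8F:
            return True
    return len(data) > 0 and data[-1] == 0xFF

def encrypt_compliant(packet_body: bytes, encrypt) -> bytes:
    """One possible strategy (an assumption, not the paper's exact method):
    re-encrypt until the ciphertext contains no superfluous marker."""
    ciphertext = encrypt(packet_body)
    while has_forbidden_marker(ciphertext):
        ciphertext = encrypt(ciphertext)
    return ciphertext

def toy_encrypt(data: bytes) -> bytes:
    """Stand-in for a real cipher: XOR with a fresh random keystream."""
    key = os.urandom(len(data))
    return bytes(a ^ b for a, b in zip(data, key))

print(has_forbidden_marker(bytes([0x12, 0xFF, 0x91])))       # True: looks like a marker
print(len(encrypt_compliant(b"example packet body", toy_encrypt)))
```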

Proceedings ArticleDOI
02 Nov 2004
TL;DR: The on-going JPSEC standardization activity is reviewed to provide a standardized framework for secure imaging, in order to support tools needed to secure digital images, such as content protection, data integrity check, authentication, and conditional access control.
Abstract: In this paper, we first review the on-going JPSEC standardization activity. Its goal is to extend the baseline JPEG 2000 specification to provide a standardized framework for secure imaging, in order to support tools needed to secure digital images, such as content protection, data integrity check, authentication, and conditional access control. We then present two examples of JPSEC tools. The first one is a technique for secure scalable streaming and secure transcoding. It allows the protected JPSEC codestream to be transcoded while preserving the protection, i.e., without requiring the codestream to be unprotected (e.g., decrypted). The second one is a technique for conditional access control. It can be used for access control by resolution or quality, but also by regions of interest.

Journal ArticleDOI
TL;DR: JPEG 2000 encoder settings different from the default ones resulted in greatly improved model and human observer performance in the studied clinically relevant visual tasks using real angiography backgrounds.
Abstract: Previous studies have evaluated the effect of the new still image compression standard JPEG 2000 using nontask based image quality metrics, i.e., peak-signal-to-noise-ratio (PSNR) for nonmedical images. In this paper, the effect of JPEG 2000 encoder options was investigated using the performance of human and model observers (nonprewhitening matched filter with an eye filter, square-window Hotelling, Laguerre-Gauss Hotelling and channelized Hotelling model observer) for clinically relevant visual tasks. Two tasks were investigated: the signal known exactly but variable task (SKEV) and the signal known statistically task (SKS). Test images consisted of real X-ray coronary angiograms with simulated filling defects (signals) inserted in one of the four simulated arteries. The signals varied in size and shape. Experimental results indicated that the dependence of task performance on the JPEG 2000 encoder options was similar for all model and human observers. Model observer performance in the more tractable and computationally economic SKEV task can be used to reliably estimate performance in the complex but clinically more realistic SKS task. JPEG 2000 encoder settings different from the default ones resulted in greatly improved model and human observer performance in the studied clinically relevant visual tasks using real angiography backgrounds.
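The nonprewhitening matched filter observer reduces each image to the inner product with the expected signal template, and task performance is summarized by a detectability index. A minimal sketch with synthetic data follows; the template, noise model and names are illustrative, not the study's angiogram backgrounds or eye-filter variant.

```python
import numpy as np

def npw_responses(images, template):
    """Non-prewhitening matched filter: response = <template, image>."""
    return np.array([np.sum(template * img) for img in images])

def detectability(signal_resp, noise_resp):
    """Index d' = mean response difference over pooled standard deviation."""
    pooled_var = 0.5 * (signal_resp.var() + noise_resp.var())
    return (signal_resp.mean() - noise_resp.mean()) / np.sqrt(pooled_var)

# Synthetic example: a Gaussian blob signal in white-noise backgrounds.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[-16:16, -16:16]
template = np.exp(-(xx ** 2 + yy ** 2) / (2 * 4.0 ** 2))
noise_only = rng.normal(0, 1, size=(200, 32, 32))
signal_present = rng.normal(0, 1, size=(200, 32, 32)) + 0.2 * template
dprime = detectability(npw_responses(signal_present, template),
                       npw_responses(noise_only, template))
print(f"d' = {dprime:.2f}")
```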

Book ChapterDOI
19 Aug 2004
TL;DR: A cipher based on a chaotic neural network is proposed and used to encrypt JPEG2000 encoded images; it has high security with low cost, and can support direct operations such as image browsing and bit-rate control.
Abstract: In this paper, a cipher based on a chaotic neural network is proposed and used to encrypt JPEG2000 encoded images. During the image encoding process, some sensitive bitstreams are selected from different subbands, bit-planes or encoding-passes, and then are completely encrypted. The algorithm has high security with low cost; it can keep the original file format and compression ratio unchanged, and can support direct operations such as image browsing and bit-rate control. These properties make the cipher very suitable for such real-time encryption applications as image transmission, web imaging, mobile and wireless multimedia communication.

Journal ArticleDOI
TL;DR: This work proposes an adaptation of an existing triangular mesh generation method for depth representation that can be encoded efficiently, and shows a significant improvement in rendering speed compared to using separate compression and rendering processes.

Proceedings ArticleDOI
A. Al, B. P. Rao, Sudhir S. Kudva, S. Babu, D. Sumam, Ajit V. Rao
01 Jan 2004
TL;DR: This paper investigates the scope of the intraframe coder of H.264 for image coding and compares the quality and the complexity of its decoder with the commonly used image codecs (JPEG and JPEG2000).
Abstract: The recently proposed H.264 video coding standard offers significant coding gains over previously defined standards. An enhanced intra-frame prediction algorithm has been proposed in H.264 for efficient compression of I-frames. This paper investigates the scope of the intraframe coder of H.264 for image coding. We compare the quality of this coder and the complexity of its decoder with the commonly used image codecs (JPEG and JPEG2000). Our results demonstrate that H.264 has a strong potential as an alternative to JPEG and JPEG2000.

Journal ArticleDOI
TL;DR: The performance of a model observer in a visual detection task of varying signals in X-ray coronary angiograms is used to optimize JPEG 2000 encoder options through a genetic algorithm procedure and the NPWE-optimized encoder settings improved the detection performance of humans and the other three model observers for an SKEV task.
Abstract: Image compression is indispensable in medical applications where inherently large volumes of digitized images are presented. JPEG 2000 has recently been proposed as a new image compression standard. The present recommendations on the choice of JPEG 2000 encoder options were based on nontask-based metrics of image quality applied to nonmedical images. We used the performance of a model observer [nonprewhitening matched filter with an eye filter (NPWE)] in a visual detection task of varying signals [signal known exactly but variable (SKEV)] in X-ray coronary angiograms to optimize JPEG 2000 encoder options through a genetic algorithm procedure. We also obtained the performance of other model observers (Hotelling, Laguerre-Gauss Hotelling, channelized-Hotelling) and human observers to evaluate the validity of the NPWE optimized JPEG 2000 encoder settings. Compared to the default JPEG 2000 encoder settings, the NPWE-optimized encoder settings improved the detection performance of humans and the other three model observers for an SKEV task. In addition, the performance also was improved for a more clinically realistic task where the signal varied from image to image but was not known a priori to observers [signal known statistically (SKS)]. The highest performance improvement for humans was at a high compression ratio (e.g., 30:1) which resulted in approximately a 75% improvement for both the SKEV and SKS tasks.
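The search itself is a genetic algorithm over discrete encoder options with model-observer performance as the fitness. The sketch below shows the generic GA machinery; the option grid and the fitness function are placeholders, not the study's actual search space or NPWE observer.

```python
import random

# Hypothetical discrete search space of encoder options (placeholder values).
OPTIONS = {
    "levels":     [3, 4, 5, 6],
    "code_block": [32, 64],
    "layers":     [1, 5, 10, 20],
}

def fitness(settings):
    """Placeholder: in the study this would be model-observer detectability
    measured on images encoded with `settings`."""
    return -abs(settings["levels"] - 5) + 0.01 * settings["layers"]

def random_individual():
    return {k: random.choice(v) for k, v in OPTIONS.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in OPTIONS}

def mutate(ind, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in OPTIONS.items()}

def genetic_search(pop_size=20, generations=30):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)       # rank by fitness
        parents = population[: pop_size // 2]            # keep the better half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(genetic_search())
```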

Proceedings ArticleDOI
24 Oct 2004
TL;DR: A new method for partial-scrambling of JPEG 2000 images based on public-key encryption, which provides an easier way of managing the encryption key compared with the secret-key based method and also provides tamper resistance against attacks.
Abstract: A new method for partial-scrambling of JPEG 2000 images based on public-key encryption is proposed. By using public-key encryption, the proposed method provides an easier way of managing the encryption key compared with the secret-key based method and also provides tamper resistance against attacks. Although public-key encryption is usually very time-consuming, the proposed method achieves fast encryption by controlling the number of bytes to be encrypted. An encrypted JPEG 2000 image generated by the proposed method has backward compatibility with a standard JPEG 2000 image, so that it can be decoded using a standard JPEG 2000 decoder. The proposed method also has scalability as to the degree of scrambling on the basis of JPEG 2000 coding units, i.e., layers, DWT-levels, subbands, or code-blocks.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations.
Abstract: This paper describes a video surveillance system which is composed of three key components, smart cameras, a server, and clients, connected through IP-networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression where an analysis module allows for events detection and regions of interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.

Journal ArticleDOI
TL;DR: It is shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel.
Abstract: The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It is shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named the quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure uniform quality over the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

Proceedings ArticleDOI
01 Jan 2004
TL;DR: Experiments show that the performance of JPEG with the integer reversible DCT is very close to that of the original standard JPEG for lossy image coding, and more importantly, with the transform, it can compress images losslessly.
Abstract: JPEG, as an international image coding standard based on DCT and Huffman entropy coder, is still popular in image compression applications although it is lossy. JPEG-LS, standardized for lossless image compression, however, employs an encoding technique different from JPEG. This paper presents an integer reversible implementation to make JPEG lossless. It uses the framework of JPEG, and just converts DCT and color transform to be integer reversible. Integer DCT is implemented by factoring the float DCT matrix into a series of elementary reversible matrices and each of them is directly integer reversible. Our integer DCT integrates lossy and lossless schemes nicely, and it supports both lossy and lossless compression by the same method. Our JPEG can be used as a replacement for the standard JPEG in either encoding or decoding or both. Experiments show that the performance of JPEG with our integer reversible DCT is very close to that of the original standard JPEG for lossy image coding, and more importantly, with our transform, it can compress images losslessly.
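One standard way to obtain such elementary reversible steps is to factor a plane rotation into three lifting operations whose roundings can be undone exactly; the sketch below demonstrates that construction and its perfect reversibility. It is a generic illustration of the idea, not necessarily the paper's specific factorization.

```python
import math

def lift_rotation(x, y, theta):
    """Integer-reversible approximation of a rotation by `theta`,
    factored into three lifting steps with rounding."""
    p = (math.cos(theta) - 1.0) / math.sin(theta)
    u = math.sin(theta)
    x = x + round(p * y)
    y = y + round(u * x)
    x = x + round(p * y)
    return x, y

def unlift_rotation(x, y, theta):
    """Undo the lifting steps in reverse order with the same roundings."""
    p = (math.cos(theta) - 1.0) / math.sin(theta)
    u = math.sin(theta)
    x = x - round(p * y)
    y = y - round(u * x)
    x = x - round(p * y)
    return x, y

theta = math.pi / 8                       # a rotation angle used in DCT factorizations
for sample in [(57, -23), (0, 100), (-7, 7)]:
    fwd = lift_rotation(*sample, theta)
    back = unlift_rotation(*fwd, theta)
    print(sample, "->", fwd, "->", back)  # round-trips exactly
```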

Journal ArticleDOI
TL;DR: This paper presents a 2-dimensional biorthogonal DWT processor design based on the residue number system that is able to fit into a 1,000,000-gate FPGA device and to complete a first-level 2-D DWT decomposition of a 32×32-pixel image in 205 μs.
Abstract: Discrete wavelet transform has been incorporated as part of the JPEG2000 image compression standard and is used in many consumer imaging products. This paper presents a 2-dimensional biorthogonal DWT processor design based on the residue number system. The symmetric extension scheme is employed to reduce distortion at image boundaries. Hardware complexity reduction and utilization improvement are achieved by hardware sharing. Our implementation results show that the design is able to fit into a 1,000,000-gate FPGA device and is able to complete a first-level 2-D DWT decomposition of a 32×32-pixel image in 205 μs.
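Symmetric extension mirrors samples at the signal borders so the wavelet filters never read past the image. As an illustration of the scheme (the paper's contribution is the residue-number-system datapath, which is not modelled here), the sketch below runs one 1-D level of the JPEG 2000 reversible 5/3 lifting transform with whole-sample symmetric extension.

```python
import numpy as np

def reflect(i, n):
    """Whole-sample symmetric extension of index i into [0, n)."""
    if i < 0:
        return -i
    if i >= n:
        return 2 * (n - 1) - i
    return i

def dwt53_level(x):
    """One 1-D level of the reversible 5/3 lifting transform with symmetric
    extension at the borders. Returns (lowpass, highpass). Even-length input."""
    x = np.asarray(x, dtype=np.int64)
    n = len(x)
    half = n // 2
    d = np.empty(half, dtype=np.int64)                     # highpass (detail)
    s = np.empty(half, dtype=np.int64)                     # lowpass (approximation)
    for i in range(half):                                  # predict step
        right = x[reflect(2 * i + 2, n)]
        d[i] = x[2 * i + 1] - ((x[2 * i] + right) >> 1)
    for i in range(half):                                  # update step
        left = d[i - 1] if i > 0 else d[0]                 # mirrored detail sample
        s[i] = x[2 * i] + ((left + d[i] + 2) >> 2)
    return s, d

row = np.array([10, 12, 14, 200, 202, 204, 20, 22])
low, high = dwt53_level(row)
print(low, high)
```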

Journal ArticleDOI
TL;DR: A new method of feature extraction is proposed in order to improve the efficiency of retrieving Joint Photographic Experts Group (JPEG) compressed images; each retrieved image is given a rank that defines its similarity to the query image.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: This paper presents the technologies that are currently being developed to accommodate the coding of floating point datasets with JPEG 2000 and shows that these enhancements to the JPEG 2000 coding pipeline lead to better compression results than Part 1 encoding where the floating point data had been retyped as integers.
Abstract: JPEG 2000 Part 10 is a new work part of the ISO/IEC JPEG Committee dealing with the extension of JPEG 2000 technologies to three-dimensional data. One of the issues in Part 10 is the ability to encode floating point datasets. Many Part 10 use cases come from the scientific and engineering communities, where floating point data is often produced either from numerical simulations or from remote sensing instruments. This paper presents the technologies that are currently being developed to accommodate this Part 10 requirement. The coding of floating point datasets with JPEG 2000 requires two changes to the coding pipeline. Firstly, the wavelet transformation stage is optimized to correctly decorrelate data represented with the IEEE 754 floating point standard. Special IEEE 754 floating point values like Infinities and NaNs are signaled beforehand as they do not correlate well with other floating point values. Secondly, computation of distortion measures on the encoder side is performed in floating point space, rather than in integer space, in order to correctly perform rate allocation. Results will show that these enhancements to the JPEG 2000 coding pipeline lead to better compression results than Part 1 encoding where the floating point data had been retyped as integers.
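As background for the wavelet-stage change, the fragment below shows how IEEE 754 float32 values decompose into sign, exponent and mantissa fields and how Infinity/NaN patterns can be flagged for separate signaling; it only illustrates the representation the codec has to respect, not the Part 10 transform itself.

```python
import numpy as np

values = np.array([1.0, -0.15625, 2.5e38, np.inf, np.nan], dtype=np.float32)
bits = values.view(np.uint32)            # reinterpret the bytes, do not convert

sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF

# An exponent of all ones marks Infinity/NaN; such samples would be signaled
# separately since their bit patterns do not correlate with ordinary data.
special = exponent == 0xFF
print(sign)
print(exponent)
print(mantissa)
print(special)
```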

Journal ArticleDOI
TL;DR: The most important parameters of this new standard are described and several "tips and tricks" are presented to help resolve the design tradeoffs that JPEG2000 application developers are likely to encounter in practice.
Abstract: A new and improved image coding standard, called JPEG2000, has been developed. JPEG2000 is the state-of-the-art image coding standard that results from the joint efforts of the International Standards Organization (ISO) and the International Telecommunications Union. In this article, we describe the most important parameters of this new standard and present several "tips and tricks" to help resolve the design tradeoffs that JPEG2000 application developers are likely to encounter in practice. The new standard outperforms the older JPEG standard by approximately 2 dB of peak signal-to-noise ratio (PSNR) for several images across all compression ratios. Beyond this gain, JPEG2000's advantages over the previous standard include its security aspects, interactive protocols and application program interfaces for network access, wireless transmission support, the wavelet transform, and embedded block coding with optimal truncation (EBCOT).

Proceedings ArticleDOI
19 May 2004
TL;DR: Simulation results showed that the embedded DCT-CSPIHT image compression reduced the computational complexity to only a quarter of that of the wavelet-based subband decomposition and improved the quality of the reconstructed medical image, in terms of both PSNR and perceptual results, over JPEG2000 and the original SPIHT at the same bit rate.
Abstract: In this paper, an 8×8 DCT approach is adopted to perform subband decomposition, followed by modified SPIHT data organization and entropy coding. The translation function has the ability to retain the detail characteristics of an image. By means of a simple transformation that gathers the DCT spectrum data belonging to the same frequency, the translation function brings the characteristics of all individual blocks into a global framework. In this scheme, insignificant DCT coefficients that correspond to the same spatial location in the high-frequency subbands can be used to reduce redundancy through a combined function used in association with the modified SPIHT. Simulation results showed that the embedded DCT-CSPIHT image compression reduced the computational complexity to only a quarter of that of the wavelet-based subband decomposition and improved the quality of the reconstructed medical image, in terms of both PSNR and perceptual results, over JPEG2000 and the original SPIHT at the same bit rate.
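The regrouping step can be pictured as gathering the (u, v) coefficient of every 8×8 DCT block into one plane, so that the 64 planes form a wavelet-like subband layout. A minimal sketch of that rearrangement follows; the DCT itself and the SPIHT coding are omitted, and the data is made up.

```python
import numpy as np

def regroup_dct_blocks(block_dct, block=8):
    """Gather coefficient (u, v) of every 8x8 DCT block into one (H/8, W/8)
    plane and tile the 64 planes into an 8x8 grid, giving a wavelet-like
    subband layout. Sketch of the regrouping step only."""
    h, w = block_dct.shape
    bh, bw = h // block, w // block
    out = np.empty_like(block_dct)
    for u in range(block):
        for v in range(block):
            out[u * bh:(u + 1) * bh, v * bw:(v + 1) * bw] = block_dct[u::block, v::block]
    return out

# Hypothetical 16x16 block-DCT domain image (2x2 blocks of 8x8 coefficients).
rng = np.random.default_rng(4)
dct_domain = rng.integers(-50, 50, size=(16, 16))
print(regroup_dct_blocks(dct_domain).shape)   # (16, 16), now grouped by frequency
```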