
Showing papers on "JPEG 2000" published in 1999


Book
20 Dec 1999
TL;DR: Covering both image and video compression, this book yields a unique, self-contained reference for practitioners to build a basis for future study, research, and development.
Abstract: Multimedia hardware still cannot accommodate the demand for large amounts of visual data. Without the generation of high-quality video bitstreams, limited hardware capabilities will continue to stifle the advancement of multimedia technologies. Thorough grounding in coding is needed so that applications such as MPEG-4 and JPEG 2000 may come to fruition. Image and Video Compression for Multimedia Engineering provides a solid, comprehensive understanding of the fundamentals and algorithms that lead to the creation of new methods for generating high-quality video bit streams. The authors present a number of relevant advances along with international standards. New to the second edition: a chapter describing the recently developed video coding standard MPEG-4 Part 10 Advanced Video Coding (also known as H.264); fundamental concepts and algorithms of JPEG 2000; color systems of digital video; and up-to-date video coding standards and profiles. Visual data, image, and video coding will continue to enable the creation of advanced hardware suited to the demands of new applications. Covering both image and video compression, this book yields a unique, self-contained reference for practitioners to build a basis for future study, research, and development.

342 citations


Book
01 Sep 1999
TL;DR: This book discusses JPEG Compression Modes, Huffman Coding in JPEG, Color Representation in PNG, the Representation of Images, and more.
Abstract: Preface. Acknowledgments. 1. Introduction. The Representation of Images. Vector and Bitmap Graphics. Color Models. True Color versus Palette. Compression. Byte and Bit Ordering. Color Quantization. A Common Image Format. Conclusion. 2. Windows BMP. Data Ordering. File Structure. Compression. Conclusion. 3. XBM. File Format. Reading and Writing XBM Files. Conclusion. 4. Introduction to JPEG. JPEG Compression Modes. What Part of JPEG Will Be Covered in This Book? What are JPEG Files? SPIFF File Format. Byte Ordering. Sampling Frequency. JPEG Operation. Interleaved and Noninterleaved Scans. Conclusion. 5. JPEG File Format. Markers. Compressed Data. Marker Types. JFIF Format. Conclusion. 6. JPEG Huffman Coding. Usage Frequencies. Huffman Coding Example. Huffman Coding Using Code Lengths. Huffman Coding in JPEG. Limiting Code Lengths. Decoding Huffman Codes. Conclusion. 7. The Discrete Cosine Transform. DCT in One Dimension. DCT in Two Dimensions. Basic Matrix Operations. Using the 2-D Forward DCT. Quantization. Zigzag Ordering. Conclusion. 8. Decoding Sequential-Mode JPEG Images. MCU Dimensions. Decoding Data Units. Decoding Example. Processing DCT Coefficients. Up-Sampling. Restart Marker Processing. Overview of JPEG Decoding. Conclusion. 9. Creating Sequential JPEG Files. Compression Parameters. Output File Structure. Doing the Encoding. Down-Sampling. Interleaving. Data Unit Encoding. Huffman Table Generation. Conclusion. 10. Optimizing the DCT. Factoring the DCT Matrix. Scaled Integer Arithmetic. Merging Quantization and the DCT. Conclusion. 11. Progressive JPEG. Component Division in Progressive JPEG. Processing Progressive JPEG Files. Processing Progressive Scans. MCUs in Progressive Scans. Huffman Tables in Progressive Scans. Data Unit Decoding. Preparing to Create Progressive JPEG Files. Encoding Progressive Scans. Huffman Coding. Data Unit Encoding. Conclusion. 12. GIF. Byte Ordering. File Structure. Interlacing. Compressed Data Format. Animated GIF. Legal Problems. Uncompressed GIF. Conclusion. 13. PNG. History. Byte Ordering. File Format. File Organization. Color Representation in PNG. Device-Independent Color. Gamma. Interlacing. Critical Chunks. Noncritical Chunks. Conclusion. 14. Decompressing PNG Image Data. Decompressing the Image Data. Huffman Coding in Deflate. Compressed Data Format. Compressed Data Blocks. Writing the Decompressed Data to the Image. Conclusion. 15. Creating PNG Files. Overview. Deflate Compression Process. Huffman Table Generation. Filtering. Conclusion. Glossary. Bibliography. Index.
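The JPEG chapters above (the DCT, quantization, and zigzag ordering) describe the core of baseline encoding. As a rough illustration only, here is a minimal Python sketch of those three steps on one 8x8 block; the flat quantization table is a placeholder, not a table from the book.

    import numpy as np

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis: row k holds a(k) * cos((2i + 1) * k * pi / (2n)).
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        c = np.cos((2 * i + 1) * k * np.pi / (2 * n))
        c[0, :] *= 1.0 / np.sqrt(2.0)
        return c * np.sqrt(2.0 / n)

    def zigzag_order(n=8):
        # (row, col) pairs in JPEG zigzag order: walk anti-diagonals, alternating direction.
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

    block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0  # level shift
    C = dct_matrix()
    coeffs = C @ block @ C.T                      # 2-D forward DCT
    qtable = np.full((8, 8), 16.0)                # placeholder quantization table
    quantized = np.round(coeffs / qtable)         # uniform quantization
    scan = [int(quantized[r, c]) for r, c in zigzag_order()]  # zigzag scan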

190 citations


Journal ArticleDOI
TL;DR: The MZTE scheme is adopted as the baseline technique for the visual texture coding profile in both the MPEG-4 video group and SNHC group and provides much improved compression efficiency and fine-gradual scalabilities, which are ideal for hybrid coding of texture maps and natural images.
Abstract: This paper describes the texture representation scheme adopted for MPEG-4 synthetic/natural hybrid coding (SNHC) of texture maps and images. The scheme is based on the concept of the multiscale zerotree wavelet entropy (MZTE) coding technique, which provides many levels of scalability layers in terms of either spatial resolution or picture quality. MZTE, with three different modes (single-Q, multi-Q, and bilevel), provides much improved compression efficiency and fine-gradual scalabilities, which are ideal for hybrid coding of texture maps and natural images. The MZTE scheme is adopted as the baseline technique for the visual texture coding profile in both the MPEG-4 video group and the SNHC group. The test results are presented in comparison with those coded by the baseline JPEG scheme for different types of input images. MZTE was also rated as one of the top five schemes in terms of compression efficiency in the JPEG2000 November 1997 evaluation, among 27 submitted proposals.

87 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: Some of the opportunities offered by the framework of lifting for developing adaptive wavelet transforms to improve performance in the neighbourhood of oriented edges and with artificial imagery such as text and graphics are explored.
Abstract: In the context of high performance image compression algorithms, such as that emerging as the JPEG 2000 standard, the wavelet transform has demonstrated excellent compression performance with natural images. As with all waveform coding techniques, however, performance suffers in the neighbourhood of oriented edges and with artificial imagery such as text and graphics. In this paper, we explore some of the opportunities offered by the framework of lifting for developing adaptive wavelet transforms to improve performance under these conditions.
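For reference, the lifting framework mentioned above factors a wavelet filter bank into predict and update steps. The sketch below (our own illustration, not code from the paper) shows one level of the fixed LeGall 5/3 wavelet written this way; the paper's adaptive transforms would replace the fixed predict step near oriented edges.

    import numpy as np

    def lifting_53_forward(x):
        # One level of the LeGall 5/3 wavelet as a lifting pair (predict + update)
        # on an even-length 1-D signal, with symmetric extension at the borders.
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2].copy(), x[1::2].copy()
        # Predict: subtract from each odd sample the average of its even neighbours.
        ext = np.append(even, even[-1])
        odd -= 0.5 * (ext[:len(odd)] + ext[1:len(odd) + 1])       # detail band
        # Update: add a quarter of the neighbouring details to each even sample.
        dxt = np.insert(odd, 0, odd[0])
        even += 0.25 * (dxt[:len(even)] + dxt[1:len(even) + 1])   # approximation band
        return even, odd

    lowpass, highpass = lifting_53_forward(np.arange(16, dtype=float))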

71 citations


Proceedings ArticleDOI
07 Jun 1999
TL;DR: This paper reviews the exact goals of this future standard, the applications it addresses, and the current standardisation process, and shows what kind of results are already reachable with the current status of the standard.
Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new functionality. To address this need in the specific area of still image encoding, a new standard is currently being designed: JPEG2000. We describe the exact goals of this future standard, the applications it addresses, and the current standardisation process. Then we show, through the descriptions of two available demonstrations, what kind of results are already reachable with the current status of the standard. The first demonstration describes a Java implementation of the future standard, details the advantages of such an implementation, and compares the performance of JPEG2000 with that of JPEG. The second demonstration describes how JPEG2000 can be used in domains where the transmission bandwidth is very restricted, taking advantage of new functions such as the definition of regions of interest and progressive transmission.

69 citations


Journal ArticleDOI
TL;DR: Coding techniques that enable progressive transmission when trellis coded quantization (TCQ) is applied to wavelet coefficients are presented and a method for approximately inverting TCQ in the absence of least significant bits is developed.
Abstract: In this work, we present coding techniques that enable progressive transmission when trellis coded quantization (TCQ) is applied to wavelet coefficients. A method for approximately inverting TCQ in the absence of least significant bits is developed. Results are presented using different rate allocation strategies and different entropy coders. The proposed wavelet-TCQ coder yields excellent coding efficiency while supporting progressive modes analogous to those available in JPEG.

46 citations


Journal ArticleDOI
TL;DR: This study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG, and verifies other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG.
Abstract: This presentation focuses on the quantitative comparison of three lossy compression methods applied to a variety of 12-bit medical images. One Joint Photographic Experts Group (JPEG) algorithm and two wavelet algorithms were used on a population of 60 images. The medical images were obtained in Digital Imaging and Communications in Medicine (DICOM) file format and ranged in matrix size from 256 × 256 (magnetic resonance [MR]) to 2,560 × 2,048 (computed radiography [CR], digital radiography [DR], etc.). The algorithms were applied to each image at multiple levels of compression such that comparable compressed file sizes were obtained at each level. Each compressed image was then decompressed and quantitative analysis was performed to compare each compressed-then-decompressed image with its corresponding original image. The statistical measures computed were sum of absolute differences, sum of squared differences, and peak signal-to-noise ratio (PSNR). Our results verify other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG. The DICOM standard does not yet include wavelet as a recognized lossy compression standard. For implementers and users to adopt wavelet technology as part of their image management and communication installations, there have to be significant differences in quality and compressibility compared with JPEG to justify expensive software licenses and the introduction of proprietary elements in the standard. Our study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG.
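The three fidelity measures used in the study are simple to compute. Below is a short Python sketch; the 12-bit peak value of 4095 and the synthetic test data are our assumptions for illustration.

    import numpy as np

    def fidelity_metrics(original, decompressed, peak=4095.0):
        # Compare a compressed-then-decompressed image against its original.
        o = original.astype(np.float64)
        d = decompressed.astype(np.float64)
        diff = o - d
        sad = np.abs(diff).sum()                  # sum of absolute differences
        ssd = (diff ** 2).sum()                   # sum of squared differences
        mse = ssd / diff.size
        psnr = float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
        return sad, ssd, psnr

    # Example with synthetic 12-bit data:
    rng = np.random.default_rng(0)
    img = rng.integers(0, 4096, (256, 256))
    noisy = np.clip(img + rng.normal(0, 4, img.shape), 0, 4095)
    print(fidelity_metrics(img, noisy))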

33 citations


Journal ArticleDOI
TL;DR: This work presents a rate-distortion (RD) optimized JPEG compliant progressive encoder that produces a sequence of scans, ordered in terms of decreasing importance, and can achieve precise rate/distortion control.
Abstract: Among the many different modes of operations allowed in the current JPEG standard, the sequential and progressive modes are the most widely used. While the sequential JPEG mode yields essentially the same level of compression performance for most encoder implementations, the performance of progressive JPEG depends highly upon the designed encoder structure. This is due to the flexibility the standard leaves open in designing progressive JPEG encoders. In this work, a rate-distortion (RD) optimized JPEG compliant progressive encoder is presented that produces a sequence of scans, ordered in terms of decreasing importance. Our encoder outperforms an optimized sequential JPEG encoder in terms of compression efficiency, substantially at low and high bit rates. Moreover, unlike existing JPEG compliant encoders, our encoder can achieve precise rate/distortion control. Substantially better compression performance and precise rate control, provided by our progressive JPEG compliant encoding algorithm, are two highly desired features currently sought for the emerging JPEG-2000 standard.
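The ordering principle can be illustrated with a toy example: rank candidate progressive scans by distortion reduction per bit and transmit the steepest first. The scan names and numbers below are hypothetical, not measurements from the paper.

    # Hypothetical candidate scans: each entry lists the bits it would cost and
    # the distortion it would remove. Values are illustrative only.
    candidate_scans = [
        {'name': 'DC coefficients',           'bits': 12000, 'distortion_drop': 9000.0},
        {'name': 'AC 1-5, most significant',  'bits': 30000, 'distortion_drop': 14000.0},
        {'name': 'AC 6-63, most significant', 'bits': 52000, 'distortion_drop': 9500.0},
        {'name': 'refinement pass',           'bits': 41000, 'distortion_drop': 5200.0},
    ]
    # Transmit scans in order of decreasing rate-distortion slope.
    ordered = sorted(candidate_scans,
                     key=lambda s: s['distortion_drop'] / s['bits'],
                     reverse=True)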

23 citations


Journal ArticleDOI
TL;DR: Experimental results suggest that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel when it is used with bilinear interpolation and either error diffusion or ordered dithering.
Abstract: We describe a procedure by which Joint Photographic Experts Group (JPEG) compression may be customized for gray-scale images that are to be compressed before they are scaled, halftoned, and printed. Our technique maintains 100% compatibility with the JPEG standard, and is applicable with all scaling and halftoning methods. The JPEG quantization table is designed using frequency-domain characteristics of the scaling and halftoning operations, as well as the frequency sensitivity of the human visual system. In addition, the Huffman tables are optimized for low-rate coding. Compression artifacts are significantly reduced because they are masked by the halftoning patterns, and pushed into frequency bands where the eye is less sensitive. We describe how the frequency-domain effects of scaling and halftoning may be measured, and how to account for those effects in an iterative design procedure for the JPEG quantization table. We also present experimental results suggesting that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel (with reference to the number of pixels in the original image) when it is used with bilinear interpolation and either error diffusion or ordered dithering. Based on these results, we believe that in terms of the achieved bit rate, the performance of our encoder is typically at least 20% better than that of a JPEG encoder using the suggested baseline tables.
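As a rough illustration of the design idea, the sketch below biases a quantization table by a frequency-dependent weighting so that coarser quantization lands where sensitivity is lower. The base table and the exponential weighting are placeholders, not the values derived from the scaling, halftoning, and HVS models in the paper.

    import numpy as np

    base_table = np.full((8, 8), 16.0)              # placeholder luminance table
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    radial_freq = np.sqrt(u ** 2 + v ** 2)          # crude radial DCT frequency
    sensitivity = np.exp(-0.25 * radial_freq)       # hypothetical frequency weighting
    # Coarser quantization where sensitivity is low, finer at low frequencies.
    custom_table = np.clip(np.round(base_table / sensitivity), 1, 255)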

22 citations


Proceedings ArticleDOI
15 Mar 1999
TL;DR: This paper takes advantage of perceptual classification to improve the performance of the standard JPEG implementation via adaptive thresholding, while being compatible with the baseline standard.
Abstract: We propose a new technique for transform coding based on rate-distortion (RD) optimized thresholding (i.e. discarding) of wasteful coefficients. The novelty in this proposed algorithm is that the distortion measure is made adaptive. We apply the method to the compression of mixed documents (containing text, natural images, and graphics) using JPEG for printing. Although the human visual system's response to compression artifacts varies depending on the region, JPEG applies the same coding algorithm throughout the mixed document. This paper takes advantage of perceptual classification to improve the performance of the standard JPEG implementation via adaptive thresholding, while being compatible with the baseline standard. A computationally efficient classification algorithm is presented, and the improved performance of the classified JPEG coder is verified. Tests demonstrate the method's efficiency compared to regular JPEG and to JPEG using non-adaptive thresholding. The non-stationary nature of distortion perception is true for most signal classes and the same concept can be used elsewhere.
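A hedged sketch of the thresholding decision follows: a quantized AC coefficient is dropped when the perceptually weighted distortion it would add is smaller than a Lagrangian multiplier times the rate it saves. The per-block weight and the flat 1.5-bit rate-saving estimate are illustrative assumptions, not the paper's measured quantities.

    import numpy as np

    def threshold_block(qcoeffs, qtable, weight, lam, bits_saved=1.5):
        # Zero out a quantized AC coefficient when the weighted distortion it
        # would add is smaller than lambda times the rate it saves.
        out = qcoeffs.copy()
        for r, c in zip(*np.nonzero(qcoeffs)):
            if r == 0 and c == 0:                  # always keep the DC coefficient
                continue
            d_increase = weight * (qcoeffs[r, c] * qtable[r, c]) ** 2
            if d_increase < lam * bits_saved:
                out[r, c] = 0
        return out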

17 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: It is demonstrated that the proposed algorithms reduce compression artifacts efficiently with low computational complexity and can be applied to different compression schemes with minor fine-tuning.
Abstract: Low complexity postprocessing algorithms to reduce compression artifacts by using a robust nonlinear filtering approach and a table lookup method are investigated in this research. We first formulate the compression artifact reduction problem as a robust estimation problem. Under this framework, an enhanced image can be obtained by minimizing a cost function that accounts for the image smoothness as well as the image fidelity constraints. However, unlike traditional methods that adopt a gradient descent method to search for the optimal solution, we determine the approximate solution via the evaluation of a set of nonlinear cost functions. This nonlinear filtering process is performed to reduce the computational complexity of the postprocessing operation so that it can be implemented in real time. In the case of video postprocessing, a table lookup method is adopted to further reduce the complexity. The proposed approach is generic and flexible. It can be applied to different compression schemes with minor fine-tuning. We have tested the developed algorithm on several videos and images compressed by the JPEG 2000 VM and H.263+. It has been demonstrated that the proposed method can reduce compression artifacts efficiently with a low computational complexity.

Proceedings ArticleDOI
Bart Vanhoof, M. Peon, Gauthier Lafruit, Jan Bormans, Marc Engels, Ivo Bolsens
16 May 1999
TL;DR: The OZONE chip, a dedicated hardware solution for an EZT coding coprocessor, is presented; it performs visually lossless compression of more than 30 CIF images per second, and parallel operation of multiple OZONEs is supported.
Abstract: Wavelet-based image compression has been adopted in emerging standards such as MPEG-4 and JPEG2000. An embedded zero tree (EZT) coding scheme enables the compression and the quantization of the wavelet coefficients. This paper presents the OZONE chip, a dedicated hardware solution for an EZT coding coprocessor. Realized in a 0.5 /spl mu/m CMOS technology, the OZONE performs visual-lossless compression of more than 30 CIF images per second. Due to its new scalable architecture, parallel operation of multiple OZONEs is supported.

Journal ArticleDOI
TL;DR: This article describes technology for JQT design that takes a pattern recognition approach to the problem, using a database of images to train statistical models of the artifacts introduced through JPEG compression, and uses a model of human visual perception as an error measure.
Abstract: A JPEG Quality Transcoder (JQT) converts a JPEG image file that was encoded with low image quality to a larger JPEG image file with reduced visual artifacts, without access to the original uncompressed image. In this article, we describe technology for JQT design that takes a pattern recognition approach to the problem, using a database of images to train statistical models of the artifacts introduced through JPEG compression. In the training procedure for these models, we use a model of human visual perception as an error measure. Our current prototype system removes 32.2% of the artifacts introduced by moderate compression, as measured on an independent test database of linearly coded images using a perceptual error metric. This improvement results in an average PSNR reduction of 0.634 dB.

Proceedings ArticleDOI
19 May 1999
TL;DR: A focused procedure based upon a collection of image processing algorithms that identify regions of interest (ROIs) over a digital image is developed and bundled with JPEG, so that the result of the compression can be formatted into a file compatible with standard JPEG decoding.
Abstract: We have developed a focused procedure based upon a collection of image processing algorithms that serve to identify regions of interest (ROIs) over a digital image. The loci of these ROIs are quantitatively compared with ROIs identified by human eye fixations or glimpses while subjects were looking at the same digital images. The focused procedure is applied to adjust and adapt the compression ratio over a digital image: high resolution and weak compression for the ROIs; low resolution and strong compression for the major expanse of the entire image. In this way, an overall high compression ratio can be achieved while at the same time preserving important visual information within particularly relevant regions of the image. We have bundled the focused procedure with JPEG, so that the result of the compression can be formatted into a file compatible with standard JPEG decoding. Thus, once the image has been compressed, it can be read without difficulty.

Proceedings ArticleDOI
K. Hamamoto
05 Sep 1999
TL;DR: This paper attempts to standardize the JPEG quantization table for medical ultrasonic echo images by a statistical method; results reveal that the proposed method achieves a lower bit rate than the JPEG standard at the same image quality.
Abstract: Storing digital medical images is standardized by the DICOM report, which permits lossy compression of pulse-echo ultrasonic images with a JPEG baseline system. The purpose of this paper is to reduce the data volume and to achieve a low bit rate in the digital representation of pulse-echo ultrasonic images without perceived loss of image quality. In image compression with a JPEG baseline system, it is possible to control the compression ratio and image quality by controlling the quantization values. This paper attempts to standardize the JPEG quantization table for medical ultrasonic echo images by a statistical method. Results reveal that the proposed method achieves a lower bit rate than the JPEG standard at the same image quality.

Proceedings ArticleDOI
18 Oct 1999
TL;DR: With the proposed integrated method, watermark embedding and retrieval processes can be done very efficiently compared with existing watermarking schemes.
Abstract: In this research, we propose an approach to combine the image compression and the image watermarking schemes in an effective way. The image coding scheme under our consideration is EBCOT (Embedded Block Coding with Optimized Truncation) which is the basis of JPEG2000 VM (Verification Model). The watermark is embedded when the compressed bit-stream is formed, and can be detected on the fly during image decompression. With the proposed integrated method, watermark embedding and retrieval processes can be done very efficiently compared with existing watermarking schemes. The embedded watermark is robust against various signal processing attacks including compression and filtering while the resulting watermarked image maintains good perceptual quality. Furthermore, the watermark can be detected progressively and ROI (Region of Interest)-based watermarking can be easily accomplished.

Proceedings ArticleDOI
01 Jan 1999
TL;DR: The main idea in SPEAR is to have a number of European experts work together on designing the new standard for coding and compression of still pictures, JPEG 2000, due to become a standard by the end of the year 2000.
Abstract: The main idea in SPEAR is to have a number of European experts work together on designing the new standard for coding and compression of still pictures, JPEG 2000. JPEG stands for Joint Photographic Experts Group, the ISO working group referenced as ISO/IEC JTC1 SC29 WG1. JPEG 2000 is referenced as ISO 15444. It is due to become a standard by the end of the year 2000, with implementations in a large number of domains, thanks to its concept of "Open Standard" which gives a new dimension in terms of flexibility and functionality.

Proceedings ArticleDOI
26 Oct 1999
TL;DR: This paper discusses a low memory image coding algorithm that employs a line-based transform, a technique to exploit the sparseness of non-zero wavelet coefficients in a software-only image decoder, and parallel implementation techniques that take full advantage of lifting filterbank factorizations.
Abstract: The discrete wavelet transform (DWT) has been touted as a very effective tool in many signal processing applications, including compression, denoising and modulation. For example, the forthcoming JPEG 2000 image compression standard will be based on the DWT. However, in order for the DWT to achieve the popularity of other more established techniques (e.g., the DCT in compression), a substantial effort is necessary in order to solve some of the related implementation issues. Specific issues of interest include memory utilization, computation complexity and scalability. In this paper we concentrate on wavelet-based image compression and provide examples, based on our recent work, of how these implementation issues can be addressed in three different environments, namely, memory-constrained applications, software-only encoding/decoding, and parallel computing engines. Specifically we will discuss (1) a low-memory image coding algorithm that employs a line-based transform, (2) a technique to exploit the sparseness of non-zero wavelet coefficients in a software-only image decoder, and (3) parallel implementation techniques that take full advantage of lifting filterbank factorizations.

Book ChapterDOI
TL;DR: A new method is proposed that actively uses the JPEG quality level as a parameter in embedding a watermark into an image and can be extracted even when the image is compressed using JPEG.
Abstract: Digital watermarking has been considered as an important technique to protect the copyright of digital content. For a digital watermarking method to be effective, it is essential that a watermark embedded in a still or moving image resists against various attacks ranging from compression, filtering to cropping. As JPEG is a dominant still image compression standard for Internet applications, digital watermarking methods that are robust against the JPEG compression are especially useful. Most digital watermarking methods proposed so far work by modulating pixels/coefficients without considering the quality level of JPEG, which renders watermarks readily removable. In this paper, we propose a new method that actively uses the JPEG quality level as a parameter in embedding a watermark into an image. A useful feature of the new method is that the watermark can be extracted even when the image is compressed using JPEG.

Journal ArticleDOI
TL;DR: The deterioration component of a color image is analyzed and a quantization table using Fibonacci numbers is proposed, on the basis of an analysis method using the 2-D FFT (Fast Fourier Transform) that can capture the change in a color image caused by a quantization table change.
Abstract: The DCT (Discrete Cosine Transform) based coding process for full-color images is standardized by the JPEG (Joint Photographic Experts Group). The JPEG method is applied widely, for example in color facsimile. The quantization table in JPEG coding influences image quality, but it has not yet been studied in sufficient detail. Therefore, we study the relation between the quantization table and image quality. We first examine the influence of the quantization table on image quality: the table is grouped into four frequency bands, and the merits and demerits for the color image are examined as the values in each band are changed. We then analyze the deterioration component of the color image and study the relationship between the quantization table and the restored image. Since a color image is composed of continuous-tone levels, we evaluate the deterioration component both visually and numerically. An analysis method using the 2-D FFT (Fast Fourier Transform) can capture the change in the color image data caused by a quantization table change. On the basis of these results, we propose a quantization table using Fibonacci numbers.
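As a purely hypothetical illustration of the idea, the snippet below fills an 8x8 quantization table with Fibonacci numbers, one value per anti-diagonal frequency band; the band grouping and assignment are our assumptions, not the table proposed in the paper.

    import numpy as np

    # Larger Fibonacci values go to higher-frequency bands, so high frequencies
    # are quantized more coarsely. Illustrative only.
    fib = np.array([1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610])
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    band = u + v                      # 15 anti-diagonal frequency bands (0..14)
    qtable = fib[band]                # 8x8 Fibonacci-valued quantization table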

Proceedings ArticleDOI
29 Mar 1999
TL;DR: This paper focuses on the operational aspect of high-order statistical context modeling, and introduces some fast algorithm techniques that can drastically reduce both time and space complexities of high-order context modeling in the wavelet domain.
Abstract: In the past three or so years, particularly during the JPEG 2000 standardization process that was launched last year, statistical context modeling of embedded wavelet bit streams has received a lot of attention from the image compression community. High-order context modeling has been proven to be indispensable for high rate-distortion performance of wavelet image coders. However, if care is not taken in algorithm design and implementation, the formation of high-order modeling contexts can be both CPU and memory greedy, creating a computation bottleneck for wavelet coding systems. In this paper we focus on the operational aspect of high-order statistical context modeling, and introduce some fast algorithm techniques that can drastically reduce both time and space complexities of high-order context modeling in the wavelet domain.
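To make "high-order context" concrete, the sketch below forms a context index from the significance states of a coefficient's eight spatial neighbours; the packing is an illustrative choice, not the context-formation rule of any particular coder, and a real implementation would also fold in parent-band state.

    import numpy as np

    def context_index(significance, r, c):
        # Pack the significance of the 8 spatial neighbours into one integer,
        # giving up to 256 distinct high-order modeling contexts.
        h, w = significance.shape
        ctx = 0
        for bit, (dr, dc) in enumerate([(-1, -1), (-1, 0), (-1, 1), (0, -1),
                                        (0, 1), (1, -1), (1, 0), (1, 1)]):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and significance[rr, cc]:
                ctx |= 1 << bit
        return ctx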

Book ChapterDOI
01 Sep 1999
TL;DR: The results show the relationship between the subjective National Imagery Interpretability Rating Scale (NIIRS) and several numerical image quality measures and an insight is provided into factors influencing NIIRS evaluation.
Abstract: The introduction of image distortion during compression is of widespread concern, to the extent that the nature and size of the distortion may influence the choice of codec. The ability to quantify the distortion for particular applications is therefore highly desirable, particularly when options for new compression standards (such as JPEG 2000) are being considered. We report on the performance of several degradation measures that have been evaluated on optical imagery having previously undergone compression by the JPEG, wavelet and VQ codecs. The results show the relationship between the subjective National Imagery Interpretability Rating Scale (NIIRS) and several numerical image quality measures. An insight is provided into factors influencing NIIRS evaluation.


Journal ArticleDOI
TL;DR: A neural network-based technique to compress multispectral SPOT satellite images losslessly that harnesses the pattern recognition property of one-hidden-layer back propagation neural networks to exploit both the spatial and the spectral redundancy of the three-band SPOT images.
Abstract: This paper describes a neural network-based technique to compress multispectral SPOT satellite images losslessly. The technique harnesses the pattern recognition property of one-hidden-layer back propagation neural networks to exploit both the spatial and the spectral redundancy of the three-band SPOT images. The networks are initially trained on samples of the SPOT images with a unique network for each of the bands. The resultant trained nonlinear predictors are then used to predict the target SPOT images. Prediction errors are entropy-coded using multi-symbol arithmetic coding. This technique achieves compression ratios of 2.1 times and 3.2 times for urban and rural SPOT images respectively, which is more than 10% better than lossless JPEG compression. In comparison with JPEG2000 lossless compression, the proposed technique is 5% better.
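The predict-then-code structure can be sketched as follows, with a generic stand-in predictor in place of the paper's trained one-hidden-layer network and with only a spatial (not spectral) causal neighbourhood; the arithmetic coder applied to the residuals is omitted.

    import numpy as np

    def residual_image(band, predictor):
        # Predict each pixel from a small causal neighbourhood and keep the
        # integer prediction error; only these residuals would be entropy-coded.
        # The first row and column are left unpredicted in this sketch.
        h, w = band.shape
        residuals = np.zeros((h, w), dtype=np.int32)
        for r in range(1, h):
            for c in range(1, w):
                neighbours = np.array([band[r, c - 1], band[r - 1, c],
                                       band[r - 1, c - 1]], dtype=float)
                residuals[r, c] = int(band[r, c]) - int(round(predictor(neighbours)))
        return residuals

    # Trivial stand-in predictor (the paper trains a neural network instead):
    res = residual_image(np.random.randint(0, 256, (64, 64)), lambda n: n.mean())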

Proceedings ArticleDOI
Jie Liang
15 Mar 1999
TL;DR: The predictive embedded zerotree wavelet (PEZW) codec is introduced, an image coder that achieves good coding efficiency and versatile functionality with limited complexity requirement and is currently a proposal to the evolving JPEG2000 standard.
Abstract: We introduce the predictive embedded zerotree wavelet (PEZW) codec, an image coder that achieves good coding efficiency and versatile functionality with limited complexity requirement. Our complexity analysis showed that the memory requirement of this coder is less than 15 k bytes regardless of image sizes. Our simulation results also showed that the coding efficiency of this low complexity coder is competitive with the state of the art of wavelet coders that use whole image buffers. The PEZW coder described has been adopted in MPEG4 as its still texture coding tool and is currently a proposal to the evolving JPEG2000 standard.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: The hybrid compression method presented can remarkably reduce mosquito distortion around the contours of characters or graphics and provides good performance for compound images compared with JPEG.
Abstract: This paper presents a hybrid compression method for a compound image including characters, graphics, and photos. Such images are popular as Web content or TV game images. JPEG is usually used to compress them, but DCT-based compression like JPEG causes strong mosquito distortion around the contours of characters or graphics because of their strong edge intensity. Therefore a high-quality, low-bit-rate coding scheme is needed for compound images. Our hybrid compression method is based on region separation and adaptive coding. The method separates synthetic image regions from natural image regions according to their features, i.e., edge intensity and color deviation. It then encodes synthetic regions by run-length coding and natural regions by JPEG, from the viewpoint of visual quality and coding efficiency. Our hybrid compression method provides good performance for compound images compared with JPEG, and can remarkably reduce mosquito distortion around the contours of characters or graphics. Experimental results show +5 to +10 dB improvement in SNR over JPEG.
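A hedged sketch of the block classification step is given below; the gradient measure, distinct-level count, and thresholds are illustrative stand-ins for the paper's edge-intensity and color-deviation features.

    import numpy as np

    def classify_block(block, edge_thresh=40.0, max_levels=8):
        # Label a block "synthetic" (text/graphics) when it has strong edges and
        # only a few distinct levels, otherwise "natural" (continuous tone).
        gy, gx = np.gradient(block.astype(float))
        edge_intensity = np.abs(gx).mean() + np.abs(gy).mean()
        distinct_levels = len(np.unique(block))
        if edge_intensity > edge_thresh and distinct_levels <= max_levels:
            return 'synthetic'    # route to run-length coding
        return 'natural'          # route to JPEG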

Proceedings ArticleDOI
24 Oct 1999
TL;DR: This paper presents an overview of these two emerging standards and highlights the differences in terms of coding scheme, performance, scope, functionalities, and applications.
Abstract: Two major emerging standards, MPEG-4 and JPEG-2000, have recently adopted wavelet-based coding as the basic framework for coding still images and textures. Although both MPEG-4 and JPEG-2000 are based on wavelet coding, they are different. This paper presents an overview of these two emerging standards and highlights the differences in terms of coding scheme, performance, scope, functionalities, and applications. Both qualitative and quantitative comparisons are provided.

Patent
Ricardo L. de Queiroz
10 Dec 1999
TL;DR: In this article, a method for compressing digital image data to improve the efficiency of serial data transmission is described, which accomplishes image compression by performing the most complex portions of a standard compression technique on a subset of the originally provided data utilizing a modified two-dimensional discrete cosine transform.
Abstract: A method for compressing digital image data to improve the efficiency of serial data transmission is disclosed. More specifically, the present invention accomplishes image compression by performing the most complex portions of a standard compression technique on a subset of the originally provided data, utilizing a modified two-dimensional discrete cosine transform. The invention includes a fast JPEG compressor using a Haar transform with a conditional transform.

Journal ArticleDOI
TL;DR: A Hamiltonian algorithm is applied to optimize JPEG quantization tables to reduce the data volume and to achieve a low bit rate in the digital representation of pulse-echo ultrasonic images without a perceived loss in image quality.
Abstract: Storing digital medical images is standardized by the Digital Imaging and Communications in Medicine (DICOM) report. Lossy pulse-echo ultrasonic image compression by a Joint Photographic Experts Group (JPEG) baseline system is permitted by it. Although significant compression is achievable by lossy algorithms, they do not permit the exact recovery of the original image. The objective of this study is to reduce the data volume and to achieve a low bit rate in the digital representation of pulse-echo ultrasonic images without a perceived loss in image quality. In image compression with a JPEG baseline system, it is possible to control the compression ratio and image quality by controlling the quantization values. In this paper, we apply the Hamiltonian algorithm to optimize JPEG quantization tables. We construct an evaluation function involving the compression ratio and image quality. Results reveal that it is possible to optimize these quantization values by the Hamiltonian algorithm for lossy pulse-echo ultrasonic image compression.

Proceedings ArticleDOI
22 Mar 1999
TL;DR: This study describes a detailed study of the effect of finite precision computation on wavelet-based image compression, and it uses trellis-coded quantization on the wavelet coefficients, followed by bit-plane coding.
Abstract: This study describes a detailed study of the effect of finite precision computation on wavelet-based image compression. Specifically, we examine how the quality of the final decoded image is affected by various choices that a hardware designed will have to make, such as choice of wavelet (integer or real), fixed-point attributes (number of integer and fractional bits), and compression. The algorithm studied here is that adopted by the JPEG 2000 committee, and it uses trellis-coded quantization on the wavelet coefficients, followed by bit-plane coding.