
Showing papers on "JPEG 2000" published in 1995


Proceedings ArticleDOI
Marc Levoy1
15 Sep 1995
TL;DR: A software simulation using JPEG and MPEG-1 compression, with results for a variety of scenes, shows that polygon-assisted compression looks subjectively better for a fixed network bandwidth than compressing and sending the high-quality rendering.
Abstract: Recent advances in realtime image compression and decompression hardware make it possible for a high-performance graphics engine to operate as a rendering server in a networked environment. If the client is a low-end workstation or set-top box, then the rendering task can be split across the two devices. In this paper, we explore one strategy for doing this. For each frame, the server generates a high-quality rendering and a low-quality rendering, subtracts the two, and sends the difference in compressed form. The client generates a matching low quality rendering, adds the decompressed difference image, and displays the composite. Within this paradigm, there is wide latitude to choose what constitutes a high-quality versus low-quality rendering. We have experimented with textured versus untextured surfaces, fine versus coarse tessellation of curved surfaces, Phong versus Gouraud interpolated shading, and antialiased versus nonantialiased edges. In all cases, our polygon-assisted compression looks subjectively better for a fixed network bandwidth than compressing and sending the high-quality rendering. We describe a software simulation that uses JPEG and MPEG-1 compression, and we show results for a variety of scenes. CR Categories: I.4.2 [Computer Graphics]: Compression — Approximate methods I.3.2 [Computer Graphics]: Graphics Systems — Distributed/network graphics Additional keywords: client-server graphics, JPEG, MPEG, polygon-assisted compression
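The server/client split described above can be summarized in a few lines. The sketch below is a minimal Python/NumPy rendition of the difference-image pipeline, assuming the two renderings are available as 8-bit arrays; jpeg_like is a hypothetical stand-in for the JPEG/MPEG-1 codec used in the paper's simulation, not the authors' implementation.

    import numpy as np

    def jpeg_like(img):
        # Stand-in for the JPEG/MPEG-1 codec of the simulation (hypothetical:
        # identity here, since the point is the surrounding protocol).
        return img

    def server_frame(render_high, render_low):
        # Server: render both qualities, transmit only the compressed difference.
        diff = render_high.astype(np.int16) - render_low.astype(np.int16)
        return jpeg_like(diff + 128)          # center the signed difference before coding

    def client_frame(render_low, coded_diff):
        # Client: regenerate the low-quality frame, add the decoded difference.
        diff = coded_diff.astype(np.int16) - 128
        composite = render_low.astype(np.int16) + diff
        return np.clip(composite, 0, 255).astype(np.uint8)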

142 citations


Journal ArticleDOI
TL;DR: The paper describes all the components of the JPEG algorithm, including the discrete cosine transform, quantization, and entropy encoding, and covers both encoder and decoder architectures.
Abstract: This paper is the first part of a comprehensive survey of compression techniques and standards for multimedia applications. It covers the JPEG compression algorithm which is primarily used for full-color still image applications. The paper describes all the components of the JPEG algorithm including discrete cosine transform, quantization, and entropy encoding. It also describes both encoder and decoder architectures. The main emphasis is given to the sequential mode of operation which is the most typical use of JPEG compression; however, the other three modes of operation, progressive, lossless, and hierarchical JPEG, are described as well. Experimental data for both grayscale and color image compression are provided in the paper.
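As a concrete illustration of the sequential path the survey covers, the sketch below (Python/NumPy, not from the paper) runs one 8x8 block through the level shift, forward DCT, and uniform quantization using the example luminance table from Annex K of the JPEG standard; entropy coding is omitted.

    import numpy as np

    # Example luminance quantization table from Annex K of the JPEG standard.
    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99]])

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis; C @ block @ C.T is the 2-D forward transform.
        k = np.arange(n)
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0, :] = np.sqrt(1.0 / n)
        return C

    def encode_block(block, Q=Q_LUMA):
        # Level shift, forward DCT, uniform quantization (entropy coding omitted).
        C = dct_matrix()
        coeff = C @ (block.astype(np.float64) - 128.0) @ C.T
        return np.round(coeff / Q).astype(np.int32)

    def decode_block(qcoeff, Q=Q_LUMA):
        # Dequantize, inverse DCT, undo the level shift.
        C = dct_matrix()
        pixels = C.T @ (qcoeff * Q).astype(np.float64) @ C + 128.0
        return np.clip(np.round(pixels), 0, 255).astype(np.uint8)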

55 citations


Proceedings ArticleDOI
09 May 1995
TL;DR: This paper introduces a novel, image-adaptive encoding scheme for the baseline JPEG standard. In particular, coefficient thresholding, JPEG quantization matrix (Q-matrix) optimization, and adaptive Huffman entropy coding are jointly performed to maximize coded still-image quality within the constraints of the baseline JPEG syntax.
Abstract: This paper introduces a novel, image-adaptive, encoding scheme for the baseline JPEG standard. In particular, coefficient thresholding, JPEG quantization matrix (Q-matrix) optimization, and adaptive Huffman entropy-coding are jointly performed to maximize coded still-image quality within the constraints of the baseline JPEG syntax. Adaptive JPEG coding has been addressed in earlier works: by Ramchandran and Vetterli (see IEEE Trans. on Image Processing, Special Issue on Image Compression, vol.3, p.700-704, September 1994), where fast rate-distortion (R-D) optimal coefficient thresholding was described, and by Wu and Gersho (see Proc. Inter. Conf. Acoustics, Speech and Signal Processing, vol.5, p.389-392, April 1993) and Hung and Meng (1991), where R-D optimized Q-matrix selection was performed. By formulating an algorithm which optimizes these two operations jointly, we have obtained performance comparable to more complex, "state-of-the-art" coding schemes: for the "Lenna" image at 1 bpp, our algorithm has achieved a PSNR of 39.6 dB. This result represents a gain of 1.7 dB over JPEG with a customized Huffman entropy coder, and even slightly exceeds the published performance of Shapiro's (see IEEE Trans. on Signal Processing, vol.41, p.3445-3462, December 1993) wavelet-based scheme. Furthermore, with the choice of appropriate visually-based error metrics, noticeable subjective improvement has been achieved as well.
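The thresholding component of such a scheme can be caricatured in a few lines: a nonzero quantized coefficient is dropped when the distortion it would add is outweighed by the rate it saves, weighted by a Lagrange multiplier. The sketch below (Python/NumPy) uses a crude bit-cost proxy and a greedy per-coefficient decision; the paper's algorithm optimizes thresholding, the Q-matrix, and the Huffman tables jointly, which this does not attempt.

    import numpy as np

    def threshold_block(qcoeff, Q, lam):
        # Drop a quantized coefficient when the squared error it would introduce
        # is smaller than lam times a rough estimate of the bits it costs to code.
        out = qcoeff.copy()
        for idx in zip(*np.nonzero(qcoeff)):
            level = int(qcoeff[idx])
            distortion_added = (level * Q[idx]) ** 2      # error if the coefficient is zeroed
            bits_estimate = abs(level).bit_length() + 4   # size category + run-length overhead proxy
            if distortion_added < lam * bits_estimate:
                out[idx] = 0
        return out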

43 citations


Journal Article
TL;DR: The JPEG method itself and its suitability for photogrammetric work are studied, with special attention being paid to the geometric degradation of digital images due to the compression process.
Abstract: Image compression is a necessity for the utilization of large digital images, e.g., digitized aerial color images. The JPEG still-picture compression algorithm is one alternative for carrying out the image compression task. The JPEG method itself and its suitability for photogrammetric work are studied, with special attention being paid to the geometric degradation of digital images due to the compression process. In our experience, the JPEG algorithm seems to be a good choice for image compression. For color images, it gives a compression ratio of about 1:10 without considerable degradation in the visual or geometric quality of the image.

25 citations


Patent
07 Jun 1995
TL;DR: In this article, the authors present a storage and retrieval system for JPEG images, which includes an apparatus for storing a JPEG image in or on a storage medium with unequal error protection, comprising a separator for separating the JPEG image into Type-I and Type-II information.
Abstract: The present invention is a storage and retrieval system for JPEG images. In one illustrative embodiment the system includes an apparatus for storing a JPEG image in or on a storage medium with unequal error protection, comprising a separator for separating the JPEG image into Type-I and Type-II information, an error correction encoder for encoding the Type-I information with more error protection than the Type-II information, and a storage recorder for recording the encoded Type-I and Type-II information in or on the storage medium. The system further includes an apparatus for reading the encoded Type-I and Type-II information from the storage medium.

19 citations


Proceedings ArticleDOI
03 Mar 1995
TL;DR: CB9, as discussed by the authors, is a context-based lossless image compression algorithm that codes prediction errors with an adaptive arithmetic code; it has been developed within an algorithm class that includes (in the order of their development) Sunset, JPEG lossless, sub8xb, and now CaTH (Centering and Tail Handling).
Abstract: The CB9 lossless image compression algorithm is context-based, and codes prediction errors with an adaptive arithmetic code. It has been developed within an algorithm class that includes (in the order of their development) Sunset, JPEG lossless, sub8xb, and now CaTH (Centering and Tail Handling). Lossless compression algorithms using prediction errors are easily modified to introduce a small loss through quantization so that the absolute error at any pixel location does not exceed a prescribed value N. In this case, N varies from 1 to 7, the values for which the JPEG group issued a call for contributions. This work describes CB9 and the experiments with near-lossless compression using the JPEG test images. Included are experiments with some image processing operations, such as edge enhancement, with the purpose of studying the loss in fidelity from successively performing decompression, followed by an image processing operation, followed by recompression of the new result.
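The near-lossless variant described above is commonly realized by uniformly quantizing each prediction error with step 2N+1, which caps the per-pixel reconstruction error at N. A minimal sketch under that assumption follows (Python/NumPy); the actual CB9 context modeling and adaptive arithmetic coder are not reproduced, and the one-pixel predictor is a simplification.

    import numpy as np

    def near_lossless_scan(image, N):
        # Predictive coding with |reconstruction error| <= N per pixel.
        # Predictor: previous reconstructed pixel in the row (simplified).
        img = image.astype(np.int64)
        recon = np.zeros_like(img)
        symbols = []                           # what the arithmetic coder would see
        step = 2 * N + 1
        for r in range(img.shape[0]):
            prev = 128                         # fixed predictor at the start of each row
            for c in range(img.shape[1]):
                e = int(img[r, c]) - prev
                q = (abs(e) + N) // step * (1 if e >= 0 else -1)
                prev = int(np.clip(prev + q * step, 0, 255))
                recon[r, c] = prev
                symbols.append(q)
        return symbols, recon.astype(np.uint8)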

16 citations


Patent
29 Sep 1995
TL;DR: A method for quickly decompressing, for scaling and previewing purposes, a document image that was compressed using transform coding. The method is particularly efficient with the discrete cosine transform used in the JPEG ADCT algorithm.
Abstract: A method for fast decompression of a document image compressed using transform coding, for scaling and previewing purposes. A fast algorithm is derived by utilizing a fraction of all available transform coefficients representing the image. The method is particularly efficient using the discrete cosine transform which is used in the JPEG ADCT algorithm. In JPEG ADCT, a very fast and efficient implementation is derived for a resolution reduction factor of 16 to 1 (4 to 1 in each direction) without needing any floating-point arithmetic operations.
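One common way to obtain a 16:1 (4:1 per axis) reduction directly from the DCT domain is to keep only the 2x2 low-frequency coefficients of each dequantized 8x8 block and apply a 2-point inverse DCT to them. The sketch below (Python/NumPy) shows that approximation; it is not necessarily the exact derivation in the patent, and it uses floating point for brevity where the patent emphasizes an integer-only implementation.

    import numpy as np

    C2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # 2-point orthonormal DCT

    def downscale_block_16to1(dequantized_8x8_coeffs):
        # Treat the top-left 2x2 of the 8x8 DCT coefficients (rescaled by 1/4) as
        # the 2x2 DCT of the reduced block, then invert that tiny transform.
        G = dequantized_8x8_coeffs[:2, :2] / 4.0
        pixels = C2.T @ G @ C2 + 128.0          # inverse 2-D DCT + level shift
        return np.clip(np.round(pixels), 0, 255).astype(np.uint8)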

15 citations


Proceedings ArticleDOI
17 Apr 1995
TL;DR: This work presents a method to significantly improve the performance of software based JPEG decompression, achieving an 80% performance increase decompressing typical JPEG video streams.
Abstract: JPEG picture compression and related algorithms are not only used in still picture compression, but also to a growing degree for moving picture compression in telecommunication applications. Real-time JPEG compression and decompression are crucial in these scenarios. We present a method to significantly improve the performance of software based JPEG decompression. Key to these performance gains are adequate knowledge of the structure of the JPEG coded picture information and transfer of structural information between consecutive processing steps. Our implementation achieved an 80% performance increase decompressing typical JPEG video streams.

12 citations


Journal ArticleDOI
Robert C. Kidd1
TL;DR: Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of the existing international-standard JPEG, it appears possible that this superiority was due more to a lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm.
Abstract: An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of the existing international-standard JPEG, it appears possible that this superiority was due more to a lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.
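For reference, one common form of the SNR figure quoted above compares signal energy with reconstruction-error energy in decibels; the exact definition used in the paper (for example, whether the signal mean is removed) is not stated in the abstract, so the sketch below (Python/NumPy) is only indicative.

    import numpy as np

    def snr_db(original, reconstructed):
        # 10*log10( signal energy / error energy ), both images treated as floats.
        orig = original.astype(np.float64)
        err = orig - reconstructed.astype(np.float64)
        return 10.0 * np.log10(np.sum(orig ** 2) / np.sum(err ** 2))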

11 citations


Proceedings ArticleDOI
07 Jun 1995
TL;DR: An algorithm is proposed to improve image quality at compression ratios higher than JPEG alone can handle; it combines decimation/interpolation and activity classification as pre-/post-processing and uses JPEG with optimal Q-tables as the compression engine.
Abstract: We present a JPEG-based image coding system which preprocesses the image by adaptive subsampling based on local activity levels. As a result, it yields good quality in both smooth and complex areas of the image at high compression ratios. We propose an algorithm aimed at improving image quality at compression ratios higher than JPEG alone can handle. The scheme combines decimation/interpolation and activity classification as the pre-/post-processing, and uses JPEG with optimal Q-tables as the compression engine. It yields better image quality than the original JPEG or uniform-subsampling JPEG. The increase in complexity is only minor compared to JPEG itself.
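A rough sketch of the pre-processing stage described above: classify each block by a local activity measure and decimate only the smooth ones before handing the data to the JPEG engine (the decimated blocks are interpolated back after decoding). The block size, the variance-based activity measure, and the threshold below are illustrative choices, not values from the paper.

    import numpy as np

    def classify_and_subsample(image, block=16, activity_threshold=100.0):
        # Returns a high/low activity map plus the (possibly decimated) tiles that
        # would be packed and passed to the JPEG compression engine.
        h, w = image.shape
        labels = np.zeros((h // block, w // block), dtype=bool)   # True = high activity
        tiles = []
        for by in range(h // block):
            for bx in range(w // block):
                tile = image[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
                high = tile.astype(np.float64).var() > activity_threshold
                labels[by, bx] = high
                tiles.append(tile if high else tile[::2, ::2])    # 2:1 decimation of smooth tiles
        return labels, tiles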

8 citations


Proceedings ArticleDOI
17 Apr 1995
TL;DR: This paper presents a wavelet transform based technique for achieving spatial scalability (within the framework of hierarchical mode) and simulation results confirm the substantial performance improvement and superior subjective quality images using the proposed technique.
Abstract: In this paper, we present scalable image compression algorithms based on the wavelet transform. Recently, the International Organization for Standardization (ISO) has proposed the JPEG standard for still image compression. The JPEG standard not only provides the basic features of compression (the baseline algorithm) but also provides a framework for reconstructing images at different picture qualities and sizes. These features are referred to as SNR and spatial scalability, respectively. Spatial scalability can be implemented using the hierarchical mode in the JPEG standard. However, the standard does not specify the downsampling filters to be used for obtaining the progressively smaller images. A straightforward implementation would employ mean downsampling filters. However, this filter does not perform very well in extracting the features from the full-size image, resulting in poor-quality images and a lower compression ratio. We present a wavelet-transform-based technique for achieving spatial scalability (within the framework of the hierarchical mode). Our simulation results confirm the substantial performance improvement and superior subjective image quality of the proposed technique. Most importantly, the wavelet-based technique does not require any modifications to existing JPEG decoders.
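The contrast drawn above, a plain mean filter versus a wavelet analysis low-pass filter for generating the lower resolution levels of hierarchical JPEG, can be sketched as below (Python/NumPy). The Le Gall 5/3 low-pass filter is used as an example; the abstract does not commit to a specific wavelet.

    import numpy as np

    # Le Gall 5/3 analysis low-pass filter, normalized to unit DC gain (an example
    # choice; the paper's specific wavelet is not named in the abstract).
    H_LP = np.array([-1.0, 2.0, 6.0, 2.0, -1.0]) / 8.0

    def wavelet_downsample(image):
        # Separable low-pass filtering followed by 2:1 decimation per direction,
        # producing one level of the hierarchical-mode pyramid.
        def filt_rows(img):
            padded = np.pad(img, ((0, 0), (2, 2)), mode='reflect')
            out = np.zeros_like(img)
            for k, h in enumerate(H_LP):
                out += h * padded[:, k:k + img.shape[1]]
            return out
        low = filt_rows(filt_rows(image.astype(np.float64)).T).T
        return np.clip(np.round(low[::2, ::2]), 0, 255).astype(np.uint8)

    def mean_downsample(image):
        # The straightforward 2x2 mean filter used as the baseline comparison.
        img = image.astype(np.float64)
        avg = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
        return np.round(avg).astype(np.uint8)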

Proceedings ArticleDOI
07 Jun 1995
TL;DR: In this paper, optimal variable quantization techniques are presented for the newly proposed JPEG extensions in ISO/IEC 10918-3 (ITU-T Recommendation T.84).
Abstract: We present optimal variable quantization techniques for the newly proposed JPEG extensions from ISO/IEC 10918-3 (ITU-T Recommendation T.84). Variable quantization for each block is necessary to transcode from any video compression format (e.g. MPEG). It also gives increased subjective quality and efficient utilization of the channel bandwidth. The selective refinement extension is useful for selecting a part of an image for further refinement. In our experiments, we compare the reconstructed image obtained by variable quantization to that of the JPEG baseline system.
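A minimal sketch of the block-adaptive quantization idea (Python/NumPy): the quantization table is scaled per block, with busier blocks quantized more coarsely. The activity-to-scale mapping here is an illustrative placeholder, not the optimization from the paper, and the Part 3 syntax for signalling the scale factors is not shown.

    import numpy as np

    def block_scale(block, base=1.0, lo=0.5, hi=2.0):
        # Map spatial activity (variance) of an 8x8 block to a quantizer scale factor.
        activity = block.astype(np.float64).var()
        return float(np.clip(base * np.sqrt(activity) / 16.0, lo, hi))

    def quantize_variable(coeff, Q, scale):
        # Per-block scaling of the quantization table before uniform quantization.
        return np.round(coeff / (Q * scale)).astype(np.int32)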

Proceedings ArticleDOI
03 Mar 1995
TL;DR: Experimental results demonstrate that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.2 bits per pixel (with reference to the final, printed image).
Abstract: We describe a procedure by which JPEG compression may be customized for grayscale images that are to be compressed before they are scaled, halftoned, and printed. Our technique maintains 100% compatibility with the JPEG standard, and is applicable with all scaling and halftoning methods. The JPEG quantization table is designed using frequency-domain characteristics of the scaling and halftoning operations, as well as the frequency sensitivity of the human visual system. In addition, the Huffman tables are optimized for low-rate coding. Compression artifacts are greatly reduced because they are masked by the halftoning patterns, and pushed into frequency bands where the eye is less sensitive. We present experimental results demonstrating that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.2 bits per pixel (with reference to the final, printed image). In terms of the achieved bit rate, this performance is typically at least 20% better than that of a JPEG encoder using the suggested baseline tables.
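The design principle described above can be caricatured in a few lines: quantization steps grow at DCT frequencies where the eye is less sensitive and where the halftone pattern masks errors. In the sketch below (Python/NumPy) both 8x8 weight matrices are assumed to be given; deriving them from the scaling, halftoning, and visual models is the substance of the paper and is not reproduced here.

    import numpy as np

    def design_qtable(csf_weights, masking_gain, base_step=16.0):
        # csf_weights:  8x8 visual sensitivity per DCT frequency (larger = more visible)
        # masking_gain: 8x8 factor by which halftoning masks errors at that frequency
        steps = base_step * masking_gain / np.maximum(csf_weights, 1e-3)
        return np.clip(np.round(steps), 1, 255).astype(np.uint16)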

Proceedings ArticleDOI
28 Mar 1995
TL;DR: Two known a posteriori enhancement techniques, transform coefficient adjustment and low-pass filtering, are adapted to ICT restoration based on a quantitative model of distortion statistics, and combined to achieve significant objective and subjective improvement in restored image fidelity compared with original data.
Abstract: The NASA/JPL Galileo spacecraft uses lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique for image compression is a block transform technique based on the integer cosine transform (ICT), a derivative of the JPEG image compression standard where a computationally efficient integer cosine transform implementation replaces JPEG's discrete cosine transform (DCT). JPEG and ICT are examples of block transform-based compression schemes. The compression is achieved by quantizing the transformed data. The distortion characteristics of block transform-based compression techniques are understandable in terms of the properties of the transform basis functions and the transform coefficient quantization error [Boden, 1995]. For scientific applications such as Galileo, it is particularly desirable to mitigate the quantization distortion in the decompressed image to enhance science return. Galileo's limited computational resources preclude additional processing, hence attempts at data restoration are limited to a posteriori processing. The present authors consider two known a posteriori enhancement techniques, transform coefficient adjustment [Ahumada Jr. and Horng], and low-pass filtering [Reeve III and Lim]. These techniques are adapted to ICT restoration based on a quantitative model of distortion statistics, and combined to achieve significant objective and subjective improvement in restored image fidelity compared with original data.
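Of the two enhancement steps named above, the low-pass filtering one is easy to sketch: smoothing is applied only across 8x8 block boundaries so that genuine detail inside blocks is left alone. The sketch below (Python/NumPy) uses a simple 1-2-1 kernel and assumes image dimensions are multiples of the block size; the quantitative distortion model and the coefficient-adjustment step are not reproduced.

    import numpy as np

    def smooth_block_boundaries(image, block=8):
        # 1-2-1 smoothing of the pixel columns/rows on either side of each block
        # boundary; interior pixels are left untouched.
        img = image.astype(np.float64)
        out = img.copy()
        for c in range(block, img.shape[1], block):              # vertical boundaries
            out[:, c - 1] = (img[:, c - 2] + 2 * img[:, c - 1] + img[:, c]) / 4.0
            out[:, c] = (img[:, c - 1] + 2 * img[:, c] + img[:, c + 1]) / 4.0
        for r in range(block, img.shape[0], block):              # horizontal boundaries
            out[r - 1, :] = (out[r - 2, :] + 2 * out[r - 1, :] + out[r, :]) / 4.0
            out[r, :] = (out[r - 1, :] + 2 * out[r, :] + out[r + 1, :]) / 4.0
        return np.clip(np.round(out), 0, 255).astype(np.uint8)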

Proceedings ArticleDOI
27 Apr 1995
TL;DR: This paper addresses both the adaptation of the JPEG baseline system to medical images, by optimizing the normalization array and Huffman tables, and its extension through adaptation of the block size to the image correlation lengths.
Abstract: This paper addresses both the adaptation of the JPEG baseline system to medical images, by optimizing the normalization array and Huffman tables, and its extension through adaptation of the block size to the image correlation lengths. Adapting the JPEG algorithm to each medical image or modality results in a significant improvement of the decompressed image quality at only a small additional computing cost.

Journal Article
TL;DR: For the JPEG baseline scheme, the relationship between the compression rate and picture quality following decompression is discussed; the experimental results show that repeated JPEG compression cycles had no further influence on image quality, with the first compression determining the base image quality.
Abstract: For the compression of still images using the Joint Photographic Experts Group (JPEG) scheme, the relationship between the compression rate and picture quality following decompression is discussed. Little has been reported previously on the effect of reiterated operations of image compression. This study describes the relationship between the visual evaluation and the reiterative operation by the JPEG baseline scheme. The experimental results show that (1) the iterative factor of the repetitive JPEG operation had no influence upon image quality, and (2) the first compression determined base image quality.
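The reiterated-compression experiment is easy to reproduce with any JPEG codec; the sketch below uses the Pillow library (an assumption, not the tooling of the paper) to compress and decode the same image repeatedly at a fixed quality setting and record the file size at each generation as a simple proxy for the stabilization the paper reports.

    import io
    from PIL import Image

    def recompress_repeatedly(path, quality=75, cycles=10):
        # Compress -> decode -> compress ... at constant quality; return per-cycle sizes.
        img = Image.open(path).convert('L')
        sizes = []
        for _ in range(cycles):
            buf = io.BytesIO()
            img.save(buf, format='JPEG', quality=quality)
            sizes.append(buf.tell())
            buf.seek(0)
            img = Image.open(buf).convert('L')   # convert() forces the decode
        return sizes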

Proceedings ArticleDOI
25 Oct 1995
TL;DR: In this article, a novel discrete cosine transform (DCT) and fractal transform coding (FTC) hybrid image compression algorithm is proposed which dramatically improves the speed of FTC coding and JPEG's ability to preserve image details at high compression ratios.
Abstract: A novel discrete cosine transform (DCT) and fractal transform coding (FTC) hybrid image compression algorithm is proposed which dramatically improves the speed of FTC coding and JPEG's ability to preserve image details at high compression ratios. The overall subjective quality of the whole JPEG-decoded image is also improved.


Book ChapterDOI
01 Jan 1995
TL;DR: Until recently, the Group 3 and Group 4 standards for facsimile transmission were the only international standard methods for the compression of images, but these standards deal only with bilevel images and do not address the problem of compressing continuous-tone color or grayscale images.
Abstract: Until recently, the Group 3 and Group 4 standards for facsimile transmission were the only international standard methods for the compression of images. However, these standards deal only with bilevel images and do not address the problem of compressing continuous-tone color or grayscale images.

Proceedings ArticleDOI
Armando Manduca1
20 Sep 1995
TL;DR: A novel scheme for encoding wavelet coefficients, termed embedded zerotree coding, has recently been proposed and yields significantly better compression than more standard methods.
Abstract: Wavelet-based image compression is proving to be a very effective technique for medical images, giving significantly better results than the JPEG algorithm. A novel scheme for encoding wavelet coefficients, termed embedded zerotree coding, has recently been proposed and yields significantly better compression than more standard methods. The authors report the results of experiments comparing such coding to more conventional wavelet compression and to JPEG compression on several types of medical images.

Book ChapterDOI
01 Jan 1995
TL;DR: The bi-level coding algorithm most recently standardized by ISO is the JBIG (Joint Bi-level Imaging Group) algorithm, which is more complex than G3/G4 coding, but offers two compensating advantages.
Abstract: The bi-level coding algorithm most recently standardized by ISO is the JBIG (Joint Bi-level Imaging Group) algorithm [7.1]. JBIG coding is more complex than G3/G4 coding, but offers two compensating advantages.