
Showing papers on "JPEG 2000" published in 1996


Proceedings ArticleDOI
15 Apr 1996
TL;DR: A novel scheme for encoding wavelet coefficients, termed set partitioning in hierarchical trees, has recently been proposed and yields significantly better compression than more standard methods.
Abstract: Wavelet-based image compression is proving to be a very effective technique for medical images, giving significantly better results than the JPEG algorithm. A novel scheme for encoding wavelet coefficients, termed set partitioning in hierarchical trees, has recently been proposed and yields significantly better compression than more standard methods. We report the results of experiments comparing such coding to more conventional wavelet compression and to JPEG compression on several types of medical images.

47 citations
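
A minimal Python sketch of the embedded bitplane idea that SPIHT-style coders build on; it shows only the significance-test core (a one-level Haar transform and most-significant-bitplane-first emission), not the paper's actual set-partitioning rules, and all names and parameters are illustrative:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform, the simplest wavelet."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 2
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 2
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 2
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 2
    return np.block([[a, h], [v, d]])

def bitplane_stream(coeffs, n_planes=3):
    """Emit (index, sign) pairs bitplane by bitplane; truncating the stream
    anywhere yields a coarser but usable reconstruction, which is what
    makes such coders 'embedded'."""
    c = coeffs.ravel()
    T = 2.0 ** np.floor(np.log2(np.abs(c).max()))
    significant = np.zeros(c.size, dtype=bool)
    stream = []
    for _ in range(n_planes):
        newly = (~significant) & (np.abs(c) >= T)   # significance test
        stream += [(int(i), int(np.sign(c[i]))) for i in np.flatnonzero(newly)]
        significant |= newly
        T /= 2                                      # next, finer bitplane
    return stream

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8)).cumsum(axis=0).cumsum(axis=1)  # smooth test "image"
print(bitplane_stream(haar2d(img))[:5])
```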


Proceedings ArticleDOI
27 Feb 1996
TL;DR: In this article, the quantization step sizes are adapted to the activity level of the block, and the activity selection is based on an edge-driven quadtree decomposition of the image.
Abstract: Digital image compression algorithms have become increasingly popular due to the need to achieve cost-effective solutions in transmitting and storing images. In order to meet various transmission and storage requirements, the compression algorithm should allow a range of compression ratios, thus providing images of different visual quality. This paper presents a modified JPEG algorithm that provides better visual quality than the Q-factor scaling method commonly used with JPEG implementations. The quantization step sizes are adapted to the activity level of the block, and the activity selection is based on an edge-driven quadtree decomposition of the image. This technique achieves higher visual quality than standard JPEG compression at the same bit rate.

19 citations
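
As a hedged illustration of activity-adaptive quantization in the spirit of this abstract (the edge-driven quadtree classification is replaced here by a crude per-block gradient measure, and the scale factors and threshold are invented for the example):

```python
import numpy as np
from scipy.fft import dctn

# JPEG's example luminance quantization table (Annex K of the standard)
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def activity(block):
    """Crude edge-activity measure: mean absolute gradient in the block."""
    return np.abs(np.diff(block, axis=0)).mean() + np.abs(np.diff(block, axis=1)).mean()

def encode_block(block, scale):
    """DCT plus quantization with per-block scaled step sizes."""
    return np.round(dctn(block - 128.0, norm='ortho') / (Q_LUMA * scale)).astype(int)

def adaptive_encode(image, coarse=1.5, fine=0.6, thresh=4.0):
    """Busy blocks mask artifacts, so they get coarser steps; smooth blocks finer."""
    out = {}
    for y in range(0, image.shape[0], 8):
        for x in range(0, image.shape[1], 8):
            b = image[y:y + 8, x:x + 8].astype(float)
            out[(y, x)] = encode_block(b, coarse if activity(b) > thresh else fine)
    return out

img = np.random.default_rng(0).integers(0, 256, (16, 16))
coded = adaptive_encode(img)
```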


Journal ArticleDOI
TL;DR: A high-performance technique based on wavelet filter downsampling for achieving spatial scalability within the framework of the hierarchical mode of the JPEG standard is presented.

14 citations
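
A sketch of how wavelet-filter downsampling yields spatial scalability in a hierarchical-mode pipeline: the base layer is a lowpass-downsampled image, and each refinement layer is a residual at the next resolution. The entropy coding stage is elided, and the Haar-like averaging filter is an assumption, not the paper's filter:

```python
import numpy as np

def down(img):
    """Lowpass downsample by 2 with a Haar-like averaging filter."""
    return (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4

def up(img):
    """Nearest-neighbour upsample back to the finer grid."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def make_layers(img, levels=2):
    """Base layer plus per-resolution residual refinement layers."""
    pyramid = [img.astype(float)]
    for _ in range(levels):
        pyramid.append(down(pyramid[-1]))
    layers, recon = [pyramid[-1]], pyramid[-1]      # base = coarsest image
    for full in reversed(pyramid[:-1]):
        residual = full - up(recon)                 # refinement layer
        layers.append(residual)
        recon = up(recon) + residual                # what the decoder now holds
    return layers

img = np.arange(64.0).reshape(8, 8)
layers = make_layers(img)
recon = layers[0]
for r in layers[1:]:                                # decoder: coarse to fine
    recon = up(recon) + r
assert np.allclose(recon, img)                      # full-resolution recovery
```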


Proceedings ArticleDOI
Hyung-Il Kim, H.W. Park
16 Sep 1996
TL;DR: Simulation results show that the proposed algorithm reduces blocking artifacts significantly by both subjective and objective measures.
Abstract: A postprocessing algorithm is proposed to reduce the blocking artifacts of Joint Photographic Experts Group (JPEG) decompressed images. The reconstructed images from JPEG compression produce noticeable image degradation near the block boundaries, in particular for highly compressed images, because each block is transformed and quantized independently. The reduction of these blocking effects has been an essential issue for high-quality visual communications. The proposed postprocessing algorithm reduces these blocking artifacts efficiently. A comparison study between the proposed algorithm and other postprocessing algorithms is made by computer simulation with several JPEG images. These simulation results show that the proposed algorithm reduces the blocking artifacts significantly by both subjective and objective measures.

11 citations
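
A minimal sketch of boundary-filtering deblocking of the general kind this abstract describes (not the paper's specific algorithm): pixels straddling 8x8 block boundaries are smoothed only where the step is small enough to be a coding artifact rather than a real edge; the threshold and filter strength are illustrative:

```python
import numpy as np

def deblock(img, block=8, edge_thresh=20):
    """Soften small discontinuities across block boundaries."""
    out = img.astype(float).copy()
    # vertical block boundaries (between columns x-1 and x)
    for x in range(block, img.shape[1], block):
        step = out[:, x] - out[:, x - 1]
        mask = np.abs(step) < edge_thresh        # skip true image edges
        out[:, x - 1][mask] += step[mask] / 4
        out[:, x][mask] -= step[mask] / 4
    # horizontal block boundaries (between rows y-1 and y)
    for y in range(block, img.shape[0], block):
        step = out[y, :] - out[y - 1, :]
        mask = np.abs(step) < edge_thresh
        out[y - 1, :][mask] += step[mask] / 4
        out[y, :][mask] -= step[mask] / 4
    return out
```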


Proceedings ArticleDOI
16 Sep 1996
TL;DR: An alternate proposal is made for a new variant of JPEG coding based on a reversible transformation of the JPEG output of existing coders, so that existing decoders and encoders can be utilized, and existing JPEG libraries can be modified with no alteration to the quality of full-size images after decompression.
Abstract: Image compression codecs should be adapted to the practical reality that most images are first viewed as thumbnail images before the full-size image is accessed. Existing JPEG variants, progressive and hierarchical, could be used for thumbnail-based image access, but would involve a total re-engineering of existing libraries. An alternate proposal is made for a new variant of JPEG coding based on a reversible transformation of the JPEG output of existing coders, so that existing decoders and encoders can be utilized, and existing JPEG libraries can be modified with no alteration to the quality of full-size images after decompression. In the proposed scheme, the image code is partitioned into a thumb part and the remainder, or FF part. The thumb part is sufficient for the production of an image thumbnail, while the thumb part together with the FF part is required for full-featured image reconstruction.

8 citations
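
An illustrative sketch of the thumb/FF partition idea, not the paper's reversible bitstream transformation: here the DC coefficients of the 8x8 DCT blocks serve as the thumb part (enough for a 1/8-scale thumbnail) and the AC coefficients form the remainder:

```python
import numpy as np
from scipy.fft import dctn, idctn

def partition(img):
    """Split an image's block-DCT data into a thumb part and a remainder."""
    h, w = img.shape
    dc = np.empty((h // 8, w // 8))
    ac = {}
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            c = dctn(img[y:y + 8, x:x + 8].astype(float), norm='ortho')
            dc[y // 8, x // 8] = c[0, 0]        # thumb part
            c[0, 0] = 0
            ac[(y, x)] = c                      # "FF" remainder
    thumb = dc / 8.0          # DC of an orthonormal 8x8 DCT is 8x the block mean
    return thumb, dc, ac

def reconstruct(dc, ac):
    """Full-featured reconstruction needs both partitions."""
    out = np.empty((dc.shape[0] * 8, dc.shape[1] * 8))
    for (y, x), c in ac.items():
        c = c.copy()
        c[0, 0] = dc[y // 8, x // 8]
        out[y:y + 8, x:x + 8] = idctn(c, norm='ortho')
    return out

img = np.random.default_rng(0).integers(0, 256, (16, 16))
thumb, dc, ac = partition(img)
assert np.allclose(reconstruct(dc, ac), img)    # the partition is reversible
```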


Proceedings ArticleDOI
13 Mar 1996
TL;DR: Simple modifications within the JPEG construct are studied to improve the overall quality of images at low bit rates, and a multiresolution-like organization of the 2×2 block DCT coefficients is considered and is shown to represent the Haar-based subband/wavelet transform.
Abstract: JPEG, an international standard for still-image compression, is a widely used technique for compressing natural images. Popularity of JPEG stems from its flexibility, reasonable compression rate and ease of implementation. In baseline and progressive modes of JPEG, transform coding based on the 8×8 block discrete cosine transform (DCT) is used. At high compression ratios (i.e. low bit rates), however, JPEG typically causes blockiness in the reconstructed image. In this paper, we highlight key factors that limit baseline JPEG's performance at low bit rates. Simple modifications within the JPEG construct are studied to improve the overall quality of images at low bit rates. In addition, a multiresolution-like organization of the 2×2 block DCT coefficients is considered and is shown to represent the Haar-based subband/wavelet transform.

8 citations
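
The claimed 2×2 DCT/Haar equivalence is easy to verify numerically: for a 2×2 block, the separable orthonormal 2-point DCT yields exactly the Haar average, horizontal, vertical, and diagonal difference coefficients:

```python
import numpy as np
from scipy.fft import dctn

s = np.array([[7.0, 2.0],
              [4.0, 9.0]])                     # an arbitrary 2x2 block
coeffs = dctn(s, norm='ortho')                 # separable 2-point DCT

haar = 0.5 * np.array([
    [s[0,0] + s[0,1] + s[1,0] + s[1,1],        # average (LL subband)
     s[0,0] - s[0,1] + s[1,0] - s[1,1]],       # horizontal difference
    [s[0,0] + s[0,1] - s[1,0] - s[1,1],        # vertical difference
     s[0,0] - s[0,1] - s[1,0] + s[1,1]],       # diagonal difference
])
assert np.allclose(coeffs, haar)               # the two transforms coincide
```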


Proceedings ArticleDOI
27 Mar 1996
TL;DR: An empirical study investigates the effectiveness of hierarchical coding through the hierarchical mode of JPEG from a network perspective and shares experiences from an implementation of hierarchical JPEG.
Abstract: Hierarchical coding of images and continuous media can be used to effectively control congestion in high-speed networks supporting interactive multimedia communications. However, the tradeoffs of the use of hierarchical coding have not yet been adequately investigated. We have undertaken an empirical study to investigate the effectiveness of hierarchical coding through the hierarchical mode of JPEG from a network perspective. A static analysis of hierarchical JPEG images in comparison to baseline JPEG images in terms of QoS management by packet discarding is provided. We also share our experiences in the implementation of hierarchical JPEG.

6 citations
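
A toy simulation of the packet-discarding QoS idea studied here: base-layer packets of a hierarchical stream are marked protected and refinement packets droppable, so a congested queue sheds refinements first. Layer counts, priorities, and the drop policy are all assumptions for illustration:

```python
from dataclasses import dataclass
import random

@dataclass
class Packet:
    layer: int          # 0 = hierarchical base layer, higher = refinement
    seq: int

def priority(pkt):
    """0 = protected (base layer), 1 = droppable (refinements)."""
    return 0 if pkt.layer == 0 else 1

def congested_queue(packets, capacity):
    """Admit protected packets first, fill what's left with refinements."""
    ranked = sorted(packets, key=lambda p: (priority(p), p.seq))
    return sorted(ranked[:capacity], key=lambda p: p.seq)

random.seed(1)
stream = [Packet(layer=random.choice([0, 1, 2]), seq=i) for i in range(20)]
kept = congested_queue(stream, capacity=12)
print(sum(p.layer == 0 for p in kept), "base packets kept of",
      sum(p.layer == 0 for p in stream))      # base layer survives congestion
```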


Journal ArticleDOI
TL;DR: A new algorithm is proposed to minimise the distortions that accumulate in the process of subjecting images to multiple JPEG compressions.
Abstract: A new algorithm is proposed to minimise the distortions that accumulate in the process of subjecting images to multiple JPEG compressions.

3 citations
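
The intuition such algorithms exploit can be shown in a few lines: requantizing DCT coefficients with the same step sizes is idempotent, so recompression distortion accumulates only when the step sizes (or pixel-domain rounding, not modeled here) change between generations. A sketch, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.normal(scale=50, size=(8, 8))      # stand-in DCT coefficients
Q = 10.0                                        # one step size, for clarity

gen1 = np.round(coeffs / Q) * Q                 # first compression
gen2 = np.round(gen1 / Q) * Q                   # recompression with the same Q
assert np.array_equal(gen1, gen2)               # no further distortion

gen2_bad = np.round(gen1 / 7.0) * 7.0           # mismatched step size
print(np.abs(gen2_bad - gen1).max())            # nonzero extra distortion
```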


Proceedings ArticleDOI
14 Nov 1996
TL;DR: A novel image-adaptive encoding scheme for the baseline JPEG standard that maximizes the decoded image quality without compromising compatibility with current JPEG decoders and may be applied to other systems that use run-length encoding, including intra-frame MPEG and subband or wavelet coding.
Abstract: We introduce a novel image-adaptive encoding scheme for the baseline JPEG standard that maximizes the decoded image quality without compromising compatibility with current JPEG decoders. Our algorithm jointly optimizes quantizer selection, coefficient 'thresholding', and entropy coding within a rate-distortion (R-D) framework. It unifies two previous approaches to image-adaptive JPEG encoding: R-D optimized quantizer selection by Wu and Gersho, and R-D optimal coefficient thresholding by Ramchandran and Vetterli. By formulating an algorithm which optimizes these two operations jointly, we have obtained performance that is the best in the reported literature for JPEG-compatible coding. In fact the performance of this JPEG coder is comparable to that of more complex 'state-of-the-art' image coding schemes: e.g., for the benchmark 512 by 512 'Lenna' image at a coding rate of 1 bit per pixel, our algorithm achieves a peak signal to noise ratio of 39.6 dB, which represents a gain of 1.7 dB over JPEG using the example Q-matrix with a customized Huffman entropy coder, and even slightly exceeds the published performance of Shapiro's celebrated embedded zerotree wavelet coding scheme. Furthermore, with the choice of appropriate visually-based error metrics, noticeable subjective improvement has been achieved as well. The reason for our algorithm's superior performance can be attributed to its conceptual equivalence to the application of entropy-constrained vector quantization design principles to a JPEG-compatible framework. Furthermore, our algorithm may be applied to other systems that use run-length encoding, including intra-frame MPEG and subband or wavelet coding.

2 citations
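
A hedged sketch of the R-D coefficient-thresholding component (the quantizer and Huffman-table optimization that the paper performs jointly are omitted): a coefficient is zeroed whenever the Lagrangian cost J = D + λR decreases, with the rate term approximated by a crude size-category count:

```python
import numpy as np

def bit_cost(level):
    """Rough size-category cost, standing in for real Huffman code lengths."""
    return 0 if level == 0 else int(np.log2(abs(level))) + 2

def rd_threshold(coeffs, Q, lam):
    """Zero quantized coefficients whose rate cost outweighs their
    distortion benefit under Lagrange multiplier lam."""
    levels = np.round(coeffs / Q).astype(int)
    out = levels.copy()
    for idx, lv in np.ndenumerate(levels):
        if lv == 0:
            continue
        d_keep = (coeffs[idx] - lv * Q[idx]) ** 2
        d_zero = coeffs[idx] ** 2
        if d_zero <= d_keep + lam * bit_cost(lv):   # J shrinks: threshold it
            out[idx] = 0
    return out

rng = np.random.default_rng(0)
C = rng.normal(scale=30, size=(8, 8))
Q = np.full((8, 8), 16.0)
print((rd_threshold(C, Q, lam=0) != 0).sum(),       # keeps everything
      (rd_threshold(C, Q, lam=200) != 0).sum())     # prunes cheap-to-drop terms
```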


Proceedings ArticleDOI
16 Sep 1996
TL;DR: Experimental results show that the extended previous closest neighbor prediction method is suitable for inter-frame decorrelation, outperforming lossless JPEG and, by a smaller margin, PCN.
Abstract: Reversible compression of color images is gaining the ever-increasing attention of multimedia publishing industries for collections of works of art. In fact, the availability of high-resolution, high-quality multispectral scanners demands robust and efficient coding techniques capable of capturing inter-band redundancy without destroying the underlying intra-band correlation. Although DPCM schemes (e.g., lossless JPEG) are employed for reversible compression, their straightforward extension to true-color (e.g., RGB, XYZ) image data usually leads to a negligible coding gain or even to a performance penalty with respect to individual coding of each color component. Previous closest neighbor (PCN) prediction has been recently proposed for lossless data compression of multispectral images, in order to take advantage of inter-band data correlation. The basic idea, predicting the value of the current pixel in the current band on the basis of the best zero-order predictor on the previously coded band, has been applied by extending the set of predictors to those adopted by lossless JPEG. On a variety of color images, one of which was acquired directly from a painting by the VASARI Scanner at the Uffizi Gallery with a very high resolution (20 pel/mm, 8 MSB for each of the XYZ color components), experimental results show that the method is suitable for inter-frame decorrelation and outperforms lossless JPEG and, by a smaller margin, PCN.

1 citation
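
An illustrative Python sketch of PCN-style inter-band prediction with a lossless-JPEG predictor set, simplifying the paper's method: for each pixel, the predictor that would have performed best at the co-located position in the previously coded band is applied in the current band (a causal choice the decoder can repeat):

```python
import numpy as np

def predictors(W, N, NW):
    """Lossless JPEG predictors 1..7 (W = left, N = above, NW = above-left)."""
    return [W, N, NW, W + N - NW, W + (N - NW) // 2,
            N + (W - NW) // 2, (W + N) // 2]

def encode_band(cur, prev):
    """Residuals for `cur`, choosing per pixel the predictor that best
    predicted the co-located pixel in the previous band."""
    res = np.zeros_like(cur, dtype=int)       # borders handled separately in practice
    for y in range(1, cur.shape[0]):
        for x in range(1, cur.shape[1]):
            prev_preds = predictors(prev[y, x-1], prev[y-1, x], prev[y-1, x-1])
            best = int(np.argmin([abs(int(prev[y, x]) - int(p)) for p in prev_preds]))
            cur_preds = predictors(cur[y, x-1], cur[y-1, x], cur[y-1, x-1])
            res[y, x] = int(cur[y, x]) - int(cur_preds[best])
    return res                                # residuals go to the entropy coder

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (16, 16))
cur = np.clip(prev + rng.integers(-5, 6, (16, 16)), 0, 255)  # correlated band
print(np.abs(encode_band(cur, prev)[1:, 1:]).mean())  # small residuals, cheap to code
```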


Proceedings ArticleDOI
TL;DR: A highly parallel optical computing technique is designed to perform the cosine transform, with digital VLSI handling the remaining control functions, significantly reducing cell count and improving the performance of JPEG/MPEG encoders.
Abstract: In this paper, we discuss the design of a high-performance hardware implementation architecture for the JPEG/MPEG encoders using hybrid technologies: analog optics and digital VLSI. A major computational cost of the JPEG/MPEG standards is the 2D discrete cosine transform. We design a powerful, highly parallel optical computing technique to perform the cosine transform and use VLSI for the remaining control functions. Combining the best features of optics and VLSI significantly reduces the cell count and improves performance.
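
The optics/VLSI split leans on the separability of the 2D DCT into two matrix multiplies, Y = C X C^T, the form that maps onto optical matrix-vector multipliers; a quick numerical check of that identity:

```python
import numpy as np
from scipy.fft import dctn

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)                      # orthonormal DCT-II matrix

X = np.random.default_rng(0).random((N, N))
# the 2D DCT factors into two matrix products, one per dimension
assert np.allclose(C @ X @ C.T, dctn(X, norm='ortho'))
```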

Proceedings ArticleDOI
15 Apr 1996
TL;DR: The proposed standard for compression of radiology images uses four basic techniques to achieve very high quality reconstructed images: image decomposition into high frequency and low frequency elements, lapped orthogonal discrete cosine transforms, local quantization, and Huffman encoding.
Abstract: Radiology uses large images and series of images that can consume large amounts of storage or of communication bandwidth in the utilization of those images. An update to the standard for compressing radiology images is being considered by the medical imaging compression standards committee. A standard for compression of radiology images is proposed for consideration. The proposed standard uses four basic techniques to achieve very high quality reconstructed images: (a) image decomposition into high frequency and low frequency elements, (b) lapped orthogonal discrete cosine transforms, (c) local quantization, and (d) Huffman encoding. Degenerate forms of the standard include the JPEG standard, already included in the DICOM medical image interchange standard. The proposed standard is a departure from the JPEG standard because of the low quality of the baseline JPEG lossy compression. At the same time, much of the hardware and software that have been used for JPEG compression are applicable to the proposed standard technique. A preprocessing step changes the format of the image to a form that can be processed using JPEG compression. A post-processing step after the JPEG restoration will restore the image. The proposed standard does not permit many techniques that have been used in the past. In particular, decomposition by the level of the significant bits is not permitted, the only transform permitted is the lapped orthogonal discrete cosine transform, the block size of the transform is limited to 8 by 8, and only Huffman coding is allowed. There are many variations that can be used in compression. This proposal allows some variations, but restricts many other variations in the interest of simplicity for the standard. The quality of the compression is very good. The extra complexity in the standard to allow more variations is not warranted.© (1996) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.