
Showing papers on "Lossless JPEG published in 1992"


Journal ArticleDOI
TL;DR: The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method, which has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Abstract: A joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT (discrete cosine transform)-based method is specified for 'lossy' compression, and a predictive method for 'lossless' compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. The author provides an overview of the JPEG standard, and focuses in detail on the Baseline method.

3,425 citations
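
To make the predictive ('lossless') path concrete, here is a minimal Python sketch of the seven neighbour-based predictors the standard defines (a = left, b = above, c = upper-left). Boundary handling is simplified here, and the entropy-coding stage (Huffman or arithmetic) is omitted.

import numpy as np

# Minimal sketch of lossless JPEG's predictive stage (entropy coding omitted).
# a = left neighbor, b = above neighbor, c = upper-left neighbor.
PREDICTORS = {
    1: lambda a, b, c: a,
    2: lambda a, b, c: b,
    3: lambda a, b, c: c,
    4: lambda a, b, c: a + b - c,
    5: lambda a, b, c: a + (b - c) // 2,
    6: lambda a, b, c: b + (a - c) // 2,
    7: lambda a, b, c: (a + b) // 2,
}

def prediction_residuals(img, selector=7):
    """Return the residual image for one of the seven lossless JPEG predictors.

    Border pixels simply use 0 for missing neighbours, which is a simplification
    of the standard's actual boundary rules.
    """
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    f = PREDICTORS[selector]
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if (x > 0 and y > 0) else 0
            pred[y, x] = f(a, b, c)
    return img - pred   # these residuals would then be entropy coded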


Book
31 Dec 1992
TL;DR: This book covers JPEG syntax and data organization, the history of JPEG, and aspects of the human visual system relevant to the standard.
Abstract: Foreword. Acknowledgments. Trademarks. Introduction. Image Concepts and Vocabulary. Aspects of the Human Visual Systems. The Discrete Cosine Transform (DCT). Image Compression Systems. JPEG Modes of Operation. JPEG Syntax and Data Organization. Entropy Coding Concepts. JPEG Binary Arithmetic Coding. JPEG Coding Models. JPEG Huffman Entropy Coding. Arithmetic Coding Statistical. More on Arithmetic Coding. Probability Estimation. Compression Performance. JPEG Enhancements. JPEG Applications and Vendors. Overview of CCITT, ISO, and IEC. History of JPEG. Other Image Compression Standards. Possible Future JPEG Directions. Appendix A. Appendix B. References. Index.

3,183 citations


Journal ArticleDOI
TL;DR: This work presents two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions.
Abstract: We give a new paradigm for lossless image compression, with four modular components: pixel sequence, prediction, error modeling and coding. We present two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions. The MLP method is both progressive and parallelizable. We give results showing that our methods perform significantly better than other currently used methods for lossless compression of high resolution images, including the proposed JPEG standard. We express our results both in terms of the compression ratio and in terms of a useful new measure of compression efficiency, which we call compression gain.

63 citations
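
The error-modelling step described above can be illustrated as follows. This is only a sketch of the idea (estimate the Laplace scale from the residual variance and precompute a discrete distribution an arithmetic coder could use), not the authors' MLP or PPPM implementation.

import numpy as np

def laplace_error_model(residuals, support=256):
    """Fit a zero-mean Laplace model to prediction errors and discretize it.

    Assumes residuals lie within +/- support. For a zero-mean Laplace
    distribution, variance = 2*b^2, so b = sqrt(var / 2).
    """
    residuals = np.asarray(residuals, dtype=np.float64)
    b = np.sqrt(residuals.var() / 2.0) + 1e-9
    ks = np.arange(-support, support + 1)
    probs = np.exp(-np.abs(ks) / b)
    probs /= probs.sum()          # normalized table, one entry per residual value
    return ks, probs

def ideal_bits(residuals, ks, probs):
    """Ideal code length in bits under the model -- a proxy for what an
    arithmetic coder applied to the precomputed distribution would approach."""
    lookup = dict(zip(ks.tolist(), probs.tolist()))
    return -sum(np.log2(lookup[int(r)]) for r in residuals)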



Journal ArticleDOI
TL;DR: An iterative algorithm for designing a set of locally optimal codebooks is developed and results demonstrate that this improved decoding technique can be applied in the JPEG baseline system to decode enhanced quality pictures from the bit stream generated by the standard encoding scheme.
Abstract: Transform coding, a simple yet efficient image coding technique, has been adopted by the Joint Photographic Experts Group (JPEG) as the basis for an emerging coding standard for compression of still images. However, for any given transform encoder, the conventional inverse transform decoder is suboptimal. Better performance can be obtained by a nonlinear interpolative decoder that performs table lookups to reconstruct the image blocks from the code indexes. Each received code index of an image block addresses a particular codebook to fetch a component vector. The image block can be reconstructed as the sum of the component vectors for that block. An iterative algorithm for designing a set of locally optimal codebooks is developed. Computer simulation results demonstrate that this improved decoding technique can be applied in the JPEG baseline system to decode enhanced quality pictures from the bit stream generated by the standard encoding scheme.

40 citations
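
The decoding rule described in the abstract (each code index fetches a component vector from its own codebook, and the block is the sum of those vectors) reduces to a few lines; the iterative codebook-design algorithm itself is not reproduced here, and the codebook layout below is only an assumed illustration.

import numpy as np

def interpolative_decode(indexes, codebooks, block_shape=(8, 8)):
    """Nonlinear interpolative decoding sketch.

    indexes[k] addresses codebooks[k]; each codebook row is assumed to be a
    flattened spatial block. The reconstructed block is the sum of the fetched
    component vectors.
    """
    block = np.zeros(block_shape[0] * block_shape[1])
    for k, idx in enumerate(indexes):
        block += codebooks[k][idx]          # sum of component vectors
    return block.reshape(block_shape)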


Proceedings ArticleDOI
24 Mar 1992
TL;DR: The Bostelmann (1974) technique is evaluated for use at all resolutions, whereas in arithmetic-coded JPEG lossless it is applied only at the 16-bit-per-pixel resolution.
Abstract: The JPEG lossless arithmetic coding algorithm and a predecessor algorithm called Sunset both employ adaptive arithmetic coding with the context model and parameter reduction approach of Todd et al. The authors compare the Sunset and JPEG context models for the lossless compression of gray-scale images, and derive new algorithms based on the strengths of each. The context model and binarization tree variations are compared in terms of their speed (the number of binary encodings required per test image) and their compression gain. In this study, the Bostelmann (1974) technique is evaluated for use at all resolutions, whereas in arithmetic-coded JPEG lossless it is applied only at the 16-bit-per-pixel resolution.

38 citations
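
As a generic illustration of the two ingredients being compared (a binarization tree plus per-context adaptive probability estimation), the sketch below counts binary decisions and their ideal code length. It is not the Sunset or JPEG-lossless context model itself; the unary-style binarization and the simple count-based estimator are assumptions for illustration only.

import math
from collections import defaultdict

counts = defaultdict(lambda: [1, 1])   # (context, position) -> [count of 0s, count of 1s]

def binarize(error):
    """Toy binarization tree: a sign bit, then the magnitude in unary."""
    bits = [0] if error >= 0 else [1]
    bits += [1] * abs(error) + [0]
    return bits

def encode_cost(error, context):
    """Adapt the per-context counts and return the ideal cost in bits
    (a stand-in for the adaptive binary arithmetic coder)."""
    cost = 0.0
    for i, bit in enumerate(binarize(error)):
        c0, c1 = counts[(context, i)]
        p = (c1 if bit else c0) / (c0 + c1)
        cost += -math.log2(p)
        counts[(context, i)][bit] += 1     # update the probability estimate
    return cost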


Proceedings ArticleDOI
24 Mar 1992
TL;DR: A new method for error modeling applicable to the multi-level progressive (MLP) algorithm for hierarchical lossless image compression is presented, based on a concept called the variability index, which provides accurate models for pixel prediction errors without requiring explicit transmission of the models.
Abstract: The authors present a new method for error modeling applicable to the multi-level progressive (MLP) algorithm for hierarchical lossless image compression. This method, based on a concept called the variability index, provides accurate models for pixel prediction errors without requiring explicit transmission of the models. They also use the variability index to show that prediction errors do not always follow the Laplace distribution, as is commonly assumed; replacing the Laplace distribution with a more general distribution further improves compression. They describe a new compression measurement called compression gain, and give experimental results showing that using the variability index gives significantly better compression than other methods in the literature.

31 citations
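
The abstract does not define the variability index precisely, so the following is only one plausible reading of the idea: derive a local activity measure from already-decoded neighbours and bucket pixels by it, so that encoder and decoder agree on a per-bucket error distribution without any model parameters being transmitted. The thresholding scheme below is hypothetical.

import numpy as np

def variability_bucket(img, y, x, n_buckets=4, step=8):
    """Assign pixel (y, x) to an error-model bucket based on the spread of its
    causal (already decoded) neighbours; both encoder and decoder can compute
    this, so no side information is needed."""
    neighbors = []
    if x > 0: neighbors.append(int(img[y, x - 1]))
    if y > 0: neighbors.append(int(img[y - 1, x]))
    if x > 0 and y > 0: neighbors.append(int(img[y - 1, x - 1]))
    if len(neighbors) < 2:
        return 0
    spread = max(neighbors) - min(neighbors)   # crude local activity measure
    return min(spread // step, n_buckets - 1)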


01 Jan 1992
TL;DR: A lossless compression technique applied to bitmaps defining regions of images, showing considerable improvement over previous methods.
Abstract: In this note, we describe a lossless compression technique that has been applied to bitmaps defining regions of images. The results show considerable improvement over previous methods. The basic technique is to use context-based statistical modeling fed into an arithmetic coder.

13 citations
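
A minimal sketch of the "context-based statistical modelling fed into an arithmetic coder" idea, assuming a small causal neighbourhood as the context (the particular 4-pixel template is an assumption, not the paper's); the arithmetic coder is stubbed out as an ideal-code-length accumulator.

import math
import numpy as np

def contextual_cost(bitmap):
    """Ideal code length of a binary bitmap under per-context adaptive counts."""
    bitmap = np.asarray(bitmap, dtype=np.uint8)
    counts = {}                      # context -> [zeros seen, ones seen]
    bits = 0.0
    h, w = bitmap.shape
    for y in range(h):
        for x in range(w):
            # 4-pixel causal context: west, north-west, north, north-east.
            ctx = tuple(
                int(bitmap[cy, cx]) if 0 <= cy < h and 0 <= cx < w and (cy < y or cx < x) else 0
                for cy, cx in ((y, x - 1), (y - 1, x - 1), (y - 1, x), (y - 1, x + 1))
            )
            c = counts.setdefault(ctx, [1, 1])
            pix = int(bitmap[y, x])
            bits += -math.log2(c[pix] / (c[0] + c[1]))
            c[pix] += 1              # adapt the per-context statistics
    return bits                      # length an arithmetic coder would approach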


Proceedings ArticleDOI
03 May 1992
TL;DR: A two-chip set that performs the baseline JPEG image compression and decompression algorithm has been designed, fabricated, and shown to be fully functional.
Abstract: A two-chip set that performs the baseline JPEG image compression and decompression algorithm has been designed, fabricated, and shown to be fully functional. The major functions of the devices include: DCT and IDCT, forward and inverse quantization, Huffman coding and decoding. The devices operate with pixel rates beyond 30 MHz at 70 degrees C and 4.75 V. Each die is less than 10 mm on a side and was implemented in a 1.0 µm CMOS cell-based technology, achieving a 9 man-month design time.

10 citations


Proceedings ArticleDOI
09 Aug 1992
TL;DR: A quantization scheme for discrete cosine transform (DCT) coefficients in the JPEG baseline sequential method for image compression is proposed; it is adaptive to the image characteristics and statistical in nature.
Abstract: A quantization scheme for discrete cosine transform (DCT) coefficients in the Joint Photographic Experts Group's (JPEG's) baseline sequential method for image compression is proposed. The DCT coefficients should be quantized to achieve maximum compression without degrading the visual image quality. The scheme is adaptive to the image characteristics and is statistical in nature. The results are evaluated in terms of compression, root-mean-square error, and subjective visual quality; 8-bit/pixel monochrome images of size 512 × 512 have been compressed in the range 0.4-1 bit/pixel with good to excellent quality.

7 citations
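
The abstract does not give the adaptation rule, so the sketch below only shows where a quantization table enters the baseline pipeline (blockwise 8×8 DCT followed by uniform quantization against the table). The example luminance table is the familiar one from the JPEG specification; any statistical adaptation of the kind the paper describes would amount to modifying this table per image.

import numpy as np
from scipy.fftpack import dct

# Example luminance quantization table from the JPEG specification.
BASE_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def dct2(block):
    """2-D orthonormal DCT of an 8x8 block."""
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def quantize_image(img, q_table=BASE_Q):
    """Blockwise 8x8 DCT followed by uniform quantization against q_table."""
    h, w = (d - d % 8 for d in img.shape)      # drop partial edge blocks
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            coeffs = dct2(img[y:y+8, x:x+8].astype(np.float64) - 128.0)
            out[y:y+8, x:x+8] = np.round(coeffs / q_table)
    return out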


Proceedings ArticleDOI
TL;DR: A methodology for studying the compression obtained by each step of the three-step baseline sequential algorithm is applied, and results, observations, and analysis from simulating the JPEG sequential baseline system are presented.
Abstract: A set of still image compression algorithms developed by the Joint Photographic Experts Group (JPEG) is becoming an international standard. Here we apply a methodology to study the compression obtained by each step of the three-step baseline sequential algorithm. We present results, observations, and analysis on simulating the JPEG sequential baseline system. The primary compression gain comes from run-length coding of zero coefficients. Based on our simulator, a comparison of Huffman coding, WNC arithmetic coding, and the LZW algorithm is also included.
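
The step identified above as the main source of gain can be sketched as follows. Baseline JPEG actually forms (run, size) symbols that are then Huffman coded; this simplified version only emits (zero-run, value) pairs along the zig-zag scan plus an end-of-block marker.

import numpy as np

def zigzag_indices(n=8):
    """Return (y, x) positions of an n x n block in zig-zag order."""
    return sorted(((y, x) for y in range(n) for x in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def run_length_encode(quantized_block):
    """Emit (zero_run, nonzero_value) pairs over the AC coefficients."""
    pairs, run = [], 0
    for y, x in zigzag_indices()[1:]:          # skip the DC coefficient
        v = int(quantized_block[y, x])
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append(('EOB', 0))                   # trailing zeros collapse to end-of-block
    return pairs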

Journal ArticleDOI
TL;DR: A method of image quality improvement accomplished by density transformation after decoding is proposed and its effects are confirmed.
Abstract: The Joint Photographic Experts Group (JPEG) baseline system, which is scheduled to be standardized in 1992, is applied to character images, and the characteristics of the application are investigated. The JPEG system is suitable for continuous-tone images; however, continuous-tone images are usually accompanied by characters. The image quality of characters is investigated for various magnitudes of quantization tables and the deterioration mechanisms are discussed. A method of image quality improvement accomplished by density transformation after decoding is proposed and its effects are confirmed.
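
The paper's exact density transformation is not given in the abstract; the sketch below, with hypothetical dark/light thresholds, merely illustrates the idea of remapping decoded levels toward the dominant character and background densities after decoding to suppress ringing around strokes.

import numpy as np

def density_transform(decoded, dark=32, light=224):
    """Illustrative post-decoding density transformation: clamp near-black and
    near-white levels and stretch the mid-tones between them."""
    img = decoded.astype(np.float64)
    out = np.where(img <= dark, 0,
          np.where(img >= light, 255,
                   (img - dark) * 255.0 / (light - dark)))
    return out.astype(np.uint8)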

Proceedings ArticleDOI
TL;DR: Lossless predictive coding techniques specifically optimized for the compression of computer-generated color graphics are studied, and the performance of the developed schemes is compared with that of the lossless function of the JPEG standard.
Abstract: General purpose image compression algorithms do not fully exploit the redundancy of color graphical images because the statistics of graphics differ substantially from those of other types of images, such as natural scenes or medical images. This paper reports the results of a study of lossless predictive coding techniques specifically optimized for the compression of computer-generated color graphics. In order to determine the most suitable color representation space for coding purposes, the Karhunen-Loeve (KL) transform was calculated for a set of test images and its energy compaction ability was compared with those of other color spaces, e.g., the RGB or YUV signal spaces. The KL transform completely decorrelates the input color data for a given image and provides a lower bound on the color entropy. Based on the color statistics measured on a corpus of test images, a set of optimal spatial predictive coders was designed. These schemes process each component channel independently. The prediction error signal was compressed by both lossless textual substitution codes and statistical codes to achieve distortionless reproduction. The performance of the developed schemes is compared with that of the lossless function of the JPEG standard.
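
The colour-space analysis step is standard: the Karhunen-Loeve transform for a given image is the eigenbasis of the covariance of its colour samples, which fully decorrelates the three components for that image. A minimal sketch:

import numpy as np

def kl_transform(rgb_pixels):
    """Compute the image-specific KL transform of (N, 3) RGB samples.

    Returns the decorrelated components and the orthonormal basis (columns are
    the principal directions, ordered by decreasing variance).
    """
    X = rgb_pixels.astype(np.float64)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]
    basis = eigvecs[:, order]
    return X @ basis, basis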

01 Jan 1992
TL;DR: Simulations show that the technique for reducing blocking while remaining compatible with the JPEG standard results in subjective performance improvements, sacrificing only a marginal increase in bit rate.
Abstract: Transform coding has been chosen for still image compression in the Joint Photographic Experts Group (JPEG) standard. Although transform coding performs better than many other image compression methods and has fast algorithms for implementation, it is limited by a blocking effect at low bit rates. The blocking effect is inherent in all nonoverlapping transforms. This paper presents a technique for reducing blocking while remaining compatible with the JPEG standard. Simulations show that the system results in subjective performance improvements, sacrificing only a marginal increase in bit rate.
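
The abstract does not detail the authors' technique (which evidently touches the encoder, since the bit rate increases slightly). As a generic illustration of attacking blocking while staying JPEG-compatible, the sketch below simply smooths the decoded image across 8×8 block boundaries as a post-process, which leaves the bit stream itself unchanged.

import numpy as np

def smooth_block_boundaries(decoded, strength=0.5):
    """Blend pixels on either side of each 8x8 block boundary toward their
    mutual average; strength = 0 leaves the image untouched."""
    img = decoded.astype(np.float64)
    out = img.copy()
    for b in range(8, img.shape[0], 8):        # horizontal boundaries
        avg = (img[b - 1, :] + img[b, :]) / 2.0
        out[b - 1, :] = (1 - strength) * img[b - 1, :] + strength * avg
        out[b, :] = (1 - strength) * img[b, :] + strength * avg
    for b in range(8, img.shape[1], 8):        # vertical boundaries
        avg = (out[:, b - 1] + out[:, b]) / 2.0
        out[:, b - 1] = (1 - strength) * out[:, b - 1] + strength * avg
        out[:, b] = (1 - strength) * out[:, b] + strength * avg
    return np.clip(out, 0, 255).astype(np.uint8)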

Proceedings ArticleDOI
TL;DR: Preliminary results indicate substantial improvement in performance and bit rates in addition to the mitigation of the tile effect.
Abstract: The tile effect is an artifact that considerably degrades the visual quality of images coded at bit rates below 1 bpp. A new algorithm called JPEG/RBC, which is based on a two-source decomposition of a noncausal model, fits closely within the broad framework of the JPEG standard. Preliminary results indicate substantial improvement in performance and bit rates in addition to the mitigation of the tile effect.