
Showing papers on "Lossless JPEG published in 1995"


Journal ArticleDOI
TL;DR: An implementable three-dimensional, terrain-adaptive, transform-based bandwidth compression technique for multispectral imagery, based on a Karhunen-Loeve transformation followed by the standard JPEG algorithm for coding the resulting spectrally decorrelated eigen images.
Abstract: We present an implementable three-dimensional, terrain-adaptive, transform-based bandwidth compression technique for multispectral imagery. The algorithm exploits the inherent spectral and spatial correlations in the data. The compression technique is based on a Karhunen-Loeve transformation for spectral decorrelation, followed by the standard JPEG algorithm for coding the resulting spectrally decorrelated eigen images. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about 5:1 CR to visually lossy beginning at about 30:1 CR. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral correlation transformation as a function of the variation of the local terrain. The spectral and spatial modularity of the algorithm architecture allows the JPEG stage to be replaced by an alternate spatial coding procedure. The significant practical advantage of this proposed approach is that it is based on the standard and highly developed JPEG compression technology.
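Conceptually, the spectral stage amounts to a Karhunen-Loeve (principal-component) projection of the band vectors. A minimal sketch of that stage, assuming a (bands, rows, cols) array and leaving the spatial coding of each eigen image to a standard JPEG codec:

```python
# Hedged sketch of KLT spectral decorrelation for a multispectral cube.
# The spatial JPEG coding of each eigen image is deliberately omitted.
import numpy as np

def klt_decorrelate(cube):
    """cube: (bands, rows, cols) array -> (eigen_images, basis, band_means)."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    band_means = x.mean(axis=1, keepdims=True)
    xc = x - band_means
    cov = xc @ xc.T / xc.shape[1]              # bands x bands spectral covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # strongest component first
    basis = eigvecs[:, order]
    eigen_images = (basis.T @ xc).reshape(bands, rows, cols)
    return eigen_images, basis, band_means

# The decoder applies the inverse: x = basis @ eigen_images + band_means.
```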

176 citations


Patent
31 Oct 1995
TL;DR: In this article, a method and apparatus are described whereby image compression is done with no multiplications while remaining compatible with a JPEG (Joint Photographic Experts Group) transform; other enhancements are made to improve image quality.
Abstract: Transforms such as the DCT are useful for image compression. One close relative of the DCT is preferred for its arithmetic simplicity. A method and apparatus are described whereby the image compression is done with no multiplications while remaining compatible with a JPEG (Joint Photographic Experts Group) transform. Other enhancements are made to improve image quality.

110 citations


Proceedings ArticleDOI
30 Oct 1995
TL;DR: The sequential, lossless compression schemes obtained when the context modeler is used with an arithmetic coder are tested with a representative set of gray-scale images, and the compression ratios are compared with state-of-the-art algorithms available in the literature.
Abstract: Inspired by theoretical results on universal modeling, a general framework for sequential modeling of gray-scale images is proposed and applied to lossless compression. The model is based on stochastic complexity considerations and is implemented with a tree structure. It is efficiently estimated by a modification of the universal algorithm Context. The sequential, lossless compression schemes obtained when the context modeler is used with an arithmetic coder are tested with a representative set of gray-scale images. The compression ratios are compared with those obtained with state-of-the-art algorithms available in the literature; the results of the comparison show the potential of the proposed approach.
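By way of illustration only, and not the paper's tree-structured universal model: a causal context can be formed from quantized local gradients, and per-context symbol counts of the kind an adaptive arithmetic coder would adapt on can be gathered as follows (the context quantization and the trivial predictor are assumptions).

```python
# Toy context modeler for a gray-scale image (assumed design, for illustration).
import numpy as np
from collections import defaultdict

def causal_context(img, r, c):
    """Small context index built from quantized gradients of causal neighbours."""
    w  = int(img[r, c - 1]) if c > 0 else 0
    n  = int(img[r - 1, c]) if r > 0 else 0
    nw = int(img[r - 1, c - 1]) if r > 0 and c > 0 else 0
    q = lambda d: int(np.clip(d // 32, -3, 3)) + 3   # coarse gradient bins 0..6
    return q(w - nw) * 7 + q(nw - n)                 # 49 possible contexts

def gather_context_counts(img):
    """Per-context counts of prediction errors (what the coder adapts on)."""
    counts = defaultdict(lambda: defaultdict(int))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            pred = int(img[r, c - 1]) if c > 0 else 0   # trivial west predictor
            err = int(img[r, c]) - pred
            counts[causal_context(img, r, c)][err] += 1
    return counts
```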

88 citations


Journal ArticleDOI
TL;DR: A method for applying arithmetic coding to lossless waveform compression is discussed and a formula for selecting ranges of waveform values is provided.
Abstract: A method for applying arithmetic coding to lossless waveform compression is discussed. Arithmetic coding has been used widely in lossless text compression and is known to produce compression ratios that are nearly optimal when the symbol table consists of an ordinary alphabet. In lossless compression of digitized waveform data, however, if each possible sample value is viewed as a "symbol", the symbol table would typically be very large and impractical. The authors therefore define a symbol to be a certain range of possible waveform values, rather than a single value, and develop a coding scheme on this basis. The coding scheme consists of two compression stages. The first stage is lossless linear prediction, which removes coherent components from a digitized waveform and produces a residue sequence that is assumed to have a white spectrum and a Gaussian amplitude distribution. The prediction is lossless in the sense that the original digitized waveform can be recovered by processing the residue sequence. The second stage, which is the subject of the present paper, is arithmetic coding used as just described. A formula for selecting ranges of waveform values is provided. Experiments with seismic and speech waveforms that produce near-optimal results are included.
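A hedged sketch of the range-symbol idea (the bin-selection rule here is an assumption, not the paper's formula): residues are grouped into ranges that are roughly equiprobable under the Gaussian residue model, so the arithmetic coder works with a small alphabet.

```python
# Hedged sketch: map prediction residues to a small alphabet of "range" symbols
# that are roughly equiprobable under a Gaussian residue model.
import numpy as np
from scipy.stats import norm

def residue_to_symbols(residue, n_symbols=16):
    sigma = residue.std() + 1e-12
    probs = np.linspace(0.0, 1.0, n_symbols + 1)[1:-1]    # interior quantiles
    edges = norm.ppf(probs, loc=0.0, scale=sigma)          # equiprobable bin edges
    symbols = np.digitize(residue, edges)                  # values in 0..n_symbols-1
    return symbols, edges

# The symbol stream feeds an arithmetic coder; the residue's offset within its
# range must also be transmitted so the overall scheme remains lossless.
```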

75 citations


Journal ArticleDOI
TL;DR: A tree-clustering and pattern-matching algorithm is proposed to avoid high sparsity of the Huffman tree; the method is shown to be efficient in memory size and fast in searching for a symbol.
Abstract: The fast Huffman decoding algorithm has been used in JPEG, MPEG and other image data compression standards, and code compression is a key element in high speed digital data transport. A major compression is performed by converting fixed-length codes to variable-length codes through an entropy coding scheme; Huffman coding combined with run-length coding is shown to be a very efficient coding scheme. To speed up the search for a symbol in a Huffman tree and to reduce the memory size, we propose a tree-clustering and pattern-matching algorithm that avoids high sparsity of the tree. The method is shown to be very efficient in memory size and fast in searching for a symbol. For experimental video data with Huffman codes of up to 16 bits in length, as used in the standard JPEG, the experiments show that the proposed algorithm achieves very high speed and performance. The design of the decoder is carried out using a silicon-gate CMOS process.
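For illustration only (a generic table-driven decoder, not the paper's clustering/pattern-matching design): a common way to speed up Huffman decoding is to index a lookup table with the next k bits of the stream, where k is the maximum code length.

```python
# Generic table-driven Huffman decoding sketch.
def build_lut(codes, k):
    """codes: {symbol: (codeword_as_int, code_length)}; k: max code length."""
    lut = [None] * (1 << k)
    for sym, (code, length) in codes.items():
        pad = k - length
        for suffix in range(1 << pad):        # every bit pattern sharing this prefix
            lut[(code << pad) | suffix] = (sym, length)
    return lut

def decode(bits, lut, k, n_symbols):
    """bits: '0'/'1' string; returns the first n_symbols decoded symbols."""
    out, pos = [], 0
    while len(out) < n_symbols:
        sym, length = lut[int(bits[pos:pos + k].ljust(k, '0'), 2)]
        out.append(sym)
        pos += length
    return out

# Example with the complete code a=0, b=10, c=11:
codes = {'a': (0b0, 1), 'b': (0b10, 2), 'c': (0b11, 2)}
print(decode('0100011', build_lut(codes, 2), 2, 5))   # ['a', 'b', 'a', 'a', 'c']
```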

56 citations


Journal ArticleDOI
TL;DR: The paper describes all the components of the JPEG algorithm including discrete cosine transform, quantization, and entropy encoding including both encoder and decoder architectures.
Abstract: This paper is the first part of a comprehensive survey of compression techniques and standards for multimedia applications. It covers the JPEG compression algorithm, which is primarily used for full-color still image applications. The paper describes all the components of the JPEG algorithm, including the discrete cosine transform, quantization, and entropy encoding. It also describes both encoder and decoder architectures. The main emphasis is given to the sequential mode of operation, which is the most typical use of JPEG compression; however, the other three modes of operation, progressive, lossless, and hierarchical JPEG, are described as well. Experimental data for both grayscale and color image compression are provided in the paper.
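As a quick reminder of the baseline pipeline the survey covers (entropy coding omitted; the table below is the example luminance matrix from the JPEG specification, scaled in practice by a quality factor):

```python
# Minimal baseline-JPEG-style block pipeline sketch (entropy coding omitted).
import numpy as np
from scipy.fft import dctn, idctn

Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def encode_block(block):
    """Level shift, 2D DCT, and uniform quantization of one 8x8 block."""
    return np.round(dctn(block.astype(np.float64) - 128.0, norm='ortho') / Q_LUMA).astype(int)

def decode_block(q):
    """Dequantize, inverse DCT, and undo the level shift."""
    return np.clip(idctn(q * Q_LUMA, norm='ortho') + 128.0, 0, 255)
```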

55 citations


Journal ArticleDOI
TL;DR: JPEG (Joint Photographic Experts Group) compression has been shown to reduce images to 10% of the original file size without a noticeable change in image quality, and can therefore be used to optimize teleradiology and telemedicine.
Abstract: Economical applications of teleradiology and telemedicine are limited to the existing telephone network infrastructure, which greatly limits the speed of digital information transfer. Telephone lines are inherently slow, requiring image transmission times to be unacceptably long for large, complex, or numerous images. JPEG (Joint Photographic Experts Group) compression has been shown to reduce images to 10% of the original file size without a noticeable change in the quality of the image. This study was carried out to assess the quality of medical diagnostic images after JPEG compression and decompression. X-rays, computed tomography scans, and ultrasound samples were compressed and decompressed using JPEG. The compressed JPEG images were indistinguishable from the original images. The JPEG images were approximately 10% of the original file size. This would reduce image transmission times by 90% (e.g., an unacceptable time of 50 minutes would be reduced to an acceptable time of 5 minutes). JPEG can be used to optimize teleradiology and telemedicine.

50 citations


Proceedings ArticleDOI
09 May 1995
TL;DR: This paper introduces a novel, image-adaptive encoding scheme for the baseline JPEG standard; in particular, coefficient thresholding, JPEG quantization matrix (Q-matrix) optimization, and adaptive Huffman entropy coding are jointly performed to maximize coded still-image quality within the constraints of the baseline JPEG syntax.
Abstract: This paper introduces a novel, image-adaptive, encoding scheme for the baseline JPEG standard. In particular, coefficient thresholding, JPEG quantization matrix (Q-matrix) optimization, and adaptive Huffman entropy-coding are jointly performed to maximize coded still-image quality within the constraints of the baseline JPEG syntax. Adaptive JPEG coding has been addressed in earlier works: by Ramchandran and Vetterli (see IEEE Trans. on Image Processing, Special Issue on Image Compression, vol.3, p.700-704, September 1994), where fast rate-distortion (R-D) optimal coefficient thresholding was described, and by Wu and Gersho (see Proc. Inter. Conf. Acoustics, Speech and Signal Processing, vol.5, p.389-392, April 1993) and Hung and Meng (1991), where R-D optimized Q-matrix selection was performed. By formulating an algorithm which optimizes these two operations jointly, we have obtained performance comparable to more complex, "state-of-the-art" coding schemes: for the "Lenna" image at 1 bpp, our algorithm has achieved a PSNR of 39.6 dB. This result represents a gain of 1.7 dB over JPEG with a customized Huffman entropy coder, and even slightly exceeds the published performance of Shapiro's (see IEEE Trans. on Signal Processing, vol.41, p.3445-3462, December 1993) wavelet-based scheme. Furthermore, with the choice of appropriate visually-based error metrics, noticeable subjective improvement has been achieved as well.
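A simplified sketch of the coefficient-thresholding ingredient (assumed details; the paper optimizes thresholding, the Q-matrix, and Huffman tables jointly): a quantized coefficient is zeroed when the distortion it would add is smaller than lambda times the rate it saves, where bits_for is a hypothetical estimate of the entropy-coded bit cost.

```python
# Simplified rate-distortion coefficient thresholding (assumed details).
# bits_for is a hypothetical callback estimating the entropy-coded bit cost.
import numpy as np

def threshold_block(coeffs, qmatrix, lam, bits_for):
    """coeffs: 8x8 DCT block; qmatrix: 8x8 quantizer steps; lam: Lagrange multiplier."""
    q = np.round(coeffs / qmatrix).astype(int)
    for idx in zip(*np.nonzero(q)):
        if idx == (0, 0):
            continue                                      # never threshold the DC term
        extra_distortion = (q[idx] * qmatrix[idx]) ** 2   # error added by zeroing it
        rate_saving = bits_for(q[idx])                    # bits saved by zeroing it
        if extra_distortion < lam * rate_saving:
            q[idx] = 0
    return q
```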

43 citations


Journal ArticleDOI
TL;DR: A compression algorithm based on discrete wavelet transforms (DWTs) and arithmetic coding (AC) that satisfies radiological archives requirements and offers total flexibility in the image format.
Abstract: Radiological archives need the images to be compressed at a moderate compression ratio between 10:1 and 20:1 while retaining good diagnostic quality. We have developed a compression algorithm based on discrete wavelet transforms (DWTs) and arithmetic coding (AC) that satisfies those requirements. This new method is superior to the previously developed full-frame discrete cosine transform (FFDCT) method, as well as the industrial standard developed by the Joint Photographic Experts Group (JPEG). Since the DWT is localized in both the spatial and scale domains, the error due to quantization of coefficients does not propagate throughout the reconstructed picture as in the FFDCT. Because the transform is applied to the full frame rather than to blocks, it does not suffer the limitations of block-transform methods such as JPEG. The severity of the error, as measured by the normalized mean square error (NMSE) and the maximum difference, increases very slowly with compression ratio compared to the FFDCT. The normalized nearest neighbor difference (NNND), which is a measure of blockiness, stays approximately constant, while the JPEG NNND increases rapidly with compression ratio. Furthermore, the DWT has an efficient finite impulse response (FIR) filter implementation that can be put in parallel hardware. The DWT also offers total flexibility in the image format; the size of the image does not have to be a power of two as in the case of the FFDCT.
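An illustrative front end along these lines (the wavelet, depth, and step size are assumptions; using the PyWavelets package), whose quantized coefficients would then be passed to an arithmetic coder:

```python
# Illustrative DWT-plus-quantization front end (assumed parameters).
import numpy as np
import pywt

def dwt_quantize(image, wavelet="bior4.4", levels=4, step=8.0):
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step)]
    quantized += [tuple(np.round(band / step) for band in detail) for detail in coeffs[1:]]
    return quantized

def dwt_dequantize(quantized, wavelet="bior4.4", step=8.0):
    coeffs = [quantized[0] * step]
    coeffs += [tuple(band * step for band in detail) for detail in quantized[1:]]
    return pywt.waverec2(coeffs, wavelet)
```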

41 citations


Journal ArticleDOI
TL;DR: A lossless image compression scheme that exploits redundancy both at local and global levels in order to obtain maximum compression efficiency is proposed.
Abstract: The redundancy in digital image representation can be classified into two categories: local and global. In this paper, we present an analysis of two image characteristics that give rise to local and global redundancy in image representation. Based on this study, we propose a lossless image compression scheme that exploits redundancy at both the local and global levels in order to obtain maximum compression efficiency. The proposed algorithm segments the image into variable-size blocks and encodes them depending on the characteristics exhibited by the pixels within each block. The proposed algorithm is implemented in software, and its performance is better than that of other lossless compression schemes such as Huffman coding, arithmetic coding, Lempel-Ziv, and JPEG.

40 citations


Patent
31 Aug 1995
TL;DR: In this paper, the authors proposed a two-pass approach that can compress an arbitrary image to a predetermined fixed size file, based on the average sum of the absolute value of quantized DCT coefficients per block.
Abstract: The present invention is a fully JPEG compliant two-pass approach that can compress an arbitrary image to a predetermined fixed size file. The compression coding device and method according to the present invention estimates an activity metric based on the average sum of the absolute value of the quantized DCT coefficients per block. Given the activity metric, a mathematical model relating the image activity to the JPEG Q-factor for a given value of the target compression ratio provides an estimated Q-factor value that yields the design target ratio. This mathematical model is developed during a calibration phase which is executed once off line for a given image capturing device. The fact that our activity metric is based on the quantized DCT coefficients allows for an efficient implementation of the presented coding method in either speed or memory bound systems.
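A sketch of the first pass as described, i.e. the average over 8x8 blocks of the sum of absolute quantized DCT coefficients (block handling is assumed; the calibration model mapping activity and target ratio to a Q-factor is device-specific and is only referred to, not implemented):

```python
# Activity metric from quantized DCT coefficients (first pass of a two-pass coder).
import numpy as np
from scipy.fft import dctn

def activity_metric(image, qmatrix):
    """image: 2D array; qmatrix: 8x8 quantization table (e.g. JPEG luminance)."""
    rows, cols = image.shape
    total, nblocks = 0.0, 0
    for r in range(0, rows - 7, 8):
        for c in range(0, cols - 7, 8):
            block = image[r:r + 8, c:c + 8].astype(np.float64) - 128.0
            total += np.abs(np.round(dctn(block, norm='ortho') / qmatrix)).sum()
            nblocks += 1
    return total / max(nblocks, 1)

# Pass 1: activity = activity_metric(image, qmatrix); the calibrated model
# (fit off line per capture device) maps (activity, target ratio) to a Q-factor.
# Pass 2: JPEG-encode the image once with the estimated Q-factor.
```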

Journal Article
TL;DR: The JPEG method itself and its suitability for photogrammetric work are studied, with special attention being paid to the geometric degradation of digital images due to the compression process.
Abstract: Image compression is a necessity for the utilization of large digital images, e.g., digitized aerial color images. The JPEG still-picture compression algorithm is one alternative for carrying out the image compression task. The JPEG method itself and its suitability for photogrammetric work are studied, with special attention being paid to the geometric degradation of digital images due to the compression process. In our experience, the JPEG algorithm seems to be a good choice for image compression. For color images, it gives a compression ratio of about 1:10 without considerable degradation in the visual or geometric quality of the image.

Journal ArticleDOI
TL;DR: This paper presents a prediction scheme that partitions an image into blocks and, for each block, selects a scan from a codebook of scans such that the resulting prediction error is minimized; results compare very favorably with the JPEG lossless compression standard.
Abstract: When applying predictive compression on image data there is an implicit assumption that the image is scanned in a particular order. Clearly, depending on the image, a different scanning order may give better compression. In earlier work, we had defined the notion of a prediction tree (or scan) which defines a scanning order for an image. An image can be decorrelated by taking differences among adjacent pixels along any traversal of a scan. Given an image, an optimal scan that minimizes the absolute sum of the differences encountered can be computed efficiently. However, the number of bits required to encode an optimal scan turns out to be prohibitive for most applications. In this paper we present a prediction scheme that partitions an image into blocks and for each block selects a scan from a codebook of scans such that the resulting prediction error is minimized. Techniques based on clustering are developed for the design of a codebook of scans. Design of both semiadaptive and adaptive codebooks is considered. We also combine the new prediction scheme with an effective error modeling scheme. Implementation results are then given, which compare very favorably with the JPEG lossless compression standard.
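A minimal illustration (not the authors' codebook design): decorrelate a block along a given scan order by differencing consecutive pixels, and score the scan by the absolute sum of the prediction errors it produces.

```python
# Scan-order differencing and scoring for one block.
import numpy as np

def scan_residuals(block, scan):
    """block: 2D array; scan: ordered list of (row, col) covering the block."""
    values = np.array([block[r, c] for r, c in scan], dtype=np.int64)
    residuals = np.diff(values, prepend=values[0])   # residuals[0] is 0; the first pixel is coded separately
    return residuals, int(np.abs(residuals[1:]).sum())

# A codebook-based coder evaluates several candidate scans per block, keeps the
# one with the smallest score, and entropy codes its residuals plus the scan index.
```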

Proceedings ArticleDOI
23 Oct 1995
TL;DR: The proposed algorithm works with a completely different philosophy, summarized in the following four key points: a perfect-reconstruction hierarchical morphological subband decomposition yielding only integer coefficients, prediction of the absence of significant information across scales using zerotrees of wavelet coefficients, entropy-coded successive-approximation quantization, and lossless data compression via adaptive arithmetic coding.
Abstract: In this paper the problem of progressive lossless image coding is addressed. Many applications require a lossless compression of the image data. The possibility of progressive decoding of the bitstream adds a new functionality for those applications using data browsing. In practice, the proposed scheme can be of intensive use when accessing large databases of images requiring a lossless compression (especially for medical applications). The international standard JPEG allows a lossless mode. It is based on an entropy reduction of the data using various kinds of estimators followed by source coding. The proposed algorithm works with a completely different philosophy summarized in the following four key points: 1) a perfect reconstruction hierarchical morphological subband decomposition yielding only integer coefficients, 2) prediction of the absence of significant information across scales using zerotrees of wavelet coefficients, 3) entropy-coded successive-approximation quantization, and 4) lossless data compression via adaptive arithmetic coding. This approach produces a completely embedded bitstream. Thus, it is possible to decode only partially the bitstream to reconstruct an approximation of the original image.
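To make the first ingredient concrete, here is a reversible integer transform (the classic 1D S-transform, shown only as a stand-in for the paper's morphological subband decomposition): every output is an integer and the input is recovered exactly.

```python
# Reversible integer subband split/merge (1D S-transform).
import numpy as np

def s_transform_1d(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]          # assumes an even-length signal
    low = (even + odd) >> 1               # integer average (floor)
    high = even - odd                     # difference
    return low, high

def inverse_s_transform_1d(low, high):
    even = low + ((high + 1) >> 1)
    odd = even - high
    out = np.empty(low.size + high.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out
```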

Journal ArticleDOI
TL;DR: This paper addresses the parallel implementation of the JPEG still-image compression standard on the MasPar MP-1, a massively parallel SIMD computer, and develops two novel byte alignment algorithms which are used to efficiently input and output compressed data from the parallel system.

Proceedings ArticleDOI
28 Mar 1995
TL;DR: By developing an entropy-constrained quantization framework, this work shows that previous works do not fully realize the attainable coding gain, and formulates a computationally efficient way that attempts to fully realize this gain for baseline-JPEG-decodable systems.
Abstract: Previous works, including adaptive quantizer selection and adaptive coefficient thresholding, have addressed the optimization of a baseline-decodable JPEG coder in a rate-distortion (R-D) sense. In this work, by developing an entropy-constrained quantization framework, we show that these previous works do not fully realize the attainable coding gain, and then formulate a computationally efficient way that attempts to fully realize this gain for baseline-JPEG-decodable systems. Interestingly, we find that the gains obtained using the previous algorithms are almost additive. The framework involves viewing a scalar-quantized system with fixed quantizers as a special type of vector quantizer (VQ), and then to use techniques akin to entropy-constrained vector quantization (ECVQ) to optimize the system. In the JPEG case, a computationally efficient algorithm can be derived, without training, by jointly performing coefficient thresholding, quantizer selection, and Huffman table customization, all compatible with the baseline JPEG syntax. Our algorithm achieves significant R-D improvement over standard JPEG (about 2 dB for typical images) with performance comparable to that of more complex "state-of-the-art" coders. For example, for the Lenna image coded at 1.0 bits per pixel, our JPEG-compatible coder achieves a PSNR of 39.6 dB, which even slightly exceeds the published performance of Shapiro's wavelet coder. Although PSNR does not guarantee subjective performance, our algorithm can be applied with a flexible range of visually-based distortion metrics.

Journal ArticleDOI
TL;DR: This work proposes and investigates a few lossless compression schemes for RGB color images and presents both prediction schemes and error modeling schemes that exploit interframe correlations.
Abstract: Although much work has been done toward developing lossless algorithms for compressing image data, most techniques reported have been for two-tone or gray-scale images. It is generally accepted that a color image can be easily encoded by using a gray-scale compression technique on each of the three (say, RGB) color planes. Such an approach, however, fails to take into account the substantial correlations that are present between color planes. Although several lossy compression schemes that exploit such correlations have been reported in the literature, we are not aware of any such techniques for lossless compression. Because of the difference in goals, the best ways of exploiting redundancies for lossy and lossless compression can be, and usually are, very different. We propose and investigate a few lossless compression schemes for RGB color images. Both prediction schemes and error modeling schemes are presented that exploit interframe correlations. Implementation results on a test set of images yield significant improvements.
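A minimal illustration of exploiting correlation between color planes (a simple plane-differencing predictor, assumed here; not necessarily one of the schemes proposed in the paper):

```python
# Simple inter-plane differencing for lossless RGB coding (assumed scheme).
import numpy as np

def interplane_residuals(rgb):
    """rgb: (rows, cols, 3) uint8 image -> three integer residual planes."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return r, g - r, b - g           # base plane, then inter-plane differences

def reconstruct(res_r, res_g, res_b):
    r = res_r
    g = res_g + r
    b = res_b + g
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

# Each residual plane is then coded losslessly with any gray-scale scheme; the
# difference planes are typically far more compressible than the raw G and B planes.
```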

Proceedings ArticleDOI
03 Mar 1995
TL;DR: CB9 as discussed by the authors is a context-based lossless image compression algorithm that codes prediction errors with an adaptive arithmetic code, and it has been developed within an algorithm class that includes (in the order of their development) Sunset, JPEG lossless, sub8xb, and now CaTH (Centering and Tail Handling).
Abstract: The CB9 lossless image compression algorithm is context-based, and codes prediction errors with an adaptive arithmetic code. It has been developed within an algorithm class that includes (in the order of their development) Sunset, JPEG lossless, sub8xb, and now CaTH (Centering and Tail Handling). Lossless compression algorithms using prediction errors are easily modified to introduce a small loss through quantization so that the absolute error for any pixel location does not exceed prescribed value N. In this case, N varies from 1 to 7; the values for which the JPEG group issued a call for contributions. This work describes CB9 and the experiments with near-lossless compression using the JPEG test images. Included are experiments with some image processing operations such as edge-enhancement with the purpose of studying the loss in fidelity from successively performing decompression, followed by an image processing operation, followed by recompression of the new result.
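The standard construction for bounding the per-pixel error by N (assumed here; CB9's exact rule may differ) quantizes each prediction error with a step of 2N+1, so the reconstruction error never exceeds N as long as the predictor runs on previously reconstructed pixels:

```python
# Near-lossless quantization of prediction errors with |error| <= n.
import numpy as np

def quantize_error(e, n):
    """e: integer prediction error(s); returns quantizer indices."""
    e = np.asarray(e)
    return (np.sign(e) * ((np.abs(e) + n) // (2 * n + 1))).astype(int)

def dequantize_error(q, n):
    return q * (2 * n + 1)

# Example for n = 2: errors -2..2 map to 0, 3..7 map to 1 (reconstructed as 5),
# so the absolute reconstruction error is always <= 2.
```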

Patent
29 Sep 1995
TL;DR: A method and apparatus are described for fast decompression, for scaling and previewing purposes, of a document image compressed using transform coding; the method is particularly efficient with the discrete cosine transform used in the JPEG ADCT algorithm.
Abstract: A method is disclosed for fast decompression of a document image compressed using transform coding, for scaling and previewing purposes. A fast algorithm is derived by utilizing a fraction of all available transform coefficients representing the image. The method is particularly efficient using the discrete cosine transform, which is used in the JPEG ADCT algorithm. In JPEG ADCT, a very fast and efficient implementation is derived for a resolution reduction factor of 16 to 1 (4 to 1 in each direction) without needing any floating point arithmetic operations.
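As a coarser relative of the same idea (8:1 per direction instead of the patent's 4:1), a thumbnail can be read directly off the DC coefficients, with no inverse transforms at all:

```python
# DC-only thumbnail from DCT-domain data (illustration of decoding from a
# fraction of the coefficients; not the patent's 4:1 method).
import numpy as np

def dc_thumbnail(dct_blocks):
    """dct_blocks: (H/8, W/8, 8, 8) dequantized orthonormal DCT coefficients
    of level-shifted blocks. Returns an (H/8, W/8) thumbnail."""
    # For an orthonormal 8x8 DCT, F(0,0) = 8 * block mean; add back the
    # JPEG level shift of 128 to get pixel values.
    return np.clip(dct_blocks[..., 0, 0] / 8.0 + 128.0, 0, 255)
```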

Proceedings ArticleDOI
10 Jul 1995
TL;DR: A comparison of compression algorithms using the discrete cosine transform (DCT, as in JPEG) and the discrete wavelet transform (DWT) applied to remotely sensed optical and SAR images is presented.
Abstract: Presents a comparison of compression algorithms using the discrete cosine transform (DCT, as in JPEG) and the discrete wavelet transform (DWT) applied to remotely sensed images. The statistical behaviors of the DCT and DWT are addressed, and the implications for the performance of the image compression algorithms are compared for optical and SAR images. These SAR images were despeckled during compression. Qualitative and quantitative results are presented.

Proceedings ArticleDOI
17 Apr 1995
TL;DR: A new method to achieve lossless compression of 2D images based on the discrete cosine transform (DCT) is proposed, which quantizes the high energy DCT coefficients in each block, finds an inverse DCT from only these quantized coefficients, and forms an error residual sequence to be coded.
Abstract: In this paper, a new method to achieve lossless compression of 2D images based on the discrete cosine transform (DCT) is proposed. This method quantizes the high-energy DCT coefficients in each block, finds an inverse DCT from only these quantized coefficients, and forms an error residual sequence to be coded. Furthermore, a simple delta modulation scheme is performed on the coefficients that exploits correlation between high-energy DCT coefficients in neighboring blocks of an image. The resulting sequence is compressed by using an entropy coder, and simulations show the results to be promising and more effective than simply performing entropy coding on the original raw image data.
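A per-block sketch of the split into quantized coefficients plus an integer residual (the step size and number of kept coefficients are assumptions; the inter-block delta modulation and the entropy coder are omitted). Because the decoder recomputes the same rounded inverse DCT, adding the residual recovers the block exactly.

```python
# Lossless split/merge of one block into quantized DCT coefficients + residual.
import numpy as np
from scipy.fft import dctn, idctn

def lossless_dct_split(block, qstep=16.0, keep=8):
    coeffs = dctn(block.astype(np.float64), norm='ortho')
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]          # keep-th largest magnitude
    q = np.round(np.where(np.abs(coeffs) >= thresh, coeffs, 0.0) / qstep)
    approx = np.round(idctn(q * qstep, norm='ortho')).astype(int)
    residual = block.astype(int) - approx                    # coded alongside q
    return q.astype(int), residual

def lossless_dct_merge(q, residual, qstep=16.0):
    approx = np.round(idctn(q * qstep, norm='ortho')).astype(int)
    return approx + residual                                 # exact reconstruction
```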

Journal ArticleDOI
TL;DR: A novel approach for the lossless compression of monochrome images using switching theoretic techniques is presented and compares well with JPEG in terms of compression ratio.

Proceedings ArticleDOI
17 Apr 1995
TL;DR: This work presents a method to significantly improve the performance of software based JPEG decompression, achieving an 80% performance increase decompressing typical JPEG video streams.
Abstract: JPEG picture compression and related algorithms are not only used in still picture compression, but also to a growing degree for moving picture compression in telecommunication applications. Real-time JPEG compression and decompression are crucial in these scenarios. We present a method to significantly improve the performance of software based JPEG decompression. Key to these performance gains are adequate knowledge of the structure of the JPEG coded picture information and transfer of structural information between consecutive processing steps. Our implementation achieved an 80% performance increase decompressing typical JPEG video streams.

Journal ArticleDOI
Robert C. Kidd1
TL;DR: Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of the existing international-standard JPEG, it appears possible that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm.
Abstract: An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of the existing international-standard JPEG, it appears possible that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed method drastically diminishes blocking effects and enhances subjective visual quality compared with existing algorithms such as JPEG and the LOT.
Abstract: We try to improve transform coding efficiency by alleviating the interblock correlation that remains due to the small size of the block. The proposed method needs only minor modification of conventional transform coding techniques such as JPEG, and reduces the information loss in the coding procedure for a given bit rate. Simulation results demonstrate that the method drastically diminishes the blocking effects and enhances the subjective visual quality compared with existing algorithms such as JPEG and the LOT.

Proceedings ArticleDOI
09 May 1995
TL;DR: The paper describes the application of adaptive filters in a two-stage lossless data compression algorithm that defines the concept of a reversible filter as opposed to an invertible filter and performs lossless data compression using primarily floating-point operations.
Abstract: The paper describes the application of adaptive filters in a two-stage lossless data compression algorithm. The term lossless implies that the original data can be recovered exactly. The first stage of the scheme consists of a lossless adaptive predictor, while the second stage performs arithmetic coding. The unique aspects of the paper are: (a) defining the concept of a reversible filter as opposed to an invertible filter; (b) performing lossless data compression using primarily floating-point operations; (c) designing lossless adaptive predictors; and (d) using a modified arithmetic coding algorithm that can readily handle inputs consisting of more than 14 bits.
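A hedged sketch of a reversible (rather than merely invertible) adaptive predictor: the prediction is floored to an integer, and the decoder replays exactly the same floating-point operations on the samples it has already reconstructed, so the residual stream reproduces the input bit for bit on the same arithmetic. The normalized-LMS update and its parameters are assumptions, not the paper's design.

```python
# Reversible adaptive prediction front end (assumed NLMS design).
import numpy as np

def predict_residuals(x, order=4, mu=0.5):
    w, hist = np.zeros(order), np.zeros(order)
    residual = np.empty(len(x), dtype=np.int64)
    for i, sample in enumerate(x):
        pred = int(np.floor(w @ hist))                    # integer (floored) prediction
        e = int(sample) - pred
        residual[i] = e                                   # goes to the arithmetic coder
        w += (mu / (hist @ hist + 1e-12)) * e * hist      # normalized-LMS adaptation
        hist = np.roll(hist, 1); hist[0] = sample
    return residual

def reconstruct(residual, order=4, mu=0.5):
    w, hist = np.zeros(order), np.zeros(order)
    x = np.empty(len(residual), dtype=np.int64)
    for i, e in enumerate(residual):
        pred = int(np.floor(w @ hist))
        sample = pred + int(e)
        x[i] = sample
        w += (mu / (hist @ hist + 1e-12)) * e * hist      # identical update path
        hist = np.roll(hist, 1); hist[0] = sample
    return x
```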

Proceedings ArticleDOI
07 Jun 1995
TL;DR: An algorithm is proposed aiming at the improvement of image quality at higher compression ratios than JPEG can handle; it combines decimation/interpolation and activity classification as the pre-/post-processing, and uses JPEG with optimal Q-tables as the compression engine.
Abstract: We present a JPEG-based image coding system which preprocesses the image by adaptive subsampling based on the local activity levels. As a result, it yields good quality in both smooth and complex areas of the image at high compression ratios. We propose an algorithm aiming at the improvement of the image quality at higher compression ratios than JPEG can handle. This scheme combines decimation/interpolation and activity classification as the pre-/post-processing, and uses JPEG with optimal Q-tables as the compression engine. It yields better image quality than the original JPEG or the uniform-subsampling JPEG. The increased complexity is only minor compared to JPEG itself.

Proceedings ArticleDOI
17 Feb 1995
TL;DR: Digital PM images are made without film using a super high definition (SHD) image prototype system, which has more than double the number of pixels and the frame frequency of HDTV images.
Abstract: Telepathology is aiming at pathological diagnoses based on microscopic images of cell samples through broadband networks. The number of pixels in pathological microscopic (PM) images is said to be approximately 4 to 6 million. In this paper, digital PM images are made without film using a super high definition (SHD) image prototype system, which has more than double the number of pixels and the frame frequency of HDTV images. First, the color distribution and spatial spectrum are analyzed in order to estimate the compression characteristics of the images. In addition, the lossless and lossy JPEG coding characteristics are investigated. In lossless compression, the PM images have compression ratios which are very close to 1, while general images have compression ratios around 2. The PM image compression ratios in lossy JPEG coding, where the L*a*b* color difference is less than 2 to 3, are found to almost equal those of lossless JPEG (Joint Photographic Experts Group) coding using arithmetic coding. The PM image coding performance in lossy JPEG coding is also found to be inferior to that of general images including still life images, portraits, and landscapes.

Proceedings ArticleDOI
N. Chaddha1
30 Oct 1995
TL;DR: A modified JPEG algorithm is presented based on the classification of a block as a text or an image block depending on its DCT coefficients; it achieves up to 3 dB gain in PSNR compared to JPEG on multimedia documents.
Abstract: Block-based compression algorithms have found widespread use in image and video compression standards. Algorithms such as JPEG, however, while very effective in compressing continuous-tone images, do not perform well in compressing multimedia documents which contain text and graphics. A growing number of applications use images with a high content of both continuous-tone data and text, e.g. color facsimile data and educational videos, which creates the need for a good compression algorithm. In this paper we present a modified JPEG algorithm for compressing such mixed-mode images which occur in multimedia documents. The algorithm is based on the classification of a block as a kind of text or an image block depending on its DCT coefficients. There can be anywhere from 2 to 256 classes. With such a classification, the same compression algorithm can be applied except that different quantization matrices are used for the different types of blocks. The compression algorithm also uses different entropy codes for the different types of blocks. Simulation results show that the modified JPEG algorithm achieves up to 3 dB gain in PSNR compared to JPEG on multimedia documents.
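A crude two-class illustration of DCT-based block classification (the energy measure and threshold are assumptions; the paper allows anywhere from 2 to 256 classes): a block whose high-frequency DCT energy is large is treated as text and would get its own quantization matrix and entropy code.

```python
# Toy DCT-based text/image block classifier (assumed measure and threshold).
import numpy as np
from scipy.fft import dctn

def classify_block(block, edge_thresh=500.0):
    coeffs = dctn(block.astype(np.float64) - 128.0, norm='ortho')
    high_energy = np.abs(coeffs[4:, :]).sum() + np.abs(coeffs[:4, 4:]).sum()
    return 'text' if high_energy > edge_thresh else 'image'

# A classified coder would then select a per-class quantization matrix and
# entropy code, e.g. a much finer matrix for 'text' blocks to preserve glyph edges.
```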

Proceedings ArticleDOI
17 Apr 1995
TL;DR: This paper presents a wavelet transform based technique for achieving spatial scalability (within the framework of hierarchical mode) and simulation results confirm the substantial performance improvement and superior subjective quality images using the proposed technique.
Abstract: In this paper, we present scalable image compression algorithms based on the wavelet transform. Recently, the International Organization for Standardization (ISO) has proposed the JPEG standard for still image compression. The JPEG standard not only provides the basic features of compression (the baseline algorithm) but also provides a framework for reconstructing images in different picture qualities and sizes. These features are referred to as SNR and spatial scalability, respectively. Spatial scalability can be implemented using the hierarchical mode in the JPEG standard. However, the standard does not specify the downsampling filters to be used for obtaining the progressively smaller images. A straightforward implementation would employ mean downsampling filters. However, such a filter does not perform very well in extracting the features from the full-size image, resulting in poor quality images and a lower compression ratio. We present a wavelet transform based technique for achieving spatial scalability (within the framework of the hierarchical mode). Our simulation results confirm the substantial performance improvement and superior subjective quality of images using the proposed technique. Most importantly, the wavelet based technique does not require any modifications to existing JPEG decoders.
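A minimal sketch of the downsampling step (the wavelet choice and rescaling are assumptions; using the PyWavelets package): the LL band of a single DWT level replaces the mean filter when building the lower-resolution image of the hierarchical-mode pyramid.

```python
# Wavelet-based downsampling for hierarchical-mode spatial scalability (sketch).
import numpy as np
import pywt

def wavelet_downsample(image):
    ll, _details = pywt.dwt2(image.astype(np.float64), 'bior2.2')
    # PyWavelets analysis lowpass filters have a DC gain of sqrt(2) per
    # dimension, so divide by 2 to return to the original pixel range.
    return np.clip(ll / 2.0, 0, 255)
```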