
Showing papers on "Lossless JPEG published in 1994"


01 Jan 1994
TL;DR: A block-sorting, lossless data compression algorithm is described, and the performance of its implementation is compared with widely available data compressors running on the same hardware.
Abstract: We describe a block-sorting, lossless data compression algorithm, and our implementation of that algorithm. We compare the performance of our implementation with widely available data compressors running on the same hardware. The algorithm works by applying a reversible transformation to a block of input …

2,753 citations
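
The block-sorting transformation described in this report is what is now known as the Burrows-Wheeler transform. As a rough, unoptimized sketch of the idea only (not the authors' implementation, which uses efficient suffix sorting and is followed by move-to-front and entropy coding), the transform and its inverse can be written as:

```python
def bwt(data: bytes) -> tuple[bytes, int]:
    """Naive Burrows-Wheeler transform: sort all rotations of the block and
    return the last column plus the position of the original rotation."""
    n = len(data)
    rotations = sorted(range(n), key=lambda i: data[i:] + data[:i])
    last_column = bytes(data[(i - 1) % n] for i in rotations)
    return last_column, rotations.index(0)

def inverse_bwt(last_column: bytes, index: int) -> bytes:
    """Invert the transform by repeatedly prepending the last column to the
    sorted table of partial rotations."""
    n = len(last_column)
    table = [b""] * n
    for _ in range(n):
        table = sorted(last_column[i:i + 1] + table[i] for i in range(n))
    return table[index]

block = b"compressible, very compressible data"
transformed, origin = bwt(block)
assert inverse_bwt(transformed, origin) == block
```

Compression then comes from running a simple coder such as move-to-front followed by Huffman or arithmetic coding over the transformed block, whose symbols are grouped by context.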


Journal ArticleDOI
TL;DR: A rate-distortion optimal way to threshold or drop the DCT coefficients of the JPEG and MPEG compression standards using a fast dynamic programming recursive structure.
Abstract: We show a rate-distortion optimal way to threshold or drop the DCT coefficients of the JPEG and MPEG compression standards. Our optimal algorithm uses a fast dynamic programming recursive structure. The primary advantage of our approach lies in its complete compatibility with standard JPEG and MPEG decoders.

190 citations
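
The paper's optimization runs a dynamic program over each block's quantized coefficients because JPEG's run-length coding couples the cost of neighboring coefficients. Purely to illustrate the underlying rate-distortion trade-off (not the paper's recursion), a greedy per-coefficient rule with an assumed flat per-coefficient bit cost might look like:

```python
import numpy as np

def threshold_block(dct_coeffs, quant_table, lam, bits_per_coeff=6.0):
    """Drop a quantized AC coefficient whenever the squared reconstruction
    error it would add is smaller than lam times the (assumed, flat) bit
    savings.  The paper instead models the true run-length coding cost with
    a dynamic program rather than this constant-rate approximation."""
    quantized = np.round(dct_coeffs / quant_table)
    kept = quantized.copy()
    for (u, v), q in np.ndenumerate(quantized):
        if (u, v) == (0, 0) or q == 0:      # keep DC, skip already-zero terms
            continue
        distortion_increase = (q * quant_table[u, v]) ** 2
        if distortion_increase < lam * bits_per_coeff:
            kept[u, v] = 0.0
    return kept

rng = np.random.default_rng(0)
block = rng.normal(scale=20.0, size=(8, 8))
table = np.full((8, 8), 16.0)
print(np.count_nonzero(threshold_block(block, table, lam=40.0)))
```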


Patent
25 Jul 1994
TL;DR: Color image compression and decompression are achieved by either spatially or chromatically multiplexing three digitized color planes into a digital array representative of a single digitized spatially- and chromatically-multiplexed plane, or, by use of a color imaging device, capturing an image directly into a single spatially multiplexed image plane, for further compression, transmission and/or storage.
Abstract: Color image compression and decompression is achieved by either spatially or chromatically multiplexing three digitized color planes, into a digital array representative of a single digitized spatially- and chromatically-multiplexed plane, or, by use of a color imaging device, capturing an image directly into a single spatially-multiplexed image plane, for further compression, transmission and/or storage (40). At the point of decompression, a demultiplexer (50) separately extracts, from the stored or transmitted image, data to restore each of the color planes. Specific demultiplexing techniques involve correlating information of other planes with the color plane to be demultiplexed. Various techniques of entropy reduction, smoothing and speckle reduction may be used together with standard digital color compression techniques, such as JPEG. Using lossless JPEG about 6:1 data compression is achieved with no losses in subsequent processing after initial compression. Using lossy JPEG substantially higher data compression is achievable, but with proportional loss in perceived image quality.

80 citations
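
The spatial multiplexing idea is broadly similar to sampling the three planes on interleaved lattices, as a single-sensor color camera does. A rough sketch, assuming a Bayer-like layout and even image dimensions (the patent's multiplex patterns and its correlation-based demultiplexer are not reproduced here):

```python
import numpy as np

def multiplex(r, g, b):
    """Spatially multiplex three full-resolution planes into one plane:
    G keeps one checkerboard, R and B share the other (a Bayer-like layout
    chosen for illustration only; assumes even image dimensions)."""
    mux = g.copy()
    mux[0::2, 1::2] = r[0::2, 1::2]
    mux[1::2, 0::2] = b[1::2, 0::2]
    return mux

def demultiplex(mux):
    """Crude demultiplexer: R and B are recovered from their own samples by
    nearest-neighbor replication, and the missing G samples are filled with
    the average of their four G neighbors (edges wrap).  The patent instead
    exploits correlation with the other planes."""
    r = np.repeat(np.repeat(mux[0::2, 1::2], 2, axis=0), 2, axis=1)
    b = np.repeat(np.repeat(mux[1::2, 0::2], 2, axis=0), 2, axis=1)
    gf = mux.astype(float)
    neighbors = (np.roll(gf, 1, 0) + np.roll(gf, -1, 0) +
                 np.roll(gf, 1, 1) + np.roll(gf, -1, 1)) / 4.0
    g = gf.copy()
    g[0::2, 1::2] = neighbors[0::2, 1::2]
    g[1::2, 0::2] = neighbors[1::2, 0::2]
    return r, g, b

rng = np.random.default_rng(1)
r, g, b = (rng.integers(0, 256, size=(8, 8)).astype(float) for _ in range(3))
r_hat, g_hat, b_hat = demultiplex(multiplex(r, g, b))
```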


Proceedings ArticleDOI
17 Oct 1994
TL;DR: This work presents a lossless data-compression algorithm which, being oriented specifically for volume data, achieves greater compression performance than generic compression algorithms that are typically available on modern computer systems.

71 citations


Patent
11 Mar 1994
TL;DR: In this article, a threshold selection for the DCT coefficients of an image or video frame is based on optimizing for minimum distortion for a specified maximum target coding bit rate or, equivalently, for minimized coding bit rate for a specified maximum allowable distortion constraint.
Abstract: For encoding signals corresponding to still images or video sequences, respective standards known as JPEG and MPEG have been proposed. These standards are based on discrete cosine transform (DCT) compression. For economy of transmission, DCT coefficients may be "thresholded" prior to transmission, by dropping the less significant DCT coefficients. While maintaining JPEG or MPEG compatibility, threshold selection for the DCT coefficients of an image or video frame is based on optimizing for minimum distortion for a specified maximum target coding bit rate or, equivalently, for minimized coding bit rate for a specified maximum allowable distortion constraint. In the selection process, a dynamic programming method is used.

55 citations


Journal ArticleDOI
TL;DR: This preliminary study suggests that digitized mammograms are very amenable to compression by techniques compatible with the JPEG standard.
Abstract: We have developed a Joint Photographic Experts Group (JPEG) compatible image compression scheme tailored to the compression of digitized mammographic images. This includes a preprocessing step that segments the tissue area from the background, replaces the background pixels with a constant value, and applies a noise-removal filter to the tissue area. The process was tested by performing a just-noticeable difference (JND) study to determine the relationship between compression ratio and a reader's ability to discriminate between compressed and noncompressed versions of digitized mammograms. We found that at compression ratios of 15∶1 and below, image-processing experts are unable to detect a difference, whereas at ratios of 60∶1 and above they can identify the compressed image nearly 100% of the time. The performance of less specialized viewers was significantly lower because these viewers seemed to have difficulty in differentiating between artifact and real information at the lower and middle compression ratios. This preliminary study suggests that digitized mammograms are very amenable to compression by techniques compatible with the JPEG standard. However, this study was not designed to address the efficacy of the image compression process for mammography, but is a necessary first step in optimizing the compression in anticipation of more elaborate reader performance (ROC) studies.

55 citations
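
The preprocessing step described above (segment the tissue, flatten the background, denoise the tissue) can be sketched roughly as follows; the intensity threshold, the constant background value, and the simple 3x3 mean filter are all assumptions standing in for the paper's actual segmentation and noise-removal filter:

```python
import numpy as np

def preprocess_mammogram(image, tissue_threshold=32, background_value=0):
    """Hedged sketch of the preprocessing: pixels below an assumed intensity
    threshold are treated as background and replaced by a constant, and a
    3x3 mean filter (edges wrap) smooths the tissue region before the image
    is handed to a JPEG-compatible coder."""
    img = image.astype(float)
    tissue_mask = img >= tissue_threshold
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    smoothed = acc / 9.0
    return np.where(tissue_mask, smoothed, float(background_value))
```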


Patent
30 Nov 1994
TL;DR: Image coding methods and apparatus employing discrete cosine transforms for suppressing and/or reducing blocking artifacts using a JPEG file format can be implemented on JPEG hardware slightly modified to provide access to the cosine transform coefficients, as mentioned in this paper.
Abstract: Image coding methods and apparatus employing discrete cosine transforms for suppressing and/or reducing blocking artifacts using a JPEG file format. The methods can be implemented on JPEG hardware slightly modified to provide access to discrete cosine transform coefficients. Filtering techniques and an overlap procedure for implementing the inventive methods are also disclosed.

52 citations


Proceedings ArticleDOI
16 Sep 1994
TL;DR: The discrete wavelet transform is incorporated into the JPEG baseline coder for image coding and the discrete cosine transform is replaced by an association of two-channel filter banks connected hierarchically.
Abstract: The discrete wavelet transform is incorporated into the JPEG baseline coder for image coding. The discrete cosine transform is replaced by an association of two-channel filter banks connected hierarchically. The scanning and quantization schemes are devised and the entropy coder used is exactly the same as used in JPEG. The result is a still image coder that outperforms JPEG while retaining its simplicity and most of its existing building blocks. Objective results and reconstructed images are presented. Keywords: image coding, wavelet transform, JPEG.

32 citations
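
The two-channel filter bank that replaces the 8x8 DCT can be illustrated with the simplest possible pair of filters; the sketch below uses Haar filters on a 1-D, even-length signal (the paper's actual filters and their hierarchical, 2-D arrangement are not specified in the abstract). Applying the analysis step again to the lowpass output yields the hierarchical (wavelet) decomposition.

```python
import numpy as np

def analysis(signal):
    """One level of a two-channel filter bank (Haar filters): split an
    even-length signal into lowpass (average) and highpass (difference)
    subbands, each at half the original rate."""
    x = np.asarray(signal, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return low, high

def synthesis(low, high):
    """Perfectly reconstruct the signal from the two subbands."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

x = np.arange(16, dtype=float)
low, high = analysis(x)
assert np.allclose(synthesis(low, high), x)
```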


Proceedings ArticleDOI
03 Aug 1994
TL;DR: The first stage of a two stage lossless data compression algorithm consists of a lossless adaptive predictor and the second stage employs arithmetic coding.
Abstract: This paper describes the first stage of a two stage lossless data compression algorithm. The first stage consists of a lossless adaptive predictor. The term lossless implies that the original data can be recovered exactly. The second stage employs arithmetic coding. Results are presented for a seismic data base.

30 citations
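
The abstract does not give the predictor's exact form, so the sketch below assumes a one-tap predictor with sign-sign LMS adaptation; because the prediction is rounded to an integer and the decoder repeats the identical computation, the residuals reproduce the input exactly, which is the lossless property the paper relies on. The second stage would entropy-code these residuals with an arithmetic coder.

```python
def _sign(v):
    return (v > 0) - (v < 0)

def encode_residuals(samples, step=0.01):
    """First-stage sketch: a one-tap adaptive predictor with sign-sign LMS
    adaptation.  The prediction is rounded to an integer, so the residuals
    are integers and the decoder below reproduces the input exactly."""
    a, prev, residuals = 0.9, 0, []
    for x in samples:
        e = x - int(round(a * prev))
        residuals.append(e)
        a += step * _sign(e) * _sign(prev)   # identical update runs in the decoder
        prev = x
    return residuals

def decode_residuals(residuals, step=0.01):
    a, prev, samples = 0.9, 0, []
    for e in residuals:
        x = int(round(a * prev)) + e
        samples.append(x)
        a += step * _sign(e) * _sign(prev)
        prev = x
    return samples

data = [10, 12, 15, 14, 14, 9, 7, 8]
assert decode_residuals(encode_residuals(data)) == data
```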


Proceedings ArticleDOI
13 Apr 1994
TL;DR: Simulated annealing of model parameters was used to find optimum models for an image which is a composite of several standard test images, and a three parameter model was chosen to represent quantization tables.
Abstract: Presents a strategy for generating optimal JPEG quantization tables to approximate a target compression ratio. This uses a model to express the quantization coefficients as functions of compression ratio and their position in the quantization table. Simulated annealing of model parameters was used to find optimum models for an image which is a composite of several standard test images. Models of varying complexity with 1 to 6 parameters were optimized at three compression ratios, and a three parameter model was chosen to represent quantization tables. After further optimizations over a range of compressions, a general model was obtained by expressing each model parameter as a function of the compression. Application to three CCITT test pictures demonstrates the quality of recovered images.

25 citations
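
The abstract does not reproduce the fitted model, so the sketch below uses a hypothetical three-parameter form (exponential growth of the step size with spatial frequency, power-law growth with the target compression ratio) purely to show how a handful of parameters can generate a full 8x8 quantization table; both the functional form and the parameter values are assumptions, not the paper's.

```python
import numpy as np

def model_quant_table(target_ratio, a=2.0, b=0.35, c=1.2):
    """Hypothetical 3-parameter model Q[u, v] = a * ratio**c * exp(b * (u + v)):
    quantization steps grow with spatial frequency (u + v) and with the
    requested compression ratio.  The paper fits its own model by simulated
    annealing; this functional form and these values are assumed."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    q = a * (target_ratio ** c) * np.exp(b * (u + v))
    return np.clip(np.round(q), 1, 255).astype(int)

print(model_quant_table(target_ratio=10.0))
```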


Journal ArticleDOI
21 Jun 1994
TL;DR: A new method to control the bit-rate of the JPEG standard using a fuzzy logic algorithm (FLA) is proposed, and a gain factor is determined to derive an appropriate quantization matrix.
Abstract: The JPEG established the first international standard for continuous-tone still images, for both gray scale and color. However, since the JPEG standard was originally designed for general applications, some modifications must be made for specific applications such as digital still cameras. We propose a new method to control the bit-rate of the JPEG standard using a fuzzy logic algorithm (FLA). A gain factor is determined to derive an appropriate quantization matrix. Several rules have been prepared to calculate the gain factor. We used 18 test images for simulations; the results show that the mean error is -1.5%, the standard deviation is 1.5%, and the error ranges from -3% to 1%. Without the FLA, the error ranges from -41% to 43%, the mean error is -0.11%, and the standard deviation is 24.24%.
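
The abstract does not list the fuzzy rules, so the sketch below replaces them with a simple multiplicative feedback loop around a hypothetical `measure_bits` callback (standing in for an actual JPEG encode of the image with the scaled table); it only illustrates how a single gain factor applied to the quantization matrix can steer the bit rate toward a target.

```python
def find_gain_factor(measure_bits, base_table, target_bits, iterations=8):
    """Scale the base quantization matrix by one gain factor and adjust the
    gain until the measured bit count approaches the target.  The paper
    derives the gain from fuzzy rules rather than this iterative feedback;
    measure_bits(table) is an assumed stand-in for a real JPEG encoder."""
    gain, scaled = 1.0, base_table
    for _ in range(iterations):
        scaled = [[max(1, round(q * gain)) for q in row] for row in base_table]
        bits = measure_bits(scaled)
        gain *= bits / target_bits       # too many bits -> coarser quantization
    return gain, scaled
```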

Proceedings ArticleDOI
22 Aug 1994
TL;DR: A technique for image compression using the Discrete Cosine Transform (DCT) method which, compared to classical JPEG, gives no blocking effect at the same compression rate.
Abstract: The paper presents a technique for image compression using the Discrete Cosine Transform (DCT) method. In the Joint Photographic Experts Group norm (JPEG), the image is usually compressed using a "universal" quantization matrix. We propose a technique which employs an appropriate distribution model of the DCT coefficients to deduce the quantization matrix from a set of training images. Compared to classical JPEG, this technique gave no blocking effect at the same compression rate.

Journal ArticleDOI
TL;DR: An application of a fuzzy controller to JPEG for (grey-scale) image data compression is presented, and the results indicate that this fuzzy control equipped JPEG is very promising.
Abstract: An application of a fuzzy controller to JPEG for (grey-scale) image data compression is presented. The fuzzy controller conducts the search for a better compromise between compression ratio and image quality automatically in the JPEG model. Therefore, the fuzzy controller equipped JPEG is insensitive to the given initial quality. Simulations are performed on the image Lena and the results indicate that this fuzzy control equipped JPEG is very promising.

Proceedings ArticleDOI
29 Mar 1994
TL;DR: The authors present experimental results demonstrating that the customized JPEG encoder offers a significant performance advantage over a coder that uses the default quantization and Huffman tables.
Abstract: Describes a procedure by which JPEG compression may be customized for grayscale images that are to be compressed, halftoned, and printed. The technique maintains 100% compatibility with the JPEG standard, and is applicable with any halftoning algorithm. The JPEG quantization table is designed using frequency-domain characteristics of the halftoning patterns and the human visual system, and the Huffman tables are optimized for low-rate coding. The authors present experimental results demonstrating that the customized JPEG encoder offers a significant performance advantage over a coder that uses the default quantization and Huffman tables. The results also show that the customized encoder typically achieves rates in the range 0.13-0.25 bits per pixel (image dependent) with practically no visible compression artifacts in the printed images.

Journal ArticleDOI
01 Aug 1994
TL;DR: In this article, the authors present scalable compression algorithms for image browsing using the progressive and hierarchical modes in the JPEG standard, which are referred to as SNR and spatial scalability, respectively.
Abstract: We present scalable compression algorithms for image browsing. Recently, the International Standards Organization (ISO) has proposed the JPEG standard for still image compression. The JPEG standard not only provides the basic feature of compression (baseline algorithm) but also provides the framework for reconstructing images in different picture qualities and sizes. These features are referred to as SNR and spatial scalability, respectively. SNR scalability and spatial scalability can be implemented using the progressive and hierarchical modes in the JPEG standard. In this paper, we implement and investigate the performance of the progressive and hierarchical coding modes of the JPEG standard and compare their performance with the baseline algorithm.

Proceedings ArticleDOI
Chaddha, Agrawal, Gupta, Meng
15 May 1994
TL;DR: This paper analyzes the JPEG compression algorithm, the international standard for compressing continuous tone images, and shows how higher compression ratios can be achieved with minimal loss in image quality.
Abstract: To meet various transmission and storage constraints, it is desirable that a compression algorithm allows a range of compression ratios. In this paper we analyze the JPEG compression algorithm, the international standard for compressing continuous tone images, and show how higher compression ratios can be achieved with minimal loss in image quality. We compare three methods to achieve variable compression with JPEG. Through a comprehensive analysis using 8 standard monochrome images from the USC database, we propose a method that offers substantial visual quality advantages for high compression ratios. The comparisons have been made by using an objective distortion measure which corresponds well to subjective assessments of visual quality. The paper provides detailed motivation for the algorithms, and suggests the choice of parameters to obtain variable compression with minimal loss in visual quality.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: It is reported that some gain in the lossless compression ratio can be obtained by representing the original image with a Gray code, separating the bit planes and applying the JBIG standard.
Abstract: An important international standardization effort has been made for still image compression. Two standards have been elaborated, namely, the IS 10918, more commonly known as JPEG (Joint Photographic Experts Group), for continuous-tone images and the IS 11544, or JBIG (Joint Bi-level Image Experts Group), for black-and-white images. Though JPEG has a lossless operation mode, it addresses mainly lossy compression. We report that some gain in the lossless compression ratio can be obtained by representing the original image with a Gray code, separating the bit planes and applying the JBIG standard. A gain of 15% to 22% is achieved by the Gray code with respect to the usual binary code. Some results obtained with a general data compression algorithm, namely GZIP (which is based on a Lempel-Ziv-Storer-Szymanski method cascaded with adaptive Huffman coding), combined with Gray coding and bit plane separation are also presented.
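
The Gray-code gain comes from the fact that adjacent gray levels then differ in exactly one bit, so smooth image regions produce fewer transitions in each bit plane. A minimal numpy sketch of the representation step (the JBIG or GZIP coding of the planes is not shown):

```python
import numpy as np

def to_gray_code(image):
    """Convert 8-bit pixel values to their Gray-code representation, in which
    consecutive gray levels differ in a single bit."""
    img = image.astype(np.uint8)
    return img ^ (img >> 1)

def bit_planes(image):
    """Split an 8-bit image into eight binary planes (most significant first);
    each plane would then be passed to a bi-level coder such as JBIG."""
    img = image.astype(np.uint8)
    return [((img >> bit) & 1).astype(np.uint8) for bit in range(7, -1, -1)]

ramp = np.tile(np.arange(256, dtype=np.uint8), (4, 1))
planes = bit_planes(to_gray_code(ramp))
```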

Proceedings ArticleDOI
03 Aug 1994
TL;DR: The fundamentals of lossless signal coding are introduced, and a wide variety of decorrelation and entropy coding techniques are discussed.
Abstract: Lossless compression of signals is of interest in a wide variety of fields such as geophysics, telemetry, nondestructive evaluation and medical imaging, where vast amounts of data must be transmitted or stored, and exact recovery of the original data is required. Nearly all lossless signal coding techniques consist of a decorrelation stage followed by an entropy coding stage. In this paper, fundamentals of lossless signal coding are introduced, and a wide variety of decorrelation and entropy coding techniques are discussed.

Journal ArticleDOI
TL;DR: A novel still image codec is presented that uses an efficient adaptive bit-plane run-length coding on the wavelet transform coefficients of images that outperforms the standard JPEG codec for low bitrate applications.
Abstract: A novel still image codec is presented that uses an efficient adaptive bit-plane run-length coding on the wavelet transform coefficients of images. The main attraction of this coding scheme is its simplicity in which no training and storage of codebooks are required. Also its high visual quality at high compression ratio outperforms the standard JPEG codec for low bitrate applications. A comparative performance between the new codec and the JPEG codec is given.
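
As a stripped-down illustration of the run-length stage only (the wavelet transform and the adaptivity are omitted), a raster-order run-length coder for one binary plane might look like:

```python
import numpy as np

def run_lengths(bit_plane):
    """Run-length code one binary plane in raster order: emit the lengths of
    alternating runs, starting with a (possibly empty) run of zeros.  The
    paper's coder adapts to the plane's statistics; this sketch does not."""
    bits = np.asarray(bit_plane, dtype=np.uint8).ravel()
    runs, current, length = [], 0, 0
    for b in bits:
        if b == current:
            length += 1
        else:
            runs.append(length)
            current, length = int(b), 1
    runs.append(length)
    return runs

plane = (np.arange(64).reshape(8, 8) % 9 == 0).astype(np.uint8)
print(run_lengths(plane))
```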

Proceedings ArticleDOI
01 May 1994
TL;DR: A compression algorithm based on the discrete wavelet transform (DWT) and arithmetic coding (AC) satisfies the requirements of a radiological image archive and is far superior to the previously developed full frame DCT (FFDCT) method as well as the industrial standard JPEG.
Abstract: We have developed a compression algorithm based on discrete wavelet transform (DWT) and arithmetic coding (AC) that satisfies the requirements of a radiological image archive. This new method is far superior to the previously developed full frame DCT (FFDCT) method as well as the industrial standard JPEG. Since DWT is localized in both spatial and scale domains, the error due to quantization of coefficients does not propagate throughout the reconstructed picture as in FFDCT. Since it is a global transformation, it does not suffer the limitation of block transform methods like JPEG. The severity of error as measured by NMSE and maximum difference increases very slowly with compression ratio compared to FFDCT. Normalized nearest neighbor difference (NNND), which is a measure of blockiness, stays approximately constant, while JPEG's NNND increases rapidly with compression ratio. Furthermore, DWT has an efficient FIR implementation which can be put in parallel hardware. DWT also offers total flexibility in the image format; the size of the image does not have to be a power of two as in the case of FFDCT.

Proceedings ArticleDOI
29 Mar 1994
TL;DR: The authors show a rate-distortion optimal quantization technique to threshold the DCT coefficients in the industry image and video coding standards JPEG and MPEG, respectively. The scheme achieves a decent thresholding gain and uses a fast dynamic programming recursive structure that exploits certain monotonicity characteristics of the JPEG and MPEG codebooks to drastically reduce the complexity.
Abstract: The authors show a rate-distortion optimal quantization technique to threshold the DCT coefficients in the industry image and video coding standards JPEG and MPEG respectively. Their scheme achieves a decent thresholding gain in terms of both objective SNR (about 1 dB) as well as perceived quality and uses a fast dynamic programming recursive structure which exploits certain monotonicity characteristics of the JPEG and MPEG codebooks to drastically reduce the complexity. The primary advantage of their encoding algorithm is that it is completely compatible with the baseline JPEG and MPEG decoders.

Proceedings ArticleDOI
25 Sep 1994
TL;DR: Two new quantization tables are derived from the transfer function of the angiocardiographic system, which is a worst-case approach with respect to preserving sharp edges and evaluations based on Hosaka-plots are developed.
Abstract: The lossy JPEG standard may be used for high performance image compression. As implemented in presently available hardware and software, in most cases the so-called luminance quantization table, which may be scaled by a quality factor, is applied to gray-level images. The questions arise of which quality factor is optimal and whether it is possible and worthwhile to specify quantization tables for the particular characteristics of angiocardiograms. Two new quantization tables are derived from the transfer function of the angiocardiographic system, which is a worst-case approach with respect to preserving sharp edges. To assess the performance, evaluations based on Hosaka plots are developed. These diagrams objectively compare the different errors introduced by lossy JPEG compression.

Journal ArticleDOI
TL;DR: A method for determining optimum quantization tables for use in image compression systems which conform to the ISO/CCITT standard for Image Compression, also known as the Joint Photographic Experts Group (JPEG) standard is presented.
Abstract: We present a method for determining optimum quantization tables for use in image compression systems which conform to the ISO/CCITT standard for image compression, also known as the Joint Photographic Experts Group (JPEG) standard. An algorithm based on simulated annealing compresses and decompresses any collection of 8 by 8 pixel blocks, while searching the space of 8 by 8 quantization tables for optimum fidelity according to some chosen measure. A composite cost function maintains a predefined compression ratio while minimizing the RMS error in the decoded image compared to the original. The process can be carried out on a raw image, or greater weight can be given to a selected range of DCT coefficients on the basis of psychophysical considerations. The results of applying the methods to the intensity (Y) components of three JPEG test images are presented. In all cases improved fidelity as measured by RMS error is obtained compared to the quantization table suggested in the JPEG standard. Significantly, the quantization tables obtained for one image most often provide smaller error when applied to other images than does the table suggested with the JPEG standard. An unexpected result when using pre-emphasized images suggests that the psychophysical assumptions underlying the suggested JPEG table may be oversimplified. By applying the method to blocks selected from a variety of images, improved quantization tables can be found for images in general, including color.
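
A compact sketch of the search loop, with hypothetical `round_trip(blocks, table)` and `ratio_of(table)` callables standing in for the actual JPEG compress/decompress cycle and the resulting compression-ratio measurement; the cost combines RMS error with a penalty for missing the predefined compression ratio, as described above.

```python
import numpy as np

def anneal_quant_table(blocks, round_trip, ratio_of, target_ratio,
                       steps=2000, t0=50.0, penalty=1e4, seed=0):
    """Simulated-annealing search over 8x8 quantization tables.  round_trip
    and ratio_of are assumed stand-ins for a real JPEG codec; the composite
    cost is RMS error plus a penalty for deviating from the target ratio."""
    rng = np.random.default_rng(seed)

    def cost(table):
        rms = np.sqrt(np.mean((round_trip(blocks, table) - blocks) ** 2))
        return rms + penalty * (ratio_of(table) - target_ratio) ** 2

    q = np.full((8, 8), 16.0)
    current = cost(q)
    for step in range(steps):
        temperature = t0 * (1.0 - step / steps) + 1e-9
        candidate = q.copy()
        u, v = rng.integers(0, 8, size=2)
        candidate[u, v] = np.clip(candidate[u, v] + rng.integers(-4, 5), 1, 255)
        c = cost(candidate)
        # Accept improvements always, worse tables with Boltzmann probability.
        if c < current or rng.random() < np.exp((current - c) / temperature):
            q, current = candidate, c
    return q
```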

Proceedings ArticleDOI
01 May 1994
TL;DR: The overall compression performance of the Rice algorithm implementations exceeds that of all algorithms tested, including arithmetic coding, UNIX compress, UNIX pack, and gzip.
Abstract: This paper describes two VLSI implementations that provide an effective solution to compressing medical image data in real time. The implementations employ a lossless data compression algorithm, known as the Rice algorithm. The first chip set was fabricated in 1991. The encoder can compress at 20 Msamples/sec and the decoder decompresses at the rate of 10 Msamples/sec. The chip set is available commercially. The second VLSI chip development is a recently fabricated encoder that provides improvements for coding low entropy data and incorporates features that simplify system integration. A new decoder is scheduled to be designed and fabricated in 1994. The performance of the compression chips on a suite of medical images has been simulated. The image suite includes CT, MR, angiographic images, and nuclear images. In general, the single-pass Rice algorithm compression performance exceeds that of two-pass, lossless, Huffman-based JPEG. The overall compression performance of the Rice algorithm implementations exceeds that of all algorithms tested including arithmetic coding, UNIX compress, UNIX pack, and gzip.
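
The heart of the Rice algorithm is Golomb-Rice coding of mapped prediction residuals: each value is split into a unary-coded quotient and k low-order bits, with k chosen adaptively per block of samples. The sketch below shows only the fixed-k codeword construction; the adaptive selection of k and the mapping of signed residuals to non-negative values are omitted.

```python
def rice_encode(value, k):
    """Golomb-Rice codeword for a non-negative integer: unary quotient,
    a terminating 0, then the k low-order bits of the remainder."""
    quotient, remainder = value >> k, value & ((1 << k) - 1)
    return "1" * quotient + "0" + (format(remainder, f"0{k}b") if k else "")

def rice_decode(bits, k):
    quotient = bits.index("0")
    remainder = int(bits[quotient + 1:quotient + 1 + k], 2) if k else 0
    return (quotient << k) | remainder

assert all(rice_decode(rice_encode(v, 3), 3) == v for v in range(40))
```

Small residuals map to short codewords, which is why the decorrelating predictor that precedes this stage matters so much for the overall ratio.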

Proceedings ArticleDOI
25 Sep 1994
TL;DR: This work focuses on techniques and algorithms for detecting the occurrence of a particular error and then for locating that error, and proposes the most effective method for error detection and image correction.
Abstract: The use of variable-length coding in the final stage of image compression using JPEG makes the image more sensitive to channel errors and can have severe effects on the viewed image. This is due to loss of synchronization in the decoder. Even one bit error can propagate significantly throughout the image. In the past, some techniques have been proposed for resynchronizing Huffman decoders using special synchronizing codewords. The JPEG standard itself allows the use of a special restart marker to help decoder resynchronization. It does not, however, give any guidelines for error recovery. We first describe the most probable types of errors that occur in a JPEG data stream. We focus on techniques and algorithms for detecting the occurrence of a particular error and then for locating that error. One technique functions at the entropy encoding level by taking advantage of the specific data structure of the JPEG stream and using alternately two different end-of-block characters. Others function at the DCT coefficient level or at the pixel level, detecting unlikely patterns that are produced due to errors. We compare different methods and finally propose the most effective method for error detection and image correction.

Proceedings ArticleDOI
21 Sep 1994
TL;DR: A robust and implementable compression algorithm for multispectral imagery with a selectable quality level within the near-lossless to visually lossy range by incorporating the best methods available to fully exploit the spectral and spatial correlation in the data.
Abstract: Future land remote sensing satellite systems will likely be constrained in terms of downlink communication bandwidth. To alleviate this limitation the data must be compressed. In this article we present a robust and implementable compression algorithm for multispectral imagery with a selectable quality level within the near-lossless to visually lossy range. The three-dimensional terrain-adaptive transform-based algorithm involves a one-dimensional Karhunen-Loeve transform (KLT) followed by two-dimensional discrete cosine transform (DCT). The images are spectrally decorrelated via the KLT to produce the eigen images. The resulting spectrally-decorrelated eigen images are then compressed using the JPEG algorithm. The key feature of this approach is that it incorporates the best methods available to fully exploit the spectral and spatial correlation in the data. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral decorrelation transformation based upon variations in the local terrain. The spectral and spatial modularity of the algorithm architecture allows the JPEG to be replaced by a totally different coder (e.g., DPCM). However, the significant practical advantage of this approach is that it is leveraged on the standard and highly developed JPEG compression technology. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near- lossless at about 5:1 compression ratio (CR) to visually lossy beginning at around 40:1 CR.
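
The spectral decorrelation step amounts to an eigen-decomposition of the inter-band covariance. A sketch of a single, global KLT producing the eigen images that would then be handed to the JPEG stage (the paper's terrain-adaptive variation of the transform is not attempted here):

```python
import numpy as np

def spectral_klt(cube):
    """Karhunen-Loeve transform along the spectral axis of a (bands, H, W)
    cube: diagonalize the inter-band covariance and project every pixel's
    spectral vector onto the eigenvectors, giving decorrelated eigen images."""
    bands, h, w = cube.shape
    x = cube.reshape(bands, -1).astype(float)
    mean = x.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x - mean))
    order = np.argsort(eigvals)[::-1]          # largest spectral variance first
    basis = eigvecs[:, order]
    eigen_images = (basis.T @ (x - mean)).reshape(bands, h, w)
    return eigen_images, basis, mean

rng = np.random.default_rng(0)
cube = rng.normal(size=(6, 32, 32)).cumsum(axis=0)   # spectrally correlated bands
eigen_images, basis, mean = spectral_klt(cube)
```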

Proceedings ArticleDOI
31 Oct 1994
TL;DR: This paper describes the research effort currently in progress to develop lossless data compression algorithms for seismic, speech, and image data sets using a recursive least squares a priori adaptive lattice structure followed by an arithmetic coding stage.
Abstract: This paper describes the research effort currently in progress to develop lossless data compression algorithms for seismic, speech, and image data sets. For many applications, such as transmitting and archiving research data bases, using lossy compression algorithms is not advisable. In situations where critical data (e.g. research instrumentation) is to be transmitted or archived, a real-time lossless data compression algorithm is desirable. The paper presents a version of the algorithm that uses a recursive least squares a priori adaptive lattice structure followed by an arithmetic coding stage. The real-time effectiveness of this algorithm is being verified by coding the technique to run on a TMS320C3x card custom developed for our applications.

Proceedings ArticleDOI
01 May 1994
TL;DR: Full frame DCT coding has been investigated, using an optimized bit allocation and quantization scheme, and at a compression ratio of 12:1 the image quality appeared to be better than that of JPEG baseline compression.
Abstract: This paper reports on a lossy compression method applicable to cardiac angiography. Full frame DCT coding has been investigated, using an optimized bit allocation and quantization scheme. We compared it to the standard JPEG method in the environment of a cardiac angiography system with dedicated visualization devices and post-processing. At a compression ratio of 12:1, the image quality appeared to be better than that of JPEG baseline compression. Owing to the principle of our method, no blocking effect is induced, whereas this is a critical drawback of the JPEG algorithm. Furthermore, the sharpness of fine details is better preserved.

01 Apr 1994
TL;DR: A hybrid lossless compression model employing both the (lossy) JPEG DCT algorithm and one of a selection of lossless image compression methods has been tested, and lossless JPEG outperformed the other lossless methods over a broad range of browse image qualities.
Abstract: A hybrid lossless compression model employing both the (lossy) JPEG DCT algorithm and one of a selection of lossless image compression methods has been tested. The hybrid model decomposes the original image into a low-loss quick-look browse and a residual image. The lossless compression methods tested in the model are Huffman, arithmetic, LZW, lossless JPEG, and diagonal coding. For both the direct and the hybrid application of these lossless methods, the compression ratios (CR's) are calculated and compared on three test images. For each lossless method tested, the hybrid model had no more than a nominal loss in compression efficiency relative to the direct approach. In many cases, the hybrid model provided a significant compression gain. When used in the hybrid model, lossless JPEG outperformed the other lossless methods over a broad range of browse image qualities.
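
The hybrid decomposition itself is simple to sketch: subtract the decoded browse image from the original and code the integer residual losslessly. Below, a coarse-quantization lambda stands in for the lossy JPEG DCT browse codec, and zlib stands in for the Huffman / arithmetic / LZW / lossless-JPEG / diagonal-coding back ends compared in the paper.

```python
import numpy as np
import zlib

def hybrid_compress(image, lossy_round_trip):
    """Split the image into a lossy browse image plus an integer residual and
    code the residual losslessly.  lossy_round_trip is assumed to encode and
    decode the browse image (e.g. with lossy JPEG); zlib is a stand-in for the
    lossless back ends compared in the paper."""
    browse = lossy_round_trip(image).astype(np.int16)
    residual = image.astype(np.int16) - browse
    packed = zlib.compress(residual.tobytes())
    return browse, packed, residual.shape

def hybrid_decompress(browse, packed, shape):
    residual = np.frombuffer(zlib.decompress(packed), dtype=np.int16).reshape(shape)
    return (browse + residual).astype(np.uint8)

# Toy stand-in for the lossy browse codec: coarse quantization of pixel values.
lossy = lambda img: (img // 16) * 16
image = np.arange(64, dtype=np.uint8).reshape(8, 8)
browse, packed, shape = hybrid_compress(image, lossy)
assert np.array_equal(hybrid_decompress(browse, packed, shape), image)
```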