
Showing papers on "Lossless JPEG published in 2001"


Proceedings ArticleDOI
02 Apr 2001
TL;DR: Two new invertible watermarking methods for authentication of digital images in the JPEG format are presented, providing new information assurance tools for integrity protection of sensitive imagery, such as medical images or high-importance military images viewed under non-standard conditions when usual criteria for visibility do not apply.
Abstract: We present two new invertible watermarking methods for authentication of digital images in the JPEG format. While virtually all previous authentication watermarking schemes introduced some small amount of non-invertible distortion in the image, the new methods are invertible in the sense that, if the image is deemed authentic, the distortion due to authentication can be completely removed to obtain the original image data. The first technique is based on lossless compression of biased bit-streams derived from the quantized JPEG coefficients. The second technique modifies the quantization matrix to enable lossless embedding of one bit per DCT coefficient. Both techniques are fast and can be used for general distortion-free (invertible) data embedding. The new methods provide new information assurance tools for integrity protection of sensitive imagery, such as medical images or high-importance military images viewed under non-standard conditions when usual criteria for visibility do not apply.
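The first technique is easiest to see at toy scale: the bit-stream derived from the quantized DCT coefficients is biased, so compressing it losslessly frees room for a payload, and the original bits can later be restored exactly. The Python sketch below illustrates only that general idea, using the coefficient LSBs as the biased stream and zlib as the lossless compressor; it is not the authors' construction, and the helper names, the 4-byte length header, and the assumption that the coefficient count is a multiple of 8 are mine.

```python
import zlib
import numpy as np

def embed_invertible(coeffs, payload_bits):
    """Toy invertible embedding: compress the (biased) LSB stream of the
    quantized coefficients, then store [length | compressed LSBs | payload]
    back into the LSBs. Assumes coeffs.size is a multiple of 8."""
    lsbs = (coeffs.ravel() & 1).astype(np.uint8)
    comp = zlib.compress(np.packbits(lsbs).tobytes(), 9)
    body = len(comp).to_bytes(4, "big") + comp + np.packbits(payload_bits).tobytes()
    capacity = lsbs.size // 8
    if len(body) > capacity:
        raise ValueError("LSB stream not biased enough for this payload")
    new_lsbs = np.unpackbits(np.frombuffer(body.ljust(capacity, b"\0"), np.uint8))
    marked = (coeffs.ravel() & ~1) | new_lsbs
    return marked.reshape(coeffs.shape)

def extract_and_restore(marked, n_payload_bits):
    """Read the payload back and reconstruct the original coefficients exactly."""
    lsbs = (marked.ravel() & 1).astype(np.uint8)
    body = np.packbits(lsbs).tobytes()
    clen = int.from_bytes(body[:4], "big")
    payload = np.unpackbits(np.frombuffer(body[4 + clen:], np.uint8))[:n_payload_bits]
    orig_lsbs = np.unpackbits(
        np.frombuffer(zlib.decompress(body[4:4 + clen]), np.uint8))[:lsbs.size]
    original = (marked.ravel() & ~1) | orig_lsbs
    return payload, original.reshape(marked.shape)

rng = np.random.default_rng(0)
coeffs = rng.geometric(0.85, size=(64, 64)) - 1        # mostly small values -> biased LSBs
bits = rng.integers(0, 2, 128, dtype=np.uint8)
marked = embed_invertible(coeffs, bits)
payload, restored = extract_and_restore(marked, 128)
assert np.array_equal(payload, bits) and np.array_equal(restored, coeffs)
```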

207 citations


Proceedings ArticleDOI
12 Nov 2001
TL;DR: A new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format is introduced.
Abstract: In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression with a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend against using images that were originally stored in the JPEG format as cover images for spatial-domain steganography.
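The compatibility idea can be imitated with a crude round-trip test: a block that really came out of a JPEG decoder should survive re-quantization with the same quantization matrix essentially unchanged, while a block whose pixels were modified afterwards usually will not. The sketch below (SciPy's DCT, a flat quantization table, and the tolerance value are all my choices) is a much weaker heuristic than the paper's exhaustive block analysis, but it shows the principle.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_compatible(block, Q, tol=0.5):
    """Heuristic check of an 8x8 grayscale block against quantization matrix Q:
    quantize its DCT with Q, decode again, and compare with the input block."""
    c = dctn(block.astype(np.float64) - 128.0, norm="ortho")
    q = np.round(c / Q)
    rec = np.clip(np.round(idctn(q * Q, norm="ortho")) + 128.0, 0, 255)
    return np.max(np.abs(rec - block)) <= tol

# Toy demo: build a block a JPEG decoder could output, then flip one pixel.
rng = np.random.default_rng(1)
Q = np.full((8, 8), 10.0)
raw = rng.integers(64, 192, size=(8, 8)).astype(np.float64)
decoded = np.round(idctn(np.round(dctn(raw - 128.0, norm="ortho") / Q) * Q,
                         norm="ortho")) + 128.0
tampered = decoded.copy()
tampered[3, 4] += 1                      # LSB-style spatial change
print(jpeg_compatible(decoded, Q), jpeg_compatible(tampered, Q))   # True False
```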

201 citations


Journal ArticleDOI
TL;DR: It is found that lossless audio coders have reached a limit in what can be achieved for lossless compression of audio, and a new lossless audio coder called AudioPak is described, which has low algorithmic complexity and performs as well as or better than most of the lossless audio coders that have been described in the literature.
Abstract: Lossless audio compression is likely to play an important part in music distribution over the Internet, DVD audio, digital audio archiving, and mixing. The article is a survey and a classification of the current state-of-the-art lossless audio compression algorithms. This study finds that lossless audio coders have reached a limit in what can be achieved for lossless compression of audio. It also describes a new lossless audio coder called AudioPak, which has low algorithmic complexity and performs as well as or better than most of the lossless audio coders that have been described in the literature.
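AudioPak is commonly described as combining small fixed FIR ("polynomial") predictors, as in Shorten, with Golomb-Rice coding of the residuals on a per-frame basis. The sketch below estimates the Rice-coded cost of each predictor order for one frame so the cheapest can be picked; the function names and the parameter heuristic are mine, and this illustrates the family of techniques rather than the published coder.

```python
import numpy as np

def polynomial_residuals(frame):
    """Residuals of the fixed predictors of order 0..3 (Shorten-style);
    warm-up samples are ignored for simplicity."""
    e0 = frame.astype(np.int64)
    e1 = np.diff(e0, prepend=e0[:1])
    e2 = np.diff(e1, prepend=e1[:1])
    e3 = np.diff(e2, prepend=e2[:1])
    return [e0, e1, e2, e3]

def rice_cost_bits(res):
    """Bits to Rice-code the residuals, with k chosen from their mean magnitude."""
    k = max(int(np.ceil(np.log2(max(np.mean(np.abs(res)), 1.0)))), 0)
    u = np.where(res >= 0, 2 * res, -2 * res - 1)      # zigzag map to non-negative
    return int(np.sum((u >> k) + 1 + k)), k

frame = np.round(8000 * np.sin(np.arange(1152) * 0.03)).astype(np.int64)
for order, e in enumerate(polynomial_residuals(frame)):
    bits, k = rice_cost_bits(e)
    print(f"order {order}: k={k}, {bits / frame.size:.2f} bits/sample")
# The encoder would keep the predictor order with the lowest cost for this frame.
```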

181 citations


Journal ArticleDOI
TL;DR: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling, and the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
Abstract: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
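The paper derives the optimal lifting predictors analytically in the general n-dimensional case; the 1-D least-squares version below only illustrates what an "optimal predict step" means for row-column sampling. The two-tap structure, the synthetic test row, and the function name are assumptions made for the example.

```python
import numpy as np

def optimal_predict_weights(signal):
    """Least-squares weights for predicting each odd sample from its two
    neighbouring even samples in a lifting 'predict' step."""
    even, odd = signal[0::2], signal[1::2]
    n = min(even.size - 1, odd.size)
    A = np.stack([even[:n], even[1:n + 1]], axis=1).astype(np.float64)
    w, *_ = np.linalg.lstsq(A, odd[:n].astype(np.float64), rcond=None)
    return w, A, odd[:n]

rng = np.random.default_rng(0)
row = np.cumsum(rng.integers(-3, 4, size=512)).astype(np.float64)  # smooth-ish test row
w, A, target = optimal_predict_weights(row)
detail = target - A @ w                   # high-pass (detail) output of the predict step
print("weights:", np.round(w, 3),
      " detail variance:", round(float(detail.var()), 2),
      " odd-sample variance:", round(float(target.var()), 2))
# For smooth data the weights come out near [0.5, 0.5], i.e. the familiar 5/3 predictor.
```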

145 citations


Journal ArticleDOI
01 Feb 2001
TL;DR: This approach simply re-applies JPEG to the shifted versions of the already-compressed image, and forms an average, which offers better performance than other known methods, including those based on nonlinear filtering, POCS, and redundant wavelets.
Abstract: A novel method is proposed for post-processing of JPEG-encoded images, in order to reduce coding artifacts and enhance visual quality. Our method simply re-applies JPEG to the shifted versions of the already-compressed image, and forms an average. This approach, despite its simplicity, offers better performance than other known methods, including those based on nonlinear filtering, POCS, and redundant wavelets.
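The post-processing idea is simple enough to sketch directly: decode the JPEG image, re-compress shifted copies of it so that the 8x8 block boundaries fall in different places, shift the decoded results back, and average. The quality factor and the set of shifts below are arbitrary choices of mine, and Pillow stands in for whatever JPEG codec is at hand.

```python
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(gray_u8, quality):
    buf = io.BytesIO()
    Image.fromarray(gray_u8).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf).convert("L"), dtype=np.float64)

def shift_and_average(decoded_u8, quality=30, shifts=(0, 2, 4, 6)):
    """Re-apply JPEG to shifted copies of an already-decoded grayscale image and
    average the shifted-back results to attenuate blocking artifacts."""
    acc = np.zeros(decoded_u8.shape, dtype=np.float64)
    for dy in shifts:
        for dx in shifts:
            shifted = np.roll(np.roll(decoded_u8, dy, axis=0), dx, axis=1)
            rec = jpeg_roundtrip(shifted, quality)
            acc += np.roll(np.roll(rec, -dy, axis=0), -dx, axis=1)
    out = acc / (len(shifts) ** 2)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```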

101 citations


Proceedings ArticleDOI
10 Sep 2001
TL;DR: This paper presents the architecture and the VHDL design of a two-dimensional discrete cosine transform (2-D DCT) for JPEG image compression; this architecture is used as the core of a JPEG compressor and is the critical path in JPEG compression hardware.
Abstract: This paper presents the architecture and the VHDL design of a two-dimensional discrete cosine transform (2-D DCT) for JPEG image compression. This architecture is used as the core of a JPEG compressor and is the critical path in JPEG compression hardware. The 2-D DCT calculation is made using the 2-D DCT separability property, such that the whole architecture is divided into two 1-D DCT calculations by using a transpose buffer. These parts are described in this paper, with an architectural discussion and the VHDL synthesis results as well. The 2-D DCT architecture uses 4,792 logic cells of one Altera Flex10kE FPGA and reaches an operating frequency of 122 MHz. One input block with 8×8 elements of 8 bits each is processed in 252 µs and the pipeline latency is 160 clock cycles.
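The separability property the hardware exploits is easy to check in software: a 2-D DCT is one pass of 1-D DCTs over the rows, a transpose (the role of the transpose buffer), and a second pass of 1-D DCTs. The sketch uses SciPy's floating-point DCT purely to verify the decomposition; it says nothing about the fixed-point arithmetic of the FPGA design.

```python
import numpy as np
from scipy.fft import dct, dctn

block = np.random.default_rng(7).integers(0, 256, size=(8, 8)).astype(np.float64)

row_pass = dct(block, axis=1, norm="ortho")          # first 1-D DCT stage (rows)
two_pass = dct(row_pass.T, axis=1, norm="ortho").T   # transpose buffer + second stage

assert np.allclose(two_pass, dctn(block, norm="ortho"))   # same as the direct 2-D DCT
```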

80 citations


Journal Article
TL;DR: Wavelet methods have been shown to have no significant differences in diagnostic accuracy for compression ratios of up to 30:1, and the wavelet algorithm was found to have generally lower average error metrics and higher peak signal-to-noise ratios.
Abstract: Image compression is fundamental to the efficient and cost-effective use of digital medical imaging technology and applications. Wavelet transform techniques currently provide the most promising approach to high-quality image compression, which is essential for teleradiology and Picture Archiving and Communication Systems (PACS). In this study wavelet compression was applied to compress and decompress a digitized chest x-ray image at various compression ratios. The Wavelet Compression Engine (standard edition 2.5) was used in this study. This was then compared with the formal compression standard of the Joint Photographic Experts Group (JPEG), using JPEG Wizard (standard edition 1.3.7). Currently there is no standard set of criteria for the clinical acceptability of a compression ratio. Thus, histogram analysis, maximum absolute error (MAE), mean square error (MSE), root mean square error (RMSE), signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR) were used as a set of criteria to determine the ‘acceptability’ of image compression. The wavelet algorithm was found to have generally lower average error metrics and higher peak signal-to-noise ratios. Wavelet methods have been shown to have no significant differences in diagnostic accuracy for compression ratios of up to 30:1. Visual comparison was also made between the original image and the compressed image to ascertain whether there is any significant image degradation. Using the wavelet algorithm, a very high compression ratio of up to 600:1 was achieved.
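The quality criteria listed in the study are all one-liners once the original and decompressed images are available as arrays; the sketch below computes them for 8-bit images (note that MAE here means maximum absolute error, following the abstract). The function name and the 8-bit peak value are the only assumptions.

```python
import numpy as np

def quality_metrics(original, decompressed, peak=255.0):
    """MAE (maximum absolute error), MSE, RMSE, SNR and PSNR for 8-bit images."""
    o = original.astype(np.float64)
    d = decompressed.astype(np.float64)
    err = o - d
    mse = float(np.mean(err ** 2))
    return {
        "MAE": float(np.max(np.abs(err))),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "SNR_dB": float("inf") if mse == 0 else 10 * np.log10(np.mean(o ** 2) / mse),
        "PSNR_dB": float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse),
    }
```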

47 citations



Proceedings ArticleDOI
07 Oct 2001
TL;DR: This paper analyzes the impact of histogram sparseness on three state-of-the-art lossless image compression techniques: JPEG-LS, CALIC and lossless JPEG-2000, and proposes a simple procedure for on-line histogram packing which achieves nearly the same improvement as off-line histogram packing.
Abstract: Most of the image compression techniques currently available were designed mainly with the aim of compressing continuous-tone natural images. However, if this assumption is not verified, such as in the case of histogram sparseness, a degradation in compression performance may occur. In this paper, we analyze the impact of histogram sparseness on three state-of-the-art lossless image compression techniques: JPEG-LS, CALIC and lossless JPEG-2000. Moreover, we propose a simple procedure for on-line histogram packing, which achieves nearly the same improvement as off-line histogram packing. Results of its effectiveness when associated with JPEG-LS are presented.
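Off-line histogram packing itself is a two-line transform: map the sparse set of intensities that actually occur onto consecutive integers, code the packed image with JPEG-LS (or any lossless coder), and send the level table so the decoder can invert the mapping. The paper's contribution is an on-line variant that avoids the extra pass; the sketch below is only the off-line baseline it is compared against.

```python
import numpy as np

def pack_histogram(img):
    """Replace each pixel by the rank of its intensity among the used levels."""
    levels, packed = np.unique(img, return_inverse=True)
    return packed.reshape(img.shape).astype(img.dtype), levels   # levels go as side info

def unpack_histogram(packed, levels):
    return levels[packed]

sparse = (np.random.default_rng(0).integers(0, 16, size=(4, 4)) * 17).astype(np.uint8)
packed, levels = pack_histogram(sparse)           # packed values are 0..len(levels)-1
assert np.array_equal(unpack_histogram(packed, levels), sparse)
```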

33 citations


Proceedings ArticleDOI
02 Sep 2001
TL;DR: An investigation of unequal error protection methods applied to JPEG image transmission using turbo codes is presented, and simulation results are given to demonstrate how the UEP schemes outperform the equal error protection (EEP) scheme in terms of bit error rate (BER) and peak signal-to-noise ratio (PSNR).
Abstract: An investigation of unequal error protection (UEP) methods applied to JPEG image transmission using turbo codes is presented. The JPEG image is partitioned into two groups, i.e., DC components and AC components, according to their respective sensitivity to channel noise. The highly sensitive DC components are better protected with a lower coding rate, while the less sensitive AC components use a higher coding rate. Simulation results are given to demonstrate how the UEP schemes outperform the equal error protection (EEP) scheme in terms of bit error rate (BER) and peak signal-to-noise ratio (PSNR).

32 citations


Proceedings ArticleDOI
03 Oct 2001
TL;DR: A spread-spectrum-based watermarking technique in the framework of the JPEG 2000 still image compression that exhibits a high robustness with respect to attacks which may occur in many applications.
Abstract: There are several advantages to combining, at one end, the image coding and watermark insertion operations and, at the other end, the image decoding and watermark extraction. We describe a spread-spectrum-based watermarking technique in the framework of JPEG 2000 still image compression. We also show that, by re-using the JPEG 2000 wavelet domain for watermark embedding, the proposed watermarking scheme exhibits high robustness with respect to attacks which may occur in many applications.

DissertationDOI
01 Jan 2001
TL;DR: A lossless compression scheme for colour video that takes advantage of the spatial, spectral and temporal redundancy inherent in such data to ensure full exploitation of these redundancies for compression purposes is developed.
Abstract: We develop a lossless compression scheme for colour video that takes advantage of the spatial, spectral and temporal redundancy inherent in such data. We show that an adaptive scheme is vital to ensure full exploitation of these redundancies for compression purposes. The results of the proposed scheme are found to be favourable when compared to lossless image compression standards, and this performance is achieved while still permitting a computationally simple decoder.

Journal ArticleDOI
TL;DR: The study has shown that lossless compression can exceed the CR of 2:1 usually quoted, and that the range of clinically viable compression ratios can probably be extended by 50 to 100% when applying wavelet compression algorithms as compared to JPEG compression.
Abstract: Background: Lossless or lossy compression of coronary angiogram data can reduce the enormous amounts of data generated by coronary angiographic imaging. The recent International Study of Angiographic Data Compression (ISAC) assessed the clinical viability of lossy Joint Photographic Expert Group (JPEG) compression but was unable to resolve two related questions: (A) the performance of lossless modes of compression in coronary angiography and (B) the performance of newer lossy wavelet algorithms. This present study seeks to supply some of this information. Methods: The performance of several lossless image compression methods was measured in the same set of images as used in the ISAC study. For the assessment of the relative image quality of lossy JPEG and wavelet compression, the observers ranked the perceived image quality of computer-generated coronary angiograms compressed with wavelet compression relative to the same images with JPEG compression. This ranking allowed the matching of compression ratios for wavelet compression with the clinically viable compression ratios for the JPEG method as obtained in the ISAC study. Results: The best lossless compression scheme (LOCO-I) offered a mean compression ratio (CR) of 3.80:1. The quality of images compressed with the lossy wavelet-based method at CR = 10:1 and 20:1 was comparable to JPEG compression at CR = 6:1 and 10:1, respectively. Conclusion: The study has shown that lossless compression can exceed the CR of 2:1 usually quoted. For lossy compression, the range of clinically viable compression ratios can probably be extended by 50 to 100% when applying wavelet compression algorithms as compared to JPEG compression. These results can motivate a larger clinical study.

Proceedings ArticleDOI
07 May 2001
TL;DR: Experiments show that the proposed lossless coder (which needs about 2 bit/sample for pre-filtered signals) outperforms competing lossless coders, WaveZip, Shorten, LTAC and LPAC, in terms of compression ratios.
Abstract: A novel predictive lossless coding scheme is proposed. The prediction is based on a new weighted cascaded least mean squared (WCLMS) method. WCLMS is especially designed for music/speech signals. It can be used either in combination with psycho-acoustically pre-filtered signals to obtain perceptually lossless coding, or as a stand-alone lossless coder. Experiments on a database of moderate size and a variety of pre-filtered mono-signals show that the proposed lossless coder (which needs about 2 bit/sample for pre-filtered signals) outperforms competing lossless coders, WaveZip, Shorten, LTAC and LPAC, in terms of compression ratios.


Journal ArticleDOI
TL;DR: Simulations show that the method to compute JPEG standard progressive operation mode definition scripts using a quantization approach generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix.
Abstract: In this paper we propose a method for computing JPEG quantization matrices for a given mean-square error (MSE) or peak signal-to-noise ratio (PSNR). Then, we employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. Therefore, it is no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, reducing cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. Then, an image may be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image quality improvement during the decoding process. © 2001 SPIE and IS&T. [DOI: 10.1117/1.1344592]
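The model behind the method can be checked numerically: if each DCT coefficient is treated as a Laplacian source quantized with the corresponding step of the quantization matrix, and the DCT is orthonormal, the per-pixel MSE is just the average of the per-coefficient quantization MSEs. The Monte-Carlo sketch below predicts PSNR under exactly that model; it does not reproduce the paper's closed-form relationship, and the Laplacian scales would in practice be fitted from the image's own DCT coefficients (the toy profile here is my own).

```python
import numpy as np

def predicted_psnr(laplace_scales, qtable, n=100_000, seed=0):
    """PSNR predicted by the Laplacian-source / uniform-quantizer model.
    laplace_scales and qtable are 8x8 arrays (scale b = 1/lambda per frequency)."""
    rng = np.random.default_rng(seed)
    per_coeff_mse = []
    for b, q in zip(np.ravel(laplace_scales), np.ravel(qtable)):
        x = rng.laplace(0.0, b, n)
        per_coeff_mse.append(np.mean((x - q * np.round(x / q)) ** 2))
    mse = np.mean(per_coeff_mse)          # orthonormal DCT: same MSE per pixel
    return 10 * np.log10(255.0 ** 2 / mse)

# Toy decaying scale profile and a flat quantization table with step 16.
scales = np.maximum(40.0 / (1.0 + np.add.outer(np.arange(8), np.arange(8))), 1.0)
print(predicted_psnr(scales, np.full((8, 8), 16.0)))
```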

Proceedings ArticleDOI
07 Oct 2001
TL;DR: The design of a two-dimensional lossless DCT based on a 4-point two-dimensional lossless WHT is presented, and it is shown that the number of rounding operations it requires is smaller than that of the one-dimensional lossless DCT, so that the difference between its transform coefficients and those of the DCT becomes small.
Abstract: Since the lossless DCT is compatible with JPEG or MPEG, it is expected to play an important role in unified lossless/lossy image coding. However, there is a problem that the difference between the transform coefficients of the lossless DCT and those of the (lossy) DCT is not very small. We present the design of a two-dimensional lossless DCT based on a 4-point two-dimensional lossless WHT and show that its number of rounding operations becomes smaller than that of the one-dimensional lossless DCT; as a result, the difference between its transform coefficients and those of the DCT becomes small.
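The building block behind such lossless transforms is a reversible integer butterfly obtained by lifting with rounding; the classic 2-point case (the S-transform, an integer Haar/2-point WHT) shows why the rounding can be undone exactly. The paper's 4-point 2-D lossless WHT and the DCT built on it are not reproduced here; this is only the underlying principle.

```python
def s_transform(a, b):
    """Reversible integer 2-point transform: rounded-down average and difference."""
    low = (a + b) >> 1           # floor((a + b) / 2)
    high = a - b
    return low, high

def inverse_s_transform(low, high):
    a = low + ((high + 1) >> 1)  # the rounding loss is recovered from the difference
    b = a - high
    return a, b

for a in range(-4, 5):
    for b in range(-4, 5):
        assert inverse_s_transform(*s_transform(a, b)) == (a, b)   # exact for all integers
```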

Journal ArticleDOI
TL;DR: The results show that the inter-frame technique outperforms state-of-the-art intra-frame coders, i.e., CALIC and JPEG-LS; the improvement in compression ratio is significant in the case of CT data but is rather small in the case of MRI data.

Proceedings ArticleDOI
19 Jun 2001
TL;DR: In this article, a new algorithm is proposed for compression factor control (CF-CTRL) when the JPEG compression algorithm is used; it can be applied, for example, in digital still cameras (DSCs), where a minimum number of photos must be stored in a fixed-size memory.
Abstract: We propose a new algorithm for compression factor control (CF-CTRL) when the JPEG compression algorithm is used. It can be applied, for example, in digital still cameras (DSCs), where a minimum number of photos must be stored in a fixed-size memory, so each file size must be roughly constant. The compression algorithm most commonly used in DSCs is JPEG, which does not guarantee a fixed file size, so a compression factor control algorithm is required. The new method is based on an analysis of the Bayer pattern, the color pattern obtained from a CCD or CMOS sensor, from which it derives the JPEG quantization tables that ensure the target size of the JPEG file. The main strengths of this algorithm are its speed and low power consumption.
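The paper's point is to predict suitable quantization tables directly from the Bayer data so the usual trial-and-error loop can be skipped. For contrast, the sketch below is that trial-and-error baseline: a binary search on Pillow's JPEG quality factor until the encoded size fits the target. It is not the proposed algorithm, only the procedure it is meant to replace.

```python
import io
from PIL import Image

def quality_for_target_size(img, target_bytes, lo=1, hi=95):
    """Highest JPEG quality whose encoded size stays within target_bytes."""
    best = lo
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= target_bytes:
            best, lo = q, q + 1      # fits: try a higher quality
        else:
            hi = q - 1               # too big: lower the quality
    return best

# Usage (assuming `photo` is a PIL image): q = quality_for_target_size(photo, 64 * 1024)
```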

Proceedings ArticleDOI
25 Oct 2001
TL;DR: Experimental results show that the LTA yields results comparable to the Burrows-Wheeler algorithm and outperforms Gzip and the Shorten waveform coder for near-lossless ECG compression; for lossless ECG compression it yields better compression than all the other techniques.
Abstract: We present a linear transformation algorithm (LTA), which is based on a new transformation, the linear order transformation (LOT). Experimental results show that the LTA yields results comparable to the Burrows-Wheeler algorithm (BWA) and outperforms Gzip and the Shorten waveform coder for near-lossless ECG compression; for lossless ECG compression it yields better compression than all the other techniques.

Proceedings ArticleDOI
06 May 2001
TL;DR: An analysis-based approach is presented for adaptively choosing the most "appropriate" integer transform to match the image being coded, in the context of lossless multimedia image compression, thereby further improving the performance of such a coder.
Abstract: Integer-to-integer wavelet transforms for lossless image coding are useful in many multimedia applications. However, the use of a fixed integer transform in such lossless coders prohibits them from delivering the best coding performance for different types of images. Recently we have shown that, for improved performance, the most "appropriate" integer transform should be chosen adaptively to match the image being coded. In this paper, we present an analysis-based approach to accomplish this in the context of lossless multimedia image compression, thereby further improving the performance of such a coder.
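A concrete member of the integer-to-integer family the paper chooses among is the reversible 5/3 lifting pair; the sketch below implements it for even-length 1-D signals with simple edge replication (not the symmetric extension a standard codec would use) and checks that the inverse recovers the input exactly. The adaptive selection mechanism that is the paper's actual contribution is not shown.

```python
import numpy as np

def forward_53(x):
    """Reversible 5/3-style lifting: predict (high-pass d), then update (low-pass s)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]                  # assumes len(x) is even
    e_next = np.append(even[1:], even[-1])        # even[n+1], edge-replicated
    d = odd - (even + e_next) // 2                # predict step
    d_prev = np.insert(d[:-1], 0, d[0])           # d[n-1], edge-replicated
    s = even + (d_prev + d + 2) // 4              # update step
    return s, d

def inverse_53(s, d):
    d_prev = np.insert(d[:-1], 0, d[0])
    even = s - (d_prev + d + 2) // 4              # undo update
    e_next = np.append(even[1:], even[-1])
    odd = d + (even + e_next) // 2                # undo predict
    x = np.empty(2 * s.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.default_rng(2).integers(0, 256, size=64)
assert np.array_equal(inverse_53(*forward_53(x)), x)   # integer-to-integer, exactly invertible
```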

Book ChapterDOI
TL;DR: An analytical model and a numerical analysis of the sub-sampling, compression and re-scaling process that make explicit the possible quality/compression trade-offs, showing that the image auto-correlation can provide good estimates for establishing the down-sampling factor that achieves optimal performance.
Abstract: The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, as we go to low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It appears that at low bit rates a down-scaled image, when JPEG compressed, visually beats the high-resolution image compressed via JPEG to the same number of bits. Motivated by this idea, we show how down-sampling an image to a low resolution, then using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the sub-sampling, compression and re-scaling process that make explicit the possible quality/compression trade-offs. We show that the image auto-correlation can provide good estimates for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to get the best possible recovered image in terms of PSNR.
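The scheme itself is a three-step pipeline that is easy to prototype: shrink the image, JPEG-encode the small version, then interpolate the decoded result back to full size and compare PSNR and byte count against direct JPEG coding. In the sketch below the down-sampling factor, quality factors and Pillow resampling filters are arbitrary choices of mine; the paper chooses the factor from the image auto-correlation and matches the two pipelines to the same bit budget.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def direct_jpeg(img, quality):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell(), psnr(np.asarray(img), np.asarray(Image.open(buf).convert("L")))

def downsample_jpeg(img, factor, quality):
    """Down-sample, JPEG at low resolution, interpolate back, measure against the original."""
    small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    rec = Image.open(buf).convert("L").resize(img.size, Image.BICUBIC)
    return buf.tell(), psnr(np.asarray(img), np.asarray(rec))

# Usage with a grayscale ("L") PIL image `img`:
#   print(direct_jpeg(img, quality=10))          # (bytes, PSNR) at an aggressive quality
#   print(downsample_jpeg(img, 2, quality=40))   # often better PSNR for a similar byte count
```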

Patent
16 Jul 2001
TL;DR: In this paper, a coset analyzer is used for analyzing time-varying error correction codes in data communications and a lossless data sequence compressor and decompressor are also discussed.
Abstract: A coset analyzer is used for analyzing time-varying error correction codes in data communications. The time-varying error correction code has cosets, and each coset has a coset leader and a syndrome. The analyzer comprises a coset representation unit for representing a coset of the code as a time-varying error trellis and an error trellis searcher for searching the error trellis. Each member of the coset corresponds to a path through the error trellis. A lossless data sequence compressor and decompressor are also discussed.

Proceedings ArticleDOI
02 Sep 2001
TL;DR: Views and comparisons on these issues and up-to-date research activities in this regard are given, with experimental results showing the superiority of JPEG 2000 compared to JPEG-DCT.
Abstract: JPEG 2000 is the new ISO/ITU-T standard for still image coding, which was released in 2000. This paper puts the JPEG 2000 concept into perspective along with the watermarking concept. It also provides a comparative evaluation study of JPEG 2000 and JPEG-DCT. The principles behind each algorithm of JPEG 2000 and watermarking are briefly described. An outlook on the future of digital image watermarking within JPEG 2000 is discussed. The paper gives views and comparisons on these issues and up-to-date research activities in this regard, with some experimental results. Our experimental results have shown the superiority of JPEG 2000 compared to JPEG-DCT.

Journal ArticleDOI
TL;DR: A low-cost, low-complexity rate control algorithm for lossless and near-lossless image compression, based on the non-linear prediction adopted by JPEG-LS, that provides useful potential for further extending the application of this new standard.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: This work exploits special geometry using the lattice reduction algorithm used in cryptography to estimate the compression color space of a color image that was quantized in some hidden color space during previous JPEG compression.
Abstract: Given a color image that was quantized in some hidden color space (termed compression color space) during previous JPEG compression, we aim to estimate this unknown compression color space from the image. This knowledge is potentially useful for color image enhancement and JPEG re-compression. JPEG quantizes the discrete cosine transform (DCT) coefficients of each color plane independently during compression. Consequently, the DCT coefficients of such a color image conform to a lattice. We exploit this special geometry using the lattice reduction algorithm used in cryptography to estimate the compression color space. Simulations verify that the proposed algorithm yields accurate compression space estimates.

Proceedings ArticleDOI
27 Mar 2001
TL;DR: Experiments show that the proposed lossless coder outperforms competing lossless coders, such as ppmz, bzip2, Shorten, and LPAC, in terms of compression ratios.
Abstract: A novel predictive lossless coding scheme is proposed. The prediction is based on a new weighted cascaded least mean squared (WCLMS) method. To obtain both a high compression ratio and a very low encoding and decoding delay, the residuals from the prediction are encoded using either a variant of adaptive Huffman coding or a version of adaptive arithmetic coding. WCLMS is especially designed for music/speech signals. It can be used either in combination with psycho-acoustically pre-filtered signals to obtain perceptually lossless coding, or as a stand-alone lossless coder. Experiments on a database of moderate size and a variety of pre-filtered mono-signals show that the proposed lossless coder (which needs about 2 bit/sample for pre-filtered signals) outperforms competing lossless coders, such as ppmz, bzip2, Shorten, and LPAC, in terms of compression ratios. The combination of WCLMS with either of the adaptive coding schemes is also shown to achieve better compression ratios and lower delay than an earlier scheme combining WCLMS with Huffman coding over blocks of 4096 samples.
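The prediction stage can be imitated with ordinary adaptive filters: run a few NLMS predictors of different orders in parallel, weight their outputs by how well each has done recently, and entropy-code the combined residual. The sketch below is such a weighted-combination stand-in, not the exact cascading and weighting rule of WCLMS; the orders, step size and forgetting factor are all assumptions.

```python
import numpy as np

def combined_lms_residual(x, orders=(16, 4), mu=0.5, beta=0.99):
    """Residual of a weighted combination of NLMS predictors of different orders."""
    x = np.asarray(x, dtype=np.float64)
    weights_per_filter = [np.zeros(p) for p in orders]
    recent_loss = np.ones(len(orders))
    residual = np.zeros_like(x)
    for n in range(max(orders), x.size):
        preds = [w @ x[n - p:n][::-1] for w, p in zip(weights_per_filter, orders)]
        mix = 1.0 / recent_loss
        mix /= mix.sum()                                   # favour the lately-better filter
        residual[n] = x[n] - float(np.dot(mix, preds))
        for i, (w, p) in enumerate(zip(weights_per_filter, orders)):
            ctx = x[n - p:n][::-1]
            e = x[n] - preds[i]
            w += mu * e * ctx / (ctx @ ctx + 1e-8)          # NLMS update
            recent_loss[i] = beta * recent_loss[i] + (1 - beta) * e * e
    return residual                                         # this is what gets entropy coded

tone = 2000 * np.sin(np.arange(4000) * 0.05) + np.random.default_rng(0).normal(0, 20, 4000)
res = combined_lms_residual(tone)
print(int(tone.var()), int(res.var()))                      # residual variance is far smaller
```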

Journal ArticleDOI
TL;DR: This paper proposes a procedure for the design of separable 2-D synthesis filters that minimize the reconstruction error power for transform coders and shows that the proposed decoding method gives some gain with respect to the usual decoder in most cases.
Abstract: Transform coding is a technique used worldwide for image coding, and JPEG has become the most common tool for image compression. In a JPEG decoder, the quantized transform coefficient blocks are usually processed using the inverse discrete cosine transform (DCT) in order to reconstruct an approximation of the original image. The direct and inverse DCT pair can be arranged in the form of a perfect reconstruction filter bank, and it can be shown that, in the presence of quantization of the transform coefficients, the perfect reconstruction synthesis is not the best choice. In this paper, we propose a procedure for the design of separable 2-D synthesis filters that minimize the reconstruction error power for transform coders. The procedure is used to design a family of filters which are used in the decoder instead of the inverse DCT. The appropriate reconstruction filters are selected on the basis of the standard quantization information provided in the JPEG bit stream. We show that the proposed decoding method gives some gain with respect to the usual decoder in most cases. Moreover, it only makes use of the standard information provided by a JPEG bit stream.

01 Jan 2001
TL;DR: This paper presents the architecture and design of a JPEG compressor in hardware, divided in four major parts: color space converter and downsampler, 2-D DCT module, quantization and entropy coding, and the results of the VHDL mapping into Altera Flex 10K FPGAs.
Abstract: This paper presents the architecture and design of a JPEG compressor in hardware. The system is a functional unit of a compressor chip, divided in four major parts: color space converter and downsampler, 2-D DCT module, quantization and entropy coding. Architectures for these four parts were designed and described in VHDL. The results of the VHDL mapping into Altera Flex 10K FPGAs are also herein presented.

Proceedings ArticleDOI
07 Oct 2001
TL;DR: If modified quantization tables are substituted for the encoding quantization tables in the JPEG compressed data stream, an unchanged JPEG decoder can restore the dynamic range and increase image contrast.
Abstract: The JPEG standard was designed for compression of photographic digital images, but it also works well on digitized documents with only a limited number of shades of gray. For documents in which compression and legibility are more important than preserving all of the intermediate values, preprocessing the images to reduce their dynamic range can enhance JPEG compression, as it selectively discards some noise. If modified quantization tables are substituted for the encoding quantization tables in the JPEG compressed data stream, an unchanged JPEG decoder can restore the dynamic range and increase image contrast. Graphs of the compressed size in bytes vs. dynamic range after the application of different dynamic-range-reduction techniques are given for both Huffman coding and arithmetic coding. Examples of the reconstructed front and back sides of a check with normal processing and enhanced compression are shown.