Showing papers on "Lossless JPEG published in 1999"


Book
01 Sep 1999
TL;DR: This book discusses JPEG Compression Modes, Huffman Coding in JPEG, Color Representation in PNG, the Representation of Images, and more.
Abstract: Preface. Acknowledgments. 1. Introduction. The Representation of Images. Vector and Bitmap Graphics. Color Models. True Color versus Palette. Compression. Byte and Bit Ordering. Color Quantization. A Common Image Format. Conclusion. 2. Windows BMP. Data Ordering. File Structure. Compression. Conclusion. 3. XBM. File Format. Reading and Writing XBM Files. Conclusion. 4. Introduction to JPEG. JPEG Compression Modes. What Part of JPEG Will Be Covered in This Book? What are JPEG Files? SPIFF File Format. Byte Ordering. Sampling Frequency. JPEG Operation. Interleaved and Noninterleaved Scans. Conclusion. 5. JPEG File Format. Markers. Compressed Data. Marker Types. JFIF Format. Conclusion. 6. JPEG Huffman Coding. Usage Frequencies. Huffman Coding Example. Huffman Coding Using Code Lengths. Huffman Coding in JPEG. Limiting Code Lengths. Decoding Huffman Codes. Conclusion. 7. The Discrete Cosine Transform. DCT in One Dimension. DCT in Two Dimensions. Basic Matrix Operations. Using the 2-D Forward DCT. Quantization. Zigzag Ordering. Conclusion. 8. Decoding Sequential-Mode JPEG Images. MCU Dimensions. Decoding Data Units. Decoding Example. Processing DCT Coefficients. Up-Sampling. Restart Marker Processing. Overview of JPEG Decoding. Conclusion. 9. Creating Sequential JPEG Files. Compression Parameters. Output File Structure. Doing the Encoding. Down-Sampling. Interleaving. Data Unit Encoding. Huffman Table Generation. Conclusion. 10. Optimizing the DCT. Factoring the DCT Matrix. Scaled Integer Arithmetic. Merging Quantization and the DCT. Conclusion. 11. Progressive JPEG. Component Division in Progressive JPEG. Processing Progressive JPEG Files. Processing Progressive Scans. MCUs in Progressive Scans. Huffman Tables in Progressive Scans. Data Unit Decoding. Preparing to Create Progressive JPEG Files. Encoding Progressive Scans. Huffman Coding. Data Unit Encoding. Conclusion. 12. GIF. Byte Ordering. File Structure. Interlacing. Compressed Data Format. Animated GIF. Legal Problems. Uncompressed GIF. Conclusion. 13. PNG. History. Byte Ordering. File Format. File Organization. Color Representation in PNG. Device-Independent Color. Gamma. Interlacing. Critical Chunks. Noncritical Chunks. Conclusion. 14. Decompressing PNG Image Data. Decompressing the Image Data. Huffman Coding in Deflate. Compressed Data Format. Compressed Data Blocks. Writing the Decompressed Data to the Image. Conclusion. 15. Creating PNG Files. Overview. Deflate Compression Process. Huffman Table Generation. Filtering. Conclusion. Glossary. Bibliography. Index.
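
The chapters on Huffman coding, the DCT, quantization, and zigzag ordering describe the heart of a baseline JPEG encoder. A minimal Python/numpy sketch of that block pipeline follows; it is an illustration of the standard steps the book covers (the quantization table is the example luminance table from the JPEG specification's informative annex), not code taken from the book.

import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C[k, x] = a(k) * cos((2x + 1) * k * pi / (2n))
    c = np.array([[np.cos((2 * x + 1) * k * np.pi / (2 * n)) for x in range(n)]
                  for k in range(n)])
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

def forward_dct_2d(block):
    # Separable 2-D DCT as a pair of matrix products: C * B * C^T
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

# Zigzag scan order of an 8x8 block (anti-diagonals, alternating direction).
ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                key=lambda rc: (rc[0] + rc[1],
                                rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

# Example luminance quantization table from the JPEG specification (informative annex).
QTABLE = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128   # level shift
coeffs = forward_dct_2d(block)
quantized = np.round(coeffs / QTABLE).astype(int)
zigzagged = [quantized[r, c] for r, c in ZIGZAG]   # ready for run-length/Huffman coding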

190 citations


Journal ArticleDOI
01 Sep 1999
TL;DR: This work investigates a near lossless compression technique that gives quantitative bounds on the errors introduced during compression and finds that such a technique gives significantly higher compression ratios than lossless compression.
Abstract: We study compression techniques for electroencephalograph (EEG) signals. A variety of lossless compression techniques, including compress, gzip, bzip, shorten, and several predictive coding methods, are investigated and compared. The methods range from simple dictionary-based approaches to more sophisticated context modeling techniques. It is seen that compression ratios obtained by lossless compression are limited even with sophisticated context-based bias cancellation and activity-based conditional coding. Though lossy compression can yield significantly higher compression ratios while potentially preserving diagnostic accuracy, it is not usually employed due to legal concerns. Hence, we investigate a near lossless compression technique that gives quantitative bounds on the errors introduced during compression. It is observed that such a technique gives significantly higher compression ratios (up to 3-bit/sample saving with less than 1% error). Compression results are reported for EEGs recorded under various clinical conditions.
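
The near-lossless technique described above bounds the per-sample error introduced during compression. A minimal sketch of how such a bound is typically enforced in predictive coding, assuming a simple first-order predictor and uniform residual quantization with step 2*delta+1 (the paper's actual predictors and entropy coder are not reproduced here):

import numpy as np

def near_lossless_encode(signal, delta):
    """First-order predictive coder; quantizing residuals with step 2*delta+1
    guarantees |reconstruction - original| <= delta at every sample."""
    step = 2 * delta + 1
    labels = np.empty(len(signal), dtype=int)
    recon = np.empty(len(signal), dtype=int)
    prev = 0                                  # predictor state shared with the decoder
    for i, x in enumerate(signal):
        residual = int(x) - prev
        q = int(np.round(residual / step))    # quantization label to be entropy coded
        labels[i] = q
        recon[i] = prev + q * step            # decoder-side reconstruction
        prev = recon[i]                       # predict from reconstructed values (no drift)
    return labels, recon

eeg = np.random.randint(-200, 200, 1024)      # stand-in for an EEG trace
labels, recon = near_lossless_encode(eeg, delta=3)
assert np.max(np.abs(recon - eeg)) <= 3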

75 citations


Journal ArticleDOI
TL;DR: In a method and a device for the transmission of S+P transform coded digitized images, a mask is calculated by means of which a region of interest (ROI) can be transmitted losslessly, so that the ROI is transmitted and received without loss while a good compression ratio is maintained for the image as a whole.

73 citations


Journal ArticleDOI
TL;DR: A wavelet-based compression scheme that is able to operate in the lossless mode and implements a new way of coding the wavelet coefficients that is more effective than classical zerotree coding.
Abstract: The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding of the wavelet coefficients that is more effective than the classical zerotree coding. The experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder, combined with the integer wavelet transform, by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communications systems (PACS's).
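
The lossless mode of such wavelet coders rests on an integer-to-integer (reversible) wavelet transform. Below is a minimal 1-D sketch of one level of reversible lifting, using the well-known 5/3 (LeGall) integer filters and periodic boundary extension as stand-ins; the paper's own filter choice and 2-D decomposition are not reproduced here.

import numpy as np

def int_5_3_forward(x):
    """One level of the reversible 5/3 (LeGall) wavelet via integer lifting."""
    x = np.asarray(x, dtype=int)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: high-pass (detail) coefficients
    odd -= (even + np.roll(even, -1)) // 2
    # Update step: low-pass (approximation) coefficients
    even += (odd + np.roll(odd, 1) + 2) // 4
    return even, odd

def int_5_3_inverse(even, odd):
    even = even - (odd + np.roll(odd, 1) + 2) // 4   # undo update
    odd = odd + (even + np.roll(even, -1)) // 2      # undo predict
    x = np.empty(even.size + odd.size, dtype=int)
    x[0::2], x[1::2] = even, odd
    return x

row = np.random.randint(0, 4096, 512)          # e.g. one row of a 12-bit angiogram
lo, hi = int_5_3_forward(row)
assert np.array_equal(int_5_3_inverse(lo, hi), row)   # exactly reversible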

68 citations


Journal ArticleDOI
TL;DR: The quantization module implements a new technique for the coding of the wavelet coefficients that is more effective than the classical zerotree coding, and produces a losslessly compressed embedded data stream that supports progressive refinement of the decompressed images.
Abstract: Lossless image compression with progressive transmission capabilities plays a key role in measurement applications, requiring quantitative analysis and involving large sets of images. This work proposes a wavelet‐based compression scheme that is able to operate in the lossless mode. The quantization module implements a new technique for the coding of the wavelet coefficients that is more effective than the classical zerotree coding. The experimental results obtained on a set of multimodal medical images show that the proposed algorithm outperforms the embedded zerotree coder combined with the integer wavelet transform by 0.28 bpp, the set‐partitioning coder by 0.1 bpp, and the lossless JPEG coder by 0.6 bpp. The scheme produces a losslessly compressed embedded data stream; hence, it supports progressive refinement of the decompressed images. Therefore, it is a good candidate for telematics applications requiring fast user interaction with the image data, retaining the option of lossless transmission and archiving of the images. © 1999 John Wiley & Sons, Inc. Int J Imaging Syst Technol 10: 76–85, 1999

67 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: LOCO-I/JPEG-LS attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding, within a few percentage points of the best available compression ratios.
Abstract: LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. The algorithm was conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit based on Golomb codes. The JPEG-LS standard evolved after successive refinements of the core algorithm, and a description of its design principles and main algorithmic components is presented in this paper. LOCO-I/JPEG-LS attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level.
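
Two building blocks named in the abstract, the fixed predictor of JPEG-LS (the median edge detector) and Golomb coding of mapped prediction errors, can be sketched compactly; context modeling, bias cancellation, and the run mode of the standard are omitted here.

def med_predict(a, b, c):
    """JPEG-LS fixed predictor: a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def map_residual(e):
    """Fold signed prediction errors onto non-negative integers (0, -1, 1, -2, ...)."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(value, k):
    """Golomb-Rice code with parameter 2**k: unary quotient, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b') if k else '1' * q + '0'

# Toy example: predict a pixel from its causal neighbours, then code the error.
a, b, c, x = 102, 110, 100, 108
err = x - med_predict(a, b, c)
print(golomb_rice_encode(map_residual(err), k=2))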

53 citations



Journal ArticleDOI
TL;DR: A new technique is introduced for the implementation of context based adaptive arithmetic entropy coding, based on the prediction of the value of the current transform coefficient, using a weighted least squares method, in order to achieve appropriate context selection for arithmetic coding.
Abstract: Significant progress has recently been made in lossless image compression using discrete wavelet transforms. The overall performance of these schemes may be further improved by properly designing efficient entropy coders. A new technique is introduced for the implementation of context based adaptive arithmetic entropy coding. This technique is based on the prediction of the value of the current transform coefficient, using a weighted least squares method, in order to achieve appropriate context selection for arithmetic coding. Experimental results illustrate and evaluate the performance of the proposed technique.

37 citations



Journal ArticleDOI
TL;DR: Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that the APMAR method is suitable for reversible medical image compression.
Abstract: An adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the multiplicative autoregressive (MAR) model with Huffman coding. Comparisons with other methods [MAR, space-varying MAR (SMAR) and adaptive JPEG (AJPEG) models] on a series of test images show that our method is suitable for reversible medical image compression.

33 citations


Journal ArticleDOI
TL;DR: This study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG, and verified other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG.
Abstract: This presentation focuses on the quantitative comparison of three lossy compression methods applied to a variety of 12-bit medical images. One Joint Photographic Experts Group (JPEG) and two wavelet algorithms were used on a population of 60 images. The medical images were obtained in Digital Imaging and Communications in Medicine (DICOM) file format and ranged in matrix size from 256 × 256 (magnetic resonance [MR]) to 2,560 × 2,048 (computed radiography [CR], digital radiography [DR], etc). The algorithms were applied to each image at multiple levels of compression such that comparable compressed file sizes were obtained at each level. Each compressed image was then decompressed and quantitative analysis was performed to compare each compressed-then-decompressed image with its corresponding original image. The statistical measures computed were sum of absolute differences, sum of squared differences, and peak signal-to-noise ratio (PSNR). Our results verify other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG. The DICOM standard does not yet include wavelet as a recognized lossy compression standard. For implementers and users to adopt wavelet technology as part of their image management and communication installations, there have to be significant differences in quality and compressibility compared with JPEG to justify expensive software licenses and the introduction of proprietary elements in the standard. Our study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG.
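
The three statistical measures used in the study are straightforward to compute; a short sketch for 12-bit image data (peak value 4095):

import numpy as np

def fidelity_metrics(original, decompressed, bits=12):
    original = original.astype(np.int64)
    decompressed = decompressed.astype(np.int64)
    diff = original - decompressed
    sad = np.abs(diff).sum()                 # sum of absolute differences
    ssd = (diff ** 2).sum()                  # sum of squared differences
    mse = ssd / diff.size
    peak = (1 << bits) - 1                   # 4095 for 12-bit DICOM data
    psnr = float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    return sad, ssd, psnr

a = np.random.randint(0, 4096, (256, 256))
b = np.clip(a + np.random.randint(-3, 4, a.shape), 0, 4095)
print(fidelity_metrics(a, b))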

Proceedings ArticleDOI
01 Dec 1999
TL;DR: Effective coding is realized by modifying the prediction errors obtained from each plane, with fast and effective context modeling in entropy coding based on the prediction errors of previously coded pixels around the pixel being encoded.
Abstract: In the present paper we propose lossless compression for RGB color still images, which consist of three planes, whereas most proposed lossless compression techniques target gray-scale images. In the proposed method, we realize effective coding by modifying the prediction errors obtained from each plane, exploiting color-space correlation in the modification. Fast and effective context modeling in entropy coding is also realized using the prediction errors of previously coded pixels around the pixel being encoded.

Journal ArticleDOI
TL;DR: This work presents a rate-distortion (RD) optimized JPEG compliant progressive encoder that produces a sequence of scans, ordered in terms of decreasing importance, and can achieve precise rate/distortion control.
Abstract: Among the many different modes of operations allowed in the current JPEG standard, the sequential and progressive modes are the most widely used. While the sequential JPEG mode yields essentially the same level of compression performance for most encoder implementations, the performance of progressive JPEG depends highly upon the designed encoder structure. This is due to the flexibility the standard leaves open in designing progressive JPEG encoders. In this work, a rate-distortion (RD) optimized JPEG compliant progressive encoder is presented that produces a sequence of scans, ordered in terms of decreasing importance. Our encoder outperforms an optimized sequential JPEG encoder in terms of compression efficiency, substantially at low and high bit rates. Moreover, unlike existing JPEG compliant encoders, our encoder can achieve precise rate/distortion control. Substantially better compression performance and precise rate control, provided by our progressive JPEG compliant encoding algorithm, are two highly desired features currently sought for the emerging JPEG-2000 standard.

Journal ArticleDOI
TL;DR: Experimental results suggest that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel when it is used with bilinear interpolation and either error diffusion or ordered dithering.
Abstract: We describe a procedure by which Joint Photographic Experts Group (JPEG) compression may be customized for gray-scale images that are to be compressed before they are scaled, halftoned, and printed. Our technique maintains 100% compatibility with the JPEG standard, and is applicable with all scaling and halftoning methods. The JPEG quantization table is designed using frequency-domain characteristics of the scaling and halftoning operations, as well as the frequency sensitivity of the human visual system. In addition, the Huffman tables are optimized for low-rate coding. Compression artifacts are significantly reduced because they are masked by the halftoning patterns, and pushed into frequency bands where the eye is less sensitive. We describe how the frequency-domain effects of scaling and halftoning may be measured, and how to account for those effects in an iterative design procedure for the JPEG quantization table. We also present experimental results suggesting that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel (with reference to the number of pixels in the original image) when it is used with bilinear interpolation and either error diffusion or ordered dithering. Based on these results, we believe that in terms of the achieved bit rate, the performance of our encoder is typically at least 20% better than that of a JPEG encoder using the suggested baseline tables.

Proceedings ArticleDOI
22 Aug 1999
TL;DR: A scheme which adaptively combines the LS-based prediction with a simpler prediction is proposed, and experiments show that the proposed algorithm can further improve the performance of the LS approach.
Abstract: The least squares (LS) approach, a powerful estimation tool, is incorporated into a predictive lossless image compression algorithm and proves very effective in reducing the prediction error. Based on the observation that the LS approach also has limitations in some situations, a scheme which adaptively combines LS-based prediction with a simpler prediction is proposed. Experiments show that the proposed algorithm can further improve the performance of the LS approach. The compression performance of the proposed technique is the same as that of TMW, the best published technique for lossless image coding, yet it requires much less computation than TMW.
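
A hedged sketch of the general idea of LS-based prediction: the weights of a linear predictor over a few causal neighbours are re-estimated by least squares within a causal training window around each pixel. The neighbour set, window size, and the absence of the adaptive switch to a simpler predictor are all simplifications of my own; this is not the authors' algorithm.

import numpy as np

CAUSAL_OFFSETS = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]   # N, W, NW, NE neighbours

def ls_predict(img, r, c, win=6):
    """Least-squares prediction of img[r, c] from its causal neighbours, with
    weights trained on causal samples above and to the left of the pixel."""
    rows, targets = [], []
    for dr in range(-win, 1):
        for dc in range(-win, win + 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) == (0, 0) or (dr == 0 and dc > 0):
                continue                                  # only causal training samples
            if rr < 1 or cc < 1 or cc >= img.shape[1] - 1:
                continue                                  # keep the neighbour set in bounds
            rows.append([img[rr + o[0], cc + o[1]] for o in CAUSAL_OFFSETS])
            targets.append(img[rr, cc])
    A, y = np.array(rows, dtype=float), np.array(targets, dtype=float)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)             # per-pixel LS weights
    current = np.array([img[r + o[0], c + o[1]] for o in CAUSAL_OFFSETS], dtype=float)
    return float(current @ w)

img = np.random.randint(0, 256, (64, 64))
print(ls_predict(img, 20, 20), img[20, 20])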

Journal ArticleDOI
TL;DR: The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bilevel images and is aimed at material containing half-toned images as a supplement to the specialized soft pattern matching techniques that work better for text.
Abstract: We present general and unified algorithms for lossy/lossless coding of bilevel images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template. For better compression, the more general free tree may be used. Loss may be introduced in a preprocess on the encoding side to increase compression. The primary algorithm is a rate-distortion controlled greedy flipping of pixels. Though being general, the algorithms are primarily aimed at material containing half-toned images as a supplement to the specialized soft pattern matching techniques that work better for text. Template based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in half-toned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bilevel images.
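
A sketch of template-based conditioning, the mechanism shared by JBIG and the proposed coder: a causal template of already-coded pixels forms a context index, and per-context counts estimate the conditional probabilities that would drive an arithmetic coder. The template below is illustrative (not the JBIG default or a free tree), and the arithmetic coder and rate-distortion-controlled pixel flipping are omitted.

import numpy as np

# A causal 10-pixel template, similar in spirit to JBIG templates (illustrative only).
TEMPLATE = [(-2, -1), (-2, 0), (-2, 1),
            (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
            (0, -2), (0, -1)]

def context_index(img, r, c):
    idx = 0
    for dr, dc in TEMPLATE:
        rr, cc = r + dr, c + dc
        bit = img[rr, cc] if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] else 0
        idx = (idx << 1) | int(bit)
    return idx

def conditional_probabilities(img):
    """Estimate P(pixel = 1 | context) with Laplace-smoothed counts."""
    counts = np.ones((1 << len(TEMPLATE), 2))      # add-one smoothing per context
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            counts[context_index(img, r, c), int(img[r, c])] += 1
    return counts[:, 1] / counts.sum(axis=1)

bilevel = (np.random.rand(64, 64) > 0.5).astype(np.uint8)   # stand-in for a halftone
p = conditional_probabilities(bilevel)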

Patent
18 Nov 1999
TL;DR: In this patent, picture data are compressed at a high compression ratio by calculating the prediction value from a nearby pixel covered by a color filter of the same color component as the pixel of interest.
Abstract: PROBLEM TO BE SOLVED: To compress picture data at a high compression ratio by calculating a prediction value using, as the nearby-pixel value, the pixel value of a pixel covered by a color filter of the same color component as the pixel of interest. SOLUTION: An encoding processor 10 digitizes the picture signal input from an image input device 1, compresses the picture data by JPEG lossless encoding using both DPCM encoding and entropy encoding, and stores the compressed picture data in a storage medium 2. When DPCM encoding/decoding picture data acquired with high accuracy by the input device 1, a temporary prediction value for the pixel of interest is calculated both from a prediction expression using the pixel value of an adjacent pixel covered by a color filter of a different color component and from a prediction expression using the pixel value of a pixel covered by a color filter of the same color component as the pixel of interest; the prediction expression that minimizes the prediction error with respect to the actual pixel value is selected as the optimum one, so the picture data can be compressed at a higher compression ratio.

Proceedings ArticleDOI
01 Jan 1999
TL;DR: Based on the logarithmic and exponential transform, an efficient Log-Exp still image compression system is proposed that can achieve a compression ratio 1.84 times higher than JPEG for the benchmark image Lena.
Abstract: Based on the logarithmic and exponential transform, an efficient Log-Exp still image compression system is proposed. The Log-Exp compression is designed for high quality still images, especially for PSNR above 36 dB. At similar image quality (Log-Exp PSNR=41.52 and JPEG PSNR=41.23), the Log-Exp compression achieves a compression ratio 1.84 times higher than JPEG for the benchmark image Lena. Besides, the Log-Exp compression is computed pixel by pixel, without block artifacts. In comparison with the JPEG compression result (bpp=0.99, PSNR=26.9), the Log-Exp compression uses less bpp (bpp=0.87) to achieve higher image quality (PSNR=36.38) for the benchmark image baboon.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: This paper takes advantage of perceptual classification to improve the performance of the standard JPEG implementation via adaptive thresholding, while being compatible with the baseline standard.
Abstract: We propose a new technique for transform coding based on rate-distortion (RD) optimized thresholding (i.e. discarding) of wasteful coefficients. The novelty in this proposed algorithm is that the distortion measure is made adaptive. We apply the method to the compression of mixed documents (containing text, natural images, and graphics) using JPEG for printing. Although the human visual system's response to compression artifacts varies depending on the region, JPEG applies the same coding algorithm throughout the mixed document. This paper takes advantage of perceptual classification to improve the performance of the standard JPEG implementation via adaptive thresholding, while being compatible with the baseline standard. A computationally efficient classification algorithm is presented, and the improved performance of the classified JPEG coder is verified. Tests demonstrate the method's efficiency compared to regular JPEG and to JPEG using non-adaptive thresholding. The non-stationary nature of distortion perception is true for most signal classes and the same concept can be used elsewhere.

Journal Article
TL;DR: This paper presents simple yet effective methods for designing lossless versions of block transforms and FIR filter banks, and demonstrates that filter banks can be decomposed into 2-point transforms or interpolative predictions.
Abstract: Lossless block transforms and filter banks that map integers to integers are important for unified lossless and lossy image coding systems. In this paper, we present simple yet effective methods for designing lossless versions of block transforms and FIR filter banks. First, an N-point lossless transform and a lossless interpolative prediction are introduced. Next, we demonstrate that filter banks can be decomposed into 2-point transforms or interpolative predictions. Lastly, lossless versions of block transforms and filter banks are obtained by replacing every constituent module by the corresponding lossless version. Lossless versions of the 8-point discrete cosine transform (DCT), the 8-point Walsh-Hadamard transform (WHT) and several filter banks are designed and their lossless compression performance is evaluated.
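
A minimal example of a 2-point lossless (integer-to-integer) module of the kind the paper composes into larger transforms: the rounded sum/difference (S-transform), which is exactly invertible on integers. This is a generic illustration, not the specific modules designed in the paper.

def s_transform_forward(x0, x1):
    """2-point integer-to-integer (lossless) transform: rounded sum and difference."""
    d = x0 - x1
    s = x1 + (d >> 1)          # equals floor((x0 + x1) / 2)
    return s, d

def s_transform_inverse(s, d):
    x1 = s - (d >> 1)
    x0 = d + x1
    return x0, x1

# Exhaustive check of reversibility on a small integer range.
for x0 in range(-4, 5):
    for x1 in range(-4, 5):
        assert s_transform_inverse(*s_transform_forward(x0, x1)) == (x0, x1)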

Journal ArticleDOI
TL;DR: This article describes technology for JQT design that takes a pattern-recognition approach to the problem, using a database of images to train statistical models of the artifacts introduced through JPEG compression, with a model of human visual perception as the error measure.
Abstract: A JPEG Quality Transcoder (JQT) converts a JPEG image file that was encoded with low image quality to a larger JPEG image file with reduced visual artifacts, without access to the original uncompressed image. In this article, we describe technology for JQT design that takes a pattern recognition approach to the problem, using a database of images to train statistical models of the artifacts introduced through JPEG compression. In the training procedure for these models, we use a model of human visual perception as an error measure. Our current prototype system removes 32.2% of the artifacts introduced by moderate compression, as measured on an independent test database of linearly coded images using a perceptual error metric. This improvement results in an average PSNR reduction of 0.634 dB.

Proceedings ArticleDOI
26 May 1999
TL;DR: An image compression algorithm that can improve the compression efficiency for digital projection radiographs over current lossless JPEG by utilizing a quantization companding function and a new lossless image compression standard called JPEG-LS is proposed.
Abstract: This paper proposes an image compression algorithm that can improve the compression efficiency for digital projection radiographs over current lossless JPEG by utilizing a quantization companding function and a new lossless image compression standard called JPEG-LS. The companding and compression processes can also be augmented by a pre-processing step to first segment the foreground (collimation) portions of the image and then substitute the foreground pixel values with a uniform code value. The quantization companding function approach is based on a theory that relates the onset of distortion to changes in the second-order statistics in an image (i.e., the differences between nearest neighbor pixels). By choosing an appropriate companding function, the properties of the second-order statistics can be retained to within an insignificant error, and the companded image can then be lossless compressed using JPEG-LS; we call the reconstructed image statistically lossless. The approach offers a theoretical basis supporting the integrity of the compressed-reconstructed data relative to the original image, while providing a modest level of compression efficiency (≃3X improvement over current lossless JPEG). This intermediate level of compression could help to increase the comfort level for radiologists that do not currently utilize lossy compression and may also have benefits from a medico-legal perspective.
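
A hedged sketch of the companding idea: a concave companding curve maps the original code values onto fewer levels before lossless (JPEG-LS-style) coding, and the inverse expansion is applied after decoding. The square-root curve and bit depths below are illustrative assumptions; the paper derives its companding function from the second-order statistics of the image, which is not reproduced here.

import numpy as np

def compand(img, bits_in=12, bits_out=8):
    """Map 12-bit radiograph code values onto fewer levels with a concave
    (square-root) companding curve; illustrative, not the paper's curve."""
    peak_in, peak_out = (1 << bits_in) - 1, (1 << bits_out) - 1
    return np.round(np.sqrt(img / peak_in) * peak_out).astype(np.uint16)

def expand(companded, bits_in=12, bits_out=8):
    peak_in, peak_out = (1 << bits_in) - 1, (1 << bits_out) - 1
    return np.round((companded / peak_out) ** 2 * peak_in).astype(np.uint16)

raw = np.random.randint(0, 4096, (128, 128)).astype(np.uint16)
roundtrip = expand(compand(raw))
print("max reconstruction error:",
      int(np.max(np.abs(roundtrip.astype(int) - raw.astype(int)))))
# The companded image would then be handed to a lossless coder such as JPEG-LS.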

Journal ArticleDOI
TL;DR: A new block transformation, Linear Order Transformation (LOT) is introduced and it is shown that LOT is faster than BWT transformation and the compression gain obtained is better than the well-known compression techniques, such as GIF, JPEG, CALIC, Gzip, LZW and the BWA for pseudo-color images.
Abstract: In a color-mapped (pseudo-color) image, pixel values represent indices that point to color values in a look-up table. Well-known linear predictive schemes, such as JPEG and CALIC, perform poorly when used with pseudo-color images, while universal compressors, such as Gzip, Pkzip and Compress, yield better compression gain. Recently, Burrows and Wheeler introduced the Block Sorting Lossless Data Compression Algorithm (BWA). The BWA algorithm received considerable attention. It achieves compression rates as good as context-based methods, such as PPM, but at execution speeds closer to Ziv-Lempel techniques. The BWA algorithm is mainly composed of a block-sorting transformation which is known as Burrows-Wheeler Transformation (BWT), followed by Move-To-Front (MTF) coding. We introduce a new block transformation, Linear Order Transformation (LOT). We delineate its relationship to Burrows-Wheeler Transformation and show that LOT is faster than BWT transformation. We then show that when MTF coder is employed after the LOT, the compression gain obtained is better than the well-known compression techniques, such as GIF, JPEG, CALIC, Gzip, LZW (Unix Compress) and the BWA for pseudo-color images.
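
The two BWA stages the abstract builds on, the Burrows-Wheeler transformation and move-to-front coding, can be sketched as follows (a naive rotation-sort BWT, fine for illustration but not for production use):

def bwt(data: bytes):
    """Burrows-Wheeler transform via sorted rotations (illustrative, not optimized)."""
    n = len(data)
    rotations = sorted(range(n), key=lambda i: data[i:] + data[:i])
    last_column = bytes(data[(i - 1) % n] for i in rotations)
    primary_index = rotations.index(0)       # position of the original string
    return last_column, primary_index

def move_to_front(data: bytes):
    alphabet = list(range(256))
    out = []
    for byte in data:
        rank = alphabet.index(byte)
        out.append(rank)                      # small ranks dominate on repetitive input
        alphabet.insert(0, alphabet.pop(rank))
    return out

index_stream = b"abracadabraabracadabra"      # stands in for a run of palette indices
last, idx = bwt(index_stream)
print(last, move_to_front(last))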

Proceedings ArticleDOI
19 May 1999
TL;DR: A focused procedure based upon a collection of image processing algorithms that identify regions-of-interest (ROIs) over a digital image is developed and bundled with JPEG, so that the result of the compression can be formatted into a file compatible with standard JPEG decoding.
Abstract: We have developed a focused procedure based upon a collection of image processing algorithms that serve to identify regions-of-interest (ROIs) over a digital image. The loci of these ROIs are quantitatively compared with ROIs identified by human eye fixations or glimpses while subjects were looking at the same digital images. The focused procedure is applied to adjust and adapt the compression ratio over a digital image: high resolution and little compression for ROIs; low resolution and strong compression for the major expanse of the entire image. In this way, an overall high compression ratio can be achieved while preserving important visual information within particularly relevant regions of the image. We have bundled the focused procedure with JPEG, so that the result of the compression can be formatted into a file compatible with standard JPEG decoding. Thus, once the image has been compressed, it can be read without difficulty.

Proceedings ArticleDOI
K. Hamamoto1
05 Sep 1999
TL;DR: This paper attempts to standardize the JPEG quantization table for medical ultrasonic echo images by a statistical method, and results reveal that the proposed method achieves a lower bit rate than the JPEG standard at the same image quality.
Abstract: Storing digital medical images is standardized by the DICOM report. Lossy pulse-echo ultrasonic image compression with a JPEG baseline system is permitted by it. The purpose of this paper is to reduce the data volume and to achieve a low bit rate in the digital representation of pulse-echo ultrasonic images without perceived loss of image quality. In image compression with a JPEG baseline system, it is possible to control the compression ratio and image quality by controlling the quantization values. This paper attempts to standardize the JPEG quantization table for medical ultrasonic echo images by a statistical method. Results reveal that the proposed method achieves a lower bit rate than the JPEG standard at the same image quality.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A unified coding algorithm for lossless and near-lossless color image compression that exploits the correlations between RGB signals that can control the distortion level in the magnitude on the RGB plane is proposed.
Abstract: This paper proposes a unified coding algorithm for lossless and near-lossless color image compression that exploits the correlations between RGB signals. For lossless coding, a reversible color transform is proposed that removes the correlations between RGB signals while avoiding any finite word length limitation. Next, the lossless algorithm is extended to a unified coding algorithm of lossless and near-lossless compression that can control the distortion level in the magnitude on the RGB plane. Experimental results show the effectiveness of the proposed algorithm.
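
For reference, a familiar example of a reversible RGB decorrelating transform is the integer RCT used in the lossless mode of JPEG 2000, sketched below. It is only a stand-in: the transform proposed in the paper is designed specifically to avoid the finite-word-length growth that this kind of transform incurs in the chrominance channels.

import numpy as np

def rct_forward(r, g, b):
    """Reversible color transform (the integer RCT of JPEG 2000 lossless mode),
    shown as a familiar example of reversible RGB decorrelation; it is not the
    transform proposed in the paper."""
    y = (r + 2 * g + b) >> 2
    u = b - g
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)
    b = u + g
    r = v + g
    return r, g, b

rgb = np.random.randint(0, 256, (3, 32, 32))
restored = rct_inverse(*rct_forward(*rgb))
assert all(np.array_equal(x, y) for x, y in zip(restored, rgb))   # exactly reversible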

Book ChapterDOI
TL;DR: A new method is proposed that actively uses the JPEG quality level as a parameter in embedding a watermark into an image and can be extracted even when the image is compressed using JPEG.
Abstract: Digital watermarking has been considered as an important technique to protect the copyright of digital content. For a digital watermarking method to be effective, it is essential that a watermark embedded in a still or moving image resists against various attacks ranging from compression, filtering to cropping. As JPEG is a dominant still image compression standard for Internet applications, digital watermarking methods that are robust against the JPEG compression are especially useful. Most digital watermarking methods proposed so far work by modulating pixels/coefficients without considering the quality level of JPEG, which renders watermarks readily removable. In this paper, we propose a new method that actively uses the JPEG quality level as a parameter in embedding a watermark into an image. A useful feature of the new method is that the watermark can be extracted even when the image is compressed using JPEG.

Journal ArticleDOI
TL;DR: An adaptive fuzzy-tuning modeler is employed that applies fuzzy inference to deal efficiently with the problem of conditional probability estimation and the compression results of the proposed method are good and satisfactory for various types of source data.
Abstract: This paper describes an online lossless data-compression method using adaptive arithmetic coding. To achieve good compression efficiency, we employ an adaptive fuzzy-tuning modeler that applies fuzzy inference to deal efficiently with the problem of conditional probability estimation. In comparison with other lossless coding schemes, the compression results of the proposed method are good and satisfactory for various types of source data. Since we adopt a table-lookup approach for the fuzzy-tuning modeler, the design is simple, fast, and suitable for VLSI implementation.

01 Jan 1999
TL;DR: Application of Burrows-Wheeler transformation (BWT) to exploit repetition of strings across blocks in the entropy encoding of JPEG improves the compression ratio by 18% compared with that of the original JPEG algorithm.
Abstract: JPEG encodes a still image through a lossy encoding followed by a lossless entropy encoding. The entropy coding in JPEG is a combination of run-length coding and variable-length coding. The entropy encoding employed in JPEG aims to exploit repetition of strings within a block but not across blocks. We propose application of the Burrows-Wheeler transformation (BWT) to exploit repetition of strings across blocks in the entropy encoding. We first measure the degree of repetition (DOR) of a block of data for the entropy encoding and apply BWT to the block if the DOR is greater than a preset threshold value. Experimental results show that the proposed method improves the compression ratio, on average, by 18% compared with that of the original JPEG algorithm.