
Showing papers on "Quantization (image processing) published in 1999"


Journal ArticleDOI
TL;DR: An adaptive algorithm to reduce the computations of DCT, IDCT, quantization, and inverse quantization is developed and presented and it is shown that significant improvement in the processing speed can be achieved with negligible video-quality degradation.
Abstract: Digital video coding standards such as H.263 and MPEG are becoming more and more important for multimedia applications. Due to the huge amount of computations required, there are significant efforts to speed up the processing of video encoders. Previously, the efforts were mainly focused on the fast motion-estimation algorithm. However, as the motion-estimation algorithm becomes optimized, to speed up the video encoders further we also need to optimize other functions such as the discrete cosine transform (DCT) and inverse DCT (IDCT). In this paper, we propose a theoretical model for DCT coefficients. Based on the model, we develop an adaptive algorithm to reduce the computations of DCT, IDCT, quantization, and inverse quantization. We also present a fast DCT algorithm to speed up the calculations of DCT further when the quantization step size is large. We show, by simulations, that significant improvement in the processing speed can be achieved with negligible video-quality degradation. We also implement the algorithm in a real-time PC-based platform to show that it is effective and practical.
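
The central trick, skipping the transform work when coarse quantization would zero out every coefficient anyway, can be sketched as follows. This is a simplified illustration with an invented energy threshold, not the statistical coefficient model proposed in the paper.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def encode_block(residual, qstep, skip_factor=16.0):
    """All-zero-block test: if the residual is so flat that every coefficient
    would quantize to zero, skip the DCT and quantization entirely.
    The skip_factor threshold is illustrative, not the paper's model."""
    if np.abs(residual).sum() < skip_factor * qstep:
        return np.zeros_like(residual, dtype=int)      # block skipped
    c = dct_matrix(residual.shape[0])
    coeffs = c @ residual @ c.T                        # 2-D DCT
    return np.round(coeffs / qstep).astype(int)        # uniform quantizer
```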

206 citations


Book
01 Sep 1999
TL;DR: This book discusses JPEG Compression Modes, Huffman Coding in JPEG, and Color Representation in PNG, the Representation of Images, and more.
Abstract: Preface. Acknowledgments. 1. Introduction. The Representation of Images. Vector and Bitmap Graphics. Color Models. True Color versus Palette. Compression. Byte and Bit Ordering. Color Quantization. A Common Image Format. Conclusion. 2. Windows BMP. Data Ordering. File Structure. Compression. Conclusion. 3. XBM. File Format. Reading and Writing XBM Files. Conclusion. 4. Introduction to JPEG. JPEG Compression Modes. What Part of JPEG Will Be Covered in This Book? What are JPEG Files? SPIFF File Format. Byte Ordering. Sampling Frequency. JPEG Operation. Interleaved and Noninterleaved Scans. Conclusion. 5. JPEG File Format. Markers. Compressed Data. Marker Types. JFIF Format. Conclusion. 6. JPEG Huffman Coding. Usage Frequencies. Huffman Coding Example. Huffman Coding Using Code Lengths. Huffman Coding in JPEG. Limiting Code Lengths. Decoding Huffman Codes. Conclusion. 7. The Discrete Cosine Transform. DCT in One Dimension. DCT in Two Dimensions. Basic Matrix Operations. Using the 2-D Forward DCT. Quantization. Zigzag Ordering. Conclusion. 8. Decoding Sequential-Mode JPEG Images. MCU Dimensions. Decoding Data Units. Decoding Example. Processing DCT Coefficients. Up-Sampling. Restart Marker Processing. Overview of JPEG Decoding. Conclusion. 9. Creating Sequential JPEG Files. Compression Parameters. Output File Structure. Doing the Encoding. Down-Sampling. Interleaving. Data Unit Encoding. Huffman Table Generation. Conclusion. 10. Optimizing the DCT. Factoring the DCT Matrix. Scaled Integer Arithmetic. Merging Quantization and the DCT. Conclusion. 11. Progressive JPEG. Component Division in Progressive JPEG. Processing Progressive JPEG Files. Processing Progressive Scans. MCUs in Progressive Scans. Huffman Tables in Progressive Scans. Data Unit Decoding. Preparing to Create Progressive JPEG Files. Encoding Progressive Scans. Huffman Coding. Data Unit Encoding. Conclusion. 12. GIF. Byte Ordering. File Structure. Interlacing. Compressed Data Format. Animated GIF. Legal Problems. Uncompressed GIF. Conclusion. 13. PNG. History. Byte Ordering. File Format. File Organization. Color Representation in PNG. Device-Independent Color. Gamma. Interlacing. Critical Chunks. Noncritical Chunks. Conclusion. 14. Decompressing PNG Image Data. Decompressing the Image Data. Huffman Coding in Deflate. Compressed Data Format. Compressed Data Blocks. Writing the Decompressed Data to the Image. Conclusion. 15. Creating PNG Files. Overview. Deflate Compression Process. Huffman Table Generation. Filtering. Conclusion. Glossary. Bibliography. Index.

190 citations


Journal ArticleDOI
HyunWook Park1, Yung-Lyul Lee1
TL;DR: According to the comparison study of PSNR and computation complexity analysis, the proposed algorithm shows better performance than the VM postprocessing algorithm, whereas the subjective image qualities of both algorithms are similar.
Abstract: The reconstructed images from highly compressed MPEG data have noticeable image degradations, such as blocking artifacts near the block boundaries, corner outliers at crosspoints of blocks, and ringing noise near image edges because the MPEG quantizes the transformed coefficients of 8×8 pixel blocks. A postprocessing algorithm is proposed to reduce quantization effects, such as blocking artifacts, corner outliers, and ringing noise, in MPEG-decompressed images. The proposed postprocessing algorithm reduces the quantization effects adaptively by using both spatial frequency and temporal information extracted from the compressed data. The blocking artifacts are reduced by one-dimensional (1-D) horizontal and vertical low-pass filtering (LPF), and the ringing noise is reduced by two-dimensional (2-D) signal-adaptive filtering (SAF). A comparison study of the peak signal-to-noise ratio (PSNR) and the computation complexity analysis between the proposed algorithm and the MPEG-4 VM (verification model) postprocessing algorithm is performed by computer simulation with several image sequences. According to the comparison study of PSNR and computation complexity analysis, the proposed algorithm shows better performance than the VM postprocessing algorithm, whereas the subjective image qualities of both algorithms are similar.
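
As a concrete reference point, the deblocking part of such a scheme can be reduced to a few lines: smooth the pixels straddling each 8-pixel block boundary with a short low-pass kernel. This bare-bones sketch omits the paper's adaptivity (edge and activity tests, temporal information, and the 2-D SAF for ringing).

```python
import numpy as np

def deblock_1d(img, block=8, kernel=(0.25, 0.5, 0.25)):
    """Smooth the two pixels on either side of every vertical and horizontal
    block boundary with a short low-pass kernel. A non-adaptive stand-in for
    the filtering described in the paper."""
    out = img.astype(float).copy()
    k = np.asarray(kernel)
    h, w = img.shape
    for x in range(block, w, block):          # vertical boundaries: horizontal LPF
        for dx in (-1, 0):
            col = x + dx
            out[:, col] = k[0] * img[:, col - 1] + k[1] * img[:, col] + k[2] * img[:, col + 1]
    for y in range(block, h, block):          # horizontal boundaries: vertical LPF
        for dy in (-1, 0):
            row = y + dy
            out[row, :] = k[0] * out[row - 1, :] + k[1] * out[row, :] + k[2] * out[row + 1, :]
    return out
```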

152 citations


Patent
29 Jul 1999
TL;DR: In this article, the authors propose a quantization index (QIndex) for each block that reflects the level of quantization applied to the block and quantizes coefficients in the block by dividing them by the QFactor in the table corresponding to the QIndex for the block.
Abstract: A method for still image compression reduces pixel and texture memory requirements in graphics rendering and other applications. The image compression method divides an image into blocks and stores a quantization index (QIndex) for each block that reflects the level of quantization applied to the block. The QIndex is an index into a table of QFactors. The method performs an invertible transform on a block to generate coefficients for spatial frequency components in the block. It then quantizes coefficients in the block by dividing them by the QFactor in the table corresponding to the QIndex for the block. The QIndex enables the compression ratio of an image to vary across blocks and within each block. A control structure associated with the image stores a pointer to each of the blocks in an image. This control structure allows each block to be accessed and decompressed independently.
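
A minimal sketch of the per-block QIndex/QFactor scheme described above; the QFactor values and the control-structure fields are hypothetical, since the abstract does not specify them.

```python
import numpy as np

# Hypothetical QFactor table; the patent leaves the actual values unspecified.
QFACTORS = [1, 2, 4, 8, 16, 32, 64, 128]

def quantize_block(coeffs, qindex, qfactors=QFACTORS):
    """Quantize one block of transform coefficients by the QFactor selected
    by its QIndex."""
    return np.round(coeffs / qfactors[qindex]).astype(int)

def dequantize_block(qcoeffs, qindex, qfactors=QFACTORS):
    return qcoeffs * qfactors[qindex]

# Per-block control structure: QIndex plus an offset to the compressed block,
# so each block can be located and decompressed independently.
blocks = [{"qindex": 3, "offset": 0}, {"qindex": 5, "offset": 712}]
```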

135 citations


Proceedings ArticleDOI
TL;DR: In this article, a novel system for image watermarking, which exploits the similarity exhibited by the Discrete Wavelet Transform with respect to the models of the human visual system, for robustly hiding watermarks is presented.
Abstract: The growth of the Internet and the diffusion of multimedia applications require the development of techniques for embedding identification codes into images, in such a way that their authenticity can be guaranteed and/or their copyright protected. In this paper a novel system for image watermarking, which exploits the similarity exhibited by the Discrete Wavelet Transform with respect to the models of the Human Visual System for robustly hiding watermarks, is presented. In particular, a model for estimating the sensitivity of the eye to noise, previously proposed for compression applications, is used to adapt the watermark strength to the local content of the image. Experimental results are shown supporting the validity of the approach.
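
The idea of modulating watermark strength by a visual-sensitivity measure can be sketched roughly as below, using a one-level Haar transform as a stand-in for the DWT and the local coefficient magnitude as a crude proxy for the eye-sensitivity model; none of this is the paper's actual masking function.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (a stand-in for the DWT used in the paper)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2
    d = (img[:, 0::2] - img[:, 1::2]) / 2
    ll, lh = (a[0::2] + a[1::2]) / 2, (a[0::2] - a[1::2]) / 2
    hl, hh = (d[0::2] + d[1::2]) / 2, (d[0::2] - d[1::2]) / 2
    return ll, lh, hl, hh

def embed(img, key=0, alpha=0.1):
    """Add a pseudo-random watermark to the detail sub-bands, scaled by the
    local coefficient magnitude as a rough proxy for an HVS sensitivity mask."""
    ll, lh, hl, hh = haar2d(img.astype(float))
    rng = np.random.default_rng(key)
    for band in (lh, hl, hh):
        w = rng.choice([-1.0, 1.0], size=band.shape)
        band += alpha * np.abs(band) * w      # stronger mark where detail is strong
    # (inverse transform back to the pixel domain omitted for brevity)
    return ll, lh, hl, hh
```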

96 citations


Journal ArticleDOI
TL;DR: A chip has been designed and tested to demonstrate the feasibility of an ultra-low-power, two-dimensional inverse discrete cosine transform (IDCT) computation unit in a standard 3.3-V process, which meets the sample rate requirements for MPEG-2 MP@ML.
Abstract: A chip has been designed and tested to demonstrate the feasibility of an ultra-low-power, two-dimensional inverse discrete cosine transform (IDCT) computation unit in a standard 3.3-V process. A data-driven computation algorithm that exploits the relative occurrence of zero-valued DCT coefficients coupled with clock gating has been used to minimize switched capacitance. In addition, circuit and architectural techniques such as deep pipelining have been used to lower the voltage and reduce the energy dissipation per sample. A Verilog-based power tool has been developed and used for architectural exploration and power estimation. The chip has a measured power dissipation of 4.65 mW at 1.3 V and 14 MHz, which meets the sample rate requirements for MPEG-2 MP@ML. The power dissipation improves significantly at lower bit rates (coarser quantization), which makes this implementation ideal for emerging quality-on-demand protocols that trade off energy efficiency and video quality.
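
A software analogue of the data-driven zero-skipping idea (which the chip realizes with clock gating): perform the row-column IDCT but only transform columns that contain non-zero coefficients. This illustrates the principle only, not the chip's architecture.

```python
import numpy as np

def idct_matrix(n=8):
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c.T        # the inverse of the orthonormal DCT matrix is its transpose

def idct2_skip_zeros(coeffs):
    """Row-column IDCT that skips all-zero columns, exploiting the high
    occurrence of zero-valued coefficients after coarse quantization."""
    ct = idct_matrix(coeffs.shape[0])
    tmp = np.zeros_like(coeffs, dtype=float)
    for j in range(coeffs.shape[1]):
        col = coeffs[:, j]
        if np.any(col):                 # only transform non-zero columns
            tmp[:, j] = ct @ col
    return tmp @ ct.T                   # second (row) pass
```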

93 citations


Patent
11 Mar 1999
TL;DR: In this article, discrete cosine transform (DCT) domain representation of the images is used to perform eight operations in D4 (the dihedral group of symmetries of a square) on JPEG images.
Abstract: Image processing techniques which involve direct manipulation of the compressed domain representation of an image to achieve the desired spatial domain processing without having to go through a complete decompression and compression process. The techniques include processing approaches for performing the eight operations in D4 (the dihedral group of symmetries of a square) on JPEG images using the discrete cosine transform (DCT) domain representation of the images directly. For a task such as image rotation by 90° (an operation in D4), DCT-domain based methods can yield nearly a five-fold increase in speed over a spatial-domain based technique. These simple compressed-domain based processing techniques are well suited to the imaging tasks that are needed in a JPEG-based digital still-camera system.
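
For a single 8×8 block, the D4 operations reduce to transposes and sign flips of the DCT coefficients, which is where the speedup comes from. The sketch below shows three of the eight operations; rotating or mirroring a whole JPEG image additionally requires re-ordering the blocks, which is omitted here, and this is not the patent's implementation.

```python
import numpy as np

def hflip_dct(block):
    """Horizontal mirror in the DCT domain: negate coefficients whose
    horizontal frequency index is odd."""
    out = block.copy()
    out[:, 1::2] *= -1
    return out

def vflip_dct(block):
    """Vertical mirror: negate coefficients whose vertical frequency index is odd."""
    out = block.copy()
    out[1::2, :] *= -1
    return out

def rotate90_dct(block):
    """Clockwise 90-degree rotation = transpose followed by a horizontal
    mirror (one of the eight D4 operations), done entirely on DCT coefficients."""
    return hflip_dct(block.T)
```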

91 citations


Journal ArticleDOI
TL;DR: A wavelet-based compression scheme that is able to operate in the lossless mode that implements a new way of coding of the wavelet coefficients that is more effective than the classical zerotree coding.
Abstract: The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding of the wavelet coefficients that is more effective than the classical zerotree coding. The experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder, combined with the integer wavelet transform, by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communications systems (PACS's).

68 citations


Proceedings ArticleDOI
30 May 1999
TL;DR: It is demonstrated through some simple experiments that for a given image reconstruction quality, more scalar parameters must be transmitted using the SVD, than when using the discrete cosine transform (DCT).
Abstract: During the past couple of decades several proposals for image coders using singular value decomposition (SVD) have been put forward. The results using SVD in this context have never been spectacular. The main problem with the SVD is that the transform itself must be transmitted as side information. We demonstrate through some simple experiments that, for a given image reconstruction quality, more scalar parameters must be transmitted using the SVD than when using the discrete cosine transform (DCT). Also, using an alternative interpretation of the SVD, we show that the SVD representation necessitates quantization of individual factors as compared to quantization of the associated product. This is clearly suboptimal.
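
A back-of-envelope count makes the side-information argument concrete: a rank-k SVD of an N×N block needs the singular values plus both sets of singular vectors, whereas the DCT basis is fixed and known to the decoder. The figures below ignore quantization and entropy coding.

```python
# Rough parameter count for coding one N x N image block, illustrating why the
# SVD must ship its own basis while the DCT basis is shared by all blocks.
def svd_params(n, rank):
    # rank singular values plus the corresponding left and right singular vectors
    return rank * (2 * n + 1)

def dct_params(n, kept):
    # only the retained transform coefficients; the basis is known to the decoder
    return kept

if __name__ == "__main__":
    n = 8
    print(svd_params(n, rank=3))   # 51 scalars for a rank-3 approximation
    print(dct_params(n, kept=10))  # e.g. 10 low-frequency DCT coefficients
```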

59 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: Experimental results indicate that the HVS-based quantization table can achieve improvements in PSNR of about 0.2-2.0 dB without increasing the complexity of either the encoder or the decoder.
Abstract: In this paper, a quantization table based on the human visual system is designed for the baseline JPEG coder. By incorporating the human visual system with the uniform quantizer, a perceptual quantization table is derived. The quantization table is easy to adapt to the specified resolution for viewing and printing. Experimental results indicate that the HVS-based quantization table can achieve improvements in PSNR of about 0.2-2.0 dB without increasing the complexity of either the encoder or the decoder.
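
A rough sketch of re-weighting a quantization table by visual sensitivity is given below, starting from the standard JPEG luminance table; the exponential weighting curve is invented for illustration and is not the table derived in the paper.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG specification).
JPEG_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def hvs_table(base=JPEG_LUMA, viewing_gain=1.0):
    """Illustrative perceptual re-weighting: coarser steps where a toy
    contrast-sensitivity-style weight says the eye is less sensitive."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    radial = np.sqrt(u**2 + v**2)                   # spatial-frequency radius
    csf = np.exp(-0.1 * viewing_gain * radial)      # toy sensitivity curve
    return np.clip(np.round(base / csf), 1, 255).astype(int)
```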

51 citations


Patent
22 Dec 1999
TL;DR: In this article, a multipoint connection device connecting a plurality of video conference terminals, image data transmitted from a terminal is decoded, and a face area in the image data is recognized.
Abstract: In video conference system terminals, the conventional difficulty in high-quality display of the facial expressions of a plurality of participants is solved. In a multipoint connection device connecting a plurality of video conference terminals, image data transmitted from a terminal is decoded, and a face area in the image data is recognized. Then, the quantization coefficients for the face area are set to be greater than those for the areas other than the face area, the image data is compressed, and it is delivered to the respective terminals. By this arrangement, the face area, which has great significance, can be re-compressed without degradation of image quality, while areas of lesser significance, such as the background, can be compressed with high efficiency. Thus, the total code amount can be reduced. Accordingly, even if the system uses a narrow-band communication channel, the significant face area can be clearly displayed at the respective terminals.
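
Read as region-of-interest quantization, the scheme amounts to choosing a per-block quantizer from a face mask; the sketch below assumes the face region gets the finer (higher-quality) step, with the detector, mask, and step values all placeholders rather than anything specified in the patent.

```python
import numpy as np

def qstep_map(face_mask_blocks, base_q=12, face_q=4):
    """Per-block quantizer selection: a finer quantization step inside the
    detected face region, a coarser one elsewhere."""
    return np.where(face_mask_blocks, face_q, base_q)

# Example: 2 of 6 macroblocks flagged as face area by some face detector.
mask = np.array([[False, True, True], [False, False, False]])
print(qstep_map(mask))
```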

Patent
08 Dec 1999
TL;DR: In this article, a quad-tree embedded image coding technique is used in combination with a bit-plane encoding technique to provide an efficient and low-complexity embedded image decoding system, which identifies coefficients as significant, insignificant, or refinement at each successive quantization level.
Abstract: A quad-tree embedded image coding technique is used in combination with a bit-plane encoding technique to provide an efficient and low complexity embedded image coding system. A simple quad-tree method identifies coefficients as significant, insignificant, or refinement at each successive quantization level. The quad-tree technique is used instead of the zero-tree or hierarchical tree used in previous encoders.
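
A minimal sketch of quad-tree significance coding at one threshold: a quadrant whose coefficients are all below the threshold costs a single bit, otherwise it is split into four. The refinement pass and entropy coding of the patent are omitted.

```python
import numpy as np

def quadtree_significance(coeffs, threshold, bits, x=0, y=0, size=None):
    """Emit 0 if every coefficient in the quadrant is below the current
    threshold (insignificant), otherwise 1 followed by its four sub-quadrants."""
    if size is None:
        size = coeffs.shape[0]
    block = coeffs[y:y + size, x:x + size]
    if np.max(np.abs(block)) < threshold:
        bits.append(0)                      # whole quadrant insignificant
        return
    bits.append(1)
    if size == 1:
        return                              # reached a significant coefficient
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_significance(coeffs, threshold, bits, x + dx, y + dy, half)

bits = []
quadtree_significance(np.array([[0, 0, 0, 0],
                                [0, 9, 0, 0],
                                [0, 0, 0, 0],
                                [0, 0, 0, 3]]), threshold=4, bits=bits)
print(bits)
```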

Journal ArticleDOI
TL;DR: A new technique for sharpening compressed images in the discrete-cosine-transform domain by suitably scaling each element of the encoding quantization table to enhance the high-frequency characteristics of the image.
Abstract: We present a new technique for sharpening compressed images in the discrete-cosine-transform domain. For images compressed using the JPEG standard, image sharpening is achieved by suitably scaling each element of the encoding quantization table to enhance the high-frequency characteristics of the image. The modified version of the encoding table is then transmitted in lieu of the original. Experimental results with scanned images show improved text and image quality with no additional computation cost and without affecting compressibility.
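
The mechanism can be illustrated in a few lines: boost the high-frequency entries of the quantization table that is transmitted to the decoder, so that dequantization amplifies high frequencies. The linear frequency weighting below is an illustrative choice, not the scaling rule of the paper.

```python
import numpy as np

def sharpen_qtable(qtable, gain=0.05):
    """Return the table to transmit in place of the original: high-frequency
    entries are boosted so that, when the decoder multiplies the quantized
    coefficients by this table, high frequencies come back amplified."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    boost = 1.0 + gain * (u + v)            # grows with spatial frequency
    return np.clip(np.round(qtable * boost), 1, 255).astype(int)
```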

Patent
21 Jan 1999
TL;DR: In this paper, the authors proposed a re-compression system for edited images which have previously been compressed, which uses software to track and determine the nature of edits made to each image frame.
Abstract: This disclosure provides a compression system for edited images which have previously been compressed. The preferred system uses software to track and determine the nature of edits made to each image frame. Each image frame is divided into spatial regions, and codes defining the nature of changes are then stored in a set of tables called the “registry of edits.” When it is time to compress images for output, re-compression software interrogates the registry to determine whether spatial regions in the frame have been altered in a manner that undermines the integrity of the original compression data. For example, if a compressed image signal is modified in the spatial domain to add the logo of a local television station, most of each image frame will remain unchanged, and the original motion vectors and residuals (or other compressed representation) from an input signal may be re-used, thereby saving substantial processing time and minimizing the introduction of additional quantization errors. The preferred embodiment may be used with almost any digital editor or computer to substantially reduce the processing time and resources required to provide a compressed output signal.

Proceedings ArticleDOI
Yung-Lyul Lee1, HyunWook Park1
24 Oct 1999
TL;DR: A loop-filtering algorithm and a post- Filtering algorithm, which can reduce the quantization effects, are described, and the performance of both algorithms is compared with respect to the image quality, the computational complexity, andThe compressed bit rates.
Abstract: The decompressed images from highly compressed data have noticeable image degradations such as blocking artifacts, corner outliers, and ringing noise, because most image coding standards quantize the DCT coefficients of 8×8-pixel blocks independently. A loop-filtering algorithm and a post-filtering algorithm, which can reduce the quantization effects, are described, and the performance of both algorithms is compared with respect to the image quality, the computational complexity, and the compressed bit rates. Both the loop-filtering algorithm and the post-filtering algorithm reduce the quantization effects adaptively by using both DCT domain analysis and temporal information.

Journal ArticleDOI
TL;DR: It is demonstrated that the proposed algorithm is significantly more efficient than the conventional filtered spatial domain and earlier proposed DCT domain methods.
Abstract: A method for efficient spatial domain filtering, directly in the discrete cosine transform (DCT) domain, is developed and proposed. It consists of using the discrete sine transform (DST) and the DCT for transform-domain processing, on the basis of previously derived convolution-multiplication properties of discrete trigonometric transforms. The proposed scheme requires neither zero padding of the input data nor kernel symmetry. It is demonstrated that, in typical applications, the proposed algorithm is significantly more efficient than conventional filtering in the spatial domain and than earlier proposed DCT domain methods. The proposed method is applicable to any DCT-based image compression standard, such as JPEG, MPEG, and H.261.

Journal ArticleDOI
TL;DR: In this paper, the properties of complex-valued SAR images relevant to the task of data compression are examined, and the use of transform-based compression methods is advocated but employ radically different quantization strategies than those commonly used for incoherent optical images.
Abstract: Synthetic aperture radars (SAR) are coherent imaging systems that produce complex-valued images of the ground. Because modern systems can generate large amounts of data, there is substantial interest in applying image compression techniques to these products. We examine the properties of complex-valued SAR images relevant to the task of data compression. We advocate the use of transform-based compression methods but employ radically different quantization strategies than those commonly used for incoherent optical images. The theory, methodology, and examples are presented.

Patent
Jungwoo Lee1
28 Sep 1999
TL;DR: In this paper, the authors proposed a parameterized Q matrix adaptation algorithm for MPEG-2 compression, where the Q matrix for the current frame is generated based on DCT coefficient data from the previous encoded frame of the same type (e.g., I, P, or B).
Abstract: In video compression processing, such as MPEG-2 compression, the quantization (Q) matrix used to quantize discrete cosine transform (DCT) coefficients is updated from frame to frame based on a parameterized Q matrix adaptation algorithm. According to the algorithm, the Q matrix for the current frame is generated based on DCT coefficient data (108) from the previously encoded frame of the same type (e.g., I, P, or B) as the current frame. In particular, the Q matrix is generated using a function based on shape parameters (e.g., the slope of the diagonal of the Q matrix and/or the convexity of the diagonal of the Q matrix), where the diagonal slope for the Q matrix of the current frame is generated based on the diagonal slope of a DCT map (106) for the previously encoded frame. Before using the generated Q matrix to quantize the DCT coefficients for the current frame, the Q matrix is preferably adjusted for changes in the target mean from the previously encoded frame to the current frame.
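
The shape-parameter idea can be illustrated by building a Q matrix whose entries grow with diagonal distance from DC according to a slope and a convexity term; the functional form below is only illustrative, since the patent names the parameters but not this exact formula.

```python
import numpy as np

def q_matrix(base=16.0, slope=4.0, convexity=0.0):
    """Build an 8x8 quantization matrix whose entries grow with distance from
    the DC position along the diagonal, controlled by slope and convexity
    shape parameters (here with an illustrative quadratic form)."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    d = (u + v) / 2.0                               # diagonal distance from DC
    q = base + slope * d + convexity * d ** 2
    return np.clip(np.round(q), 1, 255).astype(int)
```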

Journal ArticleDOI
TL;DR: This study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG, and verified other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG.
Abstract: This presentation focuses on the quantitative comparison of three lossy compression methods applied to a variety of 12-bit medical images. One Joint Photographic Experts Group (JPEG) algorithm and two wavelet algorithms were used on a population of 60 images. The medical images were obtained in Digital Imaging and Communications in Medicine (DICOM) file format and ranged in matrix size from 256 × 256 (magnetic resonance [MR]) to 2,560 × 2,048 (computed radiography [CR], digital radiography [DR], etc). The algorithms were applied to each image at multiple levels of compression such that comparable compressed file sizes were obtained at each level. Each compressed image was then decompressed and quantitative analysis was performed to compare each compressed-then-decompressed image with its corresponding original image. The statistical measures computed were sum of absolute differences, sum of squared differences, and peak signal-to-noise ratio (PSNR). Our results verify other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG. The DICOM standard does not yet include wavelet as a recognized lossy compression standard. For implementers and users to adopt wavelet technology as part of their image management and communication installations, there have to be significant differences in quality and compressibility compared with JPEG to justify expensive software licenses and the introduction of proprietary elements in the standard. Our study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG.
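
The three quantitative measures named above are standard and easy to state exactly; a short implementation for 12-bit images follows.

```python
import numpy as np

def quality_metrics(original, decoded, max_value=4095):
    """Sum of absolute differences, sum of squared differences, and PSNR,
    with the peak value defaulting to 4095 for 12-bit medical images."""
    diff = original.astype(float) - decoded.astype(float)
    sad = np.abs(diff).sum()
    ssd = (diff ** 2).sum()
    mse = ssd / diff.size
    psnr = float("inf") if mse == 0 else 10 * np.log10(max_value ** 2 / mse)
    return sad, ssd, psnr
```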

Patent
19 Oct 1999
TL;DR: In this article, a first look-up table for 8×8 DCT is presented, which produces a 15-valued quantization scale based on class number information and a QNO number for an 8-8 data block (data matrix) from an input encoded digital bit stream.
Abstract: An efficient digital video (DV) decoder process that utilizes a specially constructed quantization matrix allowing an inverse quantization subprocess to perform parallel computations, e.g., using SIMD processing, to efficiently produce a matrix of DCT coefficients. The present invention utilizes a first look-up table (for 8×8 DCT) which produces a 15-valued quantization scale based on class number information and a QNO number for an 8×8 data block (“data matrix”) from an input encoded digital bit stream to be decoded. The 8×8 data block is produced from a deframing and variable length decoding subprocess. An individual 8-valued segment of the 15-value output array is multiplied by an individual 8-valued segment, e.g., “a row,” of the 8×8 data matrix to produce an individual row of the 8×8 matrix of DCT coefficients (“DCT matrix”). The above eight multiplications can be performed in parallel using a SIMD architecture to simultaneously generate a row of eight DCT coefficients. In this way, eight passes through the 8×8 block are used to produce the entire 8×8 DCT matrix, in one embodiment consuming only 33 instructions per 8×8 block. After each pass, the 15-valued output array is shifted by one value position for proper alignment with its associated row of the data matrix. The DCT matrix is then processed by an inverse discrete cosine transform subprocess that generates decoded display data. A second lookup table can be used for 2×4×8 DCT processing.
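
The sliding alignment between the 15-value scale array and the rows of the 8×8 block can be mimicked in a few lines; the patent performs the eight multiplies per row in parallel with SIMD, and the exact shift direction assumed here is a guess based on the abstract.

```python
import numpy as np

def inverse_quantize(data_block, scale15):
    """Multiply row i of the 8x8 data block, element by element, by the
    8-value window of the 15-value scale array starting at position i.
    scale15 would come from the patent's class-number/QNO look-up table."""
    dct = np.empty((8, 8), dtype=float)
    for i in range(8):
        dct[i, :] = data_block[i, :] * scale15[i:i + 8]
    return dct

# Hypothetical inputs: a decoded 8x8 data block and a 15-value scale array.
block = np.arange(64).reshape(8, 8)
scales = np.ones(15)
print(inverse_quantize(block, scales))
```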

Journal ArticleDOI
TL;DR: This paper proposes to enhance shape detection with the Hough transform through fuzzy analysis; the impact of the uncertainty/precision duality is thus reduced.

Patent
07 Apr 1999
TL;DR: In this paper, a limit on the scaling factor of quantization tables is established such that the same quantization table is used for each layer of the composite and degradation of image quality in each layer is avoided.
Abstract: By limiting the extent to which the degree of quantization is lowered to increase the amount of compressed data, problems of data rate overshoots and image quality degradation in multi-layer composites may be avoided. In particular, when a more complex image occurs after a simple image, the quantization used to compress the complex image will not cause as large a change in the total amount of compressed data. Recovery from such a change also may occur more quickly. Where quantization tables are adjusted using a scaling factor, a limit on the scaling factor may be established such that the target data rate is not achieved for simple images. When rendering multi-layer composites, this limit is such that recompression of previously compressed data does not result in additional loss of information. As a result, degradation of image quality in each layer of the composite is avoided. Where quantization tables are adjusted using a scaling factor, a limit on the scaling factor is established such that the same quantization table is used for each layer of the composite.

Patent
15 Nov 1999
TL;DR: In this article, the authors proposed a method for decoding coded image data divided into a plurality of blocks so as to generate decoded image data by applying an inverse orthogonal transformation to each block of the plurality.
Abstract: A device for decoding coded image data divided into a plurality of blocks so as to generate decoded image data by applying an inverse orthogonal transformation to each block of the plurality of blocks includes a quantization-information detecting unit (5) detecting block-quantization-step-size information indicative of quantization step sizes used for the plurality of blocks. The device also includes a variable-gain-low-pass filter (7) reducing high frequency components of the decoded image data in a predetermined proximity of borders of the plurality of blocks based on the block-quantization-step-size information, the high frequency components having frequencies higher than a predetermined frequency.

Proceedings ArticleDOI
28 Jun 1999
TL;DR: The FFT-BAQ outperforms the BAQ in terms of signal-to-quantization noise ratio and phase error and allows a direct decimation of the oversampled data equivalent to FIR-filtering in time domain.
Abstract: SAR raw data compression is necessary to reduce the huge amount of data for downlink and the required memory on board. In view of interferometric and polarimetric applications for SAR data it becomes more and more important to pay attention to phase errors caused by data compression. Here, a detailed comparison of block adaptive quantization in time domain (BAQ) and in frequency domain (FFT-BAQ) is given. Inclusion of raw data compression in the processing chain allows an efficient use of the FFT-BAQ and makes implementation for on-board data compression feasible. The FFT-BAQ outperforms the BAQ in terms of signal-to-quantization noise ratio and phase error and allows a direct decimation of the oversampled data equivalent to FIR-filtering in time domain. Impacts on interferometric phase and coherency are also given.
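
The structure of BAQ, normalize each block by its own statistics and then apply a fixed few-bit quantizer, can be sketched as below; real BAQ uses Lloyd-Max levels matched to Gaussian raw-data statistics, and FFT-BAQ applies the same step to the block's spectrum, neither of which is reproduced here.

```python
import numpy as np

def baq(block, bits=3):
    """Block adaptive quantization of one block of raw samples: normalize by
    the block's standard deviation, then quantize uniformly to 2**bits levels
    (a simplified, uniform stand-in for the Lloyd-Max quantizer)."""
    sigma = block.std() + 1e-12
    levels = 2 ** (bits - 1)
    q = np.clip(np.round(block / sigma * levels / 3.0), -levels, levels - 1)
    return q.astype(int), sigma          # quantized codes plus gain for the decoder

def debaq(codes, sigma, bits=3):
    levels = 2 ** (bits - 1)
    return codes * 3.0 * sigma / levels
```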

Journal ArticleDOI
TL;DR: This work presents a rate-distortion (RD) optimized JPEG compliant progressive encoder that produces a sequence of scans, ordered in terms of decreasing importance, and can achieve precise rate/distortion control.
Abstract: Among the many different modes of operations allowed in the current JPEG standard, the sequential and progressive modes are the most widely used. While the sequential JPEG mode yields essentially the same level of compression performance for most encoder implementations, the performance of progressive JPEG depends highly upon the designed encoder structure. This is due to the flexibility the standard leaves open in designing progressive JPEG encoders. In this work, a rate-distortion (RD) optimized JPEG compliant progressive encoder is presented that produces a sequence of scans, ordered in terms of decreasing importance. Our encoder outperforms an optimized sequential JPEG encoder in terms of compression efficiency, substantially at low and high bit rates. Moreover, unlike existing JPEG compliant encoders, our encoder can achieve precise rate/distortion control. Substantially better compression performance and precise rate control, provided by our progressive JPEG compliant encoding algorithm, are two highly desired features currently sought for the emerging JPEG-2000 standard.

Patent
15 Oct 1999
TL;DR: In this paper, an apparatus and method are provided wherein image data compression software displays the compressed image data (expanded image data) according to the JPEG method expanded by a JPEG expansion unit on a display unit 20 through an image display IF 18.
Abstract: This invention makes it possible for a user to know the characteristics of a quantization table used for image compression. An apparatus and method are provided wherein image data compression software displays the compressed image data (expanded image data), expanded by a JPEG expansion unit according to the JPEG method, on a display unit 20 through an image display IF 18. Moreover, the image data compression software 5 reflects all of the n×n quantization values contained in the quantization table (first quantization table) supplied from the block decoding section used to generate the input JPEG data, and displays the quantization index value to be indexed (the first quantization index value before the change) on the display unit 20.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A simple transform-coefficient sorting algorithm that enhances the performance of image compression techniques and breaks the dominant role played by the zero tree structure in image coding, and provides a low complexity solution to image compression.
Abstract: In this work we describe a simple transform-coefficient sorting algorithm that enhances the performance of image compression techniques. We use multiresolution grids to localize significant pixels and send out pixel values using successive approximation. In the wavelet domain our method performs slightly better than SPIHT (on average 0.1 dB in PSNR). In the DCT domain our method outperforms the SPIHT-based method and the significant tree quantization method by 1 dB. Our approach breaks the dominant role played by the zero tree structure in image coding, and provides a low-complexity solution to image compression.

Proceedings ArticleDOI
TL;DR: Experimental results indicate that high quality embedding is possible, with no visible distortions, and signature images can be recovered even when the embedded data is subject to significant lossy JPEG compression.
Abstract: A new technique for embedding image data that can be recovered in the absence of the original host image is presented. The data to be embedded, referred to as the signature data, is inserted into the host image in the DCT domain. The signature DCT coefficients are encoded using a lattice coding scheme before embedding. Each block of host DCT coefficients is first checked for its texture content, and the signature codes are appropriately inserted depending on a local texture measure. Experimental results indicate that high quality embedding is possible, with no visible distortions. Signature images can be recovered even when the embedded data is subject to significant lossy JPEG compression.

Journal ArticleDOI
TL;DR: Experimental results suggest that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel when it is used with bilinear interpolation and either error diffusion or ordered dithering.
Abstract: We describe a procedure by which Joint Photographic Experts Group (JPEG) compression may be customized for gray-scale images that are to be compressed before they are scaled, halftoned, and printed. Our technique maintains 100% compatibility with the JPEG standard, and is applicable with all scaling and halftoning methods. The JPEG quantization table is designed using frequency-domain characteristics of the scaling and halftoning operations, as well as the frequency sensitivity of the human visual system. In addition, the Huffman tables are optimized for low-rate coding. Compression artifacts are significantly reduced because they are masked by the halftoning patterns, and pushed into frequency bands where the eye is less sensitive. We describe how the frequency-domain effects of scaling and halftoning may be measured, and how to account for those effects in an iterative design procedure for the JPEG quantization table. We also present experimental results suggesting that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel (with reference to the number of pixels in the original image) when it is used with bilinear interpolation and either error diffusion or ordered dithering. Based on these results, we believe that in terms of the achieved bit rate, the performance of our encoder is typically at least 20% better than that of a JPEG encoder using the suggested baseline tables.