
Showing papers on "Quantization (image processing)" published in 1998


Journal ArticleDOI
TL;DR: This work shows how the complexity of computing the R-D data can be reduced without significantly reducing the performance of the optimization procedure, and proposes two methods which provide successive reductions in complexity.
Abstract: Digital video's increased popularity has been driven to a large extent by a flurry of international standards (MPEG-1, MPEG-2, H.263, etc.). In most standards, the rate control scheme, which plays an important role in improving and stabilizing the decoding and playback quality, is not defined, and thus different strategies can be implemented in each encoder design. Several rate-distortion (R-D)-based techniques have been proposed aimed at achieving the best possible quality for a given channel rate and buffer size. These approaches are complex because they require the R-D characteristics of the input data to be measured before making quantization assignment decisions. We show how the complexity of computing the R-D data can be reduced without significantly reducing the performance of the optimization procedure. We propose two methods which provide successive reductions in complexity by: (1) using models to interpolate the rate and distortion characteristics, and (2) using past frames instead of current ones to determine the models. Our first method is applicable to situations (e.g., broadcast video) where a long encoding delay is possible, while our second approach is more useful for computation-constrained interactive video applications. The first method can also be used to benchmark other approaches. Both methods can achieve over 1 dB peak signal-to-noise ratio (PSNR) gain over simple methods like the MPEG Test Model 5 (TM5) rate control, with even greater gains during scene change transitions. In addition, both methods make few a priori assumptions and provide robustness in their performance over a range of video sources and encoding rates. In terms of complexity, our first algorithm roughly doubles the encoding time as compared to simpler techniques (such as TM5). However, the complexity is greatly reduced as compared to methods which exactly measure the R-D data. Our second algorithm has a complexity marginally higher than TM5 and a PSNR performance slightly lower than that of the first approach.
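For illustration only, here is a minimal sketch of the model-based idea: fit a simple hyperbolic rate model R(q) ≈ a/q + b from a couple of trial encodings and pick the finest quantizer that meets the rate budget. The model form, the sample numbers, and the function names are assumptions, not the authors' implementation.

```python
# Illustrative sketch of model-based rate control (not the authors' code).
# Assumes a hyperbolic rate model R(q) ~ a/q + b; real encoders fit richer
# models per frame or per macroblock and also model distortion.
import numpy as np

def fit_rate_model(q_samples, rate_samples):
    """Least-squares fit of R(q) = a/q + b from a few measured (q, rate) points."""
    A = np.column_stack([1.0 / np.asarray(q_samples, float),
                         np.ones(len(q_samples))])
    a, b = np.linalg.lstsq(A, np.asarray(rate_samples, float), rcond=None)[0]
    return a, b

def pick_quantizer(a, b, rate_budget, q_candidates):
    """Finest quantizer whose predicted rate fits the budget (lowest distortion)."""
    for q in sorted(q_candidates):
        if a / q + b <= rate_budget:
            return q
    return max(q_candidates)  # fall back to the coarsest quantizer

if __name__ == "__main__":
    # Two hypothetical trial encodings instead of an exhaustive R-D sweep.
    a, b = fit_rate_model([4, 16], [120_000, 40_000])   # bits per frame
    print(pick_quantizer(a, b, rate_budget=60_000, q_candidates=range(2, 32)))
```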

296 citations


Patent
Yung-Lyul Lee, Hyun Wook Park
09 Oct 1998
TL;DR: In this article, an image data post-processing method for reducing quantization effect induced when image data compressed based on a block is decoded, and an apparatus therefor is presented.
Abstract: An image data post-processing method for reducing the quantization effect induced when image data compressed on a block basis are decoded, and an apparatus therefor. The image data post-processing method includes the steps of: (a) detecting a semaphore representing whether or not post-processing is required, using the distribution of inverse quantization coefficients of the inverse-quantized image data and a motion vector representing the difference between the blocks of a previous video object plane (VOP) and the blocks of a current VOP; and (b) filtering the decoded image data corresponding to the semaphore by a predetermined method, if it is determined by checking the detected semaphore that post-processing is required. Therefore, the quantization effect can be reduced by using the semaphore and an adaptive filter, and the amount of computation for filtering is also reduced. Also, the complexity of the hardware is reduced by a parallel process without multiplication and division.

154 citations


Patent
Asghar Nafarieh
26 Jan 1998
TL;DR: In this paper, pixel blocks of an input image are type classified based on an analysis of pixel values for each respective pixel block, and a quantization modification process thresholds and/or quantizes the resulting DCT coefficients based on the type classification of the respective pixel blocks.
Abstract: Pixel blocks of an input image are type classified based on an analysis of pixel values for each respective pixel block. A discrete cosine transform (DCT) is performed on the pixel values of each pixel block, and a quantization modification process thresholds and/or quantizes the resulting DCT coefficients based on the type classification of the respective pixel block. Once the coefficients are modified in this way and encoded, the resulting data can be decoded and dequantized in compliance with the standard JPEG sequential mode data syntax in order to construct a perceptually faithful representation of the image, without passing any additional information to the decoder concerning the quantization modification.

145 citations


Proceedings ArticleDOI
17 Jul 1998
TL;DR: A new video quality metric is described that is an extension of these still image metrics into the time domain and, like them, is based on the Discrete Cosine Transform; its memory and computation requirements are kept low so that it might be applied in the widest range of applications.
Abstract: The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.

134 citations


Proceedings ArticleDOI
04 Oct 1998
TL;DR: A perceptual model based on the texture masking and luminance masking properties of the human visual system is presented, and an adaptive JPEG coder built on it provides savings in bit-rate over baseline JPEG, with no overall loss in perceptual quality according to a subjective test.
Abstract: A perceptual model based on the texture masking and luminance masking properties of the human visual system is presented in this paper. The model computes a local multiplier map for scaling of the JPEG quantization matrix. The result is that fewer bits are used to represent the perceptually less important areas of the image. The texture masking model is based on a block classification algorithm to differentiate between the plain, edge, and texture blocks. An adaptive luminance masking scheme is used to adjust the luminance masking strategy depending on the image's mean luminance value. An adaptive JPEG coder based on the perceptual model is implemented. Experimental results show that the adaptive coder provides savings in bit-rate over baseline JPEG, with no overall loss in perceptual quality according to a subjective test.
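As a rough illustration of the kind of block classification and quantization-matrix scaling described above, the sketch below labels 8×8 blocks as plain, edge, or texture from simple variance and gradient statistics and scales the JPEG quantization matrix accordingly. The thresholds and multipliers are placeholders, not the paper's calibrated perceptual model.

```python
# Block classification and quantization-matrix scaling sketch; thresholds and
# multipliers are placeholders, not the paper's tuned perceptual model.
import numpy as np

def classify_block(block):
    """Label an 8x8 luminance block as 'plain', 'edge', or 'texture'."""
    var = block.var()
    gy, gx = np.gradient(block.astype(float))
    edge_energy = np.abs(gx).mean() + np.abs(gy).mean()
    if var < 20.0:
        return "plain"
    return "edge" if edge_energy > 12.0 else "texture"

def scaled_quant_matrix(block, base_table):
    """Coarser quantization for texture blocks, finer for plain and edge blocks."""
    multiplier = {"plain": 1.0, "edge": 1.2, "texture": 2.0}[classify_block(block)]
    return np.clip(np.round(base_table * multiplier), 1, 255).astype(np.uint8)
```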

109 citations


Journal ArticleDOI
TL;DR: The detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed and can recover high-quality JPEG images from the corresponding corrupted JPEG images at bit error rates up to approximately 0.4%.
Abstract: The detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed. The objective is to eliminate transmission errors in JPEG images. Here a transmission error may be either a single-bit error or a burst error containing N successive error bits. For an entropy-coded JPEG image, a single transmission error in a codeword will not only affect the underlying codeword, but may also affect subsequent codewords. Consequently, a single error in an entropy-coded system may result in a significant degradation. To cope with the synchronization problem, in the proposed approach the restart capability of JPEG images is enabled, i.e., the eight unique restart markers (synchronization codewords) are periodically inserted into the JPEG compressed image bitstream. Transmission errors in a JPEG image are sequentially detected both when the JPEG image is under decoding and after the JPEG image has been decoded. When a transmission error or equivalently a corrupted restart interval is detected, the proposed error correction approach simply performs a sequence of bit inversions and redecoding operations on the corrupted restart interval and selects the "best" feasible redecoding solution by using a proposed cost function for error correction. The proposed approach can recover high-quality JPEG images from the corresponding corrupted JPEG images at bit error rates (BERs) up to approximately 0.4%. This shows the feasibility of the proposed approach.

93 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel blocking artifacts reduction method that compensates for selected transform coefficients so that the resultant image has a minimum block boundary discontinuity.
Abstract: This paper proposes a novel blocking artifacts reduction method based on the notion that the blocking artifacts are caused by heavy accuracy loss of transform coefficients in the quantization process. We define the block boundary discontinuity measure as the sum of the squared differences of pixel values along the block boundary. The proposed method compensates for selected transform coefficients so that the resultant image has a minimum block boundary discontinuity. The proposed method does not require a particular transform domain where the compensation should take place; therefore, an appropriate transform domain can be selected at the user's discretion. In the experiments, the scheme is applied to DCT-based compressed images to show its performance.
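The discontinuity measure itself is simple to state in code; a small numpy sketch (assuming 8×8 blocks and a grayscale image) is given below.

```python
# Block boundary discontinuity: sum of squared pixel differences across block
# boundaries, as defined in the abstract. Assumes 8x8 blocks, grayscale input.
import numpy as np

def block_boundary_discontinuity(img, block=8):
    img = img.astype(float)
    # Differences across vertical block boundaries (between columns 7|8, 15|16, ...).
    v = img[:, block::block] - img[:, block - 1:-1:block]
    # Differences across horizontal block boundaries.
    h = img[block::block, :] - img[block - 1:-1:block, :]
    return float((v ** 2).sum() + (h ** 2).sum())

if __name__ == "__main__":
    blocky = np.kron(np.arange(16).reshape(4, 4), np.ones((8, 8)))  # toy blocky image
    print(block_boundary_discontinuity(blocky))
```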

79 citations


Patent
27 Oct 1998
TL;DR: In this article, an image compression system includes a source, a memory, a personality/graphics engine having a personality and a graphics engine, an image processor and a memory allocator.
Abstract: An image compression system includes a source, a memory, a personality/graphics engine having a personality and a graphics engine, a memory allocator and an image processor. The source is operative to supply digital data representative of images. The memory has a finite size for receiving the digital data. The personality is configured to interpret an input file and is operative to construct image patches from the digital data. The graphics engine is operative to generate a display list from the memory. The memory allocator is associated with the memory and is operative to allocate image patches. The image processor includes a JPEG compressor and a JPEG decompressor. The image processor is operative to render the display list into strip buffers. The JPEG compressor is operative to JPEG compress images on the display list. The JPEG decompressor is operative to decompress compressed images on the display list. The image processor is operative to uncompress the compressed patch data and copy each bit in the image patch into the strip buffers. A method is also disclosed.

61 citations


Patent
09 Nov 1998
TL;DR: In this article, a recursive reverse DWT reconstruction is performed on the desired subset of each image line from a compressed image stored on disk or received from continual transmission, for only as many levels as required to produce the required level-of-detail view for the image subset to be accessed or viewed.
Abstract: Image transformation and selective inverse transformation is implemented by performing DWT-based transformation on very large images using single or multi-CPU architectures without requiring large amounts of computer memory. The compression of a large image I(x,y) is accomplished by defining L = (log2(x)-log2(Filter_size)) DWT levels, each level Ln containing pre-allocated memory buffers sufficient to hold (Filter_size+1) lines of x/2n pixels length of DWT subbands. For each line of I(x,y), in a recursive fashion through levels L0 to Ln, the level DWT is computed and the level subband wavelets are compressed and stored or transmitted as required. The recursive DWT for each level of each line of I(x,y) results in a seamless DWT for I(x,y). The continual transmission or storage of compressed imagery occurs where it is desirable to transmit high volumes of imagery through limited transmission bandwidth or to limited disk storage, and is accomplished by compression and transmission or storage of each subband wavelet transformed image line using wavelet quantization and encoding before transmission or storage. The selective decompression of imagery is accomplished by a recursive reverse DWT reconstruction on the desired subset of each image line from the desired subset of lines from a compressed image stored on disk or received from continual transmission, for only as many levels as required to result in the required level of detail view for the image subset to be accessed or viewed.
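A small bookkeeping sketch of the buffer plan implied by the formula above (L = log2(x) - log2(Filter_size) levels, each holding Filter_size+1 lines of x/2^n pixels); the function name and example sizes are illustrative.

```python
# Buffer plan for the line-based DWT described above: L = log2(x) - log2(Filter_size)
# levels, each buffering (Filter_size + 1) lines of x / 2**n pixels.
import math

def dwt_buffer_plan(image_width, filter_size):
    levels = int(math.log2(image_width) - math.log2(filter_size))
    return [{"level": n,
             "lines": filter_size + 1,
             "pixels_per_line": image_width // 2 ** n}
            for n in range(levels)]

if __name__ == "__main__":
    for entry in dwt_buffer_plan(image_width=16384, filter_size=8):
        print(entry)
```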

53 citations


Proceedings ArticleDOI
D. Nister, C. Christopoulos
05 Jun 1998
TL;DR: An embedded discrete cosine transform-based (DCT-based) image coding algorithm is described that outperforms other DCT-based coders published in the literature, including the Joint Photographic Experts Group (JPEG) algorithm.
Abstract: An embedded DCT-based image coding algorithm is described. The decoder can cut the bitstream at any point and therefore reconstruct an image at a lower rate. The quality of the reconstructed image at this lower rate is the same as if the image had been coded directly at that rate. The algorithm outperforms other DCT-based coders published in the literature, including the JPEG algorithm. Moreover, our DCT-based embedded image coder gives results close to the best wavelet-based coders. The algorithm is very useful in various applications, such as the WWW, fast browsing of databases, etc.

53 citations


Proceedings ArticleDOI
Fan Ling, Weiping Li, Hongqiao Sun
28 Dec 1998
TL;DR: Bitplane coding of the DCT coefficients is a new coding scheme that provides a better performance than run_value coding under all conditions.
Abstract: In the current image and video coding standards, such as MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and JPEG, quantized DCT coefficients are entropy coded using a so-called run_value coding technique. A problem with this technique is that the statistics of the run_value symbols are highly dependent on the quantization step size and the dynamic range of the DCT coefficients. Therefore, a single fixed entropy coding table cannot achieve the optimal coding efficiency for all possible quantization step sizes and all possible dynamic ranges of the DCT coefficients. Bitplane coding of the DCT coefficients is a new coding scheme that overcomes this problem. It provides a better performance than run_value coding under all conditions.
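A minimal sketch of the bitplane view of a block of quantized coefficients (sign plus magnitude bitplanes, most significant first) is shown below; it illustrates only the decomposition, not the paper's entropy coder.

```python
# Decompose quantized DCT coefficients into a sign map and magnitude bitplanes
# (most significant plane first). Illustrates the bitplane view only.
import numpy as np

def to_bitplanes(coeffs):
    mags = np.abs(coeffs)
    signs = (coeffs < 0).astype(np.uint8)
    n_planes = int(mags.max()).bit_length() or 1
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(n_planes - 1, -1, -1)]
    return signs, planes

if __name__ == "__main__":
    block = np.array([[34, -5, 2, 0], [-3, 1, 0, 0], [2, 0, 0, 0], [0, 0, 0, 0]])
    signs, planes = to_bitplanes(block)
    print(len(planes), "bitplanes; top plane:")
    print(planes[0])
```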

Proceedings ArticleDOI
24 Jul 1998
TL;DR: In this article, a method for analyzing the effects of quantization, developed for temporal one-dimensional signals, is extended to two-dimensional radiographic images by calculating the probability density function for the second order statistics (the differences between nearest neighbor pixels) and utilizing its Fourier transform (the characteristic function).
Abstract: A method for analyzing the effects of quantization, developed for temporal one-dimensional signals, is extended to two-dimensional radiographic images. By calculating the probability density function for the second order statistics (the differences between nearest neighbor pixels) and utilizing its Fourier transform (the characteristic function), the effect of quantization on image statistics can be studied by the use of standard communication theory. The approach is demonstrated by characterizing the noise properties of a storage phosphor computed radiography system and the image statistics of a simple radiographic object (cylinder) and by comparing the model to experimental measurements. The role of quantization noise and the onset of contouring in image degradation are explained.
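A short sketch of that analysis pipeline, under the assumption of an 8-bit grayscale image: form nearest-neighbor pixel differences, estimate their probability density as a histogram, and take its Fourier transform as the characteristic function.

```python
# Second-order statistics of an image: histogram of nearest-neighbor pixel
# differences (a probability density estimate) and its Fourier transform,
# used as the characteristic function. Bin edges assume 8-bit data.
import numpy as np

def difference_characteristic_function(img):
    img = img.astype(float)
    diffs = np.concatenate([(img[:, 1:] - img[:, :-1]).ravel(),
                            (img[1:, :] - img[:-1, :]).ravel()])
    hist, _ = np.histogram(diffs, bins=np.arange(-255.5, 256.5), density=True)
    return np.fft.fftshift(np.fft.fft(hist))  # characteristic function estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.clip(rng.normal(128, 5, size=(64, 64)), 0, 255)
    print(abs(difference_characteristic_function(noisy)).max())
```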

Proceedings ArticleDOI
Aria Nosratinia
07 Dec 1998
TL;DR: This approach simply re-applies JPEG to the shifted versions of the already-compressed image, and forms an average, which offers better performance than other known methods, including those based on nonlinear filtering, POCS, and redundant wavelets.
Abstract: A novel method is proposed for post-processing of JPEG-encoded images, in order to reduce coding artifacts and enhance visual quality. Our method simply re-applies JPEG to the shifted versions of the already-compressed image, and forms an average. This approach, despite its simplicity, offers better performance than other known methods, including those based on nonlinear filtering, POCS, and redundant wavelets.
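A compact sketch of the re-application idea using Pillow: shift the decoded image, run it through a JPEG compress/decompress cycle, undo the shift, and average the results. The shift set and quality setting are assumptions.

```python
# Re-apply JPEG to shifted copies of an already-decoded image and average the
# results (the deblocking idea described above). Shift set and quality are
# assumptions; Pillow provides the JPEG round trips.
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(arr, quality=75):
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)

def reapplication_deblock(decoded, shifts=((0, 0), (2, 2), (4, 4), (6, 6)), quality=75):
    acc = np.zeros(decoded.shape, dtype=np.float64)
    for dy, dx in shifts:
        shifted = np.roll(decoded, (dy, dx), axis=(0, 1))
        rec = jpeg_roundtrip(shifted.astype(np.uint8), quality)
        acc += np.roll(rec, (-dy, -dx), axis=(0, 1))   # undo the shift
    return np.clip(acc / len(shifts), 0, 255).astype(np.uint8)
```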

Proceedings ArticleDOI
12 May 1998
TL;DR: A simulation on continuous-tone still images shows that the lossless and lossy compression efficiencies of RDCT are comparable to those obtained with reversible wavelet transform.
Abstract: In this paper a reversible discrete cosine transform (RDCT) is presented. The N-point reversible transform is firstly presented, then the 8-point RDCT is obtained by substituting the 2 and 4-point reversible transforms for the 2 and 4-point transforms which compose the 8-point discrete cosine transform (DCT), respectively. The integer input signal can be losslessly recovered, although the transform coefficients are integer numbers. If the floor functions are ignored in RDCT, the transform is exactly the same as DCT with determinant=1. RDCT is also normalized so that we can avoid the problem that dynamic range is nonuniform. A simulation on continuous-tone still images shows that the lossless and lossy compression efficiencies of RDCT are comparable to those obtained with reversible wavelet transform.
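The core mechanism, a lifting step with a floor so that integer coefficients remain exactly invertible, can be shown with the classic 2-point S-transform (reversible Haar); this is only the building-block idea, not the paper's full 8-point RDCT.

```python
# 2-point reversible integer transform (S-transform / reversible Haar): a
# floor-based lifting step keeps integer coefficients exactly invertible.
# Building-block illustration only, not the paper's 8-point RDCT.

def forward_s(x0, x1):
    d = x0 - x1          # difference (high-pass)
    s = x1 + (d >> 1)    # "average" using a floor (low-pass)
    return s, d

def inverse_s(s, d):
    x1 = s - (d >> 1)    # undo the lifting step with the same floor
    x0 = d + x1
    return x0, x1

if __name__ == "__main__":
    for pair in [(7, 3), (-5, 12), (0, 255)]:
        assert inverse_s(*forward_s(*pair)) == pair
    print("all pairs recovered exactly")
```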

Journal ArticleDOI
TL;DR: A novel algorithm for speeding up codebook design in image vector quantization that exploits the correlation among the pixels in an image block to reduce the computational complexity of calculating the squared Euclidean distortion measures.
Abstract: This paper presents a novel algorithm for speeding up codebook design in image vector quantization. The algorithm exploits the correlation among the pixels in an image block to reduce the computational complexity of calculating the squared Euclidean distortion measures, and uses the similarity between the codevectors in consecutive codebooks during the iterative clustering process to reduce the number of codevectors that must be checked in each codebook search. Test results show that the proposed algorithm can provide almost a 98% reduction in execution time compared to the conventional Linde-Buzo-Gray (LBG) algorithm.
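One standard trick in this family is the partial distance search, sketched below: a codevector is abandoned as soon as its accumulated squared error exceeds the best distance found so far. The names are illustrative; the paper adds further reductions based on pixel correlation and on codebook similarity across iterations.

```python
# Partial distance search for VQ: stop accumulating a codevector's squared
# error once it exceeds the best distance so far (early rejection).
import numpy as np

def nearest_codevector(x, codebook):
    best_idx, best_dist = -1, float("inf")
    for i, c in enumerate(codebook):
        dist = 0.0
        for xj, cj in zip(x, c):
            dist += (xj - cj) ** 2
            if dist >= best_dist:        # early rejection
                break
        else:                            # loop ran to completion: new best match
            best_idx, best_dist = i, dist
    return best_idx, best_dist

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    codebook = rng.integers(0, 256, size=(256, 16)).astype(float)
    block = rng.integers(0, 256, size=16).astype(float)
    print(nearest_codevector(block, codebook))
```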

Patent
28 Sep 1998
TL;DR: Multi-threshold wavelet coding (MTWC) as discussed by the authors utilizes a separate quantization threshold for each subband generated by the wavelet transform, which substantially reduces the number of insignificant bits generated during the initial quantization steps.
Abstract: A system and method for performing image compression using multi-threshold wavelet coding (MTWC), which utilizes a separate initial quantization threshold for each subband generated by the wavelet transform and thereby substantially reduces the number of insignificant bits generated during the initial quantization steps. Further, the MTWC system chooses the order in which the subbands are encoded according to a preferred rate-distortion tradeoff in order to enhance the image fidelity. Moreover, the MTWC system utilizes a novel quantization sequence order to optimize the amount of error energy reduction in the significance and refinement maps generated during the quantization step. Thus, the MTWC system provides a better bit rate-distortion tradeoff and performs faster than existing state-of-the-art wavelet coders.

Proceedings ArticleDOI
D. Nister, C. Christopoulos
04 Oct 1998
TL;DR: An embedded wavelet based image coding algorithm is described that allows certain regions of the image to be coded losslessly so that they can be exactly recovered by the decoder, while the remaining part is coded in a lossy manner.
Abstract: In this paper, an embedded wavelet based image coding algorithm is described. The algorithm allows certain regions of the image to be coded losslessly so that they can be exactly recovered by the decoder, while the remaining part is coded in a lossy manner. This maintains high compression while meeting the requirement of having lossless regions of interest (ROIs) in certain applications, like medical imaging. All coding, regional and full image, is done in a naturally progressive way all the way up to lossless.

Journal ArticleDOI
TL;DR: New schemes to reduce the computation of the discrete cosine transform (DCT) with negligible peak-signal-to-noise ratio (PSNR) degradation and a method to approximate the DCT coefficients which leads to significant computation savings are presented.
Abstract: This paper presents new schemes to reduce the computation of the discrete cosine transform (DCT) with negligible peak-signal-to-noise ratio (PSNR) degradation. The methods can be used in the software implementation of current video standard encoders, for example, H.26x and MPEG. We investigated the relationship between the quantization parameters and the position of the last nonzero DCT coefficient after quantization. That information is used to adaptively make the decision of calculating all 8×8 DCT coefficients or only part of the coefficients. To further reduce the computation, instead of using the exact DCT coefficients, we propose a method to approximate the DCT coefficients which leads to significant computation savings. The results show that for practical situations, significant computation reductions can be achieved while causing negligible PSNR degradation. The proposed method also results in computation savings in the quantization calculations.
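A sketch of the kind of saving being described: if the quantization step implies that only the top-left K×K coefficients can survive, compute just those with a truncated DCT basis. The mapping from quantization parameter to K is a placeholder, not the paper's decision rule.

```python
# Compute only the top-left K x K coefficients of an 8x8 DCT using a truncated
# basis. The choice of K from the quantization step is a placeholder.
import numpy as np

def dct_basis(n=8):
    x = np.arange(n)
    u = np.arange(n)
    basis = np.sqrt(2.0 / n) * np.cos((2 * x[None, :] + 1) * u[:, None] * np.pi / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)        # DC row
    return basis                          # rows are orthonormal DCT-II basis vectors

def partial_dct(block, keep):
    c = dct_basis(block.shape[0])[:keep]  # only the first `keep` basis rows
    return c @ block @ c.T                # keep x keep low-frequency coefficients

if __name__ == "__main__":
    block = np.arange(64, dtype=float).reshape(8, 8)
    keep = 4                              # e.g. derived from the quantization step
    print(partial_dct(block, keep).shape) # (4, 4)
```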

Proceedings ArticleDOI
06 Nov 1998
TL;DR: This paper combines both watermarking paradigms to design an oblivious watermark that is capable of surviving an extremely wide range of severe image distortions.
Abstract: Low-frequency watermarks and watermarks generated using spread spectrum techniques have complementary robustness properties. In this paper, we combine both watermarking paradigms to design an oblivious watermark that is capable of surviving an extremely wide range of severe image distortions. An image is first watermarked with a low-frequency pattern and then a spread spectrum signal is added to the watermarked image. Since the two watermarks are embedded in different portions of the frequency space, they do not interfere. For the low-frequency watermark, we modify the NEC scheme so that the original image is not needed for watermark extraction. The image is first normalized and the watermark is embedded into the lowest frequency discrete cosine modes by encoding a binary pattern using a special quantization-like function. The low-frequency watermark is combined with a spread spectrum signal added to the middle frequencies of a DCT. The resulting double-watermarked image is extremely robust with respect to a very wide range of quite severe image distortions, including low-pass filtering, pixel permutations, JPEG compression, noise adding, and nonlinear deformations of the signal, such as gamma correction, histogram manipulations, and dithering.

Journal ArticleDOI
TL;DR: A new adaptive page segmentation method is proposed to extract text blocks from various types of color technical journal cover images, speeding up processing and reducing the processing complexity on true-color images.

Proceedings ArticleDOI
04 Oct 1998
TL;DR: A modification of the discrete cosine transform (DCT) is introduced that produces integer coefficients from which the original image data can be reconstructed losslessly, together with an embedded coding scheme that incorporates this lossless DCT and some experimental rate-distortion curves for the scheme.
Abstract: This paper introduces a modification of the discrete cosine transform (DCT) that produces integer coefficients from which the original image data can be reconstructed losslessly. It describes an embedded coding scheme which incorporates this lossless DCT and presents some experimental rate-distortion curves for this scheme. The results show that the lossless compression ratio of the proposed scheme exceeds that of the lossless JPEG predictive coding scheme. On the other hand, in lossy operation the rate-distortion curve of the proposed scheme is very close to that of lossy JPEG. Also, the transform coefficients of the proposed scheme can be decoded with the ordinary DCT at the expense of a small error, which is only significant in lossless operation.

Patent
26 Feb 1998
TL;DR: In this article, the authors proposed a multi-resolution video encoding system, which improves the computational efficiency associated with encoding a video sequence in two or more different resolutions, such as H.261, H.263, Motion-JPEG, MPEG-1 and MPEG-2.
Abstract: The invention provides a multi-resolution video encoding system which improves the computational efficiency associated with encoding a video sequence in two or more different resolutions. An illustrative embodiment includes a first encoder for encoding the sequence at a first resolution, and a second encoder for encoding the sequence at a second resolution higher than the first resolution. Information obtained from encoding the sequence at the first resolution is used to provide rate control for the sequence at the second resolution. This information may include, for example, a relationship between a quantization parameter selected for an image at the first resolution and a resultant output bitrate generated by encoding the image using the selected quantization parameter. The invention can be used with a variety of video encoding standards, including H.261, H.263, Motion-JPEG, MPEG-1 and MPEG-2.

Proceedings ArticleDOI
04 Oct 1998
TL;DR: Some of the Golomb-Rice coding techniques that emerged in JPEG-LS, the new standard for lossless image compression, are applied to coding DCT coefficients in the lossy JPEG baseline algorithm, yielding significant improvements in performance with limited impact on computational complexity.
Abstract: In this paper we take some of the Golomb-Rice coding techniques that emerged in JPEG-LS, a new standard for lossless image compression, and apply them to coding DCT coefficients in the lossy JPEG baseline algorithm. We show that this results in significant improvements in performance with limited impact on computational complexity. In fact, one significant reduction in complexity provided by the proposed techniques is the complete elimination of the Huffman tables in JPEG baseline, which can be a bottleneck in hardware implementations. We give simulation results comparing the performance of the proposed technique to JPEG baseline, JPEG baseline with optimal Huffman coding (two-pass), and JPEG arithmetic coding.
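To make the coding primitive concrete, here is a minimal Golomb-Rice encoder/decoder for a nonnegative integer with Rice parameter k (unary quotient plus k low-order bits); the signed-to-unsigned mapping of DCT coefficients and JPEG-LS's adaptive choice of k are not shown.

```python
# Golomb-Rice code for a nonnegative integer n with Rice parameter k:
# quotient n >> k in unary, then the k low-order bits of n.

def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    q = bits.index("0")                   # unary part: count of leading 1s
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

if __name__ == "__main__":
    for n in (0, 3, 9, 42):
        code = rice_encode(n, k=2)
        assert rice_decode(code, k=2) == n
        print(n, "->", code)
```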

Journal ArticleDOI
TL;DR: The CREW system and format is described, how the correct data can be quickly extracted from a CREW file to support a variety of target devices is shown, and the mechanisms needed for panning, zooming, and fixed-size compression are described.
Abstract: As the applications of digital imagery expand in resolution and pixel fidelity there is a greater need for more efficient compression and extraction of images and subimages. No longer is it sufficient to compress and decompress an image for a specific target device. The ability to handle many types of image data, extract images at different resolutions and quality, lossless and lossy, zoom and pan, and extract regions-of-interest are the new measures of image compression system performance. Compression with reversible embedded wavelets (CREW) is a high-quality image compression system that is progressive from high compression to lossless, and pyramidal in resolution. CREW supports regions-of-interest, and multiple image types, such as bi-level and continuous-tone. This paper describes the CREW system and format, shows how the correct data can be quickly extracted from a CREW file to support a variety of target devices, describes the mechanisms needed for panning, zooming, and fixed-size compression, and explains the superior performance on bi-level and graphic images.

Journal ArticleDOI
TL;DR: A blockwise distortion measure is proposed for evaluating the visual quality of compressed images, which outperforms PQS for a set of test images, and is much simpler to implement.
Abstract: A blockwise distortion measure is proposed for evaluating the visual quality of compressed images. The proposed measure calculates quantitatively how well important visual properties have been preserved in the distorted image. The method consists of three quality factors detecting contrast errors, structural errors, and quantization errors. The proposed method outperforms PQS for a set of test images, and is much simpler to implement. The method should also be applicable to color images; properties like color richness and saturation are captured by the quantization and contrast measures, respectively.

Journal ArticleDOI
TL;DR: This correspondence addresses the use of a joint source-channel coding strategy for enhancing the error resilience of images transmitted over a binary channel with additive Markov noise via a maximum a posteriori (MAP) channel detector.
Abstract: This article addresses the use of a joint source-channel coding strategy for enhancing the error resilience of images transmitted over a binary channel with additive Markov noise. In this scheme, inherent or residual (after source coding) image redundancy is exploited at the receiver via a maximum a posteriori (MAP) channel detector. This detector, which is optimal in terms of minimizing the probability of error, also exploits the larger capacity of the channel with memory as opposed to the interleaved (memoryless) channel. We first consider MAP channel decoding of uncompressed two-tone and bit-plane encoded grey-level images. Next, we propose a scheme relying on unequal error protection and MAP detection for transmitting grey-level images compressed using the discrete cosine transform (DCT), zonal coding, and quantization. Experimental results demonstrate that for various overall (source and channel) operational rates, significant performance improvements can be achieved over interleaved systems that do not incorporate image redundancy.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a method to simultaneously quantize and dither color images based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception.
Abstract: Image quantization and digital halftoning are fundamental problems in computer graphics, which arise when displaying high-color images on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient optimization algorithm based on a multiscale method is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithm are evaluated on a representative set of artificial and real-world images as well as on a collection of icons. A significant image quality improvement is observed compared to standard color reduction approaches.

Proceedings ArticleDOI
07 Dec 1998
TL;DR: The threshold of the selected subband is used as one of the energy weighting factors in the generation of a broadband watermark so that it cannot be easily damaged by frequency-selective filtering, DCT or wavelet based compression attack.
Abstract: A new scheme to search perceptually significant wavelet coefficients for effective digital watermark casting is proposed in this research. Unlike other watermark casting algorithms, which select a fixed set of DCT or wavelet coefficients in the frequency domain, we use an adaptive method to find significant subbands and a number of coefficients in these subbands. The resulting method is image dependent. Furthermore, the threshold of the selected subband is used as one of the energy weighting factors in the generation of a broadband watermark so that it cannot be easily damaged by frequency-selective filtering, DCT or wavelet based compression attack.

Patent
22 Dec 1998
TL;DR: In this paper, a regenerative image cutting its high frequency component can be provided by a controller corresponding to the resolution of display device, since the highfrequency component has already been cut in the reproduced image, no color noise or more appears in the subsample result.
Abstract: PROBLEM TO BE SOLVED: To generate low resolution image data with reduced color noises or moires at high speed from a compressed image data, in the case of an electronic still camera or the like. SOLUTION: When displaying the compressed image data stored in a secondary memory 26 onto a display device 28 having resolution lower than that of a CCD image sensor 12, the data of quantization group turning the quantization coefficient of high-frequency component to '0' are supplied to a joint photographic coding experts group(JPEG) compressor/expander 22 and JPEG expansion is performed. Through this expansion processing, a regenerative image cutting its high frequency component can be provided. Sub-sampling is performed to this regenerative image by a controller 16 corresponding to the resolution of display device 28. Since the high-frequency component has already been cut in the reproduced image, no color noise or more appears in the subsample result. Since the effect of smoothing can be provided on this device by expansion processing, it is not necessary to separately perform the smoothing processing and processing time can be shortened greatly.
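A rough sketch of the effect being exploited, using scipy's DCT as a stand-in for the JPEG expander: zero the high-frequency coefficients of each 8×8 block before the inverse transform, so the reconstruction is already low-pass filtered and can be subsampled directly. The cutoff and subsampling factor are placeholders.

```python
# Zero high-frequency DCT coefficients of each 8x8 block before the inverse
# transform, then subsample the already low-pass-filtered reconstruction.
# scipy's DCT stands in for the JPEG expander; cutoff is a placeholder.
import numpy as np
from scipy.fft import dctn, idctn

def lowfreq_reconstruct(img, keep=2, block=8):
    # Assumes a grayscale image whose dimensions are multiples of the block size.
    out = np.empty(img.shape, dtype=float)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            coeffs = dctn(img[y:y + block, x:x + block].astype(float), norm="ortho")
            coeffs[keep:, :] = 0.0        # drop high vertical frequencies
            coeffs[:, keep:] = 0.0        # drop high horizontal frequencies
            out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
    return out[::block // keep, ::block // keep]   # subsample for the low-res display
```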

Proceedings ArticleDOI
22 Apr 1998
TL;DR: Experimental results show that the watermarked image remains perceptually transparent even when large amounts of data are hidden, and the quality of the extracted signature is high even when the watermarked image is subjected to up to 75% wavelet compression and 85% JPEG lossy compression.
Abstract: Describes a data hiding technique which uses noise-resilient channel codes based on multidimensional lattices. A trade-off between between the quantity of hidden data and the quality of the watermarked image is achieved by varying the number of quantization levels for the signature and a scale factor for data embedding. Experimental results show that the watermarked image is transparent to embedding for large amounts of hidden data, and the quality of the extracted signature is high even when the watermarked image is subjected to up to 75% wavelet compression and 85% JPEG lossy compression. These results can be combined with a private key-based scheme to make unauthorized retrieval practically impossible, even with the knowledge of the algorithm.