
Showing papers on "Quantization (image processing)" published in 2000


Journal ArticleDOI
TL;DR: This paper investigates the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval, and compares the retrieval performance of the EMD with that of other distances.
Abstract: We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances.
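
As a rough illustration of the transportation-problem formulation that the EMD rests on, the sketch below solves it with scipy.optimize.linprog for two tiny signatures of equal mass; the cluster positions, weights, and the Euclidean ground distance are made-up assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def emd(x_pos, x_w, y_pos, y_w):
    """Earth Mover's Distance between two signatures (positions + weights)
    via the transportation LP; assumes equal total mass."""
    n, m = len(x_w), len(y_w)
    # ground distance: Euclidean distance between cluster centers
    cost = np.linalg.norm(x_pos[:, None, :] - y_pos[None, :, :], axis=2)
    c = cost.ravel()                      # objective: sum_ij f_ij * d_ij
    # row sums match supplier weights, column sums match consumer weights
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([x_w, y_w])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun / x_w.sum()            # normalize by total flow

# toy 3-D color signatures (e.g. cluster centers from vector quantization)
d = emd(np.array([[0., 0, 0], [1, 1, 1]]), np.array([0.5, 0.5]),
        np.array([[0., 0, 1], [1, 1, 0]]), np.array([0.5, 0.5]))
print(d)
```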

4,593 citations


Proceedings ArticleDOI
01 Feb 2000
TL;DR: This work develops a compressed index, called the IQ-tree, with a three-level structure, and develops a cost model and an optimization algorithm based on this cost model that permits an independent determination of the degree of compression for each second level page to minimize expected query cost.
Abstract: Two major approaches have been proposed to efficiently process queries in databases: speeding up the search by using index structures, and speeding up the search by operating on a compressed database, such as a signature file. Both approaches have their limitations: indexing techniques are inefficient in extreme configurations, such as high-dimensional spaces, where even a simple scan may be cheaper than an index-based search. Compression techniques are not very efficient in all other situations. We propose to combine both techniques to search for nearest neighbors in a high-dimensional space. For this purpose, we develop a compressed index, called the IQ-tree, with a three-level structure: the first level is a regular (flat) directory consisting of minimum bounding boxes, the second level contains data points in a compressed representation, and the third level contains the actual data. We overcome several engineering challenges in constructing an effective index structure of this type. The most significant of these is to decide how much to compress at the second level. Too much compression will lead to many needless expensive accesses to the third level. Too little compression will increase both the storage and the access cost for the first two levels. We develop a cost model and an optimization algorithm based on this cost model that permits an independent determination of the degree of compression for each second level page to minimize expected query cost. In an experimental evaluation, we demonstrate that the IQ-tree shows a performance that is the "best of both worlds" for a wide range of data distributions and dimensionalities.
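
The second-level idea can be pictured as scalar quantization of the points inside a bounding box plus a lower-bound distance test that decides whether the exact (third-level) point must be read at all. The sketch below is only an illustration under assumed parameters (4 bits per dimension, random data); the IQ-tree's actual page layout, cost model, and per-page compression choice are not reproduced.

```python
import numpy as np

def quantize(points, lo, hi, bits):
    """Grid-quantize points inside the bounding box [lo, hi] to `bits` per dimension."""
    levels = (1 << bits) - 1
    return np.round((points - lo) / (hi - lo) * levels).astype(np.uint32)

def dequant_bounds(cells, lo, hi, bits):
    """Each quantized cell stands for an interval per dimension."""
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    centers = lo + cells * step
    return centers - step / 2, centers + step / 2

def min_dist_sq(query, cell_lo, cell_hi):
    """Lower bound on squared distance from the query to any point in a cell;
    if it already exceeds the current k-NN radius, the exact point
    (third level) never needs to be fetched."""
    d = np.maximum(cell_lo - query, 0) + np.maximum(query - cell_hi, 0)
    return np.sum(d * d, axis=1)

rng = np.random.default_rng(0)
pts = rng.random((1000, 16))
lo, hi = pts.min(0), pts.max(0)
cells = quantize(pts, lo, hi, bits=4)          # 4 bits/dim instead of 32-bit floats
c_lo, c_hi = dequant_bounds(cells, lo, hi, 4)
print(min_dist_sq(rng.random(16), c_lo, c_hi)[:5])
```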

303 citations


Proceedings ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed a semi-fragile watermarking technique that accepts JPEG lossy compression on the watermarked image to a pre-determined quality factor, and rejects malicious attacks.
Abstract: In this paper, we propose a semi-fragile watermarking technique that accepts JPEG lossy compression on the watermarked image to a pre-determined quality factor, and rejects malicious attacks. The authenticator can identify the positions of corrupted blocks, and recover them with approximation of the original ones. In addition to JPEG compression, adjustments of the brightness of the image within reasonable ranges are also acceptable using the proposed authenticator. The security of the proposed method is achieved by using the secret block mapping function which controls the signature generating/embedding processes. Our authenticator is based on two invariant properties of DCT coefficients before and after JPEG compressions. They are deterministic so that no probabilistic decision is needed in the system. The first property shows that if we modify a DCT coefficient to an integral multiple of a quantization step, which is larger than the steps used in later JPEG compressions, then this coefficient can be exactly reconstructed after later acceptable JPEG compression. The second one is the invariant relationships between two coefficients in a block pair before and after JPEG compression. Therefore, we can use the second property to generate the authentication signature, and use the first property to embed it as watermarks. There is no perceptible degradation between the watermarked image and the original. In addition to authentication signatures, we can also embed the recovery bits for recovering approximate pixel values in corrupted areas. Our authenticator utilizes the compressed bitstream, and thus avoids rounding errors in reconstructing DCT coefficients. Experimental results showed the effectiveness of this system. The system also guarantees no false alarms, i.e., no acceptable JPEG compression is rejected.
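
The first invariant property lends itself to a quick numeric check. The toy sketch below pre-quantizes coefficients to multiples of an embedding step Q_embed and verifies that they survive a later, finer JPEG quantization exactly; the step values are arbitrary, and the paper's signature generation, block mapping, and recovery-bit machinery are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
Q_EMBED = 16          # step used when embedding (pre-quantization)
Q_JPEG = 10           # any later acceptable JPEG step not larger than Q_EMBED

# embed: force DCT coefficients onto multiples of Q_EMBED
# (the parity of the multiple could carry one signature bit, as an illustration)
coeffs = rng.uniform(-200, 200, 10000)
embedded = np.round(coeffs / Q_EMBED) * Q_EMBED

# later acceptable JPEG compression: quantize/dequantize with Q_JPEG
after_jpeg = np.round(embedded / Q_JPEG) * Q_JPEG

# extraction: re-quantize with Q_EMBED; the embedded multiples survive exactly
recovered = np.round(after_jpeg / Q_EMBED) * Q_EMBED
print(np.all(recovered == embedded))   # True whenever Q_JPEG <= Q_EMBED
```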

258 citations


Patent
Navin Chaddha
23 Mar 2000
TL;DR: A multimedia compression system is presented for generating frame rate scalable data in the case of video and, more generally, universally scalable data, i.e., data that is scalable across all of its relevant characteristics.
Abstract: A multimedia compression system for generating frame rate scalable data in the case of video, and, more generally, universally scalable data. Universally scalable data is scalable across all of the relevant characteristics of the data. In the case of video, these characteristics include frame rate, resolution, and quality. The scalable data generated by the compression system is comprised of multiple additive layers for each characteristic across which the data is scalable. In the case of video, the frame rate layers are additive temporal layers, the resolution layers are additive base and enhancement layers, and the quality layers are additive index planes of embedded codes. Various techniques can be used for generating each of these layers (e.g., Laplacian pyramid decomposition or wavelet decomposition for generating the resolution layers; tree structured vector quantization or tree structured-scalar quantization for generating the quality layers). The compression system further provides for embedded inter-frame compression in the context of frame rate scalability, and non-redundant layered multicast network delivery of the scalable data.
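
As a generic illustration of additive resolution layers of the Laplacian-pyramid kind mentioned above (not the patent's codec), a minimal decomposition/reconstruction sketch might look like the following; the Gaussian filter and bilinear resampling are arbitrary choices.

```python
import numpy as np
from scipy import ndimage

def laplacian_pyramid(img, levels=3):
    """Additive resolution layers: per-level detail (enhancement) images plus a coarse base."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = ndimage.gaussian_filter(cur, sigma=1.0)
        down = low[::2, ::2]
        up = ndimage.zoom(down, 2, order=1)[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)            # enhancement (detail) layer
        cur = down
    pyr.append(cur)                     # base layer
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = ndimage.zoom(cur, 2, order=1)[:detail.shape[0], :detail.shape[1]]
        cur = cur + detail
    return cur

img = np.random.default_rng(2).random((64, 64))
print(np.allclose(reconstruct(laplacian_pyramid(img)), img))   # layers add back exactly
```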

245 citations


Proceedings Article
01 Sep 2000
TL;DR: In this paper, the performance of JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG, was evaluated by comparing the principles behind each algorithm.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper puts into perspective the performance of these by evaluating JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG. The study concentrates on compression efficiency, although complexity and set of supported functionalities are also evaluated. Lossless compression efficiency as well as the lossy rate-distortion behavior is discussed. The principles behind each algorithm are briefly described and an outlook on the future of image coding is given. The results show that the choice of the “best” standard depends strongly on the application at hand.

197 citations


Journal ArticleDOI
TL;DR: This work describes the implementation of a discrete cosine transform core compression system targeted to low-power video (MPEG2 MP@ML) and still-image (JPEG) applications and exhibits two innovative techniques for arithmetic operation reduction in the DCT computation context along with standard voltage scaling techniques.
Abstract: This work describes the implementation of a discrete cosine transform (DCT) core compression system targeted to low-power video (MPEG2 MP@ML) and still-image (JPEG) applications. It exhibits two innovative techniques for arithmetic operation reduction in the DCT computation context along with standard voltage scaling techniques such as pipelining and parallelism. The first method dynamically minimizes the bitwidth of arithmetic operations in the presence of data spatial correlation. The second method trades off power dissipation and image compression quality (arithmetic precision). The chip dissipates 4.38 mW at 14 MHz and 1.56 V.
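
The first technique exploits the fact that spatially correlated pixel data needs fewer significant bits once the correlation is removed, so narrower arithmetic suffices most of the time. The sketch below only illustrates that bitwidth argument on a synthetic correlated row; it is an assumption-level illustration, not the chip's MSB-rejection datapath.

```python
import numpy as np

def bits_needed(values):
    """Two's-complement width needed to hold every value (sign bit included)."""
    return max(int(abs(int(v))).bit_length() for v in np.ravel(values)) + 1

rng = np.random.default_rng(11)
# a spatially correlated row of pixels (smooth image content)
row = np.cumsum(rng.integers(-3, 4, 64)) + 128
diff = np.diff(row)          # exploiting correlation shrinks the operand range

print(bits_needed(row), bits_needed(diff))   # e.g. 9 bits vs. about 3 bits per operand
```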

122 citations


Proceedings ArticleDOI
David A. Clunie
18 May 2000
TL;DR: In this article, JPEG-LS and JPEG 2000 were evaluated on a set of single-frame grayscale images from multiple anatomical regions, modalities, and vendors, and the results showed that both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous pixel prediction as most commonly used in DICOM).
Abstract: Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1), and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single frame grayscale images from multiple anatomical regions, modalities and vendors, were tested. For all images combined JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality for which JPEG-LS did better (MG digital vendor A JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state of the art performance, regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.

121 citations


Proceedings ArticleDOI
28 Dec 2000
TL;DR: Evaluating JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG, shows that the choice of the “best” standard depends strongly on the application at hand.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper compares the set of features offered by JPEG 2000, and how well they are fulfilled, versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG and more recent PNG. The study concentrates on compression efficiency and functionality set, while addressing other aspects such as complexity. Lossless compression efficiency as well as the fixed and progressive lossy rate-distortion behaviors are evaluated. Robustness to transmission errors, Region of Interest coding and complexity are also discussed. The principles behind each algorithm are briefly described. The results show that the choice of the "best" standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.

119 citations


Proceedings Article
01 Sep 2000
TL;DR: A new algorithm able to achieve a one-to-one mapping of a digital image onto another image that takes into account the limited resolution and the limited quantization of the pixels is presented.
Abstract: This paper presents a new algorithm able to achieve a one-to-one mapping of a digital image onto another image. This mapping takes into account the limited resolution and the limited quantization of the pixels. The mapping is achieved in a multiresolution way. Performing small modifications on the statistics of the details allows building a lossless, i.e. reversible, watermarking-based authentication procedure.

109 citations


Journal ArticleDOI
TL;DR: The problem of evaluating the uncertainty that characterises discrete Fourier transform output data is dealt with, using a method based on a ‘white box’ theoretical approach, which can be particularly useful for any designer and user of DFT-based instruments.
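
The paper's closed-form "white box" expressions are not reproduced here, but the underlying propagation idea can be illustrated: white input quantization noise of variance q^2/12 shows up in each bin of an unnormalized N-point DFT with average power N*q^2/12. A Monte Carlo check under assumed signal and step values:

```python
import numpy as np

N, q, trials = 256, 1e-2, 4000
rng = np.random.default_rng(3)
err_power = 0.0
for _ in range(trials):
    phase = rng.uniform(0, 2 * np.pi)
    x = np.sin(2 * np.pi * 13 * np.arange(N) / N + phase)   # test signal
    xq = np.round(x / q) * q                                 # uniform quantizer, step q
    err_power += np.mean(np.abs(np.fft.fft(xq) - np.fft.fft(x)) ** 2)

# average per-bin error power vs. the white-noise prediction N*q^2/12
print(err_power / trials, N * q ** 2 / 12)
```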

107 citations


Patent
30 Jun 2000
TL;DR: To reduce the bandwidth of non-scalable MPEG-2 coded video, certain non-zero AC DCT coefficients are removed, either high-frequency coefficients at the end of the coefficient scan order or the smallest-magnitude coefficients, with measures to limit the resulting increase in (run, level) escape sequences.
Abstract: To reduce bandwidth of non-scalable MPEG-2 coded video, certain non-zero AC DCT coefficients for the 8×8 blocks are removed from the MPEG-2 coded video. In one implementation, high-frequency AC DCT coefficients are removed at the end of the coefficient scan order. This method requires the least computation and is most desirable if the reduced-bandwidth video is to be spatially sub-sampled. In another implementation, the smallest-magnitude AC DCT coefficients are removed. This method may produce an undesirable increase in the frequency of occurrence of escape sequences in the (run, level) coding. This frequency can be reduced by retaining certain non-zero AC DCT coefficients that are not the largest magnitude coefficients, and by increasing a quantization scale to reduce the coefficient levels. The reduced-bandwidth video can be used for a variety of applications, such as browsing for search and play-list generation, bit stream scaling for splicing, and bit-rate adjustment for services with limited resources and for multiplexing of transport streams.
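
A minimal sketch of the first implementation (dropping AC coefficients at the end of the 8x8 zigzag scan order) is shown below; it works on a plain coefficient array rather than on the MPEG-2 bitstream, and the `keep` count is an arbitrary assumption.

```python
import numpy as np

# JPEG/MPEG zigzag scan order for an 8x8 block: diagonals traversed in
# alternating direction, DC coefficient first
ZIGZAG = sorted(((u, v) for u in range(8) for v in range(8)),
                key=lambda p: (p[0] + p[1],
                               p[0] if (p[0] + p[1]) % 2 else p[1]))

def truncate_block(block, keep):
    """Keep only the first `keep` coefficients in scan order (DC included) and
    zero the high-frequency AC coefficients at the end of the scan."""
    out = np.zeros_like(block)
    for u, v in ZIGZAG[:min(keep, 64)]:
        out[u, v] = block[u, v]
    return out

blk = np.random.default_rng(4).integers(-50, 50, (8, 8))
print(truncate_block(blk, keep=10))
```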

Journal ArticleDOI
TL;DR: At x2 magnification, images compressed with either JPEG or WTCQ algorithms were indistinguishable from unaltered original images for most observers at compression ratios between 8:1 and 16:1, indicating that 10:1 compression is acceptable for primary image interpretation.
Abstract: PURPOSE: To determine the degree of irreversible image compression detectable in conservative viewing conditions. MATERIALS AND METHODS: An image-comparison workstation, which alternately displayed two registered and magnified versions of an image, was used to study observer detection of image degradation introduced by irreversible compression. Five observers evaluated 20 16-bit posteroanterior digital chest radiographs compressed with Joint Photographic Experts Group (JPEG) or wavelet-based trellis-coded quantization (WTCQ) algorithms at compression ratios of 8:1–128:1 and ×2 magnification by using (a) traditional two-alternative forced choice; (b) original-revealed two-alternative forced choice, in which the noncompressed image is identified to the observer; and (c) a resolution-metric method of matching test images to degraded reference images. RESULTS: The visually lossless threshold was between 8:1 and 16:1 for four observers. JPEG compression resulted in performance as good as that with WTCQ compres...

Proceedings ArticleDOI
18 May 2000
TL;DR: The proposed JPEG 2000 scheme appears to offer similar or improved image quality performance relative to the current JPEG standard for compression of medical images, yet has additional features useful for medical applications, indicating that it should be included as an additional standard transfer syntax in DICOM.
Abstract: A multi-institution effort was conducted to assess the visual quality performance of various JPEG 2000 (Joint Photographic Experts Group) lossy compression options for medical imagery. The purpose of this effort was to provide clinical data to DICOM (Digital Imaging and Communications in Medicine) WG IV to support recommendations to the JPEG 2000 committee regarding the definition of the base standard. A variety of projection radiographic, cross sectional, and visible light images were compressed-reconstructed using various JPEG 2000 options and with the current JPEG standard. The options that were assessed included integer and floating point transforms, scalar and vector quantization, and the use of visual weighting. Experts from various institutions used a sensitive rank order methodology to evaluate the images. The proposed JPEG 2000 scheme appears to offer similar or improved image quality performance relative to the current JPEG standard for compression of medical images, yet has additional features useful for medical applications, indicating that it should be included as an additional standard transfer syntax in DICOM.

Proceedings ArticleDOI
10 Sep 2000
TL;DR: This paper presents a point-wise extended visual masking approach that nonlinearly maps the wavelet coefficients to a perceptually uniform domain prior to quantization by taking advantage of both self-contrast masking and neighborhood masking effects, thus achieving very good visual quality.
Abstract: One common visual optimization strategy for image compression is to exploit the visual masking effect where artifacts are locally masked by the image acting as a background signal. In this paper, we present a point-wise extended visual masking approach that nonlinearly maps the wavelet coefficients to a perceptually uniform domain prior to quantization by taking advantage of both self-contrast masking and neighborhood masking effects, thus achieving very good visual quality. It is essentially a coefficient-wise adaptive quantization without any overhead. It allows bitstream scalability, as opposed to many previous works. The proposed scheme has been adopted into the working draft of JPEG-2000 Part II.
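
A rough sketch of a nonlinearity of this general kind follows: a power law stands in for self-contrast masking, and a local average of neighbor magnitudes for neighborhood masking. The exponents, pooling window, and weights are illustrative assumptions, not the mapping adopted in JPEG-2000 Part II.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def masking_map(coeffs, alpha=0.7, beta=0.2, k=1.0):
    """Map wavelet coefficients to a roughly perceptually uniform domain:
    the power law models self-contrast masking, the local average of
    neighbor magnitudes models neighborhood masking."""
    mag = np.abs(coeffs)
    self_masked = np.sign(coeffs) * mag ** alpha
    neighborhood = uniform_filter(mag ** alpha, size=3)
    return self_masked / (1.0 + k * neighborhood ** beta)

band = np.random.default_rng(5).normal(0, 10, (32, 32))
mapped = masking_map(band)
# ordinary uniform quantization then happens in the mapped (perceptual) domain
q = 0.25
quantized = np.round(mapped / q) * q
```

In the actual scheme the mapping is arranged so that the decoder can invert it without side information; this sketch omits that detail.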

Journal ArticleDOI
TL;DR: A new model to simultaneously quantize and halftone color images is proposed based on a rigorous cost-function approach which optimizes a quality criterion derived from a simplified model of human perception and thus overcomes the artificial separation of quantization and Halftoning.
Abstract: Image quantization and digital halftoning, two fundamental image processing problems, are generally performed sequentially and, in most cases, independent of each other. Color reduction with a pixel-wise defined distortion measure and the halftoning process with its local averaging neighborhood typically optimize different quality criteria or, frequently, follow a heuristic approach without reference to any quantitative quality measure. In this paper, we propose a new model to simultaneously quantize and halftone color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a simplified model of human perception. It incorporates spatial and contextual information into the quantization and thus overcomes the artificial separation of quantization and halftoning. Optimization is performed by an efficient multiscale procedure which substantially alleviates the computational burden. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images showing a significant image quality improvement compared to standard color reduction approaches. Applying the developed cost function, we also suggest a new distortion measure for evaluating the overall quality of color reduction schemes.
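
The kind of cost function being optimized can be sketched as the squared error between the original and the palette-rendered halftone after low-pass filtering by an eye model. The Gaussian filter, palette, and random assignment below are stand-in assumptions; the paper's perception model and multiscale optimizer are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_cost(original, assignment, palette, sigma=1.5):
    """Cost of a joint quantization/halftoning solution: the low-pass
    (eye-model) filtered difference between the original and the image
    rendered from per-pixel palette indices."""
    rendered = palette[assignment]                    # (H, W, 3)
    diff = original - rendered
    filtered = np.stack([gaussian_filter(diff[..., c], sigma) for c in range(3)], -1)
    return np.sum(filtered ** 2)

rng = np.random.default_rng(6)
img = rng.random((32, 32, 3))
palette = rng.random((8, 3))                          # 8 output colors
assign = rng.integers(0, 8, (32, 32))                 # one candidate halftone pattern
print(perceptual_cost(img, assign, palette))
```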

Journal ArticleDOI
TL;DR: Lossy 10:1 compression is suitable for on-call electronic transmission of body CT images as long as original images are subsequently reviewed.
Abstract: PURPOSE: To determine acceptable levels of JPEG (Joint Photographic Experts Group) and wavelet compression for teleradiologic transmission of body computed tomographic (CT) images. MATERIALS AND METHODS: A digital test pattern (Society of Motion Picture and Television Engineers, 512 × 512 matrix) was transmitted after JPEG or wavelet compression by using point-to-point and Web-based teleradiology, respectively. Lossless, 10:1 lossy, and 20:1 lossy ratios were tested. Images were evaluated for high- and low-contrast resolution, sensitivity to small signal differences, and misregistration artifacts. Three independent observers who were blinded to the compression scheme evaluated these image quality measures in 20 clinical cases with similar levels of compression. RESULTS: High-contrast resolution was not diminished with any tested level of JPEG or wavelet compression. With JPEG compression, low-contrast resolution was not lost with 10:1 lossy compression but was lost at 3% modulation with 20:1 lossy compres...

Proceedings ArticleDOI
10 Sep 2000
TL;DR: It was found that the binocular percept depended on the type of degradation: for low-pass filtering, the binocular percept was dominated by the high-quality image, whereas for quantization it corresponded to the average of the inputs to the two eyes.
Abstract: The bandwidth required to transmit stereoscopic video images is nominally twice that required for standard, monoscopic images. One method of reducing the required bandwidth is to code the two video streams asymmetrically. We assessed the perceptual impact of this bandwidth-reduction technique for low-pass filtering, DCT-based quantization, and a combination of filtering and quantization. It was found that the binocular percept depended on the type of degradation: for low-pass filtering, the binocular percept was dominated by the high-quality image, whereas for quantization it corresponded to the average of the inputs to the two eyes. The results indicated that asymmetrical coding is a promising technique for reducing storage and transmission bandwidth of stereoscopic sequences.

Proceedings ArticleDOI
Zhigang Fan, R.L. de Queiroz
10 Sep 2000
TL;DR: A method is presented for the maximum likelihood estimation (MLE) of the JPEG quantization tables and an efficient method is provided to identify if an image has been previously JPEG compressed.
Abstract: To process previously JPEG coded images the knowledge of the quantization table used in compression is sometimes required. This happens for example in JPEG artifact removal and in JPEG re-compression. However, the quantization table might not be known due to various reasons. A method is presented for the maximum likelihood estimation (MLE) of the JPEG quantization tables. An efficient method is also provided to identify if an image has been previously JPEG compressed.
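
A much-simplified version of the estimation idea: for each DCT frequency, score candidate steps by how consistently the observed coefficients fall on their integer multiples, and keep the largest consistent step. The tolerance and noise model below are assumptions; the paper's MLE treats rounding and truncation noise more carefully and also yields the compression-detection test.

```python
import numpy as np

def estimate_step(coeffs, max_q=64, tol=0.6, min_frac=0.9):
    """Return the largest step q whose integer multiples explain (almost) all
    observed coefficients; near-zero coefficients carry no information."""
    coeffs = coeffs[np.abs(coeffs) > 1.0]
    for q in range(max_q, 1, -1):                    # largest consistent step wins
        resid = np.abs(coeffs - q * np.round(coeffs / q))
        if np.mean(resid < tol) >= min_frac:
            return q
    return 1

rng = np.random.default_rng(7)
true_q = 12
# coefficients quantized with step 12, then slightly perturbed (pixel-domain rounding)
c = true_q * rng.integers(-8, 9, 5000) + rng.normal(0, 0.3, 5000)
print(estimate_step(c))   # expected: 12
```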

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This paper describes that standard at a high level, indicates the component pieces which empower the standard, and gives example applications which highlight differences between JPEG 2000 and prior image compression standards.
Abstract: JPEG 2000 will soon be an international standard for still image compression. This paper describes that standard at a high level, indicates the component pieces which empower the standard, and gives example applications which highlight differences between JPEG 2000 and prior image compression standards.

Journal ArticleDOI
TL;DR: This work proposes the application of the hierarchical Bayesian paradigm to the reconstruction of block discrete cosine transform (BDCT) compressed images and the estimation of the required parameters.
Abstract: With block-based compression approaches for both still images and sequences of images, annoying blocking artifacts are exhibited, primarily at high compression ratios. They are due to the independent processing (quantization) of the block transformed values of the intensity or the displaced frame difference. We propose the application of the hierarchical Bayesian paradigm to the reconstruction of block discrete cosine transform (BDCT) compressed images and the estimation of the required parameters. We derive expressions for the iterative evaluation of these parameters applying the evidence analysis within the hierarchical Bayesian paradigm. The proposed method allows for the combination of parameters estimated at the coder and decoder. The performance of the proposed algorithms is demonstrated experimentally.

Journal ArticleDOI
TL;DR: The RD-OPT algorithm for DCT quantization optimization, which can be used as an efficient tool for near-optimal rate control in DCT-based compression techniques, such as JPEG and MPEG, is described.
Abstract: We describe the RD-OPT algorithm for DCT quantization optimization, which can be used as an efficient tool for near-optimal rate control in DCT-based compression techniques, such as JPEG and MPEG. RD-OPT measures DCT coefficient statistics for the given image data to construct rate/distortion-specific quantization tables with nearly optimal tradeoffs.
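
In the same spirit (though not the RD-OPT algorithm itself), a per-frequency quantization step can be chosen from measured coefficient statistics by minimizing a Lagrangian cost D + lambda*R, with a zeroth-order entropy standing in for the rate; the candidate steps and lambda below are arbitrary.

```python
import numpy as np

def entropy_bits(symbols):
    """Empirical zeroth-order entropy (bits/symbol) as a crude rate estimate."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def pick_steps(dct_coeffs, candidate_q, lam=50.0):
    """Choose one quantization step per DCT frequency minimizing D + lambda*R.
    dct_coeffs: array (num_blocks, 8, 8) of coefficients measured from the image."""
    table = np.zeros((8, 8), dtype=int)
    for u in range(8):
        for v in range(8):
            c = dct_coeffs[:, u, v]
            best = None
            for q in candidate_q:
                idx = np.round(c / q)
                dist = np.mean((c - idx * q) ** 2)        # MSE in the DCT domain
                rate = entropy_bits(idx.astype(int))       # bits per coefficient
                cost = dist + lam * rate
                if best is None or cost < best[0]:
                    best = (cost, q)
            table[u, v] = best[1]
    return table

blocks = np.random.default_rng(8).normal(0, 20, (500, 8, 8))
print(pick_steps(blocks, candidate_q=range(1, 65)))
```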

Proceedings ArticleDOI
TL;DR: A fragile technique that can detect the most minor changes in a marked image using a DCT-based data hiding method to embed a tamper-detection mark and a semi-fragile technique that detects the locations of significant manipulations while disregarding the less important effects of image compression and additive channel noise.
Abstract: In this paper, we present two tamper-detection techniques. The first is a fragile technique that can detect the most minor changes in a marked image using a DCT-based data hiding method to embed a tamper-detection mark. The second is a semi-fragile technique that detects the locations of significant manipulations while disregarding the less important effects of image compression and additive channel noise. Both techniques are fully described, and the performance of each algorithm is demonstrated by manipulation of the marked images.

Proceedings ArticleDOI
10 Sep 2000
TL;DR: An image authentication scheme is proposed that is able to detect malicious tampering of images even if they have also been incidentally distorted; incidental and malicious distortions are modeled as Gaussian distributions with small and large variances, and the watermark is embedded in the wavelet domain by a mean quantization technique.
Abstract: The objective of this paper is to propose an image authentication scheme, which is able to detect malicious tampering of images even if they have also been incidentally distorted. By modeling incidental and malicious distortions as Gaussian distributions with small and large variances, respectively, we propose to embed a watermark in the wavelet domain by a mean quantization technique. Due to the various probabilities of tamper response at each scale, these responses are integrated to make a decision on the tampered areas. Statistical analysis is conducted and experimental results are given to demonstrate that our watermarking scheme is able to detect malicious attacks while tolerating incidental distortions.
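
A bare-bones version of mean-quantization embedding (quantization-index-modulation on the mean of a coefficient group) is sketched below with PyWavelets; the step size, group size, and single-level Haar transform are assumptions, and the Gaussian modeling and multi-scale decision fusion of the paper are omitted.

```python
import numpy as np
import pywt

DELTA = 12.0   # illustrative quantization step

def embed_bit(group, bit):
    """Shift a group of wavelet coefficients so that its mean lands on the
    lattice associated with the watermark bit (mean quantization / QIM)."""
    m = group.mean()
    target = DELTA * np.round((m - bit * DELTA / 2) / DELTA) + bit * DELTA / 2
    return group + (target - m)

def detect_bit(group):
    """Return (bit, tamper_response): the distance of the mean to the nearest
    lattice point grows with distortion; large responses flag tampering."""
    m = group.mean()
    d = [abs(m - (DELTA * np.round((m - b * DELTA / 2) / DELTA) + b * DELTA / 2))
         for b in (0, 1)]
    return int(np.argmin(d)), min(d)

img = np.random.default_rng(9).random((64, 64)) * 255
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
cH[0:2, 0:2] = embed_bit(cH[0:2, 0:2], bit=1)       # embed one bit in a 2x2 group
marked = pywt.idwt2((cA, (cH, cV, cD)), 'haar')

_, (cH2, _, _) = pywt.dwt2(marked, 'haar')
print(detect_bit(cH2[0:2, 0:2]))                    # roughly (1, 0) without tampering
```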

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This paper describes how the JPEG 2000 syntax and file format support the decomposition of the image into the codestream, and examples of how the syntax enables some of the features of JPEG 2000 are offered.
Abstract: As the resolution and pixel fidelity of digital imagery grow, there is a greater need for more efficient compression and extraction of images and sub-images. The abilities to handle many types of image data, to extract images at different resolutions and qualities (lossless and lossy), to zoom and pan, and to extract regions of interest are the new measures of image compression system performance. JPEG 2000 is designed to address the needs of high quality imagery. This paper describes how the JPEG 2000 syntax and file format support these features. The decomposition of the image into the codestream is described along with associated syntax markers. Examples of how the syntax enables some of the features of JPEG 2000 are offered.

Proceedings ArticleDOI
28 May 2000
TL;DR: Two simple DCT-domain-based schemes to embed single or multiple watermarks into an image for copyright protection and data monitoring and tracking are proposed.
Abstract: This paper proposes two simple DCT-domain-based schemes to embed single or multiple watermarks into an image for copyright protection and data monitoring and tracking. The watermark data are essentially embedded in the middle band of the DCT domain to make a tradeoff between visual degradation and robustness. The proposed schemes are simple and no original host image is required for watermark extraction. The algorithm also features the capability of embedding multiple orthogonal watermarks into an image simultaneously. A set of systematic experiments, including Gaussian smoothing, JPEG compression, and image cropping are performed to prove the robustness of our algorithms.
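
A minimal additive mid-band DCT embedding with a correlation detector, in the spirit of the schemes described here; the band positions, strength, key seeding, and single-block test image are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

# mid-band positions of an 8x8 DCT block (a common but somewhat arbitrary choice)
MID_BAND = [(1, 2), (2, 1), (2, 2), (1, 3), (3, 1), (2, 3), (3, 2), (3, 3)]

def embed(block, key, strength=4.0):
    """Add a key-seeded pseudo-random pattern to the mid-band coefficients:
    low frequencies are left alone (visibility), high ones too (robustness)."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=len(MID_BAND))
    c = dctn(block, norm='ortho')
    for (u, v), wi in zip(MID_BAND, w):
        c[u, v] += strength * wi
    return idctn(c, norm='ortho')

def detect(block, key):
    """Correlate the mid-band coefficients with the key's pattern; a value
    well above the unmarked baseline indicates the mark is present."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=len(MID_BAND))
    c = dctn(block, norm='ortho')
    vals = np.array([c[u, v] for (u, v) in MID_BAND])
    return float(np.dot(vals, w) / len(w))

# a smooth ramp: its mid-band coefficients are near zero, so the contrast is easy to see
blk = np.arange(8.0)[:, None] * np.ones(8) * 10
print(detect(embed(blk, key=42), key=42), detect(blk, key=42))   # about 4.0 vs. 0.0
```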

Journal ArticleDOI
TL;DR: A quantitative definition of corn kernel whiteness and a fast, accurate, and easy-to-perform approach to measuring corn whiteness were developed.
Abstract: A quantitative definition of corn kernel whiteness and a fast, accurate, and easy-to-perform approach to measuring corn whiteness were developed. The whiteness values of 63 corn samples with large color variations were measured. Representative results were presented, and whiteness values were able to show color differences of corn kernels quantitatively and reliably.

Proceedings ArticleDOI
19 Apr 2000
TL;DR: In this paper, a non-uniform quantization scheme for JPEG2000 was proposed that leverages the masking properties of the visual system, in which visibility to distortions declines as image energy increases.
Abstract: We describe a nonuniform quantization scheme for JPEG2000 that leverages the masking properties of the visual system, in which visibility to distortions declines as image energy increases. Derivatives of contrast transducer functions convey visual threshold changes due to local image content (i.e. the mask). For any frequency region, these functions have approximately the same shape, once the threshold and mask contrast axes are normalized to the frequency's threshold. We have developed two methods that can work together to take advantage of masking. One uses a nonlinearity interposed between the visual weighting and uniform quantization stage at the encoder. In the decoder, the inverse nonlinearity is applied before the inverse transform. The resulting image-adaptive behavior is achieved with only a small overhead (the masking table), and without adding image assessment computations. This approach, however, underestimates masking near zero crossings within a frequency band, so an additional technique pools coefficient energy in a small local neighborhood around each coefficient within a frequency band. It does this in a causal manner to avoid overhead. The first effect of these techniques is to improve the image quality as the image becomes more complex, and these techniques allow image quality increases in applications where using the visual system's frequency response provides little advantage. A key area of improvement is in low amplitude textures, in areas such as facial skin. The second effect relates to operational attributes, since for a given bitrate, the image quality is more robust against variations in image complexity.
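
The first method's encode/decode round trip can be sketched with a simple power law standing in for the transducer-derived masking table; the exponent and step are assumptions. The point is that the effective quantization step grows with coefficient amplitude, which is exactly where masking hides the error.

```python
import numpy as np

ALPHA = 0.6   # illustrative exponent; the actual transducer-derived curve differs

def encode(coeffs, step):
    """Encoder: point nonlinearity, then ordinary uniform quantization."""
    y = np.sign(coeffs) * np.abs(coeffs) ** ALPHA
    return np.round(y / step).astype(int)

def decode(indices, step):
    """Decoder: de-quantize, then apply the inverse nonlinearity."""
    y = indices * step
    return np.sign(y) * np.abs(y) ** (1.0 / ALPHA)

x = np.linspace(0, 100, 11)
xhat = decode(encode(x, step=0.5), step=0.5)
# errors grow with amplitude: fine steps near zero, coarse (well-masked) steps
# on strong coefficients
print(np.abs(x - xhat))
```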

Proceedings ArticleDOI
04 Nov 2000
TL;DR: JPEG2000 is not only intended to provide rate-distortion and subjective image quality performance superior to the existing JPEG standard, but also to provide functionality that the current JPEG standard either cannot address efficiently or cannot address at all.
Abstract: This paper presents an overview of the upcoming JPEG2000 still picture compression standard. JPEG2000 is not only intended to provide rate-distortion and subjective image quality performance superior to the existing JPEG standard, but also to provide functionality that the current JPEG standard either cannot address efficiently or cannot address at all. Lossless and lossy compression, encoding of very large images, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit-errors, and region-of-interest coding are some representative examples of its features.

Proceedings ArticleDOI
10 Sep 2000
TL;DR: The main contributions lie in the proposal of a coding strategy based on the magnitude of a DCT coefficient, the use of turbo codes for effective error correction, and the incorporation of JPEG quantization tables at embedding.
Abstract: We describe effective channel coding strategies which can be used in conjunction with linear programming optimization techniques for the embedding of robust perceptually adaptive DCT domain watermarks. The main contributions lie in the proposal of a coding strategy based on the magnitude of a DCT coefficient, the use of turbo codes for effective error correction, and finally the incorporation of JPEG quantization tables at embedding.

Proceedings ArticleDOI
01 Jul 2000
TL;DR: A novel combined watermarking scheme for image authentication and protection is proposed by utilizing the publicly available wavelet-based just noticeable distortion (JND) values, and the hidden watermark is designed to carry the host image's information such that blind watermark detection becomes possible.
Abstract: A novel combined watermarking scheme for image authentication and protection is proposed in this paper. By utilizing the publicly available wavelet-based just noticeable distortion (JND) values, the hidden watermark is designed to carry the host image's information such that blind watermark detection becomes possible. The watermarks are embedded using the previously proposed cocktail watermarking technique and are extracted by a quantization process. According to the polarities and the differences of the hidden and the extracted watermarks, the fragility and robustness of a watermark can be measured, respectively, such that both content authentication and copyright protection are achieved simultaneously.