
Showing papers on "JPEG 2000" published in 2000


Journal ArticleDOI
TL;DR: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT), capable of modeling the spatially varying visual masking phenomenon.
Abstract: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.

1,933 citations
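
The "optimized truncation" in EBCOT amounts to a rate-distortion optimal choice of truncation point per code-block. The following Python sketch illustrates that idea under simplifying assumptions: each block reports candidate truncation points as cumulative (rate, distortion) pairs, a Lagrange multiplier prices rate, and bisection meets a rate budget. The function name and data layout are illustrative, not the standard's API; the real PCRD-opt pass works on convex-hull slopes of the embedded bit-stream.

    # Sketch: Lagrangian selection of truncation points across code-blocks.
    # Each block offers candidate truncation points with cumulative (rate_bits, distortion);
    # for a multiplier lam every block independently minimizes D + lam * R, and lam is
    # adjusted by bisection until the total rate meets the budget.

    def select_truncation_points(blocks, rate_budget, iters=50):
        """blocks: list of lists of (rate_bits, distortion), each sorted by rate."""
        def pick(lam):
            choices, total_rate = [], 0.0
            for points in blocks:
                best = min(points, key=lambda p: p[1] + lam * p[0])
                choices.append(best)
                total_rate += best[0]
            return choices, total_rate

        lo, hi = 0.0, 1e9                    # search range for the Lagrange multiplier
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            _, rate = pick(lam)
            if rate > rate_budget:
                lo = lam                     # too many bits spent: raise the price of rate
            else:
                hi = lam
        return pick(hi)

    # Toy example: two code-blocks, three candidate truncation points each.
    blocks = [[(0, 100.0), (400, 40.0), (900, 15.0)],
              [(0, 80.0), (300, 50.0), (700, 10.0)]]
    print(select_truncation_points(blocks, rate_budget=1200))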


Journal ArticleDOI
TL;DR: It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.
Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed, the JPEG2000. It is not only intended to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards can either not address efficiently or in many cases cannot address at all. Lossless and lossy compression, embedded lossy to lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit-errors and region-of-interest coding, are some representative features. It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.

1,485 citations


Proceedings ArticleDOI
28 Mar 2000
TL;DR: JPEG-2000 is an emerging standard for still image compression; Part I of the standard specifies the minimum compliant decoder and bitstream syntax, while Part II describes optional, value-added extensions.
Abstract: JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities provided by the standard. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach, as we believe it is more amenable to a compact description more easily understood by most readers.

391 citations




Proceedings ArticleDOI
TL;DR: A semi-fragile watermarking technique is proposed that accepts JPEG lossy compression of the watermarked image down to a pre-determined quality factor and rejects malicious attacks.
Abstract: In this paper, we propose a semi-fragile watermarking technique that accepts JPEG lossy compression of the watermarked image down to a pre-determined quality factor and rejects malicious attacks. The authenticator can identify the positions of corrupted blocks and recover them with approximations of the original ones. In addition to JPEG compression, adjustments of the brightness of the image within reasonable ranges are also acceptable using the proposed authenticator. The security of the proposed method is achieved by using a secret block mapping function which controls the signature generating/embedding processes. Our authenticator is based on two invariant properties of DCT coefficients before and after JPEG compression. They are deterministic, so no probabilistic decision is needed in the system. The first property shows that if we modify a DCT coefficient to an integral multiple of a quantization step which is larger than the steps used in later JPEG compressions, then this coefficient can be exactly reconstructed after later acceptable JPEG compression. The second is the invariant relationship between two coefficients in a block pair before and after JPEG compression. Therefore, we can use the second property to generate the authentication signature, and use the first property to embed it as a watermark. There is no perceptible degradation between the watermarked image and the original. In addition to authentication signatures, we can also embed recovery bits for recovering approximate pixel values in corrupted areas. Our authenticator utilizes the compressed bitstream, and thus avoids rounding errors in reconstructing DCT coefficients. Experimental results show the effectiveness of this system. The system also guarantees no false alarms, i.e., no acceptable JPEG compression is rejected.

258 citations
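
The first invariant property above is easy to verify numerically: a coefficient forced to an integer multiple of a step Q survives a later quantization with any step q <= Q, because the round-trip error is at most q/2 <= Q/2. The Python sketch below demonstrates this with a parity-based embedding; the parity rule, the step values, and the function names are illustrative choices, not the authors' exact signature scheme.

    # Sketch: pre-quantize a DCT coefficient to a multiple of Q whose parity encodes a bit,
    # simulate a later JPEG quantize/dequantize cycle with a smaller step q, and show that
    # re-quantizing to the nearest multiple of Q recovers the bit exactly.

    def embed(coeff, bit, Q):
        k = round(coeff / Q)                 # nearest multiple of Q
        if k % 2 != bit:                     # adjust parity toward the original value
            k += 1 if coeff >= k * Q else -1
        return k * Q

    def jpeg_roundtrip(coeff, q):
        return round(coeff / q) * q          # quantize + dequantize with step q

    def extract(coeff, Q):
        return round(coeff / Q) % 2          # re-quantize to multiples of Q, read parity

    Q, q = 24.0, 10.0                        # embedding step vs. later (smaller) JPEG step
    for original, bit in [(-57.3, 1), (13.8, 0), (88.1, 1)]:
        marked = embed(original, bit, Q)
        received = jpeg_roundtrip(marked, q)
        assert extract(received, Q) == bit
    print("embedded bits survive the simulated JPEG re-quantization")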


Proceedings ArticleDOI
01 Jan 2000
TL;DR: A software-based implementation of the image codec specified in the emerging JPEG-2000 standard is discussed and the run-time complexity and coding performance of this implementation are analyzed.
Abstract: A software-based implementation of the image codec specified in the emerging JPEG-2000 standard is discussed. The run-time complexity and coding performance of this implementation are also analyzed.

208 citations


Proceedings Article
01 Sep 2000
TL;DR: In this paper, the performance of JPEG 2000 is evaluated against JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG, and the principles behind each algorithm are briefly described.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper puts the performance of these standards into perspective by evaluating JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG. The study concentrates on compression efficiency, although complexity and the set of supported functionalities are also evaluated. Lossless compression efficiency as well as the lossy rate-distortion behavior is discussed. The principles behind each algorithm are briefly described and an outlook on the future of image coding is given. The results show that the choice of the “best” standard depends strongly on the application at hand.

197 citations


Proceedings ArticleDOI
01 Jan 2000
TL;DR: The embedded block coding algorithm at the heart of the JPEG2000 image compression standard achieves excellent compression performance, usually somewhat higher than that of SPIHT with arithmetic coding, but in some cases substantially higher.
Abstract: This paper describes the embedded block coding algorithm at the heart of the JPEG2000 image compression standard. The algorithm achieves excellent compression performance, usually somewhat higher than that of SPIHT with arithmetic coding, but in some cases substantially higher. The algorithm utilizes the same low complexity binary arithmetic coding engine as JBIG2. Together with careful design of the bit-plane coding primitives, this enables comparable execution speed to that observed with the simpler variant of SPIHT without arithmetic coding. The coder offers additional advantages including memory locality, spatial random access and ease of geometric manipulation.

186 citations


Journal ArticleDOI
TL;DR: An overview of the error-resilient approaches that have been evaluated and inserted into the emerging JPEG-2000 wavelet-based image coding standard and the performance of these approaches under various channel conditions is evaluated.
Abstract: The rapid growth of mobile communications and the widespread access to information via the Internet have resulted in a strong demand for robust transmission of compressed image and video data for various multimedia applications and services. The challenge of robust transmission is to protect the compressed image/video data against hostile channel conditions while bringing little impact on bandwidth efficiency. This paper addresses this critical problem and provides an overview of the error-resilient approaches that have been evaluated and inserted into the emerging JPEG-2000 wavelet-based image coding standard. We also review the state-of-the-art techniques adopted in the MPEG-4 standard for robust transmission of video and still texture data. These techniques include resynchronization strategies, data partitioning, reversible VLCs, and header extension codes. The performance of these approaches under various channel conditions is evaluated.

166 citations


Proceedings ArticleDOI
David A. Clunie
18 May 2000
TL;DR: In this article, JPEG-LS and JPEG 2000 were evaluated on single-frame grayscale images from multiple anatomical regions, modalities, and vendors, and the results showed that both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM).
Abstract: Proprietary compression schemes have a cost and risk associated with their support, end of life, and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1) and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand six hundred and seventy-nine (3,679) single-frame grayscale images from multiple anatomical regions, modalities, and vendors were tested. For all images combined, JPEG-LS and JPEG 2000 performed equally well (compression ratio 3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both outperformed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous-pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform-based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality, for which JPEG-LS did better (MG digital vendor A: JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state-of-the-art performance regardless of modality. JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.

121 citations


Proceedings ArticleDOI
28 Dec 2000
TL;DR: Evaluating JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG, shows that the choice of the “best” standard depends strongly on the application at hand.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper compares the set of features offered by JPEG 2000, and how well they are fulfilled, versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG and more recent PNG. The study concentrates on compression efficiency and functionality set, while addressing other aspects such as complexity. Lossless compression efficiency as well as the fixed and progressive lossy rate-distortion behaviors are evaluated. Robustness to transmission errors, Region of Interest coding and complexity are also discussed. The principles behind each algorithm are briefly described. The results show that the choice of the "best" standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.

Proceedings ArticleDOI
01 Jan 2000
TL;DR: As the results show, JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper compares the set of features offered by JPEG 2000, and how well they are fulfilled, versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG and the PNG. The study concentrates on the set of supported features, although lossless and lossy progressive compression efficiency results are also reported. Each standard, and the principles of the algorithms behind them, are also described. As the results show, JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance.

Proceedings ArticleDOI
18 May 2000
TL;DR: The proposed JPEG 2000 scheme appears to offer similar or improved image quality performance relative to the current JPEG standard for compression of medical images, yet has additional features useful for medical applications, indicating that it should be included as an additional standard transfer syntax in DICOM.
Abstract: A multi-institution effort was conducted to assess the visual quality performance of various JPEG 2000 (Joint Photographic Experts Group) lossy compression options for medical imagery. The purpose of this effort was to provide clinical data to DICOM (Digital Imaging and Communications in Medicine) WG IV to support recommendations to the JPEG 2000 committee regarding the definition of the base standard. A variety of projection radiographic, cross sectional, and visible light images were compressed-reconstructed using various JPEG 2000 options and with the current JPEG standard. The options that were assessed included integer and floating point transforms, scalar and vector quantization, and the use of visual weighting. Experts from various institutions used a sensitive rank order methodology to evaluate the images. The proposed JPEG 2000 scheme appears to offer similar or improved image quality performance relative to the current JPEG standard for compression of medical images, yet has additional features useful for medical applications, indicating that it should be included as an additional standard transfer syntax in DICOM.

Proceedings ArticleDOI
10 Sep 2000
TL;DR: This paper presents a point-wise extended visual masking approach that nonlinearly maps the wavelet coefficients to a perceptually uniform domain prior to quantization by taking advantage of both self-contrast masking and neighborhood masking effects, thus achieving very good visual quality.
Abstract: One common visual optimization strategy for image compression is to exploit the visual masking effect, where artifacts are locally masked by the image acting as a background signal. In this paper, we present a point-wise extended visual masking approach that nonlinearly maps the wavelet coefficients to a perceptually uniform domain prior to quantization by taking advantage of both self-contrast masking and neighborhood masking effects, thus achieving very good visual quality. It is essentially a coefficient-wise adaptive quantization without any overhead. It allows bitstream scalability, as opposed to many previous works. The proposed scheme has been adopted into the working draft of JPEG-2000 Part II.
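
As a rough illustration of the point-wise mapping described above, the Python sketch below applies a power-law self-contrast term and divides by pooled neighborhood activity before uniform quantization. The exponents, the 3x3 window, and the constants are assumed values for illustration only, not the parameters adopted in JPEG-2000 Part II.

    import numpy as np

    # Sketch: map wavelet coefficients x toward a perceptually uniform domain by shrinking
    # them where self-contrast and local neighborhood activity (masking) are high, so a
    # single uniform quantizer behaves like a coefficient-wise adaptive one.

    def masking_map(x, alpha=0.7, beta=0.2, eps=1e-3):
        self_masked = np.sign(x) * np.abs(x) ** alpha        # self-contrast masking
        # neighborhood masking: mean of |x|^alpha over a 3x3 window (edge-padded)
        pad = np.pad(np.abs(x) ** alpha, 1, mode="edge")
        neigh = sum(pad[i:i + x.shape[0], j:j + x.shape[1]]
                    for i in range(3) for j in range(3)) / 9.0
        return self_masked / (eps + beta * neigh)

    coeffs = np.random.laplace(scale=5.0, size=(8, 8))        # stand-in subband coefficients
    step = 1.0
    quantized = np.round(masking_map(coeffs) / step) * step   # uniform quantization afterwards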

Proceedings ArticleDOI
Zhigang Fan, R.L. de Queiroz
10 Sep 2000
TL;DR: A method is presented for the maximum likelihood estimation (MLE) of the JPEG quantization tables and an efficient method is provided to identify if an image has been previously JPEG compressed.
Abstract: To process previously JPEG coded images the knowledge of the quantization table used in compression is sometimes required. This happens for example in JPEG artifact removal and in JPEG re-compression. However, the quantization table might not be known due to various reasons. A method is presented for the maximum likelihood estimation (MLE) of the JPEG quantization tables. An efficient method is also provided to identify if an image has been previously JPEG compressed.
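
The estimation idea can be illustrated with a much simpler stand-in for the paper's maximum-likelihood estimator: after a JPEG cycle, the re-computed DCT coefficients of one frequency cluster near integer multiples of the original quantization step, so the largest step consistent with the data is a reasonable estimate. The Python sketch below follows that heuristic; the tolerance, thresholds, and function name are assumptions, and the paper's actual MLE formulation is not reproduced.

    import numpy as np

    # Sketch: estimate the quantization step previously applied to one DCT frequency by
    # finding the largest candidate step q such that (nearly) all nonzero coefficients
    # lie close to an integer multiple of q.

    def estimate_step(coeffs, max_step=64, tol=0.5, min_fraction=0.95):
        coeffs = np.asarray(coeffs, dtype=float)
        nz = coeffs[np.abs(coeffs) > tol]          # zeros are consistent with any step
        if nz.size == 0:
            return None                            # step not observable from zeros alone
        for q in range(max_step, 0, -1):           # prefer the largest consistent step
            residual = np.abs(nz - q * np.round(nz / q))
            if np.mean(residual <= tol) >= min_fraction:
                return q
        return 1

    # Toy check: coefficients originally quantized with step 12, plus small recomputation noise.
    rng = np.random.default_rng(0)
    data = 12 * rng.integers(-8, 9, size=500) + rng.normal(0.0, 0.2, size=500)
    print(estimate_step(data))                      # expected to report 12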

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This paper describes that standard at a high level, indicates the component pieces which empower the standard, and gives example applications which highlight differences between JPEG 2000 and prior image compression standards.
Abstract: JPEG 2000 will soon be an international standard for still image compression. This paper describes that standard at a high level, indicates the component pieces which empower the standard, and gives example applications which highlight differences between JPEG 2000 and prior image compression standards.

Proceedings ArticleDOI
05 Jun 2000
TL;DR: A low-complexity entropy coder originally designed to work in the JPEG2000 image compression standard framework is presented; it is shown to yield a significant reduction in the complexity of entropy coding, with a small loss in compression performance.
Abstract: We present a low-complexity entropy coder originally designed to work in the JPEG2000 image compression standard framework. The algorithm is meant for embedded and non-embedded coding of wavelet coefficients inside a subband, and is called subband-block hierarchical partitioning (SBHP). It was extensively tested following the standard experiment procedures, and it was shown to yield a significant reduction in the complexity of entropy coding, with small loss in compression performance. Furthermore, it is able to seamlessly support all JPEG2000 features. We present a description of the algorithm, an analysis of its complexity, and a summary of the results obtained after its integration into the verification model (VM).

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This paper describes how the JPEG 2000 syntax and file format support efficient compression and extraction, explains the decomposition of the image into the codestream along with the associated syntax markers, and offers examples of how the syntax enables some of the features of JPEG 2000.
Abstract: As the resolution and pixel fidelity of digital imagery grow, there is a greater need for more efficient compression and extraction of images and sub-images. The ability to handle many types of image data, extract images at different resolutions and quality levels, lossless and lossy, zoom and pan, and extract regions of interest constitutes the new measure of image compression system performance. JPEG 2000 is designed to address the needs of high-quality imagery. This paper describes how the JPEG 2000 syntax and file format support these features. The decomposition of the image into the codestream is described along with the associated syntax markers. Examples of how the syntax enables some of the features of JPEG 2000 are offered.

Proceedings ArticleDOI
10 Sep 2000
TL;DR: The algorithm is based on an accurate model of the distortion-rate curve, even at low bit rates; as a result, the efficiency of the method is very close to that of JPEG 2000, with very low complexity and the possibility of parallelizing the encoding process.
Abstract: Efficient compression algorithms generally use wavelet transforms. They try to exploit all the signal dependencies that can appear inside and across the different sub-bands of the decomposition. This leads to highly complex algorithms that generally cannot be implemented for real-time purposes. However, the efficiency of a coding scheme depends strongly on bit allocation. In this paper, we present a new wavelet-based image coder, EBWIC (efficient bit allocation wavelet image coder). This algorithm is based on an accurate model of the distortion-rate curve, even at low bit rates. As a result, the efficiency of our method is very close to that of JPEG 2000, with very low complexity and the possibility of parallelizing the encoding process. The proposed method has a complexity of fewer than 60 arithmetic operations per pixel (compared with about 300 for JPEG 2000). Our method can be applied with any wavelet transform (quincunx, dyadic...) and any entropy coder, and it can be used for strip-based processing.
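
To make the bit-allocation point concrete, here is a minimal Python sketch of model-based allocation across sub-bands, assuming the classical high-rate model D_i(R_i) = w_i * sigma_i^2 * 2^(-2 R_i) and equal-sized sub-bands; EBWIC's actual distortion-rate model, which remains accurate at low bit rates, is more refined than this.

    import numpy as np

    # Sketch: closed-form "equal-slope" rate allocation under D_i = w_i * var_i * 2^(-2 R_i).
    # Bands whose optimal rate would be negative are dropped (given 0 bits) and the
    # remaining budget is re-solved over the active bands.

    def allocate_rates(variances, weights, r_target):
        v = np.asarray(variances, float) * np.asarray(weights, float)
        rates = np.zeros_like(v)
        active = np.ones(len(v), dtype=bool)
        while True:
            log_gm = np.mean(np.log2(v[active]))              # log2 of the geometric mean
            r = r_target * len(v) / active.sum()              # budget per active band
            cand = r + 0.5 * (np.log2(v[active]) - log_gm)
            if (cand >= 0).all():
                rates[active] = cand
                return rates
            active[np.where(active)[0][cand < 0]] = False     # drop those bands, re-solve

    # Four sub-bands with decreasing (weighted) variance, 1 bit/coefficient on average.
    print(allocate_rates([100.0, 25.0, 4.0, 0.25], [1, 1, 1, 1], r_target=1.0))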

Proceedings ArticleDOI
19 Apr 2000
TL;DR: A nonuniform quantization scheme for JPEG2000 is proposed that leverages the masking properties of the visual system, in which the visibility of distortions declines as image energy increases.
Abstract: We describe a nonuniform quantization scheme for JPEG2000 that leverages the masking properties of the visual system, in which the visibility of distortions declines as image energy increases. Derivatives of contrast transducer functions convey visual threshold changes due to local image content (i.e. the mask). For any frequency region, these functions have approximately the same shape, once the threshold and mask contrast axes are normalized to the frequency's threshold. We have developed two methods that can work together to take advantage of masking. One uses a nonlinearity interposed between the visual weighting and uniform quantization stages at the encoder. In the decoder, the inverse nonlinearity is applied before the inverse transform. The resulting image-adaptive behavior is achieved with only a small overhead (the masking table), and without adding image assessment computations. This approach, however, underestimates masking near zero crossings within a frequency band, so an additional technique pools coefficient energy in a small local neighborhood around each coefficient within a frequency band. It does this in a causal manner to avoid overhead. The first effect of these techniques is to improve the image quality as the image becomes more complex, and they allow image quality increases in applications where using the visual system's frequency response provides little advantage. A key area of improvement is in low-amplitude textures, in areas such as facial skin. The second effect relates to operational attributes, since for a given bitrate, the image quality is more robust against variations in image complexity.
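
A compact Python sketch of the first method (the point nonlinearity): visual weighting, a masking nonlinearity, and uniform quantization at the encoder, with the inverse nonlinearity at the decoder. The power-law form, exponent, weight, and step are illustrative stand-ins for the masking table derived from the transducer functions.

    import numpy as np

    alpha, weight, step = 0.6, 0.8, 1.0          # assumed exponent, visual weight, step size

    def encode(coeffs):
        y = weight * coeffs                       # visual weighting
        y = np.sign(y) * np.abs(y) ** alpha       # masking nonlinearity compresses large values
        return np.round(y / step).astype(int)     # uniform quantization indices

    def decode(indices):
        y = indices * step
        y = np.sign(y) * np.abs(y) ** (1.0 / alpha)   # inverse nonlinearity
        return y / weight                             # (inverse transform would follow)

    x = np.array([0.3, 2.0, -15.0, 60.0, -240.0])
    x_hat = decode(encode(x))
    # The reconstruction error grows with coefficient magnitude: the effective quantizer
    # step is coarser exactly where masking makes distortion less visible.
    print(np.round(x_hat, 2), np.round(np.abs(x - x_hat), 2))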

Proceedings ArticleDOI
04 Nov 2000
TL;DR: JPEG2000 is not only intended to provide rate-distortion and subjective image quality performance superior to the existing JPEG standard, but also to provide functionality that the current JPEG standard can either not address efficiently or cannot address at all.
Abstract: This paper presents an overview of the upcoming JPEG2000 still picture compression standard. JPEG2000 is not only intended to provide rate-distortion and subjective image quality performance superior to the existing JPEG standard, but also to provide functionality that the current JPEG standard can either not address efficiently or cannot address at all. Lossless and lossy compression, encoding of very large images, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit errors, and region-of-interest coding are some representative examples of its features.

Proceedings ArticleDOI
11 Oct 2000
TL;DR: This paper proposes a VLSI architecture to compute lifting-based 2D DWT, for a set of seven filters recommended in the JPEG2000 verification model, and presents an efficient memory organization to address the high memory bandwidth requirements.
Abstract: The discrete wavelet transform (DWT) is the basis for many image compression techniques, such as the upcoming JPEG2000. Lifting-based DWT requires fewer computations compared to the traditional convolution-based approach. In this paper, we propose a VLSI architecture to compute the lifting-based 2D DWT for a set of seven filters recommended in the JPEG2000 verification model. The architecture produces transform coefficients in every clock cycle for three of the filters and in every alternate cycle for the rest of the filters. We also present an efficient memory organization to address the high memory bandwidth requirements. The performance metrics of the proposed architecture have also been furnished.
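
As a concrete reference for the lifting formulation, the Python sketch below implements one level of the reversible 5/3 (LeGall) lifting DWT on a 1-D signal of even length: one predict step and one update step, both using only integer adds and shifts. Boundary handling uses simple symmetric extension; the 2-D separable case and the other filters in the JPEG2000 verification model are not modeled here.

    # Sketch: one decomposition level of the reversible 5/3 lifting DWT (1-D, even length).
    # Predict step -> high-pass (detail) samples d; update step -> low-pass samples s.

    def dwt53_forward(x):
        n = len(x)                                            # assumed even
        d = [x[2 * k + 1] - (x[2 * k] + x[min(2 * k + 2, n - 2)]) // 2
             for k in range(n // 2)]                          # predict
        s = [x[2 * k] + (d[max(k - 1, 0)] + d[k] + 2) // 4
             for k in range(n // 2)]                          # update
        return s, d

    def dwt53_inverse(s, d):
        n = 2 * len(s)
        x = [0] * n
        for k in range(n // 2):                               # undo update
            x[2 * k] = s[k] - (d[max(k - 1, 0)] + d[k] + 2) // 4
        for k in range(n // 2):                               # undo predict
            x[2 * k + 1] = d[k] + (x[2 * k] + x[min(2 * k + 2, n - 2)]) // 2
        return x

    signal = [12, 14, 15, 13, 10, 9, 11, 16]
    s, d = dwt53_forward(signal)
    assert dwt53_inverse(s, d) == signal                      # lossless reconstruction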

Proceedings ArticleDOI
10 Sep 2000
TL;DR: This work presents a maximum likelihood approach to the ringing artifact removal problem that employs a parameter estimation method based on the k-means algorithm with the number of clusters determined by a cluster separation measure.
Abstract: At low bit rates, image compression codecs based on overlapping transforms introduce spurious oscillations, known as ringing artifacts, in the vicinity of major edges. The image quality can be enhanced considerably by removing these artifacts. We present a maximum likelihood approach to the ringing artifact removal problem. Our approach employs a parameter estimation method based on the k-means algorithm, with the number of clusters determined by a cluster separation measure. The proposed algorithm and its simplified approximation are applied to JPEG2000 compressed images to demonstrate their effectiveness.

Patent
15 Dec 2000
TL;DR: In this patent, the 8×8 Discrete Cosine Transform (DCT) blocks are stored after entropy decoding in a JPEG decoder, or after the forward discrete cosine transform (FDCT) in a JPEG encoder, for use as an intermediate format between transform processes.
Abstract: JPEG (Joint Photographic Experts Group) images are encoded and decoded as fast as possible for a variety of disparate applications. A novel structure stores the 8×8 Discrete Cosine Transform (DCT) blocks after entropy decoding in a JPEG decoder or after the Forward Discrete Cosine Transform (FDCT) in the JPEG encoder to use as an intermediate format between transform processes. The format was chosen to speed up the entropy decode and encode processes and is based on the information needed for the JPEG Huffman entropy coding, but lends itself to fast execution of other DCT based transforms, including arithmetic entropy coding.

Proceedings ArticleDOI
08 Aug 2000
TL;DR: Concepts are introduced that optimize the image compression ratio by exploiting information about a signal's properties and its uses, achieving further gains in image compression.
Abstract: This paper introduces concepts that optimize the image compression ratio by utilizing information about a signal's properties and its uses. This additional information about the image is used to achieve further gains in image compression. The techniques developed in this work are built on the ubiquitous JPEG still image compression standard [IS094] for compression of continuous-tone grayscale and color images. The paper is based on a region-based variable quantization JPEG software codec that was developed, tested, and compared with other image compression techniques. The application, named JPEGTool, has a graphical user interface (GUI) and runs under Microsoft Windows (R) 95. The paper briefly discusses the standard JPEG implementation and software extensions to the standard. Region selection techniques and algorithms that complement variable quantization are presented, in addition to a brief discussion of the theory and implementation of variable quantization schemes. The paper includes a presentation of generalized criteria for image compression performance and specific results obtained with JPEGTool.

Proceedings ArticleDOI
14 Nov 2000
TL;DR: The design, implementation, and evaluation of the Image Transport Protocol (ITP) for image transmission over loss-prone congested or wireless networks are described and a variety of new receiver post-processing algorithms such as error concealment are enabled that further improve the interactivity and responsiveness of reconstructed images.
Abstract: Images account for a significant and growing fraction of Web downloads. The traditional approach to transporting images uses TCP, which provides a generic reliable, in-order byte-stream abstraction, but which is overly restrictive for image data. We analyze the progression of image quality at the receiver with time and show that the in-order delivery abstraction provided by a TCP-based approach prevents the receiver application from processing and rendering portions of an image when they actually arrive. The end result is that an image is rendered in bursts interspersed with long idle times rather than smoothly. This paper describes the design, implementation, and evaluation of the Image Transport Protocol (ITP) for image transmission over loss-prone congested or wireless networks. ITP improves user-perceived latency using application-level framing (ALF) and out-of-order application data unit (ADU) delivery, achieving significantly better interactive performance as measured by the evolution of peak signal-to-noise ratio (PSNR) with time at the receiver. ITP runs over UDP, incorporates receiver-driven selective reliability, uses the congestion manager (CM) to adapt to network congestion, and is customizable for specific image formats (e.g., JPEG and JPEG2000). ITP enables a variety of new receiver post-processing algorithms, such as error concealment, that further improve the interactivity and responsiveness of reconstructed images. Performance experiments using our implementation across a variety of loss conditions demonstrate the benefits of ITP in improving the interactivity of image downloads at the receiver.

Proceedings ArticleDOI
24 Jul 2000
TL;DR: A novel and efficient strip-based satellite image coder adapted for on-board processing is proposed; it performs better than JPEG 2000 with a complexity lower than 60 operations per pixel and can be implemented for different image samplings.
Abstract: Data obtained by spacecraft sensors has to be stored on board until an opportunity arises to transmit it to the ground. Compression algorithms used in space were based first on DPCM and then on the DCT. Image quality evaluation determined, however, that for applications using the adaptive DCT, such as SPOT 5, the maximum acceptable compression ratio was about 3 for high-resolution observation and about 15 for scientific missions. Beyond that compression ratio, the occurrence of block effects in uniform areas and the loss of detail related to compression noise are no longer acceptable for scientific use. To overcome these limits and to prepare future generations of Earth observation satellites and scientific missions, the authors propose a novel and efficient strip-based satellite image coder adapted for on-board processing. The performance of their method is better than that of JPEG 2000, with a complexity lower than 60 operations per pixel. Furthermore, their method can be implemented for different image samplings. The authors briefly explain the general compression scheme used and the solution for on-the-fly DWT computation. They present the dynamic bit allocation and the regulation process chosen for rate control. They then evaluate the complexity and performance of their method relative to JPEG 2000.

Proceedings ArticleDOI
28 Dec 2000
TL;DR: This article synthesizes numerous studies evaluating several data compression algorithms, some of them assuming that the adaptation between the sampling grid and the modulation transfer function is obtained by the quincunx Supermode scheme.
Abstract: Future high-resolution instruments planned by CNES to succeed SPOT5 will lead to higher bit rates because of the increase in both resolution and number of bits per pixel, not compensated by the reduced swath. Data compression is then needed, with compression ratio goals higher than the 2.81 value obtained for SPOT5 with a JPEG-like algorithm. Compression ratios should typically rise to values of 4 to 6, with artifacts remaining unnoticeable: the SPOT5 algorithm's performance clearly has to be surpassed. On the other hand, in the framework of optimized and low-cost instruments, the noise level will increase. Furthermore, the Modulation Transfer Function (MTF) and the sampling grid will be fitted together to, at least roughly, satisfy Shannon requirements. As with the Supermode sampling scheme of the SPOT5 panchromatic band, the images will have to be restored (deconvolution and denoising), and that makes the compression impact assessment much more complex. This paper is a synthesis of numerous studies evaluating several data compression algorithms, some of them assuming that the adaptation between the sampling grid and the MTF is obtained by the quincunx Supermode scheme. The following points are analyzed: the compression decorrelator (DCT, LOT, wavelet, lifting), comparison with JPEG2000 for images acquired on a square grid, fitting the compression to the quincunx sampling, and on-board restoration (before compression) versus on-ground restoration. For each of them, we describe the proposed solutions, underlining the associated complexity and comparing them from quantitative and qualitative points of view, giving the results of expert analyses.

Proceedings ArticleDOI
01 Jan 2000
TL;DR: The various tools in JPEG 2000 that allow users to take advantage of properties of the human visual system, such as spatial frequency sensitivity and the visual masking effect, are reviewed.
Abstract: We review the various tools in JPEG 2000 that allow users to take advantage of properties of the human visual system, such as spatial frequency sensitivity and the visual masking effect. We show that the visual tool sets in JPEG 2000 are much richer than what was available in JPEG, where only locally invariant frequency weighting can be exploited.

Proceedings ArticleDOI
08 Oct 2000
TL;DR: An algorithm for evaluating the quality of JPEG compressed images, called the psychovisually-based image quality evaluator (PIQE), which measures the severity of artifacts produced by JPEG compression, shows that the PIQE model is most accurate in the compression range for which JPEG is most effective.
Abstract: We propose an algorithm for evaluating the quality of JPEG compressed images, called the psychovisually-based image quality evaluator (PIQE), which measures the severity of artifacts produced by JPEG compression. The PIQE evaluates the image quality using two psychovisually-based fidelity criteria: blockiness and similarity. The blockiness is an index that measures the patterned square artifact created as a by-product of the lossy DCT-based compression technique used by JPEG and MPEG. The similarity measures the perceivable detail remaining after compression. The blockiness and similarity are combined into a single PIQE index used to assess quality. The PIQE model is tuned by using subjective assessment results of five subjects on six sets of images. The results show that the PIQE model is most accurate in the compression range for which JPEG is most effective.
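
As an illustration of the blockiness idea (not the tuned PIQE model itself), the Python sketch below compares average luminance jumps across 8x8 block boundaries with jumps at interior pixel transitions; a ratio well above 1 suggests visible blocking. The block size, the ratio form, and the synthetic test image are assumptions for demonstration.

    import numpy as np

    # Sketch: a toy blockiness index for an 8x8-block codec, defined as the mean absolute
    # difference across block-boundary pixel pairs divided by the mean difference at
    # interior pixel pairs (both horizontally and vertically).

    def blockiness(img, block=8):
        img = np.asarray(img, dtype=float)
        col_diff = np.abs(np.diff(img, axis=1))           # horizontal neighbor differences
        row_diff = np.abs(np.diff(img, axis=0))           # vertical neighbor differences
        col_at_boundary = (np.arange(col_diff.shape[1]) % block) == block - 1
        row_at_boundary = (np.arange(row_diff.shape[0]) % block) == block - 1
        boundary = np.concatenate([col_diff[:, col_at_boundary].ravel(),
                                   row_diff[row_at_boundary, :].ravel()])
        interior = np.concatenate([col_diff[:, ~col_at_boundary].ravel(),
                                   row_diff[~row_at_boundary, :].ravel()])
        return boundary.mean() / (interior.mean() + 1e-6)

    # A checkerboard of flat 8x8 blocks differing by a constant offset scores high.
    tiles = np.kron((np.indices((4, 4)).sum(axis=0) % 2) * 20.0, np.ones((8, 8)))
    noisy = tiles + np.random.default_rng(1).normal(0.0, 1.0, tiles.shape)
    print(round(blockiness(noisy), 2))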