
Showing papers on "Lossless JPEG published in 2005"


Book ChapterDOI
28 Jan 2005

196 citations


Journal ArticleDOI
TL;DR: A wavelet-based HDR still-image encoding method that maps the logarithm of each pixel value into integer values and then sends the results to a JPEG 2000 encoder to meet the HDR encoding requirement.
Abstract: The raw size of a high-dynamic-range (HDR) image brings about problems in storage and transmission. Many bytes are wasted in data redundancy and perceptually unimportant information. To address this problem, researchers have proposed some preliminary algorithms to compress the data, like RGBE/XYZE, OpenEXR, LogLuv, and so on. HDR images can have a dynamic range of more than four orders of magnitude while conventional 8-bit images retain only two orders of magnitude of the dynamic range. This distinction between an HDR image and a conventional image leads to difficulties in using most existing image compressors. JPEG 2000 supports up to 16-bit integer data, so it can already provide image compression for most HDR images. In this article, we propose a JPEG 2000-based lossy image compression scheme for HDR images of all dynamic ranges. We show how to fit HDR encoding into a JPEG 2000 encoder to meet the HDR encoding requirement. To achieve the goal of minimum error in the logarithm domain, we map the logarithm of each pixel value into integer values and then send the results to a JPEG 2000 encoder. Our approach is basically a wavelet-based HDR still-image encoding method.
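A minimal sketch (in Python/NumPy; the 16-bit target range and natural-log base are assumptions, not values from the paper) of the kind of pixel mapping the abstract describes: take the logarithm of each HDR pixel value, scale it linearly into the integer range a 16-bit JPEG 2000 encoder accepts, and keep the mapping parameters so the decoder can invert it.

```python
import numpy as np

def hdr_to_uint16(hdr, eps=1e-6):
    """Map HDR pixel values to 16-bit integers in the log domain.

    Returns the integer image plus the (log_min, log_max) pair needed
    to invert the mapping after JPEG 2000 decoding.
    """
    log_img = np.log(np.maximum(hdr, eps))          # work in the log domain
    log_min, log_max = log_img.min(), log_img.max()
    scale = 65535.0 / (log_max - log_min)           # spread over the 16-bit range
    mapped = np.round((log_img - log_min) * scale).astype(np.uint16)
    return mapped, (log_min, log_max)

def uint16_to_hdr(mapped, log_range):
    """Approximate inverse of hdr_to_uint16 (lossy once JPEG 2000 quantizes)."""
    log_min, log_max = log_range
    log_img = mapped.astype(np.float64) * (log_max - log_min) / 65535.0 + log_min
    return np.exp(log_img)
```

Because the scaling is applied in the log domain, a fixed quantization error in the mapped integers corresponds to a roughly constant relative error in the reconstructed HDR values, which is the "minimum error in the logarithm domain" goal stated above.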

137 citations


Journal ArticleDOI
TL;DR: This paper describes a method for compressing floating-point coordinates with predictive coding in a completely lossless manner and reports compression results using the popular parallelogram predictor, but the approach will work with any prediction scheme.
Abstract: The size of geometric data sets in scientific and industrial applications is constantly increasing. Storing surface or volume meshes in standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Scientists and engineers often refrain from using mesh compression because currently available schemes modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid to enable efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe a method for compressing floating-point coordinates with predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are broken up into sign, exponent, and mantissa and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. We report compression results using the popular parallelogram predictor, but our approach will work with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.
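A rough illustration, not the authors' codec: how a 32-bit float prediction and the actual value can be split into sign, exponent, and mantissa fields so that per-field corrections can be entropy coded separately. The parallelogram predictor and the context-based arithmetic coder are assumed to exist elsewhere; only the field decomposition is shown.

```python
import struct

def float_fields(x):
    """Split a 32-bit float into (sign, exponent, mantissa) bit fields."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

def component_corrections(predicted, actual):
    """Per-field corrections; in the paper these would be compressed with
    context-based arithmetic coding, the exponent selecting the context."""
    ps, pe, pm = float_fields(predicted)
    s, e, m = float_fields(actual)
    return s ^ ps, e - pe, m - pm

# Hypothetical parallelogram prediction v_pred = v1 + v2 - v3 for one coordinate
v1, v2, v3, actual = 1.25, 2.5, 0.75, 3.01
print(component_corrections(v1 + v2 - v3, actual))
```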

111 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed codec improves the direct SPIHT approach and the prior work by about 33% and 26%, respectively.
Abstract: In a prior work, a wavelet-based vector quantization (VQ) approach was proposed to perform lossy compression of electrocardiogram (ECG) signals. We investigate and fix its coding inefficiency problem in lossless compression and extend it to allow both lossy and lossless compression in a unified coding framework. The well-known 9/7 filters and 5/3 integer filters are used to implement the wavelet transform (WT) for lossy and lossless compression, respectively. The codebook updating mechanism, originally designed for lossy compression, is modified to allow lossless compression as well. In addition, a new and cost-effective coding strategy is proposed to enhance the coding efficiency of set partitioning in hierarchical tree (SPIHT) at the less significant bit representation of a WT coefficient. ECG records from the MIT/BIH Arrhythmia and European ST-T Databases are selected as test data. In terms of the coding efficiency for lossless compression, experimental results show that the proposed codec improves the direct SPIHT approach and the prior work by about 33% and 26%, respectively.
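The lossless path above relies on the reversible 5/3 integer wavelet transform. Below is a minimal sketch of its lifting form (the standard JPEG 2000-style predict/update steps, written for an even-length 1-D signal with simple border handling); the ECG-specific codebook updating and SPIHT refinements from the paper are not shown.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the reversible 5/3 integer wavelet transform (1-D).

    x must have even length. Returns (lowpass, highpass) integer subbands.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])          # border handling at right edge
    d = odd - ((even + even_next) >> 1)                # predict step
    d_prev = np.insert(d[:-1], 0, d[0])                # border handling at left edge
    s = even + ((d_prev + d + 2) >> 2)                 # update step
    return s, d

def lift_53_inverse(s, d):
    """Exact integer inverse of lift_53_forward."""
    d_prev = np.insert(d[:-1], 0, d[0])
    even = s - ((d_prev + d + 2) >> 2)
    even_next = np.append(even[1:], even[-1])
    odd = d + ((even + even_next) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 15, 13, 9, 8, 11, 14])
assert np.array_equal(lift_53_inverse(*lift_53_forward(x)), x)
```

Because every lifting step adds an integer quantity that the inverse subtracts exactly, the transform is perfectly reversible, which is what makes the lossless mode possible.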

106 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: JPEG-LS and JPEG-2000 are the latest ISO/ITU standards for compressing continuous-tone images; JPEG-LS is based on the LOCO-I algorithm, which was chosen as the basis of the standard due to its good balance between complexity and efficiency.
Abstract: Lossless compression is necessary for many high-performance applications such as geophysics, telemetry, nondestructive evaluation, and medical imaging, which require exact recovery of original images. Lossless image compression can always be modeled as a two-stage procedure: decorrelation and entropy coding. The first stage removes spatial redundancy or inter-pixel redundancy by means of run-length coding, SCAN-language-based methods, predictive techniques, transform techniques, and other types of decorrelation techniques. The second stage, which includes Huffman coding, arithmetic coding, and LZW, removes coding redundancy. Nowadays, the performance of entropy coding techniques is very close to its theoretical bound, so more research activity concentrates on the decorrelation stage. JPEG-LS and JPEG-2000 are the latest ISO/ITU standards for compressing continuous-tone images. JPEG-LS is based on the LOCO-I algorithm, which was chosen as the basis of the standard due to its good balance between complexity and efficiency. Another technique proposed for JPEG-LS was CALIC. JPEG-2000 was designed with the main objective of providing efficient compression over a wide range of compression ratios.
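The decorrelation stage of LOCO-I/JPEG-LS is built around the median edge detector (MED) predictor. A minimal sketch of that predictor is shown below (without the context modeling and Golomb coding that complete the standard; out-of-frame neighbors are treated as 0 here, a simplification of the standard's boundary rules).

```python
import numpy as np

def med_predict_residuals(img):
    """Median edge detector (MED) prediction, the decorrelation step of LOCO-I.

    For each pixel x with left neighbor a, upper neighbor b and upper-left
    neighbor c, the prediction is:
        min(a, b) if c >= max(a, b)
        max(a, b) if c <= min(a, b)
        a + b - c otherwise
    The residuals (x - prediction) are what the entropy coder would see.
    """
    img = np.asarray(img, dtype=np.int32)
    res = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            a = img[i, j - 1] if j > 0 else 0
            b = img[i - 1, j] if i > 0 else 0
            c = img[i - 1, j - 1] if (i > 0 and j > 0) else 0
            if c >= max(a, b):
                pred = min(a, b)
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c
            res[i, j] = img[i, j] - pred
    return res
```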

81 citations


Journal ArticleDOI
01 Apr 2005 - Displays
TL;DR: The data suggest that it may be possible to use the compressed file size measure to predict display performance in applied tasks, analogous to algorithmic complexity, a theoretical but impractical measure of bit string complexity.

54 citations


Book ChapterDOI
22 Aug 2005
TL;DR: This is the first comprehensive study of standard JPEG2000 compression effects on face recognition, as well as an extension of existing experiments for JPEG compression.
Abstract: In this paper we analyse the effects that JPEG and JPEG2000 compression have on subspace appearance-based face recognition algorithms. This is the first comprehensive study of standard JPEG2000 compression effects on face recognition, as well as an extension of existing experiments for JPEG compression. A wide range of bitrates (compression ratios) was used on probe images and results are reported for 12 different subspace face recognition algorithms. Effects of image compression on recognition performance are of interest in applications where image storage space and image transmission time are of critical importance. It will be shown not only that compression does not degrade performance but also that, in some cases, it even improves it slightly. Some unexpected effects are presented (such as the ability of JPEG2000 to capture the information essential for recognizing changes caused by images taken later in time) and lines of further research are suggested.

47 citations


Journal ArticleDOI
TL;DR: A shared key algorithm that works directly in the JPEG domain, thus enabling shared key image encryption for a variety of applications and retaining the storage advantage provided by JPEG compression standard is proposed.
Abstract: Several methods have been proposed for encrypting images by shared key encryption mechanisms since the work of Naor and Shamir. All the existing techniques are applicable primarily to non-compressed images in either monochrome or color domains. However, most imaging applications including digital photography, archiving, and Internet communications nowadays use images in the JPEG compressed format. Application of the existing shared key cryptographic schemes to these images requires conversion back into the spatial domain. In this paper we propose a shared key algorithm that works directly in the JPEG domain, thus enabling shared key image encryption for a variety of applications. The scheme works directly on the quantized DCT coefficients, and the resulting noise-like shares are also stored in the JPEG format. The decryption process is lossless, preserving the original JPEG data. The experiments indicate that each share image is approximately the same size as the original JPEG image, retaining the storage advantage provided by the JPEG compression standard. Three extensions, one to improve the random appearance of the generated shares, another to obtain shares with asymmetric file sizes, and the third to generalize the scheme for n>2 share cases, are described as well.
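The abstract does not spell out the construction, but the general idea of shared-key splitting in the quantized-DCT domain can be illustrated with a simple additive two-share sketch (purely an illustration of the concept, not the authors' algorithm): each quantized coefficient is split into a noise share and a complement, so that neither share alone reveals the image, while recombining them restores the exact quantized values (lossless with respect to the JPEG data).

```python
import numpy as np

rng = np.random.default_rng(seed=42)   # in practice the noise would be derived from the shared key

def split_coefficients(q_dct, amplitude=1024):
    """Split quantized DCT coefficients into two noise-like shares.

    share1 is pure noise; share2 = coefficients - share1.
    Recombining the shares restores the exact quantized coefficients.
    """
    share1 = rng.integers(-amplitude, amplitude, size=q_dct.shape, dtype=np.int32)
    share2 = q_dct.astype(np.int32) - share1
    return share1, share2

def combine_shares(share1, share2):
    return share1 + share2

coeffs = np.array([[-26, 3, 0], [1, -2, 0], [0, 0, 0]], dtype=np.int32)
s1, s2 = split_coefficients(coeffs)
assert np.array_equal(combine_shares(s1, s2), coeffs)
```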

45 citations


Journal ArticleDOI
TL;DR: A dedicated architecture of the block-coding engine was implemented in VHDL and synthesized for field-programmable gate array devices; results show that a single engine can process about 22 million samples per second at a 66-MHz working frequency.
Abstract: JPEG 2000 offers critical advantages over other still image compression schemes at the price of increased computational complexity. Hardware-accelerated performance is a key to the successful development of real-time JPEG 2000 solutions for applications such as digital cinema and digital home theatre. Embedded block coding with optimized truncation plays the crucial role in the whole processing chain because it requires bit-level operations. In this paper, a dedicated architecture of the block-coding engine is presented. Square-based bit-plane scanning and an internal first-in first-out (FIFO) buffer are combined to speed up context generation. A dynamic significance state restoring technique reduces the size of the state memories to 1 kbit. A pipeline architecture enhanced by an inverse multiple branch selection method is exploited to code two context-symbol pairs per clock cycle in the arithmetic coder module. The block-coding architecture was implemented in VHDL and synthesized for field-programmable gate array devices. Simulation results show that a single engine can process, on average, about 22 million samples per second at a 66-MHz working frequency.

38 citations


Patent
28 Mar 2005
TL;DR: In this paper, the authors proposed a method of JPEG compression of an image frame divided up into a plurality of non-overlapping, tiled 8×8 pixel blocks X_i.
Abstract: A method of JPEG compression of an image frame divided into a plurality of non-overlapping, tiled 8×8 pixel blocks X_i. A global quantization matrix Q is determined either by selecting a standard JPEG quantization table or by selecting a quantization table such that the magnitude of each quantization matrix coefficient Q[m,n] is inversely proportional to the aggregate visual importance in the image of the corresponding DCT basis vector. Next, a linear scaling factor S_i is selected for each block, bounded by user-selected values S_min and S_max. Transform coefficients Y_i, obtained from a discrete cosine transform of X_i, are quantized with the global table S_min·Q while emulating the effects of quantization with the local table S_i·Q, and the quantized coefficients T_i[m,n] and the global quantization table S_min·Q are entropy encoded, where S_min is a user-selected minimum scaling factor, to create a JPEG Part 1 image file. The algorithm is unique in that it achieves the effect of variable quantization while still producing a fully compliant JPEG Part 1 file.
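One plausible reading of the emulation described above is sketched below (the exact rounding used in the patent is not given here, so this is an assumption-laden illustration): coefficients are first quantized with the coarse per-block table S_i·Q, then re-expressed as integer multiples of the fine global step S_min·Q, so a standard baseline decoder using the single global table reconstructs approximately what the coarse quantization would have produced.

```python
import numpy as np

def emulate_variable_quantization(Y, Q, S_i, S_min):
    """Quantize DCT block Y as if with local table S_i*Q, but output integer
    levels T relative to the global table S_min*Q, so the file stays a plain
    JPEG Part 1 stream with a single quantization table."""
    local_levels = np.round(Y / (S_i * Q))                  # coarse, per-block quantization
    T = np.round(local_levels * S_i / S_min).astype(int)    # re-express on the global grid
    return T

# A baseline decoder reconstructs T * (S_min * Q), which approximates the
# coarse local reconstruction local_levels * (S_i * Q).
```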

36 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: The results show that JPEG-LS is the algorithm with the best performance, both in terms of compression ratio and compression speed in the application of compressing medical infrared images.
Abstract: Several popular lossless image compression algorithms were evaluated for the application of compressing medical infrared images. Lossless JPEG, JPEG-LS, JPEG2000, PNG, and CALIC were tested on an image dataset of 380+ thermal images. The results show that JPEG-LS is the algorithm with the best performance, both in terms of compression ratio and compression speed.

Journal ArticleDOI
TL;DR: A novel and efficient diagnostically lossless compression scheme provides the 3D medical image sets with a progressive transmission capability and achieves better compression than the state-of-the-art.

Proceedings ArticleDOI
16 May 2005
TL;DR: Simulation results demonstrate that the embedded watermarks can be almost fully extracted from images compressed with very high compression ratios.
Abstract: A watermarking technique for copyright protection is introduced. It achieves a good improvement in the robustness of protected images, especially against attacks using JPEG compression. Simulation results demonstrate that the embedded watermarks can be almost fully extracted from images compressed with very high compression ratios.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: This work proposes two coding methods that improve the coding efficiency of SLS, namely, a context-based arithmetic code (CBAC) method and a low energy mode code method that work harmonically with the current SLS framework.
Abstract: The recently introduced MPEG standard for lossless audio coding, MPEG-4 Audio Scalable to Lossless (SLS) coding technology, provides a universal audio format that integrates the functionalities of lossy audio coding, lossless audio coding and fine granular scalable audio coding in a single framework. We propose two coding methods that improve the coding efficiency of SLS, namely, a context-based arithmetic code (CBAC) method and a low energy mode code method. These two coding methods work harmonically with the current SLS framework and preserve all its desirable features, such as fine granular scalability, while successfully improving its lossless compression ratio performance.

15 May 2005
TL;DR: A lossless compression method for multichannel time-series signals, based on per-channel time-domain linear prediction and adaptive inter-channel differencing of the prediction residuals, proposed as a coding module for the lossless audio coding standard being developed in ISO/IEC MPEG (Moving Picture Experts Group).
Abstract: In this paper, we propose a lossless compression method for multichannel time-series signals that flexibly combines time-domain linear prediction within each channel and adaptive differencing based on the inter-channel correlation of the prediction residuals. Using the inter-channel correlation function of the prediction residuals as a criterion, the signal of the channel being coded is differenced against a reference channel multiplied by an optimal weight coefficient, and entropy coding exploits the resulting tendency toward small signal amplitudes. Reference channels and coded channels are determined sequentially in a chained fashion, and the reference channel index and the weight coefficient are carried as side information. We also present a technique that limits the growth in computation by partitioning the full set of channels into several subsets, and an improved method that repeats the search for weight coefficients and reference channels several times. Coding experiments on ordinary 2- to 8-channel audio signals and on 256-channel magnetoencephalography data show that the proposed method improves the compression ratio by up to about 3% over conventional joint-stereo coding of channel pairs. The proposed techniques have been submitted as a coding module for the lossless audio coding standard under development in ISO/IEC MPEG (Moving Picture Experts Group), and are expected to be used internationally for coding not only audio signals but a wide range of time-series signals. This research is the result of joint work under an industry-academia collaboration program between the Graduate School of Information Science and Technology, the University of Tokyo, and Nippon Telegraph and Telephone Corporation.
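A rough sketch (not the authors' full encoder, and with the least-squares weight left unquantized for clarity) of the inter-channel step described above: after per-channel linear prediction, the residual of the channel being coded is differenced against the most correlated reference-channel residual scaled by an optimal weight, and the weight plus the reference index become side information.

```python
import numpy as np

def interchannel_difference(target_res, candidate_res):
    """Pick the reference residual most correlated with the target residual,
    compute the weight w minimizing ||target - w*ref||^2, and return the
    (smaller-amplitude) difference signal plus the side information.
    """
    corrs = [abs(np.dot(target_res, r)) / (np.linalg.norm(r) + 1e-12)
             for r in candidate_res]
    ref_idx = int(np.argmax(corrs))
    ref = candidate_res[ref_idx]
    w = np.dot(target_res, ref) / (np.dot(ref, ref) + 1e-12)   # least-squares weight
    diff = target_res - w * ref     # entropy coded; w and ref_idx are side info
    return diff, ref_idx, w
```

In a truly lossless codec the weight would be quantized and the difference kept integer-valued so the decoder can reproduce it exactly; that bookkeeping is omitted here.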

Proceedings ArticleDOI
18 Mar 2005
TL;DR: A reversible video coding method that combines an adaptive transform with H.264 tools and provides better performance than other existing methods is proposed.
Abstract: In this paper, we propose a reversible video coding method that combines an adaptive transform with H.264 tools. We extensively compare its lossless coding performance against three different reversible transforms and motion JPEG 2000. Experimental results show that for I-picture coding, the proposed method performed slightly worse (2.0% lower compression ratio on average) than motion JPEG 2000 while outperforming the new H.264 FRExt standard (by 9.4 to 14%). For B and P pictures, our method offered the best performance, with a 0.3 to 6.2% gain over FRExt. Our method requires minimal modification to H.264 software and provides better performance than other existing methods.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed compression scheme, which innovatively combines lossless, near-lossless, and progressive coding attributes, gives competitive performance in comparison with state-of-the-art compression schemes.
Abstract: We present a compression technique that provides progressive transmission as well as lossless and near-lossless compression in a single framework. The proposed technique produces a bit stream that results in a progressive, and ultimately lossless, reconstruction of an image similar to what one can obtain with a reversible wavelet codec. In addition, the proposed scheme provides near-lossless reconstruction with respect to a given bound after decoding of each layer of the successively refinable bit stream. We formulate the image data-compression problem as one of successively refining the probability density function (pdf) estimate of each pixel. Within this framework, restricting the region of support of the estimated pdf to a fixed-size interval then results in near-lossless reconstruction. We address the context-selection problem, as well as pdf-estimation methods based on context data at any pass. Experimental results for both lossless and near-lossless cases indicate that the proposed compression scheme, which innovatively combines lossless, near-lossless, and progressive coding attributes, gives competitive performance in comparison with state-of-the-art compression schemes.

Proceedings ArticleDOI
25 Jul 2005
TL;DR: Experimental results on AVIRIS data show that the proposed technique exhibits very competitive performance for both reversible and irreversible compression, with significantly lower complexity than DPCM-based methods, and memory requirements compatible with typical onboard processing subsystems of remote sensing platforms.
Abstract: Hyperspectral image compression has recently attracted a remarkable interest for remote sensing applications. In this paper we propose a unified embedded lossy-to-lossless compression framework based on the JPEG 2000 standard. In particular, we exploit the multicomponent transformation feature of Part 2 of JPEG 2000 to devise a compression framework based on a spectral decorrelating transform followed by JPEG 2000 compression of the transformed coefficients. We evaluate several possible choices for the spectral transform, including a floating-point DCT, an integer DCT, and a wavelet transform. The final version of the proposed algorithm has been compared to 3D-SPIHT in the lossy case, and to several state-of-the-art compression algorithms including JPEG-LS and 3D-CALIC in the lossless case. Experimental results on AVIRIS data show that the proposed technique exhibits very competitive performance for both reversible and irreversible compression, with significantly lower complexity than DPCM-based methods, and memory requirements compatible with typical onboard processing subsystems of remote sensing platforms.
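A sketch of the spectral decorrelation step described above, assuming a NumPy/SciPy environment; the DCT here stands in for any of the spectral transforms the authors evaluate. A 1-D transform is applied along the band axis of the hyperspectral cube, and each resulting component image would then be handed to a JPEG 2000 (Part 2 multicomponent) coder.

```python
import numpy as np
from scipy.fft import dct, idct

def spectral_decorrelate(cube):
    """Apply a 1-D DCT along the spectral (band) axis of a
    (bands, rows, cols) hyperspectral cube."""
    return dct(cube, type=2, axis=0, norm='ortho')

def spectral_recombine(components):
    """Inverse spectral transform (exact up to floating-point precision)."""
    return idct(components, type=2, axis=0, norm='ortho')

cube = np.random.rand(224, 64, 64)          # AVIRIS-like cube: 224 bands
components = spectral_decorrelate(cube)
assert np.allclose(spectral_recombine(components), cube)
```

For the reversible (lossless) path the floating-point DCT would be replaced by an integer transform, as the abstract notes.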

Journal ArticleDOI
TL;DR: This paper will provide readers with some insight on various features and functionalities supported by a baseline JPEG 2000-compliant codec and can serve as a guideline for users to estimate the effectiveness of JPEG 2000 for various applications, and to select optimal parameters according to specific application requirements.
Abstract: Some of the major objectives of the JPEG 2000 still image coding standard were compression and memory efficiency, lossy to lossless coding, support for continuous-tone to bi-level images, error resilience, and random access to regions of interest. This paper will provide readers with some insight on various features and functionalities supported by a baseline JPEG 2000-compliant codec. Three JPEG 2000 software implementations (Kakadu, JasPer, JJ2000) are compared with several other codecs, including JPEG, JBIG, JPEG-LS, MPEG-4 VTC and H.264 intra coding. This study can serve as a guideline for users to estimate the effectiveness of JPEG 2000 for various applications, and to select optimal parameters according to specific application requirements.

Proceedings ArticleDOI
17 Oct 2005
TL;DR: A model for generating a robust JPEG quantization table using genetic algorithms within the JPEG image compression process is presented; candidate tables are compared by their SNR over a group of natural images and the best one is selected.
Abstract: We present a model for generating a robust JPEG quantization table using genetic algorithms within the JPEG image compression process. After several experiments over a range of generations, the final quantization table was obtained. Finding the best quantization table (Q-table) using genetic algorithms with the JPEG standard is a powerful way to obtain the desired quality of the recovered image. The method compares the SNR of the quantization tables created during the process and chooses, through natural selection, the one with the highest SNR for a group of natural images; the program also allows its parameters to be changed at any time to produce better results.
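A compact sketch of the kind of genetic-algorithm loop the abstract describes. The population size, mutation rate, crossover scheme, and value range are assumptions, not values from the paper, and the fitness callback is a placeholder: a real implementation would JPEG-encode and decode a set of natural images with each candidate table and return the resulting SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_q_table(fitness, generations=100, pop_size=20, mutation_rate=0.05):
    """Genetic search over 8x8 JPEG quantization tables.

    `fitness(q_table)` should return the mean SNR obtained by compressing
    and decompressing a group of natural images with that table.
    """
    pop = [rng.integers(1, 256, size=(8, 8)) for _ in range(pop_size)]
    for _ in range(generations):
        scores = np.array([fitness(q) for q in pop])
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[:pop_size // 2]]      # natural selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((8, 8)) < 0.5                    # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            mutate = rng.random((8, 8)) < mutation_rate        # random mutation
            child = np.where(mutate, rng.integers(1, 256, (8, 8)), child)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```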

Journal ArticleDOI
TL;DR: The proposed 1D Int-DCT is newly designed to reduce rounding effects by minimizing the number of rounding operations, and supports not only lossless coding for high-quality decoded images but also lossy coding compatible with conventional DCT-based coding systems.
Abstract: In this paper, we propose a new one-dimensional (1D) integer discrete cosine transform (Int-DCT) for unified lossless/lossy image compression. The proposed 1D Int-DCT is newly designed to reduce rounding effects by minimizing the number of rounding operations. The proposed Int-DCT supports not only lossless coding for a high-quality decoded image but also lossy coding for compatibility with the conventional DCT-based coding system. Both theoretical analysis and simulation results confirm the effectiveness of the proposed Int-DCT.

Proceedings ArticleDOI
Hua Cai, Jiang Li
25 Jul 2005
TL;DR: This paper presents a new image coding algorithm based on a simple architecture that makes it easy to model and encode the residual samples, and gives performance close to JPEG-LS and JPEG2000.
Abstract: With the rapid development of digital technology in consumer electronics, the demand to preserve raw image data for further editing or repeated compression is increasing. Traditional lossless image coders usually consist of computationally intensive modeling and entropy coding phases, and therefore might not be suitable for mobile devices or scenarios with strict real-time requirements. This paper presents a new image coding algorithm based on a simple architecture that makes it easy to model and encode the residual samples. In the proposed algorithm, each residual sample is separated into three parts: (1) a sign value, (2) a magnitude value, and (3) a magnitude level. A tree structure is then used to organize the magnitude levels. By simply coding the tree and the other two parts without any complicated modeling and entropy coding, good performance can be achieved with very low computational cost in the binary-uncoded mode. Moreover, with the aid of context-based arithmetic coding, the magnitude values are further compressed in the arithmetic-coded mode. This gives performance close to JPEG-LS and JPEG2000.
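A minimal sketch of one reasonable interpretation of the residual decomposition described above (the leading-one-implicit convention is an assumption; the tree organization of the magnitude levels and the optional arithmetic coding stage are omitted).

```python
def split_residual(r):
    """Split one residual sample into (sign, magnitude level, magnitude value).

    The magnitude level is the number of bits needed for |r|; the magnitude
    value is |r| with its leading 1-bit dropped (implicit).
    """
    sign = 0 if r >= 0 else 1
    mag = abs(r)
    level = mag.bit_length()                          # 0 for a zero residual
    value = mag - (1 << (level - 1)) if level > 0 else 0
    return sign, level, value

def join_residual(sign, level, value):
    """Exact inverse of split_residual."""
    mag = value + (1 << (level - 1)) if level > 0 else 0
    return -mag if sign else mag

for r in (-37, 0, 5, 1024):
    assert join_residual(*split_residual(r)) == r
```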

Proceedings ArticleDOI
01 Jan 2005
TL;DR: This work is the first real-time implementation of JPEG 2000 on a mammogram image database; results indicate that JPEG 2000 and JPEG-LS provide comparable compression performance, since their compression ratios differed by only 0.72%, and both compressors surpassed the results of the other coders.
Abstract: In this study, we propose JPEG 2000 as an algorithm for the compression of digital mammograms; the proposed work is the first real-time implementation of JPEG 2000 on a mammogram image database. Only the lossless compression mode of JPEG 2000 was examined, to ensure that the mammogram is delivered without distortion. The performance of JPEG 2000 was compared against several other lossless coders: JPEG-LS, lossless JPEG, adaptive Huffman, arithmetic coding with zero-order and first-order probability models, and Lempel-Ziv-Welch (LZW) with 12- and 15-bit dictionaries. Each compressor was supplied the identical set of 50 mammograms, each having a resolution of 8 bits/pixel and dimensions of 1024×1024. Experimental results indicate that JPEG 2000 and JPEG-LS provide comparable compression performance, since their compression ratios differed by only 0.72%, and both compressors surpassed the results of the other coders. Although JPEG 2000 suffered from a slightly longer encoding and decoding delay than JPEG-LS (0.8 s on average), it is still preferred for mammogram images due to the wide variety of features that aid in reliable image transmission, provide an efficient mechanism for remote access to digital libraries, and contribute to fast database access.

Proceedings ArticleDOI
14 Nov 2005
TL;DR: A novel lossless compression scheme for multispectral and hyperspectral images, which combines low encoding complexity and high-performance, is proposed.
Abstract: In remote sensing systems, on-board data compression is a crucial task that has to be carried out with limited computational resources. In this paper we propose a novel lossless compression scheme for multispectral and hyperspectral images, which combines low encoding complexity and high-performance. The encoder is based on distributed source coding concepts, and employs Slepian-Wolf coding of the bitplanes of the CALIC prediction errors to achieve improved performance. Experimental results on AVIRIS data show that the proposed scheme exhibits performance similar to CALIC, and significantly better than JPEG 2000.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: The task of additional compression of images previously coded with JPEG is considered, and a novel efficient method for coding quantized DCT coefficients is proposed, based on separating the coefficients into bit planes and taking into account the correlation between neighboring coefficients in a block.
Abstract: The task of additional compression of images previously coded with JPEG is considered. A novel efficient method for coding quantized DCT coefficients is proposed. It is based on separating the coefficients into bit planes, taking into account the correlation between the values of neighboring coefficients within a block, between the values of corresponding coefficients of neighboring blocks, and between corresponding coefficients of different color layers. It is shown that the designed technique allows the compression ratio of images already compressed by JPEG to be increased by a factor of 1.3 to 2.3 without introducing additional losses.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: An efficient syntax-compliant encryption scheme for JPEG 2000 and motion JPEG 2000 is proposed in this paper, which shows advantages on syntax compliance, compression overhead, scalable granularity, and error resilience.
Abstract: An efficient syntax-compliant encryption scheme for JPEG 2000 and motion JPEG 2000 is proposed in this paper. Compressed visual data is completely encrypted yet the full scalability of the unencrypted codestream is completely preserved to allow near RD-optimal truncations and other manipulations securely without decryption. Compared with other reported schemes, our scheme shows advantages on syntax compliance, compression overhead, scalable granularity, and error resilience. In addition to preserving the original scalability, a JPEG 2000 codestream encrypted with our scheme has the same error resilience capability as the unencrypted codestream. The encrypted codestream is still syntax-compliant so that an encryption-unaware decoder can still decode the encrypted codestream, although the decoded visual data is completely garbled and meaningless. Our scheme has virtually no adverse impact on the compression efficiency.

Proceedings ArticleDOI
15 Apr 2005
TL;DR: It is suggested that DCT JPEG may outperform JPEG2000 for compression ratios generally used in medical imaging and that the differences between DCT and JPEG2000 could be visible to observers and thus clinically significant.
Abstract: The JPEG2000 compression standard is increasingly a preferred industry method for 2D image compression. Some vendors, however, continue to use proprietary discrete cosine transform (DCT) JPEG encoding. This study compares image quality in terms of just-noticeable differences (JNDs) and peak signal-to-noise ratios (PSNR) between DCT JPEG encoding and JPEG2000 encoding. Four computed tomography and 6 computed radiography studies were compressed using a proprietary DCT JPEG encoder and JPEG2000 standard compression. Image quality was measured in JNDs and PSNRs. The JNDmetrix computational visual discrimination model simulates known physiological mechanisms in the human visual system, including the luminance and contrast sensitivity of the eye and spatial frequency and orientation responses of the visual cortex. Higher JND values indicate that a human observer would be more likely to notice a significant difference between compared images. DCT JPEG compression showed consistently lower image distortions at lower compression ratios, whereas JPEG2000 compression showed benefit at higher compression ratios (>50:1). The crossover occurred at ratios that varied among the images. The magnitude of any advantage of DCT compression at low ratios was often small. Interestingly, this advantage of DCT JPEG compression at lower ratios was generally not observed when image quality was measured in PSNRs. These results suggest that DCT JPEG may outperform JPEG2000 for compression ratios generally used in medical imaging and that the differences between DCT and JPEG2000 could be visible to observers and thus clinically significant.

Journal ArticleDOI
TL;DR: Fault tolerance error-detecting capabilities for the major subsystems that constitute a JPEG 2000 standard are developed and the design strategies have been tested using Matlab programs and simulation results are presented.
Abstract: The JPEG 2000 image compression standard is designed for a broad range of data compression applications. The new standard is based on wavelet technology and layered coding in order to provide a rich feature compressed image stream. The implementations of the JPEG 2000 codec are susceptible to computer-induced soft errors. One situation requiring fault tolerance is remote-sensing satellites, where high energy particles and radiation produce single event upsets corrupting the highly susceptible data compression operations. This paper develops fault tolerance error-detecting capabilities for the major subsystems that constitute a JPEG 2000 standard. The nature of the subsystem dictates the realistic fault model where some parts have numerical error impacts whereas others are properly modeled using bit-level variables. The critical operations of subunits such as discrete wavelet transform (DWT) and quantization are protected against numerical errors. Concurrent error detection techniques are applied to accommodate the data type and numerical operations in each processing unit. On the other hand, the embedded block coding with optimal truncation (EBCOT) system and the bitstream formation unit are protected against soft-error effects using binary decision variables and cyclic redundancy check (CRC) parity values, respectively. The techniques achieve excellent error-detecting capability at only a slight increase in complexity. The design strategies have been tested using Matlab programs and simulation results are presented.

Journal ArticleDOI
TL;DR: This letter demonstrates the utility of standardized image formats for losslessly compressing, archiving, and distributing 2-D geophysical data by comparing them with the traditional file compression utilities gzip and bzip2 on several types of remote sensing data.
Abstract: Certain types of two-dimensional (2-D) numerical remote sensing data can be losslessly and compactly compressed for archiving and distribution using standardized image formats. One common method for archiving and distributing data involves compressing data files using file compression utilities such as gzip and bzip2, which are widely available on UNIX and Linux operating systems. GZIP-compressed files and bzip2-compressed files must first be uncompressed before they can be read by a scientific application (e.g., MATLAB, IDL). Data stored using an image format, on the other hand, can be read directly by a scientific application supporting that format and, therefore, can be stored in compressed form, saving disk space. Moreover, wide use of image formats by data providers and wide support by scientific applications can reduce the need for providers of geophysical data to develop and maintain software customized for each type of dataset and reduce the need for users to develop and maintain or download and install such software. This letter demonstrates the utility of standardized image formats for losslessly compressing, archiving, and distributing 2-D geophysical data by comparing them with the traditional file compression utilities gzip and bzip2 on several types of remote sensing data. The formats studied include TIFF, PNG, lossless JPEG, JPEG-LS, and JPEG2000. PNG and TIFF are widely supported. JPEG2000 and JPEG-LS could become widely supported in the future. It is demonstrated that when the appropriate image format is selected, the compression ratios can be comparable to or better than those resulting from the use of file compression utilities. In particular, PNG, JPEG-LS, and JPEG2000 show promise for the types of data studied.
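A short sketch of the kind of comparison the letter reports, using Pillow for PNG encoding and the standard-library gzip module (the libraries, the 8-bit example array, and the random test data are assumptions, not part of the study; real geophysical data would typically compress far better than noise).

```python
import gzip
import io
import numpy as np
from PIL import Image

# Stand-in 2-D dataset; in the study this would be real remote sensing data.
data = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
raw_bytes = data.tobytes()

# Lossless PNG via an image library (readable directly by many applications)
png_buf = io.BytesIO()
Image.fromarray(data).save(png_buf, format='PNG')

# Traditional file-compression route
gz_bytes = gzip.compress(raw_bytes)

print("raw bytes :", len(raw_bytes))
print("PNG bytes :", png_buf.getbuffer().nbytes)
print("gzip bytes:", len(gz_bytes))
```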

Proceedings ArticleDOI
01 Jan 2005
TL;DR: A differential pulse code modulation (DPCM) stage is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image, and an effective scheme based on Huffman coding is developed to encode the residual image.
Abstract: In this paper, a fast lossless compression scheme is presented for medical images. The scheme consists of two stages. In the first stage, differential pulse code modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. The newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method can be obtained. At the same time, this method is faster than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression.
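A small sketch of the first stage described above, using a simple horizontal DPCM predictor (the predictor choice is an assumption, and the paper's tailored Huffman table construction is replaced here by a zero-order entropy estimate just to show the decorrelation gain).

```python
import numpy as np

def dpcm_residuals(img):
    """First stage: horizontal DPCM. Each pixel is predicted by its left
    neighbor; the first column is kept as-is."""
    img = img.astype(np.int32)
    res = img.copy()
    res[:, 1:] = img[:, 1:] - img[:, :-1]
    return res

def empirical_entropy(x):
    """Bits/sample under a zero-order model, a proxy for Huffman code length."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic smooth-ish test image standing in for medical image data
img = np.clip(np.cumsum(np.random.randn(256, 256) * 3, axis=1), 0, 255).astype(np.uint8)
print("raw image :", empirical_entropy(img), "bits/pixel")
print("residuals :", empirical_entropy(dpcm_residuals(img)), "bits/pixel")
```

The drop in bits/pixel between the raw image and the DPCM residuals is what the second-stage Huffman coder exploits.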