
Showing papers on "Lossless JPEG published in 1998"


Patent
28 Aug 1998
TL;DR: In this paper, the original image is cut into blocks to which the Discrete Cosine Transform (DCT) is applied, the DCT coefficients are quantized and watermarked, and during authentication the coefficients are verified and the DCT output of each block is dequantized.
Abstract: A watermarking method involves mostly invisible artifacts and is sensitive to any modification of the picture at the level of precision rendered by the compressed version of the image. The image is compressed according to a known compression standard, such as the JPEG standard, and with a fixed quality setting. Using the JPEG standard, the original image is cut into blocks to which the Discrete Cosine Transform (DCT) is applied and the DCT coefficients quantized. The watermark according to the invention is applied to the quantized DCT coefficients. This is done using an encryption function, such as a secret key/public key algorithm. The JPEG compression is then completed using a lossless compression scheme, such as Huffman coding, to produce the compressed and watermarked image. Authentication of the compressed and watermarked image begins with a lossless decompression scheme to obtain the set of quantized DCT coefficients. The coefficients are authenticated, and the DCT output of each block is dequantized. If necessary, an inverse DCT is applied to each block to output the decompressed watermarked image.
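To make the embedding step concrete, here is a minimal sketch (not the patented method itself) of planting a keyed bit in the quantized DCT coefficients of one 8x8 block; the patent uses a secret-key/public-key encryption function, for which a keyed hash is used here as a stand-in, and the quantization table and chosen coefficient position are illustrative assumptions.

```python
import hashlib
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

Q = np.full((8, 8), 16.0)  # placeholder quantization table (a real JPEG table differs)

def watermark_block(block, key, block_index):
    """Quantize an 8x8 block's DCT coefficients and fold one keyed bit into
    the least significant bit of a mid-frequency coefficient (illustrative)."""
    C = dct_matrix()
    coeffs = C @ (block.astype(float) - 128.0) @ C.T   # forward DCT with level shift
    q = np.round(coeffs / Q).astype(int)               # quantization
    digest = hashlib.sha256(f"{key}:{block_index}".encode()).digest()
    bit = digest[0] & 1                                 # keyed watermark bit
    q[4, 3] = (q[4, 3] & ~1) | bit                      # embed in the quantized domain
    return q                                            # Huffman coding would follow
```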

185 citations


Journal ArticleDOI
TL;DR: A postprocessing algorithm consisting of three stages is proposed to reduce the blocking artifacts of Joint Photographic Experts Group (JPEG) decompressed images, and it reduces these artifacts efficiently.
Abstract: A postprocessing algorithm is proposed to reduce the blocking artifacts of Joint Photographic Experts Group (JPEG) decompressed images. The reconstructed images from JPEG compression produce noticeable image degradation near the block boundaries, in particular for highly compressed images, because each block is transformed and quantized independently. The blocking effects are classified into three types of noises in this paper: grid noise, staircase noise, and corner outlier. The proposed postprocessing algorithm, which consists of three stages, reduces these blocking artifacts efficiently. A comparison study between the proposed algorithm and other postprocessing algorithms is made by computer simulation with several JPEG images.
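As a simple illustration of the grid-noise part of the problem (not the paper's three-stage algorithm), a deblocking filter can smooth the two pixels straddling each block boundary when their difference is small enough to be an artifact rather than a real edge; the threshold below is an assumed parameter.

```python
import numpy as np

def smooth_vertical_boundaries(img, block=8, threshold=10):
    """Toy grid-noise reduction: average pixel pairs across vertical 8x8
    block boundaries wherever the step is small (likely a blocking artifact)."""
    out = img.astype(float).copy()
    for x in range(block, out.shape[1], block):
        left, right = out[:, x - 1], out[:, x]
        mask = np.abs(right - left) < threshold     # leave strong (real) edges alone
        avg = (left[mask] + right[mask]) / 2.0
        out[mask, x - 1] = avg
        out[mask, x] = avg
    return out
```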

180 citations


Journal ArticleDOI
TL;DR: A spatial subband image-compression method well suited to the local nature of the CNNUM is presented; it performs especially well with radiographical images (mammograms) and is therefore suggested for use as part of a cellular neural/nonlinear (CNN)-based mammogram-analysis system.
Abstract: This paper demonstrates how the cellular neural-network universal machine (CNNUM) architecture can be applied to image compression. We present a spatial subband image-compression method well suited to the local nature of the CNNUM. In case of lossless image compression, it outperforms the JPEG image-compression standard both in terms of compression efficiency and speed. It performs especially well with radiographical images (mammograms); therefore, it is suggested to use it as part of a cellular neural/nonlinear (CNN)-based mammogram-analysis system. This paper also gives a CNN-based method for the fast implementation of the moving pictures experts group (MPEG) and joint photographic experts group (JPEG) moving and still image-compression standards.

118 citations


Journal ArticleDOI
TL;DR: A detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed; it can recover high-quality JPEG images from the corresponding corrupted JPEG images at bit error rates up to approximately 0.4%.
Abstract: The detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed. The objective is to eliminate transmission errors in JPEG images. Here a transmission error may be either a single-bit error or a burst error containing N successive error bits. For an entropy-coded JPEG image, a single transmission error in a codeword will not only affect the underlying codeword, but may also affect subsequent codewords. Consequently, a single error in an entropy-coded system may result in a significant degradation. To cope with the synchronization problem, in the proposed approach the restart capability of JPEG images is enabled, i.e., the eight unique restart markers (synchronization codewords) are periodically inserted into the JPEG compressed image bitstream. Transmission errors in a JPEG image are sequentially detected both when the JPEG image is under decoding and after the JPEG image has been decoded. When a transmission error or equivalently a corrupted restart interval is detected, the proposed error correction approach simply performs a sequence of bit inversions and redecoding operations on the corrupted restart interval and selects the "best" feasible redecoding solution by using a proposed cost function for error correction. The proposed approach can recover high-quality JPEG images from the corresponding corrupted JPEG images at bit error rates (BERs) up to approximately 0.4%. This shows the feasibility of the proposed approach.
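A sketch of the correction search described above, assuming hypothetical decode and cost callbacks (they are not from the paper): each single-bit inversion of the corrupted restart interval is redecoded and the feasible candidate with the lowest cost is kept.

```python
def repair_restart_interval(bits, decode, cost):
    """Try every single-bit inversion of a corrupted restart interval,
    redecode it, and keep the feasible candidate with the smallest cost.
    `decode` returns None when the bitstream is still undecodable."""
    best, best_cost = None, float("inf")
    for i in range(len(bits)):
        candidate = bits[:i] + (1 - bits[i],) + bits[i + 1:]
        blocks = decode(candidate)
        if blocks is None:
            continue
        c = cost(blocks)                  # e.g. a block-boundary smoothness measure
        if c < best_cost:
            best, best_cost = candidate, c
    return best
```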

93 citations


Patent
27 Oct 1998
TL;DR: In this article, an image compression system includes a source, a memory, a personality/graphics engine having a personality and a graphics engine, an image processor and a memory allocator.
Abstract: An image compression system includes a source, a memory, a personality/graphics engine having a personality and a graphics engine, a memory allocator and an image processor. The source is operative to supply digital data representative of images. The memory has a finite size for receiving the digital data. The personality is configured to interpret an input file and is operative to construct image patches from the digital data. The graphics engine is operative to generate a display list from the memory. The memory allocator is associated with the memory and is operative to allocate image patches. The image processor includes a JPEG compressor and a JPEG decompressor. The image processor is operative to render the display list into strip buffers. The JPEG compressor is operative to JPEG compress images on the display list. The JPEG decompressor is operative to decompress compressed images on the display list. The image processor is operative to uncompress the compressed patch data and copy each bit in the image patch into the strip buffers. A method is also disclosed.

61 citations


Journal ArticleDOI
TL;DR: A partial embedding two-layer scheme is proposed in which an embedded multiresolution coder generates a lossy base layer, and a simple but effective context-based lossless coder codes the difference between the original image and the lossy reconstruction.
Abstract: Predictive and multiresolution techniques for near-lossless image compression based on the criterion of maximum allowable deviation of pixel values are investigated. A procedure for near-lossless compression using a modification of lossless predictive coding techniques to satisfy the specified tolerance is described. Simulation results with modified versions of two of the best lossless predictive coding techniques known, CALIC and JPEG-LS, are provided. Application of lossless coding based on reversible transforms in conjunction with prequantization is shown to be inferior to predictive techniques for near-lossless compression. A partial embedding two-layer scheme is proposed in which an embedded multiresolution coder generates a lossy base layer, and a simple but effective context-based lossless coder codes the difference between the original image and the lossy reconstruction. Results show that this lossy plus near-lossless technique yields compression ratios close to those obtained with predictive techniques, while providing the feature of a partially embedded bit-stream.
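The core near-lossless rule referenced here (maximum allowable deviation delta) can be sketched as the JPEG-LS-style residual quantizer below; the actual modified CALIC and JPEG-LS coders differ in prediction and context modeling.

```python
def near_lossless_residual(x, pred, delta):
    """Quantize a prediction residual so that the reconstruction error never
    exceeds +/-delta, as in JPEG-LS near-lossless mode."""
    e = x - pred
    sign = 1 if e >= 0 else -1
    q = sign * ((abs(e) + delta) // (2 * delta + 1))   # quantized residual to be coded
    recon = pred + q * (2 * delta + 1)                 # decoder-side reconstruction
    assert abs(x - recon) <= delta
    return q, recon
```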

59 citations


Proceedings ArticleDOI
Aria Nosratinia1
07 Dec 1998
TL;DR: This approach simply re-applies JPEG to the shifted versions of the already-compressed image, and forms an average, which offers better performance than other known methods, including those based on nonlinear filtering, POCS, and redundant wavelets.
Abstract: A novel method is proposed for post-processing of JPEG-encoded images, in order to reduce coding artifacts and enhance visual quality. Our method simply re-applies JPEG to the shifted versions of the already-compressed image, and forms an average. This approach, despite its simplicity, offers better performance than other known methods, including those based on nonlinear filtering, POCS, and redundant wavelets.
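A minimal sketch of the re-application idea, using Pillow's JPEG codec; the exact shift set, boundary handling (circular shifts here), and the assumption that the original quality factor is known are illustrative choices rather than details from the paper.

```python
import io
import numpy as np
from PIL import Image

def reapply_and_average(decoded, quality=75, max_shift=8):
    """Shift the decoded image, re-compress it with JPEG, shift back,
    and average all resulting images to suppress blocking artifacts."""
    acc = np.zeros(decoded.shape, dtype=np.float64)
    for dy in range(max_shift):
        for dx in range(max_shift):
            shifted = np.roll(np.roll(decoded, dy, axis=0), dx, axis=1)
            buf = io.BytesIO()
            Image.fromarray(shifted).save(buf, format="JPEG", quality=quality)
            rec = np.asarray(Image.open(buf), dtype=np.float64)
            acc += np.roll(np.roll(rec, -dy, axis=0), -dx, axis=1)
    return np.clip(acc / max_shift ** 2, 0, 255).astype(np.uint8)
```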

51 citations


Proceedings ArticleDOI
04 Oct 1998
TL;DR: A modification of the discrete cosine transform (DCT) is introduced that produces integer coefficients from which the original image data can be reconstructed losslessly, together with an embedded coding scheme and experimental rate-distortion curves for this scheme.
Abstract: This paper introduces a modification of the discrete cosine transform (DCT) that produces integer coefficients from which the original image data can be reconstructed losslessly. It describes an embedded coding scheme which incorporates this lossless DCT and presents some experimental rate-distortion curves for this scheme. The results show that the lossless compression ratio of the proposed scheme exceeds that of the lossless JPEG predictive coding scheme. On the other hand, in lossy operation the rate-distortion curve of the proposed scheme is very close to that of lossy JPEG. Also, the transform coefficients of the proposed scheme can be decoded with the ordinary DCT at the expense of a small error, which is only significant in lossless operation.

45 citations


Journal ArticleDOI
TL;DR: A nonexpansive pyramidal decomposition is proposed for low-complexity image coding that guarantees perfect reconstruction and is used to replace the discrete cosine transform in the Joint Photographic Experts Group (JPEG) coder.
Abstract: A nonexpansive pyramidal decomposition is proposed for low-complexity image coding. The image is decomposed through a nonlinear filterbank into low- and highpass signals and the recursion of the filterbank over the lowpass signal generates a pyramid resembling that of the octave wavelet transform. The structure itself guarantees perfect reconstruction and we have chosen nonlinear filters for performance reasons. The transformed samples are grouped into square blocks and used to replace the discrete cosine transform (DCT) in the Joint Photographic Experts Group (JPEG) coder. The proposed coder has some advantages over the DCT-based JPEG: computation is greatly reduced, image edges are better encoded, blocking is eliminated, and it allows lossless coding.

44 citations


Proceedings ArticleDOI
04 Oct 1998
TL;DR: This paper takes some of the Golomb-Rice coding techniques that emerged in JPEG-LS, a new standard for lossless image compression, and applies them to coding DCT coefficients in the lossy JPEG baseline algorithm, showing significant improvements in performance with limited impact on computational complexity.
Abstract: In this paper we take some of the Golomb-Rice coding techniques that emerged in JPEG-LS, a new standard for lossless image compression, and apply them to coding DCT coefficients in the lossy JPEG baseline algorithm. We show that this results in significant improvements in performance with limited impact on computational complexity. In fact, one significant reduction in complexity provided by the proposed techniques is the complete elimination of the Huffman tables in JPEG baseline, which can be a bottleneck in hardware implementations. We give simulation results comparing the performance of the proposed technique to JPEG baseline, JPEG baseline with optimal Huffman coding (two pass), and JPEG arithmetic coding.
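For reference, a Golomb-Rice codeword for a non-negative integer looks as follows; the signed-to-unsigned mapping is the one used before Rice coding in JPEG-LS, while the length-limiting and escape handling that a JPEG coder would need are omitted.

```python
def map_signed(e):
    # Fold a signed residual into a non-negative integer (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...).
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Golomb-Rice code with parameter k: unary quotient, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    code = "1" * q + "0"                     # unary part
    if k:
        code += format(r, "0{}b".format(k))  # fixed-length remainder
    return code

# Example: rice_encode(map_signed(-3), k=2) == "1001"
```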

43 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: A lossless compression technique specifically designed for palettized synthetic images that uses patterns of neighborhood pixels to predict and code each pixel.
Abstract: We propose a lossless compression technique specifically designed for palettized synthetic images. Predictive techniques do not work very well for these images, as a prediction "formula" based on some average of the values or palette indices of neighbors is not likely to be very meaningful. The proposed algorithm uses patterns of neighborhood pixels to predict and code each pixel. The prediction rules for different patterns are learned adaptively from the image itself. Using a large number of test images of the above kind (maps, clip-art, line drawings), the proposed method is found to reduce the size achieved by GIF compression by 50%, and the size resulting from the previous best approach (CALIC with optimized palette reordering) by 20%.
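A rough sketch of the adaptive pattern-based idea under assumed details (four causal neighbours as the pattern, most-frequent symbol as the prediction); a real coder would feed the per-pattern counts into an arithmetic coder rather than just measuring prediction hits.

```python
from collections import defaultdict

def pattern_prediction_rate(indices):
    """Learn, adaptively, which palette index each causal neighbour pattern
    predicts, and report how often that prediction is correct."""
    h, w = len(indices), len(indices[0])
    tables = defaultdict(lambda: defaultdict(int))
    hits = 0
    for y in range(h):
        for x in range(w):
            ctx = (indices[y][x - 1] if x else -1,                    # W
                   indices[y - 1][x] if y else -1,                    # N
                   indices[y - 1][x - 1] if x and y else -1,          # NW
                   indices[y - 1][x + 1] if y and x + 1 < w else -1)  # NE
            counts = tables[ctx]
            if counts:
                hits += max(counts, key=counts.get) == indices[y][x]
            counts[indices[y][x]] += 1      # update the rule for this pattern
    return hits / (h * w)
```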

Proceedings ArticleDOI
30 Mar 1998
TL;DR: A novel lossless medical image compression algorithm based on three-dimensional integer wavelet transforms and zerotree coding is presented, which efficiently encodes image volumes by exploiting the dependencies in all three dimensions, while enabling lossy and lossless compression from the same bitstream.
Abstract: A novel lossless medical image compression algorithm based on three-dimensional integer wavelet transforms and zerotree coding is presented. The EZW algorithm is extended to three dimensions and context-based adaptive arithmetic coding is used to improve its performance. The algorithm (3-D CB-EZW) efficiently encodes image volumes by exploiting the dependencies in all three dimensions, while enabling lossy and lossless compression from the same bitstream. Results on lossless compression of CT and MR images are presented, and compared to other lossless compression algorithms. The progressive performance of the 3-D CB-EZW algorithm is also compared to other lossy progressive coding algorithms. For representative images, the 3-D CB-EZW algorithm produced an average of 14% and 20% decrease in compressed file sizes for CT and MR images, respectively, compared to the best available 2-D lossless compression techniques.

Proceedings ArticleDOI
09 Jan 1998
TL;DR: Methods of near-lossless image compression based on the criterion of maximum allowable deviation of pixel values are described, and it is shown that the application of lossless coding based on reversible transforms in conjunction with pre-quantization is inferior to predictive techniques for near-lossless compression.
Abstract: Methods of near-lossless image compression based on the criterion of maximum allowable deviation of pixel values are described in this paper. Predictive and multiresolution techniques for performing near-lossless compression are investigated. A procedure for near-lossless compression using a modification of lossless predictive coding techniques to satisfy the specified tolerance is described. Simulation results with modified versions of two of the best lossless predictive coding techniques known, CALIC and JPEG-LS, are provided. It is shown that the application of lossless coding based on reversible transforms in conjunction with pre-quantization is inferior to predictive techniques for near-lossless compression. A partial embedding two-layer scheme is proposed in which an embedded multiresolution coder generates a lossy base layer, and a simple but effective context-based lossless coder codes the difference between the original image and the lossy reconstruction. Simulation results show that this lossy plus near-lossless technique yields compression ratios very close to those obtained with predictive techniques, while providing the feature of a partially embedded bit-stream.

Proceedings ArticleDOI
30 Mar 1998
TL;DR: TMW, a general purpose lossless greyscale image compression method, typically outperforms CALIC by between 2 and 10 percent on a selection of test images and, for near-lossless compression with larger allowed deviations, can significantly outperform LOCO.
Abstract: We present a general purpose lossless greyscale image compression method, TMW, that is based on the use of linear predictors and implicit segmentation. We then proceed to extend the presented methods to cover near lossless image compression. In order to achieve competitive compression, the compression process is split into an analysis step and a coding step. In the first step, a set of linear predictors and other parameters suitable for the image is calculated, which is included in the compressed file and subsequently used for the coding step. This adaption allows TMW to perform well over a very wide range of image types. Other significant features of TMW are the use of a one-parameter probability distribution, probability calculations based on unquantized prediction values, blending of multiple probability distributions instead of prediction values, and implicit image segmentation. For lossless image compression, the method has been compared to CALIC on a selection of test images, and typically outperforms it by between 2 and 10 percent. For near lossless image compression, the method has been compared to LOCO (Weinberger et al. 1996). Especially for larger allowed deviations from the original image the proposed method can significantly outperform LOCO. In both cases the improvement in compression is achieved at the cost of considerably higher computational complexity.

Journal ArticleDOI
TL;DR: The proposed FEREC algorithm is shown to be almost twice as fast as EREC in encoding the data, and its error resilience capability is also observed to be significantly better.
Abstract: There has been an outburst of research in image and video compression for transmission over noisy channels. Channel matched source quantizer design has gained prominence. Further, the presence of variable-length codes in compression standards like the JPEG and the MPEG has made the problem more interesting. Error-resilient entropy coding (EREC) has emerged as a new and effective method to combat catastrophic loss in the received signal due to burst and random errors. We propose a new channel-matched adaptive quantizer for JPEG image compression. A slow, frequency-nonselective Rayleigh fading channel model is assumed. The optimal quantizer that matches the human visibility threshold and the channel bit-error rate is derived. Further, a new fast error-resilient entropy code (FEREC) that exploits the statistics of the JPEG compressed data is proposed. The proposed FEREC algorithm is shown to be almost twice as fast as EREC in encoding the data, and hence the error resilience capability is also observed to be significantly better. On average, a 5% decrease in the number of significantly corrupted received image blocks is observed with FEREC. Up to a 2-dB improvement in the peak signal-to-noise ratio of the received image is also achieved.

Proceedings ArticleDOI
30 Mar 1998
TL;DR: This work investigates lossless coding of video using predictive coding and motion compensation, and uses bi-linear interpolation in order to achieve sub-pixel precision of the motion field.
Abstract: Summary form only given. We investigate lossless coding of video using predictive coding and motion compensation. The new coding methods combine state-of-the-art lossless techniques such as those of JPEG-LS (context-based prediction and bias cancellation, Golomb coding) with high-resolution motion field estimation, 3D predictors, prediction using one or multiple (k) previous images, predictor-dependent error modelling, and selection of the motion field by code length. We treat the problem of precision of the motion field as one of choosing among a number of predictors. This way, we can incorporate 3D predictors and intra-frame predictors as well. As proposed by Ribas-Corbera (see PhD thesis, University of Michigan, 1996), we use bi-linear interpolation in order to achieve sub-pixel precision of the motion field. Using more reference images is another way of achieving higher accuracy of the match. The motion information is coded with the same algorithm as is used for the data. For slow pan or slow zoom sequences, coding methods that use multiple previous images perform up to 20% better than motion compensation using a single previous image and up to 40% better than coding that does not utilize motion compensation.
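The bi-linear interpolation used for sub-pixel motion compensation can be sketched as below (bounds checking omitted); the function name and calling convention are illustrative, not the authors' code.

```python
import numpy as np

def predict_subpel(ref, y, x, dy, dx):
    """Bilinearly interpolate reference frame `ref` at the sub-pixel
    position (y + dy, x + dx) to form a motion-compensated prediction."""
    fy, fx = y + dy, x + dx
    y0, x0 = int(np.floor(fy)), int(np.floor(fx))
    ay, ax = fy - y0, fx - x0                      # fractional parts
    return ((1 - ay) * (1 - ax) * ref[y0, x0] +
            (1 - ay) * ax * ref[y0, x0 + 1] +
            ay * (1 - ax) * ref[y0 + 1, x0] +
            ay * ax * ref[y0 + 1, x0 + 1])
```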

Proceedings ArticleDOI
04 Oct 1998
TL;DR: It is demonstrated for the first time that reversible integer wavelets together with proper context modeling and entropy coding of wavelet coefficients can match the lossless compression performance of CALIC.
Abstract: The past few years have seen an increasing interest in using reversible integer wavelets in image compression. Reversible integer wavelet image coders facilitate decompression from low bit rates all the way up to lossless reconstruction. However, in the past, specific implementations of such techniques, like S+P, could not match the lossless compression performance of state-of-the-art predictive coding techniques like CALIC. We demonstrate for the first time that reversible integer wavelets together with proper context modeling can match the lossless compression performance of CALIC. This can be done without increase in the essential complexity over S+P. Our findings present a strong argument for using subband coding as a unified, elegant approach for both lossy and lossless image compression. Specifically, in this paper we outline how to obtain significantly higher coding efficiency over S+P by utilizing better filters and better context modeling and entropy coding of wavelet coefficients.
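The simplest example of a reversible integer wavelet step is the S-transform used in S+P, shown below; the better filters and context modeling that the paper advocates build on the same exact-invertibility idea.

```python
def s_transform(a, b):
    """One reversible integer (S-transform) step: rounded average and difference."""
    h = a - b
    l = b + (h >> 1)          # equals floor((a + b) / 2)
    return l, h

def s_inverse(l, h):
    b = l - (h >> 1)
    a = b + h
    return a, b

assert s_inverse(*s_transform(7, -3)) == (7, -3)   # exact reconstruction
```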

Journal ArticleDOI
TL;DR: This paper considers an alternative image representation scheme, based on Gaussian derivatives, to the standard discrete cosine transformation (DCT), within a Joint Photographic Experts Group (JPEG) framework, which might yield a compression/decompression technique twice as fast as the DCT and of (essentially) equal quality.
Abstract: The compression and decompression of continuous-tone images is important in document management and transmission systems. This paper considers an alternative image representation scheme, based on Gaussian derivatives, to the standard discrete cosine transformation (DCT), within a Joint Photographic Experts Group (JPEG) framework. Depending on the computer arithmetic hardware used, the approach developed might yield a compression/decompression technique twice as fast as the DCT and of (essentially) equal quality.

Proceedings ArticleDOI
12 May 1998
TL;DR: A rate-distortion optimized JPEG compliant progressive encoder is presented that produces a sequence of bit scans, ordered in terms of decreasing importance, and can achieve precise rate/distortion control.
Abstract: Among the different modes of operations allowed in the current JPEG standard, the sequential and progressive modes are the most widely used. While the sequential JPEG mode yields essentially the same level of compression performance for most encoder implementations, the performance of progressive JPEG depends highly upon the designed encoder structure. This is due to the flexibility the standard leaves open in designing progressive JPEG encoders. In this paper, a rate-distortion optimized JPEG compliant progressive encoder is presented that produces a sequence of bit scans, ordered in terms of decreasing importance. Our encoder outperforms a baseline sequential JPEG encoder in terms of compression, significantly at medium bit rates, and substantially at low and high bit rates. Moreover, unlike baseline JPEG encoders, ours can achieve precise rate/distortion control. Good rate-distortion performance at low bit rates and precise rate control, provided by our JPEG compliant progressive encoder, are two highly desired features currently sought for JPEG-2000.

Patent
25 Feb 1998
TL;DR: In this paper, the authors proposed a method to produce progressively higher resolution images of a JPEG compressed image without doing full decompression in a cost effective manner during the JPEG decompression process.
Abstract: This invention enables progressively higher resolution images of a JPEG compressed image to be produced in a cost effective manner during the JPEG decompression process. The operation count is very low when images of 1/64th, 1/16th, and 1/4th of full resolution are to be produced without doing a full JPEG decompression. The low resolution images are useful for high speed search, and the ability to produce them without doing full decompression is an important factor in making such search practical.
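The usual way to realize this (a sketch, not necessarily the patented procedure) is to keep only the top-left k x k DCT coefficients of each 8x8 block and invert them with a k x k IDCT, so k = 1, 2, 4 yields roughly 1/64, 1/16 and 1/4 of the pixels; the scaling below assumes orthonormal coefficients.

```python
import numpy as np
from scipy.fft import idctn

def lowres_block(coeffs8x8, k):
    """Decode a reduced-resolution k x k patch from the top-left k x k
    orthonormal DCT coefficients of one 8x8 block."""
    sub = coeffs8x8[:k, :k] * (k / 8.0)       # rescale for the smaller transform size
    return idctn(sub, norm="ortho") + 128.0   # undo the JPEG level shift
```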

Patent
28 Jul 1998
TL;DR: In this paper, common sharing of a quantization circuit and an inverse quantization circuit between MPEG and JPEG processing is proposed, so that duplicated processing is handled by common circuits when coding/decoding of an MPEG stream and a JPEG stream are realized simultaneously.
Abstract: PROBLEM TO BE SOLVED: To use common circuits as far as possible for processing that is duplicated when coding/decoding of an MPEG stream and a JPEG stream are realized simultaneously, and to use a coding/decoding means implemented in hardware and a coding/decoding means implemented in software in common. SOLUTION: A quantization circuit 5 and an inverse quantization circuit 6 are shared between the MPEG and JPEG processing. This is realized by providing two planes of memory storing quantization matrices: intra/non-intra coefficients are stored for MPEG and coefficients for the luminance and color-difference signals are stored for JPEG, so that each kind of processing is covered. In the JPEG processing, in addition to the variable-length coding 10/decoding 21 that correspond to a fixed coding table in hardware, variable-length coding 13/decoding 27 using a variable coding table are performed in software, so that the coding table can be selected freely depending on the image and the compression efficiency is improved.

Book
01 Jan 1998
TL;DR: Covers image transforms, the discrete cosine transform, JPEG, motion estimation, MPEG-1, MPEG-2, audio compression, moving the MPEG-2 data, MPEG-2 applications, and future directions.
Abstract: Image transforms, the discrete cosine transform, JPEG, motion estimation, MPEG-1, MPEG-2, audio compression, moving the MPEG-2 data, MPEG-2 applications, and future directions.

Proceedings ArticleDOI
09 Jan 1998
TL;DR: A coding system is studied that supports both the lossless coding of such graphics data and regular lossy video compression, and a simple block predictive coding technique featuring individual pixel access is introduced, enabling a gradual shift from lossless coding of graphics to lossy coding of video.
Abstract: The diversity in TV images has increased with the growing use of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to the lossy coding of video. An overall bit rate control completes the system. Computer simulations show a very high quality with a compression factor between 2 and 3.

Patent
30 Jun 1998
TL;DR: In this paper, orthogonal transforms are performed on the original image signals without interpolation, which requires less calculation than the JPEG standard, and the degradation of image quality can be suppressed.
Abstract: An image compression and restoration technique may be used as a substitute for JPEG. In this technique, orthogonal transforms are performed on the original image signals without interpolation, which means it requires less calculation than the JPEG standard. Furthermore, the degradation of image quality can be suppressed. After that, quantization, re-ordering and Huffman coding follow. The Huffman coding is also unique in the sense that, unlike JPEG, it uses data from the previous blocks, which results in a simpler, less CPU-intensive codec technique. Conversely, the compressed data is restored using a procedure that is the reverse of the aforementioned compression technique.

Proceedings ArticleDOI
06 Nov 1998
TL;DR: Simulations show that the proposed method generates better quantization matrices than the classical method of scaling the JPEG default quantization matrix, at a cost lower than that of the coding, decoding and error-measuring procedure.
Abstract: In this paper we propose a novel method for computing JPEG quantization matrices based on a desired mean square error, avoiding the classical trial-and-error procedure. First, we use a relationship between a Laplacian source and its quantization error when uniform quantization is used in order to find a model for the uniform quantization error. Then we apply this model to the coefficients obtained in the JPEG standard once the image to be compressed has been transformed by the discrete cosine transform. This allows us to compress an image using the JPEG standard under a global MSE constraint and a set of local constraints determined by the JPEG standard and visual criteria. Simulations show that our method generates better quantization matrices than the classical method of scaling the JPEG default quantization matrix, with a cost lower than that of the coding, decoding and error-measuring procedure.
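As a crude illustration of deriving step sizes from a target MSE (the paper's Laplacian model is more accurate than this), the high-rate rule that uniform quantization with step Delta contributes roughly Delta^2/12 of error can be inverted; the perceptual weighting matrix is an assumed input, normalized so the mean of its squares is about 1.

```python
import numpy as np

def steps_from_target_mse(target_mse, weights):
    """Pick an 8x8 table of quantization steps whose high-rate error
    (step^2 / 12 per coefficient) roughly meets a global MSE target,
    redistributed across frequencies by a visual weighting matrix."""
    base = np.sqrt(12.0 * target_mse)
    q = np.clip(np.round(base * weights), 1, 255)   # JPEG steps are integers in 1..255
    return q.astype(int)
```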

01 Jan 1998
TL;DR: This research proposes new and more reliable approaches for lossless set compression, as well as their extensions to more general lossy set compression.
Abstract: Set compression allows a set of similar (correlated) images to be compressed more efficiently than compressing the same images independently. Currently, set compression is performed with different inter-image predictive models that forecast the common image properties from a few reference images. With sufficient inter-image correlation, one can predict any database image from a few templates, hence avoiding inter-image redundancy and achieving much improved compression ratios. This research focused on two major aspects of this technique: the practical limits of predictive set compression, and theoretical estimates of the compression efficiency. This includes a review of previous work in the set compression area, a discussion of the more important statistical and informational aspects involved in predictive set compression, practical observations and measurements for medical (CT and MR) data, and a theoretical analysis of lossless similar-image compression. The research proposes new and more reliable approaches for lossless set compression, as well as their extensions to more general lossy set compression.

Proceedings ArticleDOI
30 Mar 1998
TL;DR: This work presents a new technique for exploiting inter-component redundancies based on a modified Karhunen-Loeve transform (KLT) scheme in combination with a novel quantization scheme that guarantees losslessness.
Abstract: Summary form only given. In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. We present a new technique for exploiting inter-component redundancies. The technique is based on a modified Karhunen-Loeve transform (KLT) scheme in combination with a novel quantization scheme that guarantees losslessness. The KLT decorrelates the color components. It is recomputed on a block by block basis and is therefore spatially adaptive. Spatial redundancies are removed using predictive techniques (lossless JPEG predictor no. 7 and the CALIC-predictor). The data which remains after the (spatial and color) decorrelation should be entropy-coded, but in the current implementation of our scheme only the entropy of the remaining data is computed. Note that in each block some block-dependent information must be sent, such as entropy-coder initialization information and KLT-descriptors (i.e., the rotation angles of the orthogonal KLT-matrix).
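The spectral-decorrelation step can be sketched as a per-block KLT of the colour samples, as below; the novel quantization scheme that makes the transform lossless, and the coding of the KLT descriptors, are left out.

```python
import numpy as np

def blockwise_klt(block_rgb):
    """Decorrelate the colour components of one image block with the KLT
    (eigenvectors of the block's colour covariance matrix)."""
    samples = block_rgb.reshape(-1, 3).astype(float)
    centered = samples - samples.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    _, vectors = np.linalg.eigh(cov)          # orthogonal KLT matrix (block descriptor)
    return centered @ vectors, vectors        # decorrelated components + descriptor
```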

Proceedings ArticleDOI
TL;DR: The efficiencies of several predictive techniques (MAP, CALIC, 3D predictors) are compared, the advantages of 2D versus 3D error feedback and context modeling are examined, and the use of wavelet transforms for lossless multispectral compression is discussed.
Abstract: In this paper, we address the problem of lossless and nearly-lossless multispectral compression of remote-sensing data acquired using SPOT satellites. Lossless compression algorithms classically have two stages: transformation of the available data, and coding. The purpose of the first stage is to express the data as uncorrelated data in an optimal way. In the second stage, coding is performed by means of an arithmetic coder. In this paper, we discuss two well-known approaches for spatial as well as multispectral compression of SPOT images: (1) the efficiencies of several predictive techniques (MAP, CALIC, 3D predictors) are compared, and the advantages of 2D versus 3D error feedback and context modeling are examined; (2) the use of wavelet transforms for lossless multispectral compression is discussed. Then, applications of the above mentioned methods for quincunx sampling are evaluated. Lastly, some results on how predictive and wavelet techniques behave when nearly-lossless compression is needed are given.

Proceedings ArticleDOI
24 Nov 1998
TL;DR: It is confirmed by numerical simulations that the performance of the lossless coding scheme with the lossless color coordinate transform is better than that without the lossless color coordinate transform.
Abstract: This paper proposes a lossless color coordinate transform for lossless color image coding. Lossless color coordinate transforms are used to remove the correlation and to bias the signal energy ratio between the color signal components. In order to form the lossless coding, a ladder network is used. It is confirmed by the numerical simulations that the performance of the lossless coding scheme with the lossless color coordinate transform is better than that without the lossless color coordinate transform.
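A well-known example of a ladder (lifting) based lossless colour transform is the reversible transform below (the RCT later adopted in JPEG 2000); it is shown only to illustrate the idea of exact integer invertibility, not as the authors' transform.

```python
def forward_rct(r, g, b):
    """Reversible integer colour transform built from ladder steps."""
    y = (r + 2 * g + b) >> 2   # rounded luminance
    u = b - g                  # colour-difference components
    v = r - g
    return y, u, v

def inverse_rct(y, u, v):
    g = y - ((u + v) >> 2)     # exact inverse thanks to the shared floor rounding
    b = u + g
    r = v + g
    return r, g, b

assert inverse_rct(*forward_rct(200, 30, 7)) == (200, 30, 7)
```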

Proceedings ArticleDOI
16 May 1998
TL;DR: This study compares motion wavelet compression to motion JPEG compression using the standard correlation coefficient and the normalized mean squared error, and finds the motion wavelet technique slightly better.
Abstract: Future developments in teleradiology hinge on the delivery of real or near real-time images, sometimes across less than optimal bandwidth communication channels. Ultrasound, to achieve its greatest diagnostic value, needs to transmit not just still images but video as well. A significant amount of compression, however, may be required to achieve near real-time video across limited bandwidths. This will inevitably result in degraded video quality. A variety of compression algorithms are in widespread use including H.261, H.323, JPEG (Joint Photographic Experts Group), MPEG (Motion Picture Expert Group) and most recently wavelets. We have developed a suite of tools to evaluate each of these methods, and to identify potential areas where wavelet compression may have an advantage. In this particular study, we compare motion wavelet compression to motion JPEG compression using the standard correlation coefficient and the normalized mean squared error, and found the motion wavelet technique slightly better.
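For reference, the two measures named above can be computed per frame as follows; normalizing the squared error by the original frame's energy is our assumption, since definitions of the normalized MSE vary.

```python
import numpy as np

def frame_metrics(original, compressed):
    """Pearson correlation coefficient and normalized mean squared error
    between an original frame and its compressed/decompressed version."""
    o = original.astype(float).ravel()
    c = compressed.astype(float).ravel()
    corr = np.corrcoef(o, c)[0, 1]
    nmse = np.sum((o - c) ** 2) / np.sum(o ** 2)
    return corr, nmse
```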