
Showing papers on "Lossless JPEG published in 1993"


Proceedings ArticleDOI
30 Mar 1993
TL;DR: A new method gives compression comparable with the JPEG lossless mode, with about five times the speed, based on a novel use of two neighboring pixels for both prediction and error modeling.
Abstract: A new method gives compression comparable with the JPEG lossless mode, with about five times the speed. FELICS is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding, the authors use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter they present and analyze a provably good method for estimating the single coding parameter.
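
The Rice-coding core is compact enough to sketch. Below is a toy Python encoder/decoder for Rice codes, together with a pick-the-cheapest-parameter rule in the spirit of the estimation method the paper analyzes; this is our illustration, not the FELICS implementation.

```python
def rice_encode(n, k):
    """Rice code: unary quotient, '0' terminator, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def rice_decode(bits, k):
    """Decode one value; returns (value, bits consumed)."""
    q = bits.index("0")                      # length of the unary part
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k

def best_k(errors, kmax=8):
    """Choose k by total code length: the cumulative-cost idea."""
    return min(range(kmax + 1),
               key=lambda k: sum((n >> k) + 1 + k for n in errors))

errors = [3, 0, 7, 2, 12, 1]                 # prediction errors mapped to non-negatives
k = best_k(errors)
stream = "".join(rice_encode(n, k) for n in errors)
print(k, stream)
```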

259 citations


Journal ArticleDOI
TL;DR: The two-dimensional method of Langdon and Rissanen for compression of black and white images is extended to handle the exact lossless compression of grey-scale images, using the JPEG lossless mode predictors.
Abstract: The two-dimensional method of Langdon and Rissanen for compression of black and white images is extended to handle the exact lossless compression of grey-scale images. Neighbouring pixel values are used to define contexts, and probabilities associated with these contexts are used to compress the image. The problem of restricting the number of contexts, both to limit the storage requirements and to be able to obtain sufficient data to generate meaningful probabilities, is addressed. Investigations on a variety of images are carried out using the JPEG lossless mode predictors.
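
To make the idea concrete, here is a toy sketch (our own illustration, not the authors' code): predict with one of the JPEG lossless-mode predictors, then bucket each pixel into a context by quantizing a neighbour gradient so the number of contexts stays manageable.

```python
import numpy as np

# The JPEG lossless-mode predictors 1..7 (a = left, b = above, c = above-left).
JPEG_PREDICTORS = {
    1: lambda a, b, c: a,
    2: lambda a, b, c: b,
    3: lambda a, b, c: c,
    4: lambda a, b, c: a + b - c,
    5: lambda a, b, c: a + (b - c) // 2,
    6: lambda a, b, c: b + (a - c) // 2,
    7: lambda a, b, c: (a + b) // 2,
}

def contexts_and_errors(img, pred=7, q=16):
    """Context id from a quantized neighbour gradient, plus prediction error."""
    p = img.astype(int)
    x = p[1:, 1:]                                 # pixels with all three neighbours
    a, b, c = p[1:, :-1], p[:-1, 1:], p[:-1, :-1]
    err = x - JPEG_PREDICTORS[pred](a, b, c)
    ctx = (a - b) // q                            # coarse bucketing caps the context count
    return ctx, err

img = np.random.randint(0, 256, (64, 64))
ctx, err = contexts_and_errors(img)
print(len(np.unique(ctx)), "contexts; error std", err.std())
```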

57 citations


Journal ArticleDOI
TL;DR: New methods for lossless predictive coding of medical images using two-dimensional multiplicative autoregressive models are presented; experimental results indicate that the proposed schemes achieve higher compression than the lossless image coding techniques considered.
Abstract: Presents new methods for lossless predictive coding of medical images using two-dimensional multiplicative autoregressive models. Both single-resolution and multi-resolution schemes are presented. The performances of the proposed schemes are compared with those of four existing techniques. The experimental results clearly indicate that the proposed schemes achieve higher compression than the lossless image coding techniques considered.
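
For intuition, a multiplicative AR model predicts a pixel as a product of powers of its neighbours, i.e. it is linear in the log domain. A toy sketch under that reading (our own coefficients and notation, not the authors' model):

```python
import numpy as np

def mult_ar_residue(img, theta=(0.5, 0.5), eps=1.0):
    """Toy 2-D multiplicative AR prediction:
    log x_hat = theta1*log(left) + theta2*log(above),
    i.e. x_hat = left**theta1 * above**theta2."""
    x = img.astype(float) + eps                  # shift so logarithms are defined
    logx = np.log(x)
    pred = np.exp(theta[0] * logx[1:, :-1] + theta[1] * logx[:-1, 1:]) - eps
    # For lossless coding the residue must be an integer the decoder can recompute exactly.
    return img[1:, 1:].astype(int) - np.rint(pred).astype(int)

img = np.random.randint(0, 256, (64, 64))
print(np.abs(mult_ar_residue(img)).mean())
```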

56 citations


Proceedings ArticleDOI
27 Apr 1993
TL;DR: The efficiency of the proposed interpolative scheme has been enhanced and the blocking artifacts have been reduced significantly, thus yielding a much more pleasant visual result.
Abstract: The primary goal was to reduce the blocking artifacts that would be produced by using the standard JPEG when the bit rate is very low. In order to test the proposed scheme and compare it with the standard JPEG, extensive computer simulations have been performed. As compared with the standard JPEG, the efficiency of the proposed interpolative scheme has been enhanced and the blocking artifacts have been reduced significantly, thus yielding a much more pleasant visual result.

54 citations


Journal ArticleDOI
01 Apr 1993
TL;DR: A two-chip set that performs the baseline JPEG image compression and decompression algorithm has been designed, fabricated, and shown to be fully functional.
Abstract: A two-chip set that performs the baseline JPEG image compression and decompression algorithm has been designed, fabricated, and is fully functional. The major functions of the devices include DCT and IDCT, forward and inverse quantization, and Huffman coding and decoding. The devices operate at pixel rates beyond 30 MHz at 70 degrees C and 4.75 V. Each die is less than 10 mm on a side and was implemented in a 1.0 µm CMOS cell-based technology to achieve a 9 man-month design time.

25 citations


Journal ArticleDOI
TL;DR: The use of data compression to reduce bandwidth and storage requirements is discussed; a simple method for lossless compression, run-length encoding, is described, as are the more sophisticated Huffman codes, arithmetic coding, and the trie-based codes.
Abstract: The use of data compression to reduce bandwidth and storage requirements is discussed. The merits of lossless versus lossy compression techniques, the latter offering far greater compression ratios, are considered. The limits of lossless compression are discussed, and a simple method for lossless compression, run-length encoding, is described, as are the more sophisticated Huffman codes, arithmetic coding, and the trie-based codes invented by A. Lempel and J. Ziv (1977, 1978). WAN applications, as well as throughput and latency, are briefly considered.
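
Run-length encoding, the simplest method the survey covers, fits in a few lines; a minimal sketch:

```python
def rle_encode(data: bytes):
    """Run-length encode as (count, value) pairs, with runs capped at 255."""
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

def rle_decode(pairs):
    return bytes(b for count, v in pairs for b in [v] * count)

msg = b"aaaabbbcccccccd"
assert rle_decode(rle_encode(msg)) == msg
```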

15 citations


Proceedings ArticleDOI
01 Nov 1993
TL;DR: A three-dimensional terrain-adaptive transform-based bandwidth compression technique for multispectral imagery that has the unique capability to adaptively vary the characteristics of the spectral decorrelation transformation based upon the local terrain variation.
Abstract: We present a three-dimensional terrain-adaptive transform-based bandwidth compression technique for multispectral imagery. The transformation involves a one-dimensional Karhunen-Loeve transform (KLT) followed by a two-dimensional discrete cosine transform. The algorithm exploits the inherent spectral and spatial correlations in the data. The images are spectrally decorrelated via the KLT to produce the eigenimages. The resulting spectrally decorrelated eigenimages are then compressed using the JPEG algorithm. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about 5:1 compression ratio (CR) to visually lossy beginning at around 40:1 CR. A significant practical advantage of this approach is that it is leveraged on the standard and highly developed JPEG compression technology. Because of the significant compaction of the data resulting from the initial KLT process, an 8-bit JPEG can be used for coding the eigenimages associated with 8-, 10-, or 12-bit multispectral data. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral decorrelation transformation based upon the local terrain variation.
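
The KLT stage amounts to an eigendecomposition of the interband covariance matrix. A minimal numpy sketch of the spectral decorrelation, without the terrain-adaptive part (our illustration):

```python
import numpy as np

def klt_decorrelate(cube):
    """cube: (bands, H, W) multispectral stack -> eigenimages, same shape."""
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]             # bands x bands interband covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]      # strongest component first
    eigen_images = (eigvecs[:, order].T @ X).reshape(bands, h, w)
    return eigen_images                    # each band would then be JPEG-coded

cube = np.random.rand(6, 128, 128)
print(klt_decorrelate(cube).shape)
```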

15 citations


Proceedings ArticleDOI
16 Aug 1993
TL;DR: It is necessary to preprocess the images in order to reduce the correlation among neighboring pixels, thereby improving the compression ratio; the performance of some lossless compression techniques in combination with preprocessing methods is examined.
Abstract: Data compression deals with representing information in a succinct way. Because the major lossless (error-free) compression methods, such as Huffman, arithmetic, and Lempel-Ziv coding, do not achieve great compression ratios on their own, it is necessary to preprocess the images to reduce the correlation among neighboring pixels, thereby improving the compression ratio. These preprocessing methods can achieve a reduction of image entropy in the spatial domain or in the spatial frequency domain. The performance of some lossless compression techniques in combination with preprocessing methods is examined.
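
The payoff of such preprocessing is easy to quantify: compare the first-order entropy of raw pixels with that of, say, horizontal differences. A small sketch (synthetic data, our illustration):

```python
import numpy as np

def entropy(a):
    """First-order entropy in bits per symbol."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth synthetic image stands in for correlated real imagery.
x = np.arange(256)
img = (np.add.outer(x, x) // 4 + np.random.randint(0, 3, (256, 256))) % 256
diff = np.diff(img.astype(int), axis=1)          # spatial-domain decorrelation step
print(entropy(img), entropy(diff))               # lower entropy favors the lossless coder
```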

13 citations


Proceedings ArticleDOI
T. Tada, Kohei Cho, Haruhisa Shimoda, Toshibumi Sakata, Shinichi Sobue
18 Aug 1993
TL;DR: It was determined that all the test satellite images could be compressed to at least 1/10 of the original data volume while preserving high visual image quality.
Abstract: Image compression is a key technology for realizing on-line satellite image transmission economically and quickly. Among various image compression algorithms, the JPEG algorithm is the international standard for still color image compression. In this study, various kinds of satellite images were compressed with the JPEG algorithm, and the relation between compression ratio and image quality was evaluated. Both subjective and objective image quality evaluations were performed. It was determined that all the test satellite images could be compressed to at least 1/10 of the original data volume while preserving high visual image quality. The degradation of the spatial distribution quality of the compressed images was evaluated using the power spectra of the original and compressed images.

11 citations



Proceedings ArticleDOI
30 Jun 1993
TL;DR: This paper reports on an adaptation of a standardized technique for lossy image compression (the JPEG approach), which provides high compression ratios for radiographic images with minimal apparent loss of diagnostic quality.
Abstract: The transmission of digitized medical images over the existing telecommunications infrastructure presents a formidable challenge. To achieve delivery times on the order of seconds, as opposed to minutes or hours, for large-format high-resolution images, data compression on the order of 25:1 is necessary. This degree of data compression cannot be reached with lossless techniques. This paper reports on an adaptation of a standardized technique for lossy image compression (the JPEG approach), which provides high compression ratios for radiographic images with minimal apparent loss of diagnostic quality.

Journal ArticleDOI
TL;DR: Several parallel pipelined digital signal processor (DSP) architectures that implement the fast cosine transform (FCT)-based Joint Photographic Experts Group (JPEG) still picture image compression algorithm with arithmetic coding for entropy coding are described.
Abstract: Several parallel pipelined digital signal processor (DSP) architectures that implement the fast cosine transform (FCT)-based Joint Photographic Experts Group (JPEG) still picture image compression algorithm with arithmetic coding for entropy coding are described. The extended JPEG image compression algorithm's average execution time, when compressing and decompressing a 256×256 pixel monochrome still image, varied from 0.61 s to 0.12 s in architectures that contained from one to six processors. A common bus DSP multiprocessor system capable of meeting the critical timing requirements of digital image compression/decompression applications is also presented. In an effort to maximize DSP utilization, a simple static load distribution method is provided for assigning the load to the individual DSPs. These parallel pipelined DSP architectures can be used for a wide range of applications, including the MPEG implementation for video coding.

Proceedings ArticleDOI
31 Oct 1993
TL;DR: Feature of the progressive image build-up of the JPEG progressive coding appears useful in medical image archiving and communication where fast search of image from huge image data base and urgent diagnosis from remote site are often in need.
Abstract: The international standard for digital compression and coding of continuous-tone still images, known as the JPEG (Joint Photographic Experts Group) standard, is implemented and tested for medical image archiving and communication. For a series of head sections of magnetic resonance images, a compression ratio of about 10 is obtained without noticeable image degradation. Compared to an existing full-frame bit-allocation technique, the JPEG standard achieves higher compression with a higher signal-to-noise ratio. The images reconstructed by the JPEG standard also show much less Gibbs artifact. The progressive image build-up of JPEG progressive coding appears useful in medical image archiving and communication, where fast searches of images from huge image databases and urgent diagnoses from remote sites are often needed.

Journal ArticleDOI
TL;DR: A compression scheme that aims at improving the fidelity of reconstructed images through the parallel application of both lossless and lossy compression techniques and gives better fidelity than the raw DCT algorithm under equal compression ratios.
Abstract: We describe a compression scheme that aims at improving the fidelity of reconstructed images through the parallel application of both lossless and lossy compression techniques. The purpose of the scheme is to obtain compression ratios higher than those obtained by lossless compression schemes and at the same time produce reconstructed images with better fidelity than those normally obtained with lossy techniques. The straightforward (nonoptimized) application of this scheme consistently improves the fidelity of reconstructed images compared to a raw discrete cosine transform (DCT) algorithm. Using standard lossless compression utilities, the integrated scheme gives better fidelity (up to 62% improvement in mean square error) than the raw DCT algorithm under equal compression ratios. Further research is needed to optimize the integrated method, adapt it to the Joint Photographic Experts Group and other equivalent schemes, and evaluate the resulting performance. A discussion of ways that can further improve the efficiency of the integrated approach is given.

Proceedings ArticleDOI
TL;DR: It is concluded that lossless compression of seismic data can save significant amounts of storage in seismic databases and archives, and significant amounts of bandwidth in real-time communication of instrumentation data.
Abstract: Lossless compression is never as profitable, in terms of compression ratio, as lossy compression of the same data. However, lossless techniques that produce significant compression of geophysical waveform data are possible. A two-stage technique for lossless compression of geophysical waveform data is described. The first and most important stage is a form of linear prediction that allows exact recovery of the original waveform data from the predictor residue sequence. The second stage is an encoder of the first-stage residue sequence which approximately maximizes the entropy of the latter, while allowing exact recovery during decompression. We review the overall two-stage technique, which has been described previously, and concentrate in this paper on some recent performance examples and results using the technique. To obtain the latter, a seismic waveform database is introduced and made available. We conclude that lossless compression of seismic data can save significant amounts of storage in seismic databases and archives, and significant amounts of bandwidth in real-time communication of instrumentation data.
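
The first stage is straightforward to sketch: an integer predictor whose residue permits exact recovery. A minimal first-order example in the same spirit (the paper's predictors are more elaborate):

```python
import numpy as np

def predict_encode(x):
    """Residue of a first-order integer predictor; exactly invertible."""
    x = np.asarray(x, dtype=np.int64)
    residue = np.empty_like(x)
    residue[0] = x[0]
    residue[1:] = x[1:] - x[:-1]          # predict each sample by its predecessor
    return residue

def predict_decode(residue):
    return np.cumsum(residue)             # exact recovery of the waveform

wave = np.round(1000 * np.sin(np.linspace(0, 20, 5000))).astype(np.int64)
assert np.array_equal(predict_decode(predict_encode(wave)), wave)
# Stage two would entropy-code the residue, whose distribution is far more peaked.
```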

Proceedings ArticleDOI
01 Nov 1993
TL;DR: The local cosine transform (LCT) is presented as a new method for the reduction and smoothing of the blocking effect that appears at low bit rates in image coding algorithms based on the discrete cosine transform (DCT).
Abstract: This paper presents the local cosine transform (LCT) as a new method for the reduction and smoothing of the blocking effect that appears at low bit rates in image coding algorithms based on the discrete cosine transform (DCT). In particular, the blocking effect appears in the JPEG baseline sequential algorithm.


Proceedings Article
01 Jan 1993
TL;DR: The objective of this research is to determine whether the color space selected significantly affects the image compression; the results indicate that the device space, RGB, is the worst color space in which to compress images.
Abstract: The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be utilized when implementing the JPEG algorithm. Currently, the JPEG algorithm is set up for use with any three-component color space. The objective of this research is to determine whether or not the color space selected will significantly improve the image compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space in which to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces for compressing images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.
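
The nonlinear mapping that distinguishes CIELAB is the standard XYZ-to-Lab transform; a compact sketch, assuming linear RGB with sRGB/D65 primaries (a modern convention the 1993 study would not have used verbatim):

```python
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],   # linear RGB -> XYZ (sRGB primaries, D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = M @ np.ones(3)                    # XYZ of the reference white

def rgb_to_lab(rgb):
    """rgb: linear values in [0, 1]. Returns CIELAB (L*, a*, b*)."""
    t = (M @ rgb) / WHITE
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    return np.array([116*f[1] - 16, 500*(f[0] - f[1]), 200*(f[1] - f[2])])

print(rgb_to_lab(np.array([0.5, 0.5, 0.5])))   # neutral gray: a* = b* = 0
```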

Book ChapterDOI
13 Sep 1993
TL;DR: A new computational scheme for JPEG baseline system implementation that uses only register shifts and additions is presented.
Abstract: The discrete cosine transform (DCT) is widely applied in various fields, including image data compression, and was chosen as the basis of the international JPEG (Joint Photographic Experts Group) still image compression standard. All known DCT implementations are fairly complicated because they make use of multiplication operations. This paper presents a new computational scheme for JPEG baseline system implementation in which only register shifts and additions are used.
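
The underlying trick is classical: replace each constant multiplication with a short sum of binary shifts. A toy example for the DCT constant cos(π/4) ≈ 0.7071 (our illustration, not the paper's actual scheme):

```python
def mul_cos_pi4(x: int) -> int:
    # x * 0.70710678... ~= x * (2**-1 + 2**-3 + 2**-4 + 2**-6 + 2**-8)
    #                    = x * 0.70703125, an approximation error of about 0.01%
    return (x >> 1) + (x >> 3) + (x >> 4) + (x >> 6) + (x >> 8)

x = 1000
print(mul_cos_pi4(x), round(x * 0.7071))   # 705 vs 707; >> truncation loses a little
```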

Proceedings ArticleDOI
06 Sep 1993
TL;DR: The approach is to segment the image into regions with different spatial characteristics and encode them independently; the authors are able to achieve compression ratios of about 4.8 to 6.2 while losslessly preserving the contents of important regions.
Abstract: This paper describes studies on methods of lossless compression for medical ultrasonic scanned image (echocardiography) sequences. The aim is to find one method, or a combination of methods, that achieves the highest overall compression performance. The approach is to segment the image into regions with different spatial characteristics and encode them independently. In so doing the authors are able to achieve compression ratios of about 4.8 to 6.2 while losslessly preserving the contents of important regions.

Proceedings ArticleDOI
16 Jun 1993
TL;DR: The Joint Photographic Experts Group (JPEG) compression standard and a type of vector quantization called residual vector quantization (RVQ) are evaluated for the real-time compression of precision approach radar (PAR) video data.
Abstract: The Joint Photographic Experts Group (JPEG) compression standard and a type of vector quantization called residual vector quantization (RVQ) are evaluated for the real-time compression of precision approach radar (PAR) video data. Experimental results obtained from simulated PAR imagery are presented which allow a performance and complexity comparison of the two compression methods. For low to moderate compression ratios, JPEG is the preferable solution because of its good performance and relatively low complexity. For higher compression ratios, JPEG gives unacceptable image quality and RVQ is the preferred solution.
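
RVQ itself is simple to state: quantize the input vector with one codebook, then quantize the resulting error with the next. A toy two-stage sketch with untrained random codebooks (real systems train them, e.g. with a clustering algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
stage1 = rng.normal(0, 1.0, (16, 4))      # toy codebooks; in practice these are trained
stage2 = rng.normal(0, 0.3, (16, 4))      # the residual stage uses a finer spread

def nearest(codebook, v):
    return int(np.argmin(((codebook - v) ** 2).sum(axis=1)))

def rvq_encode(v):
    i = nearest(stage1, v)
    j = nearest(stage2, v - stage1[i])     # quantize the first-stage residual
    return i, j

def rvq_decode(i, j):
    return stage1[i] + stage2[j]

v = rng.normal(0, 1.0, 4)
i, j = rvq_encode(v)
print(np.abs(v - rvq_decode(i, j)).mean())
```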

Proceedings ArticleDOI
29 Oct 1993
TL;DR: The potential applications of the various JPEG coding modes in a medical environment are evaluated and it is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.
Abstract: JPEG is a very versatile image coding and compression standard for single images. Medical images make a higher demand on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons, the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12-bit precision allowed for the DCT modes. The performance of the spatial predictors is investigated. From the user's point of view, the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective waiting time and hence the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding leads to a degradation of the image quality. The amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality. Therefore it will be an embedded coding format in standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.
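
The caution about repeated lossy coding is easy to reproduce; a small generation-loss demo using Pillow (our own experiment, not the paper's protocol):

```python
import io
import numpy as np
from PIL import Image

x = np.linspace(0, 255, 256)
img = Image.fromarray((np.add.outer(x, x) / 2).astype(np.uint8))
original = np.asarray(img, dtype=float)

for generation in range(1, 11):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)    # lossy encode
    buf.seek(0)
    img = Image.open(buf)                       # decode for the next round trip
    rmse = np.sqrt(((np.asarray(img, dtype=float) - original) ** 2).mean())
    print(f"generation {generation}: RMSE {rmse:.3f}")
# The error typically accumulates over the first generations before leveling off.
```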

Journal Article
TL;DR: The paper describes a real-time lossless data compression algorithm for both individual-band and multispectral remote sensing images that uses prediction to reduce the redundancy and classical Huffman coding to reduce the average code length.
Abstract: The paper describes a real-time lossless data compression algorithm for both individual-band and multispectral remote sensing images. The approach uses prediction to reduce the redundancy and classical Huffman coding to reduce the average code length. Several kinds of prediction were studied, and 2-D prediction was selected. Some simplifications were made in the Huffman coding for real-time use. A comparison between the coding methods is discussed. The scheme was applied, by software emulation, to 8-bit/pixel original images from LANDSAT TM, SPOT HRV, AIR SAR, and an imaging spectrometer. An average compression ratio of 2 was achieved, with a highest value of 3.1 and a lowest value of 1.2 for the radar images.
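
A minimal sketch of the described pipeline, with a planar 2-D predictor and a classical Huffman code built on heapq (our illustration; the paper's simplified real-time tables differ):

```python
import heapq
from collections import Counter
import numpy as np

def predict_2d(img):
    """Planar predictor a + b - c; residuals for all but the first row/column."""
    p = img.astype(int)
    res = p[1:, 1:] - (p[1:, :-1] + p[:-1, 1:] - p[:-1, :-1])
    return res.ravel()

def huffman_lengths(symbols):
    """Code length per symbol from a classical Huffman tree."""
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}  # deepen both subtrees
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

img = (np.add.outer(np.arange(64), np.arange(64)) // 3).astype(np.uint8)
res = predict_2d(img).tolist()
lengths = huffman_lengths(res)
bits = sum(lengths[s] for s in res)
print(f"{bits / len(res):.2f} bits/pixel after prediction + Huffman")
```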

Proceedings ArticleDOI
17 Jan 1993
TL;DR: An adaptive transform coding system that consists of five specialized functional blocks: An octave-based subband decomposition signal transformer [1,21, a bank of adaptive quantizers assisted by a bit allocator, and a lossless compressor coupled with a buffer.
Abstract: Real-time algorithms for the compression of high-fidelity audio are presented. The goal of these algorithms is to provide a compact, high fidelity, digital representation for an input stream of audio samples. We are developing an adaptive transform coding system that consists of five specialized functional blocks: An octave-based subband decomposition signal transformer [1,21, a bank of adaptive quantizers assisted by a bit allocator, and a lossless compressor coupled with a buffer.

Proceedings ArticleDOI
18 Jun 1993
TL;DR: Improvements of JPEG coding by pre- and post-processing are investigated for mixed images, and it is confirmed that the processed image is better in quality than an unprocessed image.
Abstract: Coding for full-color still images is standardized internationally by JPEG (Joint Photographic Experts Group), and the coding method is used in a wide range of applications. In many cases, continuous-tone images are accompanied by characters, so improvements of JPEG coding by pre- and post-processing are investigated for such mixed images. In this process, we use sharpening before JPEG coding as pre-processing and smoothing after JPEG coding as post-processing. In both pre- and post-processing, the degree of emphasis is controlled by a weighting factor, and the quality is compared between processed and unprocessed images. As a result, it is confirmed that the processed image is better in quality than an unprocessed image.
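
A sketch of the idea with 3×3 kernels (our own minimal version, using scipy for the convolutions); w plays the role of the weighting factor that controls the degree of emphasis:

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)
BOX = np.full((3, 3), 1 / 9.0)

def sharpen(img, w=0.5):
    """Pre-processing: emphasize edges that JPEG coding tends to soften."""
    return np.clip(img + w * convolve(img.astype(float), LAPLACIAN), 0, 255)

def smooth(img, w=0.5):
    """Post-processing: blend toward a box-filtered image to suppress artifacts."""
    return (1 - w) * img + w * convolve(img.astype(float), BOX)

img = np.random.rand(64, 64) * 255
out = smooth(sharpen(img))        # the JPEG encode/decode round trip would sit in between
```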


Book ChapterDOI
01 Jan 1993
TL;DR: This work presents a new model of error-resilient communication in which, even though errors may not be detected, there are strong guarantees that their effects will not propagate.
Abstract: With dynamic communication a sender and receiver work in “lock-step” cooperation to maintain identical copies of a dictionary D (which is constantly changing). A key application of dynamic communication is adaptive data compression. A potential drawback of dynamic communication is error propagation (which causes the sender and receiver dictionaries to diverge and possibly corrupts all data to follow). Protocols that require the receiver to request re-transmission from the sender when an error is detected can be impractical for many applications where such two-way communication is not possible or is self-defeating (e.g., with data compression, re-transmission is tantamount to losing the data that could have been transmitted in the meantime). We present a new model of error-resilient communication where even though errors may not be detected, there are strong guarantees that their effects will not propagate.


Proceedings ArticleDOI
TL;DR: This paper describes a low-cost design tool that has been developed and is currently being successfully applied to design QMs for various sensors including IR, SAR, medical, scanned maps, and fingerprints.
Abstract: JPEG has already found wide acceptance for still-frame image compression. The quantization matrices (QMs) play a critical role in the performance of the JPEG algorithm, but there has been a lack of effective QM design tools. As a result, sub-optimal QMs have commonly been used, and JPEG has been judged to be inappropriate for some applications. It is our contention that JPEG is even more widely applicable than 'common knowledge' would admit. This paper describes a low-cost design tool that has been developed and is currently being successfully applied to design QMs for various sensors, including IR, SAR, medical, scanned maps, and fingerprints.
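
For reference, QMs are commonly derived by scaling a base table rather than designed per sensor; the sketch below applies the well-known IJG-style quality scaling to the example luminance table from the JPEG specification (standard practice, not the authors' design tool):

```python
import numpy as np

# Example luminance quantization table from the JPEG specification (Annex K).
K1 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def scale_qm(quality):
    """IJG-style quality scaling: quality in 1..100; 50 returns the base table."""
    s = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.clip((K1 * s + 50) // 100, 1, 255).astype(int)

print(scale_qm(75))
```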

Proceedings ArticleDOI
TL;DR: The limited data rate of low-power small satellites often requires that images be data- compressed before transmission, and several data compression techniques are currently being developed and improved.
Abstract: The limited data rate of low-power small satellites often requires that images be data- compressed before transmission. Several data compression techniques are currently being developed and improved. These methods include: vector quantization (VQ), Lempel-Ziv, fractal encoding, and discrete-cosine transform (DCT) methods such as JPEG and MPEG. JPEG (Joint Photographic Experts Group) is a still-image compression system. MPEG (Motion Picture Experts Group) is a compression and communications protocol which defines a syntax for transmitting several data types, including audio, user-data, and full-motion compressed video. MPEG allows `tolerable' NTSC full-motion video transmission at data rates as low as 1.2 Mbps, and video-conference-quality transmission at rates as low as 56 Kbps. However, since the MPEG standard includes JPEG as a subset, it allows the transmission of compressed still images as well.