
Showing papers on "Lossless compression published in 1993"


Journal ArticleDOI
J.M. Shapiro
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.
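
The embedding idea can be illustrated without any of the EZW machinery. The sketch below is a minimal bit-plane coder over a plain list of integer coefficients, with hypothetical names, omitting the wavelet transform, zerotree prediction, and arithmetic coding entirely; its only purpose is to show why any prefix of an embedded stream still decodes to a coarser approximation.

```python
def embedded_encode(coeffs, num_planes=8):
    """Send magnitude bits from the most significant bit-plane down,
    plus a sign bit the first time a coefficient becomes significant."""
    mags = [abs(int(c)) for c in coeffs]
    signs = [c < 0 for c in coeffs]
    stream = []
    for plane in range(num_planes - 1, -1, -1):
        for i, m in enumerate(mags):
            bit = (m >> plane) & 1
            stream.append(bit)                      # a real coder entropy-codes these decisions
            if bit and (m >> (plane + 1)) == 0:     # coefficient just became significant
                stream.append(1 if signs[i] else 0)
    return stream

def embedded_decode(stream, n, num_planes=8):
    """Reconstruct from any prefix of the stream; a longer prefix gives a finer result."""
    mags, signs = [0] * n, [1] * n
    it = iter(stream)
    try:
        for plane in range(num_planes - 1, -1, -1):
            for i in range(n):
                if next(it):
                    if mags[i] == 0:                # first significant bit carries the sign
                        signs[i] = -1 if next(it) else 1
                    mags[i] |= 1 << plane
    except StopIteration:
        pass                                        # truncated stream: return the coarse result
    return [m * s for m, s in zip(mags, signs)]

# Example: decoding the full stream is exact; decoding only a prefix is a coarser approximation.
coeffs = [57, -29, 3, 0, -14]
bits = embedded_encode(coeffs)
assert embedded_decode(bits, len(coeffs)) == coeffs
coarse = embedded_decode(bits[:12], len(coeffs))
```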

5,559 citations


Book
01 Mar 1993
TL;DR: Fractal image compression, as discussed by the authors, associates a fractal to an image: fractals are geometric or data structures which do not simplify under magnification and can be described in terms of a few succinct rules, while the fractal contains much or all of the image information.
Abstract: Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

673 citations


Proceedings ArticleDOI
30 Mar 1993
TL;DR: A new method gives compression comparable with the JPEG lossless mode, with about five times the speed, based on a novel use of two neighboring pixels for both prediction and error modeling.
Abstract: A new method, FELICS, gives compression comparable with the JPEG lossless mode, with about five times the speed. It is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding, the authors use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter they present and analyze a provably good method for estimating the single coding parameter.
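
The abstract does not reproduce the parameter-estimation method itself; the sketch below is a hedged stand-in that only shows the objects involved: signed prediction errors mapped to nonnegative integers, Rice coding with a single parameter k, and a simple exhaustive choice of k by total code length. The function names and the exhaustive estimator are assumptions, not the paper's algorithm.

```python
def map_signed(e):
    """Interleave signed prediction errors into nonnegative integers: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Rice code: quotient in unary (terminated by a 0), then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

def choose_k(values, k_max=15):
    """Exhaustively pick the single coding parameter minimizing total length
    (a simple stand-in for the paper's analyzed estimator)."""
    def total_length(k):
        return sum((v >> k) + 1 + k for v in values)   # unary quotient + stop bit + k bits
    return min(range(k_max + 1), key=total_length)

# Usage: map residuals, pick one k for the whole set, then Rice-code them.
errors = [0, -1, 3, 2, -5, 1, 0, -2]
values = [map_signed(e) for e in errors]
k = choose_k(values)
bitstring = "".join(rice_encode(v, k) for v in values)
```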

259 citations


Proceedings ArticleDOI
30 Mar 1993
TL;DR: The authors propose a lossless compression algorithm for DNA sequences based on regularities such as the presence of palindromes; its results, although not yet satisfactory, are far beyond those of classical algorithms.
Abstract: The authors propose a lossless algorithm based on regularities, such as the presence of palindromes, in the DNA. The results obtained, although not satisfactory, are far beyond those of classical algorithms.
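
In genomic data a "palindrome" usually means a segment equal to the reverse complement of an earlier segment, which a compressor can replace by a back-reference. The sketch below only shows a naive way to detect that regularity; the helper names are hypothetical and this is not the authors' algorithm.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(s):
    return "".join(COMPLEMENT[b] for b in reversed(s))

def palindrome_match_length(seq, i, min_len=4, max_len=32):
    """Longest L such that the reverse complement of seq[i:i+L] already occurs
    in seq[:i]; such a segment could be coded as a back-reference."""
    best, prefix = 0, seq[:i]
    for L in range(min_len, min(max_len, len(seq) - i) + 1):
        if reverse_complement(seq[i:i + L]) in prefix:
            best = L
        else:
            break            # a longer match would contain this one, so stop
    return best

# Example: positions whose upcoming bases are the reverse complement of an earlier segment.
seq = "ACGTTTGCATGCAAACGT"
hits = {i: palindrome_match_length(seq, i)
        for i in range(len(seq)) if palindrome_match_length(seq, i)}
```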

202 citations


Proceedings ArticleDOI
J.M. Shapiro
30 Mar 1993
TL;DR: The algorithm consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images, but requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source.
Abstract: This paper describes a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance. A fully embedded code represents a sequence of binary decisions that distinguish an image from the 'null' image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point thereby allowing a target rate or target distortion metric to be met exactly. Also, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. The algorithm consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images, but requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. It is based on four key concepts: (1) wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression achieved via adaptive arithmetic coding.

179 citations


Proceedings ArticleDOI
22 Oct 1993
TL;DR: A new image transformation suited for reversible (lossless) image compression is presented; it uses a simple pyramid multiresolution scheme enhanced via predictive coding, and when used for lossy compression its rate-distortion performance is comparable to that of other efficient lossy compression methods.
Abstract: In this paper a new image transformation suited for reversible (lossless) image compression is presented. It uses a simple pyramid multiresolution scheme which is enhanced via predictive coding. The new transformation is similar to the subband decomposition, but it uses only integer operations. The number of bits required to represent the transformed image is kept small through careful scaling and truncations. The lossless coding compression rates are smaller than those obtained with predictive coding of equivalent complexity. It is also shown that the new transform can be effectively used, with the same coding algorithm, for both lossless and lossy compression. When used for lossy compression, its rate-distortion function is comparable to other efficient lossy compression methods.
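
The paper's transform is not specified in the abstract. As a hedged illustration of an integer-only, exactly reversible average/difference decomposition of the same flavour, here is the classic S-transform pair (truncated mean plus difference) applied along a row; the names are illustrative and this is not claimed to be the paper's transform.

```python
def s_forward(a, b):
    """Integer 'average/difference' pair: s = floor((a + b) / 2), d = a - b."""
    d = a - b
    s = b + (d >> 1)          # arithmetic shift gives the floor for negative d too
    return s, d

def s_inverse(s, d):
    """Exact integer inverse of s_forward."""
    b = s - (d >> 1)
    return b + d, b

def decompose_row(row):
    """One level: low-pass (truncated means) and high-pass (differences) halves."""
    pairs = [s_forward(row[2 * i], row[2 * i + 1]) for i in range(len(row) // 2)]
    return [s for s, _ in pairs], [d for _, d in pairs]

low, high = decompose_row([10, 13, 7, 7, 200, 3])
# low == [11, 7, 101], high == [-3, 0, 197]; s_inverse recovers each original pair exactly.
```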

166 citations


Patent
21 Dec 1993
TL;DR: A method for compressing video movie data to a specified target size using intraframe and interframe compression schemes is proposed; the first intraframe stage is lossless, and a tolerance-based lossy stage is applied only when lossless compression alone cannot meet the target.
Abstract: A method for compressing video movie data to a specified target size using intraframe and interframe compression schemes. In intraframe compression, a frame of the movie is compressed by comparing adjacent pixels within the same frame. In contrast, interframe compression compresses by comparing similarly situated pixels of adjacent frames. The method begins by compressing the first frame of the video movie using intraframe compression. The first stage of the intraframe compression process does not degrade the quality of the original data, e.g., the method uses run length encoding based on the pixels' color values to compress the video data. However, in circumstances where lossless compression is not sufficient, the method utilizes a threshold value, or tolerance, to achieve further compression. In these cases, if the color variance between pixels is less than or equal to the tolerance, the method will encode the two pixels using a single color value--otherwise, the method will encode the two pixels using different color values. The method increases or decreases the tolerance to achieve compression within the target range. In cases where compression within the target range results in an image of unacceptable quality, the method will split the raw data in half and compress each portion of data separately. Frames after the first frame are generally compressed using a combination of intraframe and interframe compression. Additionally, the method periodically encodes frames using intraframe compression only in order to enhance random frame access.
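
A hedged sketch of the tolerance mechanism described above (function names and the search loop are illustrative assumptions, not the patent's implementation): run-length encode a scanline treating pixels within the tolerance as equal, and raise the tolerance until the output fits a target size. A tolerance of zero is the lossless case.

```python
def tolerance_rle(pixels, tolerance=0):
    """Run-length encode a scanline, treating values within `tolerance`
    of the run's first pixel as equal (tolerance 0 => lossless)."""
    runs = []
    i = 0
    while i < len(pixels):
        value, length = pixels[i], 1
        while i + length < len(pixels) and abs(pixels[i + length] - value) <= tolerance:
            length += 1
        runs.append((value, length))
        i += length
    return runs

def compress_to_target(pixels, target_runs, max_tolerance=64):
    """Increase the tolerance until the encoding fits the target size
    (cf. the patent's iterative adjustment toward a target compressed size)."""
    for tol in range(max_tolerance + 1):
        runs = tolerance_rle(pixels, tol)
        if len(runs) <= target_runs:
            return tol, runs
    return max_tolerance, tolerance_rle(pixels, max_tolerance)

# Usage: tolerance 0 reproduces the data exactly; larger tolerances merge near-equal pixels.
tol, runs = compress_to_target([10, 10, 11, 12, 40, 41, 41, 90], target_runs=3)
```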

140 citations


Paul G. Howard
02 Jan 1993
TL;DR: It is shown that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding, and that greatly increased speed can be achieved at only a small cost in compression efficiency.
Abstract: Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state of the art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed comes from simplified coding and modeling. Coding is simplified by using prefix codes when arithmetic coding is not necessary, and by using a new practical version of arithmetic coding, called quasi-arithmetic coding, when the precision of arithmetic coding is needed. We simplify image modeling by using small prediction contexts and making plausible assumptions about the distributions of pixel intensity values. For text modeling we use self-organizing-list heuristics and low-precision statistics.

133 citations


Patent
26 May 1993
TL;DR: A standby dictionary is used to store a subset of encoded data entries previously stored in a current dictionary to reduce the loss in data compression caused by dictionary resets, and data is compressed/decompressed according to the address location of data entries contained within a dictionary built in a content addressable memory (CAM).
Abstract: A class of lossless data compression algorithms use a memory-based dictionary of finite size to facilitate the compression and decompression of data. To reduce the loss in data compression caused by dictionary resets, a standby dictionary is used to store a subset of encoded data entries previously stored in a current dictionary. In a second aspect of the invention, data is compressed/decompressed according to the address location of data entries contained within a dictionary built in a content addressable memory (CAM). In a third aspect of the invention, the minimum memory/high compression capacity of the standby dictionary scheme is combined with the fast single-cycle per character encoding/decoding capacity of the CAM circuit. The circuit uses multiple dictionaries within the storage locations of a CAM to reduce the amount of memory required to provide a high data compression ratio.

121 citations


Patent
21 May 1993
TL;DR: A data compression/decompression processor (a single-chip VLSI data compression/decompression engine) for use in applications including but not limited to data storage and communications is described.
Abstract: A data compression/decompression processor (a single-chip VLSI data compression/decompression engine) for use in applications including but not limited to data storage and communications. The processor is highly versatile such that it can be used on a host bus or housed in host adapters, so that all devices such as magnetic disks, tape drives, optical drives and the like connected to it can have substantially expanded capacity and/or higher data transfer rate. The processor employs an advanced adaptive data compression algorithm with string-matching and link-list techniques so that it is completely adaptive, and a dictionary is constructed on the fly. No prior knowledge of the statistics of the characters in the data is needed. During decompression, the dictionary is reconstructed at the same time as the decoding occurs. The compression converges very quickly and the compression ratio approaches the theoretical limit. The processor is also insensitive to error propagation.

93 citations


Journal ArticleDOI
TL;DR: A class of hardware algorithms for implementing Lempel-Ziv-based data compression is described; Lempel-Ziv-based compression is a powerful technique for lossless data compression that gives high compression efficiency for text as well as image data.
Abstract: A class of hardware algorithms for implementing the Lempel-Ziv-based data compression technique is described. The Lempel-Ziv-based compression method is a powerful technique for lossless data compression that gives high compression efficiency for text as well as image data. The proposed hardware algorithms exploit the principles of pipelining and parallelism in order to obtain high speed and throughput. A prototype CMOS VLSI chip was designed and fabricated using 2-μm CMOS technology implementing a systolic array of nine processors. The chip gives a compression rate of 13.3 MB/s operating at 40 MHz. Two hardware algorithms for the decompression process are also described. The data compression hardware can be integrated into real-time systems so that data can be compressed and decompressed on-the-fly.

Journal ArticleDOI
TL;DR: Applications of the two-stage technique to typical seismic data indicate that an average number of compressed bits per sample close to the lower bound is achievable in practical situations.
Abstract: A two-stage technique for lossless waveform data compression is described. The first stage is a modified form of linear prediction with discrete coefficients, and the second stage is bilevel sequence coding. The linear predictor generates an error or residue sequence in a way such that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian with seismic or other similar waveform data. Bilevel sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The algorithm is lossless, allowing exact, bit-for-bit recovery of the original data sequence. The performance of the algorithm at each stage is analyzed. Applications of the two-stage technique to typical seismic data indicate that an average number of compressed bits per sample close to the lower bound is achievable in practical situations.
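
Only the exact-reconstruction property of the first stage is illustrated below; the paper's discrete-coefficient predictor and the bilevel sequence coder are not reproduced, and a first-order difference predictor is used as a hedged stand-in with illustrative names.

```python
def predict_residues(samples):
    """First-order integer prediction: residue[n] = x[n] - x[n-1], with x[0] sent verbatim.
    (The paper uses discrete-coefficient linear prediction, possibly of higher order;
    a simple difference predictor is enough to show the exact-reconstruction idea.)"""
    return [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]

def reconstruct(residues):
    """Bit-for-bit inverse of predict_residues."""
    out = []
    for r in residues:
        out.append(r if not out else out[-1] + r)
    return out

# The residues of slowly varying waveform data are small and easy to entropy-code,
# yet the original samples are recovered exactly.
samples = [1024, 1030, 1029, 1031, 1040, 1038]
assert reconstruct(predict_residues(samples)) == samples
```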

Book
01 Jan 1993
TL;DR: In this article, the authors present a development of rate-distortion theory and pattern-matching algorithms for lossy data compression, centered around a lossy version of the asymptotic equipartition property (AEP).
Abstract: We present a development of parts of rate-distortion theory and pattern-matching algorithms for lossy data compression, centered around a lossy version of the asymptotic equipartition property (AEP). This treatment closely parallels the corresponding development in lossless compression, a point of view that was advanced in an important paper of Wyner and Ziv in 1989. In the lossless case, we review how the AEP underlies the analysis of the Lempel-Ziv algorithm by viewing it as a random code and reducing it to the idealized Shannon code. This also provides information about the redundancy of the Lempel-Ziv algorithm and about the asymptotic behavior of several relevant quantities. In the lossy case, we give various versions of the statement of the generalized AEP and we outline the general methodology of its proof via large deviations. Its relationship with Barron's (1985) and Orey's (1985, 1986) generalized AEP is also discussed. The lossy AEP is applied to (i) prove strengthened versions of Shannon's (1948, 1974) direct source-coding theorem and universal coding theorems; (ii) characterize the performance of "mismatched" codebooks in lossy data compression; (iii) analyze the performance of pattern-matching algorithms for lossy compression (including Lempel-Ziv schemes); and (iv) determine the first-order asymptotic of waiting times between stationary processes. A refinement to the lossy AEP is then presented, and it is used to (i) prove second-order (direct and converse) lossy source-coding theorems, including universal coding theorems; (ii) characterize which sources are quantitatively easier to compress; (iii) determine the second-order asymptotic of waiting times between stationary processes; and (iv) determine the precise asymptotic behavior of longest match-lengths between stationary processes. Finally, we discuss extensions of the above framework and results to random fields.

Journal ArticleDOI
TL;DR: In this article, the frequency-dependent propagation characteristics of lossless and lossy open coupled polygonal conductor transmission lines in a multilayered medium are determined based on a rigorous full-wave analysis.
Abstract: The frequency-dependent propagation characteristics of lossless and lossy open coupled polygonal conductor transmission lines in a multilayered medium are determined based on a rigorous full-wave analysis. A boundary integral equation technique is used in conjunction with the method of moments. Losses in conductors and layers are included in an exact way without making use of a perturbation approach. Dispersion curves for the complex propagation constants and impedances are presented for a number of relevant examples and, where possible, compared with published data.

Journal ArticleDOI
TL;DR: The two-dimensional method of Langdon and Rissanen for compression of black and white images is extended to handle the exact lossless compression of grey-scale images, using the JPEG lossless mode predictors.
Abstract: The two-dimensional method of Langdon and Rissanen for compression of black and white images is extended to handle the exact lossless compression of grey-scale images. Neighbouring pixel values are used to define contexts and probabilities associated with these contexts are used to compress the image. The problem of restricting the number of contexts, both to limit the storage requirements and to be able to obtain sufficient data to generate meaningful probabilities, is addressed. Investigations on a variety of images are carried out using the JPEG lossless mode predictors.
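
The JPEG lossless-mode predictors the paper builds on are standard and can be written down directly; the paper's context formation and probability estimation are not reproduced here. In the sketch, a is the left neighbour, b the neighbour above, and c the neighbour above-left.

```python
def jpeg_lossless_predict(mode, a, b, c):
    """The JPEG lossless-mode predictors (selection value 0 means 'no prediction');
    the standard specifies the halvings as integer shifts."""
    predictors = {
        0: 0,
        1: a,
        2: b,
        3: c,
        4: a + b - c,
        5: a + ((b - c) >> 1),
        6: b + ((a - c) >> 1),
        7: (a + b) >> 1,
    }
    return predictors[mode]

# Prediction error for one pixel, given its causal neighbours:
error = 103 - jpeg_lossless_predict(4, a=100, b=105, c=101)   # 103 - 104 = -1
```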

Journal ArticleDOI
TL;DR: New methods for lossless predictive coding of medical images using two-dimensional multiplicative autoregressive models are presented; experimental results indicate that the proposed schemes achieve higher compression than the lossless image coding techniques considered.
Abstract: Presents new methods for lossless predictive coding of medical images using two dimensional multiplicative autoregressive models. Both single-resolution and multi-resolution schemes are presented. The performances of the proposed schemes are compared with those of four existing techniques. The experimental results clearly indicate that the proposed schemes achieve higher compression compared to the lossless image coding techniques considered.

Proceedings ArticleDOI
30 Mar 1993
TL;DR: A data compression algorithm capable of significantly reducing the amount of information contained in multispectral and hyperspectral images is presented; application of reversible histogram equalization methods to the spectral bands can significantly increase the compression/distortion performance.
Abstract: This paper presents a data compression algorithm capable of significantly reducing the amounts of information contained in multispectral and hyperspectral images. The loss of information ranges from a perceptually lossless level, achieved at 20-30:1 compression ratios, to one where exploitation of the images is still possible (over 100:1 ratios). A one-dimensional transform coder removes the spectral redundancy, and a two-dimensional wavelet transform removes the spatial redundancy of multispectral images. The transformed images are subsequently divided into active regions that contain significant wavelet coefficients. Each active block is then hierarchically encoded using multidimensional bitmap trees. Application of reversible histogram equalization methods on the spectral bands can significantly increase the compression/distortion performance. Landsat Thematic Mapper data are used to illustrate the performance of the proposed algorithm.

Journal ArticleDOI
TL;DR: In this paper, a time-scale representation of (acoustic) signals, motivated by the structure of the mammalian auditory system, is presented, and a theoretical framework is developed in which an iterative algorithm for reconstruction is constructed.

Proceedings ArticleDOI
07 Mar 1993
TL;DR: This study covers data compression algorithms, file format schemes, and fractal image compression, examining in depth how an iterative approach to image compression is implemented.
Abstract: Data compression as it is applicable to image processing is addressed. The relative effectiveness of several image compression strategies is analyzed. This study covers data compression algorithms, file format schemes, and fractal image compression. An overview of the popular LZW compression algorithm and its subsequent variations is also given. Several common image file formats are surveyed, highlighting the differing approaches to image compression. Fractal compression is examined in depth to reveal how an iterative approach to image compression is implemented. The performance of these techniques is compared for a variety of landscape images, considering such parameters as data reduction ratios and information loss.
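
Since the survey's treatment of LZW is narrative, a minimal byte-oriented LZW encoder is sketched below for reference; code packing, dictionary-size limits, and the matching decoder (which rebuilds the identical dictionary from the code stream) are omitted, and the function name is illustrative.

```python
def lzw_encode(data: bytes):
    """Minimal LZW: grow a dictionary of byte strings on the fly and emit integer codes."""
    dictionary = {bytes([i]): i for i in range(256)}   # start with all single bytes
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                         # extend the current match
        else:
            codes.append(dictionary[w])    # emit the longest known string
            dictionary[wc] = next_code     # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

# Repetitive input compresses: repeated substrings collapse to single codes.
codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
```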

Proceedings ArticleDOI
23 May 1993
TL;DR: Simulations show that the proposed technique for redundancy removal of quantized coefficients of a wavelet transform performs better than classical methods, while maintaining an efficient implementation complexity.
Abstract: A novel technique for redundancy removal of quantized coefficients of a wavelet transform is discussed. This technique rests on the coding of the address of nonzero coefficients using blocks, in both lossy and lossless approaches. Simulations show that the proposed technique performs better than classical methods, while maintaining an efficient implementation complexity.

Journal ArticleDOI
TL;DR: B-spline methods are described as good candidates for data array compression, and the mathematical relation between the maximum entropy method for compression of data tables and the B-spline of zeroth degree is described, together with the generalization of B-spline compression to nth-order data array tables in matrix and tensor algebra.
Abstract: For efficient handling of very large data arrays, pretreatment by compression is mandatory. In the present paper B-spline methods are described as good candidates for such data array compression. The mathematical relation between the maximum entropy method for compression of data tables and the B-spline of zeroth degree is described together with the generalization of B-spline compression to nth-order data array tables in matrix and tensor algebra.

Proceedings ArticleDOI
01 Jan 1993
TL;DR: This paper provides the basic algorithmic definitions and performance characterizations for a high-performance adaptive noiseless (lossless) 'coding module' which is currently under separate development as single-chip microelectronic circuits at two NASA centers.
Abstract: This paper provides the basic algorithmic definitions and performance characterizations for a high-performance adaptive noiseless (lossless) 'coding module' which is currently under separate development as single-chip microelectronic circuits at two NASA centers. Laboratory tests of one of these implementations recently demonstrated coding rates of up to 900 Mbits/s. A companion 'decoding module' can operate at up to half the coder's rate. The functionality provided by these modules should be applicable to most of NASA's science data. The hardware modules incorporate a powerful adaptive noiseless coder for 'standard form' data sources (i.e., sources whose symbols can be represented by uncorrelated nonnegative integers where the smaller integers are more likely than the larger ones). Performance close to the data entropy can be expected over a 'dynamic range' of from 1.5 to 12-15 bits/sample (depending on the implementation). This is accomplished by adaptively choosing the best of many Huffman equivalent codes to use on each block of 1-16 samples. Because of the extreme simplicity of these codes no table lookups are actually required in an implementation, thus leading to the expected very high data rate capabilities already noted.
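
A hedged sketch of the adaptive selection idea for 'standard form' samples (nonnegative integers with small values most likely): for each short block, the coded length under each candidate Rice/Golomb power-of-two option is a simple sum, so the best option can be picked by comparing lengths, with no code tables at all. The option set and decision rule of the actual flight modules are not reproduced; names are illustrative.

```python
def rice_block_length(block, k):
    """Bits needed to Rice-code a block with parameter k:
    unary quotient + stop bit + k remainder bits per sample."""
    return sum((s >> k) + 1 + k for s in block)

def select_options(samples, block_size=16, k_candidates=range(0, 14)):
    """Per block of up to `block_size` samples, pick the cheapest option
    by comparing coded lengths only."""
    choices = []
    for start in range(0, len(samples), block_size):
        block = samples[start:start + block_size]
        k_best = min(k_candidates, key=lambda k: rice_block_length(block, k))
        choices.append((k_best, rice_block_length(block, k_best)))
    return choices

# A block of small residuals picks a small k; a noisier block picks a larger one.
print(select_options([0, 1, 0, 2, 1, 0, 3, 1] + [37, 12, 55, 40, 9, 61, 23, 18],
                     block_size=8))
```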

Proceedings ArticleDOI
30 Mar 1993
TL;DR: A detailed algorithm for fast text compression, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism, and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding.
Abstract: A detailed algorithm for fast text compression, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism, and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. The authors provide details of the use of quasi-arithmetic code tables, and analyze their compression performance. The Fast PPM method is shown experimentally to be almost twice as fast as the PPMC method, while giving comparable compression.

Journal ArticleDOI
TL;DR: A lossless data compression and decompression (LDCD) algorithm based on the notion of textual substitution has been implemented in silicon using a linear systolic array architecture.
Abstract: A lossless data compression and decompression (LDCD) algorithm based on the notion of textual substitution has been implemented in silicon using a linear systolic array architecture. This algorithm employs a model in which the encoder and decoder each have a finite amount of memory which is referred to as the dictionary. Compression is achieved by finding matches between the dictionary and the input data stream whereby a substitution is made in the data stream by an index referencing the corresponding dictionary entry. The LDCD system is built using 30 application-specific integrated circuits (ASICs), each containing 126 identical processing elements (PEs) which perform both the encoding and decoding function at clock rates up to 20 MHz.

01 Jan 1993
TL;DR: In this article, the H-transform was used for image compression of sky survey images with no noticeable losses in the astrometric and photometric properties of the compressed images, and the method was designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1.
Abstract: Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
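
One level of a 2 x 2 H-transform on integer pixel data is easy to sketch (NumPy used for convenience; names are illustrative). The quantization, coding, and multi-level handling of the low-pass block described in the paper are omitted; the point is only that the transform is integer arithmetic with an exact inverse, consistent with the reversible lossless mode mentioned above.

```python
import numpy as np

def htransform_level(img):
    """One level of the 2x2 H-transform: per 2x2 block, the sum and the three
    difference combinations (all integer arithmetic). img must have even sides."""
    a = img[0::2, 0::2].astype(np.int64)
    b = img[0::2, 1::2].astype(np.int64)
    c = img[1::2, 0::2].astype(np.int64)
    d = img[1::2, 1::2].astype(np.int64)
    h0 = a + b + c + d       # smoothed / low-pass block (transformed again at the next level)
    hx = a + b - c - d       # one difference orientation
    hy = a - b + c - d       # second difference orientation
    hc = a - b - c + d       # diagonal difference
    return h0, hx, hy, hc

def inverse_htransform_level(h0, hx, hy, hc):
    """Exact inverse of one level (each of the four sums below is a multiple of 4)."""
    a = (h0 + hx + hy + hc) // 4
    b = (h0 + hx - hy - hc) // 4
    c = (h0 - hx + hy - hc) // 4
    d = (h0 - hx - hy + hc) // 4
    out = np.empty((2 * h0.shape[0], 2 * h0.shape[1]), dtype=np.int64)
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

img = np.arange(16, dtype=np.int64).reshape(4, 4)
assert np.array_equal(inverse_htransform_level(*htransform_level(img)), img)
```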

Proceedings ArticleDOI
01 Nov 1993
TL;DR: The authors have found that their distortion measure's ranking matches the subjective ranking accurately, whereas the mean-square error and its variants performed poorly in matching the subjective ranking on average.
Abstract: Describes quantitative distortion measures for compressed monochrome image and video based on a psycho-visual model. The model follows human visual perception in that the distortion as perceived by a human viewer is dominated by the compression error uncorrelated with the local features of the original image, and for a video sequence the distortion is perceived from two sources, the still areas and the motion areas of a video frame. The authors have performed subjective tests to obtain ranking results for compressed images and video sequences which were compressed using different compression algorithms and compared the results with the rankings obtained using their distortion measure and other existing mean-square error based measures. They have found that their distortion measure's ranking matches the subjective ranking accurately, whereas the mean-square error and its variants performed poorly in matching the subjective ranking on average.

Proceedings ArticleDOI
30 Mar 1993
TL;DR: A new image compression algorithm employs some of the most successful approaches to adaptive lossless compression to perform adaptive on-line (single-pass) vector quantization; for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard.
Abstract: A new image compression algorithm employs some of the most successful approaches to adaptive lossless compression to perform adaptive on-line (single pass) vector quantization. The authors have tested this algorithm with a host of standard test images (e.g. gray scale magazine images, medical images, space and scientific images, fingerprint images, and handwriting images); with no prior knowledge of the data and no training, for a given fidelity the compression achieved typically equals or exceeds that of the JPEG standard. The only information that must be specified in advance is the fidelity criterion.

Proceedings ArticleDOI
31 Oct 1993
TL;DR: The authors discuss the use of lossless compression hardware in a modern PET 3D acquisition system that uses an implementation of Lempel-Ziv compression with an estimated sustained throughput of 20 megabytes per second and compression ratios of 3 to 10 for short duration 3D projection arrays.
Abstract: H₂¹⁵O 3D bolus studies and other dynamic 3D protocols in positron emission tomography (PET) require frame durations of five seconds or less as the protocol begins, with required frame durations increasing as the study progresses. A goal in PET acquisition is sustained 3D frame duration of ten seconds, with shorter frame durations allowed for a limited time. Transfer of projection arrays to an acquisition hard disk is a major limit of frame duration. Data compression can be used to increase the effective disk throughput and therefore decrease the sustained frame duration. The authors discuss the use of lossless compression hardware in a modern PET 3D acquisition system. The hardware uses an implementation of Lempel-Ziv compression with an estimated sustained throughput of 20 megabytes per second and compression ratios of 3 to 10 for short duration 3D projection arrays. Compression use can decrease the minimum sustainable frame duration to less than 10 seconds for an ECAT EXACT HR.

30 Mar 1993
TL;DR: The proposed data compression algorithms are a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform scheme and a Lempel-Ziv-Welch variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table and a simple and efficient hashing function to perform the string search.
Abstract: The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. In case the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGA) on an S-band (2.3 GHz) carrier. A great deal of effort has been and will be expended in attempts to open the HGA. Also, various options for improving Galileo's telemetry downlink performance are being evaluated in the event that the HGA will not open at Jupiter arrival. Among all viable options the most promising and powerful one is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight re-programming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of the DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.

Journal ArticleDOI
Yiyan Wu, D.C. Coll
TL;DR: An encoding technique called multilevel block truncation coding that preserves the spatial details in digital images while achieving a reasonable compression ratio is described and an adaptive quantizer-level allocation scheme which minimizes the maximum quantization error in each block is introduced.
Abstract: An encoding technique called multilevel block truncation coding that preserves the spatial details in digital images while achieving a reasonable compression ratio is described. An adaptive quantizer-level allocation scheme which minimizes the maximum quantization error in each block and substantially reduces the computational complexity in the allocation of optimal quantization levels is introduced. A 3.2:1 compression can be achieved by the multilevel block truncation coding itself. The truncated, or requantized, data are further compressed in a second pass using combined predictive coding, entropy coding, and vector quantization. The second pass compression can be lossless or lossy. The total compression ratios are about 4.1:1 for lossless second-pass compression, and 6.2:1 for lossy second-pass compression. The subjective results of the coding algorithm are quite satisfactory, with no perceived visual degradation.
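
For reference, the classic two-level block truncation coder that the multilevel scheme generalizes can be sketched as follows (NumPy, illustrative names): a bitmap plus two levels chosen to preserve the block mean and standard deviation. The paper's adaptive quantizer-level allocation and second-pass coding are not reproduced.

```python
import numpy as np

def btc_block(block):
    """Classic two-level BTC for one block: threshold at the block mean and pick two
    reconstruction levels that preserve the block mean and standard deviation."""
    x = block.astype(np.float64)
    mean, std = x.mean(), x.std()
    bitmap = x >= mean
    q, m = int(bitmap.sum()), x.size     # number of "high" pixels, block size
    if q in (0, m):                      # flat block: a single level suffices
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return bitmap, low, high

def btc_reconstruct(bitmap, low, high):
    return np.where(bitmap, high, low)

# Usage: the reconstructed block has the same mean and standard deviation as the original.
block = np.array([[2, 3, 9, 9],
                  [2, 2, 8, 9],
                  [3, 2, 9, 8],
                  [2, 3, 8, 9]])
bitmap, low, high = btc_block(block)
approx = btc_reconstruct(bitmap, low, high)
```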