
Showing papers on "Lossless compression published in 1995"


Journal ArticleDOI
Davis Y. Pan
TL;DR: This tutorial covers the theory behind MPEG/audio compression, the basics of psychoacoustic modeling, and the methods the algorithm uses to compress audio data with the least perceptible degradation.
Abstract: This tutorial covers the theory behind MPEG/audio compression. While lossy, the algorithm can often provide "transparent", perceptually lossless compression, even with factors of 6-to-1 or more. It exploits the perceptual properties of the human auditory system. The article also covers the basics of psychoacoustic modeling and the methods the algorithm uses to compress audio data with the least perceptible degradation.

382 citations


Journal ArticleDOI
01 Feb 1995
TL;DR: This paper reviews the recent progress of lossless and lossy radiologic image compression, presents the legal challenges of using lossy compression of medical records, and examines current compression technology in the field of medical imaging.
Abstract: The objective of radiologic image compression is to reduce the data volume of and to achieve a low bit rate in the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further implicates many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, we first describe the fundamental concepts of radiologic imaging and digitization. Then, we examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. We conclude with a summary of future challenges and research directions.

306 citations


Proceedings ArticleDOI
28 Mar 1995
TL;DR: CREW provides state-of-the-art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8-bit-deep images with a single system.
Abstract: Compression with Reversible Embedded Wavelets (CREW) is a unified lossless and lossy continuous-tone still image compression system. It is wavelet-based, using a "reversible" approximation of one of the best wavelet filters. Reversible wavelets are linear filters with nonlinear rounding which implement exact-reconstruction systems with minimal-precision integer arithmetic. Wavelet coefficients are encoded in a bit-significance embedded order, allowing lossy compression by simply truncating the compressed data. For coding of coefficients, CREW uses a method similar to J. Shapiro's (1993) zerotree, and a completely novel method called Horizon. Horizon coding is a context-based coding that takes advantage of the spatial and spectral information available in the wavelet domain. CREW provides state-of-the-art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8-bit-deep images with a single system. CREW has reasonable software and hardware implementations.
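The "reversible wavelet" idea can be illustrated with the simplest integer-to-integer wavelet, the S-transform (pairwise rounded averages and differences). This is only a sketch of the exact-reconstruction property with integer arithmetic; CREW itself uses a reversible approximation of a longer, higher-quality filter, and the function names here are illustrative, not from the paper.

```python
def s_transform(x):
    """Forward S-transform on an even-length list of integers.
    Produces integer lowpass (rounded averages) and highpass (differences)
    coefficients; the rounding is nonlinear but loses no information."""
    low = [(a + b) // 2 for a, b in zip(x[0::2], x[1::2])]
    high = [a - b for a, b in zip(x[0::2], x[1::2])]
    return low, high

def inverse_s_transform(low, high):
    """Exact integer reconstruction from the S-transform coefficients."""
    x = []
    for l, h in zip(low, high):
        a = l + (h + 1) // 2   # recovers the first sample of the pair exactly
        x.extend([a, a - h])
    return x

samples = [12, 15, 14, 14, 9, 200, 7, 7]
low, high = s_transform(samples)
assert inverse_s_transform(low, high) == samples  # lossless round trip
```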

252 citations


Proceedings ArticleDOI
28 Mar 1995
TL;DR: A new algorithm is described, PPM*, which exploits contexts of unbounded length and reliably achieves compression superior to PPMC, although the current implementation uses considerably greater computational resources (both time and space).
Abstract: The prediction by partial matching (PPM) data compression scheme has set the performance standard in lossless compression of text throughout the past decade. The original algorithm was first published in 1984 by Cleary and Witten, and a series of improvements was described by Moffat (1990), culminating in a careful implementation, called PPMC, which has become the benchmark version. This still achieves results superior to virtually all other compression methods, despite many attempts to better it. PPM is a finite-context statistical modeling technique that can be viewed as blending together several fixed-order context models to predict the next character in the input sequence. Prediction probabilities for each context in the model are calculated from frequency counts which are updated adaptively, and the symbol that actually occurs is encoded relative to its predicted distribution using arithmetic coding. The paper describes a new algorithm, PPM*, which exploits contexts of unbounded length. It reliably achieves compression superior to PPMC, although our current implementation uses considerably greater computational resources (both time and space). The basic PPM compression scheme is described, showing the use of contexts of unbounded length, and how it can be implemented using a tree data structure. Some results are given that demonstrate an improvement of about 6% over the old method.
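The core of any PPM variant is a set of adaptively updated frequency counts indexed by context. The sketch below is a deliberately simplified, bounded-order context model with a crude fallback in place of PPM's escape estimator; it is not PPM* (which uses unbounded contexts and a proper escape mechanism), only an illustration of how context counts turn into prediction probabilities and an ideal code length.

```python
import math
from collections import defaultdict

class ToyContextModel:
    """Bounded-order context model: counts[context][symbol] -> frequency."""
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def prob(self, history, symbol):
        # Longest context first; fall back to shorter contexts (a crude
        # stand-in for PPM's escape mechanism), then to a tiny floor value.
        for k in range(min(self.max_order, len(history)), -1, -1):
            ctx = history[len(history) - k:]
            seen = self.counts[ctx]
            total = sum(seen.values())
            if total and seen[symbol]:
                return seen[symbol] / total
        return 1e-6

    def update(self, history, symbol):
        for k in range(min(self.max_order, len(history)) + 1):
            self.counts[history[len(history) - k:]][symbol] += 1

def code_length_bits(text, model):
    """Ideal arithmetic-code length of `text` under the adaptive model."""
    bits = 0.0
    for i, ch in enumerate(text):
        bits += -math.log2(model.prob(text[:i], ch))
        model.update(text[:i], ch)
    return bits

print(code_length_bits("abracadabra abracadabra", ToyContextModel()))
```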

218 citations


Patent
30 Jun 1995
TL;DR: In this article, a reversible wavelet filter is used to generate coefficients from input data such as image data and an entropy coder performs entropy coding on the embedded codestream to produce the compressed data stream.
Abstract: A compression and decompression system in which a reversible wavelet filter is used to generate coefficients from input data such as image data. The reversible wavelet filter is an efficient transform implemented with integer arithmetic that has exact reconstruction. The present invention uses the reversible wavelet filter in a lossless system (or lossy system) in which an embedded codestream is generated from the coefficients produced by the filter. An entropy coder performs entropy coding on the embedded codestream to produce the compressed data stream.

159 citations


Patent
17 Apr 1995
TL;DR: In this paper, an entropy encoding technique using multiple Huffman code tables was used for audio signal compression and decompression, and a block structure for the compressed data and a decoder for reconstructing the original audio signal from compressed data were also disclosed.
Abstract: An audio signal compression and decompression method and apparatus that provide lossless, realtime performance. The compression/decompression method and apparatus are based on an entropy encoding technique using multiple Huffman code tables. Uncompressed audio data samples are first processed by a prediction filter which generates prediction error samples. An optimum coding table is then selected from a number of different preselected tables which have been tailored to different probability density functions of the prediction error. For each frame of prediction error samples, an entropy encoder selects the one Huffman code table which will yield the shortest encoded representation of the frame of prediction error samples. The frame of prediction error samples is then encoded using the selected Huffman code table. A block structure for the compressed data and a decoder for reconstructing the original audio signal from the compressed data are also disclosed.
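A minimal sketch of the per-frame table selection step, under the assumption that each preselected Huffman table is represented simply as a map from prediction-error value to code length in bits; the table contents, the escape cost, and the function names are illustrative, not taken from the patent.

```python
def pick_code_table(frame, tables, escape_bits=24):
    """Return (index, total_bits) of the table giving the shortest encoding
    of one frame of prediction-error samples. `tables` is a list of dicts
    mapping an error value to a Huffman code length in bits; values missing
    from a table are charged a flat escape cost (an assumption here)."""
    best_index, best_bits = 0, float("inf")
    for i, table in enumerate(tables):
        bits = sum(table.get(e, escape_bits) for e in frame)
        if bits < best_bits:
            best_index, best_bits = i, bits
    return best_index, best_bits

# Hypothetical tables tuned to a narrow and a wider error distribution.
narrow = {0: 1, 1: 3, -1: 3, 2: 5, -2: 5}
wide = {0: 3, 1: 3, -1: 3, 2: 4, -2: 4, 3: 4, -3: 4}
print(pick_code_table([0, 1, 0, -1, 0, 0, 2], [narrow, wide]))
```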

125 citations


Proceedings ArticleDOI
17 Feb 1995
TL;DR: A number of simple techniques that can be used to assess perceived image quality are discussed and it is demonstrated that the results from a numerical scaling experiment depend on the specific nature of the subject's task in combination with the nature ofThe images to be judged.
Abstract: The large variety of algorithms for data compression has created a growing need for methods to judge (new) compression algorithms. The results of several subjective experiments illustrate that numerical category scaling techniques provide an efficient and valid way not only to obtain compression ratio versus quality curves that characterize coder performance over a broad range of compression ratios, but also to assess perceived image quality in a much smaller range (e.g. close to threshold level). Our first objective is to discuss a number of simple techniques that can be used to assess perceived image quality. We show how to analyze data obtained from numerical category scaling experiments and how to set up such experiments. Second, we demonstrate that the results from a numerical scaling experiment depend on the specific nature of the subject's task in combination with the nature of the images to be judged. As results from subjective scaling experiments depend on many factors, we conclude that one should be very careful in selecting an appropriate assessment technique.

121 citations


Journal ArticleDOI
TL;DR: Experiments with 500 Mb of newspaper articles show that in full‐text retrieval environments compression not only saves space, it can also yield faster query processing ‐ a win‐win situation.
Abstract: We describe the implementation of a data compression scheme as an integral and transparent layer within a full-text retrieval system. Using a semi-static word-based compression model, the space needed to store the text is under 30 per cent of the original requirement. The model is used in conjunction with canonical Huffman coding and together these two paradigms provide fast decompression. Experiments with 500 Mb of newspaper articles show that in full-text retrieval environments compression not only saves space, it can also yield faster query processing - a win-win situation.
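Canonical Huffman coding, mentioned above, assigns codewords purely from the code lengths, so a decoder only needs the lengths (here assumed to come from a semi-static word-frequency model). A minimal sketch of the assignment rule; the example vocabulary and lengths are made up for illustration.

```python
def canonical_codes(code_lengths):
    """Assign canonical Huffman codewords from a symbol -> length map.
    Symbols are sorted by (length, symbol); each code is the previous code
    plus one, shifted left whenever the length grows. Only the lengths need
    to be stored, which keeps the model compact and decoding fast."""
    symbols = sorted(code_lengths, key=lambda s: (code_lengths[s], s))
    codes, code, prev_len = {}, 0, 0
    for s in symbols:
        length = code_lengths[s]
        code <<= (length - prev_len)
        codes[s] = format(code, f"0{length}b")
        code += 1
        prev_len = length
    return codes

# Hypothetical word lengths from a semi-static model of newspaper text.
print(canonical_codes({"the": 2, "of": 2, "compression": 3, "retrieval": 3, "text": 3}))
```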

103 citations


Proceedings ArticleDOI
03 Mar 1995
TL;DR: The critical task in lossless data compression is finding good models for the data under consideration, and algorithms that 'learn' the parameters of the model that best describes the data and achieve rates that are asymptotically optimal are known as optimal universal coding schemes.
Abstract: Given a finite sequence x_1, x_2, ..., x_n, the essential problem in lossless data compression is to process the symbols in some order and assign a conditional probability distribution to the current symbol based on the previously processed symbols [44]. For example, if we process x_1, x_2, ..., x_n in a sequential manner [53], then we need to estimate the distributions p(x_{j+1} | x_1, x_2, ..., x_j), 1 <= j < n. The number of bits needed to optimally encode the sequence x_1, ..., x_n is then given by -log prod_j p(x_{j+1} | x_1, ..., x_j). Coding techniques that can encode the sequence at rates close to this optimum are known [43]. Hence, the higher the probabilities assigned in the above product, the fewer the bits needed to encode the sequence. A model, in this context, is simply a scheme for assigning conditional probability distributions [53]. Clearly, it is the model that determines the rate at which we can encode the sequence. Hence the critical task in lossless data compression is finding good models for the data under consideration. Finding good models for a given data set is a difficult problem. In lossless compression applications some structure is usually imposed on the data in the form of finite-state models, Markov models, tree models, finite-context models, etc., to make the problem mathematically and/or computationally tractable. Algorithms are then designed that encode the given data in an optimal or sub-optimal manner. Algorithms that 'learn' the parameters of the model that best describes the data and achieve rates that are asymptotically optimal are known as optimal universal coding schemes. Such schemes have been applied very successfully in text or string compression applications. Unfortunately, universal coding schemes and other standard modelling techniques do not work well in practice when applied to gray-scale image data.
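The code-length formula above can be made concrete with the simplest possible model, an adaptive order-0 estimator with Laplace smoothing. This only illustrates how a model's conditional probabilities translate into a code length in bits; it is not one of the image models the paper has in mind.

```python
import math
from collections import Counter

def ideal_code_length(sequence, alphabet_size=256):
    """Ideal code length, in bits, under an adaptive order-0 model with
    Laplace smoothing: each symbol is charged -log2 p(x_j | x_1 .. x_{j-1}),
    after which the counts are updated. The quality of the model, not the
    coder, determines this number."""
    counts = Counter()
    seen = 0
    bits = 0.0
    for x in sequence:
        p = (counts[x] + 1) / (seen + alphabet_size)
        bits += -math.log2(p)
        counts[x] += 1
        seen += 1
    return bits

print(ideal_code_length(b"aaaaabbbbbcc"))
```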

94 citations


Proceedings ArticleDOI
30 Oct 1995
TL;DR: The sequential, lossless compression schemes obtained when the context modeler is used with an arithmetic coder are tested with a representative set of gray-scale images, and the compression ratios are compared with state-of-the-art algorithms available in the literature.
Abstract: Inspired by theoretical results on universal modeling, a general framework for sequential modeling of gray-scale images is proposed and applied to lossless compression. The model is based on stochastic complexity considerations and is implemented with a tree structure. It is efficiently estimated by a modification of the universal algorithm Context. The sequential, lossless compression schemes obtained when the context modeler is used with an arithmetic coder are tested with a representative set of gray-scale images. The compression ratios are compared with those obtained with state-of-the-art algorithms available in the literature, with the results of the comparison showing the potential of the proposed approach.

88 citations


Proceedings ArticleDOI
22 Aug 1995
TL;DR: In this article, a buffering scheme which allows a one-pass implementation with reasonable memory is described, which takes advantage of the spatial and spectral information available in the wavelet domain and adapts well to the less significant bits.
Abstract: Compression with reversible embedded wavelets (CREW) is a unified lossless and lossy continuous-tone still image compression system. 'Reversible' wavelets are nonlinear filters which implement exact-reconstruction systems with minimal precision integer arithmetic. Wavelet coefficients are encoded in a bit-significance embedded order, allowing lossy compression by truncating the compressed data. Lossless coding of wavelet coefficients is unique to CREW. In fact, most of the coded data is created by the less significant bits of the coefficients. CREW's context-based coding, called Horizon coding, takes advantage of the spatial and spectral information available in the wavelet domain and adapts well to the less significant bits. In applications where the size of an image is large, it is desirable to perform compression in one pass using far less workspace memory than the size of the image. A buffering scheme which allows a one-pass implementation with reasonable memory is described.

Journal ArticleDOI
TL;DR: A method for applying arithmetic coding to lossless waveform compression is discussed and a formula for selecting ranges of waveform values is provided.
Abstract: A method for applying arithmetic coding to lossless waveform compression is discussed. Arithmetic coding has been used widely in lossless text compression and is known to produce compression ratios that are nearly optimal when the symbol table consists of an ordinary alphabet. In lossless compression of digitized waveform data, however, if each possible sample value is viewed as a "symbol", the symbol table would typically be very large and impractical. The authors therefore define a symbol to be a certain range of possible waveform values, rather than a single value, and develop a coding scheme on this basis. The coding scheme consists of two compression stages. The first stage is lossless linear prediction, which removes coherent components from a digitized waveform and produces a residue sequence that is assumed to have a white spectrum and a Gaussian amplitude distribution. The prediction is lossless in the sense that the original digitized waveform can be recovered by processing the residue sequence. The second stage, which is the subject of the present paper, is arithmetic coding used as just described. A formula for selecting ranges of waveform values is provided. Experiments with seismic and speech waveforms that produce near-optimal results are included.
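A sketch of the two-stage structure. The predictor here is only first-order and the ranges are fixed-width, both simplifying assumptions; the paper uses a general lossless linear predictor and derives the range boundaries from the Gaussian statistics of the residue. The point is that each residue splits losslessly into a range symbol (the part handed to the arithmetic coder) plus an offset within that range.

```python
def prediction_residue(samples):
    """First-order lossless prediction: residue[n] = x[n] - x[n-1].
    The original samples are recovered exactly by cumulative summation."""
    prev = 0
    out = []
    for x in samples:
        out.append(x - prev)
        prev = x
    return out

def residue_to_range_symbols(residues, range_width=16):
    """Map each residue to (range_symbol, offset_within_range). Treating a
    whole range of values as one 'symbol' keeps the coder's symbol table
    small; fixed-width ranges are an illustrative assumption here."""
    return [(r // range_width, r % range_width) for r in residues]

samples = [100, 103, 101, 130, 90]
res = prediction_residue(samples)
pairs = residue_to_range_symbols(res)
# Lossless split: residue == symbol * range_width + offset.
assert [s * 16 + o for s, o in pairs] == res
```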

01 Jan 1995
TL;DR: In this article, a combination of differential pulse-code modulation (DPCM) and Huffman coding is used to compress volume data files, achieving a compression ratio of around 50%.
Abstract: Data in volume form consumes an extraordinary amount of storage space. For efficient storage and transmission of such data, compression algorithms are imperative. However, most volumetric datasets are used in biomedicine and other scientific applications where lossy compression is unacceptable. We present a lossless data-compression algorithm which, being oriented specifically for volume data, achieves greater compression performance than generic compression algorithms that are typically available on modern computer systems. Our algorithm is a combination of differential pulse-code modulation (DPCM) and Huffman coding and results in compression of around 50% for a set of volume data files.
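A minimal sketch of the DPCM stage, assuming a previous-voxel predictor along the innermost axis (the abstract does not fix which neighbours the predictor uses); the residuals would then be Huffman-coded, and a cumulative sum restores the volume exactly.

```python
import numpy as np

def dpcm_residuals(volume):
    """DPCM stage for volume data: predict each voxel from its predecessor
    along the innermost axis and keep the difference. The first voxel of
    each row is stored verbatim, so the stage is lossless."""
    vol = np.asarray(volume, dtype=np.int32)
    res = vol.copy()
    res[..., 1:] = vol[..., 1:] - vol[..., :-1]
    return res

def reconstruct(residuals):
    """Invert the DPCM stage exactly via cumulative summation."""
    return np.cumsum(residuals, axis=-1, dtype=np.int32)

vol = np.random.randint(0, 256, size=(4, 4, 4))
assert np.array_equal(reconstruct(dpcm_residuals(vol)), vol)
```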

Proceedings ArticleDOI
23 Oct 1995
TL;DR: A near-lossless image compression scheme with a mechanism incorporated to minimize the entropy of the quantized prediction error sequence and an algorithm that produces minimum entropy conditioned on the contexts is presented.
Abstract: A near-lossless image compression scheme is presented. It is essentially a DPCM system with a mechanism incorporated to minimize the entropy of the quantized prediction error sequence. With a "near-lossless" criterion of no more than d gray levels of error for each pixel, where d is a small non-negative integer, trellises describing all allowable quantized prediction error sequences are constructed. A set of "contexts" is defined for the conditioning prediction error model and an algorithm that produces minimum entropy conditioned on the contexts is presented. Finally, experimental results are given.
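The plus-or-minus d criterion is commonly met with a uniform quantizer of step 2d+1 on the prediction error, as sketched below. The paper's contribution is the trellis search over all allowable quantized sequences to minimize the conditioned entropy, which this sketch does not attempt.

```python
def near_lossless_quantize(error, d):
    """Quantize a prediction error with step 2d+1, which guarantees that
    the reconstruction error never exceeds d gray levels."""
    step = 2 * d + 1
    if error >= 0:
        return (error + d) // step
    return -((-error + d) // step)

def dequantize(q, d):
    return q * (2 * d + 1)

d = 2
for e in range(-50, 51):
    assert abs(e - dequantize(near_lossless_quantize(e, d), d)) <= d
```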

Patent
16 Jun 1995
TL;DR: In this article, a page description language is separated into two types of instructions: first instructions that generate solid regions on the printed output and second instructions that result in halftoned regions; the second type of instructions generates a contone map, representing images having contone levels and graphics having intermediate levels.
Abstract: Digital input commands defined in a page description language are separated into two types of instructions: first instructions resulting in solid regions on the printed output and second instructions resulting in halftoned regions. The first instructions generate a binary bitmap indicating a high or low density of the solid regions and a binary bitmask indicating whether recorder elements belong to a solid or screened region. The bitmap and bitmask are preferentially compressed by a lossless compression method. The second type of instructions generates a contone map, representing images having contone levels and graphics having intermediate levels. The contone map is preferentially compressed by a lossy compression method, stored on a storage medium and retrieved once the output device must render the rasterized image. The rasterized image is constructed on the fly from the compressed bitmap, bitmask and contone map, which is halftoned before combination with the bitmap. A significant amount of memory can be saved, while keeping up with the speed of the rendering process in the output device and without deteriorating the quality of the reproduction.

Journal ArticleDOI
TL;DR: Experiments on several Landsat-TM images show that using both the spectral and the spatial nature of the remotely sensed data results in significant improvement over spatial decorrelation alone; the techniques yield higher compression ratios and are computationally inexpensive.
Abstract: Presents some new techniques of spectral and spatial decorrelation in lossless data compression of remotely sensed imagery. These techniques provide methods to efficiently compute the optimal band combination and band ordering based on the statistical properties of Landsat-TM data. Experiments on several Landsat-TM images show that using both the spectral and the spatial nature of the remotely sensed data results in significant improvement over spatial decorrelation alone. These techniques result in higher compression ratios and are computationally inexpensive.

Patent
David J. Craft
23 Mar 1995
TL;DR: In this paper, a dual-stage data lossless compressor for optimally compressing bit mapped imaged data is presented, where the first stage compresses data bits representing pixel positions along a scan line of a video image to data units of fixed length.
Abstract: A dual stage data lossless compressor for optimally compressing bit mapped imaged data. The first stage run length compresses data bits representing pixel positions along a scan line of a video image to data units of fixed length. The units alternate to represent runs of alternate video image data values. The run length compressed data units are subject to second stage compression using a sliding window Lempel-Ziv compressor. The output from the Lempel-Ziv compressor includes raw tokens of fixed length and compressed tokens of varying lengths. The combination of a run length precompressor and a sliding window Lempel-Ziv post compressor, in which the run length compressor output is a succession of data units of fixed length, provides an optimum match between the capabilities and idiosyncracies of the two compressors, and related decompressors, when processing business form data images. Furthermore, the asymmetric simplicity of Lempel-Ziv sliding window decompression and run length decompression simplicity leads to a decompression speed compatible with contemporary applications.
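A sketch of the first stage only, assuming one-byte (0-255) run units and a white-first convention; runs longer than a unit are split by inserting a zero-length run of the opposite colour so the alternation is preserved. The resulting byte stream is what a sliding-window Lempel-Ziv second stage would then compress. Unit width and conventions are illustrative choices, not details taken from the patent.

```python
def run_lengths_fixed(bits, unit_max=255):
    """Run-length code a bilevel scan line into fixed-size units that
    alternate colour, starting with the run of 0s (white). Runs longer than
    unit_max are split by inserting a zero-length run of the other colour,
    so every unit stays one byte wide for the byte-oriented second stage."""
    runs = []
    current, length = 0, 0
    for b in bits:
        if b == current:
            length += 1
        else:
            runs.append(length)
            current, length = b, 1
    runs.append(length)
    units = []
    for r in runs:
        while r > unit_max:
            units.extend([unit_max, 0])  # max-length run, then empty run of other colour
            r -= unit_max
        units.append(r)
    return units

print(run_lengths_fixed([0] * 300 + [1] * 5 + [0] * 10))
```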

Book ChapterDOI
28 May 1995
TL;DR: It is shown that searching for the optimal wavelet does not always offer a substantial improvement in coding performance over “good” standard wavelets, and some guidelines are proposed for determining the need to search for the “optimal” wavelet based on the statistics of the image to be coded.
Abstract: In wavelet-based image coding the choice of wavelets is crucial and determines the coding performance. Current techniques use computationally intensive search procedures to find the optimal basis (type, order and tree). In this paper, we show that searching for the optimal wavelet does not always offer a substantial improvement in coding performance over “good” standard wavelets. We propose some guidelines for determining the need to search for the “optimal” wavelet based on the statistics of the image to be coded. In addition, we propose an adaptive wavelet packet decomposition algorithm based on the local transform gain of each stage of the decomposition. The proposed algorithm provides good coding performance at a substantially reduced complexity.

Proceedings ArticleDOI
01 Jan 1995
TL;DR: A fast lossy Internet image transmission scheme (FLIIT) for compressed images which eliminates retransmission delays by strategically shielding important portions of the image with redundancy bits is introduced.
Abstract: Images are usually transmitted across the Internet using a lossless protocol such as TCP/IP. Lossless protocols require retransmission of lost packets, which substantially increases transmission time. We introduce a fast lossy Internet image transmission scheme (FLIIT) for compressed images which eliminates retransmission delays by strategically shielding important portions of the image with redundancy bits. We describe a joint source and channel coding algorithm for images which minimizes the expected distortion of transmitted images. The algorithm efficiently allocates quantizer resolution bits and redundancy bits to control quantization errors and expected packet transmission losses. We describe an implementation of this algorithm and compare its performance on the Internet to lossless TCP/IP transmission of the same images. In our experiments, the FLIIT scheme transmitted images five times faster than TCP/IP during the day, with resulting images of equivalent quality.

Journal ArticleDOI
TL;DR: A lossless image compression scheme that exploits redundancy both at local and global levels in order to obtain maximum compression efficiency is proposed.
Abstract: The redundancy in digital image representation can be classified into two categories: local and global. In this paper, we present an analysis of two image characteristics that give rise to local and global redundancy in image representation. Based on this study, we propose a lossless image compression scheme that exploits redundancy both at local and global levels in order to obtain maximum compression efficiency. The proposed algorithm segments the image into variable size blocks and encodes them depending on the characteristics exhibited by the pixels within the block. The proposed algorithm is implemented in software and its performance is better than other lossless compression schemes such as the Huffman, the arithmetic, the Lempel-Ziv and the JPEG.

Proceedings ArticleDOI
28 Mar 1995
TL;DR: Practical implementations for using non-greedy parsing in LZ77 and LZ78 compression are explored and some experimental measurements are presented.
Abstract: Most practical compression methods in the LZ77 and LZ78 families parse their input using a greedy heuristic. However, the popular gzip compression program demonstrates that modest but significant gains in compression performance are possible if non-greedy parsing is used. Practical implementations of non-greedy parsing in LZ77 and LZ78 compression are explored and some experimental measurements are presented.
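The gzip form of non-greedy parsing is one-step "lazy" matching: before committing to a match at position i, peek at the match starting at i+1 and, if it is longer, emit a literal instead. The sketch below shows only this parsing decision; find_longest_match is a hypothetical helper standing in for any LZ77 sliding-window matcher.

```python
def lazy_parse(data, find_longest_match, min_match=3):
    """One-step-lazy LZ parsing. `find_longest_match(data, i)` is assumed to
    return (length, distance) for the longest window match starting at i."""
    i, tokens = 0, []
    while i < len(data):
        length, dist = find_longest_match(data, i)
        if length >= min_match:
            if i + 1 < len(data):
                next_len, _ = find_longest_match(data, i + 1)
            else:
                next_len = 0
            if next_len > length:
                tokens.append(("literal", data[i]))  # defer: the next match is better
                i += 1
                continue
            tokens.append(("match", dist, length))
            i += length
        else:
            tokens.append(("literal", data[i]))
            i += 1
    return tokens
```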

Proceedings ArticleDOI
06 Mar 1995
TL;DR: This paper designs and implements a novel database compression technique based on vector quantization (VQ), and shows how one may use a lossless version of vector quantization to reduce database space storage requirements and improve disk I/O bandwidth.
Abstract: Data compression is one way to alleviate the I/O bottleneck problem faced by I/O-intensive applications such as databases. However, this approach is not widely used because of the lack of suitable database compression techniques. In this paper, we design and implement a novel database compression technique based on vector quantization (VQ). VQ is a data compression technique with wide applicability in speech and image coding, but it is not directly suitable for databases because it is lossy. We show how one may use a lossless version of vector quantization to reduce database space storage requirements and improve disk I/O bandwidth.

Journal ArticleDOI
TL;DR: Experiments were conducted to compare these structures in terms of their first order entropy and RMS errors in the reconstruction process and results indicate that the mean-sampling with circular-difference method yields the lowest entropy, comparable to that with 1-D lossless DPCM predictive coding.
Abstract: Algorithms for constructing differential images with a hierarchical data structure are presented. The data structures are simple, efficient, and ideal for viewing images in progressive transmission using lossless compression. Unlike conventional pyramidal structures, the total number of nodes required to build the structure is the same as the number of pixels in the image, while its hierarchy is preserved. These structures are constructed using subsampling or mean-sampling methods for predictors with block sizes of 2x2 or 3x3. Experiments were conducted to compare these structures in terms of their first-order entropy and RMS errors in the reconstruction process. Results indicate that the mean-sampling with circular-difference method yields the lowest entropy, comparable to that of 1-D lossless DPCM predictive coding. Lastly, hardware for the efficient construction and access of the hierarchical structures is discussed and evaluated.

Journal ArticleDOI
A. Manduca
TL;DR: Software modules that perform wavelet-based compression on both 2-D and 3-D gray scale images and extensions of the current approach to still more efficient compression schemes are discussed.
Abstract: We have developed software modules (both stand-alone and in the biomedical image analysis and display package ANALYZE) that perform wavelet-based compression on both 2-D and 3-D gray scale images. We present examples of such compression on a variety of medical images, and comparisons with JPEG and other compression schemes. We also show examples of the improvements gained by true 3-D compression of a 3-D image (as opposed to 2-D compression of each slice), and discuss issues such as the treatment of edge effects and human visual system response in the context of a wavelet-based approach. Finally, we discuss extensions of the current approach to still more efficient compression schemes.

Proceedings ArticleDOI
27 Apr 1995
TL;DR: A 3D wavelet compression algorithm for medical images that achieves a good reconstruction quality at high compression ratios and a parallel version of the 3D compression algorithm in a local area network environment.
Abstract: We have developed a 3D wavelet compression algorithm for medical images that achieves good reconstruction quality at high compression ratios. The algorithm applies a 3D wavelet transformation to a volume image set, followed by scalar quantization and entropy coding of the wavelet coefficients. We also implemented a parallel version of the 3D compression algorithm in a local area network environment. Multiple processors on different workstations on the network are utilized to speed up the compression or decompression process. The 3D wavelet transform has been applied to 3D MR volume images and the results are compared with those obtained using a 2D wavelet compression. Compression ratios achieved with the 3D algorithm are 40-90% higher than those of the 2D compression algorithm. The results of applying parallel computing to the 3D compression algorithm indicate that the efficiency of the parallel algorithm ranges from 80-90%.

Patent
13 Feb 1995
TL;DR: In this paper, a method for preprocessing a binary file for data compression under a dictionary-based data compression algorithm takes advantage of redundancy in a two-dimensional binary image, i.e. a representation based on pixels of horizontal lines, to achieve an improvement of compression ratio.
Abstract: A method for preprocessing a binary file for data compression under a dictionary-based data compression algorithm takes advantage of redundancy in a two-dimensional binary image. The method rearranges a linear representation of a binary image, i.e. a representation based on pixels of horizontal lines, to a two-dimensional representation, i.e. a representation based on a sequence of adjoining picture areas, to achieve an improvement of compression ratio. The present invention is applicable to dictionary-based data compression methods, such as LZW, LZ77 and LZ78.
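A sketch of the rearrangement idea: walk the image tile by tile instead of line by line, so that vertically adjacent (and highly correlated) pixels end up near each other in the stream fed to LZW, LZ77 or LZ78. The tile size and the tile-scan order are illustrative choices, not details taken from the patent.

```python
def tile_order(pixels, width, height, tile=8):
    """Rearrange a row-major (raster) binary image into a sequence of
    tile-by-tile pixel blocks. `pixels` is a flat list of length
    width * height; the output has the same pixels in tile order."""
    out = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    out.append(pixels[y * width + x])
    return out

# A 16x16 all-zero page with a vertical black bar: tile order groups the
# bar's pixels together, which dictionary coders exploit.
page = [1 if x == 5 else 0 for y in range(16) for x in range(16)]
print(tile_order(page, 16, 16)[:64])
```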

Patent
13 Jan 1995
TL;DR: In this paper, a method and apparatus for performing video image compression and decompression is described, in which the wavelets applied to sample locations at the boundaries of image intervals are different from those applied to the sample locations within the intervals.
Abstract: A method and apparatus for performing video image compression and decompression are disclosed. The video image compression is performed using boundary-spline-wavelet decomposition, in which the wavelets applied to sample locations at the boundaries of image intervals are different from those applied to sample locations within the intervals. The decomposition is performed first for horizontal rows of the image data, and then in a vertical direction upon the results of the first decomposition. Quantization serves to locally round off the higher frequency components of the decomposition, and the decomposition is repeated until the desired compression ratio is obtained. Lossless compression may then be applied to the decomposed image data, and the compressed image is transmitted or stored, depending upon the application. Decompression is effected by lossless decompression of the received data, followed by reconstruction of the image using boundary-spline-wavelets, repeated as necessary to fully reconstruct the image. The reconstructed image can then be displayed on a conventional video display.

Journal Article
TL;DR: The JPEG method itself and its suitability for photogrammetric work are studied, with special attention being paid to the geometric degradation of digital images due to the compression process.
Abstract: Image compression is a necessity for the utilization of large digital images, e.g., digitized aerial color images. The JPEG still-picture compression algorithm is one alternative for carrying out the image compression task. The JPEG method itself and its suitability for photogrammetric work are studied, with special attention being paid to the geometric degradation of digital images due to the compression process. In our experience, the JPEG algorithm seems to be a good choice for image compression. For color images, it gives a compression ratio of about 1:10 without considerable degradation in the visual or geometric quality of the image.

Journal ArticleDOI
TL;DR: This paper presents a prediction scheme that partitions an image into blocks and for each block selects a scan from a codebook of scans such that the resulting prediction error is minimized and results compare very favorably with the JPEG lossless compression standard.
Abstract: When applying predictive compression on image data there is an implicit assumption that the image is scanned in a particular order. Clearly, depending on the image, a different scanning order may give better compression. In earlier work, we had defined the notion of a prediction tree (or scan) which defines a scanning order for an image. An image can be decorrelated by taking differences among adjacent pixels along any traversal of a scan. Given an image, an optimal scan that minimizes the absolute sum of the differences encountered can be computed efficiently. However, the number of bits required to encode an optimal scan turns out to be prohibitive for most applications. In this paper we present a prediction scheme that partitions an image into blocks and for each block selects a scan from a codebook of scans such that the resulting prediction error is minimized. Techniques based on clustering are developed for the design of a codebook of scans. Design of both semiadaptive and adaptive codebooks is considered. We also combine the new prediction scheme with an effective error modeling scheme. Implementation results are then given, which compare very favorably with the JPEG lossless compression standard.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: The proposed algorithm works with a completely different philosophy, summarized in the following four key points: a perfect-reconstruction hierarchical morphological subband decomposition yielding only integer coefficients, prediction of the absence of significant information across scales using zerotrees of wavelet coefficients, entropy-coded successive-approximation quantization, and lossless data compression via adaptive arithmetic coding.
Abstract: In this paper the problem of progressive lossless image coding is addressed. Many applications require lossless compression of the image data. The possibility of progressive decoding of the bitstream adds a new functionality for those applications involving data browsing. In practice, the proposed scheme can be of intensive use when accessing large databases of images requiring lossless compression (especially for medical applications). The international standard JPEG allows a lossless mode. It is based on an entropy reduction of the data using various kinds of estimators followed by source coding. The proposed algorithm works with a completely different philosophy, summarized in the following four key points: 1) a perfect-reconstruction hierarchical morphological subband decomposition yielding only integer coefficients, 2) prediction of the absence of significant information across scales using zerotrees of wavelet coefficients, 3) entropy-coded successive-approximation quantization, and 4) lossless data compression via adaptive arithmetic coding. This approach produces a completely embedded bitstream. Thus, it is possible to decode only part of the bitstream to reconstruct an approximation of the original image.