
Showing papers on "Lossless compression published in 2000"


Journal ArticleDOI
TL;DR: LOCO-I, as discussed by the authors, is a low-complexity projection of the universal context modeling paradigm that matches its modeling unit to a simple coding unit; it is based on a simple fixed context model, which approaches the capability of more complex universal techniques for capturing high-order dependencies.
Abstract: LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar to or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.
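Two of the standard's core building blocks, the MED (median edge detection) predictor and Golomb-Rice coding of mapped prediction residuals, can be sketched directly; the toy below omits the context modeling, bias cancellation, alphabet extension, and run mode that the paper describes, and all function names are mine.

```python
# Minimal sketch of two LOCO-I / JPEG-LS building blocks: the MED predictor
# and Golomb-Rice coding of a mapped residual. Context modeling, bias
# correction, near-lossless mode, and run mode are deliberately omitted.

def med_predict(a, b, c):
    """Median edge detector; a = left, b = above, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def map_residual(e):
    """Fold a signed prediction error onto the non-negative integers."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice(n, k):
    """Golomb-Rice code of non-negative n: unary quotient plus k remainder bits (k >= 1)."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Example: encode one pixel x given its causal neighbors.
a, b, c, x = 100, 104, 101, 108
pred = med_predict(a, b, c)                 # 103 for this neighborhood
code = golomb_rice(map_residual(x - pred), k=2)
print(pred, code)                           # -> 103 11010
```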

1,668 citations


Journal ArticleDOI
TL;DR: It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.
Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed, the JPEG2000. It is not only intended to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards can either not address efficiently or in many cases cannot address at all. Lossless and lossy compression, embedded lossy to lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit-errors and region-of-interest coding, are some representative features. It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.

1,485 citations


Proceedings ArticleDOI
01 Jul 2000
TL;DR: A new progressive compression scheme for arbitrary topology, highly detailed and densely sampled meshes arising from geometry scanning that, coupled with semi-regular wavelet transforms, zerotree coding, and subdivision-based reconstruction, sees improvements in error by a factor of four compared to other progressive coding schemes.
Abstract: We propose a new progressive compression scheme for arbitrary topology, highly detailed and densely sampled meshes arising from geometry scanning. We observe that meshes consist of three distinct components: geometry, parameter, and connectivity information. The latter two do not contribute to the reduction of error in a compression setting. Using semi-regular meshes, parameter and connectivity information can be virtually eliminated. Coupled with semi-regular wavelet transforms, zerotree coding, and subdivision-based reconstruction, we see improvements in error by a factor of four (12 dB) compared to other progressive coding schemes.

630 citations


Journal ArticleDOI
TL;DR: At low bit rates, reversible integer-to-integer and conventional versions of transforms were found to often yield results of comparable quality, with the best choice for a given application depending on the relative importance of the preceding criteria.
Abstract: In the context of image coding, a number of reversible integer-to-integer wavelet transforms are compared on the basis of their lossy compression performance, lossless compression performance, and computational complexity. Of the transforms considered, several were found to perform particularly well, with the best choice for a given application depending on the relative importance of the preceding criteria. Reversible integer-to-integer versions of numerous transforms are also compared to their conventional (i.e., nonreversible real-to-real) counterparts for lossy compression. At low bit rates, reversible integer-to-integer and conventional versions of transforms were found to often yield results of comparable quality. Factors affecting the compression performance of reversible integer-to-integer wavelet transforms are also presented, supported by both experimental data and theoretical arguments.
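As one concrete member of the transform family compared here, the S transform (a Haar-like reversible integer-to-integer wavelet) is easy to sketch with floor divisions; longer filters such as the 5/3 follow the same lifting idea. The code assumes an even-length integer input and only illustrates reversibility, it is not the paper's implementation.

```python
# Minimal sketch of a reversible integer-to-integer transform (the S transform):
# integer rounding in the forward pass is undone exactly by the inverse.

def s_transform(x):
    """Forward S transform of an even-length list of integers."""
    s = [(a + b) // 2 for a, b in zip(x[0::2], x[1::2])]   # lowpass: rounded averages
    d = [a - b for a, b in zip(x[0::2], x[1::2])]          # highpass: differences
    return s, d

def inverse_s_transform(s, d):
    """Exact inverse of s_transform."""
    x = []
    for si, di in zip(s, d):
        a = si + (di + 1) // 2
        x += [a, a - di]
    return x

x = [17, 20, 21, 22, 200, 3, 4, 4]
s, d = s_transform(x)
assert inverse_s_transform(s, d) == x       # lossless round trip
```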

410 citations


Journal ArticleDOI
TL;DR: A method for radical linear compression of data sets where the data are dependent on some number M of parameters is presented, and it is shown that, if the noise in the data is independent of the parameters, one can form M linear combinations of the data which contain as much information about all the parameters as the entire data set.
Abstract: We present a method for radical linear compression of data sets where the data are dependent on some number M of parameters. We show that, if the noise in the data is independent of the parameters, we can form M linear combinations of the data which contain as much information about all the parameters as the entire data set, in the sense that the Fisher information matrices are identical; i.e. the method is lossless. We explore how these compressed numbers fare when the noise is dependent on the parameters, and show that the method, though not precisely lossless, increases errors by a very modest factor. The method is general, but we illustrate it with a problem for which it is well-suited: galaxy spectra, the data for which typically consist of ~10^3 fluxes, and the properties of which are set by a handful of parameters such as age, and a parametrized star formation history. The spectra are reduced to a small number of data, which are connected to the physical processes entering the problem. This data compression offers the possibility of a large increase in the speed of determining physical parameters. This is an important consideration as data sets of galaxy spectra reach 10^6 in size, and the complexity of model spectra increases. In addition to this practical advantage, the compressed data may offer a classification scheme for galaxy spectra which is based rather directly on physical processes.
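A numerical sketch of the single-parameter case under the paper's key assumption (noise covariance C independent of the parameter): weighting the data by C^{-1} dμ/dθ yields a single number carrying the full Fisher information. The model, covariance, and variable names below are illustrative stand-ins, not the paper's code.

```python
# Lossless linear compression for one parameter, assuming parameter-independent
# Gaussian noise: y = b.x with b proportional to C^{-1} dmu/dtheta has the same
# Fisher information about theta as the full data vector x.
import numpy as np

def compress_weights(dmu, C):
    """Weight vector for a single parameter (normalisation is conventional)."""
    b = np.linalg.solve(C, dmu)
    return b / np.sqrt(dmu @ b)

rng = np.random.default_rng(0)
n = 200                                   # stand-in for ~10^3 spectral fluxes
dmu = rng.normal(size=n)                  # derivative of the model mean w.r.t. theta
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)               # positive-definite noise covariance

b = compress_weights(dmu, C)
F_full = dmu @ np.linalg.solve(C, dmu)            # Fisher info of the whole data set
F_compressed = (b @ dmu) ** 2 / (b @ C @ b)       # Fisher info of the single number y = b.x
assert np.isclose(F_full, F_compressed)           # no information about theta is lost
```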

225 citations


Journal ArticleDOI
TL;DR: Two light-field compression schemes are presented: the first coder is based on video-compression techniques that have been modified to code the four-dimensional light-field data structure efficiently, while the second relies on disparity-compensated image prediction, establishing a hierarchical structure among the light-field images.
Abstract: Two light-field compression schemes are presented. The codecs are compared with regard to compression efficiency and rendering performance. The first proposed coder is based on video-compression techniques that have been modified to code the four-dimensional light-field data structure efficiently. The second coder relies entirely on disparity-compensated image prediction, establishing a hierarchical structure among the light-field images. The coding performance of both schemes is evaluated using publicly available light fields of synthetic, as well as real-world, scenes. Compression ratios vary between 100:1 and 2000:1, depending on the reconstruction quality and light-field scene characteristics.

223 citations


Journal ArticleDOI
TL;DR: It is demonstrated that CALIC's techniques of context modeling of DPCM errors lend themselves easily to modeling of higher-order interband correlations that cannot be exploited by simple interband linear predictors alone.
Abstract: This paper proposes an interband version of CALIC (context-based, adaptive, lossless image codec) which represents one of the best performing, practical and general purpose lossless image coding techniques known today. Interband coding techniques are needed for effective compression of multispectral images like color images and remotely sensed images. It is demonstrated that CALIC's techniques of context modeling of DPCM errors lend themselves easily to modeling of higher-order interband correlations that cannot be exploited by simple interband linear predictors alone. The proposed interband CALIC exploits both interband and intraband statistical redundancies, and obtains significant compression gains over its intraband counterpart. On some types of multispectral images, interband CALIC can lead to a reduction in bit rate of more than 20% as compared to intraband CALIC. Interband CALIC only incurs a modest increase in computational cost as compared to intraband CALIC.

215 citations


Proceedings Article
01 Sep 2000
TL;DR: In this paper, the performance of JPEG 2000 is evaluated against JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG, with the comparison concentrating on compression efficiency; the principles behind each algorithm are also briefly described.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper puts into perspective the performance of these standards by evaluating JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG. The study concentrates on compression efficiency, although complexity and set of supported functionalities are also evaluated. Lossless compression efficiency as well as the lossy rate-distortion behavior is discussed. The principles behind each algorithm are briefly described and an outlook on the future of image coding is given. The results show that the choice of the “best” standard depends strongly on the application at hand.

197 citations


Patent
14 Jul 2000
TL;DR: MemoryF/X as discussed by the authors is a memory module including parallel data compression and decompression engines for improved performance, which includes multiple novel techniques such as: 1) parallel lossless compression/decompression; 2) selectable compression modes such as lossless, lossy or no compression; 3) priority compression mode; 4) data cache techniques; 5) variable compression block sizes; 6) compression reordering; and 7) unique address translation, attribute, and address caches.
Abstract: A memory module including parallel data compression and decompression engines for improved performance. The memory module includes MemoryF/X Technology. To improve latency and reduce performance degradations normally associated with compression and decompression techniques, the MemoryF/X Technology encompasses multiple novel techniques such as: 1) parallel lossless compression/decompression; 2) selectable compression modes such as lossless, lossy or no compression; 3) priority compression mode; 4) data cache techniques; 5) variable compression block sizes; 6) compression reordering; and 7) unique address translation, attribute, and address caches. The parallel compression and decompression algorithm allows high-speed parallel compression and high-speed parallel decompression operation. The memory module-integrated data compression and decompression capabilities remove system bottlenecks and increase performance. This allows lower cost systems due to smaller data storage, reduced bandwidth requirements, reduced power and noise.

159 citations


Proceedings ArticleDOI
01 Oct 2000
TL;DR: This work describes a compression algorithm whose principle is completely different: the coding order of the vertices is used to compress their coordinates, and then the topology of the mesh is reconstructed from the vertices.
Abstract: The compression of geometric structures is a relatively new field of data compression. Since about 1995, several articles have dealt with the coding of meshes, most of them using the following approach: the vertices of the mesh are coded in an order that partially contains the topology of the mesh. At the same time, some simple rules attempt to predict the position of each vertex from the positions of its neighbors that have been previously coded. We describe a compression algorithm whose principle is completely different: the coding order of the vertices is used to compress their coordinates, and then the topology of the mesh is reconstructed from the vertices. This algorithm achieves compression ratios that are slightly better than those of the currently available algorithms, and moreover, it allows progressive and interactive transmission of the meshes.

156 citations


Journal ArticleDOI
TL;DR: It is proved that for some nonstationary sources, the proposed context-dependent algorithms can achieve better expected redundancies than any existing CFG-based codes, including the Lempel-Ziv (1978) algorithm, the multilevel pattern matching algorithm, and the context-free algorithms in Part I of this series of papers.
Abstract: For pt. I see ibid., vol.46, p.755-88 (2000). The concept of context-free grammar (CFG)-based coding is extended to the case of countable-context models, yielding context-dependent grammar (CDG)-based coding. Given a countable-context model, a greedy CDG transform is proposed. Based on this greedy CDG transform, two universal lossless data compression algorithms, an improved sequential context-dependent algorithm and a hierarchical context-dependent algorithm, are then developed. It is shown that these algorithms are all universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source with a finite alphabet. Moreover, it is proved that these algorithms' worst case redundancies among all individual sequences of length n from a finite alphabet are upper-bounded by d log log n / log n, as long as the number of distinct contexts grows with the sequence length n in the order of O(n^α), where 0 < α < 1 and d are positive constants. It is further shown that for some nonstationary sources, the proposed context-dependent algorithms can achieve better expected redundancies than any existing CFG-based codes, including the Lempel-Ziv (1978) algorithm, the multilevel pattern matching algorithm, and the context-free algorithms in Part I of this series of papers.
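To convey the flavor of grammar-based coding (though not the context-dependent construction proposed here), the toy below performs a Re-Pair-style greedy grammar transform: the most frequent adjacent pair of symbols is repeatedly replaced by a fresh nonterminal, and the resulting start string plus production rules would then be handed to an entropy coder.

```python
# Toy grammar transform (Re-Pair style): repeatedly replace the most frequent
# adjacent pair of symbols with a new nonterminal. Illustrative only; the
# paper's CDG transform additionally conditions on a countable context.
from collections import Counter

def grammar_transform(seq):
    rules, next_id = {}, 256                 # nonterminal ids start above byte values
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        rules[next_id] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_id)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_id = out, next_id + 1
    return seq, rules                        # start string + production rules

start, rules = grammar_transform(b"abababcabab")
print(start, rules)                          # short start string, two rules
```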

Journal ArticleDOI
TL;DR: A perceptual-based image coder is presented that discriminates between image components based on their perceptual relevance, achieving increased performance in terms of quality and bit rate; it is based on a locally adaptive perceptual quantization scheme for compressing the visual data.
Abstract: Most existing efforts in image and video compression have focused on developing methods to minimize not perceptual but rather mathematically tractable, easy to measure, distortion metrics. While nonperceptual distortion measures were found to be reasonably reliable for higher bit rates (high-quality applications), they do not correlate well with the perceived quality at lower bit rates and they fail to guarantee preservation of important perceptual qualities in the reconstructed images despite the potential for a good signal-to-noise ratio (SNR). This paper presents a perceptual-based image coder, which discriminates between image components based on their perceptual relevance for achieving increased performance in terms of quality and bit rate. The new coder is based on a locally adaptive perceptual quantization scheme for compressing the visual data. Our strategy is to exploit human visual masking properties by deriving visual masking thresholds in a locally adaptive fashion based on a subband decomposition. The derived masking thresholds are used in controlling the quantization stage by adapting the quantizer reconstruction levels to the local amount of masking present at the level of each subband transform coefficient. Compared to the existing non-locally adaptive perceptual quantization methods, the new locally adaptive algorithm exhibits superior performance and does not require additional side information. This is accomplished by estimating the amount of available masking from the already quantized data and linear prediction of the coefficient under consideration. By virtue of the local adaptation, the proposed quantization scheme is able to remove a large amount of perceptually redundant information. Since the algorithm does not require additional side information, it yields a low entropy representation of the image and is well suited for perceptually lossless image compression.
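A hedged sketch of the locally adaptive idea follows: each coefficient's quantizer step is scaled by a masking estimate computed only from already reconstructed (decoder-visible) values, so the decoder can repeat the estimate and no side information is needed. The masking model below (local energy of the causal neighbourhood) is a placeholder, not the paper's visual masking model.

```python
# Locally adaptive quantisation driven by decoder-reproducible masking
# estimates. The masking function is a stand-in for a real HVS model.
import numpy as np

def quantize_subband(coeffs, base_step=4.0, alpha=0.05):
    q = np.zeros(coeffs.shape, dtype=int)
    rec = np.zeros_like(coeffs)                      # values the decoder will also have
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            causal = rec[max(0, i - 1):i + 1, max(0, j - 1):j + 1]   # current slot still 0
            masking = 1.0 + alpha * np.sqrt(np.mean(causal ** 2))
            step = base_step * masking               # coarser where masking is stronger
            q[i, j] = int(np.round(coeffs[i, j] / step))
            rec[i, j] = q[i, j] * step               # decoder reproduces this exactly
    return q

coeffs = np.random.default_rng(1).normal(scale=20, size=(8, 8))
print(quantize_subband(coeffs))
```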

Patent
26 May 2000
TL;DR: In this paper, a method for compressing input data comprising a plurality of data blocks comprises the steps of: detecting if the input data comprises a run-length sequence of data blocks; outputting an encoded run-length sequence if a run-length sequence is detected; and maintaining a dictionary comprising code words, wherein each code word in the dictionary is associated with a unique data block string.
Abstract: Systems and methods for providing lossless data compression and decompression are disclosed which exploit various characteristics of run-length encoding, parametric dictionary encoding, and bit packing to comprise an encoding/decoding process having an efficiency that is suitable for use in real-time lossless data compression and decompression applications. In one aspect, a method for compressing input data comprising a plurality of data blocks comprises the steps of: detecting if the input data comprises a run-length sequence of data blocks; outputting an encoded run-length sequence, if a run-length sequence of data blocks is detected; maintaining a dictionary comprising a plurality of code words, wherein each code word in the dictionary is associated with a unique data block string; building a data block string from at least one data block in the input data that is not part of a run-length sequence; searching for a code word in the dictionary having a unique data block string associated therewith that matches the built data block string; and outputting the code word representing the built data block string.
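As a purely illustrative sketch (not the patented codec), the toy below combines the two named ingredients: runs of identical blocks are emitted as run-length tokens, and everything else goes through an LZW-style dictionary of previously seen block strings.

```python
# Toy run-length + dictionary coder over bytes; the output tokens would
# normally be bit-packed, which is omitted here.

def compress(data, min_run=4):
    dictionary = {bytes([b]): b for b in range(256)}     # seeded with single bytes
    out, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        if run >= min_run:                               # run-length path
            out.append(("RLE", data[i], run))
            i += run
            continue
        s = bytes([data[i]])                             # dictionary (LZW-style) path
        while i + len(s) < len(data) and s + bytes([data[i + len(s)]]) in dictionary:
            s += bytes([data[i + len(s)]])
        out.append(("DICT", dictionary[s]))
        if i + len(s) < len(data):                       # grow the dictionary
            dictionary[s + bytes([data[i + len(s)]])] = len(dictionary)
        i += len(s)
    return out

print(compress(b"aaaaaaabcbcbcbc"))
```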

Journal ArticleDOI
TL;DR: An O(1/log n) maximal redundancy/sample upper bound is established for the multilevel pattern matching code with respect to any class of finite state sources of uniformly bounded complexity in processing a finite-alphabet data string of length n.
Abstract: A universal lossless data compression code called the multilevel pattern matching code (MPM code) is introduced. In processing a finite-alphabet data string of length n, the MPM code operates at O(log log n) levels sequentially. At each level, the MPM code detects matching patterns in the input data string (substrings of the data appearing in two or more nonoverlapping positions). The matching patterns detected at each level are of a fixed length which decreases by a constant factor from level to level, until this fixed length becomes one at the final level. The MPM code represents information about the matching patterns at each level as a string of tokens, with each token string encoded by an arithmetic encoder. From the concatenated encoded token strings, the decoder can reconstruct the data string via several rounds of parallel substitutions. An O(1/log n) maximal redundancy/sample upper bound is established for the MPM code with respect to any class of finite state sources of uniformly bounded complexity. We also show that the MPM code is of linear complexity in terms of time and space requirements. The results of some MPM code compression experiments are reported.

Proceedings ArticleDOI
28 Dec 2000
TL;DR: Evaluating JPEG 2000 versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG, shows that the choice of the “best” standard depends strongly on the application at hand.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper compares the set of features offered by JPEG 2000, and how well they are fulfilled, versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG and more recent PNG. The study concentrates on compression efficiency and functionality set, while addressing other aspects such as complexity. Lossless compression efficiency as well as the fixed and progressive lossy rate-distortion behaviors are evaluated. Robustness to transmission errors, Region of Interest coding and complexity are also discussed. The principles behind each algorithm are briefly described. The results show that the choice of the "best" standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.

Journal ArticleDOI
01 Nov 2000
TL;DR: In this paper, the authors survey some of the recent advances in lossless compression of continuous-tone images and discuss the modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design.
Abstract: In this paper, we survey some of the recent advances in lossless compression of continuous-tone images. The modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design, are discussed in a unified manner. The algorithms are described and experimentally compared.

Proceedings Article
01 Sep 2000
TL;DR: A new algorithm able to achieve a one-to-one mapping of a digital image onto another image that takes into account the limited resolution and the limited quantization of the pixels is presented.
Abstract: This paper presents a new algorithm able to achieve a one-to-one mapping of a digital image onto another image. This mapping takes into account the limited resolution and the limited quantization of the pixels. The mapping is achieved in a multiresolution way. Performing small modifications on the statistics of the details makes it possible to build a lossless, i.e. reversible, watermarking authentication procedure.

PatentDOI
TL;DR: In this article, a lossless encoder and decoder are provided for transmitting a multichannel signal on a medium such as DVD-Audio, where the encoder accepts a downmix specification and splits the encoded stream into two substreams.
Abstract: A lossless encoder and decoder are provided for transmitting a multichannel signal on a medium such as DVD-Audio. The encoder additionally accepts a downmix specification and splits the encoded stream into two substreams, such that a two-channel decoder of meagre computational power can implement the downmix specification by decoding one substream, while a multichannel decoder can decode the original multichannel signal losslessly using both substreams. Further features provide for efficient implementation on 24-bit processors, for confirmation of lossless reproduction to the user, and for benign behaviour in the case of downmix specifications that result in overload. The principle is also extended to mixed-rate signals, where for example some input channels are sampled at 48 kHz and some are sampled at 96 kHz.

Proceedings ArticleDOI
28 Mar 2000
TL;DR: A lossless delta-compression algorithm (a variant of predictive coding) is proposed that predicts the next point from previous points using higher-order polynomial extrapolation and, in contrast to traditional predictive coding, takes into account varying (non-equidistant) domain steps.
Abstract: Summary form only given. We propose a lossless algorithm of delta compression (a variant of predictive coding) that attempts to predict the next point from previous points using higher-order polynomial extrapolation. In contrast to traditional predictive coding, our method takes into account varying (non-equidistant) domain (typically, time) steps. To save space and guarantee lossless compression, the actual and predicted values are converted to 64-bit integers. The residual (difference between actual and predicted values) is computed as a difference of integers. The unnecessary bits of the residual are truncated, e.g., 1111110101 is replaced by 10101. The length of the bit sequence (5₁₀ = (000101)₂) is prepended.
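A sketch of the idea follows: the next sample is predicted by polynomial extrapolation over the (possibly non-equidistant) time stamps of the previous points, values are mapped to integers, and only the residual together with its bit length is kept. The fixed-point scale and the bit-length bookkeeping below are simplifying assumptions, not the paper's exact integer mapping.

```python
# Delta compression by polynomial extrapolation over non-equidistant time steps.
# SCALE is an illustrative fixed-point factor standing in for the paper's
# conversion to 64-bit integers.

SCALE = 1 << 20

def lagrange_extrapolate(ts, ys, t):
    """Evaluate the interpolating polynomial through (ts, ys) at time t."""
    pred = 0.0
    for i, (ti, yi) in enumerate(zip(ts, ys)):
        w = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                w *= (t - tj) / (ti - tj)
        pred += w * yi
    return pred

def encode_sample(ts, ys, t, y, order=3):
    """Return (bit length, integer residual) for the new sample (t, y)."""
    pred = lagrange_extrapolate(ts[-order:], ys[-order:], t)
    residual = round(y * SCALE) - round(pred * SCALE)    # difference of integers
    bits = max(residual.bit_length(), 1) + 1             # +1 bit for the sign
    return bits, residual                                # "length prepended" to residual

ts, ys = [0.0, 0.7, 1.5], [2.0, 2.9, 4.1]
print(encode_sample(ts, ys, t=2.6, y=5.6))
```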

Journal ArticleDOI
TL;DR: A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented, which efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream.
Abstract: A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

Journal ArticleDOI
TL;DR: The resulting one-dimensional representation of the image has improved autocorrelation compared with universal scans such as the Peano-Hilbert space filling curve, and the potential of the improved autocorrelation of context-based space filling curves for image and video lossless compression is discussed.
Abstract: A context-based scanning technique for images is presented. An image is scanned along a context-based space filling curve that is computed so as to exploit inherent coherence in the image. The resulting one-dimensional representation of the image has improved autocorrelation compared with universal scans such as the Peano-Hilbert space filling curve. An efficient algorithm for computing context-based space filling curves is presented. We also discuss the potential of improved autocorrelation of context-based space filling curves for image and video lossless compression.

Proceedings ArticleDOI
01 Jan 2000
TL;DR: As the results show, JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, is about to be finished. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper compares the set of features offered by JPEG 2000, and how well they are fulfilled, versus JPEG-LS and MPEG-4 VTC, as well as the older but widely used JPEG and the PNG. The study concentrates on the set of supported features, although lossless and lossy progressive compression efficiency results are also reported. Each standard, and the principles of the algorithms behind them, are also described. As the results show, JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance.

Proceedings ArticleDOI
J. Seward1
28 Mar 2000
TL;DR: This paper presents the base algorithm using a more formal framework, describes two important optimisations not present in the original paper, and measures performance of the variants on a set of 14 files to give a clearer picture of which variants perform well, and why.
Abstract: In recent years lossless text compression based on the Burrows-Wheeler transform (BWT) has grown popular. The expensive activity during compression is sorting of all the rotations of the block of data to compress. Burrows and Wheeler (1994) describe an efficient implementation of rotation sorting but give little analysis of its performance. This paper addresses that need. We present the base algorithm using a more formal framework, describe two important optimisations not present in the original paper, and measure performance of the variants on a set of 14 files. For completeness, a tuned implementation of Sadakane's (1998) sorting algorithm was also tested. Merely measuring running times gives poor insight into the finer aspects of performance on contemporary machine architectures. We report measurements of instruction counts and cache misses for the algorithms giving a clearer picture of which variants perform well, and why.
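For orientation, the rotation sort at the heart of BWT-based compressors is tiny when written naively, as in the sketch below; the paper's contribution is making exactly this step fast on real inputs and real memory hierarchies, which the naive version does not attempt. The sentinel byte is an assumption of the sketch.

```python
# Naive Burrows-Wheeler transform: sort all rotations of the (sentinel-
# terminated) block and keep the last column. Quadratic-plus work per block,
# purely to illustrate what the optimised sorters compute.

def bwt(block: bytes) -> bytes:
    s = block + b"\x00"                               # sentinel assumed absent in block
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)        # last column of the sorted matrix

print(bwt(b"banana"))   # symbols that share a context end up next to each other
```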

Patent
21 Dec 2000
TL;DR: In this paper, the authors present a method of motion estimation for enhanced video compression, which is based on comparing the error between segments of a current frame and a predicted frame; if the error exceeds a predetermined threshold, which can be based on program content, the next frame will be an I frame.
Abstract: Methods and apparatuses for still image compression, video compression and automatic target recognition are disclosed. The method of still image compression uses isomorphic singular manifold projection whereby surfaces of objects having singular manifold representations are represented by best match canonical polynomials to arrive at a model representation. The model representation is compared with the original representation to arrive at a difference. If the difference exceeds a predetermined threshold, the difference data are saved and compressed using standard lossy compression. The coefficients from the best match polynomial together with the difference data, if any, are then compressed using lossless compression. The method of motion estimation for enhanced video compression sends I frames on an “as-needed” basis, based on comparing the error between segments of a current frame and a predicted frame. If the error exceeds a predetermined threshold, which can be based on program content, the next frame sent will be an I frame. The method of automatic target recognition (ATR) including tracking, zooming, and image enhancement, uses isomorphic singular manifold projection to separate texture and sculpture portions of an image. Soft ATR is then used on the sculptured portion and hard ATR is used on the texture portion.

Journal ArticleDOI
TL;DR: At x2 magnification, images compressed with either JPEG or WTCQ algorithms were indistinguishable from unaltered original images for most observers at compression ratios between 8:1 and 16:1, indicating that 10:1 compression is acceptable for primary image interpretation.
Abstract: PURPOSE: To determine the degree of irreversible image compression detectable in conservative viewing conditions. MATERIALS AND METHODS: An image-comparison workstation, which alternately displayed two registered and magnified versions of an image, was used to study observer detection of image degradation introduced by irreversible compression. Five observers evaluated 20 16-bit posteroanterior digital chest radiographs compressed with Joint Photographic Experts Group (JPEG) or wavelet-based trellis-coded quantization (WTCQ) algorithms at compression ratios of 8:1–128:1 and ×2 magnification by using (a) traditional two-alternative forced choice; (b) original-revealed two-alternative forced choice, in which the noncompressed image is identified to the observer; and (c) a resolution-metric method of matching test images to degraded reference images. RESULTS: The visually lossless threshold was between 8:1 and 16:1 for four observers. JPEG compression resulted in performance as good as that with WTCQ compression.

Proceedings Article
01 Jan 2000
TL;DR: A new frequency-domain approach is proposed for automatic, near-lossless time stretching of audio for time-scale modification of sounds.
Abstract: [Published in the Proceedings of the ICMC 2000] Time-scale modification of sounds has been a topic of interest in computer music since its very beginning. Many different techniques in both the time and frequency domains have been proposed to solve the problem. Some frequency domain techniques yield high-quality results and can work with large modification factors. However, they present some artifacts, like phasiness, loss of attack sharpness and loss of stereo image. In this paper we propose a new frequency domain approach for an automatic near-lossless time stretching of audio.

Journal ArticleDOI
TL;DR: This analysis shows that, under certain reasonable assumptions, the raster scan is indeed better than the Hilbert scan, thereby dispelling the popular notion that using a Hilbert scan would always lead to improved performance.
Abstract: Though most image coding techniques use a raster scan to order pixels prior to coding, Hilbert and other scans have been proposed as having better performance due to their superior locality preserving properties. However, a general understanding of the merits of various scans has been lacking. This paper develops an approach for quantitatively analyzing the effect of pixel scan order for context-based, predictive lossless image compression and uses it to compare raster, Hilbert, random and hierarchical scans. Specifically, for a quantized-Gaussian image model and a given scan order, it shows how the encoding rate can be estimated from the frequencies with which various pixel configurations are available as previously scanned contexts, and from the corresponding conditional differential entropies. Formulas are derived for such context frequencies and entropies. Assuming an isotropic image model and contexts consisting of previously scanned adjacent pixels, it is found that the raster scan is better than the Hilbert scan which is often used in compression applications due to its locality preserving properties. The hierarchical scan is better still, though it is based on nonadjacent contexts. The random scan is the worst of the four considered. Extensions and implications of the results to lossy coding are also discussed.
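The central quantity in this analysis, the conditional differential entropy of a pixel given its already-scanned neighbors under a Gaussian model, is easy to compute numerically; the sketch below uses an assumed exponential isotropic covariance and a raster-style context, not the paper's specific model parameters.

```python
# Conditional differential entropy (in bits) of one pixel given a set of
# previously scanned neighbors, for a zero-mean isotropic Gaussian image
# model with covariance r(d) between pixels a distance d apart.
import numpy as np

def conditional_entropy_bits(pixel, context, r):
    pts = [pixel] + list(context)
    S = np.array([[r(np.hypot(px - qx, py - qy)) for (qx, qy) in pts]
                  for (px, py) in pts])                      # joint covariance matrix
    cond_var = S[0, 0] - S[0, 1:] @ np.linalg.solve(S[1:, 1:], S[1:, 0])
    return 0.5 * np.log2(2 * np.pi * np.e * cond_var)

r = lambda d: np.exp(-0.5 * d)                  # assumed isotropic, decaying covariance
raster_context = [(-1, 0), (0, -1), (-1, -1)]   # left, above, and upper-left neighbors
print(conditional_entropy_bits((0, 0), raster_context, r))
```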

Patent
28 Apr 2000
TL;DR: In this paper, a method for compression coding of a digitally represented video image is presented, where each pixel value is represented as a linear combination of actual pixel values, drawn from one frame or from two or more adjacent frames.
Abstract: Method and system for compression coding of a digitally represented video image. The video image is expressed as one or more data blocks in two or more frames, each block having a sequence of pixels with pixel values. Within each block of a frame, an intra-frame predictor index or inter-frame predictor index is chosen that predicts a pixel value as a linear combination of actual pixel values, drawn from one frame or from two or more adjacent frames. The predicted and actual pixel values are compared, and twice the predicted value is compared with the sum of the actual value and a maximum predicted value, to determine a value index, which is used to represent each pixel value in a block in compressed format in each frame. The compression ratios achieved by this coding approach compare favorably with, and may improve upon, the compression achieved by other compression methods. Several processes in determination of the compressed values can be performed in parallel to increase throughput or to reduce processing time.

Proceedings ArticleDOI
01 Feb 2000
TL;DR: A novel compression paradigm is devised—training for lossless compression—which assumes that the data exhibit dependencies that can be learned by examining a small amount of training material, and which outperforms gzip in compression size and both compression and uncompression time for various tabular data.
Abstract: We study the problem of compressing massive tables. We devise a novel compression paradigm—training for lossless compression—which assumes that the data exhibit dependencies that can be learned by examining a small amount of training material. We develop an experimental methodology to test the approach. Our result is a system, pzip, which outperforms gzip by factors of two in compression size and both compression and uncompression time for various tabular data. Pzip is now in production use in an AT&T network traffic data warehouse.
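A small sketch of the underlying intuition (not pzip itself): fixed-field tabular data often compresses better when the fields are separated and compressed column by column, because each field has its own dependencies; pzip additionally learns how to group and order columns from a training sample, which is omitted here.

```python
# Compare whole-table compression with per-column compression for a toy,
# field-structured table. The synthetic rows below are illustrative only.
import zlib

rows = [f"{i:06d},NYC,{i % 7:03d},OK".encode() for i in range(5000)]
table = b"\n".join(rows)

fields = [row.split(b",") for row in rows]
columns = [b"\n".join(col) for col in zip(*fields)]      # transpose into columns

row_wise = len(zlib.compress(table, 9))
col_wise = sum(len(zlib.compress(c, 9)) for c in columns)
print(row_wise, col_wise)    # per-column compression often wins on data like this
```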

Journal ArticleDOI
TL;DR: A thorough analysis of the Burrows-Wheeler Transformation from an information theoretic point of view is provided and it is shown that the program achieves a better compression rate than other programs that have similar requirements in space and time.
Abstract: A very interesting recent development in data compression is the Burrows-Wheeler Transformation. The idea is to permute the input sequence in such a way that characters with a similar context are grouped together. We provide a thorough analysis of the Burrows-Wheeler Transformation from an information theoretic point of view. Based on this analysis, the main part of the paper systematically considers techniques to efficiently implement a practical data compression program based on the transformation. We show that our program achieves a better compression rate than other programs that have similar requirements in space and time.
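The grouping property analyzed here is what the later coding stages exploit; a classic (if simplistic) way to turn it into compressible output is a move-to-front pass, sketched below. The paper's actual modeling after the transformation is considerably more refined than plain MTF.

```python
# Move-to-front coding: after the Burrows-Wheeler Transformation, locally
# repeated symbols become runs of small integers that an entropy coder
# compresses well.

def move_to_front(data: bytes):
    alphabet = list(range(256))
    out = []
    for b in data:
        idx = alphabet.index(b)
        out.append(idx)
        alphabet.insert(0, alphabet.pop(idx))   # move the symbol to the front
    return out

print(move_to_front(b"nnbaaa"))   # BWT-like input maps to mostly small values
```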