Showing papers on "Entropy encoding" published in 2002


Proceedings ArticleDOI
10 Dec 2002
TL;DR: A prediction-based conditional entropy coder which utilizes static portions of the host as side-information improves the compression efficiency, and thus the lossless data embedding capacity.
Abstract: We present a novel reversible (lossless) data hiding (embedding) technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known LSB (least significant bit) modification is proposed as the data embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion, and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes static portions of the host as side-information improves the compression efficiency, and thus the lossless data embedding capacity.

1,126 citations
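A minimal sketch of the generalized LSB (L-level) embedding idea described in the entry above, assuming a hypothetical embedding level L and integer-valued host samples; the compression of the original residuals and the prediction-based conditional entropy coder that make the scheme reversible in practice are omitted here.

```python
import numpy as np

def embed_llsb(host, payload_digits, L=2):
    """Replace each sample's value modulo L with a payload digit in 0..L-1.

    For lossless recovery, the original residuals (host % L) must be compressed
    and carried inside the payload; that step is omitted in this sketch.
    """
    host = np.asarray(host, dtype=np.int64)
    residuals = host % L
    watermarked = host - residuals + np.asarray(payload_digits) % L
    return watermarked, residuals

def extract_llsb(watermarked, residuals, L=2):
    """Recover the payload digits and restore the original host exactly."""
    watermarked = np.asarray(watermarked, dtype=np.int64)
    payload_digits = watermarked % L
    original = watermarked - payload_digits + residuals
    return payload_digits, original
```

With L = 2 this reduces to ordinary LSB replacement; larger values of L give the additional operating points on the capacity-distortion curve mentioned in the abstract.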


Patent
22 Aug 2002
TL;DR: In this paper, a reversible wavelet filter is used to generate coefficients from input data, such as image data, and an entropy coder performs entropy coding on the embedded codestream to produce the compressed data stream.
Abstract: A compression and decompression system in which a reversible wavelet filter is used to generate coefficients from input data, such as image data. The reversible wavelet filter is an efficient transform implemented with integer arithmetic that has exact reconstruction. The present invention uses the reversible wavelet filter in a lossless system (or lossy system) in which an embedded codestream is generated from the coefficients produced by the filter. An entropy coder performs entropy coding on the embedded codestream to produce the compressed data stream.

218 citations
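The patent's reversible wavelet filter is an integer transform with exact reconstruction. A minimal sketch of that property, using the simple S-transform (integer Haar) rather than the patent's specific filters, is shown below.

```python
def s_transform_forward(pairs):
    """Integer Haar (S-transform): integer arithmetic, exact reconstruction."""
    coeffs = []
    for a, b in pairs:
        d = a - b            # detail (high-pass) coefficient
        s = b + (d >> 1)     # smooth (low-pass) coefficient, floor of (a + b) / 2
        coeffs.append((s, d))
    return coeffs

def s_transform_inverse(coeffs):
    pairs = []
    for s, d in coeffs:
        b = s - (d >> 1)
        a = b + d
        pairs.append((a, b))
    return pairs

# The round trip is exact for any integer samples:
samples = [(5, 2), (-3, 7), (128, 127)]
assert s_transform_inverse(s_transform_forward(samples)) == samples
```

Entropy coding the resulting coefficients then yields the embedded, losslessly decodable codestream the patent describes.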


Book
01 Jan 2002
TL;DR: This monograph presents a meta-modelling system that automates the labor-intensive, time-consuming, and expensive process of manually designing and implementing compression systems.
Abstract: Preface. 1. Data Compression Systems. 2. Fundamental Limits. 3. Static Codes. 4. Minimum-Redundancy Coding. 5. Arithmetic Coding. 6. Adaptive Coding. 7. Additional Constraints. 8. Compression Systems. 9. What Next? References. Index.

162 citations
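Chapter 4 of the book above treats minimum-redundancy (Huffman) coding. As a small, self-contained illustration of that topic (not code from the book), a standard Huffman code construction looks like this:

```python
import heapq

def huffman_codes(freq):
    """Build a minimum-redundancy prefix code: symbol -> bit string."""
    heap = [[weight, [sym, ""]] for sym, weight in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # prepend a 0 on the lighter branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # prepend a 1 on the heavier branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```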


Patent
17 Sep 2002
TL;DR: In this paper, a disparity prediction stage and a motion prediction stage predict a disparity vector and a motion vector by extending an MPEG-2 structure into a view axis and using spatial/temporal correlation.
Abstract: A disparity prediction stage and a motion prediction stage predict a disparity vector and a motion vector by extending an MPEG-2 structure into a view axis and using spatial/temporal correlation. A disparity/motion compensation stage compensates the image reconstructed by the disparity prediction stage and the motion prediction stage by using a sub-pixel compensation method. A residual image encoding stage performs an encoding to provide a better visual quality and a three-dimensional effect of the original and reconstructed images. A bit-rate control stage assigns an effective number of bits to each frame of the reconstructed image according to a target bit rate. An entropy encoding stage generates a bit stream from the multi-view video source data according to that bit rate.

126 citations


Book
01 Jan 2002
TL;DR: What is information theory?
Abstract: What is information theory? Basics of information theory. Source and coding. Arithmetic code. Universal coding of integers. Universal coding of texts. Universal coding of compound sources. Data analysis and MDL principle. Bibliography. Index.

91 citations


Proceedings ArticleDOI
07 Aug 2002
TL;DR: An efficient architecture for the EBCOT entropy encoder used in JPEG2000, composed of a pass-parallel context modeling scheme and a low-cost pass-switching arithmetic encoder, reduces the processing time by more than 25% and saves 4K bits of internal memory.
Abstract: In this paper we propose an efficient architecture composed of a pass-parallel context modeling scheme and a low-cost pass-switching arithmetic encoder (PSAE) for the EBCOT entropy encoder used in JPEG2000. The pass-parallel context modeling scheme merges the three coding passes of the bit-plane coding process into a single pass to improve system performance. Instead of using three arithmetic encoders, the PSAE needs only one, and thus reduces the hardware cost. The proposed architecture therefore has three main advantages: 1) fast computation, 2) few internal memory accesses, and 3) a saving of 4K bits of internal memory compared with conventional architectures. The experimental results show that the proposed architecture reduces the processing time by more than 25% compared with previous methods.

89 citations


Journal ArticleDOI
TL;DR: In this paper, vector quantization is shown to be an effective compression technique for triangle mesh vertex data; it can also be used for complexity reduction by accelerating linear vertex transformations, so that an encoded set of vertices can be decoded and transformed in approximately 60 percent of the time required by a conventional method without compression.
Abstract: Rendering geometrically detailed 3D models requires the transfer and processing of large amounts of triangle and vertex geometry data. Compressing the geometry bit stream can reduce bandwidth requirements and alleviate transmission bottlenecks. In this paper, we show vector quantization to be an effective compression technique for triangle mesh vertex data. We present predictive vector quantization methods using unstructured code books as well as a product code pyramid vector quantizer. The technique is compatible with most existing mesh connectivity encoding schemes and does not require the use of entropy coding. In addition to compression, our vector quantization scheme can be used for complexity reduction by accelerating the computation of linear vertex transformations. Consequently, an encoded set of vertices can be both decoded and transformed in approximately 60 percent of the time required by a conventional method without compression.

59 citations
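The basic vector quantization step the paper above builds on (its predictive and product-code pyramid variants are not reproduced here) amounts to a nearest-codeword search at the encoder and a table lookup at the decoder:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each row of `vectors` to the index of its nearest codeword (squared L2)."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def vq_decode(indices, codebook):
    """Decoding is a table lookup, which is why no entropy coding is required."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 3))   # hypothetical 256-entry codebook of 3-D vectors
vertices = rng.standard_normal((1000, 3))
indices = vq_encode(vertices, codebook)    # 8 bits per vertex in this toy setting
approx = vq_decode(indices, codebook)
```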


PatentDOI
Jin Li
TL;DR: The embedded audio coder (EAC), as discussed by the authors, is a fully scalable psychoacoustic audio coding method that uses a novel perceptual audio coding approach termed "implicit auditory masking", which is intermixed with a scalable entropy coding process.
Abstract: The embedded audio coder (EAC) is a fully scalable psychoacoustic audio coder which uses a novel perceptual audio coding approach termed “implicit auditory masking” which is intermixed with a scalable entropy coding process. When encoding and decoding an audio file using the EAC, auditory masking thresholds are not sent to a decoder. Instead, the masking thresholds are automatically derived from already coded coefficients. Furthermore, in one embodiment, rather than quantizing the audio coefficients according to the auditory masking thresholds, the masking thresholds are used to control the order that the coefficients are encoded. In particular, in this embodiment, during the scalable coding, larger audio coefficients are encoded first, as the larger components are the coefficients that contribute most to the audio energy level and lead to a higher auditory masking threshold.

54 citations
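A toy illustration of the ordering idea described in the entry above: larger coefficients are coded first, so a truncated bitstream keeps the most energetic components. The implicit auditory masking logic that actually drives the coding order in the EAC is not reproduced here.

```python
import numpy as np

def embedded_order(coeffs):
    """Toy embedded coding order: largest-magnitude coefficients first."""
    coeffs = np.asarray(coeffs, dtype=float)
    order = np.argsort(-np.abs(coeffs))
    return [(int(i), float(coeffs[i])) for i in order]

stream = embedded_order([0.1, -3.2, 0.7, 2.5, -0.05])
low_rate_decode = stream[:2]   # truncation keeps the two strongest components
```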


Proceedings ArticleDOI
02 Apr 2002
TL;DR: The work presented here considers the case when decompression must be done from compressed data corrupted by additive white Gaussian noise (AWGN), and the use of parallel concatenated codes and iterative decoding for fixed-length to fixed- length source coding, i.e., turbo coding for data compression purposes.
Abstract: Summary form only given. All traditional data compression techniques, such as Huffman coding, the Lempel-Ziv algorithm, run-length limited coding, Tunstall coding and arithmetic coding are highly susceptible to residual channel errors and noise. We have previously proposed the use of parallel concatenated codes and iterative decoding for fixed-length to fixed-length source coding, i.e., turbo coding for data compression purposes. The work presented here extends these results and also considers the case when decompression must be done from compressed data corrupted by additive white Gaussian noise (AWGN).

51 citations


Journal ArticleDOI
TL;DR: A novel image compression technique is presented that incorporates progressive transmission and near-lossless compression in a single framework and proves to be competitive with the state-of-the-art compression schemes.
Abstract: A novel image compression technique is presented that incorporates progressive transmission and near-lossless compression in a single framework. Experimental performance of the proposed coder proves to be competitive with the state-of-the-art compression schemes.

50 citations


Journal ArticleDOI
TL;DR: This letter describes a context-based entropy coding suitable for any causal spatial differential pulse code modulation (DPCM) scheme performing lossless or near-lossless image coding using partitioning of prediction errors into homogeneous classes before arithmetic coding.
Abstract: This letter describes a context-based entropy coding suitable for any causal spatial differential pulse code modulation (DPCM) scheme performing lossless or near-lossless image coding. The proposed method is based on partitioning of prediction errors into homogeneous classes before arithmetic coding. A context function is measured on prediction errors lying within a two-dimensional (2-D) causal neighborhood, comprising the prediction support of the current pixel, as the root mean square (RMS) of residuals weighted by the reciprocal of their Euclidean distances. Its effectiveness is demonstrated in comparative experiments concerning both lossless and near-lossless coding. The proposed context coding/decoding is strictly real-time.
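One plausible reading of the context function described above, the RMS of causal prediction errors weighted by the reciprocals of their Euclidean distances from the current pixel, is sketched below; the exact neighborhood shape and the quantization of the context value into classes may differ from the paper's.

```python
import numpy as np

def context_value(errors, y, x, radius=2):
    """Weighted RMS of prediction errors in the causal neighborhood of pixel (y, x)."""
    num, den = 0.0, 0.0
    for dy in range(-radius, 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx >= 0:
                continue                       # keep only already-decoded (causal) pixels
            yy, xx = y + dy, x + dx
            if 0 <= yy < errors.shape[0] and 0 <= xx < errors.shape[1]:
                w = 1.0 / np.hypot(dy, dx)     # reciprocal Euclidean distance
                num += w * errors[yy, xx] ** 2
                den += w
    return np.sqrt(num / den) if den > 0 else 0.0
```

The resulting value would then be quantized into a small number of homogeneous classes, each with its own adaptive model driving the arithmetic coder.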

Journal ArticleDOI
TL;DR: The performance of state-of-the-art lossless image coding methods can be considerably improved by a recently introduced preprocessing technique that can be applied whenever the images have sparse histograms; so far no theoretical explanation of this effect has been advanced, and this letter addresses that issue.
Abstract: The performance of state-of-the-art lossless image coding methods [such as JPEG-LS, lossless JPEG-2000, and context-based adaptive lossless image coding (CALIC)] can be considerably improved by a recently introduced preprocessing technique that can be applied whenever the images have sparse histograms. Bitrate savings of up to 50% have been reported, but so far no theoretical explanation of the fact has been advanced. This letter addresses this issue and analyzes the effect of the technique in terms of the interplay between histogram packing and the image total variation, emphasizing the lossless JPEG-2000 case.
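Histogram packing itself is a simple, invertible preprocessing step: when many intensity levels never occur, each pixel is remapped to the rank of its value among the levels that do occur, shrinking the alphabet before the standard coder runs. A minimal sketch follows (the letter's total-variation analysis is not reproduced).

```python
import numpy as np

def pack_histogram(img):
    """Remap pixel values to consecutive integers; `levels` inverts the mapping."""
    levels, packed = np.unique(img, return_inverse=True)
    return packed.reshape(img.shape), levels

def unpack_histogram(packed, levels):
    return levels[packed]

img = np.array([[0, 16, 16], [240, 0, 255]], dtype=np.uint8)   # sparse histogram
packed, levels = pack_histogram(img)          # values become 0, 1, 2, 3
assert np.array_equal(unpack_histogram(packed, levels), img)
```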

Proceedings ArticleDOI
10 Dec 2002
TL;DR: A new adaptive entropy coding scheme for video compression is presented that utilizes an adaptive arithmetic coding technique to better match the first order entropy of the coded symbols and to keep track of nonstationary symbol statistics.
Abstract: In this paper, a new adaptive entropy coding scheme for video compression is presented. It utilizes an adaptive arithmetic coding technique to better match the first order entropy of the coded symbols and to keep track of nonstationary symbol statistics. In addition, remaining symbol redundancies are exploited by context modeling to further reduce the bit-rate. A novel approach for coding of transform coefficients and a table look-up method for probability estimation and arithmetic coding are presented. Our new approach has been integrated into the current JVT test model (JM) to demonstrate the performance gain, and it was adopted as a part of the current JVT/H.26L draft.
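The scheme above pairs context modeling with adaptive probability estimation feeding a binary arithmetic coder. The sketch below uses plain counts rather than the table look-up state machine mentioned in the abstract, and the context rule is a made-up example; it only illustrates how per-context adaptation tracks nonstationary statistics.

```python
import math

class ContextModel:
    """Per-context adaptive estimate of P(bit = 1), updated after each coded bit."""

    def __init__(self):
        self.counts = [1, 1]              # Laplace-smoothed counts of zeros and ones

    def p(self, bit):
        return self.counts[bit] / (self.counts[0] + self.counts[1])

    def update(self, bit):
        self.counts[bit] += 1

def ideal_code_length(bits, num_contexts, context_of):
    """Bits an ideal arithmetic coder would spend using these adaptive estimates."""
    models = [ContextModel() for _ in range(num_contexts)]
    total = 0.0
    for i, b in enumerate(bits):
        m = models[context_of(i, bits)]   # context derived from already-coded symbols
        total += -math.log2(m.p(b))
        m.update(b)
    return total

# Hypothetical context rule: the previous bit (0 for the first symbol).
bits = [0, 0, 1, 1, 1, 0, 1, 1]
print(ideal_code_length(bits, num_contexts=2,
                        context_of=lambda i, bs: bs[i - 1] if i > 0 else 0))
```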

Proceedings ArticleDOI
Jin Li
01 Dec 2002
TL;DR: Extensive experimental results demonstrate that the EAC coder substantially outperforms existing scalable audio coders and audio compression standards (MP3 and MPEG-4), and rivals the best available commercial audio coder.
Abstract: An embedded audio coder (EAC) is proposed whose compression performance rivals the best available non-scalable audio coders. The key technology that empowers the EAC with high performance is implicit auditory masking. Unlike the common practice, where an auditory masking threshold is derived from the input audio signal, transmitted to the decoder, and used to quantize (modify) the transform coefficients, the EAC integrates the auditory masking process into the embedded entropy coding. The auditory masking threshold is derived from the encoded coefficients and used to change the order of coding. There is no need to store or send the auditory masking threshold in the EAC. By eliminating the overhead of the auditory mask, the EAC greatly improves the compression efficiency, especially at low bitrates. Extensive experimental results demonstrate that the EAC coder substantially outperforms existing scalable audio coders and audio compression standards (MP3 and MPEG-4), and rivals the best available commercial audio coder. Yet the EAC compressed bitstream is fully scalable in terms of coding bitrate, number of audio channels, and audio sampling rate.

Journal ArticleDOI
TL;DR: A thorough comparison with the most advanced methods in the literature, as well as an investigation of performance trends and computing times as functions of the working parameters, highlights the advantages of the proposed fuzzy approach to data compression.
Abstract: This paper presents an application of fuzzy-logic techniques to the reversible compression of grayscale images. With reference to a spatial differential pulse code modulation (DPCM) scheme, prediction may be accomplished in a space-varying fashion either as adaptive, i.e., with predictors recalculated at each pixel, or as classified, in which image blocks or pixels are labeled in a number of classes, for which fitting predictors are calculated. Here, an original tradeoff is proposed: a space-varying linear-regression prediction is obtained through fuzzy-logic techniques as a problem of matching pursuit, in which a predictor different for every pixel is obtained as an expansion in series of a finite number of prototype nonorthogonal predictors that are themselves calculated in a fuzzy fashion. To enhance entropy coding, the spatial prediction is followed by context-based statistical modeling of prediction errors. A thorough comparison with the most advanced methods in the literature, as well as an investigation of performance trends and computing times as functions of the working parameters, highlights the advantages of the proposed fuzzy approach to data compression.

Journal ArticleDOI
TL;DR: A new 3D subband coding framework that achieves a good balance between high compression performance and channel error resilience and is able to achieve highly competitive performance relative to MPEG-2 in both noiseless and noisy environments is presented.
Abstract: We present a new 3D subband coding framework that achieves a good balance between high compression performance and channel error resilience. Various data transform methods for video decorrelation were examined and compared. The coding stage of the algorithm is based on a generalized adaptive quantization framework which is applied to the 3D transformed coefficients. It features a simple coding structure based on quadtree coding and lattice vector quantization. In typical applications, good performance at high compression ratios is obtained, often without entropy coding. Because temporal decorrelation is absorbed by the transform, traditional motion compensated prediction becomes unnecessary, resulting in a significant computational advantage over standard video coders. Error resilience is achieved by classifying the compressed data streams into separate sub-streams with different error sensitivity levels. This enables a good adaptation to different channel models according to their noise statistics and error-protection protocols. Experimental results have shown that the subband video coder is able to achieve highly competitive performance relative to MPEG-2 in both noiseless and noisy environments. Lapped transforms are shown experimentally to outperform other transforms in the 3D subband environment. The subband coding framework provides a practical solution for video communications over wireless channels, where efficiency, error resilience and computational simplicity are vital in providing superior quality of service.

Proceedings ArticleDOI
20 Oct 2002
TL;DR: This paper considers a situation where the Slepian-Wolf rate for source coding with side information at the receiver is not known, possibly because the source is broadcast to many heterogeneous receivers.
Abstract: The Slepian-Wolf scheme for source coding with side information at the receiver assures that the sender can send the source X at a rate of only the conditional entropy H(X|Y) bits per source symbol, which is the minimal possible rate even if the sender knew the side information Y. However, the Slepian-Wolf result requires knowledge of the optimal required rate. In this paper we consider a situation where this rate is not known, possibly because the source is broadcast to many heterogeneous receivers. The approach is based on recent results regarding sending common information over a broadcast channel.
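The rate quoted above is the conditional entropy H(X|Y). A short computation from a joint distribution makes the quantity concrete; the paper's broadcast-channel construction is not reproduced here.

```python
import numpy as np

def conditional_entropy(p_xy):
    """H(X|Y) in bits per symbol for a joint pmf p_xy[x, y]."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_y = p_xy.sum(axis=0)                     # marginal of the side information Y
    h = 0.0
    for x in range(p_xy.shape[0]):
        for y in range(p_xy.shape[1]):
            p = p_xy[x, y]
            if p > 0:
                h -= p * np.log2(p / p_y[y])   # -sum p(x,y) log2 p(x|y)
    return h

# X is a fair bit Y observed through a 10% bit-flipping channel:
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])
print(conditional_entropy(p_xy))               # about 0.47 bits/symbol
```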

Journal ArticleDOI
TL;DR: This paper aims to identify the potential improvements to compression performance achievable through improved decorrelation; two adaptive prediction schemes are presented that aim to provide the highest possible decorrelation of the prediction error data.
Abstract: Lossless image compression is often performed through decorrelation, context modelling and entropy coding of the prediction error. This paper aims to identify the potential improvements to compression performance through improved decorrelation. Two adaptive prediction schemes are presented that aim to provide the highest possible decorrelation of the prediction error data. Consequently, complexity is overlooked and a high degree of adaptivity is sought. The adaptation of the respective predictor coefficients is based on training of the predictors in a local causal area adjacent to the pixel to be predicted. The causal nature of the training means no transmission overhead is required and also enables lossless coding of the images. The first scheme is an adaptive neural network, trained on the actual data being coded enabling continuous updates of the network weights. This results in a highly adaptive predictor, with localised optimisation based on stochastic gradient learning. Training for the second scheme is based on the recursive LMS (RLMS) algorithm incorporating feedback of the prediction error. In addition to the adaptive prediction, the results presented here also incorporate an arithmetic coding scheme, producing results which are better than CALIC.
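Both predictors described above are trained only on already-decoded causal data, so the decoder can repeat the adaptation and no coefficients need to be transmitted. A minimal normalized-LMS update (a simplification of the paper's neural-network and RLMS predictors) looks like this:

```python
import numpy as np

def nlms_step(weights, neighbors, actual, mu=0.5):
    """One normalized LMS update of a linear causal predictor."""
    prediction = float(np.dot(weights, neighbors))
    error = actual - prediction                   # this residual is what gets entropy coded
    norm = float(np.dot(neighbors, neighbors)) + 1e-12
    weights = weights + (mu * error / norm) * neighbors
    return prediction, error, weights

weights = np.zeros(4)                             # e.g. W, N, NW, NE causal neighbors
samples = [(np.array([100.0, 102.0, 101.0, 103.0]), 102.0)] * 50
for neighbors, actual in samples:
    _, err, weights = nlms_step(weights, neighbors, actual)
```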

Jin Li
01 Jan 2002
TL;DR: The mathematics in the coding engine of JPEG 2000, a state-of-the-art image compression system, is reviewed, focusing in depth on the transform, entropy coding and bitstream assembler modules.
Abstract: We briefly review the mathematics in the coding engine of JPEG 2000, a state-of-the-art image compression system. We focus in depth on the transform, entropy coding and bitstream assembler modules. Our goal is to give readers a good understanding of modern scalable image compression technologies without overwhelming them with details.

Proceedings ArticleDOI
07 Aug 2002
TL;DR: Two new accelerating schemes are proposed and applied to the prototyping design, which turns out to be powerful enough to meet the computational requirements of the most advanced digital still cameras.
Abstract: Embedded block coding with optimized truncation (EBCOT) is the entropy coding algorithm adopted by the new still image compression standard JPEG 2000. It is composed of multi-pass fractional bit-plane context scanning along with an arithmetic coding procedure. GPPs (general-purpose processors) and DSPs fail to accelerate this kind of bit-level operation, which is proven to occupy most of the computational time of a JPEG 2000 system. In this paper, two new accelerating schemes are proposed and applied to our prototyping design, which turns out to be powerful enough to meet the computational requirements of the most advanced digital still cameras.

Patent
08 Nov 2002
TL;DR: In this article, a wavelet transform unit is used to hierarchically transform a low frequency component repetitively in the vertical and horizontal directions of an input image signal in the horizontal direction.
Abstract: According to the present invention, not only still images but also moving images can be compressed with high quality. In an image coding apparatus, a wavelet transform unit low-pass filters and high-pass filters an input image signal in the vertical and horizontal directions to hierarchically transform a low frequency component repetitively. In the case of an interlaced image, the wavelet transform unit further decomposes a subband including, for example, a horizontal low frequency component and a vertical high frequency component in consideration of the characteristics of the image signal. A code-block generation unit divides quantized coefficients into code-blocks each having a predetermined size. The code-block serves as a unit subjected to entropy coding. In the case of the interlaced image, each code-block is set to, for example, 32×32 so as to be smaller than each code-block of a progressive image.

Proceedings ArticleDOI
01 Aug 2002
TL;DR: In this article, an interband version of the linear prediction approach for hyperspectral images was proposed, which achieved a compression ratio in the range of 3.02 to 3.14 using 13 Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images.
Abstract: This paper proposes an interband version of the linear prediction approach for hyperspectral images. Linear prediction represents one of the best performing and most practical and general purpose lossless image compression techniques known today. The interband linear prediction method consists of two stages: predictive decorrelation producing residuals and entropy coding of the residuals. Our method achieved a compression ratio in the range of 3.02 to 3.14 using 13 Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images.
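A toy version of the two-stage structure described above, using a single least-squares gain and offset between co-located pixels of adjacent bands in place of the paper's actual interband predictor:

```python
import numpy as np

def interband_residuals(band, prev_band):
    """Predict one band from the previous spectral band; residuals go to the entropy coder."""
    x = prev_band.astype(float).ravel()
    y = band.astype(float).ravel()
    gain, offset = np.polyfit(x, y, 1)        # hypothetical 1-tap predictor per band pair
    prediction = np.rint(gain * prev_band.astype(float) + offset)
    return (band.astype(float) - prediction).astype(int)
```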

Proceedings ArticleDOI
13 May 2002
TL;DR: This new lossless video encoder outperforms state-of-the-art lossless image compression techniques, enabling more efficient video storage and communications.
Abstract: We present our new low-complexity compression algorithm for lossless coding of video sequences. This new coder produces better compression ratios than lossless compression of individual images by exploiting temporal as well as spatial and spectral redundancy. Key features of the coder are a pixel-neighborhood backward-adaptive temporal predictor, an intra-frame spatial predictor and a differential coding scheme of the spectral components. The residual error is entropy coded by a context-based arithmetic encoder. This new lossless video encoder outperforms state-of-the-art lossless image compression techniques, enabling more efficient video storage and communications.
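A heavily simplified view of the temporal-prediction stage above (the coder's backward-adaptive pixel-neighborhood predictor, intra-frame spatial predictor and spectral differencing are not reproduced): the residual of a co-located previous-frame prediction is what the context-based arithmetic coder would then encode.

```python
import numpy as np

def temporal_residuals(frame, prev_frame):
    """Toy temporal predictor: residual = current pixel minus co-located previous pixel."""
    return frame.astype(np.int16) - prev_frame.astype(np.int16)

def reconstruct(residuals, prev_frame):
    """Lossless reconstruction at the decoder."""
    return (residuals + prev_frame.astype(np.int16)).astype(np.uint8)
```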

Proceedings ArticleDOI
Gabriel Taubin
27 Oct 2002
TL;DR: A new and simple algorithm is introduced to compress isosurface data, i.e., the data extracted by isosurface algorithms from scalar functions defined on volume grids and used to generate polygon meshes or alternative representations.
Abstract: In this paper we introduce a new and simple algorithm to compress isosurface data. This is the data extracted by isosurface algorithms from scalar functions defined on volume grids, and used to generate polygon meshes or alternative representations. In this algorithm the mesh connectivity and a substantial proportion of the geometric information are encoded to a fraction of a bit per marching cubes vertex with a context based arithmetic coder closely related to the JBIG binary image compression standard. The remaining optional geometric information that specifies the location of each marching cubes vertex more precisely along its supporting intersecting grid edge, is efficiently encoded in scan-order with the same mechanism. Vertex normals can optionally be computed as normalized gradient vectors by the encoder and included in the bitstream after quantization and entropy encoding, or computed by the decoder in a postprocessing smoothing step. These choices are determined by trade-offs associated with an in-core vs. out-of-core decoder structure. The main features of our algorithm are its extreme simplicity and high compression rates.

Patent
13 Feb 2002
TL;DR: In this paper, a scalable motion image compression system for a digital motion image signal having an associated transmission rate is proposed, where a decomposition module is used to decompose the digital motion images into component parts and send the components.
Abstract: A scalable motion image compression system for a digital motion image signal having an associated transmission rate. The scalable motion image compression system includes a decomposition module for receiving the digital motion image signal, decomposing the digital motion image signal into component parts and sending the components. The decomposition module may further perform color rotation, spatial decomposition and temporal decomposition. The system further includes a compression module for receiving each of the component parts from the decomposition module, compressing the component part, and sending the compressed component part to a memory location. The compression module may perform sub-band wavelet compression and may further include functionality for quantization and entropy encoding.

Journal ArticleDOI
01 Aug 2002
TL;DR: It is shown, through simulations, that wireless image transmission using turbo-coded OFDM is better suited to a multipath channel than a single-carrier transmission technique.
Abstract: A robust image transmission system incorporating perceptually-based coding and error protection is proposed for wireless channels. The perceptually-based image compression coder consists of a wavelet transform, adaptive quantization and variable-length entropy encoding. Both uniform and adaptive quantization were computed for the wavelet transform, and comparisons of the methods are presented. The performance of adaptive quantization is superior to that of uniform quantization. The quantized data are encoded using a turbo code. It is shown, through simulations, that wireless image transmission using turbo-coded OFDM is better suited to a multipath channel than a single-carrier transmission technique.

Proceedings ArticleDOI
19 Aug 2002
TL;DR: This paper provides an overview of important compression methods and techniques, including lossless entropy coding techniques designed to reduce the redundancy in the critical multimedia material, as well as lossy coding techniques designed to preserve the relevancy of the noncritical multimedia material.
Abstract: Multimedia involves a myriad of data and multidimensional signals, including not only plain and formatted text, but also mathematical and other symbols, tables, vector and bitmap graphics, images, sound, animation, video, and interactive virtual reality objects. Compression of such signals is usually necessary to fit them into the available communications channels and digital storage, or for data mining. This paper provides an overview of important compression methods and techniques, including lossless entropy coding techniques designed to reduce the redundancy in the critical multimedia material, as well as lossy coding techniques designed to preserve the relevancy of the noncritical multimedia material. Modern lossy techniques often employ wavelets, wavelet packets, fractals, and neural networks. Progressive image transmission is also employed to deliver the material quickly. The paper also addresses several approaches to blind separation of signal from noise (denoising) to improve the compression, and to the difficult question of objective and subjective image quality assessment through complexity metrics.

Journal ArticleDOI
TL;DR: This paper presents several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques, applied to real data from the CERN SPS experiment NA49 and to simulated data from the future CERN LHC experiment ALICE.
Abstract: In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several ten thousand charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques applied on real data from the CERN SPS experiment NA49 and on simulated data from the future CERN LHC experiment ALICE.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: The compression performance of JPEG encrypted with the zig-zag permutation algorithm is evaluated, a security enhancement to the scheme is suggested, and an alternative to the entropy coding recommended by JPEG is proposed to compensate for the compression drop caused by the permutation.
Abstract: Recent developments in Internet and Web-based technologies require faster communication of multimedia data in a secure form. A number of encryption schemes for MPEG have been proposed. In this paper, we evaluate the compression performance of JPEG encrypted with the zig-zag permutation algorithm, suggest a security enhancement to the scheme, and propose an alternative to the entropy coding recommended by JPEG to compensate for the compression drop occurring due to the permutation.
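The zig-zag permutation cipher evaluated above replaces JPEG's fixed zig-zag scan of each 8x8 DCT coefficient block with a key-dependent scan order. A sketch of that substitution is shown below; the suggested security enhancement and the alternative entropy coding are not reproduced, and the keyed shuffle is only a stand-in for the actual permutation generator.

```python
import random

def zigzag_order(n=8):
    """Standard JPEG zig-zag scan order of an n x n coefficient block."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def permuted_scan_order(key, n=8):
    """Key-dependent permutation used in place of the fixed zig-zag scan."""
    order = zigzag_order(n)
    random.Random(key).shuffle(order)
    return order

def scan_block(block, order):
    """Serialize an n x n block of quantized DCT coefficients along a scan order."""
    return [block[i][j] for i, j in order]
```

Because the permutation disturbs the run-length structure the standard entropy coder relies on, compression drops, which is the loss the paper's alternative entropy coding aims to recover.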

Patent
Sung-jin Kim, Shin-Jun Lee
25 Feb 2002
TL;DR: In this paper, an encoding method and apparatus for deformation information of a 3-dimensional (3D) object are provided, in which information on vertices forming the shape of the 3D object is described by a key framing method.
Abstract: An encoding method and apparatus for deformation information of a 3-dimensional (3D) object are provided. In this method, information on the vertices forming the shape of the 3D object is described by a key framing method for performing deformation of the 3D object. The encoding method includes: (a) extracting keys indicating positions of key frames on a time axis, key values indicating characteristic information of key frames, and relation information, by parsing node information of the 3D object; (b) generating vertex connectivity information from the relation information; (c) generating differential values for each of the keys, from which temporal data redundancy is to be removed, and for the key values, from which spatiotemporal data redundancy is to be removed, based on the vertex connectivity information; (d) quantizing the differential values; and (e) removing redundancy among bits and generating a compressed bit stream through entropy encoding.