
Showing papers on "Entropy encoding published in 2001"


Journal ArticleDOI
TL;DR: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling, and the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
Abstract: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
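The paper derives optimal lifting predictors in n dimensions; as a generic illustration of the underlying idea only (a 5/3-style integer predict/update step, not the paper's optimal predictors), a minimal one-dimensional lossless lifting stage might look like:

```python
def lifting_forward(x):
    """One 1-D integer lifting step: split into even/odd samples,
    predict each odd sample from its even neighbours (predict step),
    then smooth the evens from the details (update step).
    Integer rounding keeps the transform losslessly invertible."""
    even, odd = x[0::2], x[1::2]
    # predict: detail = odd sample minus rounded mean of even neighbours
    d = [o - (even[i] + even[min(i + 1, len(even) - 1)] + 1) // 2
         for i, o in enumerate(odd)]
    # update: smooth = even sample plus rounded quarter-sum of detail neighbours
    s = [e + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
         for i, e in enumerate(even)]
    return s, d

def lifting_inverse(s, d):
    # undo the update, then undo the predict, in reverse order
    even = [e - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
            for i, e in enumerate(s)]
    odd = [dd + (even[i] + even[min(i + 1, len(even) - 1)] + 1) // 2
           for i, dd in enumerate(d)]
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step only adds a deterministic function of the other half of the samples, inversion is exact whatever predictor is plugged in, which is what makes the framework attractive for lossless coding.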

145 citations


Proceedings ArticleDOI
22 Aug 2001
TL;DR: A motion compensated lifting (MCLIFT) framework is proposed for the 3D wavelet video coder. With proper entropy coding and bitstream packaging schemes, the MCLIFT coder can be scalable in frame rate and quality level, and it outperforms the MPEG-4 coder.
Abstract: A motion compensated lifting (MCLIFT) framework is proposed for the 3D wavelet video coder. By using bi-directional motion compensation in each lifting step of the temporal direction, the video frames are effectively de-correlated. With proper entropy coding and bitstream packaging schemes, the MCLIFT wavelet video coder can be scalable in frame rate and quality level. Experimental results show that the MCLIFT video coder outperforms the 3D wavelet video coder with the same entropy coding scheme by an average of 1.1-1.6 dB, and outperforms the MPEG-4 coder by an average of 0.9-1.4 dB.

114 citations


Proceedings ArticleDOI
07 Oct 2001
TL;DR: By using the new entropy coding scheme instead of the variable length code approach of the current TML, large bit-rate savings up to 32% can be achieved and it is observed that high gains are reached not only at high bit-rates, but also at very low rates.
Abstract: A new entropy coding scheme for video compression is presented. Context models are utilized for efficient prediction of the coding symbols. A novel binary adaptive arithmetic coding technique is employed to match the conditional entropy of the coding symbols given the context model estimates. The adaptation is also employed to keep track of non-stationary symbol statistics. Our new approach has been integrated into the current ITU-T H.26L test model (TML) to demonstrate the performance gain. By using our new entropy coding scheme instead of the variable length code approach of the current TML, large bit-rate savings up to 32% can be achieved. As a remarkable outcome of our experiments, we observed that high gains are reached not only at high bit-rates, but also at very low rates.
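As a toy sketch of the two ingredients described here (context models plus binary adaptive arithmetic coding), the coder below uses exact rational arithmetic and simple count-based context models; the real TML integration uses fast fixed-point state machines, which this does not attempt to reproduce:

```python
from fractions import Fraction

class AdaptiveBinaryModel:
    """Count-based probability estimate for one context (Laplace smoothing)."""
    def __init__(self):
        self.c0, self.c1 = 1, 1
    def p0(self):
        return Fraction(self.c0, self.c0 + self.c1)
    def update(self, bit):
        if bit: self.c1 += 1
        else:   self.c0 += 1

def ac_encode(bits, n_ctx, ctx_of):
    """Shrink [low, low+width) by each bit's conditional probability."""
    low, width = Fraction(0), Fraction(1)
    models = [AdaptiveBinaryModel() for _ in range(n_ctx)]
    for i, b in enumerate(bits):
        m = models[ctx_of(bits, i)]
        split = m.p0() * width
        if b == 0:
            width = split
        else:
            low, width = low + split, width - split
        m.update(b)
    return low, width  # any number in [low, low+width) identifies the message

def ac_decode(code, n, n_ctx, ctx_of):
    low, width = Fraction(0), Fraction(1)
    models = [AdaptiveBinaryModel() for _ in range(n_ctx)]
    out = []
    for i in range(n):
        m = models[ctx_of(out, i)]
        split = m.p0() * width
        if code < low + split:
            out.append(0); width = split
        else:
            out.append(1); low, width = low + split, width - split
        m.update(out[-1])
    return out
```

Because the decoder rebuilds the same context models from the bits it has already decoded, encoder and decoder stay in lockstep without any side information, which is the essence of context-based adaptive coding.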

93 citations


Book
01 Sep 2001
TL;DR: This chapter discusses discrete memoryless channels and their capacity-cost functions, and the source-channel coding theorem, which addresses the problem of variable-length source coding.

91 citations


Journal ArticleDOI
TL;DR: The estimation-theoretic approach is first developed for basic DPCM compression and demonstrates the power of the technique in a simple setting that only involves straightforward prediction, scalar quantization, and entropy coding.
Abstract: A method is proposed for efficient scalability in predictive coding, which overcomes known fundamental shortcomings of the prediction loop at enhancement layers. The compression efficiency of an enhancement-layer is substantially improved by casting the design of its prediction module within an estimation-theoretic framework, and thereby exploiting all information available at that layer for the prediction of the signal, and encoding of the prediction error. While the most immediately important application is in video compression, the method is derived in a general setting and is applicable to any scalable predictive coder. Thus, the estimation-theoretic approach is first developed for basic DPCM compression and demonstrates the power of the technique in a simple setting that only involves straightforward prediction, scalar quantization, and entropy coding. Results for the scalable compression of first-order Gauss-Markov and Laplace-Markov signals illustrate the performance. A specific estimation algorithm is then developed for standard scalable DCT-based video coding. Simulation results show consistent and substantial performance gains due to optimal estimation at the enhancement-layers.
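The paper develops its estimation-theoretic prediction on top of basic DPCM; as background only (not the paper's estimator), a plain closed-loop DPCM coder with prediction and uniform scalar quantization (entropy coding of the indices omitted) can be sketched as:

```python
def dpcm_encode(x, step):
    """Closed-loop DPCM: predict each sample by the previous
    reconstruction and scalar-quantize the prediction error.
    The encoder tracks the decoder's reconstruction so quantization
    error never accumulates."""
    recon, indices = 0, []
    for s in x:
        e = s - recon                # prediction error
        q = round(e / step)          # uniform scalar quantizer
        indices.append(q)            # these indices would be entropy coded
        recon = recon + q * step     # decoder-matched reconstruction
    return indices

def dpcm_decode(indices, step):
    recon, out = 0, []
    for q in indices:
        recon = recon + q * step
        out.append(recon)
    return out
```

The closed loop guarantees each reconstructed sample is within step/2 of the original; the scalable-coding difficulty the paper addresses is that an enhancement layer sees a different reconstruction than the base-layer loop assumed.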

82 citations


Book ChapterDOI
21 May 2001
TL;DR: A blind watermarking method integrated in the JPEG2000 coding pipeline that is robust to compression and other image processing attacks, and demonstrates two application scenarios: image authentication and copyright protection.
Abstract: In this paper, we propose a blind watermarking method integrated in the JPEG2000 coding pipeline. Prior to the entropy coding stage, the binary watermark is placed in the independent code-blocks using Quantization Index Modulation (QIM). The quantization strategy allows embedding data in the detail subbands of low resolution as well as in the approximation image. Watermark recovery is performed without reference to the original image during image decompression. The proposed embedding scheme is robust to compression and other image processing attacks. We demonstrate two application scenarios: image authentication and copyright protection.
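Scalar QIM itself is compact; a minimal sketch (step size `delta` is a hypothetical parameter, not the paper's exact configuration) quantizes a coefficient onto one of two interleaved lattices selected by the watermark bit:

```python
def qim_embed(x, bit, delta):
    """Quantization Index Modulation: snap x to the lattice of step
    `delta` (bit 0) or to that lattice shifted by delta/2 (bit 1)."""
    offset = delta / 2 if bit else 0.0
    return round((x - offset) / delta) * delta + offset

def qim_extract(y, delta):
    """Blind extraction: decide which of the two lattices y is closer to.
    No access to the original coefficient x is needed."""
    d0 = abs(y - round(y / delta) * delta)
    d1 = abs(y - (round((y - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1
```

Extraction survives any perturbation smaller than delta/4, which is why the scheme tolerates the subsequent quantization and mild compression attacks.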

53 citations


Patent
05 Sep 2001
TL;DR: In this paper, the statistical information for each pixel value, extracted from a plurality of encoded blocks adjacent to an encoding target block, at each corresponding position within the respective blocks, is used to predict the value of a target pixel from the selected reference pixel.
Abstract: Image data is encoded using a block consisting of a plurality of pixels as a unit of processing. A statistics section extracts statistical information for each pixel value, from a plurality of encoded blocks adjacent to an encoding target block, at each corresponding position within the respective blocks. An encoding processing section performs encoding on the encoding target block using the statistical information extracted by the statistics section. The encoding processing section comprises a prediction section that predicts a value of an encoding target pixel from the selected reference pixel based on the statistical information, a comparison section that detects an error between the predicted value and the value of the encoding target pixel, and an encoding section that performs entropy encoding on the prediction error.

40 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: 2D DCT coding of MFCCs (mel-frequency cepstral coefficients) together with a method for variable frame rate analysis and peak isolation maintains the noise robustness of these algorithms at low SNRs even at 624 bps.
Abstract: A 2D DCT-based approach to compressing acoustic features for remote speech recognition applications is presented. The coding scheme involves computing a 2D DCT on blocks of feature vectors followed by uniform scalar quantization, runlength and Huffman coding. Digit recognition experiments were conducted in which training was done with unquantized cepstral features from clean speech and testing used the same features after coding and decoding with 2D DCT and entropy coding and in various levels of acoustic noise. The coding scheme results in recognition performance comparable to that obtained with unquantized features at low bitrates. 2D DCT coding of MFCCs (mel-frequency cepstral coefficients) together with a method for variable frame rate analysis (Zhu and Alwan, 2000) and peak isolation (Strope and Alwan, 1997) maintains the noise robustness of these algorithms at low SNRs even at 624 bps. The low-complexity scheme is scalable resulting in graceful degradation in performance with decreasing bit rate.

38 citations


Patent
Hiroshi Kajiwara1
09 Apr 2001
TL;DR: In this article, an image encoding apparatus was provided which comprises a generation means for generating a prediction error from an encoding target pixel value and a prediction value of that pixel value.
Abstract: There is provided an image encoding apparatus which comprises a generation means for generating a prediction error from an encoding target pixel value and a prediction value of the encoding target pixel value, a judgment means for generating a frequency distribution of the prediction error to judge whether or not the distribution is discrete, and an entropy encoding means for changing the encoding data corresponding to the prediction error and performing entropy encoding on the obtained data, in accordance with the result produced by the judgment means. Therefore, the encoding can be performed effectively even on image data having discrete pixel values, such as a CG image, a limited-color image, or the like.

36 citations


Patent
08 Feb 2001
TL;DR: The prefix code is a binary representation of the algorithm used to compress and decompress the data as discussed by the authors, and prefix zeros represent the number of significant binary digits that follow the first one.
Abstract: The present invention provides an entropy coding scheme using an adaptable prefix code. The prefix code is a binary representation of the algorithm used to compress and decompress the data. There are prefix zeros that represent the number of significant binary digits that follow the first one. According to one embodiment, this scheme works on both positive and negative integers and encodes lower order integers with a smaller length of codeword. In another embodiment, the zero integer is encoded as a special case with the shortest codeword. In yet another embodiment, the present scheme is preferred by data sets that are clustered about zero, such as image data sets that have been transformed via a wavelet transform or a discrete cosine transform.
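The description matches the shape of an Exp-Golomb-style code; a hedged sketch (the patent's exact mapping may differ) uses a zigzag map so that zero gets the shortest codeword and both signs are handled:

```python
def zigzag(v):
    # map signed to unsigned: 0, 1, -1, 2, -2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * v - 1 if v > 0 else -2 * v

def unzigzag(u):
    return (u + 1) // 2 if u % 2 else -(u // 2)

def prefix_encode(v):
    """Exp-Golomb-style prefix code: the number of leading zeros equals
    the number of significant binary digits following the first one."""
    bits = bin(zigzag(v) + 1)[2:]
    return '0' * (len(bits) - 1) + bits

def prefix_decode(stream, pos=0):
    """Count leading zeros, then read that many extra bits after the 1."""
    zeros = 0
    while stream[pos + zeros] == '0':
        zeros += 1
    end = pos + 2 * zeros + 1
    u = int(stream[pos + zeros:end], 2) - 1
    return unzigzag(u), end   # decoded value and next read position
```

Lower-magnitude integers get shorter codewords, which is why such codes suit wavelet- or DCT-transformed data clustered about zero, as the patent notes.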

35 citations


Patent
Ulrich Lauther1
13 Jul 2001
TL;DR: In this article, a CT scan system is described with a compressing unit that compresses the data acquired by means of an X-ray source before the data is transmitted to a central processing unit.
Abstract: A CT scan system with a compressing unit that compresses the data acquired by means of an X-ray source before the data is transmitted to a central processing unit. The data compressing unit utilizes an entropy coding method for data compression. Four arrays of sensors are used in one reading, which superimposes a certain periodicity on the matrix of data that must be taken into account to achieve significant compression improvements. The present invention yields compression rates in the range of 20% to 30% while maintaining fast operation.

Patent
Louis Joseph Kerofsky1, Shijun Sun1
12 Dec 2001
TL;DR: In this article, the syntax of a bit stream input to a variable length coder is altered if the bit stream is likely to include a symbol with a high occurrence probability, and the compression efficiency of variable length coding is preserved.
Abstract: The compression efficiency of variable length coding is preserved by altering the syntax of a bit stream input to a variable length coder if the bit stream is likely to include a symbol with a high occurrence probability.

Patent
06 Dec 2001
TL;DR: In this paper, a discrete wavelet transform (DWT) engine, a code block manager, and an entropy encoder are used for buffering and decoding the transformed coefficients prior to entropy encoding.
Abstract: The apparatus comprises a discrete wavelet transform (DWT) engine, a code block manager, and an entropy encoder. The code block manager comprises at least one controller, which losslessly compresses the transform coefficients and stores them in a code block storage for buffering. The entropy coder comprises at least one entropy encoder, each comprising a decoder for decoding the losslessly compressed transformed coefficients prior to entropy encoding.

Proceedings ArticleDOI
Henrique S. Malvar1
27 Mar 2001
TL;DR: A new bi-level image compression coder is presented that does not use arithmetic encoding, but whose performance is close to that of state-of-the-art coders such as JBIG,JBIG-2, and JB2.
Abstract: We present a new bi-level image compression coder that does not use arithmetic encoding, but whose performance is close to that of state-of-the-art coders such as JBIG, JBIG-2, and JB2. The proposed bi-level coder (BLC) uses two simple adaptation rules: the first to compute context-dependent probability estimates that control a pixel prediction module, and the second to adjust a run-length parameter in a run-length-Rice (RLR) coder. This is contrary to the usual approach, in which the context-dependent probability estimate controls both pixel prediction and adaptive entropy coding. Due to its simplicity, in many applications BLC may be a better choice than JBIG or JBIG-2.
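The core of an RLR coder is the Golomb-Rice code; a minimal sketch (the paper's adaptation rule for the parameter k is omitted) encodes a nonnegative integer as a unary quotient plus a k-bit remainder:

```python
def rice_encode(n, k):
    """Golomb-Rice code: quotient n >> k in unary ('1' * q + '0'),
    then the remainder in exactly k binary digits."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, '0{}b'.format(k)) if k else ''
    return '1' * q + '0' + rem

def rice_decode(stream, k, pos=0):
    """Count the unary quotient, skip the terminator, read k remainder bits."""
    q = 0
    while stream[pos] == '1':
        q += 1
        pos += 1
    pos += 1  # skip the terminating '0'
    r = int(stream[pos:pos + k], 2) if k else 0
    return (q << k) | r, pos + k   # value and next read position
```

Adapting k to the local run-length statistics, as BLC's second rule does, keeps the unary part short without needing a full arithmetic coder.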

Patent
04 Jun 2001
TL;DR: In this paper, a method is presented for compressing an image data block, comprising the steps of: (a) subjecting the image data block to a discrete cosine transform; (b) quantizing the resulting data in accordance with a quantizer matrix consisting of an array of quantizing coefficients; (c) encoding the quantized data using an entropy coding algorithm to generate an encoded bitstream; and (d) when the length of the encoded bitstream does not fall within a predetermined range, adjusting the quantizing coefficients and repeating steps (b) and (c) until it does.
Abstract: A method is adapted for compressing an image data block, and includes the steps of: (a) subjecting the image data block to discrete cosine transformation so as to generate discrete cosine transform data; (b) quantizing the discrete cosine transform data in accordance with a quantizer matrix that consists of an array of quantizing coefficients so as to generate quantized data; (c) encoding the quantized data using an entropy coding algorithm so as to generate an encoded bitstream; and (d) when the length of the encoded bitstream does not fall within a predetermined range, adjusting the quantizing coefficients in the quantizer matrix and repeating steps (b) and (c) until the length of the encoded bitstream falls within the predetermined range.
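Step (d) is a simple rate-control loop. In the sketch below, `zlib` is only a stand-in for the entropy coder of step (c), doubling the quantizer matrix is a hypothetical adjustment rule, and sign handling of the quantized values is omitted:

```python
import zlib

def encode_block(coeffs, qmatrix):
    """Steps (b) and (c): quantize, then entropy code.
    zlib stands in for the real entropy coder; sign is dropped
    in this sketch for simplicity."""
    quantized = [round(c / q) for c, q in zip(coeffs, qmatrix)]
    bitstream = zlib.compress(bytes(abs(v) % 256 for v in quantized))
    return quantized, bitstream

def encode_to_target(coeffs, qmatrix, max_len, max_iters=32):
    """Step (d): coarsen the quantizer matrix until the encoded
    bitstream fits within the predetermined length."""
    qm = list(qmatrix)
    for _ in range(max_iters):
        quantized, bs = encode_block(coeffs, qm)
        if len(bs) <= max_len:
            return quantized, bs, qm
        qm = [q * 2 for q in qm]   # coarser quantization -> fewer bits
    return quantized, bs, qm
```

Coarser quantization makes the coefficient stream more repetitive, so the entropy-coded length shrinks monotonically in practice and the loop terminates quickly.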

Patent
05 Mar 2001
TL;DR: In this article, an image processing apparatus and method perform an efficient wavelet transform to provide sub-bands having a unit size for being encoded in an encoding device in a post-stage.
Abstract: An image processing apparatus and method perform an efficient wavelet transform to provide sub-bands having a unit size for being encoded in an encoding device in a post-stage. In accordance with one embodiment of the present invention, the image processing apparatus includes a transform unit for performing a two-dimensional discrete wavelet transform on an input image to generate a plurality of frequency components, and an entropy encoding unit for performing entropy encoding on each of the frequency components in M×N-sized units. In a first encoding mode, the transform unit performs the two-dimensional discrete wavelet transform on the image either a predetermined number of times or for a number of times which allows a lowest frequency component (LL) to have a size of M×N, and in a second encoding mode, the transform unit performs the two-dimensional discrete wavelet transform on the image the predetermined number of times.

Journal ArticleDOI
TL;DR: It is found that a large saving in complexity, execution time, and memory size is achieved when the commonly used source encoding algorithms are applied to the nth-order extension of the resulting binary source.

Proceedings ArticleDOI
07 May 2001
TL;DR: Experiments show that the proposed lossless coder (which needs about 2 bit/sample for pre-filtered signals) outperforms competing lossless coders, WaveZip, Shorten, LTAC and LPAC, in terms of compression ratios.
Abstract: A novel predictive lossless coding scheme is proposed. The prediction is based on a new weighted cascaded least mean squared (WCLMS) method. WCLMS is especially designed for music/speech signals. It can be used either in combination with psycho-acoustically pre-filtered signals to obtain perceptually lossless coding, or as a stand-alone lossless coder. Experiments on a database of moderate size and a variety of pre-filtered mono-signals show that the proposed lossless coder (which needs about 2 bit/sample for pre-filtered signals) outperforms competing lossless coders, WaveZip, Shorten, LTAC and LPAC, in terms of compression ratios.
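WCLMS cascades several weighted LMS stages; as a simplified illustration of one such stage (normalized LMS, not the paper's full cascade or weighting), the predictor below returns the residuals that a subsequent entropy coder would compress:

```python
import math

def nlms_residuals(signal, order=4, mu=0.5):
    """Normalized LMS: predict each sample from the previous `order`
    samples and return the prediction residuals. As the filter adapts,
    residuals shrink, so they cost fewer bits after entropy coding."""
    w = [0.0] * order
    residuals = []
    for i in range(order, len(signal)):
        x = signal[i - order:i]
        pred = sum(wi * xi for wi, xi in zip(w, x))
        e = signal[i] - pred
        residuals.append(e)
        # normalized step size keeps adaptation stable for 0 < mu < 2
        norm = sum(xi * xi for xi in x) + 1e-8
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return residuals
```

For lossless coding the same adaptation is run in the decoder on already-decoded samples, so no filter coefficients need to be transmitted.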

Proceedings ArticleDOI
07 Dec 2001
TL;DR: This paper presents an efficient VLSI architecture for embedded bit-plane coding in JPEG2000 that reduces the number of memory accesses and presents a system level architecture for efficient implementation of JPEG2000 in hardware.
Abstract: To overcome many drawbacks in the current JPEG standard for still image compression, a new standard, JPEG2000, is under development by the International Organization for Standardization. Embedded bit plane coding is the heart of the JPEG2000 encoder. This encoder is more complex and has significantly higher computational requirements compared to the entropy encoding in the current JPEG standard. Because of the inherent bit-wise processing of the entropy encoder in JPEG2000, memory traffic is a substantial component in software implementation. However, in hardware implementation, the lookup tables can be mapped to logic gates and memory accesses for the state bit computation can be reduced significantly by careful design. In this paper, we present an efficient VLSI architecture for embedded bit-plane coding in JPEG2000 that reduces the number of memory accesses. To better understand the interaction of this architecture with the rest of the coder, we also present a system level architecture for efficient implementation of JPEG2000 in hardware.

Patent
Yair Shoham1
23 Aug 2001
TL;DR: In this article, a method and apparatus for performing entropy coding and decoding of a sequence of coded symbols representative of, for example, a speech, audio or video signal, in which variable-size vectors are coded and decoded based on radix arithmetic is presented.
Abstract: A method and apparatus for performing entropy coding and decoding of a sequence of coded symbols representative of, for example, a speech, audio or video signal, in which variable-size vectors are coded and decoded based on radix arithmetic. The encoding technique uses a first radix and the numerical values of individual symbols to be coded in order to determine the length of a first subsequence of symbols, which is then coded with use of a single (first) combined symbol, and uses a second radix and the numerical values of other individual symbols to be coded in order to determine the length of a second subsequence of symbols, which is then also coded with a single (second) combined symbol, wherein the lengths of the first and second subsequences of symbols are also based on the size of the set from which the combined symbols are selected. The numbers of symbols in the first subsequence and the second subsequence are unequal; that is, the vectors (i.e., subsequences) which are combined for coding have a variable size. The first radix and the second radix may be equal and fixed to a predetermined value, or they may each be determined based on the corresponding subsequence of symbols to be coded. Correspondingly, the decoding technique of the present invention determines from the bit stream the number of symbols which have been coded with use of a single combined symbol (i.e., the length of a coded subsequence), and based on that number, on the combined symbol itself, and on a given radix (which may be fixed or may also be determined from the bit stream), determines the values of the individual symbols which were coded together as the combined symbol.
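The radix arithmetic at the heart of the technique is mixed-radix packing; a hedged sketch with a single fixed radix (the patent also covers per-subsequence radices, not shown) is:

```python
def pack_subsequence(symbols, radix):
    """Combine a subsequence of symbols (each in [0, radix)) into a
    single combined symbol using radix arithmetic."""
    value = 0
    for s in reversed(symbols):
        value = value * radix + s
    return value

def unpack_subsequence(value, radix, length):
    """Recover the individual symbols from the combined symbol."""
    out = []
    for _ in range(length):
        out.append(value % radix)
        value //= radix
    return out

def max_subsequence_length(radix, alphabet_size):
    """How many radix-r symbols fit into one combined symbol drawn
    from a codeword set of the given size (the set-size constraint
    mentioned in the abstract)."""
    n = 0
    while radix ** (n + 1) <= alphabet_size:
        n += 1
    return n
```

A smaller radix lets more symbols share one combined codeword, which is how low-valued runs end up cheap to code.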

Patent
25 May 2001
TL;DR: In this paper, a radiation area extract section 302 extracts an X-ray radiation area of an Xray image, analyzing by a histogram section 303 detects a void image space in its radiation field.
Abstract: PROBLEM TO BE SOLVED: To provide an image processor that inserts a prescribed pattern serving as an electronic watermark into an X-ray image without altering its important parts. SOLUTION: A radiation area extract section 302 extracts the X-ray radiation area of an X-ray image, and analysis by a histogram section 303 detects a void image space in the radiation field. The prescribed pattern is inserted into the detected void image space, the X-ray image containing the pattern is subjected to a discrete wavelet transform, a quantization section 3 quantizes the transform coefficients, and an entropy coding section 4 applies entropy coding to the output of the quantization section 3.

Proceedings ArticleDOI
07 May 2001
TL;DR: Two methods of entropy coding for the lattice codevectors are presented, using the multiple scale lattice VQ (MSLVQ) for quantization, which reduces the rate gain of the entropy coding method when compared with the fixed rate case, but allows a dynamic allocation of the bits in the whole speech coding scheme.
Abstract: We present two methods of entropy coding for the lattice codevectors. We compare our entropy coding methods with one method previously presented in the literature from the point of view of rate-distortion as well as of the computation complexity and memory requirements. The results are presented for artificial Laplacian and Gaussian data, as well as for LSF parameters of speech signals. In the latter case, the multiple scale lattice VQ (MSLVQ) is used for quantization, which reduces the rate gain of the entropy coding method when compared with the fixed rate case, but allows a dynamic allocation of the bits in the whole speech coding scheme.

DOI
01 Jan 2001
TL;DR: This paper extends the block-sorting mechanism to word-based models, and considers other transformations as an alternative to MTF, and is able to show improved compression results compared to M TF.
Abstract: Block sorting is an innovative compression mechanism introduced by M. Burrows and D.J. Wheeler (1994). It involves three steps: permuting the input one block at a time through the use of the Burrows-Wheeler transform (BWT); applying a move-to-front (MTF) transform to each of the permuted blocks; and then entropy coding the output with a Huffman or arithmetic coder. Until now, block-sorting implementations have assumed that the input message is a sequence of characters. In this paper, we extend the block-sorting mechanism to word-based models. We also consider other transformations as an alternative to MTF, and are able to show improved compression results compared to MTF. For large text files, the combination of word-based modelling, BWT and MTF-like transformations allows excellent compression effectiveness to be attained within reasonable resource costs.
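The first two block-sorting steps are easy to sketch; this character-based version (naive sorted-rotations BWT with a `'\0'` sentinel, not a production suffix-array implementation) is the baseline the paper extends to word-based models:

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations, using '\\0'
    as an end-of-string sentinel so the transform is invertible."""
    s += '\0'
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return ''.join(r[-1] for r in rotations)

def mtf(seq, alphabet):
    """Move-to-front: recently seen symbols get small indices,
    producing a skewed distribution the entropy coder exploits."""
    table = list(alphabet)
    out = []
    for c in seq:
        i = table.index(c)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_inverse(indices, alphabet):
    table = list(alphabet)
    out = []
    for i in indices:
        c = table.pop(i)
        out.append(c)
        table.insert(0, c)
    return ''.join(out)
```

After BWT, equal symbols cluster together, so MTF output is dominated by small indices; the paper's word-based variant feeds word tokens rather than characters into the same pipeline.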

Proceedings ArticleDOI
27 Mar 2001
TL;DR: It is shown that in some situations the introduction of a simple parsing stage allows improved compression to be obtained compared to an otherwise equivalent character-based BWT implementation, and an MTF-like ranking transformation is described that caters better to large-alphabet situations than does the strict MTF rule used in conventional BWT implementations.
Abstract: Block-sorting is an innovative compression mechanism introduced by Burrows and Wheeler (1994), and has been the subject of considerable scrutiny in the years since it first became public. Block-sorting compression is usually described as involving three steps: permuting the input one block at a time through the use of the Burrows-Wheeler transform (BWT); applying a move-to-front (MTF) transform to each of the permuted blocks; and then entropy coding the output with a Huffman or arithmetic coder. In this paper we prepend a fourth transformation to this sequence: parsing. In the BWT implementations that have been considered to date the unit of transmission has been taken to be the ASCII character. But there is no particular reason why this should be so, and a range of other strategies can be used to construct the sequence of symbols that is fed into the BWT process. We consider some of the issues associated with making this change, and show that in some situations the introduction of a simple parsing stage allows improved compression to be obtained compared to an otherwise equivalent character-based BWT implementation. We also describe an MTF-like ranking transformation that caters better to large-alphabet situations than does the strict MTF rule used in conventional BWT implementations.

Journal ArticleDOI
TL;DR: The results of several experiments presented in this paper demonstrate the importance of context modeling in the EZW framework and show that appropriate context modeling improves the performance of compression algorithm after a multilevel subband decomposition is performed.
Abstract: Previous research advances have shown that wavelet-based image-compression techniques offer several advantages over traditional techniques in terms of progressive transmission capability, compression efficiency, and bandwidth utilization. The embedded zerotree wavelet (EZW) coding technique suggested by Shapiro (1992), and its modification, set partitioning in hierarchical trees (SPIHT), suggested by Said and Pearlman (1996), demonstrate the competitive performance of wavelet-based compression schemes. The EZW-based lossless image coding framework consists of three stages: (1) reversible discrete wavelet transform; (2) hierarchical ordering and selection of wavelet coefficients; and (3) context-modeling-based entropy (arithmetic) coding. The performance of the compression algorithm depends on the choice of various parameters and the implementation strategies employed in all three stages. This paper proposes different context modeling and selection techniques for efficient entropy encoding of wavelet coefficients, along with the modifications performed to the SPIHT algorithm. The results of several experiments presented in this paper demonstrate the importance of context modeling in the EZW framework. Furthermore, this paper shows that appropriate context modeling improves the performance of the compression algorithm after a multilevel subband decomposition is performed.

Patent
18 Jul 2001
TL;DR: In this article, a method of compressing a stereoscopic image, using entropy coding, was proposed, where a model derived from a first-eye image was used to encode an image for a second-eye.
Abstract: A method of compressing a stereoscopic image using entropy coding, wherein a model derived from a first-eye image is used to encode an image for a second eye. The model is determined from a first-eye difference image, formed from a first frame and a second frame of the first-eye image; a second-eye difference image, formed from a first frame and a second frame of the second-eye image, is then encoded using the model.

Patent
13 Aug 2001
TL;DR: In this paper, a method of entropy coding symbols representative of a code block comprising transform coefficients of a digital image was proposed, which comprises a significance propagation pass, a magnitude refinement pass, and a cleanup pass.
Abstract: A method of entropy coding symbols representative of a code block comprising transform coefficients of a digital image. The method comprises a significance propagation pass 314, a magnitude refinement pass 316, and a cleanup pass 318 for entropy coding the symbols. The method generates ( 1210,1211 ), prior to the significance propagation pass 314 of the current bitplane, a first list of positions of those coefficients in the code block that have symbols to be entropy coded during the significance propagation pass of the current bitplane. The method also generates ( 1208 ), prior to the magnitude refinement pass 316 of the current bitplane, a second list of positions of those said coefficients in the code block that have symbols to be entropy coded during the magnitude refinement pass of the current bitplane. The method further generates ( 916 ), prior to the cleanup pass 318 of the current bitplane, a third list of positions of those said coefficients in the code block that have symbols to be entropy coded during the cleanup pass of the current bitplane.

Proceedings ArticleDOI
27 Mar 2001
TL;DR: A solution giving the minimum adaptive code length for a given data set is presented (when the cost of the context quantizer is neglected) and the optimal context quantization is also used to evaluate existing heuristic context quantizations.
Abstract: Context-based entropy coding often faces the conflict between a desire for large templates and the problem of context dilution. We consider the problem of finding the quantizer Q that quantizes the K-dimensional causal context C_i = (X(i-t_1), X(i-t_2), ..., X(i-t_K)) of a source symbol X_i into one of M conditioning states. A solution giving the minimum adaptive code length for a given data set is presented (when the cost of the context quantizer is neglected). The resulting context quantizers can be used for sequential coding of the sequence X_0, X_1, X_2, .... A coding scheme based on binary decomposition and context quantization for coding the binary decisions is presented and applied to digital maps and α-plane sequences. The optimal context quantization is also used to evaluate existing heuristic context quantizations.

Patent
02 Oct 2001
TL;DR: In this paper, a data encoding and decoding system that comprises a composite fixed-variable-length coding process and an offset-difference coding process for improving data compression performance is presented.
Abstract: A data encoding and decoding system that comprises a composite fixed-variable-length coding process and an offset-difference coding process for improving data compression performance. A composite fixed-variable-length coding process encodes an input data by first comparing the input data to a predetermined threshold, then selecting a coding scheme from two preselected coding schemes and encoding the input data in accordance with the selected coding scheme. The composite fixed-variable-length coding process also generates an identifier to indicate the selected coding scheme to decode the coded output data if a response to comparing the input data with a predetermined threshold differs from a statistically determined response. An offset-difference coding process encodes a paired input data by first determining the greater of the two input data, then calculating the difference between the larger input data and the smaller input data and replacing the larger input data with the calculated difference. The offset-difference coding process also generates an indicator to indicate the input data that has been replaced if said input data is not statistically larger. The composite fixed-variable-length coding process and offset-difference coding process may be used independently or together depending on the applications.

Journal ArticleDOI
TL;DR: A controlled method for trading compression loss for coding speed by approximating symbol frequencies with a geometric distribution is presented and the result is an adaptive MRP coder that is asymptotically efficient and also fast in practice.
Abstract: Semistatic minimum-redundancy prefix (MRP) coding is fast compared with rival coding methods, but requires two passes during encoding. Its adaptive counterpart, dynamic Huffman coding, requires only one pass over the input message for encoding and decoding, and is asymptotically efficient. Dynamic Huffman coding, a descendant of Huffman's (1952) codes, is, however, notoriously slow in practice. By removing the restriction that the code used for each message symbol must have minimum redundancy, and thereby admitting some compression loss, it is possible to improve the speed of adaptive MRP coding. This paper presents a controlled method for trading compression loss for coding speed by approximating symbol frequencies with a geometric distribution. The result is an adaptive MRP coder that is asymptotically efficient and also fast in practice.