
Showing papers on "Entropy encoding published in 2008"


Proceedings ArticleDOI
Yan Ye1, Marta Karczewicz1
12 Dec 2008
TL;DR: Together the improvements can bring on average 7% and 10% coding gain for CABAC and for CAVLC, respectively, with average coding gain of 12% for HD sequences.
Abstract: In this paper, a novel intra coding scheme is proposed. The proposed scheme improves H.264 intra coding from three aspects: 1) H.264 intra prediction is enhanced with additional bi-directional intra prediction modes; 2) H.264 integer transform is supplemented with directional transforms for some prediction modes; and 3) residual coefficient coding in CAVLC is improved. Compared to H.264, together the improvements can bring on average 7% and 10% coding gain for CABAC and for CAVLC, respectively, with average coding gain of 12% for HD sequences.

259 citations


Patent
Yan Ye1, Marta Karczewicz1
13 Jun 2008
TL;DR: In this article, the authors describe techniques for scanning coefficients of video blocks, which adapt a scan order used to scan a two dimensional block of coefficients into a one dimensional coefficient vector based on statistics associated with one or more previously coded blocks.
Abstract: This disclosure describes techniques for scanning coefficients of video blocks. In particular, the techniques of this disclosure adapt a scan order used to scan a two dimensional block of coefficients into a one dimensional coefficient vector based on statistics associated with one or more previously coded blocks. For example, statistics that indicate the likelihood that a given coefficient value in each position of a two dimensional block is zero or non zero may be collected for one or more previously coded blocks. At some point, an adjustment to the scan order can be made in order to better ensure that non-zero coefficients are grouped together near the front of the one dimensional coefficient vector, which can improve the effectiveness of entropy coding. The collection of statistics and adjustment of scan order may be made separately for each possible prediction mode.
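
A minimal sketch of that adaptive-scan idea, assuming per-position non-zero counts as the statistic and a periodic re-sort of the scan order (class and method names are invented for illustration, not taken from the patent):

```python
# Hypothetical sketch: per-position non-zero statistics from previously coded
# blocks drive the scan order used to flatten 2-D coefficient blocks.
import numpy as np

class AdaptiveScan:
    def __init__(self, block_size=4):
        n = block_size * block_size
        self.nonzero_counts = np.zeros(n, dtype=np.int64)  # per-position stats
        self.order = np.arange(n)                          # initial scan order

    def scan(self, block):
        """Flatten a 2-D coefficient block into a 1-D vector with the current order."""
        flat = block.reshape(-1)
        vector = flat[self.order]
        self.nonzero_counts += (flat != 0)   # update stats with this block
        return vector

    def adapt(self):
        """Re-sort so positions most likely to be non-zero come first."""
        self.order = np.argsort(-self.nonzero_counts, kind="stable")

# One AdaptiveScan instance could be kept per prediction mode, matching the
# per-mode statistics mentioned in the abstract.
```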

181 citations


Patent
Sridhar Srinivasan1
19 May 2008
TL;DR: In this paper, alpha image data is quantized as a first step before differential encoding, and the quantized alpha data is then differential encoded and represented in a modulo domain before entropy encoding to take advantage of certain distribution features of typical alpha images.
Abstract: Alpha images are efficiently encoded for inclusion in video bitstreams. During encoding, alpha image data is quantized as a first step before differential encoding. The quantized alpha image data is then differential encoded and represented in a modulo domain before entropy encoding to take advantage of certain distribution features of typical alpha image data. During decoding, a decoder performs differential decoding of encoded alpha image data before dequantization. During differential decoding, the data is converted from a modulo domain to a continuous domain. Dequantization is performed using a technique which results in reconstructed alpha image values which include 0 and maximal values within the acceptable alpha image data range so that the process preserves these values for reconstructed alpha images.
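
A hedged sketch of the quantize/differential/modulo pipeline described above, with invented function names and without the patent's special handling that preserves 0 and maximal reconstructed values:

```python
# Illustrative only: quantize alpha samples, differentially predict them, and
# fold the residuals into a modulo domain so values cluster near zero for the
# entropy coder. Decoding converts back from the modulo to the continuous domain.
def encode_alpha_trace(samples, step, levels):
    prev, residuals = 0, []
    for s in samples:
        q = min(round(s / step), levels - 1)   # quantization
        residuals.append((q - prev) % levels)  # differential, modulo domain
        prev = q
    return residuals                           # would feed the entropy coder

def decode_alpha_trace(residuals, step, levels):
    prev, out = 0, []
    for d in residuals:
        q = (prev + d) % levels                # modulo -> continuous domain
        out.append(q * step)                   # dequantization
        prev = q
    return out
```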

103 citations


Journal ArticleDOI
TL;DR: The simulation results indicate that the intensity-gradient-based algorithm for intra prediction provides a better tradeoff between rate-distortion performance and encoding complexity than previous algorithms.
Abstract: This study presents an intensity gradient approach for intra prediction in the H.264 encoding system, which enhances the performance and efficiency of previous fast algorithms. We propose a preprocessing stage in which eight orientation features are extracted from a macroblock by intensity gradient filters. The orientation features are used to select a subset of prediction modes to be involved in the rate-distortion calculation so that the encoding time can be reduced. The simulation results indicate that the intensity-gradient-based algorithm for intra prediction provides a better tradeoff between rate-distortion performance and encoding complexity than previous algorithms. Compared to the H.264 reference software, the proposed algorithm introduces slight PSNR degradation and bit-rate increase but saves around 76% of the total encoding time with all intra-frame coding.
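
A rough sketch of such a gradient-based pre-selection stage, assuming simple numpy gradients and eight orientation bins; the paper's actual filters and thresholds are not reproduced here:

```python
# Accumulate gradient energy into eight orientation bins and keep only the
# strongest orientations as candidate intra modes for RD evaluation.
import numpy as np

def dominant_orientations(mb, keep=3):
    """mb: 16x16 luma macroblock. Returns indices of the strongest orientations."""
    gy, gx = np.gradient(mb.astype(np.float64))
    angle = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0
    magnitude = np.hypot(gx, gy)
    # eight orientation bins, loosely mirroring the eight directional features
    bins = np.minimum((angle // 22.5).astype(int), 7)
    energy = np.bincount(bins.ravel(), weights=magnitude.ravel(), minlength=8)
    return np.argsort(-energy)[:keep]   # candidate directions for RD evaluation
```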

102 citations


Patent
26 Jun 2008
TL;DR: In this article, a prediction set determining section selects a set from a group including a plurality of prediction sets having different combinations of prediction modes corresponding to different prediction directions, and an entropy encoding section encodes the prediction set, the prediction mode thus selected, and residual data between an input image and a predicted image formed on the basis of the prediction sets and the prediction modes.
Abstract: A prediction set determining section selects a prediction set from a prediction set group including a plurality of prediction sets having different combinations of prediction modes corresponding to different prediction directions. Further, a prediction mode determining section selects a prediction mode from the prediction set thus selected. An entropy encoding section encodes the prediction set thus selected, the prediction mode thus selected, and residual data between an input image and a predicted image formed on the basis of the prediction set and the prediction mode. This allows an image encoding device to carry out prediction from a wider variety of angles, thereby improving prediction efficiency.

59 citations


Journal ArticleDOI
TL;DR: Simulation results on several records from the MIT-BIH arrhythmia database show that the proposed coding algorithm outperforms some recently developed ECG compression algorithms.

49 citations


Journal ArticleDOI
TL;DR: In this paper, a computer program implementing Hilbert space-filling curve ordering, generated from a tensor product formula, is used to rearrange the pixels of medical images to enhance pixel locality.

45 citations


Patent
02 Jun 2008
TL;DR: In this article, an image prediction encoding device 10 has a block dividing unit 102, an intra-frame prediction signal generation method determining unit 105, a subtractor 108, a transform unit 109, a quantization unit 110, and an entropy encoding unit 115.
Abstract: An object is to efficiently reduce the mode information needed to identify a prediction method, even when many methods are provided for generating intra-frame prediction signals in the pixel domain. An image prediction encoding device 10 has a block dividing unit 102, an intra-frame prediction signal generation method determining unit 105, an intra-frame prediction signal generating unit 106, a subtractor 108, a transform unit 109, a quantization unit 110, and an entropy encoding unit 115. The intra-frame prediction signal generation method determining unit 105 derives, from a plurality of first prediction methods and using a group of pixels neighboring a target region, an adjacent-region prediction method that generates an intra-frame prediction signal highly correlated with the pixel signals of the adjacent region, and predicts a target-region prediction method for the target pixel signals based on the derived adjacent-region prediction method. The intra-frame prediction signal generating unit 106 then generates an intra-frame prediction signal for the target region based on the target-region prediction method.

45 citations


Proceedings ArticleDOI
12 Dec 2008
TL;DR: A novel parallel CABAC scheme is presented which enables a throughput increase of N-fold (depending on the degree of parallelism), reducing the frequency requirement and expected power consumption of the coding engine.
Abstract: With the growing presence of high definition video content on battery-operated handheld devices such as camera phones, digital still cameras, digital camcorders, and personal media players, it is becoming ever more important that video compression be power efficient. A popular form of entropy coding called context-based adaptive binary arithmetic coding (CABAC) provides high coding efficiency but has limited throughput. This can lead to high operating frequencies resulting in high power dissipation. This paper presents a novel parallel CABAC scheme which enables a throughput increase of N-fold (depending on the degree of parallelism), reducing the frequency requirement and expected power consumption of the coding engine. Experiments show that this new scheme (with N=2) can deliver ~2x throughput improvement at a cost of a 0.76% average increase in bit rate, or equivalently a decrease in average PSNR of 0.025 dB, on five 720p resolution video clips when compared with H.264/AVC.

39 citations


Journal ArticleDOI
TL;DR: This two-part monograph presents a tutorial on set partition coding, with emphasis and examples on image wavelet transform coding systems, and describes its use in modern image coding systems.
Abstract: The purpose of this two-part monograph is to present a tutorial on set partition coding, with emphasis and examples on image wavelet transform coding systems, and describe their use in modern image coding systems. Set partition coding is a procedure that recursively splits groups of integer data or transform elements guided by a sequence of threshold tests, producing groups of elements whose magnitudes are between two known thresholds, therefore, setting the maximum number of bits required for their binary representation. It produces groups of elements whose magnitudes are less than a certain known threshold. Therefore, the number of bits for representing an element in a particular group is no more than the base-2 logarithm of its threshold rounded up to the nearest integer. SPIHT (Set Partitioning in Hierarchical Trees) and SPECK (Set Partitioning Embedded blocK) are popular state-of-the-art image coders that use set partition coding as the primary entropy coding method. JPEG2000 and EZW (Embedded Zerotree Wavelet) use it in an auxiliary manner. Part I elucidates the fundamentals of set partition coding and explains the setting of thresholds and the block and tree modes of partitioning. Algorithms are presented for the techniques of AGP (Amplitude and Group Partitioning), SPIHT, SPECK, and EZW. Numerical examples are worked out in detail for the latter three techniques. Part II describes various wavelet image coding systems that use set partitioning primarily, such as SBHP (Subband Block Hierarchical Partitioning), SPIHT, and EZBC (Embedded Zero-Block Coder). The basic JPEG2000 coder is also described. The coding procedures and the specific methods are presented both logically and in algorithmic form, where possible. Besides the obvious objective of obtaining small file sizes, much emphasis is placed on achieving low computational complexity and desirable output bitstream attributes, such as embeddedness, scalability in resolution, and random access decodability. This monograph is extracted and adapted from the forthcoming textbook entitled Digital Signal Compression: Principles and Practice by William A. Pearlman and Amir Said, Cambridge University Press, 2009.
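
A tiny, non-SPIHT illustration of the threshold/bit-budget relation stated above: once a set partition pass has confined a group's magnitudes below a power-of-two threshold 2^n, at most n bits per magnitude are needed:

```python
# Simplified grouping by threshold; this is not SPIHT/SPECK itself, only the
# bookkeeping that links a group's threshold to its per-element bit cost.
def partition_by_thresholds(values):
    groups = {}
    for v in values:
        n = 0
        while (1 << n) <= abs(v):    # smallest n such that |v| < 2^n
            n += 1
        groups.setdefault(n, []).append(v)
    return groups                    # group n needs at most n magnitude bits

values = [0, 3, 12, 12, 5, 63, 1]
for n, group in sorted(partition_by_thresholds(values).items()):
    print(f"|v| < 2^{n}: {group} -> at most {n} bits each")
```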

38 citations


Patent
22 Aug 2008
TL;DR: In this article, techniques and tools for encoding and decoding data values that are hierarchically organized are presented, where an encoder encodes data as a set that has a hierarchy of subsets with set symbols.
Abstract: Techniques and tools for encoding and decoding data values that are hierarchically organized are presented. For example, an encoder encodes data as a set that has a hierarchy of subsets with set symbols. In the encoding, the encoder evaluates the data values of the set and selectively encodes a symbol combination code that indicates the set symbols of multiple subsets of the set. Then, for each of the multiple subsets considered as a new set, the encoder selectively repeats the evaluating, selective encoding and selective repetition for the new set. In corresponding decoding, a decoder decodes data encoded as a set that has a hierarchy of subsets with set symbols. In some implementations, the encoding and decoding are adaptive and use a symbol alphabet with nested elements.

Patent
Min Lei Shaw1, Wen Huang Yu, Xun Guo1
12 Dec 2008
TL;DR: In this paper, an encoder for receiving a video frame and performing encoding processes to generate an encoded bitstream includes: a fidelity enhancement block, for performing a fidelity enhancing technique on the video frame utilizing a quad-tree partition, and generating fidelity enhancement information including a parameter associated with the quad tree partition structure.
Abstract: An encoder for receiving a video frame and performing encoding processes to generate an encoded bitstream includes: a fidelity enhancement block, for performing a fidelity enhancement technique on the video frame utilizing a quad-tree partition, and generating fidelity enhancement information including a parameter associated with the quad-tree partition structure; and an entropy coding block, coupled to the fidelity enhancement block, for encoding the fidelity enhancement information, and embedding the encoded fidelity enhancement information into the encoded bitstream.

Journal ArticleDOI
TL;DR: Simulations show that the proposed joint source-channel coding scheme can outperform the separated baseline scheme for finite coding length and comparable complexity and, as expected, it is much more robust to channel errors in the case of channel capacity mismatch.
Abstract: The straightforward application of Shannon's separation principle may entail a significant suboptimality in practical systems with limited coding delay and complexity. This is particularly evident when the lossy source code is based on entropy-coded quantization. In fact, it is well known that entropy coding is not robust to residual channel errors. In this paper, a joint source-channel coding scheme is advocated that combines the advantages and simplicity of entropy-coded quantization with the robustness of linear codes. The idea is to combine entropy coding and channel coding into a single linear encoding stage. If the channel is symmetric, the scheme can asymptotically achieve the optimal rate-distortion limit. However, its advantages are more clearly evident under finite coding delay and complexity. The sequence of quantization indices is decomposed into bitplanes, and each bitplane is independently mapped onto a sequence of channel coded symbols. The coding rate of each bitplane is chosen according to the bitplane conditional entropy rate. The use of systematic raptor encoders is proposed, in order to obtain a continuum of coding rates with a single basic encoding algorithm. Simulations show that the proposed scheme can outperform the separated baseline scheme for finite coding length and comparable complexity and, as expected, it is much more robust to channel errors in the case of channel capacity mismatch.

Journal ArticleDOI
TL;DR: The whole architecture of the binary coder is described in VHDL and synthesized for different configurations to show the implementation cost of some coding options and results show that the parallel symbol encoding allows higher efficiency.
Abstract: H.264/AVC offers critical advantages over other video compression schemes at the price of increased computational complexity. The efficiency of hardware video encoders depends on all modules embedded in the processing path. This paper presents the architecture of the H.264/AVC binary coder, which is the last stage of the video coder. The module conforms to the H.264/AVC High Profile and supports two binary coding modes: context adaptive binary arithmetic coding (CABAC) and context adaptive variable-length coding (CAVLC). The architecture saves a considerable amount of hardware resources since the two coding modes share the same logic and storage elements. Five versions of the arithmetic coding path are developed to study the area/performance tradeoff related to parallel symbol encoding. The implementation results show that parallel symbol encoding allows higher efficiency. The whole architecture of the binary coder is described in VHDL and synthesized for different configurations to show the implementation cost of some coding options. For both CAVLC and CABAC modes, the architecture achieves similar throughput, sufficient to support HDTV in real time.

Journal ArticleDOI
TL;DR: A novel rate-distortion-complexity (R-D-C) analysis for state-of-the-art wavelet video coding methods by explicitly modeling several aspects found in operational coders, i.e., embedded quantization, quadtree decompositions of block significance maps and context-adaptive entropy coding of subband blocks is presented.
Abstract: Analytical modeling of the performance of video coders is essential in a variety of applications, such as power-constrained processing, complexity-driven video streaming, etc., where information concerning rate, distortion, or complexity (and their interrelation) is required. In this paper, we present a novel rate-distortion-complexity (R-D-C) analysis for state-of-the-art wavelet video coding methods by explicitly modeling several aspects found in operational coders, i.e., embedded quantization, quadtree decompositions of block significance maps and context-adaptive entropy coding of subband blocks. This paper achieves two main goals. First, unlike existing R-D models for wavelet video coders, the proposed derivations reveal for the first time the expected coding behavior of specific coding algorithms (e.g., quadtree decompositions, coefficient significance, and refinement coding) and, therefore, can be used for a variety of coding mechanisms incorporating some or all the coding algorithms discussed. Second, the proposed modeling derives for the first time analytical estimates of the expected number of operations (complexity) of a broad class of wavelet video coding algorithms based on stochastic source models, the coding algorithm characteristics and the system parameters. This enables the formulation of an analytical model characterizing the complexity of various video decoding operations. As a result, this paper complements prior complexity-prediction research that is based on operational measurements. The accuracy of the proposed analytical R-D-C expressions is justified against experimental data obtained with a state-of-the-art motion-compensated temporal filtering based wavelet video coder, and several new insights are revealed on the different tradeoffs between rate-distortion performance and the required decoding complexity.

Posted Content
TL;DR: It is shown how universal codes can be used for solving some of the most important statistical problems for time series, and it turns out that, quite often, the suggested methods and tests are more powerful than known ones when applied in practice.
Abstract: We show how universal codes can be used for solving some of the most important statistical problems for time series. By definition, a universal code (or a universal lossless data compressor) can compress any sequence generated by a stationary and ergodic source asymptotically to the Shannon entropy, which, in turn, is the best achievable ratio for lossless data compressors. We consider finite-alphabet and real-valued time series and the following problems: estimation of the limiting probabilities for finite-alphabet time series and estimation of the density for real-valued time series, on-line prediction, regression, and classification (or problems with side information) for both types of time series, and the following problems of hypothesis testing: goodness-of-fit testing (or identity testing) and testing of serial independence. It is important to note that all problems are considered in the framework of classical mathematical statistics and, on the other hand, everyday methods of data compression (or archivers) can be used as a tool for estimation and testing. It turns out that, quite often, the suggested methods and tests are more powerful than known ones when applied in practice.

Patent
01 Aug 2008
TL;DR: In this paper, an adaptive scan method for image/video coding in accordance with the present invention comprises acts of calculating an average power of the transformed coefficients vector and acts rearranging the powers of the transform coefficients with descending order in the data block according to the power of transformed coefficients.
Abstract: An adaptive scan method for image/video coding in accordance with the present invention comprises acts of calculating an average power of the transformed coefficients vector and acts rearranging the powers of the transformed coefficients with descending order in the data block according to the power of transformed coefficients; Therefore, the adaptive scan method is dependent on different prediction mode witch has been coded in H.264 standard, and provides better rate-distortion performance in entropy coding to the conventional zig-zag scan.
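
A hedged sketch of that power-ordered scan, with invented names; one instance would be kept per prediction mode:

```python
# Track the running average power of each transform-coefficient position and
# scan positions in descending order of that power.
import numpy as np

class PowerOrderedScan:
    def __init__(self, num_coeffs=16):
        self.avg_power = np.zeros(num_coeffs)
        self.count = 0

    def update(self, coeff_block):
        """Accumulate the running average power per coefficient position."""
        p = coeff_block.reshape(-1).astype(np.float64) ** 2
        self.count += 1
        self.avg_power += (p - self.avg_power) / self.count

    def scan_order(self):
        """Positions sorted by descending average power (ties keep earlier index)."""
        return np.argsort(-self.avg_power, kind="stable")
```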

Patent
Lidong Xu1, Yi-Chen Chiu1
09 Jul 2008
TL;DR: Techniques are described for determining the parameters of an adaptive Wiener filter to apply to a video region; the parameters associated with the lowest rate distortion cost of the encoder are selected for transmission with the encoded video.
Abstract: Techniques are described that can be used to determine parameters of an adaptive Wiener filter to apply to a video region. The following parameters of the Wiener filter may be adjusted: coefficients, coefficient quantization, filter type, filter size, prediction mode, entropy encoding, and number of filter tables. The parameters associated with the lowest rate distortion cost of the encoder are selected for transmission with the encoded video. If not using adaptive Wiener filtering results in a lowest rate distortion cost, then adaptive Wiener filtering is not used for the video region. If using adaptive Wiener filtering results in a lowest rate distortion cost, then the parameters applied by the adaptive Wiener filtering that result in the lowest rate distortion cost are communicated with the filtered video region.
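
A minimal sketch of the rate-distortion selection loop implied above; the candidate parameter sets, distortion measure, bit counter, and lambda are placeholders rather than the patent's specifics:

```python
# Pick the Wiener filter parameters with the lowest RD cost; returning None
# means adaptive Wiener filtering is not used for this video region.
def choose_wiener_params(original, reconstructed, candidates,
                         apply_filter, distortion, bits, lam):
    # Baseline: no adaptive Wiener filtering, no parameter bits to send.
    best, best_cost = None, distortion(original, reconstructed)
    for params in candidates:
        filtered = apply_filter(reconstructed, params)
        cost = distortion(original, filtered) + lam * bits(params)
        if cost < best_cost:
            best, best_cost = params, cost
    return best, best_cost
```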

Patent
27 Mar 2008
TL;DR: An entropy coding apparatus as discussed by the authors includes a renormalization process and an encode-decision process that communicates with the renormalization process, and the encoder is an entropy encoder that is H.264 compliant.
Abstract: An entropy coding apparatus. In a specific embodiment, the entropy coding apparatus includes a renormalization process and an encode-decision process that communicates with the renormalization process. The encode-decision process is adapted to run in parallel with the renormalization process without the renormalization process being nested therein. In a more specific embodiment, the entropy coding apparatus includes an entropy encoder that is H.264 compliant. The encode-decision process includes a first mechanism for pre-computing certain parameters to eliminate the need to nest the renormalization process within the encode-decision process. The renormalization process and the encode-decision process are components of a Context Adaptive Binary Arithmetic Coding (CABAC) module.

Patent
25 Jan 2008
TL;DR: In this paper, the authors propose a method to compress and transmit borehole image data to the surface one pixelated trace at a time, where the compression methodology includes transform, quantization, and entropy encoding steps.
Abstract: Borehole image data is compressed and transmitted to the surface one pixelated trace at a time. The compression methodology typically includes transform, quantization, and entropy encoding steps. The invention advantageously provides sufficient data compression to enable conventional telemetry techniques (e.g., mud pulse telemetry) to be used for transmitting borehole images to the surface. By compressing and transmitting sensor data trace by trace, the invention also tends to significantly reduce latency.

Book
02 Dec 2008
TL;DR: A modification of the tree-based coding in SPIHT (Set Partitioning In Hierarchical Trees) is described, whose output bitstream can be decoded partially corresponding to a designated region of interest and is simultaneously quality and resolution scalable.
Abstract: This monograph describes current-day wavelet transform image coding systems. As in the first part, steps of the algorithms are explained thoroughly and set apart. An image coding system consists of several stages: transformation, quantization, set partition or adaptive entropy coding or both, decoding including rate control, inverse transformation, de-quantization, and optional processing (see Figure 1.6). Wavelet transform systems can provide many desirable properties besides high efficiency, such as scalability in quality, scalability in resolution, and region-of-interest access to the coded bitstream. These properties are built into the JPEG2000 standard, so its coding will be fully described. Since JPEG2000 codes subblocks of subbands, other methods, such as SBHP (Subband Block Hierarchical Partitioning) [3] and EZBC (Embedded Zero Block Coder) [8], that code subbands or its subblocks independently are also described. The emphasis in this part is the use of the basic algorithms presented in the previous part in ways that achieve these desirable bitstream properties. In this vein, we describe a modification of the tree-based coding in SPIHT (Set Partitioning In Hierarchical Trees) [15], whose output bitstream can be decoded partially corresponding to a designated region of interest and is simultaneously quality and resolution scalable. This monograph is extracted and adapted from the forthcoming textbook entitled Digital Signal Compression: Principles and Practice by William A. Pearlman and Amir Said, Cambridge University Press, 2009.

Patent
22 Apr 2008
TL;DR: In this article, a method and apparatus for entropy-encoding/entropy-decoding video data is presented, where the coefficients of the frequency domain are binarized adaptively according to whether the frequencies of the coefficients are high or low.
Abstract: Provided are a method and apparatus for entropy-encoding/entropy-decoding video data. The method of entropy-encoding video data includes binarizing coefficients of the frequency domain, which are generated by transforming a residual block of a current block into the frequency domain, using different binarization methods and performing binary arithmetic coding on the binarized coefficients. In this way, the coefficients are binarized adaptively according to whether the frequencies of the coefficients are high or low, thereby improving the compression efficiency of the video data.
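
A loose illustration of frequency-dependent binarization under assumed details (low-frequency coefficients get an Exp-Golomb-style code, high-frequency ones a truncated-unary code with an escape); it only produces the bin strings a binary arithmetic coder would consume and is not the patented mapping:

```python
# Position-dependent binarization: low-frequency positions tend to hold large
# levels, high-frequency positions mostly small ones.
def truncated_unary(v, cmax):
    return "1" * min(v, cmax) + ("" if v >= cmax else "0")

def exp_golomb0(v):
    code = bin(v + 1)[2:]                  # v+1 in binary
    return "0" * (len(code) - 1) + code    # zero prefix, then the value

def binarize_coeff(abs_level, zigzag_pos, low_freq_cutoff=4, cmax=3):
    if zigzag_pos < low_freq_cutoff:            # low-frequency: larger levels expected
        return exp_golomb0(abs_level)
    prefix = truncated_unary(abs_level, cmax)   # high-frequency: mostly small levels
    if abs_level < cmax:
        return prefix
    return prefix + exp_golomb0(abs_level - cmax)   # escape for rare large levels
```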

Journal ArticleDOI
TL;DR: This letter presents two approaches to accelerate the coding operation substantially, component-level parallelism and pipeline techniques capable of processing high-bitrate video data in a macroblock (MB)-level pipelined codec architecture and a specific part of the coding process, i.e., residual block coding.
Abstract: In H.264/AVC and the variants, the coding of context-based adaptive variable length codes (CAVLC) requires demanding operations, particularly at high bitrates such as 100 Mbps. This letter presents two approaches to accelerate the coding operation substantially. Firstly, in the architectural aspect, we propose component-level parallelism and pipeline techniques capable of processing high-bitrate video data in a macroblock (MB)-level pipelined codec architecture. The second approach focuses on a specific part of the coding process, i.e., the residual block coding, in which the coefficient levels are coded without using look-up tables so we minimize the pertaining logic depth in the critical path, and we achieve higher operating clock frequencies. Additionally, two coefficient levels are processed in parallel by exploiting a look-ahead technique. The resulting architecture, merged in the MB-level pipelined codec system, is capable of coding up to 100 Mbps bitstreams in real-time, thus accommodating the real-time encoding of 1080p@60 Hz video.

Patent
01 Aug 2008
TL;DR: In this article, the image is divided into a plurality of arrays, each of which is separately transformed using a wavelet transformation, and the resulting wavelet coefficients are then encoded using an entropy encoding scheme to provide a compressed data set.
Abstract: The invention provides a system and method for compressing an electronic image data set. The image is divided into a plurality of arrays, each of which is separately transformed using a wavelet transformation. The resulting wavelet coefficients are then encoded using an entropy encoding scheme to provide a compressed data set.

Patent
Sanjeev Mehrotra1, Wei-ge Chen1
16 May 2008
TL;DR: In this paper, an audio encoder determines a Huffman code from a table to use for encoding a vector of audio data symbols, where the determining is based on a sum of values of the audio symbols.
Abstract: An audio encoder performs entropy encoding of audio data. For example, an audio encoder determines a Huffman code from a Huffman code table to use for encoding a vector of audio data symbols, where the determination is based on a sum of values of the audio data symbols. An audio decoder performs corresponding entropy decoding.
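
A hedged sketch of choosing a Huffman table from the sum of the symbol values; the tables and threshold below are invented placeholders, not the encoder's trained tables:

```python
# Two prefix-free tables over the same alphabet: one suited to mostly-zero
# vectors, one to vectors with larger values. A one-bit table index is sent
# so the decoder knows which table was used.
SMALL_VALUES_TABLE = {0: "0", 1: "10", 2: "110", 3: "111"}   # favors tiny symbols
LARGE_VALUES_TABLE = {0: "110", 1: "111", 2: "0", 3: "10"}   # favors bigger symbols

def encode_vector(symbols, threshold=3):
    table = SMALL_VALUES_TABLE if sum(symbols) <= threshold else LARGE_VALUES_TABLE
    table_id = "0" if table is SMALL_VALUES_TABLE else "1"
    return table_id + "".join(table[s] for s in symbols)

print(encode_vector([0, 1, 0, 0]))   # sparse vector -> small-values table
print(encode_vector([2, 3, 2, 1]))   # denser vector -> large-values table
```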

Journal ArticleDOI
TL;DR: In this article, the authors investigated the high resolution quantization and entropy coding problem for solutions of stochastic differential equations under L^p[0,1]-norm distortion and found explicit high resolution formulas in terms of the average diffusion coefficient seen by the process.

Book ChapterDOI
01 Jan 2008
TL;DR: This chapter explores various schemes of entropy encoding and how they work mathematically where applicable, and investigates their applications in lossless compression.
Abstract: Entropy encoding is a method of lossless compression that is performed on an image after the quantization stage. It enables one to represent an image in a more efficient way with less memory needed for storage or transmission. In this chapter, we will explore various schemes of entropy encoding and how they work mathematically where applicable.
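
A small generic example in the spirit of this chapter (not taken from it): the Shannon entropy of the quantized symbols lower-bounds the average code length any lossless entropy coder can achieve, which is what makes entropy encoding after quantization worthwhile:

```python
# Compare the entropy of a quantized symbol stream to a fixed-size encoding.
from collections import Counter
from math import log2

def entropy_bits_per_symbol(data):
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * log2(c / total) for c in counts.values())

quantized = [0, 0, 0, 0, 1, 0, 2, 1, 0, 0, 1, 0, 3, 0, 0, 0]
print(f"entropy:    {entropy_bits_per_symbol(quantized):.3f} bits/symbol")
print(f"fixed-size: {log2(4):.3f} bits/symbol")   # 2 bits for 4 symbol values
```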

Journal ArticleDOI
TL;DR: Experimental results from applying this scheme and attacking in various ways such as blurring, sharpening, cropping, Gaussian noise addition, and geometrical modification showed that the watermark embedded by this scheme has very high imperceptibility and robustness to the attacks.
Abstract: This paper proposes a digital watermarking scheme to protect the ownership of video content compressed with the H.264/AVC main profile. The scheme is performed during the CABAC (Context-based Adaptive Binary Arithmetic Coding) process, which is the entropy coding of the main profile. It uses the contexts extracted during the context modeling process of CABAC to position the watermark bits by simply checking the context values and determining the coefficients. The watermarking process is as simple as replacing the LSB (Least Significant Bit) of the corresponding coefficient with the watermark bit. Experimental results from applying this scheme and attacking it in various ways, such as blurring, sharpening, cropping, Gaussian noise addition, and geometrical modification, showed that the watermark embedded by this scheme has very high imperceptibility and robustness to the attacks. Thus, we expect it to serve as a good watermarking scheme, especially in applications where watermarking must be performed during the compression process with minimal additional processing.

Patent
30 Dec 2008
TL;DR: In this article, a method and system for encoding a plurality of integers with variable-length code tables constructed by combining of structured code tables is presented, where each code table has an associated set of integer values; the sets are disjoint and exhaustive.
Abstract: A method and system are provided for encoding a plurality of integers with variable-length code tables constructed by combining a plurality of structured code tables. Each code table has an associated set of integer values; the sets are disjoint and exhaustive, so that every integer appears in exactly one set. An integer is encoded using the codebook associated with the set in which the integer appears.
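
A hypothetical sketch of the structured-table idea: integers are split into disjoint, exhaustive ranges, the range index is sent with a unary code, and the offset within the range uses a fixed number of bits (an Elias-gamma-like construction, chosen here only for illustration):

```python
# Every non-negative integer falls into exactly one range {0}, {1,2}, {3..6}, ...
# The range index acts as the "code table" selector; the offset completes the code.
def encode_integer(n):
    assert n >= 0
    group, base, size = 0, 0, 1
    while n >= base + size:          # find the disjoint range that contains n
        base += size
        size *= 2
        group += 1
    prefix = "1" * group + "0"       # unary code for the range (table) index
    if group == 0:
        return prefix                # range {0} needs no offset bits
    return prefix + format(n - base, f"0{group}b")   # offset within the range

for n in range(6):
    print(n, encode_integer(n))
```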

Patent
11 Jun 2008
TL;DR: In this paper, a method for decoding entropy coding residual data and so on which is specially used in an H.264 decoding MAE and a device is described, and the method adopts collaborative work of software and hardware, parts which occupy a large number of resources during decoding process are realized by means of a software method.
Abstract: The invention discloses a method for decoding entropy coding residual data and so on which is specially used in an H.264 decoding MAE and a device. Because the method of the invention adopts collaborative work of software and hardware, parts which occupy a large number of resources during the decoding process are realized by means of a software method. The software program method is that: by the means of utilizing an embedded cpu to replace hardware circuits for operation, structural complexity and computation complexity are greatly reduced under the precondition that reliable decoding efficiency and decoding quality of an entropy decoder are guaranteed to be obtained, thereby the contradiction between structure and efficiency is effectively solved and decoding of CAVLC and CABAC entropy coding bit streams is smoothly realized. The entropy decoder comprises a BSI access module, a ue/se/te decoding unit, a CAVLC residual decoding module, a software module, a binary arithmetic decoding unit, an entropy decoding control module, a CABAC residual decoding module and an anti-sweep RAM.