Showing papers on "Entropy encoding published in 2005"


Journal ArticleDOI
TL;DR: The secret data is embedded into the difference values of a given image after a prediction stage is performed; experimental results show the image quality is better than that of Jpeg-Jsteg and its improved scheme.

130 citations


Journal ArticleDOI
TL;DR: This paper describes a method for compressing floating-point coordinates with predictive coding in a completely lossless manner and reports compression results using the popular parallelogram predictor, but the approach will work with any prediction scheme.
Abstract: The size of geometric data sets in scientific and industrial applications is constantly increasing. Storing surface or volume meshes in standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Scientists and engineers often refrain from using mesh compression because currently available schemes modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid to enable efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe a method for compressing floating-point coordinates with predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are broken up into sign, exponent, and mantissa and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. We report compression results using the popular parallelogram predictor, but our approach will work with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.
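
A minimal Python sketch of the component-wise correction idea described above, assuming 32-bit IEEE-754 floats and a prediction already computed (e.g. by the parallelogram rule); the paper compresses these corrections with context-based arithmetic coding, which is omitted here:

```python
import struct

def float_fields(f):
    # Reinterpret a 32-bit IEEE-754 float as (sign, exponent, mantissa).
    bits = struct.unpack(">I", struct.pack(">f", f))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

actual, predicted = 3.14159, 3.25  # predicted e.g. by the parallelogram rule
(sa, ea, ma), (sp, ep, mp) = float_fields(actual), float_fields(predicted)

# Corrections for each component; the exponent correction would also
# select the arithmetic-coding context used for the mantissa bits.
corrections = (sa ^ sp, ea - ep, ma - mp)
print(corrections)
```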

111 citations


Patent
Zoran Fejzo1
21 Mar 2005
TL;DR: In this article, a lossless audio codec segments audio data within each frame to improve compression performance subject to a constraint that each segment must be fully decodable and less than a maximum size.
Abstract: A lossless audio codec segments audio data within each frame to improve compression performance subject to a constraint that each segment must be fully decodable and less than a maximum size. For each frame, the codec selects the segment duration and coding parameters, e.g., a particular entropy coder and its parameters for each segment, that minimizes the encoded payload for the entire frame subject to the constraints. Distinct sets of coding parameters may be selected for each channel or a global set of coding parameters may be selected for all channels. Compression performance may be further enhanced by forming M/2 decorrelation channels for M-channel audio. The triplet of channels (basis, correlated, decorrelated) provides two possible pair combinations, (basis, correlated) and (basis, decorrelated), that can be considered during the segmentation and entropy coding optimization to further improve compression performance.
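
As a rough illustration of the per-frame search the abstract describes, here is a hedged Python sketch that picks, from a few candidate segment durations, the one minimizing the total payload when each segment is coded with its best parameter; Rice coding stands in for "a particular entropy coder and its parameters", and all names and duration values are hypothetical:

```python
def rice_bits(samples, k):
    # Bits to Rice-code the samples with parameter k:
    # unary-coded quotient (+1 stop bit) plus k remainder bits each.
    return sum((abs(s) >> k) + 1 + k for s in samples)

def choose_segmentation(frame, durations=(256, 512, 1024)):
    # Exhaustively try each segment duration; for each segment pick
    # the cheapest Rice parameter, then keep the best overall split.
    best = None
    for d in durations:
        segments = [frame[i:i + d] for i in range(0, len(frame), d)]
        cost = sum(min(rice_bits(seg, k) for k in range(16)) for seg in segments)
        if best is None or cost < best[0]:
            best = (cost, d)
    return best  # (total payload bits, chosen segment duration)
```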

99 citations


Proceedings Article
01 Dec 2005
TL;DR: Experimental results show that the proposed coding scheme attains the best coding performance among the current state-of-the-art lossless coding schemes.
Abstract: This paper proposes an efficient lossless coding scheme for still images. The scheme employs a block-adaptive prediction technique to remove spatial redundancy in a given image. The resulting prediction errors are encoded using context-adaptive arithmetic coding. Several coding parameters, which must be sent to a decoder as side information, are iteratively optimized for each image so that the number of coding bits, including the side information, is minimized. Moreover, quadtree-based variable block-size partitioning is introduced into the above adaptive prediction technique. Experimental results show that the proposed coding scheme attains the best coding performance among the current state-of-the-art lossless coding schemes.

86 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: JPEG-LS and JPEG-2000 are the latest ISO/ITU standards for compressing continuous-tone images; JPEG-LS is based on the LOCO-I algorithm, which was chosen as the basis of the standard due to its good balance between complexity and efficiency.
Abstract: Lossless compression is necessary for many high performance applications such as geophysics, telemetry, nondestructive evaluation, and medical imaging, which require exact recovery of original images. Lossless image compression can always be modeled as a two-stage procedure: decorrelation and entropy coding. The first stage removes spatial redundancy or inter-pixel redundancy by means of run-length coding, SCAN language based methodology, predictive techniques, transform techniques, and other types of decorrelation techniques. The second stage, which includes Huffman coding, arithmetic coding, and LZW, removes coding redundancy. Nowadays, the performance of entropy coding techniques is very close to its theoretical bound, and thus most research activity concentrates on the decorrelation stage. JPEG-LS and JPEG-2000 are the latest ISO/ITU standards for compressing continuous-tone images. JPEG-LS is based on the LOCO-I algorithm, which was chosen as the basis of the standard due to its good balance between complexity and efficiency. Another technique proposed for JPEG-LS was CALIC. JPEG-2000 was designed with the main objective of providing efficient compression for a wide range of compression ratios.
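
The two-stage structure is easy to make concrete. Below is a hedged Python sketch (illustrative, not from the paper): stage one decorrelates a pixel row with a simple previous-pixel predictor, and stage two is represented by a zeroth-order entropy estimate showing why the residuals are cheaper to code:

```python
import math
from collections import Counter

def decorrelate(row):
    # Stage 1: predictive decorrelation (previous-pixel predictor).
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def entropy(symbols):
    # Zeroth-order entropy in bits/symbol, a bound for stage-2 coders.
    counts, n = Counter(symbols), len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

row = [100, 101, 103, 104, 104, 105, 107, 108]
print(entropy(row), entropy(decorrelate(row)))  # residuals cluster near 0
```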

81 citations


Patent
Sridhar Srinivasan1
30 Nov 2005
TL;DR: In this paper, the adaptive scan order re-arranges the scan order by applying a conditional exchange operation on adjacently ordered coefficient locations via a single traversal of scan order per update of the statistical analysis.
Abstract: A digital media codec adaptively re-arranges a coefficient scan order of transform coefficients in accordance with the local statistics of the digital media, so that the coefficients can be encoded more efficiently using entropy encoding. The adaptive scan ordering is applied causally at encoding and decoding to avoid explicitly signaling the scan order to the decoder in the compressed digital media stream. For computational efficiency, the adaptive scan order re-arranges the scan order by applying a conditional exchange operation on adjacently ordered coefficient locations via a single traversal of the scan order per update of the statistical analysis.
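
The conditional-exchange update lends itself to a very small sketch. Assuming a running count of non-zero occurrences per coefficient position (a hypothetical stand-in for the patent's "local statistics"), one traversal swaps adjacent scan entries that are out of order:

```python
def update_scan_order(scan, nonzero_count):
    # One pass over the scan order: conditionally exchange adjacent
    # positions so more frequently non-zero coefficients come earlier.
    # Run causally at both encoder and decoder, so no signaling is needed.
    for i in range(len(scan) - 1):
        if nonzero_count[scan[i + 1]] > nonzero_count[scan[i]]:
            scan[i], scan[i + 1] = scan[i + 1], scan[i]
```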

74 citations


Proceedings ArticleDOI
14 Nov 2005
TL;DR: An efficient algorithm for compression of dynamic time-consistent 3D meshes that contains a large degree of temporal statistical dependencies that can be exploited for compression using DPCM is presented.
Abstract: An efficient algorithm for compression of dynamic time-consistent 3D meshes is presented. Such a sequence of meshes contains a large degree of temporal statistical dependencies that can be exploited for compression using DPCM. The vertex positions are predicted at the encoder from a previously decoded mesh. The difference vectors are further clustered in an octree approach. Only a representative for a cluster of difference vectors is further processed, providing a significant reduction of data rate. The representatives are scaled and quantized and finally entropy coded using CABAC, the arithmetic coding technique used in H.264/MPEG4-AVC. The mesh is then reconstructed at the encoder for prediction of the next mesh. In our experiments we compare the efficiency of the proposed algorithm, in terms of bit-rate and quality, to static mesh coding and interpolator compression, indicating a significant improvement in compression efficiency.
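
A hedged sketch of the DPCM step for time-consistent meshes (vertex correspondence across frames is assumed; the octree clustering, quantization, and CABAC stages are omitted):

```python
def dpcm_difference_vectors(current_mesh, previous_decoded_mesh):
    # Predict each vertex from the co-indexed vertex of the previously
    # decoded mesh; only the difference vectors are processed further.
    return [(cx - px, cy - py, cz - pz)
            for (cx, cy, cz), (px, py, pz)
            in zip(current_mesh, previous_decoded_mesh)]
```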

73 citations


Journal Article
TL;DR: The basic elements of the ALS codec are described with a focus on prediction, entropy coding, and related tools and the most important applications of this new lossless audio format are pointed out.
Abstract: MPEG-4 Audio Lossless Coding (ALS) is a new extension of the MPEG-4 audio coding family. The ALS core codec is based on forward-adaptive linear prediction, which offers remarkable compression together with low complexity. Additional features include long-term prediction, multichannel coding, and compression of floating-point audio material. In this paper, authors who have actively contributed to the standard describe the basic elements of the ALS codec with a focus on prediction, entropy coding, and related tools. We also present the latest developments in the standardization process and point out the most important applications of this new lossless audio format.
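
A hedged sketch of forward-adaptive linear prediction as the abstract outlines it: coefficients are estimated per frame and sent as side information, and only the integer residual is entropy coded (coefficient estimation and the Rice/Golomb coding stage are omitted):

```python
def lpc_residual(samples, coeffs):
    # Forward-adaptive LPC: predict each sample from the previous
    # `order` samples, round the prediction, keep the residual.
    order = len(coeffs)
    return [samples[n] - int(round(sum(c * samples[n - 1 - i]
                                       for i, c in enumerate(coeffs))))
            for n in range(order, len(samples))]
```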

67 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: A novel type of optimized rateless code, called a Matrioshka code, is proposed to deal with the particular conditions of Slepian-Wolf encoding and to address two shortcomings of currently available Slepian-Wolf schemes.

Abstract: The design and optimization of rateless codes for Slepian-Wolf encoding are considered. Rateless codes are proposed to address two shortcomings of currently available Slepian-Wolf schemes: their fragility to changing source statistics, and their inability to guarantee successful decoding for practical block lengths. We propose a novel type of optimized rateless code, called a Matrioshka code, to deal with the particular conditions of Slepian-Wolf encoding.

54 citations


Journal ArticleDOI
TL;DR: Three rate control methods are proposed to efficiently reduce both the computational complexity and memory usage of the conventional PCRD method; simulation results suggest that they provide different tradeoffs among visual quality, computational complexity, coding delay, and working memory size.

Abstract: JPEG2000 is the new image coding standard which can provide superior rate-distortion performance over the old JPEG standard. However, the conventional post-compression rate-distortion (PCRD) optimization scheme in JPEG2000 is not efficient. It requires entropy encoding all available data even though a large portion of them will not be included in the final output. In this paper, three rate control methods are proposed to efficiently reduce both the computational complexity and memory usage over the conventional PCRD method. The first method, called successive bit-plane rate allocation (SBRA), allocates the bit rate by using the currently available rate-distortion information only. The second method, called priority scanning rate allocation (PSRA), allocates bits according to certain prioritized ordering. The third method uses PSRA to achieve optimal truncation as PCRD without encoding of all the image details and is called priority scanning with optimal truncation (PSOT). Simulation results suggest that the three proposed methods provide different tradeoffs among visual quality, computational complexity, coding delay, and working memory size. SBRA is memoryless and causal and requires the least computational complexity, lowest coding delay and achieves good visual quality. PSRA achieves higher PSNR than SBRA at the expense of larger working memory size and longer coding delay. PSOT gives the best PSNR but requires even more computation, delay and memory.
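
To make the rate-allocation problem concrete, here is a hedged Python sketch of a greedy slope-based allocator in the spirit of PCRD (illustrative only; not the paper's SBRA/PSRA/PSOT methods): given cumulative rate-distortion points per code-block, it repeatedly takes the steepest remaining distortion-per-bit step until the rate budget is exhausted:

```python
def greedy_rd_allocate(blocks, budget):
    # blocks: {id: [(rate, distortion), ...]} with cumulative rates and
    # decreasing distortions (assumed convex). Returns truncation points.
    picks, spent = {b: 0 for b in blocks}, 0
    while True:
        best = None
        for b, pts in blocks.items():
            i = picks[b]
            if i + 1 < len(pts):
                dr = pts[i + 1][0] - pts[i][0]   # extra bits
                dd = pts[i][1] - pts[i + 1][1]   # distortion removed
                if dr > 0 and (best is None or dd / dr > best[0]):
                    best = (dd / dr, b, dr)
        if best is None or spent + best[2] > budget:
            return picks
        picks[best[1]] += 1
        spent += best[2]
```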

53 citations


Patent
12 Oct 2005
TL;DR: In this paper, the coefficients of independent spatial transforms can be arranged into subbands, and the encoding of the subbands can utilize spatial information, coded block flags, and end of block flags to reduce bit rate.

Abstract: A method, program product and device for encoding and/or decoding video data can include treating coefficients in the enhancement layer corresponding to a non-zero coefficient in the base layer differently than a coefficient in the enhancement layer corresponding to a zero coefficient in the base layer. The sign of the base layer quantized coefficient can also be used, as it indicates how the reconstructed error differs from the original signal. The coefficients of independent spatial transforms can be arranged into subbands, and the encoding of the subbands can utilize spatial information, coded block flags, and end of block flags to reduce bit rate. Rather than feeding the coefficients into a context-based adaptive binary arithmetic coding engine on a block-by-block basis, the subbands can be passed into the engine. Subband coefficients may be removed in a controlled manner, leading to a reduced bit-rate.

Proceedings ArticleDOI
19 Jun 2005
TL;DR: Efficient solutions for the arithmetic coder and the renormalizer are provided, along with an FPGA implementation of the proposed scheme capable of a 54 Mbps encoding rate and its test results.

Abstract: One key technique for improving the coding efficiency of the H.264 video standard is the entropy coder, the context-adaptive binary arithmetic coder (CABAC). However, the complexity of the CABAC encoding process is far higher than that of table-driven entropy encoding schemes such as Huffman coding. CABAC is also bit serial, and its multi-bit parallelization is extremely difficult. For a high definition video encoder, multi-gigahertz RISC processors would be needed to implement the CABAC encoder. In this paper, the authors provide efficient solutions for the arithmetic coder and the renormalizer. An FPGA implementation of the proposed scheme capable of a 54 Mbps encoding rate and test results are presented. A 0.18 µm ASIC synthesis and simulation shows an 87 Mbps encoding rate utilizing an area of 0.42 mm².
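
The bit-serial nature mentioned above is visible in the renormalization loop itself. Below is a hedged sketch of the classic Witten-Neal-Cleary renormalization with carry handling via outstanding bits; CABAC's routine is equivalent in spirit but operates on dedicated 9-bit range and 10-bit low registers:

```python
HALF, QUARTER = 0x8000, 0x4000  # bounds for a 16-bit coder interval

def renormalize(low, high, out_bits, pending):
    # Emit determined bits and rescale the interval; `pending` counts
    # outstanding bits awaiting carry resolution. Each output bit
    # depends on the previous state, which is why the loop is serial.
    while True:
        if high < HALF:
            out_bits.append(0); out_bits.extend([1] * pending); pending = 0
        elif low >= HALF:
            out_bits.append(1); out_bits.extend([0] * pending); pending = 0
            low -= HALF; high -= HALF
        elif low >= QUARTER and high < HALF + QUARTER:
            pending += 1
            low -= QUARTER; high -= QUARTER
        else:
            return low, high, pending
        low, high = 2 * low, 2 * high + 1
```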

Patent
15 Jul 2005
TL;DR: In this article, techniques and tools for reordering of spectral coefficients in encoding and decoding are described, for certain types and patterns of content, coefficient reordering reduces redundancy that is due to periodic patterns in the spectral coefficients.
Abstract: Techniques and tools for reordering of spectral coefficients in encoding and decoding are described herein. For certain types and patterns of content, coefficient reordering reduces redundancy that is due to periodic patterns in the spectral coefficients, making subsequent entropy encoding more efficient. For example, an audio encoder receives spectral coefficients logically organized along one dimension such as frequency, reorders at least some of the spectral coefficients, and entropy encodes the spectral coefficients after the reordering. Or, an audio decoder receives entropy encoded information for such spectral coefficients, entropy decodes the information, and reverses reordering of at least some of the spectral coefficients.
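
A hedged sketch of one plausible reordering (illustrative, not the patent's exact rule): if the spectral coefficients repeat with a known period, gathering co-phased coefficients together turns the periodic pattern into long runs of similar values, which subsequent entropy coding handles well:

```python
def reorder_periodic(coeffs, period):
    # Gather coefficients with the same phase within the period, so
    # values that recur periodically become contiguous.
    return [coeffs[p + period * i]
            for p in range(period)
            for i in range((len(coeffs) - p + period - 1) // period)]

# Example: a period-4 pattern becomes four constant runs.
print(reorder_periodic([9, 0, 0, 1] * 4, 4))
# [9, 9, 9, 9, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
```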

Patent
15 Jul 2005
TL;DR: In this article, techniques and tools for prediction of spectral coefficients in encoding and decoding are described; coefficient prediction exploits correlation between adjacent spectral coefficients, making subsequent entropy encoding more efficient.
Abstract: Techniques and tools for prediction of spectral coefficients in encoding and decoding are described herein. For certain types and patterns of content, coefficient prediction exploits correlation between adjacent spectral coefficients, making subsequent entropy encoding more efficient. For example, an audio encoder predictively codes quantized spectral coefficients in the quantized domain and entropy encodes results of the predictive coding. Or, for a particular quantized spectral coefficient, an audio decoder entropy decodes a difference value, computes a predictor in the quantized domain, and combines the predictor and the difference value.
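
A hedged sketch of the simplest instance of this idea, using the previous quantized coefficient as the predictor (the patent covers more general predictors): the encoder codes differences and the decoder reverses them exactly, since everything stays in the quantized (integer) domain:

```python
def predict_encode(q_coeffs):
    # Code each quantized coefficient as a difference from its predictor.
    return [q_coeffs[0]] + [q_coeffs[i] - q_coeffs[i - 1]
                            for i in range(1, len(q_coeffs))]

def predict_decode(diffs):
    # Combine predictor and difference value; exact inverse of the above.
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out

assert predict_decode(predict_encode([5, 6, 6, 4, 4, 4])) == [5, 6, 6, 4, 4, 4]
```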

Patent
So-Young Kim1, Jeong-Hoon Park1, Sang-Rae Lee1, Seung-Ran Park1, Yu-mi Sohn1 
07 Sep 2005
TL;DR: In this article, the color image encoding method includes transforming chrominance components of a color image in each of two or more inter-prediction modes, calculating costs for the conversion values in each mode using a predetermined cost function, outputting the conversion values of the selected inter-prediction mode, and entropy encoding the output conversion values.

Abstract: A color image encoding and decoding method and apparatus use a correlation between chrominance components in order to improve coding efficiency. The color image encoding method includes: transforming chrominance components of a color image in each of two or more inter-prediction modes; calculating costs for the conversion values in each of the two or more inter-prediction modes using a predetermined cost function; selecting one of the two or more inter-prediction modes based on the calculation result and outputting the conversion values of the selected inter-prediction mode; and entropy encoding the output conversion values.
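
The mode-selection step reduces to a small loop. A hedged sketch with hypothetical names (`transform_in_mode`, `cost` are stand-ins, not from the patent): transform the chrominance components in every candidate inter-prediction mode, score each with the cost function, and keep the cheapest:

```python
def select_mode(chroma_block, modes, transform_in_mode, cost):
    # Evaluate each candidate inter-prediction mode and return the
    # conversion values of the cheapest one for entropy encoding.
    scored = [(cost(transform_in_mode(chroma_block, m)), m) for m in modes]
    best_cost, best_mode = min(scored, key=lambda t: t[0])
    return transform_in_mode(chroma_block, best_mode)
```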

Journal ArticleDOI
TL;DR: It is found that the spirit of their result, namely, the sufficiency of time-sharing scalar quantizers for attaining optimum performance within the family of causal source codes, extends to many scenarios involving availability of side information (at both encoder and decoder, or only on one side).
Abstract: We study the effect of the introduction of side information into the causal source coding setting of Neuhoff and Gilbert. We find that the spirit of their result, namely, the sufficiency of time-sharing scalar quantizers (followed by appropriate lossless coding) for attaining optimum performance within the family of causal source codes, extends to many scenarios involving availability of side information (at both encoder and decoder, or only on one side). For example, in the case where side information is available at both encoder and decoder, we find that time-sharing side-information-dependent scalar quantizers (at most two for each side-information symbol) attains optimum performance. This remains true even when the reproduction sequence is allowed noncausal dependence on the side information and even for the case where the source and the side information, rather than consisting of independent and identically distributed (i.i.d.) pairs, form, respectively, the output of a memoryless channel and its stationary ergodic input.

Journal ArticleDOI
TL;DR: A number of novel improvements to the mesh-based coding scheme for 3-D brain magnetic resonance images are proposed and evaluated, including elimination of the clinically irrelevant background leading to meshing of only the brain part of the image and content-based (adaptive) mesh generation using spatial edges and optical flow between two consecutive slices.
Abstract: We propose and evaluate a number of novel improvements to the mesh-based coding scheme for 3-D brain magnetic resonance images. This includes: 1) elimination of the clinically irrelevant background leading to meshing of only the brain part of the image; 2) content-based (adaptive) mesh generation using spatial edges and optical flow between two consecutive slices; 3) a simple solution for the aperture problem at the edges, where an accurate estimation of motion vectors is not possible; and 4) context-based entropy coding of the residues after motion compensation using affine transformations. We address only lossless coding of the images, and compare the performance of uniform and adaptive mesh-based schemes. The bit rates achieved (about 2 bits per voxel) by these schemes are comparable to those of the state-of-the-art three-dimensional (3-D) wavelet-based schemes. The mesh-based schemes have also been shown to be effective for the compression of 3-D brain computed tomography data. Adaptive mesh-based schemes perform marginally better than the uniform mesh-based methods, at the expense of increased complexity.

Patent
09 Mar 2005
TL;DR: The macroblock encoding order is selected to ease pipeline processing of macroblocks in an image encoding apparatus that encodes using intra prediction, and to parallelize motion vector searches over a plurality of macroblocks in an image encoding method that uses a PMV (prediction motion vector).

Abstract: PROBLEM TO BE SOLVED: To ease pipeline processing of macroblocks in an image encoding apparatus that encodes using intra prediction, and to parallelize motion vector searches over a plurality of macroblocks in an image encoding method that uses a PMV (prediction motion vector). SOLUTION: A macroblock order selecting means 13 selects the macroblock to be encoded. It chooses the encoding order so that the difference between the order numbers of the present macroblock and the macroblock it references in intra prediction or in the motion vector search is 2 or more. Intra prediction, motion compensation prediction, coding mode selection, DCT, quantization, and entropy encoding are then applied to the selected macroblock. COPYRIGHT: (C)2006,JPO&NCIPI

Proceedings ArticleDOI
27 Apr 2005
TL;DR: An efficient CAVLC design with two-stage block pipelining scheme for parallel processing of two 4/spl times/4-blocks and a zero skipping technique is adopted to reduce up to 90% of cycles at low bitrates.
Abstract: Direct VLSI implementation of context-based adaptive variable length coding (CAVLC) for residues, as a modification from conventional run-length coding, will lead to low throughput and utilization. In this paper, an efficient CAVLC design is proposed. The main concept is the two-stage block pipelining scheme for parallel processing of two 4×4 blocks. When one block is processed by the scanning engine to collect the required symbols, its previous block is handled by the coding engine to translate symbols into bitstream. The dual-block-pipelined architecture doubles the throughput and utilization of CAVLC at high bitrates. Moreover, a zero skipping technique is adopted to reduce up to 90% of cycles at low bitrates. Last but not least, exponential-Golomb coding for other general symbols and bitstream encapsulation for the network abstraction layer are integrated with the CAVLC engine as a complete entropy coder for the H.264/AVC baseline profile. Simulation results show that our design is capable of real-time processing for 1920×1088 30fps videos with 23.6K logic gates at 100MHz.

Proceedings ArticleDOI
24 Jun 2005
TL;DR: A new algorithm for the coding of depth images is proposed that provides an efficient representation of smooth regions as well as geometric features such as object contours and achieves a bit-rate as low as 0.33 bit/pixel, without any entropy coding.
Abstract: An efficient way to transmit multi-view images is to send the texture image together with a corresponding depth image. The depth image specifies the distance between each pixel and the camera. With this information, arbitrary views can be generated at the decoder. In this paper, we propose a new algorithm for the coding of depth images that provides an efficient representation of smooth regions as well as geometric features such as object contours. Our algorithm uses a segmentation procedure based on a quadtree decomposition and models the depth image content with piecewise linear functions. We achieved a bit-rate as low as 0.33 bit/pixel, without any entropy coding. An attractive property of the coding algorithm is that, by exploiting specific properties of depth images, no degradation appears along discontinuities, which is important for perceived depth.
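
A hedged sketch of a quadtree decomposition with piecewise-linear modeling (the tolerance, minimum tile size, and split criterion here are hypothetical; the paper's exact rule may differ): a tile becomes a leaf when a least-squares plane fits its depth values within tolerance, otherwise it splits into four quadrants:

```python
import numpy as np

def plane_fits(tile, tol):
    # Least-squares plane z = a*x + b*y + c over the tile.
    h, w = tile.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, tile.ravel(), rcond=None)
    return np.abs(A @ coef - tile.ravel()).max() <= tol

def quadtree(tile, tol, min_size=4):
    h, w = tile.shape
    if (h <= min_size and w <= min_size) or plane_fits(tile, tol):
        return "leaf"  # coded by its three plane coefficients
    hh, hw = h // 2, w // 2
    return [quadtree(q, tol, min_size) for q in
            (tile[:hh, :hw], tile[:hh, hw:], tile[hh:, :hw], tile[hh:, hw:])]
```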

Journal ArticleDOI
TL;DR: This work describes the minimum achievable composite rate for both the public and the private versions of the problem and demonstrates how this minimum can be approached in principle.
Abstract: We consider the problem of optimum joint information embedding and lossy compression with respect to a fidelity criterion. The goal is to find the minimum achievable compression (composite) rate R_c as a function of the embedding rate R_e and the average distortion level Δ allowed, such that the average probability of error in decoding of the embedded message can be made arbitrarily small for sufficiently large block length. We characterize the minimum achievable composite rate for both the public and the private versions of the problem and demonstrate how this minimum can be approached in principle. We also provide an alternative single-letter expression of the maximum achievable embedding rate (embedding capacity) as a function of R_c and Δ, above which there exist no reliable embedding schemes.

Journal ArticleDOI
TL;DR: This work proposes a simple yet effective edge detector using only causal pixels; the resulting scheme achieves a noticeable reduction in complexity with only a minor degradation in the prediction results.
Abstract: In predictive image coding, the least squares (LS)-based adaptive predictor is noted as an efficient method to improve prediction result around edges. However pixel-by-pixel optimization of the predictor coefficients leads to a high coding complexity. To reduce computational complexity, we activate the LS optimization process only when the coding pixel is around an edge or when the prediction error is large. We propose a simple yet effective edge detector using only causal pixels. The system can look ahead to determine if the coding pixel is around an edge and initiate the LS adaptation to prevent the occurrence of a large prediction error. Our experiments show that the proposed approach can achieve a noticeable reduction in complexity with only a minor degradation in the prediction results.
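
A hedged sketch of a causal edge test of the kind described (the paper's exact detector may differ; the threshold is hypothetical): only the already-coded west, north, and north-west neighbours are inspected, so encoder and decoder reach the same decision about when to trigger LS adaptation:

```python
def near_edge(img, x, y, threshold=8):
    # Gradient test on causal neighbours only (west, north, north-west);
    # assumes x, y >= 1. A True result triggers the LS coefficient
    # update before a large prediction error can occur.
    w, n, nw = img[y][x - 1], img[y - 1][x], img[y - 1][x - 1]
    return max(abs(w - nw), abs(n - nw), abs(w - n)) > threshold
```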

Patent
01 Apr 2005
TL;DR: An enhanced audio encoding device is presented, consisting of a signal type analyzing module, a psychoacoustical analyzing module, a time-frequency mapping module, a quantization and entropy encoding module, a frequency-domain linear prediction and vector quantization module, and a bit stream multiplexing module.

Abstract: An enhanced audio encoding device consists of a signal type analyzing module, a psychoacoustical analyzing module, a time-frequency mapping module, a quantization and entropy encoding module, a frequency-domain linear prediction and vector quantization module, and a bit stream multiplexing module. The signal type analyzing module is configured to analyze the signal type of the input audio signal and output the result to the psychoacoustical analyzing module, the time-frequency mapping module, and the bit stream multiplexing module. The frequency-domain linear prediction and vector quantization module is configured to perform a linear prediction of the frequency-domain coefficients and a multi-level vector quantization, and to output the residual sequences to the quantization and entropy encoding module while outputting side information to the bit stream multiplexing module. The device is adapted to perform compressive encoding, with high fidelity, of audio signals sampled at multiple sampling rates and with multiple audio channel configurations; it can support audio signals whose sampling rate is between 8 kHz and 192 kHz and all possible audio channel configurations, and the range of the objective code rate of the audio encoder/decoder is very wide.

Patent
27 Dec 2005
TL;DR: In this article, a preferred embodiment of the present invention introduces events combining the position of the last non-zero coefficient in the block with whether the absolute value is greater than 1, and no information from outside the macroblock is used to decide what VLC to use.
Abstract: The invention is related to entropy coding/decoding of transform coefficient data in video compression systems. For entropy coding coefficients representing a block in a video image, a preferred embodiment of the present invention introduces events combining the position of the last non-zero coefficient in the block with whether the absolute value is greater than 1. Further, no information from outside the macroblock is used to decide what VLC to use. Coefficients are typically coded by starting in a Run-mode and continuing in Level-mode when the first coefficient with absolute value >1 is found.


Patent
18 Jul 2005
TL;DR: In this article, an apparatus comprising a first circuit, a second circuit, and an output circuit is described; the output circuit presents an output signal in response to one of the first or second set of entropy coded output signals and the data path signal.
Abstract: An apparatus comprising a first circuit, a second circuit and an output circuit. The first circuit may be configured to generate (i) one of a first set of entropy coded input signals or a second set of entropy coded input signals and (ii) a data path signal. The second circuit may be configured to generate (i) a first set of entropy coded output signals in response to decoding the second set of entropy coded input signals, or (ii) a second set of entropy coded output signals in response to decoding the first set of entropy coded input signals. The second circuit may provide real time decoding and encoding on a macroblock basis. The output circuit may be configured to present an output signal in response to (i) one of the first set of entropy coded output signals or the second set of entropy coded output signals and (ii) the data path signal.

Proceedings ArticleDOI
Zongwang Li1, Lei Chen, Lingqi Zeng, Shu Lin, W.H. Fong 
01 Jan 2005
TL;DR: It is shown that the encoding complexity of a QC-LDPC code is linearly proportional to the number of parity bits of the code for serial encoding, and to the length of thecode for high-speed parallel encoding.
Abstract: This paper presents methods for efficient encoding of quasi-cyclic LDPC codes. Based on these methods, encoding of quasi-cyclic LDPC codes can be implemented using simple shift-registers with complexity linearly proportional to the number of parity-check bits of a code for serial encoding and to the length of a code for parallel encoding. Various encoding circuits are devised and they provide a range of trade-offs between encoding complexity and speed.
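
The shift-register structure comes from the circulant sub-matrices of a quasi-cyclic code. Here is a hedged Python sketch of one GF(2) circulant multiply-accumulate, mirroring what a serial shift-register implementation does one message bit per clock (illustrative only; the paper's circuits are more refined):

```python
def circulant_mac(acc, message_bits, first_row):
    # acc and first_row are bit lists of the circulant size. Row i of a
    # circulant is the first row cyclically shifted right i times, so a
    # serial encoder just rotates a register while accumulating (XOR).
    row = list(first_row)
    for bit in message_bits:
        if bit:
            acc = [a ^ r for a, r in zip(acc, row)]
        row = [row[-1]] + row[:-1]  # one-position cyclic shift
    return acc
```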

Posted Content
TL;DR: It is concluded that entropy coding and quantization coincide asymptotically; under supremum-norm distortion, the proof uses an explicit construction of efficient codebooks based on a particular entropy constrained coding scheme.

Abstract: We derive a high-resolution formula for the quantization and entropy coding approximation quantities for fractional Brownian motion, with respect to the supremum-norm and L^p[0,1]-norm distortions. We show that all moments in the quantization problem lead to the same asymptotics. Using a general principle, we conclude that entropy coding and quantization coincide asymptotically. Under supremum-norm distortion, our proof uses an explicit construction of efficient codebooks based on a particular entropy constrained coding scheme. This procedure can be used to construct close-to-optimal high-resolution quantizers.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: An algorithm and a hardware architecture of a new type of EC codec engine with multiple modes are presented; the proposed four-tree pipelining scheme reduces latency by 83% and buffer size by 67% between transform and entropy coding.

Abstract: In a typical multi-chip handheld system for multimedia applications, external access, which is usually dominated by block-based video content, induces more than half of total system power. Embedded compression (EC) effectively reduces external access caused by video content by reducing the data size. In this paper, an algorithm and a hardware architecture of a new type of EC codec engine with multiple modes are presented. Lossless mode, and lossy modes with rate control and quality control, are all supported by a single algorithm. The proposed four-tree pipelining scheme can reduce latency by 83% and buffer size by 67% between transform and entropy coding. The proposed EC codec engine can save 50%, 61%, and 77% of external access at lossless mode, half-size mode, and quarter-size mode, respectively, and can be used in various system power conditions. With an Artisan 0.18 µm cell library, the proposed EC codec engine can encode or decode VGA@30fps video with an area of 293,605 µm² and power consumption of 3.36 mW.

Proceedings ArticleDOI
16 May 2005
TL;DR: This work designs and implements the CABAC entropy encoder and decoder as an integral part of a real-time video coding system that conforms to the H.264 standard, achieving real-time processing of SD (standard definition) video streams while maintaining low hardware complexity and a low power consumption level.
Abstract: We present the real-time FPGA implementation of the CABAC (context-based adaptive binary arithmetic coding) entropy coder. CABAC is the entropy coding tool specified in the main profile of MPEG-4 AVC/H.264, the newest international standard for video compression. We design and implement the CABAC entropy encoder and decoder as an integral part of a real-time video coding system that conforms to the H.264 standard. Our goal is to achieve real-time processing of SD (standard definition) video streams while maintaining low hardware complexity and a low power consumption level. Our design and implementation are suitable for portable consumer electronic devices such as digital mobile TV and digital camcorders.