
Showing papers on "Entropy encoding" published in 2011


Journal ArticleDOI
TL;DR: A novel method is presented for protecting bitstreams of the state-of-the-art video codec H.264/AVC that keeps exactly the same bitrate, generates a fully compliant bitstream, and uses negligible computational power.
Abstract: This paper presents a novel method for the protection of bitstreams of the state-of-the-art video codec H.264/AVC. The problem of selective encryption (SE) is addressed along with compression in the entropy coding modules. H.264/AVC supports two types of entropy coding: context-adaptive variable length coding (CAVLC) in the baseline profile and context-adaptive binary arithmetic coding (CABAC) in the main profile. SE is performed in both entropy coding modules of this codec; the encryption step is carried out simultaneously with the entropy coding, whether CAVLC or CABAC. SE uses the advanced encryption standard (AES) algorithm in cipher feedback (CFB) mode on a subset of codewords/binstrings. For CAVLC, SE is performed on equal-length codewords from a specific variable length coding table; for CABAC, it is performed on equal-length binstrings. In our scheme, the entropy coding module serves as the encryption cipher without affecting the coding efficiency of H.264/AVC: the bitrate stays exactly the same, the bitstream remains fully compliant, and negligible computational power is used. Because the bitrate does not increase, the encryption algorithm is well suited for real-time multimedia streaming over heterogeneous networks, and the negligible increase in required processing power makes it suitable for playback on handheld devices. Nine benchmark video sequences containing different combinations of motion, texture, and objects are used for experimental evaluation of the proposed algorithm.
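
The property this scheme exploits is that AES in cipher feedback (CFB) mode is length-preserving: ciphertext bits can be substituted one-for-one for the selected codeword bits, so the bitrate cannot grow. The sketch below illustrates only that idea and is not the paper's implementation; it assumes the third-party `cryptography` package, a toy list of equal-length codewords standing in for CAVLC codewords or CABAC binstrings, and a decoder that mirrors the same bit packing.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_codewords(codewords, key, iv):
    """Encrypt equal-length bitstrings; the output keeps the same lengths."""
    bits = "".join(codewords)
    pad = (-len(bits)) % 8                      # pad to a byte boundary
    data = int(bits + "0" * pad, 2).to_bytes((len(bits) + pad) // 8, "big")
    enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
    ct = enc.update(data) + enc.finalize()      # CFB: len(ct) == len(data)
    cbits = bin(int.from_bytes(ct, "big"))[2:].zfill(8 * len(ct))[: len(bits)]
    n = len(codewords[0])
    return [cbits[i:i + n] for i in range(0, len(cbits), n)]

key, iv = os.urandom(16), os.urandom(16)
print(encrypt_codewords(["0110", "1011", "0001"], key, iv))
```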

149 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: The proposed compression scheme using RLS-DLA learned dictionaries in the 9/7 wavelet domain performs better than using dictionaries learned by other methods, and the compression rate is just below the JPEG-2000 rate, which is promising considering the simple entropy coding used.
Abstract: The recently presented recursive least squares dictionary learning algorithm (RLS-DLA) is tested in a general image compression application. Dictionaries are learned in the pixel domain and in the 9/7 wavelet domain, and then tested in a straightforward compression scheme. Results are compared with state-of-the-art compression methods. The proposed compression scheme using RLS-DLA learned dictionaries in the 9/7 wavelet domain performs better than using dictionaries learned by other methods. The compression rate is just below the JPEG-2000 rate, which is promising considering the simple entropy coding used.

123 citations


Journal ArticleDOI
01 Dec 2011
TL;DR: A chaos-based joint image compression and encryption algorithm using discrete cosine transformation (DCT) and Secure Hash Algorithm-1 (SHA-1) is proposed that is efficient and highly sensitive to both the key and the plain-image.
Abstract: A chaos-based joint image compression and encryption algorithm using the discrete cosine transform (DCT) and Secure Hash Algorithm-1 (SHA-1) is proposed. As SHA-1 is fast and input-sensitive, it is employed to enhance the diffusion effect on image pixels. The DCT coefficients of the whole image are separated into two sequences for mutual interaction. The sequence of low-frequency coefficients, together with the secret keys, generates a message digest used to perturb the sequence of high-frequency coefficients. The last cipher block of the high-frequency sequence is fed back to control the diffusion and permutation in the low-frequency sequence. Huffman coding is chosen as the entropy coding stage to compress the encrypted sequences. Experimental results confirm that the algorithm is efficient and highly sensitive to both the key and the plain-image.
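
The digest-driven perturbation step can be pictured in a few lines. This is an illustrative sketch under assumed details (integer-quantized coefficients, XOR as the perturbation operation), not the paper's exact construction:

```python
import hashlib
import numpy as np

def perturb_high_freq(low, high, key: bytes):
    """XOR a key- and content-dependent SHA-1 keystream into the high-frequency coefficients."""
    digest = hashlib.sha1(low.tobytes() + key).digest()          # 20 bytes
    stream = (digest * (len(high) // len(digest) + 1))[: len(high)]
    pad = np.frombuffer(stream, dtype=np.uint8).astype(high.dtype)
    return high ^ pad

low = np.array([120, -3, 15, 7], dtype=np.int32)     # low-frequency coefficients
high = np.array([2, 0, -1, 0, 1, 0, 0, 3], dtype=np.int32)
print(perturb_high_freq(low, high, b"secret-key"))
```

Because XOR is an involution, applying the same function with the same key and low-frequency sequence restores the original high-frequency coefficients.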

88 citations


01 Jan 2011
TL;DR: A survey of different basic lossless data compression algorithms using Statistical compression techniques and Dictionary based compression techniques on text data is provided.
Abstract: Data compression is the science and art of representing information in a compact form. For decades, data compression has been one of the critical enabling technologies for the ongoing digital multimedia revolution, and many compression algorithms exist for files of different formats. This paper provides a survey of basic lossless data compression algorithms. Experiments comparing statistical compression techniques and dictionary-based compression techniques were performed on text data. Among the statistical coding techniques, the algorithms considered are Shannon-Fano coding, Huffman coding, adaptive Huffman coding, run-length encoding, and arithmetic coding. The Lempel-Ziv scheme, a dictionary-based technique, is divided into two families: those derived from LZ77 (LZ77, LZSS, LZH and LZB) and those derived from LZ78 (LZ78, LZW and LZFG). A set of conclusions is drawn on this basis.
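
As a concrete instance of the statistical family the survey covers, here is a compact, generic Huffman coder (a textbook construction, not code from the paper):

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build an optimal prefix code for the symbol frequencies in `data`."""
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], i, [lo, hi]])   # internal node
        i += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node[2], str):
            codes[node[2]] = prefix or "0"   # single-symbol edge case
        else:
            walk(node[2][0], prefix + "0")
            walk(node[2][1], prefix + "1")
    walk(heap[0], "")
    return codes

text = "abracadabra"
codes = huffman_codes(text)
print(codes, "".join(codes[c] for c in text))
```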

80 citations


Journal ArticleDOI
TL;DR: 2-D based compression schemes were investigated and compared with other schemes such as the JPEG2000 image compression standard, the predictive-coding-based Shorten, and simple entropy coding, and yielded higher lossless compression than standard vector-based, predictive, and entropy coding schemes.

76 citations


Book
05 Jan 2011
TL;DR: Based on the fundamentals of information and rate distortion theory, the most relevant techniques used in source coding algorithms are described: entropy coding, quantization as well as predictive and transform coding.
Abstract: Digital media technologies have become an integral part of the way we create, communicate, and consume information. At the core of these technologies are the source coding methods described in this monograph. Building on the fundamentals of information and rate-distortion theory, the most relevant techniques used in source coding algorithms are described: entropy coding and quantization, as well as predictive and transform coding. The emphasis is on algorithms that are also used in video coding, which is explained in the other part of this two-part monograph.
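
The chain the monograph describes (prediction, quantization, entropy coding) can be demonstrated end to end with a toy rate estimate. The first-order predictor and uniform quantizer below are illustrative assumptions; the empirical entropy of the quantizer indices is the rate an ideal entropy coder would approach:

```python
import numpy as np

def rate_estimate(signal, step):
    """Bits/sample an ideal entropy coder would need for the quantized residuals."""
    residual = np.diff(signal, prepend=signal[:1])      # 1st-order prediction
    q = np.round(residual / step).astype(int)           # uniform quantization
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()                      # empirical entropy

x = np.cumsum(np.random.default_rng(0).normal(size=1000))   # correlated source
for step in (0.1, 0.5, 1.0):
    print(f"step={step}: {rate_estimate(x, step):.2f} bits/sample")
```

Larger quantization steps give lower rates at higher distortion, which is exactly the rate-distortion trade-off the monograph formalizes.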

69 citations


Proceedings ArticleDOI
03 Oct 2011
TL;DR: This paper provides the achievable rate distortion region for two cases and demonstrates a relationship between the lossy multiterminal source coding problems with the authors' specific distortion measure and the canonical Slepian-Wolf lossless distributed source coding network.
Abstract: In this paper, we consider a class of multiterminal source coding problems, each subject to distortion constraints computed using a specific, entropy-based, distortion measure. We provide the achievable rate distortion region for two cases and, in so doing, we demonstrate a relationship between the lossy multiterminal source coding problems with our specific distortion measure and (1) the canonical Slepian-Wolf lossless distributed source coding network, and (2) the Ahlswede-Korner-Wyner source coding with side information problem in which only one of the sources is recovered losslessly.

57 citations


Book
30 Dec 2011
TL;DR: This book contains a wealth of illustrations, step-by-step descriptions of algorithms, examples and practice problems, which make it an ideal textbook for senior undergraduate and graduate students, as well as a useful self-study tool for researchers and professionals.
Abstract: With clear and easy-to-understand explanations, this book covers the fundamental concepts and coding methods of signal compression, whilst still retaining technical depth and rigor. It contains a wealth of illustrations, step-by-step descriptions of algorithms, examples and practice problems, which make it an ideal textbook for senior undergraduate and graduate students, as well as a useful self-study tool for researchers and professionals. Principles of lossless compression are covered, as are various entropy coding techniques, including Huffman coding, arithmetic coding and Lempel-Ziv coding. Scalar and vector quantization and trellis coding are thoroughly explained, and a full chapter is devoted to mathematical transformations including the KLT, DCT and wavelet transforms. The workings of transform and subband/wavelet coding systems, including JPEG2000 and SBHP image compression and H.264/AVC video compression, are explained and a unique chapter is provided on set partition coding, shedding new light on SPIHT, SPECK, EZW and related methods.

53 citations


Journal ArticleDOI
TL;DR: Huffman and arithmetic algorithms are implemented and tested; the results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding.
Abstract: Compression is a technique to reduce the quantity of data without excessively reducing its quality. The transmission and storage of compressed multimedia data is much faster and more efficient than that of the original uncompressed data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding; arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding method, arithmetic or Huffman, is more suitable in terms of compression ratio, performance, and ease of implementation? We have implemented and tested both algorithms. Our results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding. In addition, Huffman coding is much easier to implement than arithmetic coding.
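
The measured trade-off has a simple information-theoretic core: arithmetic coding can approach the empirical entropy of the data, while Huffman coding pays an integer-codeword-length penalty. A small sketch of that rate comparison (rates only, no full codecs; the example string is arbitrary):

```python
import heapq
import math
from collections import Counter

def huffman_total_bits(freqs):
    """Total Huffman-coded length via the merge-cost identity (no tree needed)."""
    heap = list(freqs)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        merged = heapq.heappop(heap) + heapq.heappop(heap)
        total += merged                      # each merge contributes its weight
        heapq.heappush(heap, merged)
    return total

data = "aaaaaaaabbbc"
freqs = list(Counter(data).values())
n = len(data)
entropy = -sum(f / n * math.log2(f / n) for f in freqs)
print(f"arithmetic coding rate (approx. entropy): {entropy:.3f} bits/symbol")
print(f"Huffman coding rate:                      {huffman_total_bits(freqs) / n:.3f} bits/symbol")
```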

52 citations


Patent
10 Feb 2011
TL;DR: A method of compressing digital image data in which each image data block is transformed into a low-frequency coefficient and a plurality of high-frequency coefficients, a predicted low-frequency coefficient is computed from neighboring blocks, the high-frequency coefficients are quantized, and the residual low-frequency coefficient and the quantized high-frequency coefficients are entropy coded.
Abstract: A method of compressing digital image data is provided that includes, for each image data block in a plurality of image data blocks in the digital image data, transforming image data in the image data block to convert the image data to a low-frequency coefficient and a plurality of high-frequency coefficients, computing a predicted low-frequency coefficient for the image data block based on at least one neighboring image data block in the plurality of image data blocks, computing a residual low-frequency coefficient based on the predicted low-frequency coefficient and the low-frequency coefficient, quantizing the plurality of high-frequency coefficients, and entropy coding the residual low-frequency coefficient and the quantized high-frequency coefficients.
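
A rough sketch of the claimed pipeline for a single row of blocks, with assumed specifics (SciPy's orthonormal DCT as the transform, the left neighbour's DC as the predictor, a flat quantizer step); entropy coding of the outputs is omitted:

```python
import numpy as np
from scipy.fft import dctn

def encode_blocks(image, block=8, q=16):
    """Per block: transform, predict the DC coefficient, quantize the rest."""
    prev_dc = 0.0
    out = []
    for x in range(0, image.shape[1] - block + 1, block):   # one row of blocks
        coeffs = dctn(image[:block, x:x + block], norm="ortho")
        dc_residual = coeffs[0, 0] - prev_dc                # predict DC from neighbour
        prev_dc = coeffs[0, 0]
        ac_q = np.round(coeffs / q).astype(int)             # quantize high-frequency
        ac_q[0, 0] = 0                                      # DC is sent as a residual
        out.append((dc_residual, ac_q))
    return out

img = np.random.default_rng(1).integers(0, 256, (8, 32)).astype(float)
for dc_res, ac in encode_blocks(img):
    print(f"DC residual {dc_res:8.1f}, non-zero AC coefficients: {np.count_nonzero(ac)}")
```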

52 citations


Patent
15 Jul 2011
TL;DR: A method and apparatus for parallel context processing, for example for high-efficiency entropy coding in HEVC, comprising retrieving a syntax element relating to a block of an image, grouping at least two bins that belong to a similar context based on the syntax element, and coding the grouped bins in parallel.
Abstract: A method and apparatus for parallel context processing, for example for high-efficiency entropy coding in HEVC. The method comprises retrieving a syntax element relating to a block of an image, grouping at least two bins belonging to a similar context based on the syntax element, and coding the grouped bins in parallel.

Proceedings ArticleDOI
16 Sep 2011
TL;DR: A spatial division design shows a speedup of 72x in the four-GPU-based implementation of the PPVQ compression scheme, which consists of linear prediction, bit depth partitioning, vector quantization, and entropy coding.
Abstract: For ultraspectral sounder data, which features thousands of channels at each observation location, lossless compression is desirable to save storage space and transmission time without losing precision in the retrieval of geophysical parameters. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding. In our previous work, the two most time-consuming stages, linear prediction and vector quantization, were identified for GPU implementation. For GIFTS data, using a spectral division strategy to share the compression workload among four GPUs, a speedup of ~42x was achieved. To further enhance the speedup, this work explores a spatial division strategy for sharing the workload in processing the six parts of a GIFTS datacube. As a result, the total processing time of a GIFTS datacube on four GPUs can be less than 13 seconds, equivalent to a speedup of ~72x. The use of multiple GPUs for PPVQ compression is thus promising as a low-cost and effective compression solution for ultraspectral sounder data for rebroadcast use.
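
The linear-prediction stage that dominates the runtime is, at its core, a per-channel least-squares fit. The NumPy sketch below shows the idea on synthetic data; the predictor order, data layout, and per-cube fit are assumptions, and the actual implementation runs such stages on GPUs:

```python
import numpy as np

def predict_residuals(cube, order=2):
    """cube: (channels, pixels); predict each channel from the previous `order` channels."""
    residuals = cube.astype(float).copy()
    for c in range(order, cube.shape[0]):
        past = cube[c - order:c].astype(float)                 # (order, pixels)
        coef, *_ = np.linalg.lstsq(past.T, cube[c].astype(float), rcond=None)
        residuals[c] = cube[c] - past.T @ coef                 # prediction residual
    return residuals

rng = np.random.default_rng(2)
cube = np.cumsum(rng.normal(size=(16, 1024)), axis=0)          # correlated channels
res = predict_residuals(cube)
print("raw std:", cube.std().round(2), "residual std:", res[2:].std().round(2))
```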

Patent
Keiichi Chono1, Yuzo Senda1, Junji Tajime1, Hirofumi Aoki1, Kenta Senzaki1 
15 Dec 2011
TL;DR: A video encoding device includes: pixel bit length increasing means for increasing the pixel bit length of an input image based on pixel bit length increase information; transform means for transforming the output data of the pixel bit length increasing means; entropy encoding means for entropy-encoding the output data of the transform means; non-compression encoding means for non-compression-encoding input data; and multiplexed data selection means for selecting the output data of either the entropy encoding means or the non-compression encoding means.
Abstract: A video encoding device includes: pixel bit length increasing means for increasing a pixel bit length of an input image based on pixel bit length increase information; transform means for transforming output data of the pixel bit length increasing means; entropy encoding means for entropy-encoding output data of the transform means; non-compression encoding means for non-compression-encoding input data; multiplexed data selection means for selecting output data of the entropy encoding means or output data of the non-compression encoding means; and multiplexing means for multiplexing the pixel bit length increase information in a bitstream, wherein a pixel bit length of an image corresponding to the output data of the entropy encoding means and a pixel bit length of an image corresponding to the output data of the non-compression encoding means are different from each other.

Patent
30 Sep 2011
TL;DR: In this article, the authors describe techniques for performing entropy encoding and decoding of video coefficients using a joint context model shared between transform units having different sizes, which may reduce an amount of memory necessary to store contexts and probabilities, and reduce computational costs of maintaining context models.
Abstract: This disclosure describes techniques for performing entropy encoding and decoding of video coefficients using a joint context model shared between transform units having different sizes. For example, the joint context model may be shared between transform units having a first size of 32x32 and transform units having a second size of 16x16. Performing entropy coding using a joint context model shared between transform units having different sizes may reduce an amount of memory necessary to store contexts and probabilities, and reduce computational costs of maintaining context models. In one example, the joint context model may be shared between transform units having the first size with coefficients zeroed out to generate a retained coefficient block having the second size and transform units having the second size. In another example, the joint context model may be shared between transform units having the first size and transform units having the second size.

Journal ArticleDOI
TL;DR: A spatial division design shows a speedup of 72x in the four-GPU-based implementation of the PPVQ compression scheme, which consists of linear prediction, bit depth partitioning, vector quantization, and entropy coding.
Abstract: For large-volume ultraspectral sounder data, compression is desirable to save storage space and transmission time. To retrieve the geophysical parameters without losing precision, the compression has to be lossless. Recently there has been a surge in the use of graphics processing units (GPUs) to speed up scientific computations; by identifying the time-dominant portions of the code that can be executed in parallel, significant speedup can be achieved. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding. The two most time-consuming stages, linear prediction and vector quantization, were chosen for GPU-based implementation. By exploiting the data-parallel characteristics of these two stages, a spatial division design shows a speedup of 72x in our four-GPU-based implementation of the PPVQ compression scheme.

Patent
14 Nov 2011
TL;DR: In this paper, a method for coding video data includes identifying a scan path for scanning significance information associated with a quantized transform coefficient, and determining a context support neighborhood for entropy coding the significance information.
Abstract: In one example, a method for coding video data includes identifying a scan path for scanning significance information associated with a quantized transform coefficient. The method also includes determining a context support neighborhood for entropy coding the significance information associated with the quantized transform coefficient, wherein the context support neighborhood excludes one or more context support elements that are located in the scan path. The method also includes coding the significance information using the modified context support neighborhood.

Proceedings ArticleDOI
Vishal Bhola1, Ajit S. Bopardikar1, Rangavittal Narayanan1, Kyu-Sang Lee1, Tae-jin Ahn1 
12 Nov 2011
TL;DR: The proposed compression system provides lossless and near-lossless compression, as well as options to compress only the read data or the read + quality data, and its performance is observed to be superior to both reference systems.
Abstract: In this paper, we propose a system to compress Next Generation Sequencing (NGS) information stored in a FASTQ file. A FASTQ file contains text, DNA read, and quality information for millions or billions of reads. The proposed system first parses the FASTQ file into its component fields. In a partial first pass it gathers statistics, which are then used to choose a representation for each field that gives the best compression. Text data is further parsed into repeating and variable components, and entropy coding is used to compress the latter. Similarly, Markov encoding and repeat-finding based methods are used for DNA read compression. Finally, we propose several run-length based methods to encode quality data, choosing the method that gives the best performance for a given set of quality values. The compression system provides lossless and near-lossless compression, as well as options to compress only the read data or the read + quality data. We compare its performance to the bzip2 text compression utility and an existing benchmark algorithm, and observe that the performance of the proposed system is superior to both.
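
Quality strings tend to contain long runs of identical scores, which is why run-length methods pay off; the simplest member of that family looks like this (an illustrative sketch, not one of the paper's specific variants):

```python
from itertools import groupby

def rle_quality(qual: str):
    """Encode a quality string as (score, run length) pairs."""
    return [(ch, sum(1 for _ in grp)) for ch, grp in groupby(qual)]

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

q = "IIIIIHHHHGGFFFFFFF###"
pairs = rle_quality(q)
assert rle_decode(pairs) == q          # lossless round trip
print(pairs)
```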

Patent
30 Sep 2011
TL;DR: In this paper, a method and apparatus for encoding bit code utilizing context dependency simplification to reduce dependent scans is presented, which includes retrieving at least one 2D array of transform coefficient, transforming the at least 1D arrays of the significance map of the transform coefficient to a 1D coefficient scanning and determining at least scan direction, coding unit type and slice type assigned to transform coefficient.
Abstract: A method and apparatus for encoding bit code utilizing context dependency simplification to reduce dependent scans. The method includes retrieving at least one 2 dimensional array of transform coefficient, transforming the at least one 2 dimensional array of the significance map of the transform coefficient to a 1 dimensional coefficient scanning and determining at least one of scan direction, coding unit type and slice type assigned to transform coefficient, selecting neighbors based on at least one of scan direction and coding unit type and slice type, computing context index based on the values of the selected neighbors for context selection, and performing arithmetic coding to generate coded bit utilizing the computed context index and binarization.
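
The 2-dimensional-to-1-dimensional step in the claim is a fixed scan over the block. A diagonal order is shown below as one plausible choice; the actual scan in HEVC depends on mode and block size:

```python
def diagonal_scan(block):
    """Flatten a 2-D block into 1-D along anti-diagonals."""
    h, w = len(block), len(block[0])
    out = []
    for s in range(h + w - 1):                            # anti-diagonal index
        for y in range(max(0, s - w + 1), min(s, h - 1) + 1):
            out.append(block[y][s - y])
    return out

sig_map = [[1, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 0, 0, 0],
           [1, 0, 0, 0]]
print(diagonal_scan(sig_map))
```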

Patent
Dake He, Jing Wang1
15 Apr 2011
TL;DR: In this article, the authors describe methods and devices for entropy coding data using an entropy coder to encode quantized transform domain coefficient data using two-dimensional coordinates for the last significant coefficient.
Abstract: Methods and devices are described for entropy coding data using an entropy coder to encode quantized transform domain coefficient data. Last significant coefficient information is signaled in the bitstream using two-dimensional coordinates for the last significant coefficient. The context for bins of one of the coordinates is based, in part, upon the value of the other of the coordinates. In one case, instead of signaling last significant coefficient information, the number of non-zero coefficients is binarized and entropy encoded.

Patent
05 Jan 2011
TL;DR: In this paper, a self-adaption dividing method for code units in high-efficiency video coding is proposed, which mainly solves the problems that a largest code unit and a smallest code unit in the division of the code units can not be regulated according to the characteristics of video content, and the dividing mode quantity of blocks on the boundary of a video frame is few.
Abstract: The invention discloses a self-adaption dividing method for code units in high-efficiency video coding, which mainly solves the problems that a largest code unit and a smallest code unit in the division of the code units in the prior art can not be regulated according to the characteristics of video content, and the dividing mode quantity of blocks on the boundary of a video frame is few. The method comprises the steps of: firstly, adopting the rate distortion rule or the relativity among video frames to self-adaptively determine the largest code unit and the smallest code unit in each video frame; secondly, expanding a rectangular block within the video frame in the largest code unit on the boundary of a video frame into a square block, and dividing the square block; then, marking the rectangular blocks after dividing, and carrying out frame prediction to the rectangular blocks after dividing; and finally, transforming, quantizing and entropy coding to prediction residuals. The invention has the advantage of high video compression efficiency, and can be applied to high-performance video coding standards.

Patent
Elena Alshina1, Alexander Alshin1, Woo-Jin Han1, Tammy Lee1, Yoonmi Hong1 
05 Apr 2011
TL;DR: A method and apparatus for encoding a video using content-based dynamic range transformation, and a corresponding method and apparatus for decoding a video using content-based dynamic range transformation.
Abstract: Provided is a method and apparatus for encoding a video by using dynamic range transformation based on content and a method and apparatus for decoding a video by using dynamic range transformation based on content. The encoding method includes: performing inter prediction, through motion estimation, and intra prediction for a current region using image data in which a dynamic range of the current region is transformed based on content of an image of input video; performing transformation on residual data generated by the intra prediction and the inter prediction and performing quantization on a transformation coefficient generated by the transformation; and performing entropy encoding on the quantized transformation coefficient.

Patent
08 Apr 2011
TL;DR: In this paper, a method and system for entropy coding can comprise, in response to detecting a first symbol combination comprising first run information indicating a first number of contiguous zero coefficients is greater than a cut-off-run value, assigning a first codeword to a first-symbol combination, wherein the first codeword comprises an escape code from a firstlevel VLC table; and in response, a second symbol combination consisting second run information indicated a second number of adjacent zero coefficients was less than or equal to the cutoff run value.
Abstract: A method and system for entropy coding can comprise, in response to detecting a first symbol combination comprising first run information indicating a first number of contiguous zero coefficients is greater than a cut-off-run value, assigning a first codeword to a first symbol combination, wherein the first codeword comprises an escape code from a first-level VLC table; and in response to a second symbol combination comprising second run information indicating a second number of contiguous zero coefficients is less than or equal to the cut-off-run value, assigning a second codeword to the second symbol combination, wherein the second codeword is from the first-level VLC table. The system and method can further comprise collecting coding statistics for a set of candidate symbol combinations and adjusting a mapping between codewords of the first-level VLC table and a subset of the set of candidate symbol combinations based on the coding statistics.

Patent
05 Apr 2011
TL;DR: In this article, context modeling is performed on a context unit of blocks of the image data based on the context model of a previously encoded or decoded block, where context models are used to encode and decode image data.
Abstract: Entropy encoding and entropy decoding of image data are respectively performed whereby context modeling is performed on a context unit of blocks of the image data based on a context model of a previously encoded or decoded block.

Patent
Bae-Keun Lee1, Sohn Yu-Mi1
08 Jul 2011
TL;DR: In this article, the authors present a method and apparatus for entropy encoding a transform block and a method for entropy decoding the transform block, which involves determining the location of the final active transform coefficient which is not zero from among the transform coefficients of a predetermined size.
Abstract: The present invention relates to a method and apparatus for entropy encoding a transform block, and to a method and apparatus for entropy decoding the transform block. The method for entropy encoding a transform coefficient according to one embodiment of the present invention involves: determining the location of the final active transform coefficient which is not zero from among the transform coefficients of a transform block of a predetermined size, according to a predetermined order of scanning; and encoding the location of the final active transform coefficient using the relative location in a horizontal axis direction and the relative location in a vertical axis direction in the transform block.
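
The determining step in the claim reduces to a scan-and-remember loop. In the sketch below a raster scan stands in for the predetermined scanning order:

```python
def last_significant_xy(block):
    """Return (x, y) of the final non-zero coefficient in scan order, or None."""
    last = None
    for y, row in enumerate(block):          # raster scan as a stand-in order
        for x, coeff in enumerate(row):
            if coeff != 0:
                last = (x, y)                # horizontal, vertical position
    return last

block = [[9, 3, 0, 0],
         [2, 0, 1, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(last_significant_xy(block))            # -> (2, 1)
```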

Patent
16 Dec 2011
TL;DR: In this article, an apparatus is disclosed for coding coefficients associated with a block of video data during a video coding process, wherein the apparatus includes a video coder configured to code information that identifies a position of a last non-zero coefficient within the block according to a scanning order associated with the block.
Abstract: In one example, an apparatus is disclosed for coding coefficients associated with a block of video data during a video coding process, wherein the apparatus includes a video coder configured to code information that identifies a position of a last non-zero coefficient within the block according to a scanning order associated with the block, wherein to code the information, the video coder is configured to perform a context adaptive entropy coding process that includes the video coder applying a context model based on at least three contexts, wherein the at least three contexts include a size associated with the block, a position of a given one of the coefficients within the block according to the scanning order, and the scanning order.

Journal ArticleDOI
TL;DR: The optimality of superposition coding is extended to the cases where there is either an additional all-access encoder or an additional secrecy constraint, and a subset entropy inequality recently proved by Madiman and Tetali is used to develop a new structural understanding of the work of Yeung and Zhang.
Abstract: Symmetrical multilevel diversity coding (SMDC) is a classical model for coding over distributed storage. In this setting, a simple separate encoding strategy known as superposition coding was shown to be optimal in terms of achieving the minimum sum rate (Roche, Yeung, and Hau, 1997) and the entire admissible rate region (Yeung and Zhang, 1999) of the problem. The proofs utilized carefully constructed induction arguments, for which the classical subset entropy inequality of Han (1978) played a key role. This paper includes two parts. In the first part the existing optimality proofs for classical SMDC are revisited, with a focus on their connections to subset entropy inequalities. First, a new sliding-window subset entropy inequality is introduced and then used to establish the optimality of superposition coding for achieving the minimum sum rate under a weaker source-reconstruction requirement. Second, a subset entropy inequality recently proved by Madiman and Tetali (2010) is used to develop a new structural understanding of the proof of Yeung and Zhang on the optimality of superposition coding for achieving the entire admissible rate region. Building on the connections between classical SMDC and the subset entropy inequalities developed in the first part, in the second part the optimality of superposition coding is further extended to the cases where there is either an additional all-access encoder (SMDC-A) or an additional secrecy constraint (S-SMDC).
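
For reference, the classical subset entropy inequality of Han (1978) invoked throughout these proofs can be stated as follows (the standard textbook form, quoted for context rather than from the paper):

```latex
% Han's subset entropy inequality: for jointly distributed X_1, ..., X_n,
% the average per-element joint entropy over all k-subsets is non-increasing in k.
\[
  h_k \;\triangleq\; \frac{1}{\binom{n}{k}}
      \sum_{\substack{S \subseteq \{1,\dots,n\} \\ |S| = k}} \frac{H(X_S)}{k},
  \qquad
  h_1 \;\ge\; h_2 \;\ge\; \cdots \;\ge\; h_n .
\]
```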

Patent
01 Apr 2011
TL;DR: In this paper, an initial scanning pattern selection is performed in the decoder to allow the transform coefficients to be extracted in their proper order from encoded video data, which is applicable to both intra prediction and inter prediction.
Abstract: Entropy encoding is performed in the inventive apparatus and method in response to the scanning of transform coefficients following an initial scanning pattern selected on the basis of the probability statistics of non-zero coefficients for each block position. These non-zero probability statistics are ranked for a given combination of coding characteristics within the current block to arrive at an initial scanning pattern. The same initial scanning pattern selection is performed in the decoder, allowing the transform coefficients to be extracted in their proper order from the encoded video data. The pattern selection is applicable to both intra prediction and inter prediction. Transform coefficients are ordered more accurately because, by adapting pattern initialization to the quantization step size, high-frequency basis functions are properly taken into account.
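
The ranking step at the heart of the claim is a sort over per-position non-zero frequencies, and both encoder and decoder can derive the same order from the same statistics. A sketch with made-up counts:

```python
import numpy as np

def scan_order_from_stats(nonzero_counts):
    """nonzero_counts: (h, w) array of non-zero occurrence counts per position."""
    flat = nonzero_counts.ravel()
    order = np.argsort(-flat, kind="stable")       # most-likely-non-zero first
    h, w = nonzero_counts.shape
    return [divmod(int(i), w) for i in order]      # list of (row, col)

counts = np.array([[90, 70, 20, 5],
                   [65, 40, 10, 2],
                   [15, 12,  4, 1],
                   [ 6,  3,  1, 0]])
print(scan_order_from_stats(counts)[:6])
```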

Patent
30 Sep 2011
TL;DR: In this paper, a method of entropy coding in a video encoder is provided that includes assigning a first bin to a first single-probability bin encoder based on a probability state of the first bin, assigning a second bin to another single probability encoder, and coding the first and second bins in the same encoder in parallel.
Abstract: A method of entropy coding in a video encoder is provided that includes assigning a first bin to a first single-probability bin encoder based on a probability state of the first bin, wherein the first single-probability bin encoder performs binary arithmetic coding based on a first fixed probability state, assigning a second bin to a second single-probability bin encoder based on a probability state of the second bin, wherein the second single-probability bin encoder performs binary arithmetic coding based on a second fixed probability state different from the first fixed probability state, and coding the first bin in the first single-probability bin encoder and the second bin in the second single-probability bin encoder in parallel, wherein the first single-probability bin encoder uses a first rLPS table for the first fixed probability state and the second single-probability bin encoder uses a second rLPS table for the second fixed probability state.

Proceedings ArticleDOI
07 Apr 2011
TL;DR: This work optimizes a family of systematic Raptor codes over GF(4) that are particularly suited to this application, since they allow a continuum of coding rates, in order to adapt to the quantized source entropy rate (which may differ from image to image) and to the channel capacity.
Abstract: A new coding scheme for image transmission over a noisy channel is proposed. As in standard image compression, the scheme includes a linear transform followed by embedded scalar quantization. Joint source-channel coding is implemented by optimizing the rate allocation across the source subbands, treated as the components of a parallel source model. The quantized transform coefficients are linearly mapped into channel symbols, using systematic linear encoders of appropriate rate. This fixed-to-fixed length “linear index coding” approach avoids an explicit entropy coding stage (e.g., arithmetic or Huffman coding), which is typically not robust to post-decoding residual errors. Linear codes over GF(4) are particularly suited for this application, since they are matched to the alphabet of the quantization indices of the dead-zone embedded quantizers used in the scheme, and to the QPSK modulation used on the deep-space communication channel. We therefore optimize a family of systematic Raptor codes over GF(4), which allow a continuum of coding rates, in order to adapt to the quantized source entropy rate (which may differ from image to image) and to the channel capacity. Comparisons are provided with the concatenation of state-of-the-art image coding and channel coding schemes used by the Jet Propulsion Laboratory (JPL) for the Mars Exploration Rover (MER) Mission.