
Showing papers on "Entropy encoding published in 1987"


Proceedings ArticleDOI
01 Apr 1987
TL;DR: An algorithm for calculating a noise-to-mask ratio is presented which helps to identify where the quantization noise of the OCF coder could be audible.
Abstract: Optimum Coding in the Frequency domain (OCF) uses entropy coding of quantized spectral coefficients to efficiently code high-quality sound signals at 3 bits/sample. In an iterative algorithm, psychoacoustic weighting is used to ensure that the quantization noise is masked in every critical band. The coder itself uses iterative quantizer control so that each data block is coded with a fixed number of bits. Details about the OCF coder are presented, together with information about the codebook needed and the training of the entropy coder. An algorithm for calculating a noise-to-mask ratio is presented which helps to identify where quantization noise could be audible.

95 citations
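
A minimal sketch of a noise-to-mask ratio calculation of the kind the abstract describes; the critical-band edges and per-band masking thresholds are assumed to come from a psychoacoustic model whose details the abstract does not give:

```python
import numpy as np

def noise_to_mask_ratio_db(original, coded, band_edges, mask_energy):
    """Per-critical-band noise-to-mask ratio, in dB, for one block of
    spectral coefficients. band_edges holds (lo, hi) index ranges; the
    masking-threshold energy per band would come from the coder's
    psychoacoustic model, which is assumed given here."""
    noise = np.abs(np.asarray(original) - np.asarray(coded)) ** 2
    nmr = []
    for (lo, hi), thr in zip(band_edges, mask_energy):
        band_noise = noise[lo:hi].sum() + 1e-12  # avoid log of zero
        # NMR above 0 dB flags a band where quantization noise exceeds
        # the masking threshold and could therefore be audible.
        nmr.append(10.0 * np.log10(band_noise / thr))
    return np.array(nmr)
```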


Patent
01 Dec 1987
TL;DR: In this paper, a method for modeling differential pulse code modulation (DPCM) input data for entropy coding is presented, in which the sign and magnitude of one piece of DPCM data after another are modeled to provide a magnitude state input and a sign state input, respectively, to an entropy encoder or decoder.
Abstract: Apparatus and method for modelling differential pulse code modulation (DPCM) input data for entropy coding. In particular, the sign and magnitude of one piece of DPCM data after another are modelled to provide a magnitude state input and a sign state input to provide context for DPCM magnitude input and DPCM sign input, respectively, to an entropy encoder or decoder. That is, the DPCM magnitudes of earlier pieces of (context) DPCM magnitude data are re-mapped for each such earlier piece of data, the re-mapped data being aggregated to form a combined value indicative of the magnitude state input. Similarly, the DPCM signs of earlier pieces of (context) DPCM sign data are re-mapped for each such earlier piece of data, the re-mapped data being aggregated to form a combined value indicative of the sign state input. In an image data compression system, the magnitude state input serves as an activity indicator for picture elements (pixels) neighboring a "subject" pixel. According to the invention, the DPCM signal is derived from a difference value calculated by subtracting one of a plurality of predictor values from the graylevel value X of the subject pixel. The selection of predictor value P is based on the value of the magnitude state (activity indicator). In addition, the difference value is subject to adaptive quantization in which one of a plurality of quantizers is employed in assigning the (X-P) difference value to a quantization level. The selection of quantizers is also based on the value of the magnitude state (activity indicator).

81 citations
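
A rough sketch of the context-modelling step the claims describe: re-map each neighbouring DPCM magnitude and sign, then aggregate the re-mapped values into a magnitude state and a sign state. The bucket boundaries and the base-4/base-3 packing below are placeholder assumptions; the abstract does not fix them.

```python
def remap_magnitude(m):
    """Placeholder re-mapping of a neighbor's DPCM magnitude to a coarse
    activity level; the patent's actual mapping is not given here."""
    return 0 if m == 0 else 1 if m < 4 else 2 if m < 16 else 3

def magnitude_state(neighbor_magnitudes):
    """Aggregate re-mapped neighbor magnitudes into one state that serves
    both as entropy-coding context and as the activity indicator."""
    state = 0
    for m in neighbor_magnitudes:
        # Base-4 packing is just one way to combine the levels.
        state = state * 4 + remap_magnitude(m)
    return state

def sign_state(neighbor_signs):
    """Aggregate neighbor DPCM signs (-1, 0, +1) into a sign context."""
    state = 0
    for s in neighbor_signs:
        state = state * 3 + (s + 1)
    return state
```

The magnitude state would then double as the activity indicator that selects the predictor P and the quantizer applied to the (X-P) difference.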


Proceedings ArticleDOI
01 Apr 1987
TL;DR: An efficient pyramid image coding system using quadrature mirror filters to form the image-pyramids is proposed in this paper, and simulation results show that simple vector quantization achieves significant bit-rate reduction over scalar quantization.
Abstract: The pyramid image structure can be naturally adapted for progressive image transmission over low-speed channels and hierarchical image retrieval in computerized image archiving. An efficient pyramid image coding system using quadrature mirror filters to form the image-pyramids is proposed in this paper. Characteristics of the image-pyramids are presented. Since the Laplacian pyramids of most natural images contain sparse and spatially concentrated data, a combination of run-length coding for zero-valued elements and entropy coding for elements larger than a certain threshold is employed. The textural features in the Laplacian pyramids suggest that coding techniques exploiting spatial correlation may be advantageous. Therefore, vector quantization is chosen to code the Laplacian pyramids. Simulation results have shown that simple vector quantization accomplished significant bit-rate reduction over scalar quantization. The proposed system has also shown good-quality reproduction at bit rates lower than 1 bit/pixel.

32 citations
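
A sketch of one pyramid level plus the combined run-length/threshold coding the abstract describes; a generic separable low-pass kernel and simple pixel-repeat upsampling stand in for the paper's quadrature mirror filters.

```python
import numpy as np

def laplacian_level(img, lp):
    """One pyramid level: low-pass filter and 2:1 downsample, then
    upsample and subtract to obtain the detail (Laplacian) image. The
    separable kernel lp is a stand-in for the paper's QMF filters."""
    blur = np.apply_along_axis(lambda r: np.convolve(r, lp, 'same'), 0, img)
    blur = np.apply_along_axis(lambda r: np.convolve(r, lp, 'same'), 1, blur)
    coarse = blur[::2, ::2]
    up = np.repeat(np.repeat(coarse, 2, 0), 2, 1)[:img.shape[0], :img.shape[1]]
    return img - up, coarse

def runlength_threshold_code(detail, t):
    """Zero out sub-threshold detail coefficients, then emit (zero-run,
    value) pairs: zero runs get run-length coded and the surviving
    values would go on to the entropy coder."""
    flat = np.where(np.abs(detail) < t, 0, detail).ravel()
    pairs, run = [], 0
    for v in flat:
        if v == 0:
            run += 1
        else:
            pairs.append((run, float(v)))
            run = 0
    return pairs
```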


Journal ArticleDOI
Robert J. Moorhead, S. Rajala, L. Cook
TL;DR: This paper presents and analyzes a pel-recursive, motion-compensated, image sequence compression algorithm and indicates that implementing the analytical model, as opposed to the generally used heuristic technique, yields a decrease in both the information rate and the computational requirements.
Abstract: This paper presents and analyzes a pel-recursive, motion-compensated, image sequence compression algorithm [1]. The analysis retains all the terms of the Taylor series expansion and yields a set of equations for which the convergence criteria and the convergence rate of the motion estimate are more easily seen. Existing motion prediction schemes are also reviewed, and a new motion prediction scheme is presented which is shown to be superior to the existing schemes. Simulations run on actual image sequences to verify the analytical results indicate that implementing the analytical model, as opposed to the generally used heuristic technique, does yield a decrease in the information rate and the computational requirements. Simulation results are also included which use the "projection-along-the-motion-trajectory" or PAMT prediction scheme. Third, zeroth-order entropy encoding is shown to reduce the bit rate on the order of 12 percent, and to reduce the mean square error in the reconstructed images on the order of 60 percent, when compared to first-order entropy encoding. Fourth, field-to-field motion prediction is compared to frame-to-frame motion prediction.

26 citations
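
The pel-recursive update underlying this class of coders is a steepest-descent step on the displaced frame difference (DFD), in the style usually attributed to Netravali and Robbins. A bare-bones, integer-pixel sketch of one such update; the gain eps is an arbitrary assumption, and the paper's convergence analysis and PAMT prediction go well beyond this.

```python
import numpy as np

def pel_recursive_update(prev, curr, x, y, d, eps=0.02):
    """One steepest-descent update of the displacement estimate d=(dx,dy)
    at pixel (x, y): d <- d - eps * DFD * grad(prev at displaced point).
    Integer-pixel gradients are used for brevity; real coders interpolate."""
    dx, dy = int(round(d[0])), int(round(d[1]))
    # Displaced frame difference between the current pixel and its
    # motion-compensated counterpart in the previous frame.
    dfd = curr[y, x] - prev[y - dy, x - dx]
    gx = (prev[y - dy, x - dx + 1] - prev[y - dy, x - dx - 1]) / 2.0
    gy = (prev[y - dy + 1, x - dx] - prev[y - dy - 1, x - dx]) / 2.0
    return (d[0] - eps * dfd * gx, d[1] - eps * dfd * gy)
```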


Journal ArticleDOI
T. Koga, M. Ohta
TL;DR: Simulation of video sequences shows that a combination of the coding methods described here can achieve high coding efficiency for videoconference sequences.
Abstract: Entropy coding has been investigated for motion-compensated (MC) interframe prediction followed by a two-dimensional discrete cosine transform (DCT) of the prediction error. In particular, variable-word-length coding methods for motion vectors and transform coefficients are discussed, assuming a low bit rate such as 384 kbit/s for transmission of videoconference sequences. For motion vector information, it is advantageous to employ a one-dimensional code set common to both the horizontal and vertical components of motion vectors; the code set can be obtained from a combined distribution of the two components. To encode transform coefficients, different methods are applied to significant and insignificant blocks. Run-length coding is adequate for representing clusters of insignificant blocks. For transform coefficients in significant blocks, a zone coding method which encodes the coefficients within a minimum area enclosing all nonzero coefficients is suitable. Simulation on video sequences shows that a combination of the coding methods described here achieves high coding efficiency for videoconference sequences.

22 citations
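
Read literally, the zone coding method transmits only the smallest rectangle enclosing all nonzero quantized coefficients. A sketch under the assumption that the zone is anchored at the DC (top-left) corner, which the abstract does not spell out:

```python
import numpy as np

def zone_code(block):
    """Encode a quantized DCT block as (zone height, zone width, zone
    coefficients): the zone is the smallest top-left-anchored rectangle
    enclosing every nonzero coefficient, so only it needs transmitting."""
    nz = np.argwhere(block != 0)
    if nz.size == 0:
        return 0, 0, np.empty((0, 0))   # insignificant block
    h = int(nz[:, 0].max()) + 1
    w = int(nz[:, 1].max()) + 1
    return h, w, block[:h, :w]
```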


Journal ArticleDOI
TL;DR: The theory and algorithms are applied to evaluating the performance of entropy coding for the discrete cosine transform coefficients of digital images from the "Walter Cronkite" video sequence, and results indicate that binary arithmetic codes outperform run-length codes by 55 percent for low-rate coding of the zero-valued coefficients.
Abstract: Several compression techniques need to be integrated to achieve effective low-bit-rate coding of moving images. Image entropy codes are used in conjunction with either predictive or transform coding methods. In this paper, we investigate the possible advantages of using arithmetic codes for image entropy coding. A theory of source modeling is established based on the concept of source parsing and conditioning trees. The key information-theoretic properties of conditioning trees are discussed along with algorithms for the construction of optimal and suboptimal trees. The theory and algorithms are then applied to evaluating the performance of entropy coding for the discrete cosine transform coefficients of digital images from the "Walter Cronkite" video sequence. The performance of arithmetic codes is compared to that of a traditional combination of run-length and Huffman codes. The results indicate that binary arithmetic codes outperform run-length codes by 55 percent for low-rate coding of the zero-valued coefficients. Hexadecimal arithmetic codes provide a coding-rate improvement as high as 28 percent over truncated Huffman codes for the nonzero coefficients. The complexity of these arithmetic codes is suitable for practical implementation.

18 citations
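
The interval-narrowing idea that gives arithmetic codes their fractional-bit efficiency over run-length codes can be shown in a toy floating-point form; the paper's conditioning-tree models and the renormalized integer arithmetic of a practical coder are not reproduced here.

```python
import math

def arith_encode(bits, p0):
    """Toy interval-narrowing arithmetic encoder for a binary source with
    fixed P(bit = 0) = p0. Floating point limits it to short inputs;
    practical coders renormalize with integer arithmetic instead."""
    low, high = 0.0, 1.0
    for b in bits:
        mid = low + (high - low) * p0
        low, high = (low, mid) if b == 0 else (mid, high)
    # Any number inside [low, high) identifies the whole bit string;
    # its ideal cost is -log2(high - low) bits.
    return (low + high) / 2.0, -math.log2(high - low)

# Example: a mostly-zero DCT significance map (0 = zero coefficient)
# costs far fewer bits than its raw length when probabilities are skewed.
code, ideal_bits = arith_encode([0] * 20 + [1] + [0] * 10, p0=0.9)
```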


DOI
01 Jul 1987
TL;DR: An approach to image data compression using the 2-D lattice modelling method is presented which, in addition to the 2-D lattice predictor, includes uniform quantisation and entropy coding of the prediction errors.
Abstract: An approach to image data compression using the 2-D lattice modelling method is presented. In addition to the 2-D lattice predictor, this realisation includes uniform quantisation and entropy coding of the prediction errors of the predictor. Results show that coded pictures with a signal-to-noise ratio of 30.5 dB can be obtained at an information rate of 0.8 bit/pixel.

4 citations
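
The back end described here, uniform quantisation of the predictor's errors followed by entropy coding, is independent of the lattice structure itself. A minimal sketch that quantizes the errors and estimates the resulting information rate as the zeroth-order entropy an ideal entropy coder would approach:

```python
import numpy as np

def quantize_and_rate(errors, step):
    """Uniformly quantize prediction errors and estimate the information
    rate (bits per pixel) as the zeroth-order entropy of the quantizer
    indices, which an ideal entropy coder approaches."""
    idx = np.round(np.asarray(errors) / step).astype(int)
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    rate = -(p * np.log2(p)).sum()
    return idx, rate
```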


Patent
08 Jan 1987
TL;DR: In this paper, a DPCM circuit is used together with binary discrimination between zero and non-zero phenomena, which have different statistical distributions, to divide signal strings taking three or more values into two groups; data compression is then achieved by run-length coding both groups and Huffman coding the non-zero phenomena, for which several types of codes exist.
Abstract: PURPOSE: To attain satisfactory data compression by dividing signal strings taking three or more values into two groups through binary discrimination between zero and non-zero phenomena, which have different statistical distributions, run-length coding both groups, and applying Huffman coding to the non-zero phenomena, for which several types of codes exist. CONSTITUTION: A DPCM circuit 1 applies predictive coding to the picture signals, and a zero/non-zero discriminating circuit 2 discriminates a zero phenomenon R0 from a non-zero phenomenon R1 and delivers them as binary information to a differentiating circuit 3. At the same time, non-zero information marking the point where a non-zero phenomenon starts is delivered to a multiplexing circuit 8. The circuit 3 supplies a pulse marking the boundary between levels 1 and 0 to a run-length detector 4, a zero run-length coder 5, and a non-zero run-length coder 6, while the error signal is Huffman coded for each code by a Huffman coder 7 and supplied to the circuit 8. The circuit 8 gates the Huffman codes of the zero phenomenon according to the non-zero phenomenon information: the zero run-length code is delivered as it is, and the non-zero run-length code is delivered with the Huffman codes for each multi-value code appended successively after it.

1 citation
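
A structural sketch of the stream the CONSTITUTION describes: zero/non-zero discrimination, run-length codes for both groups, and per-value Huffman codes appended after each non-zero run. The huffman_code function is a stand-in for the patent's code tables.

```python
def huffman_code(v):
    """Stand-in for the per-code Huffman tables the patent describes."""
    return f'H({v})'

def split_and_code(dpcm):
    """Separate a DPCM signal into zero runs and non-zero runs, as the
    zero/non-zero discriminator does, then pair each non-zero run with
    the Huffman codes of the values it carries."""
    out, i, n = [], 0, len(dpcm)
    while i < n:
        j = i
        if dpcm[i] == 0:
            while j < n and dpcm[j] == 0:
                j += 1
            out.append(('zero_run', j - i))
        else:
            while j < n and dpcm[j] != 0:
                j += 1
            # Non-zero run length, followed by one Huffman code per value.
            out.append(('nonzero_run', j - i,
                        [huffman_code(v) for v in dpcm[i:j]]))
        i = j
    return out
```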


01 Jan 1987
TL;DR: Two new methods based on entropy for reconstructing images compressed with the Discrete Cosine transform are presented: one based upon a sequential implementation of the Minimum Relative Entropy Principle, the other based upon the Maximum Entropy Principle.
Abstract: Presented are two new methods based on entropy for reconstructing images compressed with the Discrete Cosine transform. One method is based upon a sequential implementation of the Minimum Relative Entropy Principle; the other is based upon the Maximum Entropy Principle. These are compared with each other and with the conventional method employing the Inverse Discrete Cosine transform. Chapter 2 describes the traditional use of the Discrete Cosine transform for image compression. Chapter 3 explains the theory and implementation of the entropy-based reconstructions and introduces a fast algorithm for the Maximum Entropy Principle. Chapter 4 compares the numerical performance of the three reconstruction methods. Chapter 5 shows that the theoretical convergence limit of the iterative implementation of the Minimum Relative Entropy Principle equals the convergence limit of the Maximum Relative Entropy method. Preliminary results of this thesis were presented at Southeastcon '87 in Tampa; final results will be presented at the Annual Meeting of the American Optical Society in Rochester.
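
The abstract gives no implementation details, but the Maximum Entropy Principle it invokes can be illustrated generically: maximize the entropy of a non-negative image subject to reproducing the retained DCT coefficients. The sketch below solves this by naive gradient descent on the dual problem; the constraint matrix, step size, and iteration count are all illustrative assumptions, not the thesis's fast algorithm.

```python
import numpy as np

def max_entropy_reconstruct(A, b, iters=2000, lr=0.05):
    """Reconstruct a non-negative image x (flattened) maximizing
    -sum(x * log x) subject to A @ x = b, where the rows of A are the
    retained DCT basis functions and b the transmitted coefficients.
    The optimal x has the form exp(-1 - A.T @ lam); lam is found by
    gradient descent on the dual objective."""
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        x = np.exp(-1.0 - A.T @ lam)
        # Dual gradient is b - A @ x; step size is an arbitrary choice
        # and assumes the constraints admit a positive solution.
        lam -= lr * (b - A @ x)
    return np.exp(-1.0 - A.T @ lam)
```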