
Showing papers on "Entropy encoding published in 1985"


Journal ArticleDOI
TL;DR: It is demonstrated that a uniform, one-dimensional quantizer followed by a noiseless digital variable-rate encoder can yield a rate that is, for any n, no more than 0.754 bit per sample higher than the rate associated with the optimal n-dimensional quantizer.
Abstract: The quantization of n-dimensional vectors in R^{n} with an arbitrary probability measure, under a mean-square error constraint, is discussed. It is demonstrated that a uniform, one-dimensional quantizer followed by a noiseless digital variable-rate encoder ("entropy encoding") can yield a rate that is, for any n, no more than 0.754 bit per sample higher than the rate associated with the optimal n-dimensional quantizer, regardless of the probabilistic characterization of the input n-vector for the allowed mean-square error.
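As a rough numerical illustration of this setup (not a result from the paper), the sketch below quantizes a memoryless Gaussian source with a uniform scalar quantizer and estimates the rate an ideal entropy coder would need from the empirical index distribution; the source, step size, and sample count are arbitrary choices.

```python
import numpy as np
from collections import Counter

def uniform_quantize(x, step):
    """Map each sample to the index of its uniform quantizer cell."""
    return np.round(x / step).astype(int)

def empirical_entropy_bits(indices):
    """Rate, in bits per sample, that an ideal entropy coder approaches."""
    counts = Counter(indices.tolist())
    n = len(indices)
    return -sum((c / n) * np.log2(c / n) for c in counts.values())

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)    # memoryless Gaussian source
step = 0.5                      # quantizer step; MSE is roughly step**2 / 12
idx = uniform_quantize(x, step)
rate = empirical_entropy_bits(idx)            # bits per sample
mse = float(np.mean((x - idx * step) ** 2))   # reconstruction error
```

For a unit-variance Gaussian with step 0.5 this comes out near 3 bits per sample at an MSE of roughly step²/12; the paper's result says such a scalar pipeline stays within 0.754 bit of the optimal vector quantizer at the same distortion.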

200 citations


Patent
19 Aug 1985
TL;DR: A compression device combining run length encoding and statistical encoding: the run length scheme uses a flag byte symbol disposed between a character signal and a run length symbol, and the statistical encoding selects among multiple tables based on previously occurring data.
Abstract: A compression device which uses both run length encoding and statistical encoding. The run length encoding scheme uses a flag byte symbol which is disposed between a character signal and a run length symbol. The statistical encoding process uses multiple statistical encoding tables which are selected based upon previously occurring data.
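A minimal sketch of such a flag-byte run-length format, assuming the byte order character → flag → count from the abstract; the flag value, the minimum run length, and the simplification that the flag byte never appears as a literal are illustrative choices (a real codec would escape it), and the statistical-table stage is omitted.

```python
FLAG = 0xFF      # assumed flag byte; must not occur in literal data here

def rle_encode(data: bytes, min_run: int = 4) -> bytes:
    """Encode runs of >= min_run repeats as <char><FLAG><count>."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        run = j - i
        if run >= min_run:
            out += bytes([data[i], FLAG, run])   # character, flag, run length
        else:
            out += data[i:j]                     # short runs stay literal
        i = j
    return bytes(out)

def rle_decode(enc: bytes) -> bytes:
    """Invert rle_encode under the no-literal-FLAG assumption."""
    out = bytearray()
    i = 0
    while i < len(enc):
        if i + 2 < len(enc) and enc[i + 1] == FLAG:
            out += bytes([enc[i]]) * enc[i + 2]  # expand the run
            i += 3
        else:
            out.append(enc[i])
            i += 1
    return bytes(out)
```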

189 citations


Journal ArticleDOI
TL;DR: An efficient encoding scheme for arbitrary curves, based on the chain code representation, is proposed; its code amount is about 50-60 percent of that required by the conventional chain encoding scheme.
Abstract: An efficient encoding scheme for arbitrary curves, based on the chain code representation, is proposed. The scheme takes advantage of the property that a curve with gentle curvature divides into fairly long curve sections (segments), each of which is represented by a sequence of two adjacent-direction chain codes. The coding efficiency increases as the segments become longer, while a variable-length coding technique makes it possible to encode short segments without an extreme increase in code amount. An experiment with complicated curves obtained from geographic maps shows that the proposed scheme achieves high data compression, with a code amount about 50-60 percent of that required by the conventional chain encoding scheme.
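The paper's variable-length code itself is not reproduced here, but the chain-code representation and the two-adjacent-direction segment property it exploits can be sketched as follows (the Freeman direction numbering and the segmentation rule are assumptions for illustration):

```python
# 8-neighbour direction codes (Freeman chain code): 0=E, 1=NE, ..., 7=SE
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Convert an 8-connected pixel path into its chain code sequence."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def segments(codes):
    """Split a chain code into maximal runs that use at most two
    adjacent directions -- the 'gentle curvature' segments the
    scheme encodes efficiently."""
    segs, cur = [], [codes[0]]
    for c in codes[1:]:
        if all(min((c - d) % 8, (d - c) % 8) <= 1 for d in cur):
            cur.append(c)
        else:
            segs.append(cur)
            cur = [c]
    segs.append(cur)
    return segs
```

A gently curving path yields one long segment, while a sharp turn starts a new one; longer segments amortize the per-segment overhead, which is the source of the coding gain.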

126 citations


Patent
20 Nov 1985
TL;DR: Blocks of DPCM prediction errors that contain a non-zero event (for instance, a multivalued signal other than zero) are discriminated as valid, and runs of blocks are encoded into a run length code by a valid block run length encoder, enabling effective data compression.
Abstract: PURPOSE: To enable effective data compression by discriminating blocks that contain a non-zero event (for instance, a multivalued signal other than zero) and encoding block runs into a run length code. CONSTITUTION: The prediction-error signals (zero or non-zero) of the picture signal output by a DPCM circuit 1 are input to a valid/invalid block discriminating circuit 2 and separated into blocks; a block containing only zeros is discriminated as invalid, and a block containing a non-zero signal as valid. When a block is discriminated as valid, the run length of the succeeding blocks is detected by a valid block run length detector 6, and this run length is encoded into a run length code by a valid block run length encoder 7. Effective data compression is thereby carried out. COPYRIGHT: (C)1987,JPO&Japio
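Stripped of circuit details, the valid/invalid block discrimination and the block-level run-length step can be sketched as follows (the block size and the toy error line are illustrative assumptions):

```python
import numpy as np

def block_runlengths(errors, block=8):
    """Mark each block of DPCM prediction errors as valid (contains a
    non-zero value) or invalid (all zero), then run-length encode the
    resulting valid/invalid flag sequence."""
    n = len(errors) // block
    flags = [int(np.any(errors[i * block:(i + 1) * block])) for i in range(n)]
    runs, prev, count = [], flags[0], 0
    for f in flags:
        if f == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = f, 1
    runs.append((prev, count))
    return flags, runs
```

Since prediction errors of smooth images are mostly zero, long runs of invalid blocks collapse to a single (flag, count) pair.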

10 citations


Journal ArticleDOI
TL;DR: An autoregressive model of information transmission is proposed for representing a continuous communication system, which requires a paired internal noise source and signal source to encode or decode a message.
Abstract: The operations of encoding and decoding in communication agree with the filtering operations of convolution and deconvolution for Gaussian signal processing. In an analogy with power transmission in thermodynamics, an autoregressive model of information transmission is proposed for representing a continuous communication system, which requires a paired internal noise source and signal source to encode or decode a message. In this model transinformation (informational entropy) equals the increase in stationary nonequilibrium organization formed through the amplification of white noise by a positive feedback system. The channel capacity is finite due to the existence of inherent noise in the system. The maximum entropy criterion in information dynamics corresponds to the second law of thermodynamics. If the process is stationary, the communication system is invertible and has the maximum efficiency of transformation. The total variation in informational entropy is zero over a cycle of the invertible system, while in the noninvertible system the entropy of decoding is less than that of encoding. A noisy autoregressive coding which maximizes transinformation is optimal, but also idealized.

1 citation


Proceedings ArticleDOI
S. Mizuno, K. Iinuma
01 Apr 1985
TL;DR: It is proved that the ordering prediction is the optimal prediction scheme, with a lower prediction error entropy than DPCM or bit-plane coding, and that the run-length coding reduces the rate to nearly the two-dimensional Markov model entropy.
Abstract: An efficient coding method for multi-level digital image data with up to about 16 levels is described, which attains data compression nearly equal to the theoretical bound given by a two-dimensional Markov model entropy, without any loss of picture information. The method, ordering predictive coding, consists of an ordering prediction followed by run-length coding of a serial prediction error bit stream. It is proved that the ordering prediction is the optimal prediction scheme, with a lower prediction error entropy than that of DPCM and bit-plane coding, and also that the run-length coding reduces the rate to nearly the two-dimensional Markov model entropy.
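The ordering prediction itself is specific to the paper, but the basic gain from coding prediction errors instead of raw levels can be illustrated with a crude previous-sample (DPCM-style) predictor on a synthetic, slowly varying 16-level signal; both the predictor and the signal are stand-ins, not the paper's method.

```python
import numpy as np

def entropy_bits(symbols):
    """Empirical first-order entropy in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
# slowly varying 16-level "image" row: a random walk folded into 0..15
img = np.cumsum(rng.integers(-1, 2, size=10_000)) % 16
raw_H = entropy_bits(img)   # near 4 bits for 16 roughly uniform levels
err = np.diff(img)          # previous-sample prediction error
pred_H = entropy_bits(err)  # concentrated near 0, so much lower
```

The prediction error distribution piles up on {-1, 0, 1}, so its entropy drops well below the raw signal's; a good predictor plus entropy-aware coding of the errors is what pushes the rate toward the Markov-model bound.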