
Showing papers in "IEEE Transactions on Information Theory in 1982"


Journal ArticleDOI
S. P. Lloyd1
TL;DR: In this article, the authors derived necessary conditions for any finite number of quanta and associated quantization intervals of an optimum finite quantization scheme to achieve minimum average quantization noise power.
Abstract: It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta becomes large. The optimum quantization schemes for 2^{b} quanta, b=1,2,\cdots,7 , are given numerically for Gaussian and for Laplacian distributions of signal amplitudes.
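
The two necessary conditions derived here, each quantum value being the centroid of its interval and each interval boundary being the midpoint of adjacent quantum values, are the basis of the now-standard Lloyd iteration. A minimal sketch of that iteration on sampled data, assuming squared-error distortion and a Gaussian source (the function name and initialization are mine, not the paper's):

```python
import numpy as np

def lloyd(samples, n_levels, iters=200):
    """Iterate Lloyd's two necessary conditions on a set of samples:
    1) each quantum value is the centroid (mean) of its cell,
    2) each cell boundary is the midpoint of adjacent quantum values."""
    q = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))   # initial quanta
    for _ in range(iters):
        edges = (q[:-1] + q[1:]) / 2                  # midpoint condition
        cells = np.digitize(samples, edges)           # nearest-quantum partition
        q = np.array([samples[cells == i].mean() for i in range(n_levels)])  # centroids
    edges = (q[:-1] + q[1:]) / 2
    return q, edges

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)                      # Gaussian signal amplitudes
quanta, edges = lloyd(x, 4)
mse = np.mean((x - quanta[np.digitize(x, edges)]) ** 2)
print(quanta)   # for 4 levels and a unit-variance Gaussian, roughly +/-0.45 and +/-1.51
print(mse)      # approaches the minimum average quantization noise power
```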

11,872 citations


Journal ArticleDOI
G. Ungerboeck1
TL;DR: A coding technique is described which improves error performance of synchronous data links without sacrificing data rate or requiring more bandwidth by channel coding with expanded sets of multilevel/phase signals in a manner which increases free Euclidean distance.
Abstract: A coding technique is described which improves error performance of synchronous data links without sacrificing data rate or requiring more bandwidth. This is achieved by channel coding with expanded sets of multilevel/phase signals in a manner which increases free Euclidean distance. Soft maximum-likelihood (ML) decoding using the Viterbi algorithm is assumed. Following a discussion of channel capacity, simple hand-designed trellis codes are presented for 8 phase-shift keying (PSK) and 16 quadrature amplitude-shift keying (QASK) modulation. These simple codes achieve coding gains on the order of 3-4 dB. It is then shown that the codes can be interpreted as binary convolutional codes with a mapping of coded bits into channel signals, which we call "mapping by set partitioning." Based on a new distance measure between binary code sequences which efficiently lower-bounds the Euclidean distance between the corresponding channel signal sequences, a search procedure for more powerful codes is developed. Codes with coding gains up to 6 dB are obtained for a variety of multilevel/phase modulation schemes. Simulation results are presented and an example of carrier-phase tracking is discussed.
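
The "mapping by set partitioning" idea can be illustrated numerically: each halving of the 8-PSK signal set doubles the number of subsets while increasing the minimum intra-subset Euclidean distance. A small sketch of that distance progression for a unit-energy 8-PSK constellation (my own illustration, not taken from the paper):

```python
import numpy as np
from itertools import combinations

def min_distance(points):
    """Minimum Euclidean distance within a set of complex signal points."""
    return min(abs(a - b) for a, b in combinations(points, 2))

psk8 = np.exp(2j * np.pi * np.arange(8) / 8)               # unit-energy 8-PSK constellation

# Successive set partitioning: split on the least significant index bit each time.
level0 = [psk8]                                             # the full 8-PSK set
level1 = [psk8[0::2], psk8[1::2]]                           # two QPSK subsets
level2 = [psk8[0::4], psk8[1::4], psk8[2::4], psk8[3::4]]   # four antipodal pairs

for name, subsets in [("full set", level0), ("QPSK subsets", level1), ("pairs", level2)]:
    print(name, min(min_distance(s) for s in subsets))
# Intra-subset distances grow as 2*sin(pi/8) ~ 0.765, sqrt(2) ~ 1.414, 2.0;
# this growing distance is what the trellis code exploits.
```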

4,091 citations


Journal ArticleDOI
TL;DR: Achievable rate pairs for two descriptions of an i.i.d. source are characterized in terms of Shannon mutual information, and these rates are shown to be optimal for deterministic distortion measures.
Abstract: Consider a sequence of independent identically distributed (i.i.d.) random variables X_{1},X_{2}, \cdots, X_{n} and a distortion measure d(X_{i},\hat{X}_{i}) on the estimates \hat{X}_{i} of X_{i} . Two descriptions i(X)\in \{1,2, \cdots ,2^{nR_{1}}\} and j(X)\in \{1,2, \cdots ,2^{nR_{2}}\} are given of the sequence X=(X_{1}, X_{2}, \cdots ,X_{n}) . From these two descriptions, three estimates \hat{X}_{1}(i(X)), \hat{X}_{2}(j(X)) , and \hat{X}_{0}(i(X),j(X)) are formed, with resulting expected distortions E\frac{1}{n}\sum^{n}_{k=1} d(X_{k}, \hat{X}_{mk})=D_{m}, m=0,1,2 . We find that the distortion constraints D_{0}, D_{1}, D_{2} are achievable if there exists a probability mass distribution p(x)p(\hat{x}_{1},\hat{x}_{2},\hat{x}_{0}|x) with Ed(X,\hat{X}_{m})\leq D_{m} such that R_{1}>I(X;\hat{X}_{1}), R_{2}>I(X;\hat{X}_{2}), where I(\cdot) denotes Shannon mutual information. These rates are shown to be optimal for deterministic distortion measures.

714 citations


Journal ArticleDOI
I. Ingemarsson, D. Tang1, C. Wong1
TL;DR: This work has shown how to use CKDS in connection with public key ciphers and an authorization scheme and reveals two important aspects of any conference key distribution system: the multitap resistance and the choice of a suitable symmetric function of the private keys.
Abstract: Encryption is used in a communication system to safeguard information in the transmitted messages from anyone other than the intended receiver(s). To perform the encryption and decryption the transmitter and receiver(s) ought to have matching encryption and decryption keys. A clever way to generate these keys is to use the public key distribution system invented by Diffie and Hellman. That system, however, admits only one pair of communication stations to share a particular pair of encryption and decryption keys. The public key distribution system is generalized to a conference key distribution system (CKDS) which admits any group of stations to share the same encryption and decryption keys. The analysis reveals two important aspects of any conference key distribution system. One is the multitap resistance, which is a measure of the information security in the communication system. The other is the separation of the problem into two parts: the choice of a suitable symmetric function of the private keys and the choice of a suitable one-way mapping thereof. We have also shown how to use CKDS in connection with public key ciphers and an authorization scheme.
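
One natural instantiation of this idea makes the conference key a one-way mapping (modular exponentiation) of a symmetric function (the product) of the stations' private keys: stations sit in a ring, and each repeatedly exponentiates the value received from its neighbor with its own private key and passes the result on, so after n-1 rounds every station can form the same key. The toy sketch below illustrates that mechanism; it is not claimed to be the paper's exact protocol, and the modulus, generator, and absence of authentication are all simplifications.

```python
import random

p = 2**127 - 1           # toy public prime modulus (illustrative, not a vetted group)
g = 3                    # toy public generator

n = 4                                        # number of conference stations
priv = [random.randrange(2, p - 1) for _ in range(n)]

# Round 0: every station sends g^x_i to its right-hand neighbor.
msgs = [pow(g, x, p) for x in priv]

# Rounds 1..n-2: exponentiate what arrived from the left, pass it to the right.
for _ in range(n - 2):
    msgs = [pow(msgs[(i - 1) % n], priv[i], p) for i in range(n)]

# At this point station i's left neighbor holds g raised to the product of the
# other n-1 private keys; one final exponentiation gives the conference key.
keys = [pow(msgs[(i - 1) % n], priv[i], p) for i in range(n)]
assert len(set(keys)) == 1                   # all stations share g^(x_1 x_2 ... x_n)
print(hex(keys[0]))
```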

583 citations


Journal ArticleDOI
TL;DR: Extensions of the limiting quantization error formula of Bennett are proved, and random quantization, optimal quantization in the presence of an output information constraint, and quantization noise in high dimensional spaces are investigated.
Abstract: Extensions of the limiting quantization error formula of Bennett are proved. These are of the form D_{s,k}(N,F)=N^{-\beta}B , where N is the number of output levels, D_{s,k}(N,F) is the s th moment of the metric distance between quantizer input and output, \beta,B>0, k=s/\beta is the signal space dimension, and F is the signal distribution. If a suitably well-behaved k -dimensional signal density f(x) exists, B=b_{s,k}[\int f^{\rho}(x)dx]^{1/\rho}, \rho=k/(s+k) , and b_{s,k} does not depend on f . For k=1, s=2 this reduces to Bennett's formula. If F is the Cantor distribution on [0,1] , the formula holds with a nonintegral k , and this k equals the fractal dimension of the Cantor set [12], [13]. Random quantization, optimal quantization in the presence of an output information constraint, and quantization noise in high dimensional spaces are also investigated.
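
For k=1, s=2 the formula specializes to Bennett's classical result D \approx N^{-2}\frac{1}{12}\left[\int f^{1/3}(x)\,dx\right]^{3}, with b_{2,1}=1/12. A quick numeric sanity check of that special case (my own sketch): for a uniform source on [0,1] the integral is 1, and an N-level uniform quantizer attains D = 1/(12N^{2}) exactly.

```python
import numpy as np

def uniform_quantizer_mse(samples, n_levels, lo=0.0, hi=1.0):
    """Mean-squared error of an N-level uniform quantizer on [lo, hi]."""
    step = (hi - lo) / n_levels
    centers = lo + step * (np.arange(n_levels) + 0.5)
    idx = np.clip(((samples - lo) / step).astype(int), 0, n_levels - 1)
    return np.mean((samples - centers[idx]) ** 2)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 1_000_000)        # uniform source: integral of f^(1/3) equals 1
for n in (4, 16, 64):
    empirical = uniform_quantizer_mse(x, n)
    predicted = 1.0 / (12 * n ** 2)         # D = N^(-2) * B with B = 1/12 here
    print(n, empirical, predicted)          # the two columns agree closely
```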

575 citations


Journal ArticleDOI
TL;DR: For each of the lattices A_{n}, D_{n}, E_{6}, E_{7}, E_{8} , and their duals, a very fast algorithm is given for finding the closest lattice point to an arbitrary point; if these lattices are used for vector quantizing of uniformly distributed data, the algorithm finds the minimum distortion lattice point.
Abstract: For each of the lattices A_{n}(n \geq 1), D_{n}(n \geq 2), E_{6}, E_{7}, E_{8} , and their duals a very fast algorithm is given for finding the closest lattice point to an arbitrary point. If these lattices are used for vector quantizing of uniformly distributed data, the algorithm finds the minimum distortion lattice point. If the lattices are used as codes for a Gaussian channel, the algorithm performs maximum likelihood decoding.
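
For the lattice D_n (integer vectors with even coordinate sum) the decoding rule is especially simple: round every coordinate to the nearest integer, and if the resulting coordinate sum is odd, re-round the single coordinate that was farthest from an integer in the other direction. A sketch of that D_n rule as it is usually described (the routines for E_6, E_7, E_8 and the duals are more involved and omitted here):

```python
import numpy as np

def closest_point_Dn(x):
    """Closest point of D_n = {v in Z^n : sum(v) even} to an arbitrary point x."""
    f = np.rint(x)                              # round every coordinate
    if int(f.sum()) % 2 == 0:
        return f
    # Otherwise flip the rounding of the coordinate farthest from an integer.
    k = np.argmax(np.abs(x - f))
    g = f.copy()
    g[k] += 1.0 if x[k] > f[k] else -1.0
    return g

x = np.array([0.4, -1.2, 2.7, 0.6])
v = closest_point_Dn(x)
print(v, int(v.sum()) % 2 == 0)                 # a D_4 point with even coordinate sum
```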

479 citations


Journal ArticleDOI
TL;DR: Three measures of divergence between vectors in a convex set of an n -dimensional real vector space are defined in terms of certain types of entropy functions, and their convexity property is studied.

Abstract: Three measures of divergence between vectors in a convex set of an n -dimensional real vector space are defined in terms of certain types of entropy functions, and their convexity property is studied. Among other results, a classification of the entropies of degree \alpha is obtained by the convexity of these measures. These results have applications in information theory and biological studies.
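
Divergences of this type include the Jensen difference: the entropy of an averaged distribution minus the average of the entropies, which is nonnegative whenever the entropy function is concave. A rough illustration using the Shannon entropy and an entropy of degree alpha in the Havrda-Charvat normalization; the exact form and normalization used in the paper may differ.

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

def entropy_degree_alpha(p, alpha):
    """Entropy of degree alpha (Havrda-Charvat form); tends to Shannon as alpha -> 1."""
    p = np.asarray(p, dtype=float)
    return (np.sum(p ** alpha) - 1.0) / (2.0 ** (1.0 - alpha) - 1.0)

def jensen_divergence(p, q, entropy):
    """Jensen difference H((p+q)/2) - (H(p)+H(q))/2; nonnegative for concave H."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return entropy((p + q) / 2) - 0.5 * (entropy(p) + entropy(q))

p = [0.7, 0.2, 0.1]
q = [0.1, 0.3, 0.6]
print(jensen_divergence(p, q, shannon_entropy))
print(jensen_divergence(p, q, lambda r: entropy_degree_alpha(r, 1.5)))
```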

437 citations


Journal ArticleDOI
L. Liporace1
TL;DR: Parameter estimation for multivariate functions of Markov chains, a class of versatile statistical models for vector random processes, is discussed, and a powerful representation theorem by Fan is employed to generalize the analysis of Baum, et al. to a larger class of distributions.
Abstract: Parameter estimation for multivariate functions of Markov chains, a class of versatile statistical models for vector random processes, is discussed. The model regards an ordered sequence of vectors as noisy multivariate observations of a Markov chain. Mixture distributions are a special case. The foundations of the theory presented here were established by Baum, Petrie, Soules, and Weiss. A powerful representation theorem by Fan is employed to generalize the analysis of Baum, {\em et al.} to a larger class of distributions.

414 citations


Journal ArticleDOI
TL;DR: The capacity region of a class of deterministic discrete memoryless interference channels is established, and the capacity region for the case in which V_{2} \equiv 0 and Y_{1} depends randomly on X_{1} is obtained and illustrated with an example.

Abstract: The capacity region of a class of deterministic discrete memoryless interference channels is established. In this class of channels the outputs Y_{1} and Y_{2} are (deterministic) functions of the inputs X_{1} and X_{2} such that H(Y_{1}|X_{1})=H(V_{2}) and H(Y_{2}|X_{2})=H(V_{1}) for all product probability distributions on X_{1}X_{2} , where V_{1} is a function of X_{1} and V_{2} a function of X_{2} . The capacity region for the case in which V_{2} \equiv 0 and Y_{1} depends randomly on X_{1} is also obtained and illustrated with an example.

358 citations


Journal ArticleDOI
TL;DR: Vector quantization is intrinsically superior to predictive coding, transform coding, and other suboptimal and {\em ad hoc} procedures since it achieves optimal rate distortion performance subject only to a constraint on memory or block length of the observable signal segment being encoded.
Abstract: Vector quantization is intrinsically superior to predictive coding, transform coding, and other suboptimal and {\em ad hoc} procedures since it achieves optimal rate distortion performance subject only to a constraint on memory or block length of the observable signal segment being encoded. The key limitations of existing techniques are the very large randomly generated code books which must be stored and the computational complexity of the associated encoding procedures. The quantization operation is decomposed into its rudimentary structural components. This leads to a simple and elegant approach to derive analytical properties of optimal quantizers. Some useful properties of quantizers and algorithmic approaches are given, which are relevant to the complexity of both storage and processing in the encoding operation. Highly disordered quantizers, which have been designed using a clustering algorithm, are considered. Finally, lattice quantizers are examined which circumvent the need for a code book by using a highly structured code based on lattices. The code vectors are algorithmically generated in a simple manner rather than stored in a code book, and fast algorithms perform the encoding operation with negligible complexity.
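
The clustering-based design mentioned here is essentially k-means applied to training vectors: alternate a nearest-neighbor (minimum-distortion) partition with centroid updates of the codevectors. A minimal sketch of that style of codebook design for a two-dimensional correlated source (my own, not the specific algorithm analyzed in the paper):

```python
import numpy as np

def train_codebook(train, n_codevectors, iters=50, seed=0):
    """Design a VQ codebook by alternating the two optimality conditions:
    nearest-neighbor partition and centroid codevectors."""
    rng = np.random.default_rng(seed)
    code = train[rng.choice(len(train), n_codevectors, replace=False)]
    for _ in range(iters):
        # nearest-neighbor (minimum distortion) assignment
        d = ((train[:, None, :] - code[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        # centroid update; keep the old codevector if a cell happens to be empty
        code = np.array([train[idx == j].mean(axis=0) if np.any(idx == j) else code[j]
                         for j in range(n_codevectors)])
    return code

rng = np.random.default_rng(1)
train = rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], size=20_000)
codebook = train_codebook(train, 16)
d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2).min(axis=1)
print(d.mean() / 2)    # per-component mean-squared error of the 16-vector codebook
```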

356 citations


Journal ArticleDOI
TL;DR: New concepts and techniques are presented for implementing encoders for Reed-Solomon codes, with or without interleaving, considering only fields of order 2^{m} , where m might be any integer.
Abstract: This paper presents new concepts and techniques for implementing encoders for Reed-Solomon codes, with or without interleaving. Reed-Solomon encoders based on these concepts and techniques often require substantially less hardware than even linear cyclic binary codes of comparable redundancy. A codeword of a cyclic code is a sequence of characters which can be viewed as the coefficients of a polynomial C(x) = \sum_{i=0}^{n-1} C_{i}x^{i} . The characters C_{n-1}, C_{n-2}, C_{n-3}, \cdots, C_{1}, C_{0} are elements in a finite field. In this paper, we consider only fields of order 2^{m} , where m might be any integer. A sequence of n characters is a codeword if and only if its corresponding polynomial, C(x) , is a multiple of the code's generator polynomial, g(x) . Let \deg g(x) = n-k . The common method of encoding a cyclic code is to regard C_{n-1}, C_{n-2}, \cdots, C_{n-k} as message characters, and to divide the polynomial
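
The division method sketched above applies to any cyclic code; the binary (7,4) Hamming code with g(x) = x^3 + x + 1 makes a compact example, and a Reed-Solomon encoder performs the same steps with arithmetic over GF(2^m) instead of GF(2). A minimal GF(2) sketch (my own illustration of the standard method, not the hardware structures proposed in the paper):

```python
def poly_mod_gf2(dividend, divisor):
    """Remainder of polynomial division over GF(2); polynomials are bit lists,
    most significant coefficient first."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]

def encode_cyclic(msg, gen, n):
    """Systematic cyclic encoding: shift the message up by deg g(x) and append
    the remainder on division by g(x), so the codeword is a multiple of g(x)."""
    deg_g = len(gen) - 1
    assert len(msg) + deg_g == n
    shifted = msg + [0] * deg_g                 # message polynomial times x^(n-k)
    return msg + poly_mod_gf2(shifted, gen)

g = [1, 0, 1, 1]                                # g(x) = x^3 + x + 1
codeword = encode_cyclic([1, 0, 1, 0], g, 7)    # (7,4) binary cyclic (Hamming) code
print(codeword)                                 # 7 bits: message followed by check bits
print(poly_mod_gf2(codeword, g))                # remainder is all zeros
```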

Journal ArticleDOI
TL;DR: For Slepian-Wolf source networks, the error exponents obtained by Korner, Marton, and the author are shown to be universally attainable by linear codes also, and improved exponents are derived for linear codes with "large rates."

Abstract: For Slepian-Wolf source networks, the error exponents obtained by Korner, Marton, and the author are shown to be universally attainable by linear codes also. Improved exponents are derived for linear codes with "large rates." Specializing the results to simple discrete memoryless sources reveals their relationship to the random coding and expurgated bounds for channels with additive noise. One corollary is that there are universal linear codes for this class of channels which attain the random coding error exponent for each channel in the class. The combinatorial approach of Csiszar-Korner-Marton is used. In particular, all results are derived from a lemma specifying good encoders in terms of purely combinatorial properties.

Journal ArticleDOI
TL;DR: The answers to the squared distance questions and a description of the Voronoi (or nearest neighbor) regions of these lattices have applications to quantization and to the design of signals for the Gaussian channel.
Abstract: If a point is picked at random inside a regular simplex, octahedron, 600-cell, or other polytope, what is its average squared distance from the centroid? In n -dimensional space, what is the average squared distance of a random point from the closest point of the lattice A_{n} (or D_{n}, E_{n}, A_{n}^{\ast} or D_{n}^{\ast} )? The answers are given here, together with a description of the Voronoi (or nearest neighbor) regions of these lattices. The results have applications to quantization and to the design of signals for the Gaussian channel. For example, a quantizer based on the eight-dimensional lattice E_{8} has a mean-squared error per symbol of 0.0717 \cdots when applied to uniformly distributed data, compared with 0.08333 \cdots for the best one-dimensional quantizer.
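
The benchmark figure 0.08333... quoted for the best one-dimensional quantizer is simply 1/12, the mean-squared distance per symbol from a random point to the nearest point of the integer lattice. A quick Monte Carlo sketch of that baseline (estimating the E_8 figure of 0.0717... would additionally require an E_8 nearest-point routine such as the one in the decoding paper above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
u = rng.uniform(-0.5, 0.5, size=(1_000_000, n))        # uniform points in one cell of Z^n
msq_per_symbol = np.mean(np.sum(u ** 2, axis=1)) / n   # squared distance to nearest Z^n point
print(msq_per_symbol)                                  # ~ 1/12 = 0.08333..., the 1-D figure
# A better lattice such as E_8 reshapes the Voronoi cell and lowers this toward 0.0717.
```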

Journal ArticleDOI
TL;DR: A new family of nonlinear binary signal sets is constructed which achieve Welch's lower bound on simultaneous cross correlation and autocorrelation magnitudes and assures that the code sequence cannot be readily analyzed by a sophisticated enemy and then used to neutralize the advantages of the spread spectrum processing.
Abstract: In this paper we construct a new family of nonlinear binary signal sets which achieve Welch's lower bound on simultaneous cross correlation and autocorrelation magnitudes. Given a parameter n with n \equiv 0 \pmod{4} , the period of the sequences is 2^{n}-1 , the number of sequences in the set is 2^{n/2} , and the cross/auto correlation function has three values with magnitudes \leq 2^{n/2}+1 . The equivalent linear span of the codes is bounded above by \sum_{i=1}^{n/4} \binom{n}{i} . These new signal sets have the same size and correlation properties as the small set of Kasami codes, but they have important advantages for use in spread spectrum multiple access communications systems. First, the sequences are "balanced," which represents only a slight advantage. Second, the sequence generators are easy to randomly initialize into any assigned code and hence can be rapidly "hopped" from sequence to sequence for code division multiple access operation. Most importantly, the codes are nonlinear in that the order of the linear difference equation satisfied by the sequence can be orders of magnitude larger than the number of memory elements in the generator that produced it. This high equivalent linear span assures that the code sequence cannot be readily analyzed by a sophisticated enemy and then used to neutralize the advantages of the spread spectrum processing.

Journal ArticleDOI
TL;DR: The capacity of the class of relay channels with sender x_{1} , a relay sender x_{2} , a relay receiver y_{1}=f(x_{1},x_{2}) , and ultimate receiver y is proved to be C = \max_{p(x_{1},x_{2})} \min \{I(X_{1},X_{2};Y), H(Y_{1}|X_{2})+I(X_{1};Y|X_{2},Y_{1})\} .

Abstract: The capacity of the class of relay channels with sender x_{1} , a relay sender x_{2} , a relay receiver y_{1}=f(x_{1},x_{2}) , and ultimate receiver y is proved to be C = \max_{p(x_{1},x_{2})} \min \{I(X_{1},X_{2};Y), H(Y_{1}|X_{2})+I(X_{1};Y|X_{2},Y_{1})\} .

Journal ArticleDOI
TL;DR: It is shown that a rate region, previously obtained for the multiple-access channel with "perfect" feedback to both senders, remains achievable when the feedback connection to one of the senders is eliminated.
Abstract: This paper is concerned with a communication channel with two senders and one receiver, in which each sender observes a private feedback signal. The two feedback signals are not necessarily equivalent to or derived from the signal observed by the receiver. An achievable rate region is demonstrated for this multiple-access channel by means of a new superposition coding scheme. In particular it is shown that a rate region, previously obtained for the multiple-access channel with "perfect" feedback to both senders, remains achievable when the feedback connection to one of the senders is eliminated.

Journal ArticleDOI
TL;DR: It is shown that the infimum over all N level quantizers of the quantity N^{r/k} times the r th power average distortion converges to a finite constant as N \rightarrow \infty .
Abstract: Asymptotic properties of the r th power distortion measure associated with quantized k -dimensional random variables are considered. Subject only to a moment condition, it is shown that the infimum over all N level quantizers of the quantity N^{r/k} times the r th power average distortion converges to a finite constant as N \rightarrow \infty .

Journal ArticleDOI
David Pollard1
TL;DR: Asymptotic results from the statistical theory of k -means clustering are applied to problems of vector quantization and the behavior of quantizers constructed from long training sequences of data is analyzed.
Abstract: Asymptotic results from the statistical theory of k -means clustering are applied to problems of vector quantization. The behavior of quantizers constructed from long training sequences of data is analyzed by relating it to the consistency problem for k -means.

Journal ArticleDOI
TL;DR: The asymptotic "merit factor," i.e., the ratio of central to sidelobe energy of extremely long, optimally low autocorrelation sequences, formerly calculated as 2e^{2}=14.778 \cdots , is recalculated without a convenient, but faulty, approximation.

Abstract: The asymptotic "merit factor," i.e., the ratio of central to sidelobe energy of extremely long, optimally low autocorrelation sequences, formerly calculated as 2e^{2}=14.778 \cdots with the use of an ergodicity hypothesis and a convenient, but faulty, approximation, is recalculated without that approximation and is established at 12.32 \cdots .
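
The merit factor in question is F = N^2 / (2 \sum_{k \geq 1} c_k^2), where the c_k are the aperiodic autocorrelation sidelobes of a binary (plus/minus one) sequence of length N. A small sketch computing it for the length-13 Barker sequence, whose merit factor of about 14.08 happens to exceed the asymptotic value established here:

```python
import numpy as np

def merit_factor(seq):
    """Golay's merit factor: central energy N^2 over twice the sidelobe energy."""
    s = np.asarray(seq, dtype=float)
    n = len(s)
    full = np.correlate(s, s, mode="full")      # aperiodic autocorrelation, length 2n-1
    sidelobes = full[n:]                        # c_1 ... c_(n-1)
    return n ** 2 / (2.0 * np.sum(sidelobes ** 2))

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(merit_factor(barker13))                   # about 14.08 (= 169/12)
```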

Journal ArticleDOI
TL;DR: A new detection algorithm, single most likely replacement (SMLR), for detecting randomly located impulsive events which have Gaussian-distributed amplitudes is presented and experimental results and comparisons with other detectors are provided.
Abstract: A new detection algorithm, single most likely replacement (SMLR), for detecting randomly located impulsive events which have Gaussian-distributed amplitudes is presented. This detector is designed for the case of severely overlapping wavelets. Estimation of the probability of events is also considered. Experimental results and comparisons with other detectors, using synthetic data, are provided.

Journal ArticleDOI
TL;DR: A simple (combinatorial) special case of the generalized Lloyd-Max problem is shown to be nondeterministic polynomial (NP)-complete, indicating that the general problem of communication theory, in its combinatorial forms, has at least that complexity.
Abstract: A simple (combinatorial) special case of the generalized Lloyd-Max (or quantization) problem is shown to be nondeterministic polynomial (NP)-complete. {\em A fortiori}, the general problem of communication theory, in its combinatorial forms, has at least that complexity.

Journal ArticleDOI
TL;DR: It is hoped that more general constructions may be found, leading to larger families of solutions, as well as better computational algorithms for finding individual solutions which may lie outside of the general families.
Abstract: A number of closely related combinatorial problems corresponding to specific assumptions about the type of time-frequency sequence which may be appropriate in a particular application, are formulated in terms of square or rectangular arrays of dots with appropriate constraints on the two-dimensional correlation function. The current state of knowledge concerning each of these problems is summarized. It is hoped that more general constructions may be found, leading to larger families of solutions, as well as better computational algorithms for finding individual solutions which may lie outside of the general families.
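
One family of constructions in this area (often credited to Welch) takes a primitive root g modulo a prime p and places dots according to the permutation i -> g^i mod p; the resulting array has the property that all difference vectors between pairs of dots are distinct, so the two-dimensional autocorrelation sidelobes never exceed one coincidence. The sketch below, written as an illustration rather than from the paper, builds such an array and checks the distinct-difference property directly.

```python
from itertools import combinations

def welch_costas(p, g):
    """Permutation i -> g^i mod p, i = 1..p-1, for a primitive root g of prime p."""
    return [pow(g, i, p) for i in range(1, p)]

def has_distinct_differences(perm):
    """All difference vectors (j - i, perm[j] - perm[i]) must be distinct."""
    vectors = set()
    for i, j in combinations(range(len(perm)), 2):
        v = (j - i, perm[j] - perm[i])
        if v in vectors:
            return False
        vectors.add(v)
    return True

a = welch_costas(7, 3)                  # 3 is a primitive root mod 7
print(a, has_distinct_differences(a))   # a length-6 permutation passing the check
```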

Journal ArticleDOI
TL;DR: Causal source codes are defined, and for memoryless sources it is shown that the optimum performance attainable by causal codes can be achieved by memoryless codes or by time-sharing memoryless codes.
Abstract: Causal source codes are defined. These include quantizers, delta modulators, differential pulse code modulators, and adaptive versions of these. Several types of causal codes are identified. For memoryless sources it is shown that the optimum performance attainable by causal codes can be achieved by memoryless codes or by time-sharing memoryless codes. This optimal performance can be evaluated straightforwardly.

Journal ArticleDOI
TL;DR: A general coding theorem is proved which establishes that a certain region, defined in terms of "single-letter" information theoretic quantities, is an inner bound to the region of all attainable vectors of rates and distortions.

Abstract: We resolve some open problems concerning the encoding of a pair of correlated sources with respect to a fidelity criterion. Two encoders are used. One encoder observes only the output of the first source, while the other is supplied both with the second source output and with partial information about the first source at some predetermined rate. A general coding theorem is proved which establishes that {\cal S}^{\ast} , a certain region defined in terms of "single-letter" information theoretic quantities, is an inner bound to the region of all attainable vectors of rates and distortions. For certain cases of interest the converse is proved, too, thereby establishing the rate-distortion region for these cases.

Journal ArticleDOI
TL;DR: An algorithm is described for computing the automorphism group of an error correcting code and produces a set of monomial permutations which generate the group.
Abstract: An algorithm is described for computing the automorphism group of an error correcting code. The algorithm determines the order of the automorphism group and produces a set of monomial permutations which generate the group. It has been implemented on a computer and has been used successfully on a great number of codes of moderate length.

Journal ArticleDOI
TL;DR: The simple condition of concavity of \ln p(x) is sufficient for uniqueness of a locally optimal quantizer being a stationary point of the quantization distortion measure E[f(x,\eta)] .

Abstract: Sufficient conditions are presented for uniqueness of a locally optimal quantizer being a stationary point of the quantization distortion measure E[f(x,\eta)] , the expected value of an error weighting function f(x,\eta) , where x is a random variable to be quantized, where the probability density function p(x) describing x is continuous and positive on some finite or infinite interval and zero outside it, and where \eta is the quantization error. The function f(x,\eta) is assumed convex and symmetric in \eta and zero only for \eta = 0 . It is shown that in the cases of f(x,\eta)=\eta^{2} and f(x,\eta)=|\eta| , the simple condition of concavity of \ln p(x) is sufficient for uniqueness of a locally optimal quantizer.

Journal ArticleDOI
TL;DR: The main theorem proved is that an extremal self-dual doubly even code of length 48 with a nontrivial automorphism of odd order is equivalent to the extended quadratic residue code.
Abstract: General results on automorphisms of self-dual binary codes are given. These results are applied to the study of extremal self-dual doubly even binary codes of length 48 . The main theorem proved is that an extremal self-dual doubly even code of length 48 with a nontrivial automorphism of odd order is equivalent to the extended quadratic residue code. Interesting constructions of the binary extended Golay code as well as a conjecture about a possible connection between an extremal self-dual doubly even code of length 72 and an extremal quaternary code of length 24 are yielded by techniques used in the proof.

Journal ArticleDOI
G. Langdon1, Jorma Rissanen1
TL;DR: A source code for binary strings, admitting a simple and fast hardware implementation, is described, capable of encoding strings modeled by stationary or nonstationary sources alike without use of alphabet extension.
Abstract: A source code for binary strings, admitting a simple and fast hardware implementation, is described. The code is an arithmetic code, and it is capable of encoding strings modeled by stationary or nonstationary sources alike without use of alphabet extension. In particular, in the case with a stationary independent information source, the code degenerates to a bitwise implementation of Golomb's run-length code.
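
The degenerate case mentioned at the end, Golomb's run-length code for a stationary independent binary source, is easy to reproduce: each run of the more probable symbol is sent as a unary quotient plus a truncated-binary remainder with respect to a parameter m matched to the symbol probability. A sketch of that limiting case only (the arithmetic code itself is not reproduced here; function names are mine):

```python
import math

def golomb_encode_run(run_length, m):
    """Golomb code (parameter m >= 2) for a nonnegative run length:
    unary quotient, then the remainder in truncated binary."""
    q, r = divmod(run_length, m)
    bits = "1" * q + "0"                          # unary part
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                         # truncated-binary threshold
    if r < cutoff:
        bits += format(r, "b").zfill(b - 1) if b > 1 else ""
    else:
        bits += format(r + cutoff, "b").zfill(b)
    return bits

def runs_of_zeros(bits):
    """Lengths of runs of the more probable symbol '0', each terminated by a '1'."""
    runs, count = [], 0
    for bit in bits:
        if bit == 0:
            count += 1
        else:
            runs.append(count)
            count = 0
    return runs

source = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1]
code = "".join(golomb_encode_run(r, m=4) for r in runs_of_zeros(source))
print(code)     # run lengths 4, 2, 7 encoded with Golomb parameter m = 4
```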

Journal ArticleDOI
TL;DR: New short constraint length convolutional code constructions are tabulated, determined by iterative search based upon a criterion of optimizing the free distance profile, maximizing the free distance d_{f} while minimizing the number of adversaries in the distance, or weight, spectrum.
Abstract: New short constraint length convolutional code constructions are tabulated for rates R=(n-k)/n, k=1,2, \cdots ,n-1 with n=2, 3,\cdots ,8 , and for constraint lengths K=3,4, \cdots,8 . These codes have been determined by iterative search based upon a criterion of optimizing the free distance profile. Specifically, these codes maximize the free distance d_{f} while minimizing the number of adversaries in the distance, or weight, spectrum. In several instances we demonstrate the superiority of these codes over previously published code constructions at the same rate and constraint length. These codes are expected to have a number of applications, including combined source-channel coding schemes as well as coding for burst or impulsive noise channels.
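
The free distance being maximized can be computed directly for small constraint lengths by searching over trellis paths that diverge from and later re-merge with the all-zero state. A sketch for the classic rate-1/2, K = 3 code with generators (7, 5) in octal (a standard textbook code, not one of the new constructions tabulated in the paper), whose free distance is 5:

```python
import heapq

def conv_outputs(state, bit, gens, K):
    """Output bits of a rate-1/len(gens) feedforward encoder for one input bit."""
    reg = (bit << (K - 1)) | state                  # shift register, newest bit first
    out = [bin(reg & g).count("1") % 2 for g in gens]   # modulo-2 taps per generator
    return reg >> 1, out                            # new state drops the oldest bit

def free_distance(gens, K):
    """Minimum Hamming weight over all paths that leave and re-enter the zero state
    (Dijkstra over encoder states, branch weight = output Hamming weight)."""
    s0, out0 = conv_outputs(0, 1, gens, K)          # first '1' input leaves state 0
    heap = [(sum(out0), s0)]
    best = {s0: sum(out0)}
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w                                # re-merged with the zero state
        if w > best.get(s, float("inf")):
            continue
        for bit in (0, 1):
            ns, out = conv_outputs(s, bit, gens, K)
            nw = w + sum(out)
            if nw < best.get(ns, float("inf")):
                best[ns] = nw
                heapq.heappush(heap, (nw, ns))

print(free_distance(gens=(0b111, 0b101), K=3))      # the (7,5) octal code: prints 5
```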

Journal ArticleDOI
TL;DR: The problem of converting decision tables to decision trees is treated with a heuristic, information-theoretic approach that has low design complexity and yet provides near-optimal decision trees.
Abstract: The problem of conversion of decision tables to decision trees is treated. In most cases, the construction of optimal decision trees is an NP-complete problem and, therefore, a heuristic approach to this problem is necessary. In this heuristic approach, an application of information theoretic concepts to construct efficient decision trees for decision tables which may include "don't care" entries is made. In contrast to most of the existing heuristic algorithms, this algorithm is systematic and is intuitively appealing from an information theoretic standpoint. The algorithm has low design complexity and yet provides near-optimal decision trees.
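
The core of such an information-theoretic heuristic is to expand, at every node, the condition column that yields the largest reduction in entropy of the action variable. A minimal sketch of that selection rule (my own illustration; the paper's algorithm additionally handles "don't care" entries):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction obtained by splitting the table on one condition column."""
    total = entropy(labels)
    remainder = 0.0
    for v in set(row[attribute] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attribute] == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return total - remainder

# A toy decision table: condition columns (weekend?, raining?) and an action label.
rows = [(1, 0), (1, 1), (0, 0), (0, 1), (1, 0), (0, 0)]
actions = ["go out", "stay in", "stay in", "stay in", "go out", "stay in"]

gains = {a: information_gain(rows, actions, a) for a in range(2)}
print(gains)        # the root test is the condition with the largest gain (column 0 here)
```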