Showing papers in "IEEE Transactions on Information Theory in 1997"


Journal ArticleDOI
TL;DR: The Shannon capacity of a fading channel with channel side information at the transmitter and receiver, and at the receiver alone is obtained, analogous to water-pouring in frequency for time-invariant frequency-selective fading channels.
Abstract: We obtain the Shannon capacity of a fading channel with channel side information at the transmitter and receiver, and at the receiver alone. The optimal power adaptation in the former case is "water-pouring" in time, analogous to water-pouring in frequency for time-invariant frequency-selective fading channels. Inverting the channel results in a large capacity penalty in severe fading.
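
The optimal power adaptation described here is the classic "water-pouring in time" rule: transmit with power proportional to 1/γ₀ - 1/γ when the instantaneous SNR γ exceeds a cutoff γ₀, and stay silent otherwise. The sketch below uses hypothetical discrete fading states, probabilities, and helper names (it is not code from the paper); it finds the cutoff by bisection and evaluates the resulting rate:

```python
import numpy as np

def water_filling(gammas, probs, p_avg=1.0):
    """Water-pouring in time: find the cutoff g0 with E[(1/g0 - 1/gamma)^+] = p_avg
    by bisection, and return the per-state power allocation P(gamma)/P_avg."""
    lo, hi = 1e-12, float(gammas.max()) * 10.0
    for _ in range(200):                        # bisection on the cutoff g0
        g0 = 0.5 * (lo + hi)
        power = np.maximum(1.0 / g0 - 1.0 / gammas, 0.0)
        if np.dot(probs, power) > p_avg:
            lo = g0                             # spending too much power -> raise the cutoff
        else:
            hi = g0
    return power, g0

# hypothetical fading states (instantaneous SNR at average power) and their probabilities
gammas = np.array([0.2, 1.0, 4.0])
probs  = np.array([0.3, 0.4, 0.3])
power, g0 = water_filling(gammas, probs)
capacity = np.dot(probs, np.log2(1.0 + gammas * power))     # bits per channel use
print(power, g0, capacity)
```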

2,163 citations


Journal ArticleDOI
TL;DR: It is proved in this two-user case that the probability of error of the MMSE detector is better than that of the decorrelating linear detector for all values of normalized crosscorrelations not greater than (1/2)√(2+√3) ≈ 0.9659.
Abstract: The performance analysis of the minimum-mean-square-error (MMSE) linear multiuser detector is considered in an environment of nonorthogonal signaling and additive white Gaussian noise. In particular, the behavior of the multiple-access interference (MAI) at the output of the MMSE detector is examined under various asymptotic conditions, including: large signal-to-noise ratio; large near-far ratios; and large numbers of users. These results suggest that the MAI-plus-noise contending with the demodulation of a desired user is approximately Gaussian in many cases of interest. For the particular case of two users, it is shown that the maximum divergence between the output MAI-plus-noise and a Gaussian distribution having the same mean and variance is quite small in most cases of interest. It is further proved in this two-user case that the probability of error of the MMSE detector is better than that of the decorrelating linear detector for all values of normalized crosscorrelations not greater than (1/2)√(2+√3) ≈ 0.9659.
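
A quick arithmetic check of the stated crosscorrelation threshold (purely illustrative):

```python
import math

# (1/2) * sqrt(2 + sqrt(3)) as quoted in the abstract
threshold = 0.5 * math.sqrt(2.0 + math.sqrt(3.0))
print(round(threshold, 4))   # 0.9659
```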

890 citations


Journal ArticleDOI
TL;DR: It is shown that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding decision problem is NP-complete.
Abstract: It is shown that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding decision problem is NP-complete. This result constitutes a proof of the conjecture of Berlekamp, McEliece, and van Tilborg (1978). Extensions and applications of this result to other problems in coding theory are discussed.
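
The hardness result concerns the following computation; a brute-force reference implementation (hypothetical helper name, small textbook example) makes the exponential 2^k search space explicit:

```python
import itertools
import numpy as np

def min_distance(G):
    """Exhaustive minimum-distance computation for a binary linear code with
    generator matrix G (k x n).  The cost grows as 2^k, which is consistent with
    the NP-hardness result: no general polynomial-time algorithm is expected."""
    k, n = G.shape
    best = n
    for m in itertools.product([0, 1], repeat=k):
        if not any(m):
            continue                              # skip the all-zero message
        cw = np.mod(np.array(m) @ G, 2)
        best = min(best, int(cw.sum()))
    return best

# illustrative small case: the [7,4] Hamming code, whose minimum distance is 3
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
print(min_distance(G))   # -> 3
```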

451 citations


Journal ArticleDOI
TL;DR: A unified framework is presented in order to build lattice constellations matched to both the Rayleigh fading channel and the Gaussian channel, which generalizes the structural construction proposed by Forney (1991).
Abstract: A unified framework is presented in order to build lattice constellations matched to both the Rayleigh fading channel and the Gaussian channel. The method encompasses the situations where the interleaving is done on the real components or on two-dimensional signals. In the latter case, a simple construction of lattices congruent to the densest binary lattices with respect to the Euclidean distance is proposed. It generalizes, in a sense to be clarified later, the structural construction proposed by Forney (1991). These constellations are next combined with coset codes. The partitioning rules and the gain formula are similar to those used for the Gaussian channel.

338 citations


Journal ArticleDOI
TL;DR: It is found, surprisingly, that fading may enhance performance in terms of Shannon theoretic achievable rates and the effect of a random number of users per cell is investigated and it is demonstrated that randomization is beneficial.
Abstract: For pt.I see ibid., vol.43, no.6, p.1877-94 (1997). A simple idealized linear (and planar) uplink, cellular, multiple-access communication model, where only adjacent-cell interference is present and all signals may experience fading, is considered. Shannon-theoretic arguments are invoked to gain insight into the implications of the main system parameters and multiple-access techniques on performance. The model treated in Part I (Shamai, 1997) is extended here to account for cell-site receivers that may also process the received signal at an adjacent cell site, thus compromising between the advantage of incorporating additional information from other cell sites on one hand and the associated excess processing complexity on the other. Various settings which include fading, time-division multiple access (TDMA), wideband (WB), and (optimized) fractional inter-cell time sharing (ICTS) protocols are investigated and compared. For the WB approach with a large number of users per cell, it is found, surprisingly, that fading may enhance performance in terms of Shannon-theoretic achievable rates. The linear model is extended to account for general linear and planar configurations. The effect of a random number of users per cell is investigated and it is demonstrated that randomization is beneficial. Certain aspects of diversity, as well as some features of TDMA and orthogonal code-division multiple access (CDMA) techniques in the presence of fading, are studied in an isolated-cell scenario.

337 citations


Journal ArticleDOI
TL;DR: The relation between the combinatorial packing of solid bodies and the information-theoretic "soft packing" with arbitrarily small, but positive, overlap is illuminated and the "soft-packing" results are new.
Abstract: General random coding theorems for lattices are derived from the Minkowski-Hlawka theorem and their close relation to standard averaging arguments for linear codes over finite fields is pointed out. A new version of the Minkowski-Hlawka theorem itself is obtained as the limit, for p → ∞, of a simple lemma for linear codes over GF(p) used with p-level amplitude modulation. The relation between the combinatorial packing of solid bodies and the information-theoretic "soft packing" with arbitrarily small, but positive, overlap is illuminated. The "soft-packing" results are new. When specialized to the additive white Gaussian noise channel, they reduce to (a version of) the de Buda-Poltyrev result that spherically shaped lattice codes and a decoder that is unaware of the shaping can achieve the rate (1/2) log₂(P/N).

331 citations


Journal ArticleDOI
TL;DR: The problem of blind identification of p-inputs/q-outputs FIR transfer functions is addressed and the extension of the subspace method to the case p>1 is discussed.
Abstract: The problem of blind identification of p-inputs/q-outputs FIR transfer functions is addressed. Existing subspace identification methods derived for p=1 are first reformulated. In particular, the links between the noise subspace of a certain covariance matrix of the output signals (on which subspace methods build) and certain rational subspaces associated with the transfer function to be identified are elucidated. Based on these relations, we study the behavior of the subspace method in the case where the order of the transfer function is overestimated. Next, an asymptotic performance analysis of this estimation method is carried out. Consistency and asymptotic normality of the estimates are established. A closed-form expression for the asymptotic covariance of the estimates is given. Numerical simulations and investigations are presented to demonstrate the potential of the subspace method. Finally, we take advantage of our new reformulation to discuss the extension of the subspace method to the case p>1. We show where the difficulties lie, and we briefly indicate how to solve the corresponding problems. The possible connections with classical approaches for MA model estimation are also outlined.

295 citations


Journal ArticleDOI
TL;DR: A simple sufficient condition is given for such a class of maps to produce a fair Bernoulli sequence, that is, a sequence of independent and identically distributed (i.i.d.) binary random variables.
Abstract: Statistical properties of binary sequences generated by a class of ergodic maps with some symmetric properties are discussed on the basis of an ensemble-average technique. We give a simple sufficient condition for such a class of maps to produce a fair Bernoulli sequence, that is, a sequence of independent and identically distributed (i.i.d.) binary random variables. This condition is expressed in terms of a binary function, which is a generalized version of the Rademacher function for the dyadic map.
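
As a concrete instance of the setup, the dyadic map with the Rademacher-type threshold function is the textbook example mentioned at the end of the abstract. A small simulation sketch (illustrative only; finite floating-point precision limits how many iterations are meaningful):

```python
import numpy as np

def dyadic_map_bits(x0, n):
    """Iterate the dyadic map x -> 2x mod 1 and emit one bit per step via the
    Rademacher-type function B(x) = 0 if x < 1/2 else 1.  For a Lebesgue-random
    seed this reproduces the seed's binary digits, i.e. i.i.d. fair bits
    (the classical Bernoulli-shift picture)."""
    bits, x = [], x0
    for _ in range(n):
        bits.append(0 if x < 0.5 else 1)
        x = (2.0 * x) % 1.0
    return bits

rng = np.random.default_rng(0)
print(dyadic_map_bits(rng.random(), 32))   # keep n well below the 53-bit float mantissa
```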

281 citations


Journal ArticleDOI
TL;DR: For four random variables, this work has discovered a conditional inequality which is not implied by the basic information inequalities of the same set of random variables.
Abstract: Given n discrete random variables Ω = {X₁,...,Xₙ}, associated with any subset α of {1,2,...,n} there is a joint entropy H(X_α), where X_α = {X_i : i ∈ α}. This can be viewed as a function defined on 2^{1,2,...,n} taking values in [0, +∞). We call this function the entropy function of Ω. The nonnegativity of the joint entropies implies that this function is nonnegative; the nonnegativity of the conditional joint entropies implies that this function is nondecreasing; and the nonnegativity of the conditional mutual information implies that this function is two-alternative. These properties are the so-called basic information inequalities of Shannon's information measures. An entropy function can be viewed as a (2^n - 1)-dimensional vector whose coordinates are indexed by the subsets of the ground set {1,2,...,n}. As introduced by Yeung (see ibid., vol.43, no.6, p.1923-34, 1997), Γₙ stands for the cone in ℝ^(2^n - 1) consisting of all vectors which have all these properties. Let Γₙ* be the set of all (2^n - 1)-dimensional vectors which correspond to the entropy functions of some sets of n discrete random variables. A fundamental information-theoretic problem is whether or not cl(Γₙ*) = Γₙ, where cl(Γₙ*) stands for the closure of the set Γₙ*. We show that cl(Γₙ*) is a convex cone, Γ₂* = Γ₂, Γ₃* ≠ Γ₃, but cl(Γ₃*) = Γ₃. For four random variables, we have discovered a conditional inequality which is not implied by the basic information inequalities of the same set of random variables. This lends evidence to the plausible conjecture that cl(Γₙ*) ≠ Γₙ for n > 3.
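
The three basic properties of an entropy function (nonnegativity, monotonicity, and submodularity, i.e. nonnegative conditional mutual information) can be checked numerically for any concrete joint distribution. A small sketch for n = 3 binary variables (hypothetical helper names; such a check illustrates membership in Γₙ but of course cannot settle the Γₙ* vs Γₙ question):

```python
import itertools
import numpy as np

def joint_entropy(p, alpha):
    """H(X_alpha) in bits for a joint pmf p over n binary variables (an n-dim array)."""
    drop = tuple(i for i in range(p.ndim) if i not in alpha)
    marg = np.asarray(p.sum(axis=drop) if drop else p).ravel()
    marg = marg[marg > 0]
    return float(-(marg * np.log2(marg)).sum())

rng = np.random.default_rng(1)
p = rng.random((2, 2, 2)); p /= p.sum()          # a random joint pmf on n = 3 bits

n = 3
H = {frozenset(a): joint_entropy(p, a)
     for r in range(n + 1) for a in itertools.combinations(range(n), r)}
sets = list(H)

# nondecreasing: H(alpha) <= H(beta) whenever alpha is a subset of beta
ok_mono = all(H[a] <= H[b] + 1e-12 for a in sets for b in sets if a <= b)
# submodular (conditional mutual information >= 0): H(aUb) + H(a^b) <= H(a) + H(b)
ok_sub = all(H[a | b] + H[a & b] <= H[a] + H[b] + 1e-12 for a in sets for b in sets)
print(ok_mono, ok_sub)                            # True True for any genuine entropy function
```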

223 citations


Journal ArticleDOI
TL;DR: The Bayesian Ziv-Zakai bound on the mean square error (MSE) in estimating a uniformly distributed continuous random variable is extended for arbitrarily distributed continuousrandom vectors and for distortion functions other than MSE.
Abstract: The Bayesian Ziv-Zakai bound on the mean square error (MSE) in estimating a uniformly distributed continuous random variable is extended for arbitrarily distributed continuous random vectors and for distortion functions other than MSE. The extended bound is evaluated for some representative problems in time-delay and bearing estimation. The resulting bounds have simple closed-form expressions, and closely predict the simulated performance of the maximum-likelihood estimator in all regions of operation.

214 citations


Journal ArticleDOI
TL;DR: Most identities and inequalities involving a definite number of random variables can now be verified by a software package called ITIP, which is available on the World Wide Web; this work also suggests the possibility of the existence of information inequalities which cannot be proved by conventional techniques.
Abstract: We present a framework for information inequalities, namely, inequalities involving only Shannon's information measures, for discrete random variables. A region in ℝ^(2^n - 1), denoted by Γ*, is identified to be the origin of all information inequalities involving n random variables in the sense that all such inequalities are partial characterizations of Γ*. A product of this framework is a simple calculus for verifying all unconstrained and constrained linear information identities and inequalities which can be proved by conventional techniques. These include all information identities and inequalities of such types in the literature. As a consequence of this work, most identities and inequalities involving a definite number of random variables can now be verified by a software package called ITIP which is available on the World Wide Web. Our work suggests the possibility of the existence of information inequalities which cannot be proved by conventional techniques. We also point out the relation between Γ* and some important problems in probability theory and information theory.

Journal ArticleDOI
TL;DR: This paper addresses the problem of characterizing and designing scheduling policies that are optimal in the sense of minimizing buffer and/or delay requirements under the assumption of commonly accepted traffic constraints, and investigates buffer requirements under three typical memory allocation mechanisms which represent tradeoffs between efficiency and complexity.
Abstract: This paper is motivated by the need to provide per-session quality of service guarantees in fast packet-switched networks. We address the problem of characterizing and designing scheduling policies that are optimal in the sense of minimizing buffer and/or delay requirements under the assumption of commonly accepted traffic constraints. We investigate buffer requirements under three typical memory allocation mechanisms which represent tradeoffs between efficiency and complexity. For traffic with delay constraints we provide policies that are optimal in the sense of satisfying the constraints if they are satisfiable by any policy. We also investigate the tradeoff between delay and buffer optimality, and design policies that are "good" (optimal or close to) for both. Finally, we extend our results to the case of "soft" delay constraints and address the issue of designing policies that satisfy such constraints in a fair manner. Given our focus on packet switching, we mainly concern ourselves with nonpreemptive policies, but one class of nonpreemptive policies which we consider is based on tracking preemptive policies. This class is introduced and may be of interest in other applications as well.

Journal ArticleDOI
TL;DR: A transmission scheduling policy is proposed that utilizes current topology state information and achieves all throughput vectors achievable by any anticipative policy.
Abstract: A communication network with time-varying topology is considered. The network consists of M receivers and N transmitters that, in principle, may access every receiver. An underlying network state process with Markovian statistics is considered that reflects the physical characteristics of the network affecting the link service capacity. The transmissions are scheduled dynamically, based on information about the link capacities and the backlog in the network. The region of achievable throughputs is characterized. A transmission scheduling policy is proposed that utilizes current topology state information and achieves all throughput vectors achievable by any anticipative policy. The changing topology model applies to networks of low-Earth orbit (LEO) satellites, meteor-burst communication networks, and networks with mobile users.

Journal ArticleDOI
TL;DR: In this correspondence, the first extremal Type I [86,43,16] code and new extremal self-dual codes with weight enumerators which were not previously known to exist for lengths 40, 50, 52, and 54 are constructed.
Abstract: In this correspondence, we investigate binary extremal self-dual codes. Numerous extremal self-dual codes and interesting self-dual codes with minimum weight d=14 and 16 are constructed. In particular, the first extremal Type I [86,43,16] code and new extremal self-dual codes with weight enumerators which were not previously known to exist for lengths 40, 50, 52, and 54 are constructed. We also determine the possible weight enumerators for extremal Type I codes of lengths 66-100.

Journal ArticleDOI
TL;DR: An efficient and very simple algorithm based on the successive refinement of partitions of the unit interval (0, 1), which is called the interval algorithm, is proposed and a fairly tight evaluation on the efficiency is given.
Abstract: The problem of generating a random number with an arbitrary probability distribution by using a general biased M-coin is studied. An efficient and very simple algorithm based on the successive refinement of partitions of the unit interval (0, 1), which we call the interval algorithm, is proposed. A fairly tight evaluation on the efficiency is given. Generalizations of the interval algorithm to the following cases are investigated: (1) output sequence is independent and identically distributed (i.i.d.); (2) output sequence is Markov; (3) input sequence is Markov; (4) input sequence and output sequence are both subject to arbitrary stochastic processes.
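
A sketch in the spirit of the interval algorithm described above: the unit interval is partitioned according to the target distribution, and each biased coin symbol refines the current interval proportionally to the coin probabilities until it falls inside a single target cell. Function and variable names are illustrative, not the paper's pseudocode:

```python
import random

def interval_algorithm(target_probs, coin_probs, rng=random):
    """Generate one symbol with distribution `target_probs` from i.i.d. rolls of a
    biased M-coin with distribution `coin_probs` (a sketch of the interval-refinement
    idea; the paper also treats Markov inputs/outputs and efficiency bounds)."""
    t_cum = [0.0]                                  # cumulative target partition of [0, 1)
    for q in target_probs:
        t_cum.append(t_cum[-1] + q)
    lo, hi = 0.0, 1.0
    while True:
        for j in range(len(target_probs)):         # stop once inside one target cell
            if t_cum[j] <= lo and hi <= t_cum[j + 1]:
                return j
        # otherwise one biased coin roll selects a sub-interval whose relative
        # width equals that coin symbol's probability
        u, acc, width = rng.random(), 0.0, hi - lo
        for i, p in enumerate(coin_probs):
            if u < acc + p or i == len(coin_probs) - 1:
                lo, hi = lo + width * acc, lo + width * (acc + p)
                break
            acc += p

# example: a fair three-sided die from a coin with P(heads) = 0.7
samples = [interval_algorithm([1/3, 1/3, 1/3], [0.7, 0.3]) for _ in range(20000)]
print([round(samples.count(j) / len(samples), 3) for j in range(3)])
```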

Journal ArticleDOI
TL;DR: A Barankin-type bound is presented which is useful in problems where there is a prior knowledge on some of the parameters to be estimated and which provides bounds on the covariance of any unbiased estimators of the nonrandom parameters and an estimator of the random parameters, simultaneously.
Abstract: The Barankin (1949) bound is a realizable lower bound on the mean-square error (MSE) of any unbiased estimator of a (nonrandom) parameter vector. We present a Barankin-type bound which is useful in problems where there is prior knowledge on some of the parameters to be estimated. That is, the parameter vector is a hybrid vector in the sense that some of its entries are deterministic while others are random variables. We present a simple expression for a positive-definite matrix which provides bounds on the covariance of any unbiased estimator of the nonrandom parameters and an estimator of the random parameters, simultaneously. We show that the Barankin bound for deterministic parameter estimation and the Bobrovsky-Zakai (1976) bound for random parameter estimation are special cases of our proposed bound.

Journal ArticleDOI
Kornelis Antonie Immink
TL;DR: A new coding technique is proposed that translates user information into a constrained sequence using very long codewords; a storage-effective enumerative encoding scheme translates user data into long dk sequences and vice versa, and estimates are given of the relationship between coding efficiency and encoder and decoder complexity.
Abstract: A new coding technique is proposed that translates user information into a constrained sequence using very long codewords. Huge error propagation resulting from the use of long codewords is avoided by reversing the conventional hierarchy of the error control code and the constrained code. The new technique is exemplified by focusing on (d, k)-constrained codes. A storage-effective enumerative encoding scheme is proposed for translating user data into long dk sequences and vice versa. For dk runlength-limited codes, estimates are given of the relationship between coding efficiency and encoder and decoder complexity. We show that for most common d, k values, a code rate less than 0.5% below channel capacity can be obtained by using hardware mainly consisting of a ROM lookup table of size 1 kbyte. For selected values of d and k, the size of the lookup table is much smaller. The paper concludes with an illustrative numerical example of a rate 256/466, (d=2, k=15) code, which provides a serviceable 10% increase in rate with respect to its traditional rate 1/2, (2, 7) counterpart.
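
For context on the quoted numbers, the capacity of a (d, k) runlength-limited constraint can be computed from the largest eigenvalue of its state-transition matrix. The sketch below (hypothetical helper name, standard textbook construction rather than the paper's enumerative scheme) compares that capacity with the rate 256/466 of the example code:

```python
import numpy as np

def rll_capacity(d, k):
    """Capacity in bits/symbol of the (d, k) runlength constraint: log2 of the
    largest eigenvalue of the adjacency matrix of the constraint graph, where the
    state is the current run of zeros since the last 1."""
    A = np.zeros((k + 1, k + 1))
    for s in range(k + 1):
        if s < k:
            A[s, s + 1] = 1.0      # emit a 0: the zero run grows
        if s >= d:
            A[s, 0] = 1.0          # emit a 1: allowed only after at least d zeros
    return float(np.log2(max(abs(np.linalg.eigvals(A)))))

cap, rate = rll_capacity(2, 15), 256 / 466
print(cap, rate, (cap - rate) / cap)   # capacity, example code rate, relative gap
```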

Journal ArticleDOI
TL;DR: If one could find an efficient (i.e., polynomial-time) algorithm for code equivalence, then one could settle the long-standing problem of determining whether there is an efficient algorithm for solving graph isomorphism.
Abstract: We study the computational difficulty of deciding whether two matrices generate equivalent linear codes, i.e., codes that consist of the same codewords up to a fixed permutation on the codeword coordinates. We call this problem code equivalence. Using techniques from the area of interactive proofs, we show on the one hand, that under the assumption that the polynomial-time hierarchy does not collapse, code equivalence is not NP-complete. On the other hand, we present a polynomial-time reduction from the graph isomorphism problem to code equivalence. Thus if one could find an efficient (i.e., polynomial-time) algorithm for code equivalence, then one could settle the long-standing problem of determining whether there is an efficient algorithm for solving graph isomorphism.

Journal ArticleDOI
TL;DR: It is shown that Csiszar and Korner's (1978) characterization of a discrete memoryless channel (DMC) X → Y as being less noisy than the DMC X → Z is equivalent to the condition that the mutual-information difference I(X;Y) - I(X;Z) be a convex-∩ function of the probability distribution for X.
Abstract: It is shown that Csiszar and Korner's (1978) characterization of a discrete memoryless channel (DMC) X → Y as being less noisy than the DMC X → Z is equivalent to the condition that the mutual-information difference I(X;Y) - I(X;Z) be a convex-∩ function of the probability distribution for X. This result is used to obtain a simple determination of the capacity region of the broadcast channel with confidential messages (BCC), which is a DMC X → (Y,Z), when the DMC X → Y to the legitimate receiver is less noisy than the DMC X → Z to the enemy cryptanalyst and there is a probability distribution for X having strictly positive components that achieves capacity on both these channels. In particular, when these DMC's are both symmetric, then the secrecy capacity of the BCC is the difference of their capacities. It is shown further that the less-noisy condition in this result cannot be weakened to the condition that the DMC X → Y be more capable than the DMC X → Z in the sense of Csiszar and Korner.
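
A worked instance of the symmetric-channel statement: if the legitimate channel is a BSC with crossover 0.05 and the cryptanalyst's channel is the noisier BSC with crossover 0.20 (a degraded, hence less noisy, situation), the secrecy capacity is the difference of the two capacities. The parameter values are illustrative, not from the paper:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1.0 - h2(p)

p_main, p_tap = 0.05, 0.20
Cs = bsc_capacity(p_main) - bsc_capacity(p_tap)   # difference of the two symmetric capacities
print(Cs)                                         # equals h2(0.20) - h2(0.05)
```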

Journal ArticleDOI
TL;DR: A single-letter characterization of the coding rate region is obtained, it is shown that coding by superposition is optimal for this problem, and a tight lower bound on the coding rate sum is derived.
Abstract: Multilevel diversity coding was introduced in recent work by Roche (1992) and Yeung (1995). In a multilevel diversity coding system, an information source is encoded by a number of encoders. There is a set of decoders, partitioned into multiple levels, with each decoder having access to a certain subset of the encoders. The reconstructions of the source by decoders within the same level are identical and are subject to the same distortion criterion. Inspired by applications in computer communication and fault-tolerant data retrieval, we study a multilevel diversity coding problem with three levels for which the connectivity between the encoders and decoders is symmetrical. We obtain a single-letter characterization of the coding rate region and show that coding by superposition is optimal for this problem. Generalizing to a symmetrical problem with an arbitrary number of levels, we derive a tight lower bound on the coding rate sum.

Journal ArticleDOI
TL;DR: For stationary mixing sequences, the problem investigated by Steinberg and Gutman is settled in the negative by showing that a lossy extension of the Wyner-Ziv (1989) scheme cannot be optimal, and the asymptotic behavior of the so-called approximate waiting time N_l is established.
Abstract: A practical suboptimal (variable source coding) algorithm for lossy data compression is presented. This scheme is based on approximate string matching, and it naturally extends the lossless Lempel-Ziv (1977) data compression scheme. Among others we consider the typical length of an approximately repeated pattern within the first n positions of a stationary mixing sequence where D percent of mismatches is allowed. We prove that there exists a constant r₀(D) such that the length of such an approximately repeated pattern converges in probability to (1/r₀(D)) log n, but it almost surely oscillates between (1/r₋∞(D)) log n and (2/r₁(D)) log n, where r₋∞(D) > r₀(D) > r₁(D)/2 are some constants. These constants are natural generalizations of Renyi entropies to the lossy environment. More importantly, we show that the compression ratio of a lossy data compression scheme based on such an approximate pattern matching is asymptotically equal to r₀(D). We also establish the asymptotic behavior of the so-called approximate waiting time N_l, which is defined as the time until a pattern of length l repeats approximately for the first time. We prove that (log N_l)/l → r₀(D) in probability as l → ∞. In general, r₀(D) > R(D), where R(D) is the rate distortion function. Thus for stationary mixing sequences we settle in the negative the problem investigated by Steinberg and Gutman by showing that a lossy extension of the Wyner-Ziv (1989) scheme cannot be optimal.

Journal ArticleDOI
TL;DR: It is shown in this paper that the minimax redundancy minus ((k-1)/2) log(n/(2πe)) converges to log ∫ √(det I(θ)) dθ = log(Γ(1/2)^k / Γ(k/2)), where I(θ) is the Fisher information and the integral is over the whole probability simplex.
Abstract: Let X^n = (X₁,...,Xₙ) be a memoryless source with unknown distribution on a finite alphabet of size k. We identify the asymptotic minimax coding redundancy for this class of sources, and provide a sequence of asymptotically minimax codes. Equivalently, we determine the limiting behavior of the minimax relative entropy min over Q_{X^n} and max over P_{X^n} of D(P_{X^n} ‖ Q_{X^n}), where the maximum is over all independent and identically distributed (i.i.d.) source distributions and the minimum is over all joint distributions. We show in this paper that the minimax redundancy minus ((k-1)/2) log(n/(2πe)) converges to log ∫ √(det I(θ)) dθ = log(Γ(1/2)^k / Γ(k/2)), where I(θ) is the Fisher information and the integral is over the whole probability simplex. The Bayes strategy using Jeffreys' prior is shown to be asymptotically maximin but not asymptotically minimax in our setting. The boundary risk using Jeffreys' prior is higher than that of interior points. We provide a sequence of modifications of Jeffreys' prior that put some prior mass near the boundaries of the probability simplex to pull down that risk to the asymptotic minimax level in the limit.
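
The limiting constant has the closed form log(Γ(1/2)^k / Γ(k/2)): it is the log of the Dirichlet(1/2,...,1/2) normalizing constant, since for a k-ary i.i.d. source √(det I(θ)) = (θ₁···θ_k)^(-1/2) under the usual simplex parametrization. A small evaluation sketch (alphabet sizes chosen arbitrarily; logarithms in nats):

```python
from math import gamma, log, pi

def minimax_constant(k):
    """log( Gamma(1/2)^k / Gamma(k/2) ), the integral of sqrt(det I(theta)) over the
    probability simplex for an i.i.d. source on a k-letter alphabet, in nats."""
    return k * log(gamma(0.5)) - log(gamma(k / 2.0))

for k in (2, 3, 4, 8):
    print(k, minimax_constant(k))

# sanity check: for a binary alphabet the constant is log(pi),
# since Gamma(1/2)^2 / Gamma(1) = pi
assert abs(minimax_constant(2) - log(pi)) < 1e-12
```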

Journal ArticleDOI
TL;DR: The lower bound on Rₙ(d) derived in this paper gives a positive answer to a conjecture proposed by Yu and Speed (1993) about redundancy of source coding with respect to a fidelity criterion.
Abstract: The problem of redundancy of source coding with respect to a fidelity criterion is considered. For any fixed rate R>0 and any memoryless source with finite source and reproduction alphabets and a common distribution p, the nth-order distortion redundancy Dₙ(R) of fixed-rate coding is defined as the minimum of the difference between the expected distortion per symbol of any block code with length n and rate R and the distortion-rate function d(p,R) of the source p. It is demonstrated that for sufficiently large n, Dₙ(R) is equal to -(∂/∂R)d(p,R) ln n/(2n) + o(ln n/n), where (∂/∂R)d(p,R) is the partial derivative of d(p,R) evaluated at R and assumed to exist. For any fixed distortion level d>0 and any memoryless source p, the nth-order rate redundancy Rₙ(d) of coding at fixed distortion level d (or by using d-semifaithful codes) is defined as the minimum of the difference between the expected rate per symbol of any d-semifaithful code of length n and the rate-distortion function R(p,d) of p evaluated at d. It is proved that for sufficiently large n, Rₙ(d) is upper-bounded by ln n/n + o(ln n/n) and lower-bounded by ln n/(2n) + o(ln n/n). As a by-product, the lower bound on Rₙ(d) derived in this paper gives a positive answer to a conjecture proposed by Yu and Speed (1993).

Journal ArticleDOI
TL;DR: This correspondence studies resilient functions which have applications in fault-tolerant distributed computing, quantum cryptographic key distribution, and random sequence generation for stream ciphers and presents a number of new methods for synthesizing resilient functions.
Abstract: This correspondence studies resilient functions which have applications in fault-tolerant distributed computing, quantum cryptographic key distribution, and random sequence generation for stream ciphers. We present a number of new methods for synthesizing resilient functions. An interesting aspect of these methods is that they are applicable both to linear and nonlinear resilient functions. Our second major contribution is to show that every linear resilient function can be transformed into a large number of nonlinear resilient functions with the same parameters. As a result, we obtain resilient functions that are highly nonlinear and have a high algebraic degree.

Journal ArticleDOI
TL;DR: It is seen that multicarrier transmission can provide a significant improvement at low and intermediate channel signal-to-noise ratios and is demonstrated to be superior to decision feedback equalized single-carrier QAM.
Abstract: Optimization of the performance of multicarrier transmission over a linear dispersive channel is presented. The optimum data and power assignment to the subcarriers are derived for both the conventional error probability criterion, and a new criterion based on the normalized mean-square error. The assignments and algorithms hold for channels where performance is degraded by additive noise, intersymbol and interchannel interference. Lower bounds on throughput are derived and are used to compare multicarrier performance with conventional single-carrier quadrature amplitude modulation (QAM) with both linear and decision feedback equalization. It is seen that multicarrier transmission can provide a significant improvement at low and intermediate channel signal-to-noise ratios. As an example, the optimization is applied to the high-speed digital subscriber loop, and multicarrier transmission is demonstrated to be superior to decision feedback equalized single-carrier QAM.
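
One standard way to realize the kind of per-subcarrier data and power assignment discussed above is greedy bit loading: repeatedly grant one more bit to the subcarrier whose next bit costs the least additional power. The sketch below is a generic Hughes-Hartogs-style loader under an SNR-gap model, with hypothetical gains and helper names; it is not the optimization procedure of the paper:

```python
import heapq

def greedy_bit_loading(gains, power_budget, gap=1.0, max_bits=12):
    """Greedy bit loading: the power needed for b bits on subcarrier i is modeled
    as gap * (2**b - 1) / gains[i], so the incremental cost of the next bit is
    gap * 2**b / gains[i].  Bits are granted cheapest-first until the budget runs out."""
    bits, used = [0] * len(gains), 0.0
    heap = [(gap / g, i) for i, g in enumerate(gains)]    # cost of each tone's first bit
    heapq.heapify(heap)
    while heap:
        cost, i = heapq.heappop(heap)
        if used + cost > power_budget:
            break                                         # cheapest remaining bit no longer fits
        used += cost
        bits[i] += 1
        if bits[i] < max_bits:
            heapq.heappush(heap, (gap * 2**bits[i] / gains[i], i))
    return bits, used

# hypothetical 6-tone channel: strong tones get many bits, weak tones may get none
gains = [8.0, 5.0, 3.0, 1.0, 0.4, 0.1]
print(greedy_bit_loading(gains, power_budget=10.0))
```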

Journal ArticleDOI
TL;DR: It is demonstrated that for unifilar or Markov sources, the redundancy of encoding the first n letters of the source output with the Lempel-Ziv incremental parsing rule, the Welch modification, or a new variant is O(1/ln n), and the exact form of convergence is upper-bounded.
Abstract: The Lempel-Ziv codes are universal variable-to-fixed length codes that have become virtually standard in practical lossless data compression. For any given source output string from a Markov or unifilar source, we upper-bound the difference between the number of binary digits needed to encode the string and the self-information of the string. We use this result to demonstrate that for unifilar or Markov sources, the redundancy of encoding the first n letters of the source output with the Lempel-Ziv incremental parsing rule (LZ'78), the Welch modification (LZW), or a new variant is O(1/ln n), and we upper-bound the exact form of convergence. We conclude by considering the relationship between the code length and the empirical entropy associated with a string.

Journal ArticleDOI
TL;DR: The class of additive propelinear codes-the Abelian subclass of the translation-invariant propelinear codes-is studied, and a family of nonlinear binary perfect codes with a very simple construction and a very simple decoding algorithm is presented.
Abstract: A class of binary group codes is investigated. These codes are the propelinear codes, defined over the Hamming metric space F^m, F={0,1}, with a group structure. Generally, they are neither Abelian nor translation-invariant codes but they have good algebraic and combinatorial properties. Linear codes and Z₄-linear codes can be seen as a subclass of propelinear codes. It is shown here that the subclass of translation-invariant propelinear codes is of type Z₂^k₁ ⊕ Z₄^k₂ ⊕ Q₈^k₃, where Q₈ is the non-Abelian quaternion group of eight elements. Exactly, every translation-invariant propelinear code of length n can be seen as a subgroup of Z₂^k₁ ⊕ Z₄^k₂ ⊕ Q₈^k₃ with k₁ + 2k₂ + 4k₃ = n. For k₂ = k₃ = 0 we obtain linear binary codes and for k₁ = k₃ = 0 we obtain Z₄-linear codes. The class of additive propelinear codes-the Abelian subclass of the translation-invariant propelinear codes-is studied and a family of nonlinear binary perfect codes with a very simple construction and a very simple decoding algorithm is presented.

Journal ArticleDOI
TL;DR: It is shown that the minimax and maximin values of this game are always equal, and there is always a minimax strategy in the closure of the set of all Bayes strategies.
Abstract: Suppose nature picks a probability measure P_θ on a complete separable metric space X at random from a measurable set P_Θ = {P_θ : θ ∈ Θ}. Then, without knowing θ, a statistician picks a measure Q on X. Finally, the statistician suffers a loss D(P_θ ‖ Q), the relative entropy between P_θ and Q. We show that the minimax and maximin values of this game are always equal, and there is always a minimax strategy in the closure of the set of all Bayes strategies. This generalizes previous results of Gallager (1979), and Davisson and Leon-Garcia (1980).

Journal ArticleDOI
TL;DR: In this paper, it is shown that for a memoryless source the average redundancy rate attains asymptotically E[rₙ] = (A + δ(n))/log n + O(log log n / log² n), where A is an explicitly given constant that depends on source characteristics, and δ(x) is a fluctuating function with a small amplitude.
Abstract: In this paper, we settle a long-standing open problem concerning the average redundancy rₙ of the Lempel-Ziv'78 (LZ78) code. We prove that for a memoryless source the average redundancy rate attains asymptotically E[rₙ] = (A + δ(n))/log n + O(log log n / log² n), where A is an explicitly given constant that depends on source characteristics, and δ(x) is a fluctuating function with a small amplitude. We also derive the leading term for the kth moment of the number of phrases. We conclude by conjecturing a precise formula on the expected redundancy for a Markovian source. The main result of this paper is a consequence of the second-order properties of the Lempel-Ziv algorithm obtained by Jacquet and Szpankowski (1995). These findings have been established by analytical techniques of the precise analysis of algorithms. We give a brief survey of these results since they are interesting in their own right, and shed some light on the probabilistic behavior of pattern-matching-based data compression.
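
The quantity driving this redundancy is the number of phrases produced by LZ'78 incremental parsing. A minimal parsing sketch (illustrative helper name; the dictionary is kept as a plain hash map):

```python
def lz78_phrases(s):
    """LZ'78 incremental parsing: each new phrase is the shortest prefix of the
    remaining input not yet in the dictionary.  Returns (dictionary_index, next_symbol)
    pairs; their count is the phrase count that governs code length and redundancy."""
    dictionary = {"": 0}              # phrase -> index
    phrases, w = [], ""
    for c in s:
        if w + c in dictionary:
            w += c                    # keep extending the current match
        else:
            phrases.append((dictionary[w], c))
            dictionary[w + c] = len(dictionary)
            w = ""
    if w:                             # flush a possibly incomplete final phrase
        phrases.append((dictionary[w], ""))
    return phrases

parse = lz78_phrases("ababababbbaabab" * 20)
print(len(parse))                     # number of phrases for this input
```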

Journal ArticleDOI
TL;DR: Efficient code-search maximum-likelihood decoding algorithms, based on reliability information, are presented for binary linear block codes; they are applicable to codes of relatively large size.
Abstract: Efficient code-search maximum-likelihood decoding algorithms, based on reliability information, are presented for binary linear block codes. The codewords examined are obtained via encoding. The information set utilized for encoding comprises the positions of those columns of a generator matrix G of the code which, for a given received sequence, constitute the most reliable basis for the column space of G. Substantially reduced computational complexity of decoding is achieved by exploiting the ordering of the positions within this information set. The search procedures do not require memory; the codeword to be examined is constructed from the previously examined codeword according to a fixed rule. Consequently, the search algorithms are applicable to codes of relatively large size. They are also conveniently modifiable to achieve efficient nearly optimum decoding of particularly large codes.
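
A minimal sketch of the most-reliable-basis idea (order-0 re-encoding only; the paper's algorithms additionally search candidate codewords around this first estimate). The matrix, received vector, and helper names are illustrative:

```python
import numpy as np

def mrb_order0_decode(G, y):
    """Order-0 'most reliable basis' decoding sketch for a binary linear code with
    generator matrix G (k x n) and real received values y (BPSK: bit b -> (-1)**b).
    Columns of G are ranked by |y|; the k most reliable independent positions form
    the information set, and the hard decisions there are re-encoded into a codeword."""
    k, n = G.shape
    order = np.argsort(-np.abs(y))                # most reliable positions first
    M, basis, row = G.copy() % 2, [], 0
    for pos in order:                             # greedy Gaussian elimination in reliability order
        pivot = next((r for r in range(row, k) if M[r, pos]), None)
        if pivot is None:
            continue                              # column dependent on earlier basis columns
        M[[row, pivot]] = M[[pivot, row]]
        for r in range(k):
            if r != row and M[r, pos]:
                M[r] ^= M[row]
        basis.append(pos)
        row += 1
        if row == k:
            break
    # M restricted to the basis columns is now the identity, so re-encoding the hard
    # decisions on the basis positions is a sum of the corresponding rows
    hard = (y < 0).astype(int)                    # BPSK hard decisions
    codeword = np.zeros(n, dtype=int)
    for r, pos in enumerate(basis):
        if hard[pos]:
            codeword ^= M[r]
    return codeword

# tiny example with the [7,4] Hamming code and a hypothetical noisy BPSK vector
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
y = np.array([1.1, -0.9, 0.2, -1.3, 0.8, 0.1, -0.4])
print(mrb_order0_decode(G, y))
```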