
Showing papers in "IEEE Transactions on Information Theory in 1994"


Journal ArticleDOI
TL;DR: Certain notorious nonlinear binary codes contain more codewords than any known linear code and can be very simply constructed as binary images under the Gray map of linear codes over Z_4, the integers mod 4 (although this requires a slight modification of the Preparata and Goethals codes).
Abstract: Certain notorious nonlinear binary codes contain more codewords than any known linear code. These include the codes constructed by Nordstrom-Robinson (1967), Kerdock (1972), Preparata (1968), Goethals (1974), and Delsarte-Goethals (1975). It is shown here that all these codes can be very simply constructed as binary images under the Gray map of linear codes over Z_4, the integers mod 4 (although this requires a slight modification of the Preparata and Goethals codes). The construction implies that all these binary codes are distance invariant. Duality in the Z_4 domain implies that the binary images have dual weight distributions. The Kerdock and "Preparata" codes are duals over Z_4, and the Nordstrom-Robinson code is self-dual, which explains why their weight distributions are dual to each other. The Kerdock and "Preparata" codes are Z_4-analogues of first-order Reed-Muller and extended Hamming codes, respectively. All these codes are extended cyclic codes over Z_4, which greatly simplifies encoding and decoding. An algebraic hard-decision decoding algorithm is given for the "Preparata" code and a Hadamard-transform soft-decision decoding algorithm for the Kerdock code. Binary first- and second-order Reed-Muller codes are also linear over Z_4, but extended Hamming codes of length n ≥ 32 and the Golay code are not. Using Z_4-linearity, a new family of distance-regular graphs is constructed on the cosets of the "Preparata" code.
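The Gray map at the heart of this construction is small enough to write down. A minimal sketch (illustrative names, not the authors' code) of the map 0→00, 1→01, 2→11, 3→10, and of the isometry behind distance invariance: Lee weight over Z_4 equals Hamming weight of the binary image.

```python
# Gray map from Z_4 symbols to binary pairs (illustrative names).
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(word):
    """Binary image of a Z_4 word under the Gray map."""
    return tuple(b for s in word for b in GRAY[s])

def lee_weight(word):
    """Lee weight over Z_4: each symbol contributes min(s, 4 - s)."""
    return sum(min(s, 4 - s) for s in word)

def hamming_weight(bits):
    return sum(bits)

# The Gray map is an isometry: Lee weight equals Hamming weight of the image.
w = (0, 1, 2, 3)
assert lee_weight(w) == hamming_weight(gray_image(w))
```

Because the map preserves weight symbol by symbol, linearity over Z_4 translates into the distance invariance of the binary image even when that image is nonlinear.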

1,347 citations


Journal ArticleDOI
TL;DR: Simulations have demonstrated promising performance of the proposed algorithm for the blind equalization of a three-ray multipath channel, which may achieve equalization with fewer symbols than most techniques based only on higher-order statistics.
Abstract: A new blind channel identification and equalization method is proposed that exploits the cyclostationarity of oversampled communication signals to achieve identification and equalization of possibly nonminimum-phase (multipath) channels without using training signals. Unlike most adaptive blind equalization methods, for which the convergence properties are often problematic, the channel estimation algorithm proposed here is asymptotically exact. Moreover, since it is based on second-order statistics, the new approach may achieve equalization with fewer symbols than most techniques based only on higher-order statistics. Simulations have demonstrated promising performance of the proposed algorithm for the blind equalization of a three-ray multipath channel.

1,123 citations


Journal ArticleDOI
TL;DR: A formula for the capacity of arbitrary single-user channels without feedback is proved and capacity is shown to equal the supremum, over all input processes, of the input-output inf-information rate defined as the liminf in probability of the normalized information density.
Abstract: A formula for the capacity of arbitrary single-user channels without feedback (not necessarily information stable, stationary, etc.) is proved. Capacity is shown to equal the supremum, over all input processes, of the input-output inf-information rate, defined as the liminf in probability of the normalized information density. The key to this result is a new converse approach based on a simple new lower bound on the error probability of m-ary hypothesis tests among equiprobable hypotheses. A necessary and sufficient condition for the validity of the strong converse is given, as well as general expressions for ε-capacity.
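In symbols, the capacity formula described above can be written as follows (the exact typography is a reconstruction from the abstract):

```latex
C \;=\; \sup_{\mathbf{X}} \, \underline{I}(\mathbf{X};\mathbf{Y}),
\qquad
\underline{I}(\mathbf{X};\mathbf{Y})
\;=\; \operatorname*{liminf\ in\ probability}_{n\to\infty}\;
\frac{1}{n}\,\log\frac{P_{Y^n\mid X^n}(Y^n\mid X^n)}{P_{Y^n}(Y^n)} ,
```

where the logarithm is the information density normalized by the blocklength n.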

907 citations


Journal ArticleDOI
A.D. Wyner
TL;DR: Shannon-theoretic limits are obtained for a very simple cellular multiple-access system, together with a scheme that does not require joint decoding of all the users and is, in many cases, close to optimal.
Abstract: We obtain Shannon-theoretic limits for a very simple cellular multiple-access system. In our model the received signal at a given cell site is the sum of the signals transmitted from within that cell plus a factor α (0 ≤ α ≤ 1) times the sum of the signals transmitted from the adjacent cells plus ambient Gaussian noise. Although this simple model is scarcely realistic, it nevertheless has enough meat so that the results yield considerable insight into the workings of real systems. We consider both a one-dimensional linear cellular array and the familiar two-dimensional hexagonal cellular pattern. The discrete-time channel is memoryless. We assume that N contiguous cells have active transmitters in the one-dimensional case, and that N² contiguous cells have active transmitters in the two-dimensional case. There are K transmitters per cell. Most of our results are obtained for the limiting case as N → ∞. The results include the following. (1) We define C_N and Ĉ_N as the largest achievable rate per transmitter in the usual Shannon-theoretic sense in the one- and two-dimensional cases, respectively (assuming that all signals are jointly decoded). We find expressions for lim_{N→∞} C_N and lim_{N→∞} Ĉ_N. (2) As the interference parameter α increases from 0, C_N and Ĉ_N increase or decrease according to whether the signal-to-noise ratio is less than or greater than unity. (3) Optimal performance is attainable using TDMA within the cell, but using TDMA for adjacent cells is distinctly suboptimal. (4) We suggest a scheme which does not require joint decoding of all the users, and is, in many cases, close to optimal.
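For the one-dimensional array, the received-signal model sketched in the abstract can be written out explicitly (y_j denotes the signal received at cell site j and x_{j,k} the signal of transmitter k in cell j; the indexing is an illustrative reconstruction, not the paper's notation):

```latex
y_j \;=\; \sum_{k=1}^{K} x_{j,k}
\;+\; \alpha \left( \sum_{k=1}^{K} x_{j-1,k} \;+\; \sum_{k=1}^{K} x_{j+1,k} \right)
\;+\; z_j ,
\qquad 0 \le \alpha \le 1 ,
```

with z_j the ambient Gaussian noise; the two-dimensional hexagonal model replaces the two adjacent cells with the six hexagonal neighbors.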

787 citations


Journal ArticleDOI
Marcel Rupf, J.L. Massey
TL;DR: It is shown that the sum capacity of the symbol-synchronous code-division multiple-access channel with equal average-input-energy constraints is maximized precisely by those spreading sequence multisets that meet Welch's lower bound on total squared correlation.
Abstract: It is shown that the sum capacity of the symbol-synchronous code-division multiple-access channel with equal average-input-energy constraints is maximized precisely by those spreading sequence multisets that meet Welch's lower bound on total squared correlation. It is further shown that the symmetric capacity of the channel determined by these same sequence multisets is equal to the sum capacity.
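Welch's lower bound on total squared correlation (TSC) is easy to check numerically. A small sketch under the usual conventions (K unit-energy real sequences of length N with K ≥ N; TSC summed over all ordered pairs, including i = j, so the bound is K²/N). The example set is illustrative, not a construction from the paper:

```python
# Total squared correlation of a set of unit-energy real sequences,
# compared against Welch's lower bound K^2 / N.
def tsc(seqs):
    """Sum of squared inner products over all ordered pairs (i, j)."""
    return sum(sum(a * b for a, b in zip(s, t)) ** 2
               for s in seqs for t in seqs)

def welch_bound(K, N):
    return K * K / N

# Example: K = 4 sequences of length N = 2, two orthonormal vectors each
# repeated with a sign flip -- a set meeting the bound with equality.
e1, e2 = (1.0, 0.0), (0.0, 1.0)
seqs = [e1, (-1.0, 0.0), e2, (0.0, -1.0)]
assert abs(tsc(seqs) - welch_bound(4, 2)) < 1e-12
```

Sequence multisets achieving equality ("Welch-bound-equality" sets) are exactly the ones the theorem above identifies as sum-capacity optimal.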

387 citations


Journal ArticleDOI
TL;DR: The sequential testing of more than two hypotheses has important applications in direct-sequence spread spectrum signal acquisition, multiple-resolution-element radar, and other areas and it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large.
Abstract: The sequential testing of more than two hypotheses has important applications in direct-sequence spread spectrum signal acquisition, multiple-resolution-element radar, and other areas. A useful sequential test, which we term the MSPRT, is studied in this paper. The test is shown to be a generalization of the sequential probability ratio test. Under Bayesian assumptions, it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large. Bounds on error probabilities are derived, and asymptotic expressions for the stopping time and error probabilities are given. A design procedure is presented for determining the parameters of the MSPRT. Two examples involving Gaussian densities are included, and comparisons are made between simulation results and asymptotic expressions. Comparisons with Bayesian fixed sample size tests are also made, and it is found that the MSPRT requires two to three times fewer samples on average.
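The MSPRT's stopping rule is simple to sketch: update the posterior over the M hypotheses after each observation and stop the first time some posterior crosses its threshold. A minimal sketch with illustrative likelihoods, prior, and thresholds (the paper's design procedure for choosing the thresholds is not reproduced here):

```python
# MSPRT-style sequential test for M hypotheses: stop as soon as one
# posterior probability exceeds its threshold.
def msprt(samples, likelihoods, prior, thresholds):
    """likelihoods[j](x) gives the density/pmf of observation x under H_j."""
    M = len(prior)
    post = list(prior)
    for n, x in enumerate(samples, start=1):
        weights = [post[j] * likelihoods[j](x) for j in range(M)]
        total = sum(weights)
        post = [w / total for w in weights]
        for j in range(M):
            if post[j] >= thresholds[j]:
                return j, n  # accepted hypothesis and stopping time
    # If no threshold is crossed, fall back to the most likely hypothesis.
    return max(range(M), key=lambda j: post[j]), len(samples)

# Example: two hypotheses about a biased binary observation.
f = [lambda x: 0.9 if x == 1 else 0.1,
     lambda x: 0.1 if x == 1 else 0.9]
decision, n_obs = msprt([1, 1, 1], f, [0.5, 0.5], [0.99, 0.99])
assert (decision, n_obs) == (0, 3)
```

With M = 2 and matched thresholds this reduces to the classical sequential probability ratio test, which is the generalization claim made above.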

296 citations


Journal ArticleDOI
G.D. Forney, Jr.
TL;DR: This semi-tutorial paper discusses the connections between the dimension/length profile (DLP) of a linear code, which is essentially the same as its "generalized Hamming weight hierarchy", and the complexity of its minimal trellis diagram.
Abstract: This semi-tutorial paper discusses the connections between the dimension/length profile (DLP) of a linear code, which is essentially the same as its "generalized Hamming weight hierarchy", and the complexity of its minimal trellis diagram. These connections are close and deep. DLP duality is closely related to trellis duality. The DLP of a code gives tight bounds on its state and branch complexity profiles under any coordinate ordering; these bounds can often be met. A maximum distance separable (MDS) code is characterized by a certain extremal DLP, from which the main properties of MDS codes are easily derived. The simplicity and generality of these interrelationships are emphasized.

278 citations


Journal ArticleDOI
TL;DR: Considers describing a source first at a rate R_1 with distortion no larger than Δ_1, and then refining the description to a smaller distortion Δ_2.
Abstract: Let R(·) be the rate-distortion function. Assume that we want to describe a source with distortion no larger than Δ_1. From rate-distortion theory we know that we need to do so at a rate R_1 no smaller than R(Δ_1) [bits/symbol]. If it turns out that a more accurate description at distortion Δ_2, Δ_2 < Δ_1, …

275 citations


Journal ArticleDOI
G. Poltyrev
TL;DR: Bounds on the error probability of maximum likelihood decoding of a binary linear code are considered, and the author shows that the bound considered for the binary symmetric channel case coincides asymptotically with the random coding bound.
Abstract: Bounds on the error probability of maximum likelihood decoding of a binary linear code are considered. The bounds derived use the weight spectrum of the code and are tighter than the conventional union bound in the case of large noise in the channel. The bounds derived are applied to a code with an average spectrum, and the result is compared to the random coding exponent. The author shows that the bound considered for the binary symmetric channel case coincides asymptotically with the random coding bound. For the AWGN channel the author shows that Berlekamp's (1980) tangential bound can be improved, but even this improved bound does not coincide with the random coding bound, although it can be very close to it.
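For context, the conventional union bound that these results tighten can be sketched for BPSK on the AWGN channel: P_e ≤ Σ_w A_w Q(√(2 w R E_b/N_0)), where A_w is the weight spectrum. A small illustration using the standard (7,4) Hamming code spectrum (this is the baseline union bound, not the paper's improved bound):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(spectrum, rate, ebno):
    """Union bound on ML word-error probability for a binary linear code
    with BPSK on the AWGN channel; spectrum maps weight w to A_w."""
    return sum(A * Q(math.sqrt(2.0 * w * rate * ebno))
               for w, A in spectrum.items() if w > 0)

# Illustrative: the (7,4) Hamming code, weight spectrum A_3 = A_4 = 7, A_7 = 1.
hamming = {3: 7, 4: 7, 7: 1}
bound = union_bound(hamming, 4 / 7, 10 ** (6 / 10))  # Eb/N0 = 6 dB
```

At low SNR this sum can exceed 1 and become useless, which is exactly the regime where the weight-spectrum bounds above are tighter.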

255 citations


Journal ArticleDOI
TL;DR: The optimal waveform selection algorithms in the paper may be combined with conventional Kalman filtering equations to form an enhanced Kalman tracker, yielding the greatest possible improvement in tracking performance for each new transmitted pulse.
Abstract: Investigates adaptive waveform selection schemes where selection is based on overall target tracking system performance. Optimal receiver assumptions allow the inclusion of transmitted waveform specification parameters in the tracking subsystem defining equations. The authors give explicit expressions for two one-step-ahead optimization problems for a single target in white Gaussian noise when the tracker is a conventional Kalman filter. These problems may be solved to yield the most improvement possible in tracking performance for each new transmitted pulse. In cases where target motion is restricted to one dimension, closed-form solutions to the local (one-step-ahead) waveform optimization problem have been obtained. The optimal waveform selection algorithms in the paper may be included with conventional Kalman filtering equations to form an enhanced Kalman tracker. Simulation examples are presented to illustrate the potential of the waveform selection schemes for the optimal utilization of the capabilities of modern digital waveform generators, including multiple waveform classes. The extension of the basic waveform optimization scheme to more complex tracking scenarios is also discussed.

242 citations


Journal ArticleDOI
TL;DR: It is shown that for any graph G of maximum degree d, there is a perfect secret-sharing scheme for G with information rate 2/(d+1). As a corollary, the maximum information rate of secret-sharing schemes for paths on more than three vertices and for cycles on more than four vertices is shown to be 2/3.
Abstract: The paper describes a very powerful decomposition construction for perfect secret-sharing schemes. The author gives several applications of the construction and improves previous results by showing that for any graph G of maximum degree d, there is a perfect secret-sharing scheme for G with information rate 2/(d+1). As a corollary, the maximum information rate of secret-sharing schemes for paths on more than three vertices and for cycles on more than four vertices is shown to be 2/3.

Journal ArticleDOI
TL;DR: The problem of entropy-constrained multiple-description scalar quantizer design is posed as an optimization problem, necessary conditions for optimality are derived, and an iterative design algorithm is presented.
Abstract: The problem of entropy-constrained multiple-description scalar quantizer design is posed as an optimization problem, necessary conditions for optimality are derived, and an iterative design algorithm is presented. Performance results are presented for a Gaussian source, along with comparisons to the multiple-description rate distortion bound and a reference system.

Journal ArticleDOI
TL;DR: It is shown that McEliece's and Niederreiter's public-key cryptosystems are equivalent when set up for corresponding choices of parameters and a security analysis for the two systems is presented.
Abstract: It is shown that McEliece's and Niederreiter's public-key cryptosystems are equivalent when set up for corresponding choices of parameters. A security analysis for the two systems, based on this equivalence observation, is presented.

Journal ArticleDOI
TL;DR: The joint transmit-receive optimization problem for multiuser communication systems with decision feedback is investigated and it is shown that minimization of the geometric mean-squared error leads to a tractable transmitter optimization problem for general multi-input multi-output decision-feedback systems.
Abstract: The joint transmit-receive optimization problem for multiuser communication systems with decision feedback is investigated. It is shown that minimization of the geometric mean-squared error (defined as the determinant of the error covariance matrix) leads to a tractable transmitter optimization problem for general multi-input multi-output decision-feedback systems. Several computational results are included that highlight system performance for a variety of useful transmission scenarios.

Journal ArticleDOI
TL;DR: A scheme for the optimal shaping of multidimensional constellations is proposed, motivated by a type of structured vector quantizer for memoryless sources, and results in N-sphere shaping of N-dimensional cubic lattice-based constellations.
Abstract: A scheme for the optimal shaping of multidimensional constellations is proposed. This scheme is motivated by a type of structured vector quantizer for memoryless sources, and results in N-sphere shaping of N-dimensional cubic lattice-based constellations. Because N-sphere shaping is optimal in N dimensions, shaping gains higher than those of N-dimensional Voronoi constellations can be realized. While optimal shaping for a large N can realize most of the 1.53 dB total shaping gain, it has the undesirable effect of increasing the size and the peak-to-average power ratio of the constituent 2D constellation. This limits its usefulness for many real-world channels which have nonlinearities. The proposed scheme alleviates this problem by achieving optimal constellation shapes for a given limit on the constellation expansion ratio or the peak-to-average power ratio of the constituent 2D constellation. Results of Calderbank and Ozarow (1990) on nonequiprobable signaling are used to reduce the complexity of this scheme and make it independent of the data rate with essentially no effect on the shaping gain. Comparisons with Forney's (1989) trellis shaping scheme are also provided.

Journal ArticleDOI
TL;DR: Computer simulation results indicate, for some signal-to-noise ratios (SNR), that the proposed soft decoding algorithm requires less average complexity than the other two algorithms, while its performance is always superior to both.
Abstract: A new soft decoding algorithm for linear block codes is proposed. The decoding algorithm works with any algebraic decoder and its performance is strictly the same as that of maximum-likelihood decoding (MLD). Since our decoding algorithm generates sets of different candidate codewords corresponding to the received sequence, its decoding complexity depends on the received sequence. We compare our decoding algorithm with the Chase (1972) algorithm 2 and the Tanaka-Kakigahara (1983) algorithm, in which a similar method for generating candidate codewords is used. Computer simulation results indicate, for some signal-to-noise ratios (SNR), that our decoding algorithm requires less average complexity than the other two algorithms, while its performance is always superior to both.

Journal ArticleDOI
TL;DR: This paper reformulates the rate-distortion problem in terms of the optimal mapping from the unit interval with Lebesgue measure that would induce the desired reproduction probability density, and shows how the number of "symbols" grows as the system undergoes phase transitions.
Abstract: In rate-distortion theory, results are often derived and stated in terms of the optimizing density over the reproduction space. In this paper, the problem is reformulated in terms of the optimal mapping from the unit interval with Lebesgue measure that would induce the desired reproduction probability density. This results in optimality conditions that are "random relatives" of the known Lloyd (1982) optimality conditions for deterministic quantizers. The validity of the mapping approach is assured by fundamental isomorphism theorems for measure spaces. We show that for the squared error distortion, the optimal reproduction random variable is purely discrete at supercritical distortion (where the Shannon (1948) lower bound is not tight). The Gaussian source is thus the only source that produces continuous reproduction variables for the entire range of positive rate. To analyze the evolution of the optimal reproduction distribution, we use the mapping formulation and establish an analogy to statistical mechanics. The solutions are given by the distribution at isothermal statistical equilibrium, and are parameterized by the temperature in direct correspondence to the parametric solution of the variational equations in rate-distortion theory. The analysis of an annealing process shows how the number of "symbols" grows as the system undergoes phase transitions. Thus, an algorithm based on the mapping approach often needs but a few variables to find the exact solution, while the Blahut (1972) algorithm would only approach it at the limit of infinite resolution. Finally, a quick "deterministic annealing" algorithm to generate the rate-distortion curve is suggested. The resulting curve is exact as long as continuous phase transitions in the process are accurately followed.
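For contrast with the mapping approach, the Blahut (1972) iteration mentioned above can be sketched in a few lines for a discrete source and reproduction alphabet. The slope parameter beta plays the role of the inverse temperature in the annealing analogy; the code is an illustrative sketch, not the authors' algorithm:

```python
import math

def blahut_arimoto(p_x, d, beta, iters=200):
    """One point of the rate-distortion curve via the Blahut (1972) iteration.
    p_x: source pmf; d[i][j]: distortion between source letter i and
    reproduction letter j; beta > 0: slope (inverse-temperature) parameter."""
    m = len(d[0])
    q = [1.0 / m] * m  # reproduction distribution, initialized uniform
    for _ in range(iters):
        # conditional p(y|x) proportional to q(y) * exp(-beta * d(x, y))
        w = [[q[j] * math.exp(-beta * d[i][j]) for j in range(m)]
             for i in range(len(p_x))]
        norm = [sum(row) for row in w]
        q = [sum(p_x[i] * w[i][j] / norm[i] for i in range(len(p_x)))
             for j in range(m)]
    # distortion and rate (bits) at this slope
    D = sum(p_x[i] * w[i][j] / norm[i] * d[i][j]
            for i in range(len(p_x)) for j in range(m))
    R = sum(p_x[i] * w[i][j] / norm[i] *
            math.log2(w[i][j] / (norm[i] * q[j]))
            for i in range(len(p_x)) for j in range(m) if w[i][j] > 0)
    return R, D
```

For a binary symmetric source with Hamming distortion and beta = ln 9, this converges to D = 0.1 and R = 1 - h(0.1) ≈ 0.531 bits, matching the known R(D) = 1 - h(D). The iteration works on a fixed grid of reproduction letters, which is why, as the abstract notes, it only approaches the exact (often finitely supported) solution in the limit of infinite resolution.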

Journal ArticleDOI
TL;DR: The authors show several simple lower bounds on mutual information which do not assume that at least one of the random variables is equiprobable, and replace log M with the infinite-order Renyi entropy in the Fano inequality.
Abstract: The Fano inequality gives a lower bound on the mutual information between two random variables that take values on an M-element set, provided at least one of the random variables is equiprobable. The authors show several simple lower bounds on mutual information which do not assume such a restriction. In particular, this can be accomplished by replacing log M with the infinite-order Renyi entropy in the Fano inequality. Applications to hypothesis testing are exhibited along with bounds on mutual information in terms of the a priori and a posteriori error probabilities.
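For reference, the classical inequality being generalized here is the standard Fano bound: with error probability P_e of guessing X from Y on an M-element set,

```latex
H(X \mid Y) \;\le\; h(P_e) + P_e \log(M-1),
```

where h(·) is the binary entropy function, so that when X is equiprobable,

```latex
I(X;Y) \;\ge\; \log M - h(P_e) - P_e \log(M-1).
```

The bounds in the paper replace the log M term, which is where equiprobability enters, with the infinite-order Renyi entropy.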

Journal ArticleDOI
TL;DR: Lee-metric BCH codes can be used for protecting against bitshift errors and synchronization errors caused by insertion and/or deletion of zeros in (d, k)-constrained channels, providing an algebraic approach to correcting errors in partial-response channels where matched spectral-null codes are used.
Abstract: Shows that each code in a certain class of BCH codes over GF(p), specified by a code length n ≤ p^m - 1 and a runlength r ≤ (p-1)/2 of consecutive roots in GF(p^m), has minimum Lee distance ≥ 2r. For the very high-rate range these codes approach the sphere-packing bound on the minimum Lee distance. Furthermore, for a given r, the length range of these codes is twice as large as that attainable by Berlekamp's (1984) extended negacyclic codes. The authors present an efficient decoding procedure, based on Euclid's algorithm, for correcting up to r-1 errors and detecting r errors, that is, up to the number of Lee errors guaranteed by the designed minimum Lee distance 2r. Bounds on the minimum Lee distance for r ≥ (p+1)/2 are provided for the Reed-Solomon case, i.e., when the BCH code roots are in GF(p). The authors present two applications. First, Lee-metric BCH codes can be used for protecting against bitshift errors and synchronization errors caused by insertion and/or deletion of zeros in (d, k)-constrained channels. Second, the code construction with its decoding algorithm can be formulated over the integer ring, providing an algebraic approach to correcting errors in partial-response channels where matched spectral-null codes are used.

Journal ArticleDOI
TL;DR: It is shown that the Shannon lower bound is asymptotically tight for norm-based distortions when the source vector has a finite differential entropy and a finite αth moment for some α > 0, with respect to the given norm.
Abstract: New results are proved on the convergence of the Shannon (1959) lower bound to the rate distortion function as the distortion decreases to zero. The key convergence result is proved using a fundamental property of informational divergence. As a corollary, it is shown that the Shannon lower bound is asymptotically tight for norm-based distortions, when the source vector has a finite differential entropy and a finite αth moment for some α > 0, with respect to the given norm. Moreover, we derive a theorem of Linkov (1965) on the asymptotic tightness of the Shannon lower bound for general difference distortion measures with more relaxed conditions on the source density. We also show that the Shannon lower bound relative to a stationary source and single-letter difference distortion is asymptotically tight under very weak assumptions on the source distribution.
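As a concrete special case (standard, though not spelled out in the abstract): for squared-error distortion and a scalar source X with differential entropy h(X), the Shannon lower bound reads

```latex
R(D) \;\ge\; R_{\mathrm{SLB}}(D) \;=\; h(X) - \tfrac{1}{2}\log(2\pi e D),
```

and asymptotic tightness means R(D) - R_SLB(D) → 0 as D → 0.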

Journal ArticleDOI
TL;DR: Smoothed polyperiodograms are proposed for cyclic polyspectral estimation and are shown to be consistent and asymptotically normal.
Abstract: Second- and higher-order almost cyclostationary processes are random signals with almost periodically time-varying statistics. The class includes stationary and cyclostationary processes as well as many real-life signals of interest. Cyclic and time-varying cumulants and polyspectra are defined for discrete-time real kth-order cyclostationary processes, and their interrelationships are explored. Smoothed polyperiodograms are proposed for cyclic polyspectral estimation and are shown to be consistent and asymptotically normal. Asymptotic covariance expressions are derived along with their computable forms. Higher than second-order cyclic cumulants and polyspectra convey time-varying phase information and are theoretically insensitive to any stationary (for nonzero cycles) as well as additive cyclostationary Gaussian noise (for all cycles).

Journal ArticleDOI
TL;DR: The problem is solved for two cases: where the encoder is and is not informed of the side-information, and the minimum achievable rate R is given as a per-letter minimization of information theoretic quantities.
Abstract: A discrete memoryless source {X_k} is to be coded into a binary stream of rate R bits/symbol such that {X_k} can be recovered with minimum possible distortion. The system is to be optimized for best performance with two decoders, one of which has access to side-information about the source. For given levels of average distortion for these two decoders, the minimum achievable rate R (in the usual Shannon theory sense) is given as a per-letter minimization of information theoretic quantities. The problem is solved for two cases: where the encoder is and is not informed of the side-information.

Journal ArticleDOI
TL;DR: The authors prove that being ideal over just one of the two domains does not suffice for universally ideal access structures, and give an exact characterization for each of these two conditions.
Abstract: Given a set of parties {1, ..., n}, an access structure is a monotone collection of subsets of the parties. For a certain domain of secrets, a secret-sharing scheme for an access structure is a method for a dealer to distribute shares to the parties. These shares enable subsets in the access structure to reconstruct the secret, while subsets not in the access structure get no information about the secret. A secret-sharing scheme is ideal if the domains of the shares are the same as the domain of the secrets. An access structure is universally ideal if there exists an ideal secret-sharing scheme for it over every finite domain of secrets. An obvious necessary condition for an access structure to be universally ideal is to be ideal over the binary and ternary domains of secrets. The authors prove that this condition is also sufficient. They also show that being ideal over just one of the two domains does not suffice for universally ideal access structures. Finally, they give an exact characterization for each of these two conditions.

Journal ArticleDOI
TL;DR: In the present paper, two estimators of the normalized autocorrelation function based on the phase only are presented and their theoretical accuracy is evaluated and compared to the accuracy of the direct estimate.
Abstract: The normalized autocorrelation function of a Gaussian process may be recovered from second-order moments of its polarity, through the arcsin law. By analogy, it is possible to calculate the normalized autocorrelation function of a circularly complex Gaussian process from the knowledge of moments of its instantaneous phase. In the present paper, two estimators of the normalized autocorrelation function based on the phase only are presented. Their theoretical accuracy is evaluated and compared to the accuracy of the direct estimate.
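The arcsin law referred to here states that for zero-mean, unit-variance jointly Gaussian X and Y with correlation ρ, E[sgn(X) sgn(Y)] = (2/π) arcsin ρ, so ρ can be recovered from the polarity correlation alone. A quick simulation sketch (sample size, seed, and ρ are illustrative choices):

```python
import math
import random

def polarity_corr(rho, n=200_000, seed=1):
    """Monte Carlo estimate of E[sgn(X) sgn(Y)] for jointly Gaussian X, Y
    with correlation rho (zero mean, unit variance)."""
    rng = random.Random(seed)
    s = 0
    for _ in range(n):
        u = rng.gauss(0.0, 1.0)
        v = rho * u + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        s += (1 if u >= 0 else -1) * (1 if v >= 0 else -1)
    return s / n

# Invert the arcsin law to recover rho from the polarity correlation.
rho = 0.6
rho_hat = math.sin(math.pi / 2.0 * polarity_corr(rho))
assert abs(rho_hat - rho) < 0.02
```

The paper's phase-only estimators play the analogous role for circularly complex Gaussian processes, with the instantaneous phase in place of the polarity.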

Journal ArticleDOI
TL;DR: An upper bound on the cardinality of the intersection of two perfect codes of length n is presented, and perfect codes whose intersection attains the upper bound are constructed for all n.
Abstract: Properties of nonlinear perfect binary codes are investigated and several new constructions of perfect codes are derived from these properties. An upper bound on the cardinality of the intersection of two perfect codes of length n is presented, and perfect codes whose intersection attains the upper bound are constructed for all n. As an immediate consequence of the proof of the upper bound the authors obtain a simple closed-form expression for the weight distribution of a perfect code. Furthermore, they prove that the characters of a perfect code satisfy certain constraints, and provide a sufficient condition for a binary code to be perfect. The latter result is employed to derive a generalization of the construction of Phelps (1983), which is shown to give rise to some perfect codes that are nonequivalent to the perfect codes obtained from the known constructions. Moreover, for any m ≥ 4 the authors construct full-rank perfect binary codes of length 2^m - 1. These codes are obviously nonequivalent to any of the previously known perfect codes. Furthermore, the latter construction exhibits the existence of full-rank perfect tilings. Finally, they construct a set of 2^(2^(cn)) nonequivalent perfect codes of length n, for sufficiently large n and a constant c = 0.5 - ε. Precise enumeration of the number of codes in this set provides a slight improvement over the results reported by Phelps.

Journal ArticleDOI
TL;DR: The authors identify the problem of maximum likelihood sequence estimation with that of finding the nearest lattice point; a tight upper bound on the symbol error probability shows that the new estimator is effectively optimal for m ≥ 4, and even for m = 2 the loss in signal-to-noise ratio is less than 0.5 dB.
Abstract: Considers the problem of data detection in multilevel lattice-type modulation systems in the presence of intersymbol interference and additive white Gaussian noise. The conventional maximum likelihood sequence estimator using the Viterbi algorithm has a time complexity of O(m^(ν+1)) operations per symbol and a space complexity of O(δm^ν) storage elements, where m is the size of the input alphabet, ν is the length of the channel memory, and δ is the truncation depth. By revising the truncation scheme and viewing the channel as a linear transform, the authors identify the problem of maximum likelihood sequence estimation with that of finding the nearest lattice point. From this lattice viewpoint, the lattice sequence estimator for PAM systems is developed, which has the following desired properties: 1) its expected time complexity grows as δ² as SNR → ∞; 2) its space complexity grows as δ; and 3) its error performance is effectively optimal for sufficiently large m. A tight upper bound on the symbol error probability of the new estimator is derived, and is confirmed by the simulation results of an example channel. It turns out that the estimator is effectively optimal for m ≥ 4 and the loss in signal-to-noise ratio is less than 0.5 dB even for m = 2. Finally, limitations of the proposed estimator are also discussed.

Journal ArticleDOI
TL;DR: A new construction of optimal binary sequences, identical to the well known family of Gold sequences in terms of maximum nontrivial correlation magnitude and family size, but having larger linear span is presented.
Abstract: A new construction of optimal binary sequences, identical to the well-known family of Gold sequences in terms of maximum nontrivial correlation magnitude and family size, but having larger linear span, is presented. The distribution of correlation values is determined. For every odd integer τ ≥ 3, the construction provides a family that contains 2^τ + 1 cyclically distinct sequences, each of period 2^τ - 1. The maximum nontrivial correlation magnitude equals 2^((τ+1)/2) + 1. With one exception, each of the sequences in the family has linear span at least (τ² - τ)/2 (compared to 2τ for Gold sequences). The sequences are easily implemented using a quaternary shift register followed by a simple feedforward nonlinearity.

Journal ArticleDOI
TL;DR: The jointly designed TCQ/TCM system outperforms the straightforward cascade of separately designed TCQ and TCM systems and provides larger signal-to-noise ratio than Farvardin and Vaishampayan's (1991) channel-optimized vector quantization.
Abstract: Trellis-coded quantization (TCQ) of memoryless sources is developed for transmission over a binary symmetric channel. The optimized TCQ coder can achieve essentially the same performance as Ayanoglu and Gray's (1987) unconstrained trellis coding optimized for the binary symmetric channel, but with a much lower implementation complexity for transmission rates above 1 b/sample. In most cases, the optimized TCQ coder also provides larger signal-to-noise ratio than Farvardin and Vaishampayan's (1991) channel-optimized vector quantization. Algorithms are developed for the joint design of trellis-coded quantization/modulation (TCQ/TCM). The jointly designed TCQ/TCM system outperforms the straightforward cascade of separately designed TCQ and TCM systems. The improvement is most significant at low channel signal-to-noise ratio. For a first-order Gauss-Markov source, the predictive TCQ/TCM performance can exceed that of optimum pulse amplitude modulation.

Journal ArticleDOI
TL;DR: In this article, the error exponent given by Csiszar and Korner (1980) for the universal coding system can strictly be sharpened in general for a region of relatively higher rates.
Abstract: Universal coding for the Slepian-Wolf (1973) data compression system is considered. We shall demonstrate, based on a simple observation, that the error exponent given by Csiszar and Korner (1980) for the universal coding system can strictly be sharpened in general for a region of relatively higher rates. This kind of observation can be carried over also to the case of lower rates outside the Slepian-Wolf region, which establishes the strong converse along with the optimal exponent.

Journal ArticleDOI
TL;DR: It is established that the corresponding minimax robust tests are solutions to simple decentralized detection problems for which the sensor distributions are specified to be the least favorable distributions.
Abstract: Decentralized detection problems are studied where the sensor distributions are not specified completely. The sensor distributions are assumed to belong to known uncertainty classes. It is shown for a broad class of such problems that a set of least favorable distributions exists for minimax robust testing between the hypotheses. It is hence established that the corresponding minimax robust tests are solutions to simple decentralized detection problems for which the sensor distributions are specified to be the least favorable distributions.