# Showing papers in "IEEE Transactions on Information Theory" in 2001

••

TL;DR: A generic message-passing algorithm, the sum-product algorithm, that operates on a factor graph and computes, either exactly or approximately, various marginal functions derived from the global function.

Abstract: Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of "local" functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph. In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes, either exactly or approximately, various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative "turbo" decoding algorithm, Pearl's (1988) belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms.

6,637 citations
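A minimal sketch (not from the paper) of the single sum-product update rule on a toy chain-structured factor graph; the factor tables and variable names are illustrative, and the result is checked against brute-force marginalization.

```python
import itertools
import numpy as np

# Toy factor graph: a chain x0 -- f01 -- x1 -- f12 -- x2 over binary variables.
# Global function g(x) = f01(x0, x1) * f12(x1, x2); all values are illustrative.
f01 = np.array([[1.0, 2.0], [3.0, 1.0]])   # f01[x0, x1]
f12 = np.array([[2.0, 1.0], [1.0, 4.0]])   # f12[x1, x2]

def sum_product_marginal_x1():
    # Leaf variables send the all-ones message; each factor then multiplies
    # its local table by the incoming message and sums out its other
    # argument -- the single sum-product computational rule.
    msg_x0_to_f01 = np.ones(2)
    msg_x2_to_f12 = np.ones(2)
    msg_f01_to_x1 = f01.T @ msg_x0_to_f01   # sum over x0
    msg_f12_to_x1 = f12 @ msg_x2_to_f12     # sum over x2
    m = msg_f01_to_x1 * msg_f12_to_x1       # product of incoming messages
    return m / m.sum()

def brute_force_marginal_x1():
    m = np.zeros(2)
    for x0, x1, x2 in itertools.product([0, 1], repeat=3):
        m[x1] += f01[x0, x1] * f12[x1, x2]
    return m / m.sum()

print(sum_product_marginal_x1(), brute_force_marginal_x1())
```

On a cycle-free graph like this chain the two computations agree exactly; on graphs with cycles the same update rule yields the approximate ("loopy") marginals the abstract refers to.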

••

TL;DR: This work designs low-density parity-check codes that perform at rates extremely close to the Shannon capacity and proves a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution.

Abstract: We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.

3,520 citations

••

Bell Labs

TL;DR: The results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon.

Abstract: We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.

3,393 citations

••

TL;DR: It is proved that if S is representable as a highly sparse superposition of atoms from this time-frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the ℓ¹ norm of the coefficients among all decompositions.

Abstract: Suppose a discrete-time signal S(t), 0 ≤ t

2,207 citations

••

TL;DR: Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.

Abstract: This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances and their Tanner (1981) graphs have girth 6. Finite-geometry LDPC codes can be decoded in various ways, ranging from low to high decoding complexity and from reasonably good to very good performance. They perform very well with iterative decoding. Furthermore, they can be put in either cyclic or quasi-cyclic form. Consequently, their encoding can be achieved in linear time and implemented with simple feedback shift registers. This advantage is not shared by other LDPC codes in general and is important in practice. Finite-geometry LDPC codes can be extended and shortened in various ways to obtain other good LDPC codes. Several techniques of extension and shortening are presented. Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.

1,401 citations

••

TL;DR: A simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs is introduced and a simple criterion involving the fractions of nodes of different degrees on both sides of the graph is obtained which is necessary and sufficient for the decoding process to finish successfully with high probability.

Abstract: We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct for any given rate R and any given real number ε a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length n. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1+ε)Rn or more. The recovery algorithm also runs in time proportional to n ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed.

1,341 citations
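The erasure recovery algorithm the abstract describes can be sketched as a "peeling" process: repeatedly find a parity check with exactly one erased neighbor and solve for it. The parity-check matrix and codeword below are a toy example, not taken from the paper.

```python
import numpy as np

# Peeling decoder for the binary erasure channel: repeatedly find a
# parity check with exactly one erased bit and recover it from parity.
# H and the codeword are illustrative toys.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def peel(H, received):
    # received: bits in {0, 1}, with erasures marked as -1
    c = received.copy()
    progress = True
    while progress and (c == -1).any():
        progress = False
        for row in H:
            erased = np.where((row == 1) & (c == -1))[0]
            if len(erased) == 1:                 # exactly one erasure in this check
                known = (row == 1) & (c != -1)
                c[erased[0]] = c[known].sum() % 2  # parity determines the erased bit
                progress = True
    return c

codeword = np.array([1, 0, 1, 1, 1, 0])
assert (H @ codeword % 2 == 0).all()             # sanity check: valid codeword
received = codeword.copy()
received[[0, 4]] = -1                            # erase two positions
print(peel(H, received))
```

The paper's degree-distribution criterion characterizes exactly when this process finishes with high probability on large random graphs; here the loop simply terminates when no check with a single erasure remains.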

••

TL;DR: By using the Gaussian approximation for message densities under density evolution, the sum-product decoding algorithm can be visualized, and the optimization of degree distributions can be understood and carried out graphically using this visualization.

Abstract: Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under message-passing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densities under density evolution to simplify the analysis of the decoding algorithm. We convert the infinite-dimensional problem of iteratively calculating message densities, which is needed to find the exact threshold, to a one-dimensional problem of updating the means of the Gaussian densities. This simplification not only allows us to calculate the threshold quickly and to understand the behavior of the decoder better, but also makes it easier to design good irregular LDPC codes for AWGN channels. For various regular LDPC codes we have examined, thresholds can be estimated within 0.1 dB of the exact value. For rates between 0.5 and 0.9, codes designed using the Gaussian approximation perform within 0.02 dB of the best performing codes found so far by using density evolution when the maximum variable degree is 10. We show that by using the Gaussian approximation, we can visualize the sum-product decoding algorithm. We also show that the optimization of degree distributions can be understood and done graphically using the visualization.

1,204 citations
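A sketch of the one-dimensional recursion the abstract describes, for a (3,6)-regular ensemble: track only the mean of the Gaussian message densities. The φ(x) ≈ exp(−0.4527·x^0.86 + 0.0218) curve fit is the approximation commonly quoted in connection with this paper; the noise levels and the divergence target are illustrative choices, not the paper's exact threshold procedure.

```python
import math

# One-dimensional density evolution under the Gaussian approximation
# for a (dv, dc)-regular LDPC ensemble on the AWGN channel.
def phi(x):
    # Curve fit commonly quoted for phi(x); reasonable over the range used here.
    return math.exp(-0.4527 * x**0.86 + 0.0218)

def phi_inv(y):
    return ((0.0218 - math.log(y)) / 0.4527) ** (1 / 0.86)

def converges(sigma, dv=3, dc=6, iters=100, target=30.0):
    m_u0 = 2.0 / sigma**2                       # mean of the channel LLR message
    m_c = 0.0
    for _ in range(iters):
        m_v = m_u0 + (dv - 1) * m_c             # variable-node mean update
        if m_v > target:                        # means diverging: decoding succeeds
            return True
        # check-node mean update: phi(m_c) = 1 - (1 - phi(m_v))^(dc-1)
        m_c = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))
    return False

# sigma = 0.8 is below the (3,6) threshold (~0.88); sigma = 1.0 is above it.
print(converges(0.8), converges(1.0))
```

Below threshold the means run away to infinity (successful decoding); above it they stall at a finite fixed point, which is the one-dimensional picture of the infinite-dimensional density recursion.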

••

TL;DR: It is shown how to exploit the sparseness of the parity-check matrix to obtain efficient encoders and it is shown that "optimized" codes actually admit linear time encoding.

Abstract: Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. We consider the encoding problem for LDPC codes. More generally we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3,6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length n ≈ 100 000 is still quite practical. More importantly, we show that "optimized" codes actually admit linear time encoding.

1,173 citations
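A toy illustration (not the paper's general greedy triangulation algorithm) of why structure in the parity-check matrix makes encoding cheap: when H has the form [P | I], the parity bits follow from the message bits by back-substitution, so encoding costs only the number of ones in P.

```python
import numpy as np

# Illustrative sparse parity part P; H = [P | I] over GF(2).
P = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
H = np.hstack([P, np.eye(3, dtype=int)])

def encode(message):
    # Choose parity so that H @ [message, parity] = 0 (mod 2):
    # P @ message + parity = 0  =>  parity = P @ message (mod 2).
    parity = P @ message % 2
    return np.concatenate([message, parity])

c = encode(np.array([1, 0, 1]))
print(c, H @ c % 2)                 # codeword and its (all-zero) syndrome
```

The paper's contribution is showing that a random sparse H can be brought close to such a triangular form with a small "gap," so that this back-substitution idea gives essentially linear-time encoding for realistic LDPC codes.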

••

TL;DR: The results of experiments testing the efficacy of irregular codes on both binary-symmetric and Gaussian channels suggest that variations of irregular low density parity-check codes may be able to match or beat turbo code performance.

Abstract: We construct new families of error-correcting codes based on Gallager's (1973) low-density parity-check codes. We improve on Gallager's results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for finding good irregular structures for such decoding algorithms. Our rigorous analysis based on martingales, our methodology for constructing good irregular codes, and the demonstration that irregular structure improves performance constitute key points of our contribution. We also consider irregular codes under belief propagation. We report the results of experiments testing the efficacy of irregular codes on both binary-symmetric and Gaussian channels. For example, using belief propagation, for rate 1/4 codes on 16000 bits over a binary-symmetric channel, previous low-density parity-check codes can correct up to approximately 16% errors, while our codes correct over 17%. In some cases our results come very close to reported results for turbo codes, suggesting that variations of irregular low density parity-check codes may be able to match or beat turbo code performance.

900 citations

••

Institut Eurécom

TL;DR: The throughput of automatic retransmission request (ARQ) protocols is compared to that of code division multiple access (CDMA) with conventional decoding; interestingly, the ARQ systems are not interference-limited even if no multiuser detection or joint decoding is used, as opposed to conventional CDMA.

Abstract: In next-generation wireless communication systems, packet-oriented data transmission will be implemented in addition to standard mobile telephony. We take an information-theoretic view of some simple protocols for reliable packet communication based on "hybrid-ARQ," over a slotted multiple-access Gaussian channel with fading and study their throughput (total bit per second per hertz) and average delay under idealized but fairly general assumptions. As an application of the renewal-reward theorem, we obtain closed-form throughput formulas. Then, we consider asymptotic behaviors with respect to various system parameters. The throughput of automatic retransmission request (ARQ) protocols is compared to that of code division multiple access (CDMA) with conventional decoding. Interestingly, the ARQ systems are not interference-limited even if no multiuser detection or joint decoding is used, as opposed to conventional CDMA.

742 citations

••

TL;DR: An information-theoretic perspective on optimum transmitter strategies, and the gains obtained by employing them, for systems with transmit antenna arrays and imperfect channel feedback is provided.

Abstract: The use of channel feedback from receiver to transmitter is standard in wireline communications. While knowledge of the channel at the transmitter would produce similar benefits for wireless communications as well, the generation of reliable channel feedback is complicated by the rapid time variations of the channel for mobile applications. The purpose of this paper is to provide an information-theoretic perspective on optimum transmitter strategies, and the gains obtained by employing them, for systems with transmit antenna arrays and imperfect channel feedback. The spatial channel, given the feedback, is modeled as a complex Gaussian random vector. Two extreme cases are considered: mean feedback, in which the channel side information resides in the mean of the distribution, with the covariance modeled as white, and covariance feedback, in which the channel is assumed to be varying too rapidly to track its mean, so that the mean is set to zero, and the information regarding the relative geometry of the propagation paths is captured by a nonwhite covariance matrix. In both cases, the optimum transmission strategies, maximizing the information transfer rate, are determined as a solution to simple numerical optimization problems. For both feedback models, our numerical results indicate that, when there is a moderate disparity between the strengths of different paths from the transmitter to the receiver, it is nearly optimal to employ the simple beamforming strategy of transmitting all available power in the direction which the feedback indicates is the strongest.

••

TL;DR: It is shown that the assignment based on a fixed point of max-product is a "neighborhood maximum" of the posterior probabilities: the posterior probability of the max-product assignment is guaranteed to be greater than all other assignments in a particular large region around that assignment.

Abstract: Graphical models, such as Bayesian networks and Markov random fields (MRFs), represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local-message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable values of the unobserved variables given the observed ones. Good empirical performance has been obtained by running the max-product algorithm (or the equivalent min-sum algorithm) on graphs with loops, for applications including the decoding of "turbo" codes. Except for two simple graphs (cycle codes and single-loop graphs) there has been little theoretical understanding of the max-product algorithm on graphs with loops. Here we prove a result on the fixed points of max-product on a graph with arbitrary topology and with arbitrary probability distributions (discrete- or continuous-valued nodes). We show that the assignment based on a fixed point is a "neighborhood maximum" of the posterior probability: the posterior probability of the max-product assignment is guaranteed to be greater than all other assignments in a particular large region around that assignment. The region includes all assignments that differ from the max-product assignment in any subset of nodes that form no more than a single loop in the graph. In some graphs, this neighborhood is exponentially large. We illustrate the analysis with examples.

••

TL;DR: This paper derives the outage capacity regions of fading broadcast channels, assuming that both the transmitter and the receivers have perfect channel side information, and finds a strategy which bounds the outage probability region for different spectrum-sharing techniques.

Abstract: For pt.I see ibid., vol.47, no.3, p.1083-1102 (2002). We study three capacity regions for fading broadcast channels and obtain their corresponding optimal resource allocation strategies: the ergodic (Shannon) capacity region, the zero-outage capacity region, and the capacity region with outage. In this paper, we derive the outage capacity regions of fading broadcast channels, assuming that both the transmitter and the receivers have perfect channel side information. These capacity regions and the associate optimal resource allocation policies are obtained for code division (CD) with and without successive decoding, for time division (TD), and for frequency division (FD). We show that in an M-user broadcast system, the outage capacity region is implicitly obtained by deriving the outage probability region for a given rate vector. Given the required rate of each user, we find a strategy which bounds the outage probability region for different spectrum-sharing techniques. The corresponding optimal power allocation scheme is a multiuser generalization of the threshold-decision rule for a single-user fading channel. Also discussed is a simpler minimum common outage probability problem under the assumption that the broadcast channel is either not used at all when fading is severe or used simultaneously for all users. Numerical results for the different outage capacity regions are obtained for the Nakagami-m (1960) fading model.

••

TL;DR: In this article, the authors define and show how to construct nonbinary quantum stabilizer codes based on nonbinary error bases, which generalizes the relationship between self-orthogonal codes over F_4 and binary quantum codes to one between self-orthogonal codes over F_{q^2} and q-ary quantum codes for any prime power q.

Abstract: We define and show how to construct nonbinary quantum stabilizer codes. Our approach is based on nonbinary error bases. It generalizes the relationship between self-orthogonal codes over F_4 and binary quantum codes to one between self-orthogonal codes over F_{q^2} and q-ary quantum codes for any prime power q.

••

TL;DR: This work casts space-time codes in an optimal signal-to-noise ratio (SNR) framework and shows that they achieve the maximum SNR and, in fact, they correspond to a generalized maximal ratio combiner.

Abstract: In Tarokh et al. (1999) space-time block codes were introduced to obtain coded diversity for a multiple-antenna communication system. In this work, we cast space-time codes in an optimal signal-to-noise ratio (SNR) framework and show that they achieve the maximum SNR and, in fact, correspond to a generalized maximal ratio combiner. The maximum SNR framework also helps in calculating the distribution of the SNR and in deriving explicit expressions for bit error rates. We bring out the connection between the theory of amicable orthogonal designs and space-time codes. Based on this, we give a much simpler proof of one of the main theorems on space-time codes for complex symbols. We present a rate-1/2 code for complex symbols which has a smaller delay than the code already known. We also present another rate-3/4 code which is simpler than the one already known, in the sense that it does not involve additions or multiplications. We also point out the connection between generalized real designs and generalized orthogonal designs.
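The "generalized maximal ratio combiner" claim can be seen concretely in the rate-1 two-antenna (Alamouti-type) design: linear combining of the two received slots returns each symbol scaled by |h1|² + |h2|². A minimal sketch with noise omitted so the identity is exact; the channel and symbol values are arbitrary illustrations.

```python
import numpy as np

# Two transmit antennas, one receive antenna, two channel uses.
rng = np.random.default_rng(0)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # flat-fading gains
s1, s2 = 1 + 1j, -1 + 1j                                # transmitted symbols

r1 = h1 * s1 + h2 * s2                                  # slot 1: send (s1, s2)
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)               # slot 2: send (-s2*, s1*)

# Linear combining yields each symbol scaled by |h1|^2 + |h2|^2 -- the
# maximum-SNR branch weighting, i.e. a generalized maximal ratio combiner.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(s1_hat, s2_hat)
```

With additive noise present, the same combiner leaves independent noise on each symbol estimate, which is what makes the per-symbol SNR distribution tractable in the paper's framework.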

••

TL;DR: An "orthogonal" decomposition of exponential and mixture families of probability distributions, which have a natural hierarchical structure, is given; it is important for extracting intrinsic interactions in firing patterns of an ensemble of neurons and for estimating its functional connections.

Abstract: An exponential family or mixture family of probability distributions has a natural hierarchical structure. This paper gives an "orthogonal" decomposition of such a system based on information geometry. A typical example is the decomposition of stochastic dependency among a number of random variables. In general, they have a complex structure of dependencies. Pairwise dependency is easily represented by correlation, but it is more difficult to measure effects of pure triplewise or higher order interactions (dependencies) among these variables. Stochastic dependency is decomposed quantitatively into an "orthogonal" sum of pairwise, triplewise, and further higher order dependencies. This gives a new invariant decomposition of joint entropy. This problem is important for extracting intrinsic interactions in firing patterns of an ensemble of neurons and for estimating its functional connections. The orthogonal decomposition is given in a wide class of hierarchical structures including both exponential and mixture families. As an example, we decompose the dependency in a higher order Markov chain into a sum of those in various lower order Markov chains.

••

TL;DR: The problem becomes one of extremizing the mutual information over all noise processes with covariances satisfying the correlation constraints R_0, ..., R_p; for high signal powers, the worst additive noise is Gauss-Markov of order p, as expected.

Abstract: The maximum entropy noise under a lag p autocorrelation constraint is known by Burg's theorem to be the pth order Gauss-Markov process satisfying these constraints. The question is, what is the worst additive noise for a communication channel given these constraints? Is it the maximum entropy noise? The problem becomes one of extremizing the mutual information over all noise processes with covariances satisfying the correlation constraints R_0, ..., R_p. For high signal powers, the worst additive noise is Gauss-Markov of order p as expected. But for low powers, the worst additive noise is Gaussian with a covariance matrix in a convex set which depends on the signal power.

••

TL;DR: The capacity-achieving distribution of a discrete-time Rayleigh fading channel, in which successive symbols face independent fading and where neither the transmitter nor the receiver has channel state information, is studied.

Abstract: We consider transmission over a discrete-time Rayleigh fading channel, in which successive symbols face independent fading, and where neither the transmitter nor the receiver has channel state information. Subject to an average power constraint, we study the capacity-achieving distribution of this channel and prove it to be discrete with a finite number of mass points, one of them located at the origin. We numerically compute the capacity and the corresponding optimal distribution as a function of the signal-to-noise ratio (SNR). The behavior of the channel at low SNR is studied and finally a comparison is drawn with the ideal additive white Gaussian noise channel.

••

TL;DR: This work suggests a penalty function to be used in various problems of structural risk minimization, based on the sup-norm of the so-called Rademacher process indexed by the underlying class of functions (sets), and obtains oracle inequalities for the theoretical risk of estimators, obtained by structural minimization of the empirical risk with Rademacher penalties.

Abstract: We suggest a penalty function to be used in various problems of structural risk minimization. This penalty is data dependent and is based on the sup-norm of the so-called Rademacher process indexed by the underlying class of functions (sets). The standard complexity penalties, used in learning problems and based on the VC-dimensions of the classes, are conservative upper bounds (in a probabilistic sense, uniformly over the set of all underlying distributions) for the penalty we suggest. Thus, for a particular distribution of training examples, one can expect better performance of learning algorithms with the data-driven Rademacher penalties. We obtain oracle inequalities for the theoretical risk of estimators, obtained by structural minimization of the empirical risk with Rademacher penalties. The inequalities imply some form of optimality of the empirical risk minimizers. We also suggest an iterative approach to structural risk minimization with Rademacher penalties, in which the hierarchy of classes is not given in advance, but is determined in the data-driven iterative process of risk minimization. We prove probabilistic oracle inequalities for the theoretical risk of the estimators based on this approach as well.
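The data-driven penalty is built from the sup-norm of the Rademacher process over the class. A minimal Monte Carlo sketch for a toy finite class of threshold classifiers; the data, class, and trial count are all illustrative stand-ins, not the paper's setting.

```python
import numpy as np

# Estimate the empirical Rademacher penalty
#   E_sigma [ sup_f (1/n) | sum_i sigma_i f(x_i) | ]
# for a finite class F of +/-1-valued threshold classifiers.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(size=50))
thresholds = np.linspace(0, 1, 21)
# F[k, i] = f_k(x_i) in {-1, +1}: sign of (x_i - threshold_k)
F = np.where(x[None, :] > thresholds[:, None], 1, -1)

def rademacher_penalty(F, trials=2000):
    n = F.shape[1]
    sup_vals = []
    for _ in range(trials):
        sigma = rng.choice([-1, 1], size=n)      # random Rademacher signs
        sup_vals.append(np.abs(F @ sigma).max() / n)
    return float(np.mean(sup_vals))

pen = rademacher_penalty(F)
print(pen)
```

Unlike a VC-dimension bound, this quantity is computed from the training sample itself, which is why it can be smaller for benign distributions.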

••

TL;DR: The spectral efficiency as a function of the number of users per chip, the distribution of the flat fading, and the signal-to-noise ratio (SNR) is found for the optimum receiver as well as linear receivers (single-user matched filter, decorrelator, and minimum mean-square error (MMSE)).

Abstract: The capacity of the randomly spread synchronous code-division multiple-access (CDMA) channel subject to frequency-flat fading is studied in the wide-band limit of large number of users. We find the spectral efficiency as a function of the number of users per chip, the distribution of the flat fading, and the signal-to-noise ratio (SNR), for the optimum receiver as well as linear receivers (single-user matched filter, decorrelator, and minimum mean-square error (MMSE)). The potential improvements due to both decentralized transmitter power control and multi-antenna receivers are also analyzed.

••

Rice University

TL;DR: A detailed study of the properties and several potential applications of the Rényi entropies, with emphasis on the mathematical foundations for quadratic TFRs; it is established that there exist signals for which the measures are not well defined.

Abstract: The generalized entropies of Rényi inspire new measures for estimating signal information and complexity in the time-frequency plane. When applied to a time-frequency representation (TFR) from Cohen's class or the affine class, the Rényi entropies conform closely to the notion of complexity that we use when visually inspecting time-frequency images. These measures possess several additional interesting and useful properties, such as accounting, cross-component, and transformation invariances, that make them natural for time-frequency analysis. This paper comprises a detailed study of the properties and several potential applications of the Rényi entropies, with emphasis on the mathematical foundations for quadratic TFRs. In particular, for the Wigner distribution, we establish that there exist signals for which the measures are not well defined.
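The order-α Rényi entropy of a normalized time-frequency distribution p is H_α = log₂(Σ p^α)/(1−α). A schematic of the complexity measure on a toy nonnegative "spectrogram" (the arrays below are arbitrary illustrations, not a real TFR):

```python
import numpy as np

# Renyi entropy H_alpha(p) = log2(sum p^alpha) / (1 - alpha)
# of a normalized, nonnegative time-frequency distribution.
def renyi_entropy(p, alpha=3):
    p = p / p.sum()
    return np.log2((p ** alpha).sum()) / (1 - alpha)

# A flat distribution over N cells has entropy log2(N) (maximal complexity);
# energy concentrated in a single cell has entropy 0 (a single component).
flat = np.ones((8, 8))
peaked = np.zeros((8, 8))
peaked[0, 0] = 1.0
print(renyi_entropy(flat), renyi_entropy(peaked))
```

The paper's subtleties arise for quadratic TFRs like the Wigner distribution, which can take negative values, so Σ p^α (and hence the measure) need not be well defined; the sketch above sidesteps that by using a nonnegative distribution.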

••

Bell Labs

TL;DR: This work uses the powerful theory of fixed-point-free groups and their representations to design high-rate constellations with full diversity and reveals that the number of different group structures with full diversity is very limited when the number of transmitter antennas is large and odd.

Abstract: Multiple antennas can greatly increase the data rate and reliability of a wireless communication link in a fading environment, but the practical success of using multiple antennas depends crucially on our ability to design high-rate space-time constellations with low encoding and decoding complexity. It has been shown that full transmitter diversity, where the constellation is a set of unitary matrices whose differences have nonzero determinant, is a desirable property for good performance. We use the powerful theory of fixed-point-free groups and their representations to design high-rate constellations with full diversity. Furthermore, we thereby classify all full-diversity constellations that form a group, for all rates and numbers of transmitter antennas. The group structure makes the constellations especially suitable for differential modulation and low-complexity decoding algorithms. The classification also reveals that the number of different group structures with full diversity is very limited when the number of transmitter antennas is large and odd. We, therefore, also consider extensions of the constellation designs to nongroups. We conclude by showing that many of our designed constellations perform excellently on both simulated and real wireless channels.

••

TL;DR: In this paper, the authors studied different notions of traceability (TA) for pirated data and discussed equivalent formulations using structures such as perfect hash families, and used methods from combinatorics and coding theory to provide bounds (necessary conditions) and constructions (sufficient conditions) for the objects of interest.

Abstract: In order to protect copyrighted material, codes may be embedded in the content or codes may be associated with the keys used to recover the content. Codes can offer protection by providing some form of traceability (TA) for pirated data. Several researchers have studied different notions of TA and related concepts in previous years. "Strong" versions of TA allow at least one member of a coalition that constructs a "pirate decoder" to be traced. Weaker versions of this concept ensure that no coalition can "frame" a disjoint user or group of users. All these concepts can be formulated as codes having certain combinatorial properties. We study the relationships between the various notions, and we discuss equivalent formulations using structures such as perfect hash families. We use methods from combinatorics and coding theory to provide bounds (necessary conditions) and constructions (sufficient conditions) for the objects of interest.

••

TL;DR: It is proved that for a tensor product of two unital stochastic maps on qubit states, using an entanglement that involves only states which emerge with minimal entropy cannot decrease the entropy below the minimum achievable using product states.

Abstract: We consider the minimal entropy of qubit states transmitted through two uses of a noisy quantum channel, which is modeled by the action of a completely positive trace-preserving (or stochastic) map. We provide strong support for the conjecture that this minimal entropy is additive, namely, that the minimum entropy can be achieved when product states are transmitted. Explicitly, we prove that for a tensor product of two unital stochastic maps on qubit states, using an entanglement that involves only states which emerge with minimal entropy cannot decrease the entropy below the minimum achievable using product states. We give a separate argument, based on the geometry of the image of the set of density matrices under stochastic maps, which suggests that the minimal entropy conjecture holds for nonunital as well as for unital maps. We also show that the maximal norm of the output states is multiplicative for most product maps on n-qubit states, including all those for which at least one map is unital. For the class of unital channels on C^2, we show that additivity of minimal entropy implies that the Holevo (see IEEE Trans. Inform. Theory, vol.44, p.269-73, 1998 and Russian Math. Surv., p.1295-1331, 1999) capacity of the channel is additive over two inputs, achievable with orthogonal states, and equal to the Shannon capacity. This implies that superadditivity of the capacity is possible only for nonunital channels.

••

TL;DR: A new block code is introduced that corrects multiple insertion, deletion, and substitution errors; it consists of nonlinear inner codes, called "watermark" codes, concatenated with low-density parity-check codes over nonbinary fields.

Abstract: A new block code is introduced which is capable of correcting multiple insertion, deletion, and substitution errors. The code consists of nonlinear inner codes, which we call "watermark" codes, concatenated with low-density parity-check codes over nonbinary fields. The inner code allows probabilistic resynchronization and provides soft outputs for the outer decoder, which then completes decoding. We present codes of rate 0.7 and transmitted length 5000 bits that can correct 30 insertion/deletion errors per block. We also present codes of rate 3/14 and length 4600 bits that can correct 450 insertion/deletion errors per block.
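The channel these codes target can be sketched with a toy i.i.d. insertion/deletion/substitution simulator (our own illustration; the rates and mechanism are simplified assumptions, not the paper's channel model). Note that the output length differs from the input length, so an ordinary block decoder loses synchronization, which is what the inner watermark code's probabilistic resynchronization addresses.

```python
import random

def ids_channel(bits, p_ins=0.01, p_del=0.01, p_sub=0.01, rng=None):
    """Toy i.i.d. synchronization channel: before each transmitted bit
    a random extra bit may be inserted; the bit itself may then be
    deleted or substituted (flipped)."""
    rng = rng or random.Random(0)
    out = []
    for b in bits:
        if rng.random() < p_ins:
            out.append(rng.randrange(2))              # insertion
        r = rng.random()
        if r < p_del:
            continue                                  # deletion
        out.append(1 - b if r < p_del + p_sub else b)  # substitution or clean
    return out

# A 5000-bit block, matching the transmitted length quoted above.
received = ids_channel([0] * 5000)
```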

••

AT&T Labs

TL;DR: It is shown that the maximum fidelity obtained by a positive partial transpose (p.p.t.) distillation protocol is given by the solution to a certain semidefinite program, which yields a number of new lower and upper bounds on p.p.t. distillable entanglement.

Abstract: We show that the maximum fidelity obtained by a positive partial transpose (p.p.t.) distillation protocol is given by the solution to a certain semidefinite program. This gives a number of new lower and upper bounds on p.p.t. distillable entanglement (and thus new upper bounds on 2-locally distillable entanglement). In the presence of symmetry, the semidefinite program simplifies considerably, becoming a linear program in the case of isotropic and Werner states. Using these techniques, we determine the p.p.t. distillable entanglement of asymmetric Werner states and "maximally correlated" states. We conclude with a discussion of possible applications of semidefinite programming to quantum codes and 1-local distillation.
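The basic ingredient behind p.p.t. protocols, the partial transpose, is straightforward to compute directly. The sketch below (ours, not the paper's semidefinite program) applies the Peres partial-transpose test to the two-qubit Werner states mentioned in the abstract, which have a negative partial transpose exactly when the singlet weight p exceeds 1/3.

```python
import numpy as np

def partial_transpose(rho, d=2):
    """Transpose the second subsystem of a (d*d) x (d*d) bipartite state."""
    # reshape to axes (i, k, j, l): i, j index system A; k, l index system B
    r = rho.reshape(d, d, d, d)
    # swapping the B row/col indices (k <-> l) is the partial transpose on B
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

def werner(p):
    """Two-qubit Werner state: p * singlet projector + (1 - p) * I/4."""
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

def min_pt_eigenvalue(rho):
    return float(np.linalg.eigvalsh(partial_transpose(rho)).min())
```

For the Werner state the smallest eigenvalue of the partial transpose is (1 - 3p)/4, so p = 0.5 is entangled (negative partial transpose) while p = 0.2 passes the p.p.t. test.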

••

TL;DR: This work explicitly constructs multiple-transmit-antenna differential encoding/decoding schemes based on generalized orthogonal designs, generalizing the two-transmit-antenna differential detection scheme proposed previously.

Abstract: We explicitly construct multiple transmit antenna differential encoding/decoding schemes based on generalized orthogonal designs. These constructions generalize the two-transmit-antenna differential detection scheme that we proposed before (Tarokh and Jafarkhani 2000).
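A minimal sketch of differential encoding/decoding with the two-antenna orthogonal design (the Alamouti block), illustrating the general idea only: transmission is noiseless and the data pairs are unit-energy BPSK, both simplifying assumptions. Because each block is unitary, the receiver can detect each data block by comparing consecutive received vectors, never using the channel coefficients.

```python
import numpy as np

def alamouti(s1, s2):
    """Rate-1 complex orthogonal design: G^H G = (|s1|^2 + |s2|^2) I."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

rng = np.random.default_rng(1)
# Unit-energy BPSK pairs -> each Alamouti block is unitary.
const = [(a / np.sqrt(2), b / np.sqrt(2)) for a in (1, -1) for b in (1, -1)]
data = [const[i] for i in rng.integers(0, 4, size=20)]

S = np.eye(2, dtype=complex)                      # known reference block
h = rng.normal(size=2) + 1j * rng.normal(size=2)  # unknown flat-fading channel

rx = [h @ S]                                      # received rows y_t = h S_t
for a, b in data:
    S = S @ alamouti(a, b)                        # differential encoding
    rx.append(h @ S)

# Noncoherent detection: y_t = y_{t-1} G, so compare against candidates
# without any channel knowledge.
decoded = [
    min(const, key=lambda p: np.linalg.norm(y_cur - y_prev @ alamouti(*p)))
    for y_prev, y_cur in zip(rx, rx[1:])
]
```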

••

TL;DR: This work presents an algorithm where users update their transmitter signature sequences sequentially, in a distributed fashion, by using available receiver measurements, and proves that the algorithm converges to a set of orthogonal signature sequences when the number of users is less than or equal to the processing gain.

Abstract: Optimum signature sequence sets that maximize the capacity of single-cell synchronous code division multiple access (CDMA) systems have been identified. Optimum signature sequences minimize the total squared correlation (TSC); they form a set of orthogonal sequences, if the number of users is less than or equal to the processing gain, and a set of Welch (1994) bound equality (WBE) sequences, otherwise. We present an algorithm where users update their transmitter signature sequences sequentially, in a distributed fashion, by using available receiver measurements. We show that each update decreases the TSC of the set, and produces better signature sequence sets progressively. We prove that the algorithm converges to a set of orthogonal signature sequences when the number of users is less than or equal to the processing gain. We observe and conjecture that the algorithm converges to a WBE set when the number of users is greater than the processing gain. At each step, the algorithm replaces one signature sequence from the set with the normalized minimum mean squared error (MMSE) receiver corresponding to that signature sequence. Since the MMSE filter can be obtained by a distributed algorithm for each user, the proposed algorithm is amenable to distributed implementation.
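The update described above can be sketched in a few lines: each user's sequence is replaced in turn by its normalized MMSE receiver, the TSC never increases, and for K ≤ N the set converges toward orthonormality. The dimensions, noise level, and sweep count below are illustrative assumptions.

```python
import numpy as np

def tsc(S):
    """Total squared correlation of the columns (signature sequences) of S."""
    G = S.T @ S
    return float((G ** 2).sum())

rng = np.random.default_rng(0)
N, K, sigma2 = 4, 3, 0.01            # processing gain, users (K <= N), noise
S = rng.normal(size=(N, K))
S /= np.linalg.norm(S, axis=0)       # unit-energy signatures

tsc0 = tsc(S)
for _ in range(200):                 # sequential, user-by-user sweeps
    for k in range(K):
        # Normalized MMSE receiver for user k, given the current set.
        c = np.linalg.solve(S @ S.T + sigma2 * np.eye(N), S[:, k])
        S[:, k] = c / np.linalg.norm(c)

G = S.T @ S                          # should approach the identity (K <= N)
```

Each inner step is distributed in the sense the abstract describes: user k needs only its own MMSE filter, which is computable from receiver measurements.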

•

Rice University

TL;DR: A new distance measure, the resistor-average distance between two probability distributions, is defined; it is closely related to the Kullback-Leibler distance, and its relation to well-known distance measures is determined.

Abstract: We define a new distance measure, the resistor-average distance, between two probability distributions; it is closely related to the Kullback-Leibler distance. While the Kullback-Leibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.
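Assuming the parallel-resistor combination the name suggests, 1/R(p,q) = 1/D(p‖q) + 1/D(q‖p) (our reading; the abstract itself does not give the formula), the measure is symmetric and never exceeds either of the two KL distances. A small numerical sketch:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler distance D(p || q) in nats, discrete case."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))

def resistor_average(p, q):
    """Combine the two KL distances like parallel resistors:
    1/R = 1/D(p||q) + 1/D(q||p)."""
    d1, d2 = kl(p, q), kl(q, p)
    return d1 * d2 / (d1 + d2)

p = [0.50, 0.30, 0.20]
q = [0.25, 0.25, 0.50]
d_pq, d_qp = kl(p, q), kl(q, p)   # differ: KL is asymmetric
r = resistor_average(p, q)        # symmetric by construction
```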

••

Bell Labs

TL;DR: Numerical studies show that this method performs well at low redundancies, as compared to uniform MD scalar quantization, and several transform optimization results are presented for memoryless Gaussian sources.

Abstract: Multiple description (MD) coding is source coding in which several descriptions of the source are produced such that various reconstruction qualities are obtained from different subsets of the descriptions. Unlike multiresolution or layered source coding, there is no hierarchy of descriptions; thus, MD coding is suitable for packet erasure channels or networks without priority provisions. Generalizing work by Orchard, Wang, Vaishampayan and Reibman (see Proc IEEE Int. Conf. Image Processing, vol.I, Santa Barbara, CA, p.608-11, 1997), a transform-based approach is developed for producing M descriptions of an N-tuple source, M ≤ N. The descriptions are sets of transform coefficients, and the transform coefficients of different descriptions are correlated so that missing coefficients can be estimated. Several transform optimization results are presented for memoryless Gaussian sources, including a complete solution of the N=2, M=2 case with arbitrary weighting of the descriptions. The technique is effective only when independent components of the source have differing variances. Numerical studies show that this method performs well at low redundancies, as compared to uniform MD scalar quantization.
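A toy version of the correlating-transform idea (our sketch, with an illustrative 45-degree rotation and variances; quantization is omitted): mixing a high-variance and a low-variance component makes the two descriptions correlated, so a lost description can be linearly estimated from the received one, reducing the average single-description distortion compared with sending the components uncoded.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Independent Gaussian components with differing variances (4 and 0.25),
# the regime where the abstract says the technique is effective.
x = np.stack([2.0 * rng.normal(size=n), 0.5 * rng.normal(size=n)])

# 45-degree correlating transform: equal-variance, correlated descriptions.
c = s = np.sqrt(0.5)
T = np.array([[c, s], [-s, c]])
y = T @ x

def avg_mse_one_lost(desc, inv):
    """Average per-vector MSE when one description is lost and the missing
    coefficient is estimated from the other by linear MMSE."""
    C = np.cov(desc)
    total = 0.0
    for lost in (0, 1):
        kept = 1 - lost
        est = desc.copy()
        est[lost] = (C[lost, kept] / C[kept, kept]) * desc[kept]
        xhat = inv @ est
        total += np.mean((x - xhat) ** 2) * 2   # sum over both components
    return total / 2

mse_plain = avg_mse_one_lost(x.copy(), np.eye(2))  # descriptions = raw components
mse_corr = avg_mse_one_lost(y, T.T)                # T orthogonal: inverse = T^T
```

Analytically, the uncoded scheme averages (4 + 0.25)/2 ≈ 2.125, while the correlated descriptions give about 0.47 in either loss case, illustrating how the transform trades a small redundancy for robustness to a lost description.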