
Showing papers presented at the "Information Theory Workshop" in 2006


Proceedings ArticleDOI
13 Mar 2006
TL;DR: This work proposes a spreading scheme based on random sparse signatures and a belief-propagation detection algorithm with linear time complexity, and proves that the information capacity of the system converges to Tanaka's formula for random 'dense' signatures, providing the first rigorous justification of this formula.
Abstract: We consider the CDMA (code-division multiple-access) multi-user detection problem for binary signals and additive white Gaussian noise. We propose a spreading sequences scheme based on random sparse signatures, and a detection algorithm based on belief propagation (BP) with linear time complexity. In the new scheme, each user conveys its power onto a finite number of chips l, in the large system limit. We analyze the performance of BP detection and prove that it coincides with that of optimal (symbol MAP) detection in the l → ∞ limit. In the same limit, we prove that the information capacity of the system converges to Tanaka's formula for random 'dense' signatures, thus providing the first rigorous justification of this formula. Apart from being computationally convenient, the new scheme allows for optimization in close analogy with irregular low density parity check code ensembles.

175 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: It is shown that with BPSK modulation, PNC still yields significantly higher capacity than straightforward network coding when there are synchronization errors; interestingly, this remains the case even in the extreme case where synchronization is not performed at all.
Abstract: When data are transmitted in a wireless network, they reach the target receiver as well as other receivers in the neighborhood. Rather than a blessing, this attribute is treated as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). Physical-layer network coding (PNC), however, has been proposed to take advantage of this attribute. Unlike "conventional" network coding which performs coding arithmetic on digital bit streams after they are decoded, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves and applies the network coding arithmetic at the physical layer. As a result, the destructive effect of interference is eliminated and the capacity of networks is boosted significantly. A key requirement of PNC is synchronization among nodes, which has not been addressed previously. This is the focus of this paper. Specifically, we investigate the impact of imperfect synchronization (i.e., finite synchronization errors) on PNC. We show that with BPSK modulation, PNC still yields significantly higher capacity than straightforward network coding when there are synchronization errors. Interestingly, this remains the case even in the extreme case where synchronization is not performed at all.
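The BPSK case analyzed in this paper can be illustrated with a toy simulation (the function names, the decision threshold, and the assumption of perfect symbol-level synchronization are all illustrative, not the paper's model): with bit 0 mapped to +1 and bit 1 to −1, the superposed amplitude at the relay is ±2 when the two source bits agree and 0 when they differ, so a simple amplitude threshold directly yields the XOR of the two bits.

```python
import random

def pnc_relay_map(y, threshold=1.0):
    """Map the superposed BPSK amplitude at the relay to the XOR of the
    two source bits: |s1 + s2| is about 2 when the bits agree and about 0
    when they differ, so XOR = 1 iff the received amplitude is small."""
    return 0 if abs(y) >= threshold else 1

def simulate(n_symbols=10000, noise_std=0.1, seed=0):
    """Fraction of XOR-mapping errors under perfect synchronization."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_symbols):
        b1, b2 = rng.randint(0, 1), rng.randint(0, 1)
        s1, s2 = 1 - 2 * b1, 1 - 2 * b2              # BPSK: 0 -> +1, 1 -> -1
        y = s1 + s2 + rng.gauss(0.0, noise_std)      # superposed EM waves + noise
        if pnc_relay_map(y) != (b1 ^ b2):
            errors += 1
    return errors / n_symbols
```

At low noise the relay recovers the XOR essentially error-free; the paper's contribution is quantifying what survives of this when the superposition is not perfectly aligned.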

96 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: The mean squared error of estimating each symbol of the input vector using BP is proved to be equal to the MMSE of estimating the same symbol through a scalar Gaussian channel with some degradation in the signal-to-noise ratio (SNR).
Abstract: This paper studies the estimation of a high-dimensional vector signal where the observation is a known "sparse" linear transformation of the signal corrupted by additive Gaussian noise. A paradigm of such a linear system is a code-division multiple access (CDMA) channel with sparse spreading matrix. Assuming a "semi-regular" ensemble of sparse matrix linear transformations, where the bipartite graph describing the system is asymptotically cycle-free, it is shown that belief propagation (BP) achieves the minimum mean-square error (MMSE) in estimating the transformation of the input vector in the large-system limit. The result holds regardless of the distribution and power of the input symbols. Furthermore, the mean squared error of estimating each symbol of the input vector using BP is proved to be equal to the MMSE of estimating the same symbol through a scalar Gaussian channel with some degradation in the signal-to-noise ratio (SNR). The degradation, called the efficiency, is determined from a fixed-point equation due to Guo and Verdu, which is a generalization of Tanaka's formula to arbitrary prior distributions.

95 citations


Proceedings ArticleDOI
13 Mar 2006
TL;DR: In this article, the authors describe a new tiling approach for network code design, which applies dynamic programming to find the best strategy among a restricted collection of network codes, and demonstrate the proposed strategy as a method for efficiently accommodating multiple unicasts in a wireless coding environment on a triangular lattice.
Abstract: We describe a new tiling approach for network code design. The proposed method applies dynamic programming to find the best strategy among a restricted collection of network codes. We demonstrate the proposed strategy as a method for efficiently accommodating multiple unicasts in a wireless coding environment on a triangular lattice and discuss its generalization.

83 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: Exploiting the good periodic correlation of CAZAC sequences, this paper gives a simple detection structure for random access preambles; simulation results indicate that the proposed detector performs as well as detectors provided by other companies, but has a simpler structure.
Abstract: Constant amplitude zero autocorrelation (CAZAC) sequences, one type of polyphase code, have many applications in channel estimation and time synchronization, since they have good periodic correlation properties. They have also been introduced into the random access procedure of the 3rd Generation Partnership Project (3GPP) Long-Term Evolution (LTE) as the preamble signature. Building on the good periodic correlation of CAZAC sequences, this paper gives a simple detection structure for random access preambles. Simulation results indicate that the proposed detector performs as well as detectors provided by other companies, but has a simpler structure.
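For concreteness, here is a minimal sketch of the Zadoff-Chu construction, the standard CAZAC family used for LTE preambles (the root and length values below are illustrative, not the LTE parameters): every element has unit amplitude, and the periodic autocorrelation is zero at every nonzero shift, which is what makes preamble detection by correlation so clean.

```python
import cmath
import math

def zadoff_chu(root, length):
    """Generate a Zadoff-Chu sequence of odd length; `root` should be
    coprime with `length` for the CAZAC properties to hold."""
    return [cmath.exp(-1j * math.pi * root * n * (n + 1) / length)
            for n in range(length)]

def periodic_autocorr(seq, shift):
    """Periodic (cyclic) autocorrelation at the given shift."""
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n].conjugate() for i in range(n))
```

A detector can thus declare a preamble present when the cyclic correlation against the known sequence produces a single dominant peak.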

79 citations


Proceedings ArticleDOI
13 Mar 2006
TL;DR: Both theoretical and simulation results show that both LDPC and Raptor codes are suitable for HARQ schemes, and which codes would make a better choice depends mainly on the width of the signal-to-noise operating range of the HARQ scheme, prior knowledge of that range, and other design parameters and constraints dictated by standards.
Abstract: Two incremental redundancy hybrid ARQ (IR-HARQ) schemes are compared: one is based on LDPC code ensembles with random transmission assignments, the other is based on recently introduced Raptor codes. A number of important issues, such as rate and power control, and error rate performance after each transmission on time varying binary-input, symmetric-output channels are addressed by analyzing performance of LDPC and Raptor codes on parallel channels. The theoretical results obtained for random code ensembles are tested on several practical code examples by simulation. Both theoretical and simulation results show that both LDPC and Raptor codes are suitable for HARQ schemes. Which codes would make a better choice depends mainly on the width of the signal-to-noise operating range of the HARQ scheme, prior knowledge of that range, and other design parameters and constraints dictated by standards.

78 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: The existence of codes that can correct errors up to the full error correction capability given by the Singleton bound is proved, and it is shown that errors can be corrected with very high probability under reasonable assumptions.
Abstract: In this paper, we study basic properties of network error correction codes, their construction, and their correction capability for various kinds of errors. Our discussion is confined to the single-source multicast case. We define the minimum rank of a network error correction code; this plays the same role that minimum distance has played in classical coding theory. We prove the existence of codes that can correct errors up to the full error correction capability given by the Singleton bound. Even when the rank of the error is higher than the error correction capability, we show that the error can be corrected with very high probability under reasonable assumptions.

72 citations


Proceedings ArticleDOI
01 Jan 2006

70 citations


Proceedings ArticleDOI
13 Mar 2006
TL;DR: A new information reconciliation method is proposed which allows two parties sharing continuous random variables to agree on a common bit string and achieves higher efficiency than previously reported results.
Abstract: We propose a new information reconciliation method which allows two parties sharing continuous random variables to agree on a common bit string. We show that existing coded modulation techniques can be adapted for reconciliation and give an explicit code construction based on LDPC codes in the case of Gaussian variables. Simulations show that our method achieves higher efficiency than previously reported results.

66 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper focuses on the uplink and introduces a framework to investigate capacity bounds in cellular networks that use conventional single-cell links or employ multi-cell joint detection, as well as the methodology of efficiently and realistically modelling the physical layer of future cellular networks.
Abstract: In the current research on future cellular networks, both classical MIMO concepts, such as schemes achieving spatial diversity or spatial multiplexing, and also virtual MIMO concepts are observed. In the latter, multiple base stations serving different cells are grouped to form distributed antenna systems that can be exploited as large MIMO systems. For example, concepts such as joint detection in the uplink and joint transmission in the downlink can be applied in order to cancel or suppress inter-cell interference. In this paper, we focus on the uplink and introduce a framework to investigate capacity bounds in cellular networks that use conventional single-cell links or employ multi-cell joint detection. We observe both an ideal hexagonal cell setup and also the existing GSM cell setup in downtown Dresden, while initially computing the capacities based on idealistic assumptions, and then successively introducing more and more non-idealistic aspects, to finally obtain a good estimate of performance bounds in a real-world system. The paper illustrates capacity bounds and thus the benefits of distributed antenna systems, as well as the methodology of efficiently and realistically modelling the physical layer of future cellular networks.

60 citations


Proceedings ArticleDOI
13 Mar 2006
TL;DR: In this paper, the authors compare the performance of one-shot and iterative conferencing in a cooperative Gaussian relay channel and show that when the relay channel is strong, iterative cooperation, with optimal allocation of resources, outperforms one-shot cooperation provided that the conference link capacity is large.
Abstract: We compare the rates of one-shot and iterative conferencing in a cooperative Gaussian relay channel. The relay and receiver cooperate via a conference, as introduced by Willems, in which they exchange a series of communications over orthogonal links. Under one-shot conferencing, decode-and-forward (DF) is capacity-achieving when the relay has a strong channel. On the other hand, Wyner-Ziv compress-and-forward (CF) approaches the cut-set bound when the conference link capacity is large. To contrast with one-shot conferencing, we consider a two-round iterative conference scheme; it comprises CF in the first round, and DF in the second. When the relay has a weak channel, the iterative scheme is disadvantageous. However, when the relay channel is strong, iterative cooperation, with optimal allocation of conferencing resources, outperforms one-shot cooperation provided that the conference link capacity is large. When precise allocation of conferencing resources is not possible, we consider iterative cooperation with symmetric conference links, and show that the iterative scheme still surpasses one-shot cooperation, albeit under more restricted conditions.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: In this two-part series of papers, a generalized non-orthogonal amplify and forward (GNAF) protocol which generalizes several known cooperative diversity protocols is proposed, and it is shown that the GNAF protocol is both delay efficient and coding-gain efficient.
Abstract: In this two-part series of papers, a generalized non-orthogonal amplify and forward (GNAF) protocol which generalizes several known cooperative diversity protocols is proposed. Transmission in the GNAF protocol comprises two phases - the broadcast phase and the cooperation phase. In the broadcast phase, the source broadcasts its information to the relays as well as the destination. In the cooperation phase, the source and the relays together transmit a space-time code in a distributed fashion. The GNAF protocol relaxes the constraints imposed by the protocol of Jing and Hassibi on the code structure. In Part-I of this paper, a code design criterion is obtained and it is shown that the GNAF protocol is both delay efficient and coding-gain efficient. Moreover, the GNAF protocol enables the use of sphere decoders at the destination with a non-exponential maximum likelihood (ML) decoding complexity. In Part-II, several low-decoding-complexity code constructions are studied and a lower bound on the diversity-multiplexing gain tradeoff of the GNAF protocol is obtained.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The CAGA converges to the global optimum in far fewer generations, and gets stuck at a local optimum fewer times than SGA and AGA.
Abstract: Traditional genetic algorithms (GAs) easily get stuck at a local optimum, and often have slow convergent speed. A novel adaptive genetic algorithm (AGA) called cloud-model-based AGA (CAGA) is proposed in this paper. Unlike conventional genetic algorithms, CAGA presents the use of cloud model to adaptively tune the probabilities of crossover pc and mutation pm depending on the fitness values of solutions. Because normal cloud model has the properties of randomness and stable tendency, CAGA is expected to realize the twin goals of maintaining diversity in the population and sustaining the convergence capacity of the GA. We compared the performance of the CAGA with that of the standard GA (SGA) and AGA in optimizing several typical functions with varying degrees of complexity and solving the Travelling Salesman Problems. In all cases studied, CAGA is greatly superior to SGA and AGA in terms of robustness and efficiency. The CAGA converges to the global optimum in far fewer generations, and gets stuck at a local optimum fewer times than SGA and AGA.
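As a rough sketch of the adaptation idea (the parameterization below is illustrative only, not the paper's exact normal-cloud generator): individuals with fitness above the population mean get their crossover/mutation probability pulled down through a bell-shaped membership curve whose entropy and hyper-entropy supply the "stable tendency with randomness" the abstract describes, so good solutions are perturbed less while diversity is preserved.

```python
import math
import random

def cloud_adapted_rate(f, f_avg, f_max, base_rate, en_scale=3.0, he_scale=10.0):
    """Hypothetical cloud-model-style tuning of a crossover or mutation
    probability. Below-average individuals keep the full base rate;
    above-average ones draw a reduced rate from a Gaussian membership
    curve whose width itself is randomized (the cloud's hyper-entropy)."""
    if f <= f_avg:
        return base_rate                      # poor solutions keep the full rate
    ex = f_avg                                # cloud expectation
    en = (f_max - f_avg) / en_scale           # cloud entropy (curve width)
    enn = random.gauss(en, en / he_scale)     # hyper-entropy randomizes the width
    return base_rate * math.exp(-(f - ex) ** 2 / (2 * enn ** 2 + 1e-12))
```

The randomness in `enn` means two individuals with the same fitness can receive slightly different rates, which is the mechanism claimed to keep the population diverse without destroying convergence.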

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The results show that while random linear coding may outperform re-transmissions for heavy traffic, the delay incurred by the use of random linear codes is significantly higher when the source is lightly loaded.
Abstract: In this work we address the stability and delay performance of a multicast erasure channel with random arrivals at the source node. We consider both a standard retransmission (ARQ) scheme as well as random linear coding. Our results show that while random linear coding may outperform re-transmissions for heavy traffic, the delay incurred by the use of random linear codes is significantly higher when the source is lightly loaded.
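The random-linear-coding alternative compared above can be sketched as follows (packet sizes, the erasure model, and the GF(2) restriction are illustrative simplifications): the source sends random GF(2) combinations of its k packets, and a receiver decodes by Gaussian elimination once the coefficient vectors it has collected reach rank k. This is also why coding hurts delay at light load: nothing is decodable until full rank is reached.

```python
import random

def gaussian_elim_gf2(rows, k):
    """Solve the received GF(2) system; return the k packets, or None if
    the collected coefficient vectors do not yet have full rank."""
    pivots = {}
    for mask, val in rows:
        while mask:                           # reduce against existing pivots
            p = mask.bit_length() - 1
            if p in pivots:
                pm, pv = pivots[p]
                mask ^= pm
                val ^= pv
            else:
                pivots[p] = (mask, val)
                break
    if len(pivots) < k:
        return None
    out = [0] * k
    for p in sorted(pivots):                  # diagonalize bottom-up
        m, v = pivots[p]
        for q in range(p):
            if m >> q & 1:
                qm, qv = pivots[q]
                m ^= qm
                v ^= qv
        pivots[p] = (m, v)
        out[p] = v
    return out

def rlc_broadcast(packets, n_coded, erasure_prob, seed=0):
    """Send random GF(2) combinations of k packets (bit-vectors stored as
    ints) over an erasure channel; decode from whatever survives."""
    rng = random.Random(seed)
    k = len(packets)
    received = []
    for _ in range(n_coded):
        coeffs = rng.randrange(1, 1 << k)     # random nonzero combination
        coded = 0
        for i in range(k):
            if coeffs >> i & 1:
                coded ^= packets[i]
        if rng.random() >= erasure_prob:      # this coded packet survives
            received.append((coeffs, coded))
    return gaussian_elim_gf2(received, k)
```

Under ARQ, by contrast, each successfully received packet is immediately useful, which matches the paper's finding that retransmissions win at light load.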

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper extends fountain codes on erasure channels to arbitrary channel types by first taking into account the symbol information and belief propagation (BP) algorithm and proposes a rateless-coding framework based on fountain codes, which allows reliable wireless broadcast communications and asynchronous data access simultaneously.
Abstract: This paper investigates Fountain codes and their applications to wireless broadcast. We extend Fountain codes from erasure channels to arbitrary channel types by taking into account the symbol information and the belief propagation (BP) algorithm. We propose a rateless-coding framework based on Fountain codes which allows reliable wireless broadcast communications and asynchronous data access simultaneously. Considering the flexible code-rate characteristic, we introduce a new measure to evaluate the performance of Fountain codes in the wireless environment. By comparing the performance of two types of Fountain codes, Luby Transform (LT) codes and Raptor codes, within this framework, we conclude that Fountain codes are sufficient to fulfill both requirements in a wireless system. We also demonstrate that such a framework has advantages in efficiency, reliability, and robustness.
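The erasure-channel baseline that the paper generalizes can be sketched in a few lines (the degree distribution below is a toy stand-in, not the robust soliton distribution used in practice): each LT output symbol is the XOR of a random subset of source symbols, and the receiver decodes by peeling, the erasure special case of BP.

```python
import random

def lt_encode(symbols, n_out, rng):
    """LT (Luby Transform) encoding sketch: each output symbol XORs a
    random subset of the k source symbols. The degree distribution here
    is an illustrative toy, not the robust soliton distribution."""
    k = len(symbols)
    out = []
    for _ in range(n_out):
        d = min(rng.choice([1, 1, 2, 2, 2, 3, 4]), k)
        idx = frozenset(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= symbols[i]
        out.append((idx, val))
    return out

def lt_peel(coded, k):
    """Peeling (erasure-case BP) decoder: repeatedly take a degree-1
    equation, recover its source symbol, and strip that symbol from
    every other equation; returns None if decoding stalls."""
    eqs = [[set(s), v] for s, v in coded]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for eq in eqs:
            for i in [i for i in eq[0] if i in known]:
                eq[0].discard(i)
                eq[1] ^= known[i]
            if len(eq[0]) == 1:
                (i,) = eq[0]
                if i not in known:
                    known[i] = eq[1]
                    progress = True
    return [known[i] for i in range(k)] if len(known) == k else None
```

The paper's extension replaces the hard XOR equations with soft symbol information and full BP message passing, so the same rateless structure works on noisy (non-erasure) channels.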

Proceedings ArticleDOI
13 Mar 2006
TL;DR: An algorithm for collaboratively training regularized kernel least-squares regression estimators is derived, noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms.
Abstract: This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting.

Proceedings Article
01 Jan 2006

Proceedings ArticleDOI
13 Mar 2006
TL;DR: A general lower bound for the convergence rate of the regret is proved, and a specific strategy that attains this rate for any game for which a Hannan consistent player exists is exhibited.
Abstract: We consider repeated games in which the player, instead of observing the action chosen by the opponent in each game round, receives a feedback generated by the combined choice of the two players. We study Hannan consistent players for these games, that is, randomized playing strategies whose per-round regret vanishes with probability one as the number of game rounds goes to infinity. We prove a general lower bound for the convergence rate of the regret, and exhibit a specific strategy that attains this rate for any game for which a Hannan consistent player exists.
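In the full-monitoring special case, Hannan consistency is attained by the classical exponentially weighted average forecaster; the sketch below illustrates that standard construction (it is not the paper's partial-feedback strategy): each action is played with probability proportional to exp(−η × cumulative loss), and the per-round expected regret vanishes as the horizon grows.

```python
import math

def exp_weights_regret(losses, eta):
    """Run the exponentially weighted average forecaster over a known
    loss matrix (one row per round, one column per action) and return
    its expected regret against the best fixed action in hindsight."""
    n = len(losses[0])
    cum = [0.0] * n                   # cumulative loss of each action
    total = 0.0                       # expected cumulative loss of the forecaster
    for round_losses in losses:
        m = min(cum)                  # shift exponents for numerical stability
        w = [math.exp(-eta * (c - m)) for c in cum]
        z = sum(w)
        total += sum(wi / z * li for wi, li in zip(w, round_losses))
        cum = [c + li for c, li in zip(cum, round_losses)]
    return total - min(cum)
```

On a horizon of T rounds the regret of this forecaster stays bounded by roughly ln(n)/η + ηT/8, so the per-round regret goes to zero; the paper asks how fast this rate can be when only indirect feedback is observed.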

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This work shows that the existing outer bounds can in fact be arbitrarily loose in some parameter ranges, and by deriving new outer bounds, it is shown that a simplified Han-Kobayashi type scheme can achieve to within a single bit the capacity for all values of the channel parameters.
Abstract: The capacity of the two-user Gaussian interference channel has been open for thirty years. The understanding on this problem has been limited. The best known achievable region is due to Han-Kobayashi but its characterization is very complicated. It is also not known how tight the existing outer bounds are. In this work, we show that the existing outer bounds can in fact be arbitrarily loose in some parameter ranges, and by deriving new outer bounds, we show that a simplified Han-Kobayashi type scheme can achieve to within a single bit the capacity for all values of the channel parameters. We also show that the scheme is asymptotically optimal at certain high SNR regimes. Using our results, we provide a natural generalization of the point-to-point classical notion of degrees of freedom to interference-limited scenarios.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: The achievable rate region is characterized for the parallel Gaussian degraded message set broadcast problem, when only the strongest user needs the private information, and the set of achievable rate-diversity tuples for the diversity embedded problem for parallel fading channels is described.
Abstract: Diversity embedded codes are opportunistic codes which take advantage of good channel realizations while ensuring at least part of the information is received reliably for bad channels. We establish a connection between these codes and degraded message set broadcast codes. We characterize the achievable rate region for the parallel Gaussian degraded message set broadcast problem, when only the strongest user needs the private information. Using this, we partially characterize the set of achievable rate-diversity tuples for the diversity embedded problem for parallel fading channels.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A review of tensor product (TP) codes is given and possible applications to digital storage systems are discussed.
Abstract: Tensor Product (TP) codes result from combining two constituent error control codes in a particular manner. Depending upon the types of constituent codes used, the resulting codes can be error detection codes, error correction codes, error location codes, or some combination thereof. In this paper a review of these codes will be given and possible applications to digital storage systems will be discussed.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This work gives a refined analysis of the information leakage that involves m-th moment methods and chooses a linear transformation h in such a way as to minimize both the length n of the transmitted vector and theInformation leakage to the eavesdropper.
Abstract: To communicate an r-bit secret s through a wire-tap channel, the syndrome coding strategy consists of choosing a linear transformation h and transmitting an n-bit vector x such that h(x) = s. The receiver obtains a corrupted version of x and the eavesdropper an even more corrupted version of x; the (syndrome) function h should be chosen in such a way as to minimize both the length n of the transmitted vector and the information leakage to the eavesdropper. We give a refined analysis of the information leakage that involves m-th moment methods.
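The h(x) = s constraint can be made concrete with a minimal sketch (the matrix, the sizes, and the brute-force encoder are illustrative; a practical syndrome coder would pick a random coset representative directly rather than rejection-sample): h is given as r parity-check rows over the n transmitted bits, and the sender transmits any random x in the coset with syndrome s.

```python
import random

def syndrome(h_rows, x):
    """h(x) for a linear map given by r parity-check rows (bitmasks over
    the n transmitted bits): bit i of the syndrome is <row_i, x> mod 2."""
    return [bin(row & x).count("1") % 2 for row in h_rows]

def encode_secret(h_rows, n, secret_bits, rng):
    """Toy encoder: rejection-sample a random n-bit x until h(x) equals
    the secret. The randomness of x within the chosen coset is what
    limits what the eavesdropper's noisier copy reveals about s."""
    while True:
        x = rng.randrange(1 << n)
        if syndrome(h_rows, x) == secret_bits:
            return x
```

With a noiseless main channel the receiver simply computes h(x) to recover s; the paper's analysis quantifies how much of s leaks through the eavesdropper's corrupted copy.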

Proceedings ArticleDOI
01 Oct 2006
TL;DR: In this article, a new family of distributed space-time codes based on Co-ordinate Interleaved Orthogonal Designs (CIOD) which result in reduced Maximum Likelihood (ML) decoding complexity at the destination is proposed.
Abstract: This is the second part of a two-part series of papers. In this paper, for the generalized non-orthogonal amplify and forward (GNAF) protocol presented in Part-I, a construction of a new family of distributed space-time codes based on Co-ordinate Interleaved Orthogonal Designs (CIOD), which results in reduced Maximum Likelihood (ML) decoding complexity at the destination, is proposed. Further, it is established that the recently proposed Toeplitz space-time codes as well as space-time block codes (STBCs) from cyclic division algebras can be used in the GNAF protocol. Finally, a lower bound on the optimal Diversity-Multiplexing Gain (DM-G) tradeoff for the GNAF protocol is established and it is shown that this bound approaches the transmit diversity bound asymptotically as the number of relays and the number of channel uses increase.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: In this paper, the authors describe some practical aspects of the design process of good Raptor codes for finite block lengths over arbitrary binary input symmetric channels, and introduce a simple model for the finite-length convergence behavior of the iterative decoding algorithm based on density evolution, and propose a practical design procedure.
Abstract: In this paper we describe some practical aspects of the design process of good Raptor codes for finite block lengths over arbitrary binary-input symmetric channels. In particular, we introduce a simple model for the finite-length convergence behavior of the iterative decoding algorithm based on density evolution, and propose a practical design procedure. We report simulation results for some example codes.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: In this paper, the authors considered transmission of a continuous amplitude source over a quasi-static MIMO Rayleigh fading channel and considered joint source and channel coding techniques to maximize the distortion exponent.
Abstract: In this paper, we consider transmission of a continuous amplitude source over a quasi-static MIMO Rayleigh fading channel. The performance metric is end-to-end distortion of the source caused both by the lossy compression and the channel errors. We are interested in the high SNR behavior expressed in the distortion exponent, which is the exponential decay rate of the average end-to-end distortion as a function of SNR. Our goal is to maximize this distortion exponent by considering joint source and channel coding techniques. We provide digital strategies that utilize layered source coding coupled with multi-rate channel coding either by progressive or by superposition transmission, as well as a hybrid digital-analog scheme. When either the transmitter or the receiver has one antenna, we show that we are able to achieve the optimal distortion exponent.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: A means to construct dense, full-diversity STBCs from maximal orders in central simple algebras is introduced for the first time and a general algorithm for testing the maximality of a given order is presented.
Abstract: A means to construct dense, full-diversity STBCs from maximal orders in central simple algebras is introduced for the first time. As an example we construct an efficient ST lattice code with non-vanishing determinant for 4 transmit antenna MISO application. Also a general algorithm for testing the maximality of a given order is presented. By using a maximal order instead of just the ring of algebraic integers, the size of the code increases without losses in the minimum determinant. The usage of a proper ideal of a maximal order further improves the code, as the minimum determinant increases. Simulations in a quasi-static Rayleigh fading channel show that our lattice outperforms the DAST-lattice due to the properties described above.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: In this article, the dependence balance bounds of Hekstra and Willems were generalized and refined for the K-user multiaccess channel (MAC) with output feedback, and they were shown to establish the feedback sum-rate capacity for the Gaussian MAC when all users have the same per-symbol power constraints.
Abstract: Dependence balance bounds of Hekstra and Willems are generalized and refined. The new bounds are applied to the K-user multiaccess channel (MAC) with output feedback, and they are shown to establish the feedback sum-rate capacity for the Gaussian MAC when all users have the same per-symbol power constraints. The sum-rate capacity is achieved by Fourier modulated estimate correction. The feedback sum-rate capacity is shown to improve the no-feedback capacity by only log log K nats per use for large K. The new bounds also improve on cut-set bounds for asymmetric powers and rates.

Proceedings ArticleDOI
15 Sep 2006
TL;DR: A fast min-sum algorithm for decoding LDPC codes over GF(q) is presented; it searches the whole configuration space efficiently through dynamic programming and is simpler at the design stage because it has fewer parameters to tune.
Abstract: In this paper, we present a fast min-sum algorithm for decoding LDPC codes over GF(q). Our algorithm differs from the one presented by David Declercq and Marc Fossorier in [1] only in the way the horizontal scan in the min-sum algorithm is sped up. Declercq and Fossorier's algorithm speeds up the computation by reducing the number of configurations, while our algorithm uses dynamic programming instead. Compared with the configuration reduction algorithm, the dynamic programming one is simpler at the design stage because it has fewer parameters to tune. Furthermore, it does not have the performance degradation problem caused by the configuration reduction because it searches the whole configuration space efficiently through dynamic programming. Both algorithms have the same level of complexity and use simple operations which are suitable for hardware implementations.
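The dynamic-programming idea can already be seen in the binary special case, sketched below (the paper's algorithm works over GF(q); this illustration only shows how prefix/suffix recursions replace an explicit scan of configurations): a check node of degree d must output, for each edge, the sign product and minimum magnitude over all *other* incoming messages, and prefix/suffix tables compute all d leave-one-out answers in O(d) rather than O(d²).

```python
def check_node_min_sum(msgs):
    """Min-sum check-node update for the binary case using the
    prefix/suffix dynamic-programming trick: outgoing message j is the
    product of all other signs times the minimum of all other magnitudes."""
    d = len(msgs)
    signs = [1 if m >= 0 else -1 for m in msgs]
    mags = [abs(m) for m in msgs]
    # prefix tables: sign product / min magnitude of msgs[0..j-1]
    pre_s, pre_m = [1], [float("inf")]
    for s, m in zip(signs, mags):
        pre_s.append(pre_s[-1] * s)
        pre_m.append(min(pre_m[-1], m))
    # suffix tables: sign product / min magnitude of msgs[j+1..d-1]
    suf_s, suf_m = [1] * (d + 1), [float("inf")] * (d + 1)
    for i in range(d - 1, -1, -1):
        suf_s[i] = suf_s[i + 1] * signs[i]
        suf_m[i] = min(suf_m[i + 1], mags[i])
    return [pre_s[j] * suf_s[j + 1] * min(pre_m[j], suf_m[j + 1])
            for j in range(d)]
```

Over GF(q) the same principle applies with forward/backward recursions over configurations of field elements instead of sign/magnitude pairs.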

Proceedings ArticleDOI
01 Oct 2006
TL;DR: Simulations under a stable multipath environment show that the SD-method has a far better error probability than the OFDM method under heavy multipath.
Abstract: Suehiro et al. have proposed a new information transmission method (Suehiro's DFT method, or SD-method) using the Kronecker product between the rows of a DFT matrix and the data sequences. In this paper, the authors show, by simulation under a stable multipath environment, that the SD-method has a far better error probability than the OFDM method under heavy multipath.
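The Kronecker-product construction can be sketched as follows (the sizes and the simple correlation receiver are illustrative; this is not the paper's full transceiver): one row of the N-point DFT matrix is Kronecker-multiplied with the data sequence, so each row entry modulates a full copy of the data, and an ideal receiver recovers the data by correlating the copies against the same row.

```python
import cmath
import math

def dft_row(k, N):
    """Row k of the N-point DFT matrix."""
    return [cmath.exp(-2j * math.pi * k * n / N) for n in range(N)]

def sd_spread(data, k, N):
    """Kronecker product row_k (x) data: each DFT-row entry modulates a
    full copy of the data sequence, giving N * len(data) chips."""
    row = dft_row(k, N)
    return [r * d for r in row for d in data]

def sd_correlate(rx, k, N, data_len):
    """Recover the data by correlating the N copies against row k
    (noiseless, single-path case shown here)."""
    row = dft_row(k, N)
    return [sum(row[i].conjugate() * rx[i * data_len + m] for i in range(N)) / N
            for m in range(data_len)]
```

Since the N unit-modulus row entries cancel under conjugate correlation, the data comes back exactly in the ideal case; the paper's simulations concern how this structure degrades under multipath compared with OFDM.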

Proceedings ArticleDOI
13 Mar 2006
TL;DR: In this article, the authors present a highly efficient and accurate implementation of density evolution for LDPC codes, showing that, contrary to common claims, density evolution need not be computationally too intensive for practical use.
Abstract: Density evolution for LDPC codes predicts asymptotic performance and serves as a practical design tool for designing top performing structures [1]. Many papers advocate the use of exit chart methods and other approximations, proclaiming that density evolution is computationally too intensive. In this paper we show that this is not the case: we present a highly efficient and accurate implementation of density evolution for LDPC codes.