
Showing papers presented at "Information Theory Workshop in 2013"


Proceedings ArticleDOI
23 Dec 2013
TL;DR: The findings provide the insight that source nodes that cooperate by sharing energy with the relay node indirectly cooperate with each other, and that such cooperation can be carried out in a last-minute fashion.
Abstract: This paper considers two-hop communication networks where the transmitters harvest their energy in an intermittent fashion. In this network, communication is carried out by signal cooperation, i.e., relaying. Additionally, the transmitters have the option of transferring energy to one another, i.e., energy cooperation. Energy is partially lost during transfer, exposing a trade-off between energy cooperation and use of harvested energy for transmission. A multi-access relay model is considered, and transmit power allocation and energy transfer policies that jointly maximize the sum-rate are found. It is shown that a class of power policies achieves the optimal sum-rate, allowing a separation of the optimal energy transfer and optimal power allocation problems. The optimal energy transfer policy is shown to be an ordered node selection, where nodes with better energy transfer efficiency and worse channels transfer all their energy to the relay or to other source nodes via the relay. For the special case of a single source, the optimal policy requires the direction of energy transfer to remain unchanged unless either node depletes all of its energy. Overall, the findings provide the insight that source nodes that cooperate by sharing energy with the relay node indirectly cooperate with each other, and that such cooperation can be carried out in a last-minute fashion.

85 citations


Proceedings ArticleDOI
23 Dec 2013
TL;DR: In this article, the combined effect of coding in the delivery phase, achieving coded multicast gain, and spatial reuse due to local short-range D2D communication is considered.
Abstract: We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop, users make arbitrary requests from a finite library of possible files and user devices cache information in the form of carefully designed sets of packets from all files in the library. We consider the combined effect of coding in the delivery phase, achieving “coded multicast gain”, and of spatial reuse due to local short-range D2D communication. Somewhat counterintuitively, we show that the coded multicast gain and the spatial reuse gain do not cumulate, in terms of the throughput scaling laws. In particular, the spatial reuse gain shown in our previous work on uncoded random caching and the coded multicast gain shown in this paper yield the same scaling-law behavior, but no further scaling-law gain can be achieved by using both coded caching and D2D spatial reuse.

81 citations


Proceedings ArticleDOI
23 Dec 2013
TL;DR: A decoding algorithm is proposed which employs estimates of the error probabilities of the not-yet-processed bit channels to perform a directed search in the code tree, thus reducing the total number of iterations.
Abstract: A novel construction of polar codes with dynamic frozen symbols is proposed. The proposed codes are subcodes of extended BCH codes, which ensures a sufficiently high minimum distance. Furthermore, a decoding algorithm is proposed which employs estimates of the error probabilities of the not-yet-processed bit channels to perform a directed search in the code tree, thus reducing the total number of iterations.

80 citations


Proceedings ArticleDOI
23 Dec 2013
TL;DR: Scenarios where inter-node interference can be managed to provide significant gain in degrees of freedom over the conventional half-duplex cellular design are identified.
Abstract: The feasibility of full-duplex operation opens up the possibility of applying it to cellular networks so that uplink and downlink can be operated simultaneously for multiple users. However, simultaneous operation of uplink and downlink poses the new challenge of intra-cell inter-node interference. In this paper, we identify scenarios where inter-node interference can be managed to provide significant gain in degrees of freedom over the conventional half-duplex cellular design.

69 citations


Proceedings ArticleDOI
01 Sep 2013
TL;DR: In this paper, it was shown that the labeled stochastic block model exhibits a phase transition: reconstruction (i.e. identification of a partition positively correlated with the true partition of the underlying communities) would be feasible if and only if a model parameter exceeds a threshold.
Abstract: The labeled stochastic block model is a random graph model representing networks with community structure and interactions of multiple types. In its simplest form, it consists of two communities of approximately equal size, and the edges are drawn and labeled at random with probability depending on whether their two endpoints belong to the same community or not. It has been conjectured in [1] that this model exhibits a phase transition: reconstruction (i.e., identification of a partition positively correlated with the “true partition” into the underlying communities) is feasible if and only if a model parameter exceeds a threshold. We prove one half of this conjecture, i.e., that reconstruction is impossible below the threshold. In the converse direction, we introduce a suitably weighted graph. We show that when the parameter exceeds the threshold by a specific constant, reconstruction is achieved by (1) minimum bisection, and (2) a spectral method combined with removal of nodes of high degree.

46 citations
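
For intuition, the model in its simplest form is easy to simulate: draw each potential edge with a probability that depends on whether its endpoints share a community, then attach a label from the corresponding distribution. A minimal Python sketch follows; the sizes, edge probabilities, and two-label distributions are illustrative assumptions, not the paper's parameters.

```python
import random

def labeled_sbm(n, p_in, p_out, labels_in, labels_out, seed=0):
    """Sample a two-community labeled SBM: nodes 0..n-1 are split into two
    equal communities; each edge appears with probability p_in (same
    community) or p_out (different communities), then receives a label
    drawn from the corresponding distribution {label: prob}."""
    rng = random.Random(seed)
    community = [i % 2 for i in range(n)]          # balanced "true partition"
    edges = {}
    for u in range(n):
        for v in range(u + 1, n):
            same = community[u] == community[v]
            if rng.random() < (p_in if same else p_out):
                dist = labels_in if same else labels_out
                r, acc = rng.random(), 0.0
                for label, prob in dist.items():   # inverse-CDF label sampling
                    acc += prob
                    if r < acc:
                        edges[(u, v)] = label
                        break
    return community, edges

truth, g = labeled_sbm(200, 0.05, 0.01,
                       labels_in={"+": 0.8, "-": 0.2},
                       labels_out={"+": 0.3, "-": 0.7})
```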


Proceedings ArticleDOI
23 Dec 2013
TL;DR: A new technique of physical-layer cryptography is proposed, based on using a massive MIMO channel as a key between the sender and the desired receiver; the key need not be secret, yet the scheme's security exceeds that of the most common encryption methods used today, such as RSA and Diffie-Hellman.
Abstract: We propose the new technique of physical-layer cryptography, based on using a massive MIMO channel as a key between the sender and the desired receiver, which need not be secret. The goal is low-complexity encoding and decoding by the desired transmitter-receiver pair, whereas decoding by an eavesdropper is prohibitively complex. The massive MIMO system has a channel gain matrix that is drawn i.i.d. according to a Gaussian distribution, subject to additive white Gaussian noise. The decoding complexity is analyzed by mapping the massive MIMO system to a lattice. We show that the eavesdropper's decoding problem for the MIMO system with M-PAM modulation is equivalent to solving standard lattice problems that are conjectured to be of exponential complexity for both classical and quantum computers. Hence, under the widely held conjecture that standard lattice problems are hard in the worst case, the proposed encryption scheme has security that exceeds that of the most common encryption methods used today, such as RSA and Diffie-Hellman. Additionally, we show that this scheme can be used to communicate securely without a pre-shared secret key and with little computational overhead. In particular, a standard parallel channel decomposition allows the desired transmitter-receiver pair to encode and decode transmissions over the MIMO channel based on the singular value decomposition of the channel, while decoding remains computationally hard for an eavesdropper with an independent channel gain matrix, even if it knows the channel gain matrix between the desired transmitter and receiver. Thus, the massive MIMO system provides low-complexity encryption commensurate with the most sophisticated forms of application-layer encryption by exploiting the physical-layer properties of the radio channel.

42 citations
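
The parallel channel decomposition mentioned in the abstract is the standard SVD-based scheme: precode on the right singular vectors of the channel and receive along the left singular vectors. A minimal numpy sketch of that baseline follows; the matrix size, noise level, and 4-PAM constellation are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64                                   # antennas per side (illustrative)
H = rng.standard_normal((M, M))          # i.i.d. Gaussian channel gain matrix
U, S, Vt = np.linalg.svd(H)              # H = U @ diag(S) @ Vt

s = rng.choice([-3.0, -1.0, 1.0, 3.0], size=M)   # 4-PAM symbols
x = Vt.T @ s                             # precode on right singular vectors
y = H @ x + 0.01 * rng.standard_normal(M)        # AWGN channel
s_hat = (U.T @ y) / S                    # receiver sees M parallel channels

print(np.abs(s_hat - s).max())           # residual noise, largest on weak modes
```

An eavesdropper with an independent gain matrix cannot apply this decomposition, which is the source of the complexity gap the abstract describes.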


Proceedings ArticleDOI
23 Dec 2013
TL;DR: The problem of secure transmission over a two-user multi-input multi-output (MIMO) X-channel is studied, in which channel state information is provided to both transmitters with one unit of delay (CSIT) and each receiver feeds back its channel output to a different transmitter.
Abstract: We investigate the problem of secure transmission over a two-user multi-input multi-output (MIMO) X-channel with noiseless local feedback and delayed channel state information (CSI) available at the transmitters. The transmitters are equipped with M antennas each, and the receivers are equipped with N antennas each. For this model, we characterize the optimal sum secure degrees of freedom (SDoF) region. We show that, in the presence of local feedback and delayed CSI, the sum SDoF region of the MIMO X-channel is the same as the SDoF region of a two-user MIMO BC with 2M antennas at the transmitter and N antennas at each receiver. This result shows that, given feedback and delayed CSI, there is no performance loss in sum SDoF due to the distributed nature of the transmitters. Next, we show that this result also holds if only global feedback is conveyed to the transmitters. We also study the case in which only local feedback is provided to the transmitters, i.e., without CSI, and derive a lower bound on the sum SDoF for this model. Furthermore, we specialize our results to the case in which there are no security constraints. In particular, similar to the setting with security constraints, we show that the optimal sum degrees of freedom (sum DoF) region of the (M, M, N, N)-MIMO X-channel is the same as the DoF region of a two-user MIMO BC with 2M antennas at the transmitter and N antennas at each receiver. We illustrate our results with some numerical examples.

34 citations


Proceedings ArticleDOI
23 Dec 2013
TL;DR: In this paper, deviation bounds for self-normalized averages and applications to estimation with a random number of observations are presented, relying on a peeling argument in exponential martingale techniques that represents an alternative to the method of mixture.
Abstract: We present deviation bounds for self-normalized averages and applications to estimation with a random number of observations. The results rely on a peeling argument in exponential martingale techniques that represents an alternative to the method of mixture. The motivating examples of bandit problems and context tree estimation are detailed.

33 citations


Proceedings ArticleDOI
23 Dec 2013
TL;DR: In this article, the authors derive fundamental limits for many linear and non-linear sparse signal processing models including group testing, quantized compressive sensing, multivariate regression and observations with missing features.
Abstract: In this work we derive fundamental limits for many linear and non-linear sparse signal processing models, including group testing, quantized compressive sensing, multivariate regression, and observations with missing features. In general, sparse signal processing problems can be characterized in terms of the following Markovian property. We are given a set of N variables X_1, X_2, ..., X_N, and there is an unknown subset of variables S ⊂ {1, 2, ..., N} that are relevant for predicting the outcomes/outputs Y. In other words, when Y is conditioned on {X_n}_{n∈S}, it is conditionally independent of the other variables {X_n}_{n∉S}. Our goal is to identify the set S from samples of the variables X and the associated outcomes Y. We characterize this problem as a version of the noisy channel coding problem. Using asymptotic information-theoretic analyses, we establish mutual information formulas that provide sufficient and necessary conditions on the number of samples required to successfully recover the salient variables. These mutual information expressions unify conditions for both linear and non-linear observations. We then compute sample complexity bounds for the aforementioned models, based on the mutual information expressions.

31 citations
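
The Markovian property above — Y depends on X only through the salient subset X_S — is easy to make concrete. Below is a minimal Python sketch of one of the listed models, a noisy group-testing style observation; the subset size, flip probability, and OR-type output are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 20, 3
S = rng.choice(N, size=k, replace=False)    # unknown salient subset

def observe(X, flip=0.05):
    """Noisy group-testing style output: Y depends on X only through X[S]
    (a Boolean OR of the salient variables), then flips with probability
    `flip` -- so Y is conditionally independent of the other variables."""
    y = int(X[S].any())
    return y ^ int(rng.random() < flip)

X = rng.integers(0, 2, size=N)              # one sample of the N variables
print(S, observe(X))                        # recovering S requires many samples
```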


Proceedings ArticleDOI
23 Dec 2013
TL;DR: It is shown that, as long as the number of users grows sublinearly with the coding block length, random coding with Feinstein's threshold decoding is sufficient to achieve the symmetric capacity of the Gaussian MnAC.
Abstract: This paper studies communication networks with a very large number of users simultaneously communicating with an access point. A new notion of many-access channel (MnAC) is introduced, which is the same as a multiaccess channel except that the number of users increases unboundedly with the coding block length. Unlike the conventional multiaccess channel with a fixed number of users, the joint typicality decoding technique is not directly applicable to establish the achievability of the capacity. It is shown that, as long as the number of users grows sublinearly with the coding block length, random coding with Feinstein's threshold decoding is sufficient to achieve the symmetric capacity of the Gaussian MnAC.

28 citations


Proceedings ArticleDOI
01 Sep 2013
TL;DR: The compound MIMO Gaussian wiretap channel is studied, where the channel to the legitimate receiver is known and the eavesdropper channel is not known to the transmitter but is known to have a bounded spectral norm (channel gain).
Abstract: The compound MIMO Gaussian wiretap channel is studied, where the channel to the legitimate receiver is known and the eavesdropper channel is not known to the transmitter but is known to have a bounded spectral norm (channel gain). The compound secrecy capacity is established without the degradedness assumption, and the optimal signaling is identified: the compound capacity equals the worst-case channel capacity, thus establishing the saddle-point property; the optimal signaling is Gaussian and along the eigenvectors of the legitimate channel; and the worst-case eavesdropper is isotropic. The eigenmode power allocation somewhat resembles standard water-filling but is not identical to it.
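
Since the abstract contrasts its allocation with standard water-filling, here is a sketch of that classical baseline over the legitimate channel's eigenmodes, with the water level found by bisection. This is the reference algorithm only, not the paper's modified allocation; the gains and power budget are illustrative.

```python
import numpy as np

def waterfill(gains, total_power, iters=60):
    """Standard water-filling: allocate p_i = max(mu - 1/g_i, 0) so that
    sum(p_i) = total_power, solving for the water level mu by bisection."""
    lo, hi = 0.0, total_power + 1.0 / min(gains)   # bracket for mu
    for _ in range(iters):
        mu = (lo + hi) / 2
        if np.maximum(mu - 1.0 / gains, 0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - 1.0 / gains, 0)

g = np.array([2.0, 1.0, 0.25])        # eigenvalues of the legitimate channel
print(waterfill(g, total_power=2.0))  # weak eigenmodes may get zero power
```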

Proceedings ArticleDOI
23 Dec 2013
TL;DR: It is shown that under MAP decoding, although the introduction of a list can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected for any finite list size.
Abstract: Motivated by the significant performance gains which polar codes experience when they are decoded with successive cancellation list decoders, we study how the scaling exponent changes as a function of the list size L. In particular, we fix the block error probability P_e and analyze the tradeoff between the blocklength N and the back-off from capacity, C − R, using scaling laws. By means of a Divide and Intersect procedure, we provide a lower bound on the error probability under MAP decoding with list size L for any binary-input memoryless output-symmetric channel and for any class of linear codes such that their minimum distance is unbounded as the blocklength grows large. We show that, although list decoding can significantly improve the involved constants, the scaling exponent itself, i.e., the speed at which capacity is approached, stays unaffected. This result applies in particular to polar codes, since their minimum distance tends to infinity as N increases. Some considerations are also pointed out for the genie-aided successive cancellation decoder when transmission takes place over the binary erasure channel.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: This paper shows how some of these ideal lattices can be constructed from polynomial codes (generalization of cyclic codes) via Construction A, and illustrates how these lattices enable multiplication.
Abstract: As a first step towards distributed computations in a wireless network, we introduce ideal lattices, that is lattices built over an ideal of a ring of integers in a number field, as a tool for constructing lattice codes at the physical layer. These lattices are not only additive groups as all lattices, but they are also equipped with a multiplication, which enables polynomial operations at each node of the wireless network. In this paper, we show how some of these ideal lattices can be constructed from polynomial codes (generalization of cyclic codes) via Construction A, and illustrate how these lattices enable multiplication.
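
Construction A, named in the abstract, lifts a linear code C over Z_q to the lattice Λ = {x ∈ Z^n : x mod q ∈ C}. The sketch below checks lattice membership for a tiny cyclic (hence polynomial) code over Z_3; the paper's actual construction works over rings of integers in number fields, so this integer-only example, with its choice of q and generator, is an illustrative simplification.

```python
import numpy as np
from itertools import product

# Construction A over the integers: Lambda = { x in Z^n : x mod q in C }.
# C is a small cyclic code over Z_q given by the cyclic shifts of a
# generator vector g; the values of q, n, g are illustrative.
q, n = 3, 4
g = np.array([1, 2, 0, 0])                        # generator (1 + 2x as a vector)
G = np.array([np.roll(g, i) for i in range(n)])   # rows: cyclic shifts of g

C = {tuple(np.dot(m, G) % q) for m in product(range(q), repeat=n)}

def in_lattice(x):
    """x is a lattice point iff its residue vector mod q is a codeword."""
    return tuple(np.asarray(x) % q) in C

print(in_lattice([1, 2, 0, 0]),      # True: g itself
      in_lattice([1 + q, 2, 0, 0]),  # True: shifting by q stays in the lattice
      in_lattice([1, 1, 0, 0]))      # False: residue is not a codeword
```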

Proceedings ArticleDOI
23 Dec 2013
TL;DR: In this paper, the likelihood encoder with a random codebook is demonstrated as an effective tool for source coding and shown to yield simple achievability proofs for known results, such as rate-distortion theory.
Abstract: The likelihood encoder with a random codebook is demonstrated as an effective tool for source coding. Coupled with a soft covering lemma (associated with channel resolvability), likelihood encoders yield simple achievability proofs for known results, such as rate-distortion theory. They also produce a tractable analysis for secure rate-distortion theory and strong coordination.
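
The likelihood encoder itself is simple to state: given a source sequence x and a random codebook, pick index m with probability proportional to the likelihood of x under codeword m. A minimal sketch, assuming a binary source and a memoryless bit-flipping "test channel" with illustrative parameters n, R, and p:

```python
import numpy as np

rng = np.random.default_rng(2)
n, R = 16, 0.5
M = int(2 ** (n * R))                       # codebook size
codebook = rng.integers(0, 2, size=(M, n))  # random i.i.d. codebook

def likelihood_encoder(x, p=0.1):
    """Choose index m with probability proportional to P(x | c_m), where the
    test channel flips each bit of c_m independently with probability p."""
    d = (codebook != x).sum(axis=1)         # Hamming distances to x
    logw = d * np.log(p) + (n - d) * np.log(1 - p)
    w = np.exp(logw - logw.max())           # numerically stabilized weights
    return rng.choice(M, p=w / w.sum())

x = rng.integers(0, 2, size=n)
print(likelihood_encoder(x))
```

The soft covering lemma is what guarantees that this randomized choice induces the desired joint statistics; the sketch only implements the encoder's sampling step.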

Proceedings ArticleDOI
23 Dec 2013
TL;DR: The stable throughput region of the two-user interference channel is investigated, and conditions are provided for the convexity/concavity of the stability region and for when a certain interference management strategy leads to a broader stability region.
Abstract: The stable throughput region of the two-user interference channel is investigated here. First, the stability region for the general case is characterized. Second, we study the cases where the receivers treat interference as noise or perform successive interference cancelation. Finally, we provide conditions for the convexity/concavity of the stability region and for when a certain interference management strategy leads to a broader stability region.
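
The two receiver strategies compared in the abstract correspond to standard single-letter rate expressions. A toy numeric sketch, with the received powers and unit noise power as illustrative assumptions:

```python
from math import log2

# Illustrative received powers at receiver 1 (noise power normalized to 1).
P1, P2 = 4.0, 2.0                     # desired and interfering signal powers
r_tin = log2(1 + P1 / (1 + P2))       # treat interference as noise

# Successive interference cancelation: decode user 2 first (treating user 1
# as noise), subtract it, then decode user 1 interference-free; feasible when
# user 2's rate is below log2(1 + P2 / (1 + P1)).
r_sic = log2(1 + P1)

print(f"TIN: {r_tin:.2f} b/s/Hz, SIC: {r_sic:.2f} b/s/Hz")
```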

Proceedings ArticleDOI
23 Dec 2013
TL;DR: The results imply that the capacity region of a state-dependent Gaussian Z-interference channel with mismatched transmitter-side state cognition and receiver-side state interference is strictly smaller than that of the corresponding channel without state, in contrast to Costa-type dirty channels, for which dirty paper coding achieves the capacity of the corresponding channels without state.
Abstract: A state-dependent Gaussian Z-interference channel model is investigated in the regime of high state power, in which transmitters 1 and 2 communicate with receivers 1 and 2, and only receiver 2 is interfered by transmitter 1's signal and a random state sequence. The state sequence is known noncausally only to transmitter 1, not to the corresponding transmitter 2. A layered coding scheme is designed for transmitter 1 to help interference cancelation at receiver 2 (using a cognitive dirty paper coding) and to transmit its own message to receiver 1. Inner and outer bounds are derived, and are further analyzed to characterize the boundary of the capacity region either fully or partially for all Gaussian channel parameters. Our results imply that the capacity region of such a channel with mismatched transmitter-side state cognition and receiver-side state interference is strictly smaller than that of the corresponding channel without state, in contrast to Costa-type dirty channels, for which dirty paper coding achieves the capacity of the corresponding channels without state.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: The goal of the stochastic threshold group testing scheme is to identify the set of d defective items with high probability via a “small” number of tests; in the regime l = o(d), schemes are presented that are computationally feasible to design and implement and that require a near-optimal number of tests.
Abstract: We formulate and analyze a stochastic threshold group testing problem motivated by biological applications. Here a set of n items contains a subset of d ≪ n defective items. Subsets (pools) of the n items are tested. The test outcomes are negative if the number of defectives in a pool is no larger than l; positive if the pool contains more than u defectives; and stochastic (negative/positive with some probability) if the number of defectives in the pool is in the interval (l, u]. The goal of our stochastic threshold group testing scheme is to identify the set of d defective items via a “small” number of such tests with high probability. In the regime that l = o(d), we present schemes that are computationally feasible to design and implement, and that require a near-optimal number of tests. Our schemes are robust to a variety of models for probabilistic threshold group testing.
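
The test model is easy to simulate. A minimal sketch of a single stochastic threshold test follows; the Bernoulli(1/2) behavior in the intermediate zone is just one possible model (the paper allows a variety), and the example pool and defective set are illustrative.

```python
import random

rng = random.Random(3)

def threshold_test(pool, defectives, l, u):
    """One stochastic threshold group test: negative (0) if the pool holds
    at most l defectives, positive (1) if more than u, and random -- here
    Bernoulli(1/2), one possible model -- for counts in between."""
    k = len(set(pool) & defectives)
    if k <= l:
        return 0
    if k > u:
        return 1
    return int(rng.random() < 0.5)

defectives = {2, 5, 11}
print(threshold_test(range(8), defectives, l=1, u=2))  # pool has 2 defectives
```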

Proceedings ArticleDOI
23 Dec 2013
TL;DR: A class of lattices built using Construction A, where the underlying code is a non-binary spatially-coupled low density parity check code, is considered, which are used for implementing a compute-and-forward protocol.
Abstract: We consider a class of lattices built using Construction A, where the underlying code is a non-binary spatially-coupled low density parity check code. We refer to these lattices as spatially-coupled LDA (SCLDA) lattices. SCLDA lattices can be constructed over integers, Gaussian integers and Eisenstein integers. We empirically study the performance of SCLDA lattices under belief propagation (BP) decoding. Ignoring the rate loss from termination, simulation results show that the BP threshold of SCLDA lattices over the integers is 0.11 dB from the Poltyrev limit (0.34 dB with the rate loss) and the BP threshold of SCLDA lattices over the Eisenstein integers is 0.08 dB from the Poltyrev limit (0.19 dB with the rate loss). Motivated by this result, we use SCLDA lattice codes over the Eisenstein integers for implementing a compute-and-forward protocol. For the examples considered in this paper, the thresholds for the proposed lattice codes are within 0.28 dB from the achievable rate of this coding scheme and within 1.06 dB from the achievable computation rate of Nazer and Gastpar's coding scheme in [6] extended to the Eisenstein integers.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: An algorithm for constructing Tanner graphs of non-binary irregular quasi-cyclic LDPC codes is introduced, which employs a new method for selecting edge labels that allows control over the code's non-binary ACE spectrum and results in a low error floor.
Abstract: An algorithm for constructing Tanner graphs of non-binary irregular quasi-cyclic LDPC codes is introduced. It employs a new method for the selection of edge labels, allowing control over the code's non-binary ACE spectrum and resulting in a low error floor. The efficiency of the algorithm is demonstrated by generating good codes of short to moderate length over small fields, outperforming codes generated by the known methods.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: This paper seeks to identify key ingredients of a successful design, critical and common limitations of most intra-session NC systems, and promising techniques and ideas to guide future models and research problems grounded in practical concerns.
Abstract: Network coding (NC) has attracted tremendous attention from the research community due to its potential to significantly improve networks' throughput, delay, and energy performance, as well as to simplify protocol design and naturally provide security support. The possibilities in code design have produced a large influx of new ideas and approaches to harness the power of NC. But which of these designs are truly successful in practice? And which will not live up to their promised theoretical gains due to real-world constraints? Without attempting a comprehensive view of all practical pitfalls, this paper seeks to identify key ingredients of a successful design, critical and common limitations of most intra-session NC systems, and promising techniques and ideas to guide future models and research problems grounded in practical concerns.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: The robust principles of treating interference as noise (TIN) when it is sufficiently weak, and avoiding it when it is not, form the background for this work, which combines TIN with the topological interference management (TIM) framework that identifies optimal interference avoidance schemes.
Abstract: The robust principles of treating interference as noise (TIN) when it is sufficiently weak, and avoiding it when it is not, form the background for this work. Combining TIN with the topological interference management (TIM) framework that identifies optimal interference avoidance schemes, a baseline TIM-TIN approach is proposed which decomposes a network into TIN and TIM components, allocates the signal power levels to each user in the TIN component, allocates signal vector space dimensions to each user in the TIM component, and guarantees that the product of the two is an achievable number of signal dimensions available to each user in the original network.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: In this article, a simple compress-and-forward scheme is employed, where the base-stations quantize the received signals and send the quantized signals to the CP using distributed Wyner-Ziv compression.
Abstract: This paper investigates an uplink multi-cell processing (MCP) model where the cell sites are linked to a central processor (CP) via noiseless backhaul links with limited capacity. A simple compress-and-forward scheme is employed, where the base-stations (BSs) quantize the received signals and send the quantized signals to the CP using distributed Wyner-Ziv compression. The CP decodes the quantization codewords first, then decodes the user messages as if the users and the CP form a virtual multiple-access channel. This paper formulates the problem of maximizing the overall sum rate under a sum backhaul constraint for such a setting. It is shown that setting the quantization noise levels to be uniform across the BSs maximizes the achievable sum rate at high signal-to-noise ratio (SNR). Further, for general SNR, a low-complexity fixed-point iteration algorithm is proposed to optimize the quantization noise levels. This paper further shows that with uniform quantization noise levels, the compress-and-forward scheme with Wyner-Ziv compression already achieves a sum rate that is within a constant gap to the sum capacity of the uplink MCP model. The gap depends linearly on the number of BSs in the network but is independent of the SNR and the channel matrix.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: In this paper, the authors show that if the relays quantize their received signals at a resolution decreasing with the number of nodes in the network, the additive gap to capacity can be made logarithmic in the total number of relays for a class of layered, time-varying wireless relay networks.
Abstract: Consider a Gaussian relay network where a number of sources communicate to a destination with the help of several layers of relays. Recent work has shown that a compress-and-forward based strategy at the relays can achieve the capacity of this network within an additive gap. In this strategy, the relays quantize their observations at the noise level and map them to a random Gaussian codebook. The resultant capacity gap is independent of the SNRs of the channels in the network but linear in the total number of nodes. In this paper, we show that if the relays quantize their signals at a resolution decreasing with the number of nodes in the network, the additive gap to capacity can be made logarithmic in the number of nodes for a class of layered, time-varying wireless relay networks. This suggests that the rule-of-thumb to quantize the received signals at the noise level, used for compress-and-forward in the current literature, can be highly suboptimal.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: The subprobability distribution that attains the infimum in the definition of the smooth Rényi entropy H_α^ε(p) of order α is completely characterized by using the notions of majorization and Schur convexity/concavity.
Abstract: This paper unveils a new connection between majorization theory and the smooth Rényi entropy of order α. We completely characterize the subprobability distribution that attains the infimum included in the definition of the smooth Rényi entropy H_α^ε(p) of order α by using the notions of majorization and Schur convexity/concavity, where p denotes a probability distribution on a discrete alphabet and ε ∈ [0, 1) is an arbitrarily given constant. We can apply the obtained result to the characterization of the asymptotic behavior of (1/n)H_α^ε(p) as n → ∞ for general sources satisfying the strong converse property.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: In this article, the problem of allocating radio channels to links in a wireless network is formulated as a generic linear bandit problem and analyzed in a stochastic setting where radio conditions are driven by a i.i.d. stochastically process.
Abstract: We consider the problem of allocating radio channels to links in a wireless network. Links interact through interference, modelled as a conflict graph (i.e., two interfering links cannot be simultaneously active on the same channel). We aim at identifying the channel allocation maximizing the total network throughput over a finite time horizon. Should we know the average radio conditions on each channel and on each link, an optimal allocation would be obtained by solving an Integer Linear Program (ILP). When radio conditions are unknown a priori, we look for a sequential channel allocation policy that converges to the optimal allocation while minimizing along the way the throughput loss, or regret, due to the need for exploring suboptimal allocations. We formulate this problem as a generic linear bandit problem, and analyze it in a stochastic setting, where radio conditions are driven by an i.i.d. stochastic process, and in an adversarial setting, where radio conditions can evolve arbitrarily. We provide, in both settings, algorithms whose regret upper bounds outperform those of existing algorithms.
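
To make the exploration/exploitation trade-off concrete, here is a heavily simplified sketch of the stochastic setting: a single link choosing among K channels with the classic UCB1 index. This ignores the conflict graph and joint allocation that the paper actually handles, and the mean rates, horizon, and Bernoulli rewards are illustrative assumptions.

```python
import math
import random

def ucb_channel_selection(mean_rates, horizon, seed=4):
    """Single-link toy model: channel k yields an i.i.d. Bernoulli(mean_rates[k])
    throughput each slot; the link picks the channel with the largest UCB1
    index (empirical mean plus exploration bonus)."""
    rng = random.Random(seed)
    K = len(mean_rates)
    counts, sums, total = [0] * K, [0.0] * K, 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            k = t - 1                          # play each channel once first
        else:
            k = max(range(K), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = float(rng.random() < mean_rates[k])
        counts[k] += 1
        sums[k] += r
        total += r
    return total, counts

print(ucb_channel_selection([0.3, 0.5, 0.7], horizon=5000))
```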

Proceedings ArticleDOI
01 Sep 2013
TL;DR: It is proved that the smooth min Rényi divergence can be used to prove a one-shot analogue of Stein's lemma, and a one-shot analogue of the covering lemma is proved using the smooth max Rényi divergence.
Abstract: One-shot analogues of various information theory results known in the asymptotic case are proven using smooth min and max Rényi divergences. In particular, we prove that the smooth min Rényi divergence can be used to prove a one-shot analogue of Stein's lemma. Using the smooth min Rényi divergence, we prove a special case of the packing lemma in the one-shot setting. Furthermore, we prove a one-shot analogue of the covering lemma using the smooth max Rényi divergence. We also propose a one-shot achievable rate for source coding under a maximum distortion criterion. This achievable rate is quantified in terms of the smooth max Rényi divergence.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: The competing goals of utility and privacy as they arise when a provider delegates the processing of its personal information to a recipient who is better able to handle this data are studied.
Abstract: We study the competing goals of utility and privacy as they arise when a provider delegates the processing of its personal information to a recipient who is better able to handle this data. We formulate our goals in terms of the inferences which can be drawn using the shared data. A whitelist describes the inferences that are desirable, i.e., providing utility. A blacklist describes the unwanted inferences which the provider wants to keep private. We formally define utility and privacy parameters using elementary information-theoretic notions and derive a bound on the region spanned by these parameters. We provide constructive schemes for achieving certain boundary points of this region. Finally, we improve the region by sharing data over aggregated time slots.
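
Since the utility and privacy parameters are defined with elementary information-theoretic notions, a toy computation shows the flavor: mutual information between the shared data and a whitelisted attribute (utility) versus a blacklisted one (leakage). The joint tables and variable names below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits for a joint distribution given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# Toy joint distributions of the shared data X with a whitelisted attribute W
# (desired inference) and a blacklisted attribute B (unwanted inference).
p_xw = np.array([[0.40, 0.10],
                 [0.10, 0.40]])
p_xb = np.array([[0.30, 0.20],
                 [0.20, 0.30]])
print(mutual_information(p_xw), mutual_information(p_xb))  # utility vs leakage
```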

Proceedings ArticleDOI
23 Dec 2013
TL;DR: It is shown that, in addition to superior iterative decoding thresholds, connected chain ensembles have better performance than single chainEnsembles of the same rate and length.
Abstract: The finite length performance of codes on graphs constructed by connecting spatially coupled low-density parity-check (SC-LDPC) code chains is analyzed. Successive (peeling) decoding is considered for the binary erasure channel (BEC). The evolution of the undecoded portion of the bipartite graph remaining after each iteration is analyzed as a dynamical system. It is shown that, in addition to superior iterative decoding thresholds, connected chain ensembles have better performance than single chain ensembles of the same rate and length.
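
The successive (peeling) decoder referenced in the abstract repeatedly finds a parity check with exactly one erased neighbor and solves for that bit. A minimal sketch on a toy code follows; the (7,4) Hamming-style check set stands in for the SC-LDPC chains the paper actually studies.

```python
def peel_bec(checks, received):
    """Peeling decoder on the BEC: `checks` lists the variable indices in
    each parity check; `received` holds bits, or None for erasures. While
    some check has exactly one erased neighbor, solve that bit by parity."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for chk in checks:
            erased = [v for v in chk if bits[v] is None]
            if len(erased) == 1:
                bits[erased[0]] = sum(bits[v] for v in chk
                                      if bits[v] is not None) % 2
                progress = True
    return bits

# Toy check set; the two erasures are recovered one after the other.
H = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
print(peel_bec(H, [0, 1, None, 0, 0, None, 1]))  # -> [0, 1, 1, 0, 0, 0, 1]
```

The paper's dynamical-system analysis tracks exactly how many such degree-one checks survive each iteration of this process on coupled chains.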

Proceedings ArticleDOI
23 Dec 2013
TL;DR: A sufficient condition on network coding functions is derived to guarantee that the edge removal property holds when the network is operated using functions satisfying the condition.
Abstract: In this paper, we investigate the impact of a single edge on the capacity region of a network of error-free, point-to-point links. A family of networks and edges is said to exhibit the “edge removal property” if for any network and edge in the family, removing a δ-capacity edge changes the capacity region by at most δ in each dimension. We derive a sufficient condition on network coding functions to guarantee that the edge removal property holds when the network is operated using functions satisfying the condition. Also, we extend the family of network capacity bounds for which it is known that removing a single edge of capacity δ changes the capacity bound by at most f(δ) in each dimension. Specifically, we show that removing a single δ-capacity edge changes the Generalized Network Sharing outer bound by at most δ in each dimension and the Linear Programming outer bound by at most a constant times δ in each dimension.

Proceedings ArticleDOI
23 Dec 2013
TL;DR: An alternative definition of the smooth Rényi entropy of order infinity is introduced, and a unified approach to represent the intrinsic randomness in terms of this information quantity is shown.
Abstract: The purpose of this paper is to establish a new unified method for random number generation from general sources. Specifically, we introduce an alternative definition of the smooth Rényi entropy of order infinity, and show a unified approach to representing the intrinsic randomness in terms of this information quantity. Our definition of the smooth Rényi entropy is easy to calculate for finite block lengths. We also represent the δ-intrinsic randomness and the strong converse property in terms of the smooth Rényi entropy.
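
As a concrete companion to the quantity discussed in the abstract, here is a sketch of computing a smooth Rényi entropy of order infinity for a finite distribution, under one common definition in which up to ε of the total mass may be trimmed to form a subprobability distribution; the optimizer caps the largest probabilities. The bisection, tolerance, and example distribution are illustrative, and the paper's alternative definition may differ in its details.

```python
import numpy as np

def smooth_min_entropy(p, eps, iters=60):
    """Smooth Rényi entropy of order infinity (min-entropy) in bits, under
    one common definition: maximize -log2(max q) over subprobability q
    obtained by trimming at most eps of mass from p. The optimum caps the
    largest masses, q_x = min(p_x, lam), with the smallest lam whose trimmed
    mass sum(max(p_x - lam, 0)) does not exceed eps (found by bisection)."""
    p = np.asarray(p, dtype=float)
    lo, hi = 0.0, p.max()
    for _ in range(iters):
        lam = (lo + hi) / 2
        if np.maximum(p - lam, 0).sum() > eps:
            lo = lam          # trimmed too much mass: raise the cap
        else:
            hi = lam          # feasible: try a lower cap
    return -np.log2(hi)

p = [0.5, 0.25, 0.125, 0.125]
print(smooth_min_entropy(p, 0.0), smooth_min_entropy(p, 0.2))  # 1.0, ~1.74
```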