Proceedings ArticleDOI

Bounds on the Tradeoff Between Decoding Complexity and Rate for Sparse-Graph Codes

24 Sep 2007-pp 196-201
TL;DR: A lower bound is derived on the per-bit decoding complexity of capacity-achieving sparse-graph codes as a function of their gap from channel capacity over the BSC: one of the node degree distributions must have a finite mean and an infinite variance.
Abstract: Khandekar and McEliece posed the problem of bounding the per-bit decoding complexity of capacity-achieving sparse-graph codes as a function of their gap from channel capacity. We consider this problem for the binary symmetric channel (BSC). We derive a lower bound on this complexity for some codes on graphs under belief propagation decoding. For bounded-degree LDPC and LDGM codes, any concatenation of the two, and punctured bounded-degree LDPC codes, this reduces to a lower bound of Ω(log(1/ε)), where ε is the gap to capacity. The proof of this result leads to an interesting necessary condition on the code structures that could achieve capacity with bounded decoding complexity over the BSC: the average edge-degree must diverge to infinity while the average node-degree remains bounded. That is, one of the node degree distributions must have a finite mean and an infinite variance.
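The finite-mean/infinite-variance condition can be made concrete with a toy example (not from the paper): a node-degree distribution P(k) ∝ k^(−α) with 2 < α ≤ 3 has a finite mean but an infinite variance. The function below is an illustrative sketch; the exponent 2.5 and the truncation points are arbitrary choices.

```python
# Illustrative sketch (not from the paper): a heavy-tailed node-degree
# distribution P(k) ∝ k^(-2.5) has a finite mean but an infinite variance,
# matching the necessary condition for bounded-complexity capacity-achieving
# codes over the BSC described above.

def truncated_moments(alpha, kmax):
    """Mean and second moment of P(k) ∝ k^(-alpha) truncated at k = kmax."""
    ks = range(1, kmax + 1)
    weights = [k ** (-alpha) for k in ks]
    z = sum(weights)
    mean = sum(k * w for k, w in zip(ks, weights)) / z
    second = sum(k * k * w for k, w in zip(ks, weights)) / z
    return mean, second

# As the truncation kmax grows, the mean converges, but the second moment
# (hence the variance) keeps growing without bound for alpha = 2.5.
for kmax in (10**2, 10**4, 10**6):
    mean, second = truncated_moments(2.5, kmax)
    print(kmax, round(mean, 4), round(second, 1))
```

The growing second moment under a stable mean is exactly the "finite mean, infinite variance" behavior the abstract identifies as necessary for bounded decoding complexity.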
Citations

Journal ArticleDOI
TL;DR: This paper surveys the current state of the art for wireless networks composed of energy harvesting nodes, from information-theoretic performance limits to transmission scheduling policies, resource allocation, medium access, and networking issues.
Abstract: This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes.

829 citations

Journal ArticleDOI
TL;DR: This paper models the required decoding power and investigates the minimization of total system power from two complementary perspectives, using new lower bounds on the complexity of message-passing decoding to show there is a fundamental tradeoff between transmit and decoding power.
Abstract: Traditional communication theory focuses on minimizing transmit power. However, communication links are increasingly operating at shorter ranges, where transmit power can be significantly smaller than the power consumed in decoding. This paper models the required decoding power and investigates the minimization of total system power from two complementary perspectives. First, an isolated point-to-point link is considered. Using new lower bounds on the complexity of message-passing decoding, lower bounds are derived on decoding power. These bounds show that 1) there is a fundamental tradeoff between transmit and decoding power; 2) unlike the implications of the traditional "waterfall" curve, which focuses on transmit power, the total power must diverge to infinity as the error probability goes to zero; 3) regular LDPC codes, rather than their known capacity-achieving irregular counterparts, can be shown to be power-order-optimal in some cases; and 4) the optimizing transmit power is bounded away from the Shannon limit. Second, we consider a collection of links. When systems both generate and face interference, coding allows a system to support a higher density of transmitter-receiver pairs (assuming interference is treated as noise). However, at low densities, uncoded transmission may be more power-efficient in some cases.
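The transmit/decoding power tradeoff in the first result can be illustrated with a deliberately simplified toy model (not the paper's model): let decoding power blow up as the gap to capacity shrinks, and sweep transmit power numerically. All constants and the 1/gap decoding-power form below are invented for illustration.

```python
import math

# Toy illustration (not the paper's model): total power = transmit + decoding.
# For an AWGN-style link, capacity C(P) = 0.5*log2(1 + P). We want rate R;
# the gap to capacity is C(P) - R. Decoding power is modeled, purely for
# illustration, as c / gap, reflecting bounds that force decoding power to
# blow up as the gap shrinks. All constants are invented.

R = 1.0                       # target rate (bits per channel use)
P_shannon = 2 ** (2 * R) - 1  # minimum transmit power for rate R (gap -> 0)
c = 0.05                      # invented decoding-power constant

def total_power(p_tx):
    gap = 0.5 * math.log2(1 + p_tx) - R
    if gap <= 0:
        return float("inf")   # cannot operate at or below the Shannon limit
    return p_tx + c / gap

# Sweep transmit power; the optimum sits strictly above the Shannon limit.
best = min((total_power(p), p) for p in (P_shannon + 0.001 * k for k in range(1, 5000)))
print("Shannon-limit transmit power:", P_shannon)
print("optimal transmit power:", round(best[1], 3), "total:", round(best[0], 3))
```

Even in this crude model, minimizing total power pushes the operating point away from the Shannon limit, mirroring result 4) above.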

159 citations

Proceedings ArticleDOI
01 Jul 2012
TL;DR: Fundamental information-theoretic bounds are provided on the circuit wiring complexity and power consumption required for encoding and decoding of error-correcting codes, showing that there is a fundamental tradeoff between transmit and encoding/decoding power, including for bounded transmit-power schemes.
Abstract: We provide fundamental information-theoretic bounds on the required circuit wiring complexity and power consumption for encoding and decoding of error-correcting codes. These bounds hold for all codes and all encoding and decoding algorithms implemented within the paradigm of our VLSI model. This model essentially views computation on a 2-D VLSI circuit as a computation on a network of connected nodes. The bounds are derived based on analyzing information flow in the circuit. They are then used to show that there is a fundamental tradeoff between the transmit and encoding/decoding power, and that the total (transmit + encoding + decoding) power must diverge to infinity at least as fast as the cube root of log(1/P_e), where P_e is the average block-error probability. On the other hand, for bounded transmit-power schemes, the total power must diverge to infinity at least as fast as the square root of log(1/P_e) due to the burden of encoding/decoding.

47 citations

Journal ArticleDOI
01 Jun 2018
TL;DR: This paper studies the joint uplink and downlink coverage of cellular-based ambient RF energy harvesting IoT, where the cellular network is assumed to be the only source of RF energy, and develops a dominant-BS-based approach to derive a tight approximation for the joint coverage probability.
Abstract: Ambient radio frequency (RF) energy harvesting has emerged as a promising solution for powering small devices and sensors in the massive Internet of Things (IoT) ecosystem due to its ubiquity and cost efficiency. In this paper, we study joint uplink and downlink coverage of cellular-based ambient RF energy harvesting IoT, where the cellular network is assumed to be the only source of RF energy. We consider a time division-based approach for power and information transmission in which each time slot is partitioned into three sub-slots: 1) a charging sub-slot during which the cellular base stations (BSs) act as RF chargers for the IoT devices, which then use the energy harvested in this sub-slot for information transmission and/or reception during the remaining two sub-slots; 2) a downlink sub-slot during which the IoT device receives information from the associated BS; and 3) an uplink sub-slot during which the IoT device transmits information to the associated BS. For this setup, we characterize the joint coverage probability, which is the joint probability of the events that the typical device harvests sufficient energy in the given time slot and is under both uplink and downlink signal-to-interference-plus-noise ratio (SINR) coverage with respect to its associated BS. This metric significantly generalizes the prior art on energy harvesting communications, which usually focused on downlink or uplink coverage separately. The key technical challenge is in handling the correlation between the amount of energy harvested in the charging sub-slot and the information signal quality (SINR) in the downlink and uplink sub-slots. A dominant-BS-based approach is developed to derive a tight approximation for this joint coverage probability. Several system design insights, including a comparison with a regularly powered IoT network and throughput-optimal slot partitioning, are also provided.

44 citations


Cites background from "Bounds on the Tradeoff Between Deco..."

  • ...energy consumption scales noticeably with the data rates due to increase in the length of decoder interconnects [23]–[25]....


References
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
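The (j, k)-regular structure described above can be sketched concretely. The construction below is a minimal illustration in the spirit of Gallager's stacked-band construction, not the exact procedure from the monograph; the function name is hypothetical, and refinements such as avoiding short cycles are omitted.

```python
import random

# Minimal sketch of a Gallager-style (j, k)-regular LDPC parity-check matrix:
# j ones per column and k ones per row, built from j stacked "bands". Each
# band has one 1 per column, placed via a random column permutation.

def regular_ldpc_matrix(n, j, k, seed=0):
    assert n % k == 0, "need k | n so each band has n/k rows of weight k"
    rng = random.Random(seed)
    rows_per_band = n // k
    rows = []
    for _ in range(j):
        perm = list(range(n))
        rng.shuffle(perm)                 # random column permutation
        band = [[0] * n for _ in range(rows_per_band)]
        for c in range(n):
            band[perm[c] // k][c] = 1     # column c checked by one row
        rows.extend(band)
    return rows

H = regular_ldpc_matrix(n=12, j=3, k=4)
assert all(sum(row) == 4 for row in H)                                   # row weight k
assert all(sum(H[r][c] for r in range(len(H))) == 3 for c in range(12))  # column weight j
```

Because each band is a permuted one-per-column layout, every column gets exactly one 1 per band (column weight j) and every band row collects exactly k columns (row weight k).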

11,592 citations

Book
01 Jan 1968
TL;DR: This book discusses Coding for Discrete Sources, Techniques for Coding and Decoding, and Source Coding with a Fidelity Criterion.
Abstract: Communication Systems and Information Theory. A Measure of Information. Coding for Discrete Sources. Discrete Memoryless Channels and Capacity. The Noisy-Channel Coding Theorem. Techniques for Coding and Decoding. Memoryless Channels with Discrete Time. Waveform Channels. Source Coding with a Fidelity Criterion. Index.

6,684 citations

Journal ArticleDOI
29 Jun 1997
TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal (1995)) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.

3,842 citations

Book
26 Sep 2014
TL;DR: This new edition presents unique discussions of information theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics.
Abstract: Csiszár and Körner's book is widely regarded as a classic in the field of information theory, providing deep insights and expert treatment of the key theoretical issues. It includes in-depth coverage of the mathematics of reliable information transmission, both in two-terminal and multi-terminal network scenarios. Updated and considerably expanded, this new edition presents unique discussions of information theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics. The presentations of all core subjects are self-contained, even the advanced topics, which helps readers to understand the important connections between seemingly different problems. Finally, 320 end-of-chapter problems, together with helpful solving hints, allow readers to develop a full command of the mathematical techniques. It is an ideal resource for graduate students and researchers in electrical and electronic engineering, computer science and applied mathematics.

3,404 citations

Journal ArticleDOI
TL;DR: The results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon.
Abstract: We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
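For the simplest instance of this method, the binary erasure channel (BEC), the "density" tracked by density evolution collapses to a single erasure probability, and the threshold can be computed in a few lines. The sketch below, with hypothetical helper names, finds the belief-propagation threshold of the (3,6)-regular LDPC ensemble, which is known to be about 0.4294.

```python
# Density evolution on the BEC for the (3,6)-regular LDPC ensemble:
# the erasure probability of variable-to-check messages evolves as
#   x_{l+1} = eps * (1 - (1 - x_l)**5)**2
# (lambda(x) = x^2 for degree-3 variable nodes, rho(x) = x^5 for degree-6
# checks). The BP threshold is the largest channel erasure rate eps for
# which x_l -> 0.

def bp_converges(eps, iters=2000, tol=1e-9):
    """True if density evolution drives the erasure rate to (near) zero."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** 5) ** 2
        if x < tol:
            return True
    return False

# Bisection for the threshold; the known value for (3,6) is about 0.4294.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if bp_converges(mid) else (lo, mid)
print("BP threshold ~", round(lo, 4))
```

For general binary-input memoryless channels the recursion runs on full message densities rather than a scalar, but the structure (iterate the ensemble-averaged update, find the largest channel parameter for which it converges) is the same.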

3,393 citations


"Bounds on the Tradeoff Between Deco..." refers background or methods in this paper

  • ...It is, nevertheless, interesting that our bound uses the structure of the belief propagation decoder....


  • ...Therefore, for the infinite length analysis, the average message (over the channel realizations) for any particular code in the ensemble is close to the average over the ensemble and the channel....


  • ...The concentration theorem [15] shows that the probability of error, that is, the average number of incorrect messages passed at the decoder for any particular code, concentrates around the average over the code ensemble....


  • ...We do not use the structure of the code for the proof, therefore the result holds for any sequence of codes, including those based on sparse-graph codes with cycles....
