Journal ArticleDOI

Edge-Cut Bounds on Network Coding Rates

01 Mar 2006-Journal of Network and Systems Management (Springer US)-Vol. 14, Iss: 1, pp 49-67
TL;DR: A new bound on communication rates is developed that applies to network coding, a promising active network application in which processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets.

Abstract: Active networks are network architectures with processors that are capable of executing code carried by the packets passing through them. A critical network management concern is the optimization of such networks, and tight bounds on their performance serve as useful design benchmarks. A new bound on communication rates is developed that applies to network coding, a promising active network application in which processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets. The bound generalizes an edge-cut bound on routing rates by progressively removing edges from the network graph and checking whether certain strengthened d-separation conditions are satisfied. The bound improves on the cut-set bound, and its efficacy is demonstrated by showing that routing is rate-optimal for some commonly cited examples in the networking literature.
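To make the XOR example concrete, the following is a minimal, self-contained Python sketch (the network and all names are illustrative, not taken from the paper) of the classic butterfly network: the bottleneck node forwards the XOR of the two source bits, and each sink recovers both bits, a rate that routing alone cannot sustain on unit-capacity edges.

```python
# Butterfly network: two source bits b1, b2 must both reach sinks t1, t2.
# Each edge carries one bit per use; the bottleneck edge is shared.

def butterfly_decode(b1: int, b2: int) -> bool:
    """One use of the butterfly network with XOR coding at the bottleneck."""
    coded = b1 ^ b2               # bottleneck node sends b1 XOR b2
    t1 = (b1, coded ^ b1)         # sink t1 hears b1 directly, derives b2
    t2 = (coded ^ b2, b2)         # sink t2 hears b2 directly, derives b1
    return t1 == (b1, b2) and t2 == (b1, b2)

assert all(butterfly_decode(b1, b2) for b1 in (0, 1) for b2 in (0, 1))
print("both sinks recover both source bits in every case")
```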


Citations
Proceedings ArticleDOI
07 Jul 2013
TL;DR: It is shown that the Generalized Network Sharing bound is equivalent to a functional dependence bound in the literature, and that the problem of computing the GNS bound is NP-complete, even for two-unicast networks.

Abstract: We consider sum-rate edge-cut bounds on network coding rates for the multiple unicast problem. We first show that the Generalized Network Sharing (GNS) bound is equivalent to a functional dependence bound in the literature. After defining a notion of profile of an edge-cut, we show that the only profiles for which every edge-cut with that profile leads to a fundamental bound on network coding rates are the so-called GNS profiles; further, we quantify, with a tight constant factor, the amount by which network coding can potentially beat edge-cuts associated with other profiles. Finally, we show that the problem of computing the GNS bound is NP-complete, even for two-unicast networks.
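As a rough illustration of the combinatorics behind such edge-cut bounds, the sketch below (in Python; the toy two-unicast network is invented, and the condition checked, namely that removing the edge set disconnects each source from its own sink, is only a necessary ingredient of the full GNS condition) enumerates all edge subsets of a small graph. The 2^|E| search space hints at why computing the tightest such bound is hard.

```python
from itertools import combinations

# Toy two-unicast network: a small DAG with (source, sink) pairs.
edges = [(1, 3), (2, 3), (3, 4), (4, 5), (4, 6)]
pairs = [(1, 5), (2, 6)]

def reachable(edge_set, s, t):
    """Depth-first reachability from s to t over directed edges."""
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        for a, b in edge_set:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return t in seen

# Enumerate every edge subset and keep those that cut all pairs.
cuts = [set(c) for r in range(1, len(edges) + 1)
        for c in combinations(edges, r)
        if all(not reachable(set(edges) - set(c), s, t) for s, t in pairs)]
print("smallest disconnecting edge set:", min(cuts, key=len))
```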

7 citations


Cites background from "Edge-Cut Bounds on Network Coding R..."

  • ...The works of [3], [9], [12] provide algorithms to check if their approach can deduce the fundamentality of a given edge-cut....

  • ...Connection of the GNS bound to the PdE bound [9] and the information dominance bound [3] will be explored in a future work....

  • ...they can potentially be beaten by network coding [9]....

  • ...the PdE bound [9], the Information Dominance bound [3], and the Functional Dependence bound [12]...

  • ...[9], [12] study bounds derived from functional dependence graphs and [3] studies bounds derived from information dominance....

Journal ArticleDOI
TL;DR: In this paper, the authors consider the energy savings obtained by employing network coding instead of plain routing in wireless multiple unicast problems, and establish lower bounds on the benefit of network coding, defined as the maximum over all configurations of the ratio of the minimum energy required by a routing solution to that required by a network coding solution.

Abstract: We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum, over all configurations, of the ratio of the minimum energy required by routing to that required by network coding. It is shown that if the coding and routing solutions use the same transmission range, the benefit in $d$-dimensional networks is at least $2d/\lfloor\sqrt{d}\rfloor$. Moreover, it is shown that if the transmission range can be optimized for routing and coding individually, the benefit in 2-dimensional networks is at least 3. Our results imply that codes following a decode-and-recombine strategy are not always optimal regarding energy efficiency.
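The stated bounds are easy to evaluate numerically. A quick Python check of the $2d/\lfloor\sqrt{d}\rfloor$ lower bound for small dimensions (a worked example, not code from the paper):

```python
from math import isqrt

# Lower bound on the energy benefit of network coding over routing in
# d-dimensional networks when both use the same transmission range.
for d in range(1, 7):
    print(f"d = {d}: benefit >= {2 * d / isqrt(d):.2f}")
# d = 2 gives 4 under a common transmission range; if the range is
# optimized separately for routing and coding, the paper's bound is 3.
```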

7 citations

Journal ArticleDOI
Stephen F. Bush
TL;DR: In this article, the authors provide a brief introduction to nanoscale and molecular networking and offer opinions on the role of active networking for in vivo nanoscale information transport.
Abstract: A safe and reliable in vivo nanoscale communication network will be of great benefit for medical diagnosis and monitoring as well as medical implant communication. This review article provides a brief introduction to nanoscale and molecular networking in general and provides opinions on the role of active networking for in vivo nanoscale information transport. While there are many in vivo communication mechanisms that can be leveraged, for example, forms of cell signaling, gap junctions, calcium and ion signaling, and circulatory borne communication, this review examines two in particular: molecular motor transport and neuronal information communication. Molecular motors transport molecules representing information and neural coding operates by means of the action potential; these mechanisms are reviewed within the theoretical framework of an active network. This review suggests that an active networking paradigm is necessary at the nanoscale along with a new communication constraint, namely, minimizing the communication impact upon the living environment. The goal is to assemble efficient nanoscale and molecular communication channels while minimizing disruption to the host organism.

7 citations

Proceedings ArticleDOI
01 Oct 2013
TL;DR: A new set of explicit network coding bounds, which combine different simple cuts of the network via a variety of set operations (not just the union), is established via connections to extremal inequalities for submodular functions.

Abstract: An explicit characterization of the capacity region of the general network coding problem is one of the best known open problems in information theory. A simple set of bounds that is often used in the literature to show that certain rate tuples are infeasible is based on the graph-theoretic notion of cut. The standard cut-set bounds, however, are known to be loose in general when there are multiple messages to be communicated in the network. This paper focuses on broadcast networks, for which the standard cut-set bounds are closely related to union as a specific set operation to combine different simple cuts of the network. A new set of explicit network coding bounds, which combine different simple cuts of the network via a variety of set operations (not just the union), is established via connections to extremal inequalities for submodular functions. The tightness of these bounds is demonstrated via applications to combination networks.
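To make the submodularity connection concrete, here is a small self-contained Python check (the graph is illustrative) that the cut-capacity function f(S), the total capacity of edges leaving a node set S, satisfies f(A) + f(B) >= f(A | B) + f(A & B) for all node sets, which is the extremal-inequality ingredient such bounds build on.

```python
from itertools import combinations

# Small directed graph with unit edge capacities (illustrative).
nodes = {1, 2, 3, 4}
edges = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}

def cut(S):
    """Capacity of the cut: number of edges leaving the node set S."""
    return sum(1 for u, v in edges if u in S and v not in S)

subsets = [set(c) for r in range(len(nodes) + 1)
           for c in combinations(nodes, r)]
assert all(cut(A) + cut(B) >= cut(A | B) + cut(A & B)
           for A in subsets for B in subsets)
print("the cut-capacity function is submodular on this graph")
```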

7 citations


Cites background from "Edge-Cut Bounds on Network Coding R..."

  • ...We mention here that our paper was partly motivated by an earlier work by Kramer and Savari [8], where the idea of combining the properties of Shannon entropy and the graph-theoretic notion of cut to obtain explicit network coding bounds was first explored....

  • ...For broadcast networks, however, the proposed network coding bounds (dubbed as the PdE bounds) [8] coincide with the standard cut-set bounds....

  • ...For non-broadcast networks, the union of several simple cuts may not give rise to a super cut that separates the collection of the source nodes from the collection of the sink nodes and hence may not lead to any network coding bounds [8]....

Journal ArticleDOI
S. Bush
TL;DR: From a medical standpoint, the use of current wireless techniques to communicate with implants is unacceptable for many reasons, including bulky size, inability to use magnetic resonance imaging after implantation, potential radiation damage, surgical invasiveness, need to recharge/replace power, post-operative pain and long recovery times, and reduced quality of life for the patient.
Abstract: Wireless ad hoc communication on the nanoscale will require thinking outside of the traditional radio spectrum. New applications will utilize new forms of wireless communication channels. For example, nanoscale communication will enable precise mechanisms for directly interacting with cells in vivo. Information may be sent to and from specific cells within the body, allowing detection and healing of diseases on the cellular scale. From a medical standpoint, the use of current wireless techniques to communicate with implants is unacceptable for many reasons, including bulky size, inability to use magnetic resonance imaging after implantation, potential radiation damage, surgical invasiveness, need to recharge/replace power, post-operative pain and long recovery times, and reduced quality of life for the patient. Better, more human in vivo implant communication is needed. Development of both biological and engineered nanomachines is progressing; such machines will need to communicate. Unfortunately, networking vast collections of nanoscale sensors and robots using current techniques, including wireless techniques, is not possible without communication mechanisms that exceed nanoscale volumes.

7 citations

References
Book
01 Jan 1991
TL;DR: The authors examine the roles of entropy, information inequalities, and randomness in the design and analysis of codes for data compression and communication.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations


Additional excerpts

  • ...7, we choose Ed = {(2, 3), (4, 3), (2, 5), (4, 5)}, Sd = {1, 2, 3}, [π(1), π(2), π(3)] = [3, 1, 2] and the resulting graph GEd is shown in Fig....

  • ...7 we choose Ed = {(3, 2), (3, 4), (5, 2), (5, 4)}, Sd = {2, 3}, [π(1), π(2)] = [2, 3]....

Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, and provides a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty—and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition—in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
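Since the citing paper's bound rests on strengthened d-separation conditions drawn from this reference, a compact sketch of the standard d-separation test may be useful. The implementation below is my own (using the usual ancestral-graph moralization argument, with illustrative node names), not code from either work; it checks whether node sets X and Y are d-separated by Z in a DAG.

```python
# d-separation: X and Y are d-separated by Z in a DAG exactly when they
# are disconnected in the moralized ancestral graph with Z removed.

def ancestors(dag, targets):
    """Return the targets plus every node with a directed path into them."""
    result, stack = set(targets), list(targets)
    while stack:
        v = stack.pop()
        for u in dag:
            if v in dag[u] and u not in result:
                result.add(u)
                stack.append(u)
    return result

def d_separated(dag, X, Y, Z):
    keep = ancestors(dag, X | Y | Z)            # ancestral subgraph
    und = {v: set() for v in keep}
    for v in keep:
        parents = {u for u in keep if v in dag[u]}
        for p in parents:                       # drop edge directions
            und[p].add(v); und[v].add(p)
        for p in parents:                       # moralize: marry co-parents
            und[p] |= parents - {p}
    for z in Z & keep:                          # remove conditioning nodes
        und.pop(z)
    for nbrs in und.values():
        nbrs -= Z
    frontier, seen = list(X), set(X)            # undirected reachability
    while frontier:
        for v in und.get(frontier.pop(), ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return not (seen & Y)

# Collider a -> c <- b: conditioning on c unblocks the path.
dag = {'a': {'c'}, 'b': {'c'}, 'c': set()}
print(d_separated(dag, {'a'}, {'b'}, set()))   # True: collider blocks
print(d_separated(dag, {'a'}, {'b'}, {'c'}))   # False: conditioning unblocks
```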

15,671 citations

Journal ArticleDOI
TL;DR: This work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated, and by employing coding at the nodes, which the work refers to as network coding, bandwidth can in general be saved.
Abstract: We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.
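The max-flow min-cut characterization described here is easy to verify computationally. The sketch below (a from-scratch Edmonds-Karp max-flow run on an illustrative butterfly graph; none of the names come from the paper) computes the single-source multicast capacity as the minimum over sinks of the source-to-sink max-flow, the rate that network coding achieves.

```python
from collections import deque
from copy import deepcopy

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity map."""
    cap, flow = deepcopy(capacity), 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:        # BFS for a residual path
            u = queue.popleft()
            for v, c in cap[u].items():
                if v not in parent and c > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                         # recover the s -> t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] = cap[v].get(u, 0) + push # residual back-edge
        flow += push

# Butterfly network with unit capacities: source 's', sinks 't1', 't2'.
G = {'s': {'a': 1, 'b': 1}, 'a': {'t1': 1, 'm': 1}, 'b': {'t2': 1, 'm': 1},
     'm': {'n': 1}, 'n': {'t1': 1, 't2': 1}, 't1': {}, 't2': {}}

# Multicast capacity = min over sinks of max-flow; coding attains it.
print(min(max_flow(G, 's', t) for t in ('t1', 't2')))   # prints 2
```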

8,533 citations


"Edge-Cut Bounds on Network Coding R..." refers background in this paper

  • ...For example, it is known that linear network coding is optimal for multicasting a single source in directed networks [1], [9]....

  • ...The terminals can further perform network coding [1], [9], i....

  • ...7, we choose Ed = {(2, 3), (4, 3), (2, 5), (4, 5)}, Sd = {1, 2, 3}, [π(1), π(2), π(3)] = [3, 1, 2] and the resulting graph GEd is shown in Fig....

  • ...Network coding has been intensely studied since [1] presented a novel coding scheme that attains a cut-set bound for multicasting in networks....

Book
01 Jan 1962
TL;DR: In this classic book, Ford and Fulkerson set the foundation for the study of network flow problems, developing powerful computational tools for solving and analyzing network flow models and furthering the understanding of linear programming.
Abstract: In this classic book, first published in 1962, L. R. Ford, Jr., and D. R. Fulkerson set the foundation for the study of network flow problems. The models and algorithms introduced in Flows in Networks are used widely today in the fields of transportation systems, manufacturing, inventory planning, image processing, and Internet traffic. The techniques presented by Ford and Fulkerson spurred the development of powerful computational tools for solving and analyzing network flow models, and also furthered the understanding of linear programming. In addition, the book helped illuminate and unify results in combinatorial mathematics while emphasizing proofs based on computationally efficient construction. Flows in Networks is rich with insights that remain relevant to current research in engineering, management, and other sciences. This landmark work belongs on the bookshelf of every researcher working with networks.

4,341 citations