Journal ArticleDOI

Edge-Cut Bounds on Network Coding Rates

01 Mar 2006-Journal of Network and Systems Management (Springer US)-Vol. 14, Iss: 1, pp 49-67
TL;DR: A new bound on communication rates is developed that applies to network coding, a promising active network application in which processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets.
Abstract: Active networks are network architectures with processors that are capable of executing code carried by the packets passing through them. A critical network management concern is the optimization of such networks and tight bounds on their performance serve as useful design benchmarks. A new bound on communication rates is developed that applies to network coding, which is a promising active network application that has processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets. The bound generalizes an edge-cut bound on routing rates by progressively removing edges from the network graph and checking whether certain strengthened d-separation conditions are satisfied. The bound improves on the cut-set bound and its efficacy is demonstrated by showing that routing is rate-optimal for some commonly cited examples in the networking literature.
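
To make the coding operation concrete, the sketch below walks the classic butterfly network, the standard example in this literature: a relay forwards the bit-wise XOR of two source packets, and each sink recovers both bits. Function and variable names are invented for illustration and are not taken from the paper.

```python
# Minimal illustration of XOR network coding on the butterfly network.
# Sources emit bits a and b; the bottleneck relay forwards a XOR b, and
# each sink combines the coded bit with the bit it receives directly.

def relay(a: int, b: int) -> int:
    """A coding node transmits a general function of its inputs: here, XOR."""
    return a ^ b

def decode(direct_bit: int, coded_bit: int) -> int:
    """A sink recovers the other source's bit by XOR-ing out its direct bit."""
    return direct_bit ^ coded_bit

for a in (0, 1):
    for b in (0, 1):
        coded = relay(a, b)
        # Sink 1 hears a directly plus the coded bit; sink 2 hears b.
        assert decode(a, coded) == b and decode(b, coded) == a
print("both sinks decode both source bits, a rate routing alone cannot sustain")
```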

Citations
Journal ArticleDOI
TL;DR: It is shown that, under the protocol model, when ns = Ω((log(n))^(1+α)) out of the n nodes each act as a source of independent information for a multicast group consisting of m randomly chosen destinations, the per-session capacity in the presence of network coding (NC) has a tight bound of Θ(√n/(ns√(m log(n)))) when m = O(n/log(n)) and Θ(1/ns) when m = Ω(n/log(n)).
Abstract: We consider a network with n nodes distributed uniformly in a unit square. We show that, under the protocol model, when ns = Ω((log(n))^(1+α)) out of the n nodes each act as a source of independent information for a multicast group consisting of m randomly chosen destinations, the per-session capacity in the presence of network coding (NC) has a tight bound of Θ(√n/(ns√(m log(n)))) when m = O(n/log(n)) and Θ(1/ns) when m = Ω(n/log(n)). In the case of the physical model, we consider ns = n and show that the per-session capacity has a tight bound of Θ(1/√(mn)) when m = O(n/(log(n))^3) and Θ(1/n) when m = Ω(n/log(n)). Prior work has shown that these same order bounds are achievable using only traditional store-and-forward methods; consequently, our work implies that the network coding gain is bounded by a constant for all values of m. For the physical model, the one exception to this conclusion is the intermediate range where m = Ω(n/(log(n))^3) and m = O(n/log(n)); in this range, the network coding gain is bounded by O((log(n))^(1/2)).
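
As a quick numerical sanity check on these order bounds (constants ignored, parameter values invented for illustration), the sketch below evaluates both regimes and shows that they agree at the boundary m ≈ n/log(n):

```python
import math

n = 10**6                    # number of nodes (hypothetical)
ns = int(math.log(n) ** 2)   # number of sources, satisfying ns = Ω((log n)^(1+α))

def small_m(n, ns, m):
    """Θ(√n / (ns·√(m·log n))) regime, valid for m = O(n/log n)."""
    return math.sqrt(n) / (ns * math.sqrt(m * math.log(n)))

def large_m(ns):
    """Θ(1/ns) regime, valid for m = Ω(n/log n)."""
    return 1.0 / ns

for m in (10, 1000, int(n / math.log(n))):
    print(f"m={m:>6}: small-m order {small_m(n, ns, m):.2e}, "
          f"large-m order {large_m(ns):.2e}")
# At m = n/log(n) the first expression reduces to 1/ns, so the two
# tight bounds meet at the regime boundary.
```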

14 citations


Cites background from "Edge-Cut Bounds on Network Coding R..."

  • ...Subsequent studies [22], [23] have shown that the (vertex) cut-set bounds are not tight and improved bounds can be obtained by employing more sophisticated edge-cuts....

  • ...Studies such as those summarized above [20], [22], [23] do not readily capture the geometric constraints of multihop communication in wireless ad-hoc networks....

Journal ArticleDOI
TL;DR: The notion of irreducible sets, which characterize implied functional dependence in communication networks, is introduced; the resulting bounds are found to be the best among the known graph-theoretic bounds for networks with correlated sources and for networks with independent sources.
Abstract: Explicit characterization of the capacity region of communication networks is a long-standing problem. While it is known that network coding can outperform routing and replication, the set of feasible rates is not known in general. Characterizing the network coding capacity region requires the determination of the set of all entropic vectors. Furthermore, computing the explicitly known linear programming bound is infeasible in practice due to an exponential growth in complexity as a function of network size. This paper focuses on the fundamental problems of characterization and computation of outer bounds for multi-source multi-sink networks. Starting from the known local functional dependence induced by the communication network, we introduce the notion of irreducible sets, which characterize implied functional dependence. We provide recursions for the computation of all maximal irreducible sets. These sets act as information-theoretic bottlenecks, and provide an easily computable outer bound for networks with correlated sources. We extend the notion of irreducible sets (and resulting outer bound) for networks with independent sources. We compare our bounds with existing bounds in the literature. We find that our new bounds are the best among the known graph theoretic bounds for networks with correlated sources and for networks with independent sources.
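
As a rough illustration of the kind of computation involved, the sketch below runs a fixpoint closure over local functional dependencies ("these variables determine that one"), which is the primitive that implied-functional-dependence arguments build on. The rule set is a toy example under assumed definitions, not the paper's recursion for maximal irreducible sets.

```python
# Fixpoint closure over local functional dependencies: starting from a
# candidate set of known variables, repeatedly fire every rule whose
# inputs are already determined.

def closure(start: frozenset, rules: list[tuple[frozenset, str]]) -> frozenset:
    known, changed = set(start), True
    while changed:
        changed = False
        for inputs, output in rules:
            if inputs <= known and output not in known:
                known.add(output)
                changed = True
    return frozenset(known)

# Toy network: edge messages e1, e2 are functions of their node's inputs,
# and a sink constraint says e2 must reveal source X1.
rules = [
    (frozenset({"X1"}), "e1"),        # e1 is a function of X1
    (frozenset({"X2", "e1"}), "e2"),  # a coding node combines X2 with e1
    (frozenset({"e2"}), "X1"),        # decoding constraint at the sink
]
print(sorted(closure(frozenset({"X2", "e1"}), rules)))  # adds e2, then X1
```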

14 citations


Cites background or methods or result from "Edge-Cut Bounds on Network Coding R..."

  • ...Unlike the case when sources are correlated, the problem of characterizing graphical bounds for networks with independent sources has been well investigated [2], [16]–[19], [30]....

  • ...We will review and compare these bounds, such as cut-set bound [16], network sharing bound [17] and progressive d-separating edge-set bound [19] in Section IV....

  • ...In Section IV, we compare our new bounds with previously known results: cut-set bound [16], network sharing bound [17], the notion of information dominance [18] and progressive d-separating edge-set bound [19]....

  • ...4The definition of a functional dependence graph used here is different from that defined in Section III, see [19]....

  • ...In [19] the authors describe a procedure to determine whether a given set of edges bounds the capacity of the given network....

Proceedings ArticleDOI
06 Jul 2008
TL;DR: It is shown that the network coding rate can be a Θ(|V|) multiplicative factor smaller than meagerness, which is a known upper bound on network coding rates for directed k-pairs networks.
Abstract: We consider network coding rates for directed and undirected k-pairs networks. For directed networks, meagerness is known to be an upper bound on network coding rates. We show that the network coding rate can be a Θ(|V|) multiplicative factor smaller than meagerness. For the undirected case, we show some progress in the direction of the k-pairs conjecture.
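
For intuition, meagerness can be brute-forced on a toy network. The sketch below uses one common formulation, with unit capacities: a cut's value is its size divided by the number of source-sink pairs it simultaneously disconnects, and meagerness is the minimum value over all cuts. The graph, pairs, and exact definition are assumptions for illustration and may differ in detail from the paper's variant.

```python
from itertools import combinations

def reachable(edges, src):
    """Nodes reachable from src in the directed graph given by `edges`."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

edges = {(0, 2), (1, 2), (2, 3), (3, 4), (3, 5)}  # sessions share edge (2, 3)
pairs = [(0, 4), (1, 5)]                           # k = 2 source-sink pairs

best = float("inf")
for r in range(1, len(edges) + 1):
    for cut in combinations(edges, r):
        remaining = edges - set(cut)
        cut_pairs = [p for p in pairs if p[1] not in reachable(remaining, p[0])]
        if cut_pairs:
            best = min(best, len(cut) / len(cut_pairs))
print("meagerness:", best)  # 0.5: one shared edge must serve both sessions
```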

14 citations

Journal ArticleDOI
TL;DR: A new set of explicit network coding bounds, which combine different simple cuts of the network via a variety of set operations (not just the union), is established via connections to extremal inequalities for submodular functions.
Abstract: An explicit characterization of the capacity region of the general network coding problem is one of the best known open problems in information theory. A simple set of bounds that is often used in the literature to show that certain rate tuples are infeasible is based on the graph-theoretic notion of cut. The standard cut-set bounds, however, are known to be loose in general when there are multiple messages to be communicated in the network. This paper focuses on broadcast networks, for which the standard cut-set bounds are closely related to union as a specific set operation to combine different simple cuts of the network. A new set of explicit network coding bounds, which combine different simple cuts of the network via a variety of set operations (not just the union), is established via connections to extremal inequalities for submodular functions. The tightness of these bounds is demonstrated via applications to combination networks.
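
The submodularity driving these extremal inequalities is easy to check numerically. The sketch below verifies f(A) + f(B) >= f(A ∪ B) + f(A ∩ B) for the directed cut function f(S) = number of edges leaving S, over all vertex subsets of a small graph (the graph itself is illustrative only).

```python
from itertools import combinations

nodes = {0, 1, 2, 3}
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 0)]

def f(S: set) -> int:
    """Directed cut function: number of edges from S to its complement."""
    return sum(1 for u, v in edges if u in S and v not in S)

subsets = [set(c) for r in range(len(nodes) + 1)
           for c in combinations(nodes, r)]
assert all(f(A) + f(B) >= f(A | B) + f(A & B)
           for A in subsets for B in subsets)
print("the cut function is submodular on this graph")
```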

13 citations


Cites background or result from "Edge-Cut Bounds on Network Coding R..."

  • ...For broadcast networks, however, the proposed network coding bounds (dubbed as the PdE bounds) [8], [9] coincide with the standard cutset bounds....

  • ...It is also worth mentioning that the generalized cut-set bounds proposed in this paper (as well as the PdE bounds [8], [9]) are a special case of the LP bounds by Yeung [2, Ch....

  • ...For non-broadcast networks, the union of several simple cuts may not give rise to a super cut that separates the collection of the source nodes from the collection of the sink nodes and hence may not lead to any network coding bounds [8], [9]....

  • ...We mention here that our paper was partly motivated by an earlier work by Kramer and Savari [8] and Kramer et al....

References
Book
01 Jan 1991
TL;DR: The authors examine the role of entropy, inequality, and randomness in the design and construction of codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations


Additional excerpts

  • ...7, we choose Ed = {(2, 3), (4, 3), (2, 5), (4, 5)}, Sd = {1, 2, 3}, [π(1), π(2), π(3)] = [3, 1, 2] and the resulting graph GEd is shown in Fig....

  • ...7 we choose Ed = {(3, 2), (3, 4), (5, 2), (5, 4)}, Sd = {2, 3}, [π(1), π(2)] = [2, 3]....

Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty—and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition—in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
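
Because the main paper's bound strengthens d-separation conditions drawn from this belief-network literature, a compact d-separation test may be a useful companion. The sketch below implements the standard ancestral-moralization criterion (restrict to ancestors, marry co-parents, drop edge directions, delete the conditioning set, test connectivity); the DAG is an invented collider example.

```python
from itertools import combinations

def ancestors(dag, targets):
    """Targets plus all their ancestors in the DAG (edge list of (u, v))."""
    result, frontier = set(targets), list(targets)
    while frontier:
        v = frontier.pop()
        for u, w in dag:
            if w == v and u not in result:
                result.add(u)
                frontier.append(u)
    return result

def d_separated(dag, X, Y, Z):
    keep = ancestors(dag, X | Y | Z)
    sub = [(u, v) for u, v in dag if u in keep and v in keep]
    und = {frozenset(e) for e in sub}
    for v in keep:  # moralize: connect every pair of co-parents
        parents = [u for u, w in sub if w == v]
        und |= {frozenset(p) for p in combinations(parents, 2)}
    live = [e - Z for e in und]  # delete the conditioning set Z
    reach, stack = set(X) - Z, list(set(X) - Z)
    while stack:  # undirected reachability from X
        v = stack.pop()
        for e in live:
            if v in e:
                for w in e - {v}:
                    if w not in reach:
                        reach.add(w)
                        stack.append(w)
    return reach.isdisjoint(Y)

dag = [("a", "c"), ("b", "c"), ("c", "d")]
print(d_separated(dag, {"a"}, {"b"}, set()))  # True: collider at c blocks
print(d_separated(dag, {"a"}, {"b"}, {"d"}))  # False: conditioning on d opens it
```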

15,671 citations

Journal ArticleDOI
TL;DR: This work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated, and by employing coding at the nodes, which the work refers to as network coding, bandwidth can in general be saved.
Abstract: We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.
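
The single-source result is easy to check on the butterfly network: the achievable multicast rate equals the smallest source-to-sink max-flow. A sketch using networkx (graph layout and node names are the usual textbook ones, chosen here for illustration):

```python
import networkx as nx

# Unit-capacity butterfly: source s, relays a-d, sinks t1 and t2.
G = nx.DiGraph()
for u, v in [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t2"),
             ("a", "c"), ("b", "c"), ("c", "d"), ("d", "t1"), ("d", "t2")]:
    G.add_edge(u, v, capacity=1)

# Max-flow min-cut theorem for network information flow: the multicast
# capacity is the minimum over sinks of the s->sink max-flow value.
capacity = min(nx.maximum_flow(G, "s", t)[0] for t in ("t1", "t2"))
print("multicast capacity:", capacity)  # 2, achievable with XOR coding at c
```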

8,533 citations


"Edge-Cut Bounds on Network Coding R..." refers background in this paper

  • ...For example, it is known that linear network coding is optimal for multicasting a single source in directed networks [1], [9]....

  • ...The terminals can further perform network coding [1], [9], i....

  • ...7, we choose Ed = {(2, 3), (4, 3), (2, 5), (4, 5)}, Sd = {1, 2, 3}, [π(1), π(2), π(3)] = [3, 1, 2] and the resulting graph GEd is shown in Fig....

  • ...Network coding has been intensely studied since [1] presented a novel coding scheme that attains a cut-set bound for multicasting in networks....

Book
01 Jan 1962
TL;DR: Ford and Fulkerson set the foundation for the study of network flow problems, developed powerful computational tools for solving and analyzing network flow models, and furthered the understanding of linear programming.
Abstract: In this classic book, first published in 1962, L. R. Ford, Jr., and D. R. Fulkerson set the foundation for the study of network flow problems. The models and algorithms introduced in Flows in Networks are used widely today in the fields of transportation systems, manufacturing, inventory planning, image processing, and Internet traffic. The techniques presented by Ford and Fulkerson spurred the development of powerful computational tools for solving and analyzing network flow models, and also furthered the understanding of linear programming. In addition, the book helped illuminate and unify results in combinatorial mathematics while emphasizing proofs based on computationally efficient construction. Flows in Networks is rich with insights that remain relevant to current research in engineering, management, and other sciences. This landmark work belongs on the bookshelf of every researcher working with networks.
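
A compact augmenting-path max-flow in the spirit of Ford and Fulkerson, using breadth-first path selection (the Edmonds-Karp refinement); the example network and its capacities are made up for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    """Repeatedly find an augmenting path by BFS and push its bottleneck."""
    flow = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path remains: flow is maximum
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:  # push flow and update residual capacities
            cap[u][v] -= push
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + push
        flow += push

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(cap, "s", "t"))  # 5, matching the capacity of the minimum cut
```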

4,341 citations