Journal ArticleDOI

Edge-Cut Bounds on Network Coding Rates

01 Mar 2006-Journal of Network and Systems Management (Springer US)-Vol. 14, Iss: 1, pp 49-67
TL;DR: A new bound on communication rates is developed that applies to network coding, which is a promising active network application that has processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets.
Abstract: Active networks are network architectures with processors that are capable of executing code carried by the packets passing through them. A critical network management concern is the optimization of such networks and tight bounds on their performance serve as useful design benchmarks. A new bound on communication rates is developed that applies to network coding, which is a promising active network application that has processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets. The bound generalizes an edge-cut bound on routing rates by progressively removing edges from the network graph and checking whether certain strengthened d-separation conditions are satisfied. The bound improves on the cut-set bound and its efficacy is demonstrated by showing that routing is rate-optimal for some commonly cited examples in the networking literature.
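
The classic butterfly network, a standard example in this literature (not reproduced from the paper itself), makes the XOR idea concrete. A minimal Python sketch, with the relay structure abstracted away:

```python
# Classic butterfly example: two sources each hold one bit, and both sinks want
# both bits. Each edge carries one bit per network use. Routing alone cannot
# serve both sinks in one use; XOR coding at the single bottleneck edge can.

def butterfly(b1: int, b2: int):
    """Deliver bits b1 and b2 to both sinks using one XOR-coded bottleneck bit."""
    coded = b1 ^ b2           # the coded packet: a bit-wise XOR of received packets
    sink1 = (b1, b1 ^ coded)  # sink 1 hears b1 directly, recovers b2 = b1 XOR coded
    sink2 = (b2 ^ coded, b2)  # sink 2 hears b2 directly, recovers b1 = b2 XOR coded
    return sink1, sink2

assert butterfly(1, 0) == ((1, 0), (1, 0))   # both sinks obtain (b1, b2)
```

One coded bit on the bottleneck edge serves both sinks simultaneously, which is exactly what routing (copying packets) cannot do here.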


Citations
Proceedings ArticleDOI
29 Jun 2012
TL;DR: In this article, reduced FDGs are obtained from the original FDG by removing nodes that are not "essential"; the reduced FDGs give the same capacity region/bounds as the original FDGs but require much less computation.
Abstract: Functional dependence graphs (FDGs) are an important class of directed graphs that capture the functional dependence relationships among a set of random variables. FDGs are frequently used in characterizing and calculating network coding capacity bounds. However, the order of an FDG is usually much larger than that of the original network, and the complexity of computing bounds grows exponentially with the order of the FDG. In this paper, we introduce graph pre-processing techniques which deliver reduced FDGs. These reduced FDGs are obtained from the original FDG by removing nodes that are not “essential”. We show that the reduced FDGs give the same capacity region/bounds obtained using the original FDGs, but require much less computation. The application of reduced FDGs to the algebraic formulation of scalar linear network coding is also discussed.
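
As a toy illustration of this kind of pre-processing (the paper's precise notion of an "essential" node is not reproduced in this summary), the sketch below removes FDG nodes that lie on no directed source-to-sink path; the graph, names, and reduction rule are illustrative assumptions only:

```python
# Hypothetical illustration of FDG pre-processing: drop nodes that lie on no
# directed path from a source to a sink. This is only one plausible notion of
# "non-essential"; the paper's actual reduction rules are not reproduced here.

def reachable(graph, starts):
    """Nodes reachable from `starts` in `graph` (dict: node -> successor list)."""
    seen, stack = set(starts), list(starts)
    while stack:
        for v in graph.get(stack.pop(), ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def reduce_fdg(graph, sources, sinks):
    """Keep only nodes reachable from a source AND co-reachable from a sink."""
    reverse = {u: [] for u in graph}
    for u, vs in graph.items():
        for v in vs:
            reverse.setdefault(v, []).append(u)
    keep = reachable(graph, sources) & reachable(reverse, sinks)
    return {u: [v for v in vs if v in keep] for u, vs in graph.items() if u in keep}

fdg = {"s": ["a"], "a": ["t"], "b": ["a"], "c": [], "t": []}
print(reduce_fdg(fdg, {"s"}, {"t"}))   # {'s': ['a'], 'a': ['t'], 't': []}
```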

5 citations


Cites background from "Edge-Cut Bounds on Network Coding Rates"

  • ...Besides the progressive d-separating edge-set bound [3] and the functional dependence bound [5] which are obtained using FDGs, many other capacity bounds e....


  • ...Variants of FDGs are used in [3] and [5] to characterize computable outer bounds on multi-session network coding capacity....


Posted Content
TL;DR: In this article, the authors use entropy functions to characterise the set of rate-capacity tuples achievable with either zero decoding error or vanishing decoding error for general network coding problems.
Abstract: In this paper, we use entropy functions to characterise the set of rate-capacity tuples achievable with either zero decoding error, or vanishing decoding error, for general network coding problems. We show that when sources are colocated, the outer bound obtained by Yeung, A First Course in Information Theory, Section 15.5 (2002) is tight and the sets of zero-error achievable and vanishing-error achievable rate-capacity tuples are the same. We also characterise the set of zero-error and vanishing-error achievable rate capacity tuples for network coding problems subject to linear encoding constraints, routing constraints (where some or all nodes can only perform routing) and secrecy constraints. Finally, we show that even for apparently simple networks, design of optimal codes may be difficult. In particular, we prove that for the incremental multicast problem and for the single-source secure network coding problem, characterisation of the achievable set is very hard and linear network codes may not be optimal.
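
An entropy function in this sense maps every subset of a collection of random variables to its joint entropy. A minimal sketch computing this "entropy vector" for a toy distribution with Z = X XOR Y (the distribution and names are illustrative, not from the paper):

```python
# Entropy function of (X, Y, Z): the map from each subset of the variables to
# its joint Shannon entropy, computed from an explicit joint pmf.
from itertools import combinations
from math import log2

# Joint pmf over (X, Y, Z) with X, Y independent fair bits and Z = X XOR Y.
pmf = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}

def entropy_of_subset(pmf, idx):
    """H of the marginal over the coordinate positions in `idx`."""
    marginal = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idx)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marginal.values() if p > 0)

names = "XYZ"
for r in range(1, 4):
    for idx in combinations(range(3), r):
        print("H(" + ",".join(names[i] for i in idx) + ") =",
              entropy_of_subset(pmf, idx))
# Every singleton has entropy 1; every pair and the triple have entropy 2,
# since any two of X, Y, Z determine the third.
```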

5 citations

Journal ArticleDOI
Abstract: Record EPFL-ARTICLE-171957, doi:10.1155/2010/359475. View record in Web of Science; record created 2011-12-16, modified 2016-08-09.

5 citations

Posted Content
TL;DR: New techniques are developed which allow us to upper bound the network coding gap for the makespan of $k$ unicasts, proving this gap is at most polylogarithmic in $k$.
Abstract: We study network coding gaps for the problem of makespan minimization of multiple unicasts. In this problem, distinct packets at different nodes in a network need to be delivered to a destination specific to each packet, as fast as possible. The network coding gap specifies how much coding packets together in a network can help compared to the more natural approach of routing. While makespan minimization using routing has been intensely studied for the multiple unicasts problem, no bounds on network coding gaps for this problem are known. We develop new techniques which allow us to upper bound the network coding gap for the makespan of $k$ unicasts, proving this gap is at most polylogarithmic in $k$. Complementing this result, we show there exist instances of $k$ unicasts for which this coding gap is polylogarithmic in $k$. Our results also hold for average completion time, and more generally any $\ell_p$ norm of completion times.

5 citations


Cites background from "Edge-Cut Bounds on Network Coding Rates"

  • ...This conjecture was proven true for numerous classes of instances [20, 21, 31]....


Journal ArticleDOI
TL;DR: It is demonstrated that, under channel or traffic symmetry, the edge-cut bound upper-bounds general information rates, thus providing a capacity approximation result.
Abstract: The problem of designing near-optimal strategies for multiple unicast traffic in wireline networks is wide open; however, channel symmetry or traffic symmetry can be leveraged to show that routing can achieve within a polylogarithmic approximation factor of the edge-cut bound. For the same problem, the edge-cut bound is known to only upper-bound rates of routing flows; unlike the information-theoretic cut-set bound, it does not upper-bound (capacity-achieving) information rates with general strategies. In this paper, we demonstrate that under channel or traffic symmetry, the edge-cut bound upper-bounds general information rates, thus providing a capacity approximation result. The key technique is a combinatorial result relating edge-cut bounds to generalized network sharing bounds. Finally, we generalize the results to wireless networks via an intermediary class of combinatorial graphs known as polymatroidal networks. Our main result is that a natural architecture separating the physical and networking layers is near optimal when the traffic is symmetric among source-destination pairs, even when the channel is asymmetric (due to asymmetric power constraints, or prior frequency allocation like frequency-division duplexing). This result is complementary to an earlier work of two of the authors proving a similar result under channel symmetry.
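
The edge-cut bound referred to here has a simple combinatorial form: if removing an edge set disconnects every source from its own destination, then the sum of the sessions' routing rates is at most the total capacity of the removed edges. A minimal sketch on a hypothetical two-session network:

```python
# Sketch of the edge-cut bound for multiple unicasts (illustrative notation):
# if removing edge set `cut` disconnects every source s_i from its own
# destination t_i, then the sum of session rates is at most the cut's capacity.

def connected(edges, s, t):
    """BFS over directed `edges` (set of (u, v) pairs): is t reachable from s?"""
    seen, frontier = {s}, [s]
    while frontier:
        u = frontier.pop()
        for (a, b) in edges:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return t in seen

def edge_cut_bound(capacity, pairs, cut):
    """Total capacity of `cut` if it separates every (s_i, t_i); else None."""
    remaining = set(capacity) - set(cut)
    if any(connected(remaining, s, t) for s, t in pairs):
        return None            # not a valid edge cut for these sessions
    return sum(capacity[e] for e in cut)

# Two unit-capacity sessions forced through one shared middle edge (u, v):
capacity = {("s1", "u"): 1, ("s2", "u"): 1, ("u", "v"): 1,
            ("v", "t1"): 1, ("v", "t2"): 1}
pairs = [("s1", "t1"), ("s2", "t2")]
print(edge_cut_bound(capacity, pairs, [("u", "v")]))   # 1, so R1 + R2 <= 1
```

Without symmetry this quantity bounds only routing rates; the abstract's point is that channel or traffic symmetry lets the same quantity bound general coding rates as well.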

5 citations

References
Book
01 Jan 1991
TL;DR: The authors examine the roles of entropy, information inequalities, and randomness in the design and construction of codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations


Additional excerpts

  • ...7, we choose E_d = {(2, 3), (4, 3), (2, 5), (4, 5)}, S_d = {1, 2, 3}, [π(1), π(2), π(3)] = [3, 1, 2] and the resulting graph G_{E_d} is shown in Fig....


  • ...7 we choose E_d = {(3, 2), (3, 4), (5, 2), (5, 4)}, S_d = {2, 3}, [π(1), π(2)] = [2, 3]....


Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, and provides a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
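
Since d-separation, central both to Pearl's book and to the strengthened d-separation conditions in the edge-cut bound above, is purely graph-theoretic, it can be checked by a reachability pass. Below is a compact sketch of the textbook "Bayes-ball"-style test on a DAG given as a parents dictionary (an illustration of the standard algorithm, not code from either work):

```python
def d_separated(parents, x, y, z):
    """True if x and y are d-separated given evidence set z in the DAG
    described by `parents` (dict: node -> list of parents)."""
    children = {u: [] for u in parents}
    for u, ps in parents.items():
        for p in ps:
            children.setdefault(p, []).append(u)
    # Evidence nodes and their ancestors: colliders in this set are active.
    anc, stack = set(), list(z)
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(parents.get(v, []))
    # Ball-passing reachability over (node, direction) states.
    visited, frontier, reach = set(), [(x, "up")], set()
    while frontier:
        v, d = frontier.pop()
        if (v, d) in visited:
            continue
        visited.add((v, d))
        if v not in z:
            reach.add(v)
        if d == "up" and v not in z:
            frontier += [(p, "up") for p in parents.get(v, [])]
            frontier += [(c, "down") for c in children.get(v, [])]
        elif d == "down":
            if v not in z:
                frontier += [(c, "down") for c in children.get(v, [])]
            if v in anc:   # collider with an observed descendant: path opens
                frontier += [(p, "up") for p in parents.get(v, [])]
    return y not in reach

parents = {"a": [], "b": [], "c": ["a", "b"]}   # collider a -> c <- b
print(d_separated(parents, "a", "b", set()))    # True: blocked while c is unobserved
print(d_separated(parents, "a", "b", {"c"}))    # False: conditioning on c opens it
```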

15,671 citations

Journal ArticleDOI
TL;DR: This work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated; rather, by employing coding at the nodes, which the work refers to as network coding, bandwidth can in general be saved.
Abstract: We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.
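
The max-flow min-cut characterization can be checked numerically: for a single multicast source, the achievable coding rate equals the smallest source-to-sink max-flow. A sketch on the unit-capacity butterfly graph using networkx (node names are illustrative):

```python
# Single-source multicast capacity = min over sinks of max-flow(source, sink),
# per the max-flow min-cut theorem for network information flow. Checked here
# on the butterfly graph, where XOR coding at node m attains the value.
import networkx as nx

G = nx.DiGraph()
unit_edges = [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t2"),
              ("a", "m"), ("b", "m"), ("m", "n"), ("n", "t1"), ("n", "t2")]
G.add_edges_from(unit_edges, capacity=1)

print(min(nx.maximum_flow_value(G, "s", t) for t in ("t1", "t2")))  # 2
```

Routing alone cannot sustain rate 2 here; the gap to the min-cut value is the bandwidth saving the abstract describes.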

8,533 citations


"Edge-Cut Bounds on Network Coding R..." refers background in this paper

  • ...For example, it is known that linear network coding is optimal for multicasting a single source in directed networks [1], [9]....


  • ...The terminals can further perform network coding [1], [9], i....


  • ...7, we choose E_d = {(2, 3), (4, 3), (2, 5), (4, 5)}, S_d = {1, 2, 3}, [π(1), π(2), π(3)] = [3, 1, 2] and the resulting graph G_{E_d} is shown in Fig....


  • ...Network coding has been intensely studied since [1] presented a novel coding scheme that attains a cut-set bound for multicasting in networks....


Book
01 Jan 1962
TL;DR: In this classic book, Ford and Fulkerson set the foundation for the study of network flow problems, developed powerful computational tools for solving and analyzing network flow models, and furthered the understanding of linear programming.
Abstract: In this classic book, first published in 1962, L. R. Ford, Jr., and D. R. Fulkerson set the foundation for the study of network flow problems. The models and algorithms introduced in Flows in Networks are used widely today in the fields of transportation systems, manufacturing, inventory planning, image processing, and Internet traffic. The techniques presented by Ford and Fulkerson spurred the development of powerful computational tools for solving and analyzing network flow models, and also furthered the understanding of linear programming. In addition, the book helped illuminate and unify results in combinatorial mathematics while emphasizing proofs based on computationally efficient construction. Flows in Networks is rich with insights that remain relevant to current research in engineering, management, and other sciences. This landmark work belongs on the bookshelf of every researcher working with networks.
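
Ford and Fulkerson's labeling method augments flow along residual paths until none remain. A self-contained sketch using BFS to select shortest augmenting paths (the refinement later analyzed by Edmonds and Karp; the example network is illustrative):

```python
# Augmenting-path max-flow in the spirit of Ford and Fulkerson's labeling
# method, with BFS path selection (the Edmonds-Karp variant).
from collections import deque

def max_flow(cap, s, t):
    """cap: dict (u, v) -> capacity. Returns the value of a maximum s-t flow."""
    res = dict(cap)                      # residual capacities
    for (u, v) in cap:
        res.setdefault((v, u), 0)        # reverse edges start at 0
    nodes = {u for e in res for u in e}
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        prev, q = {s: None}, deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v in nodes:
                if v not in prev and res.get((u, v), 0) > 0:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return flow                  # no augmenting path: flow is maximum
        # Recover the path, push the bottleneck amount, update residuals.
        path, v = [], t
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        push = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        flow += push

cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 3, ("a", "b"): 1}
print(max_flow(cap, "s", "t"))           # 5
```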

4,341 citations