Journal ArticleDOI

Edge-Cut Bounds on Network Coding Rates

01 Mar 2006-Journal of Network and Systems Management (Springer US)-Vol. 14, Iss: 1, pp 49-67
TL;DR: A new bound on communication rates is developed that applies to network coding, which is a promising active network application that has processors transmit packets that are general functions, for example a bit-wise XOR of selected received packets.
Abstract: Active networks are network architectures with processors that are capable of executing code carried by the packets passing through them. A critical network management concern is the optimization of such networks and tight bounds on their performance serve as useful design benchmarks. A new bound on communication rates is developed that applies to network coding, which is a promising active network application that has processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets. The bound generalizes an edge-cut bound on routing rates by progressively removing edges from the network graph and checking whether certain strengthened d-separation conditions are satisfied. The bound improves on the cut-set bound and its efficacy is demonstrated by showing that routing is rate-optimal for some commonly cited examples in the networking literature.
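The bound's outer loop can be pictured with a minimal Python sketch. The graph, capacities, and source-sink pairs below are hypothetical toy data, and plain graph reachability stands in for the paper's strengthened d-separation conditions, so this is an illustration of the progressive edge-removal idea rather than the paper's actual test.

    from collections import defaultdict, deque

    def reachable(edges, src):
        # Nodes reachable from src along directed edges (BFS).
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
        seen, queue = {src}, deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    def edge_cut_rate_bound(edges, capacity, removed, pairs):
        # If deleting `removed` disconnects every source-sink pair
        # (simple reachability here, standing in for the strengthened
        # d-separation conditions), the removed edges' total capacity
        # upper-bounds the sum of the pairs' communication rates.
        remaining = [e for e in edges if e not in removed]
        for s, t in pairs:
            if t in reachable(remaining, s):
                return None  # not a valid cut for these pairs
        return sum(capacity[e] for e in removed)

    # Hypothetical example: two unicast sessions sharing one middle edge.
    edges = [(1, 3), (2, 3), (3, 4), (4, 5), (4, 6)]
    capacity = {e: 1.0 for e in edges}
    pairs = [(1, 5), (2, 6)]
    print(edge_cut_rate_bound(edges, capacity, {(3, 4)}, pairs))  # 1.0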


Citations
01 Jan 2013
TL;DR: The focus of this thesis is the design of capacity-achieving network codes realizable by modern signal processing circuits, using Arıkan's polarization theory of random variables to provide insight into information-theoretic concepts such as random binning, superposition coding, and Marton's construction.
Abstract: Author(s): Goela, Naveen | Advisor(s): Gastpar, Michael | Abstract: Communication over unreliable, interfering networks is one of the current challenges in engineering. For point-to-point channels, Shannon established capacity results in 1948, and it took more than forty years to find coded systems approaching the capacity limit with feasible complexity. Significant research efforts have gone into extending Shannon's capacity results to networks, with many partial successes. By contrast, the development of low-complexity codes for networks has received limited attention to date. The focus of this thesis is the design of capacity-achieving network codes realizable by modern signal processing circuits. For classes of networks, the following codes have been invented on the foundation of algebraic structure and probability theory: (i) Broadcast codes which achieve multi-user rates on the capacity boundary of several types of broadcast channels. The codes utilize Arıkan's polarization theory of random variables, providing insight into information-theoretic concepts such as random binning, superposition coding, and Marton's construction. Reproducible experiments over block lengths n = 512, 1024, 2048 corroborate the theory; (ii) A network code which achieves the computing capacities of a countably infinite class of simple noiseless interfering networks. The code separates a network into irreducible parallel sub-networks and applies a new vector-space function alignment scheme inspired by the concept of interference alignment for channel communications. New bounds are developed to tighten the standard cut-set bound for multi-casting functions. As an additional example of low-complexity codes, reduced-dimension linear transforms and convex optimization methods are proposed for the lossy transmission of correlated sources across noisy networks. Surprisingly, simple uncoded or one-shot strategies achieve a performance which is exactly optimal in certain networks, or close to optimal in the low signal-to-noise regime relevant for sensor networks.
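For readers unfamiliar with the polarization machinery the thesis builds on, the following sketch computes Arıkan's polar transform, the n-fold Kronecker power of the kernel F = [[1, 0], [1, 1]] applied over GF(2) via the standard butterfly recursion. It illustrates only the transform itself, not the thesis's broadcast or function-alignment codes.

    import numpy as np

    def polar_transform(u):
        # x = u F^(kron n) over GF(2), via the butterfly recursion.
        x = np.array(u, dtype=np.uint8) % 2
        n = len(x)
        assert n and (n & (n - 1)) == 0, "length must be a power of two"
        step = 1
        while step < n:
            for i in range(0, n, 2 * step):
                # (u1, u2) -> (u1 XOR u2, u2) on each butterfly pair
                x[i:i + step] ^= x[i + step:i + 2 * step]
            step *= 2
        return x

    print(polar_transform([1, 0, 1, 1]))  # [1 1 0 1]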

1 citation

01 Jan 2015
TL;DR: A novel delivery scheme, Heterogeneous Coded Delivery (HCD), is designed that builds on a prior scheme for the uniform demand case but performs better in the non-uniform demand case, and it is evaluated for different caching policies.
Abstract: Author(s): Ramakrishnan, Abinesh | Advisor(s): Markopoulou, Athina | Abstract: In this thesis, we study intersession coding for multiple unicasts in wired and wireless network settings. In particular, we apply alignment techniques and investigate the effect of the structure of the transfer matrix on their performance. In addition, we also look at the coded caching problem and propose an efficient delivery scheme that outperforms the state of the art. The thesis is divided into three parts. In the first part, we consider the problem of network coding across three unicast sessions over a directed acyclic graph, where each unicast session has a min-cut of 1. We consider a network model in which the middle of the network can only perform random linear network coding. We adapt the interference alignment technique, originally developed for the wireless interference channel, to construct a precoding-based linear scheme, which we refer to as precoding-based network alignment (PBNA). The primary difference between this setting and the wireless interference channel is that the network topology can introduce dependencies among the elements of the transfer matrix and can potentially affect the achievable rate of PBNA. We identify all these dependencies and interpret them in terms of network topology. We also show that, depending on the network topology, the optimal symmetric rate achieved by any precoding-based linear scheme can take only three possible values, all of which can be achieved by PBNA. In the second part, we consider the interference channel with $K$ transmitters and $K$ receivers all having a single antenna, wherein the $K \times K$ transfer matrix representing this channel has rank $D$ (less than $K$). The degrees of freedom of such channels are not known, as the rank-deficient transfer matrix creates algebraic dependencies between the channel coefficients. We present a modified version of the alignment scheme to handle these dependencies while aligning interference, and derive sufficient conditions for achieving half rate per user using this scheme. We show the difficulties in proving these sufficient conditions for $K=4$ and $K=5$, and we also show that these sufficient conditions are not satisfied for $K \ge 6$. Finally, we study the coded caching problem: a network with several users trying to access a database of files stored at a server through a shared bottleneck link is considered. Each user is equipped with a cache, where files can be prefetched according to a caching policy, which is mainly based on the popularities of the files. Coded caching tries to exploit coding opportunities created by cooperative caching and has been shown to significantly reduce the load on the shared link. Most prior work focused on optimizing the caching policy so as to minimize this expected load. Given the caching policy and the user demands, the problem of minimizing the load over the shared link is essentially an index coding problem. In this part of the thesis, we design a novel delivery scheme, Heterogeneous Coded Delivery (HCD), that builds on a prior scheme for the uniform demand case but performs better in the non-uniform demand case. We evaluate this delivery scheme for different caching policies.
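The coded-caching setting of the last part is easiest to see in its smallest instance, the two-user case of the kind of uniform-demand scheme HCD builds on (the classic construction of Maddah-Ali and Niesen). The sketch below uses made-up two-byte file halves and is not the HCD scheme itself.

    # Two users, two files A and B, each split into halves.
    # User 1 caches (A1, B1); user 2 caches (A2, B2).
    # Demands: user 1 wants A (missing A2), user 2 wants B (missing B1).
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    A1, A2 = b"AA", b"aa"   # hypothetical file halves
    B1, B2 = b"BB", b"bb"
    broadcast = xor(A2, B1)           # one coded transmission serves both
    assert xor(broadcast, B1) == A2   # user 1 cancels cached B1 -> A2
    assert xor(broadcast, A2) == B1   # user 2 cancels cached A2 -> B1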

Cites background from "Edge-Cut Bounds on Network Coding R..."

  • ...Capacity outer bounds have been proposed based on the generalized edge cut condition of the underlying graph and the associated information-theoretic arguments, including fundamental regions in the entropy space [20], entropy calculus [29], the network-sharing bound [19], the information dominance condition [7], and the edge-cut bounds [30]....


Journal ArticleDOI
TL;DR: This letter shows a routing scheme achieving the partition bound for a class of 3-layer networks, thus establishing an explicit information capacity expression and proving the conjecture for this class of networks.
Abstract: An important unsolved conjecture in network coding theory states that network coding has no rate benefit over routing in undirected unicast networks. Recently, a first non-trivial information-theoretic bound, called the partition bound, was characterized for the symmetric rate in undirected unicast networks, and it was shown that the bound is achievable for two classes of networks by a routing scheme. In this letter, we focus on the problem of characterizing the networks for which the partition bound is tight. In particular, we consider layered undirected unicast networks. We show a routing scheme achieving the partition bound for a class of 3-layer networks, thus establishing an explicit information capacity expression and proving the conjecture for this class of networks. We also show that there exists a 4-layer network for which the partition bound cannot be achieved by an optimal routing scheme.
Book ChapterDOI
TL;DR: In this chapter, it is shown that if the network topology is downward dominated, then the achievability of a given combination of source signals and channel capacities implies the existence of a feasible multicommodity flow.
Abstract: Information does not generally behave like a conservative fluid flow in communication networks with multiple sources and sinks. However, it is often conceptually and practically useful to be able to associate separate data streams with each source–sink pair, with only routing and no coding performed at the network nodes. This raises the question of whether there is a nontrivial class of network topologies for which achievability is always equivalent to ‘routability’, for any combination of source signals and positive channel capacities. This chapter considers possibly cyclic, directed, errorless networks with n source–sink pairs and mutually independent source signals. The concept of downward dominance is introduced and it is shown that, if the network topology is downward dominated, then the achievability of a given combination of source signals and channel capacities implies the existence of a feasible multicommodity flow.
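Routability itself can be checked as a multicommodity-flow feasibility LP. The sketch below uses an edge-path formulation on a hypothetical two-commodity network with the paths enumerated by hand; realistic instances would use an edge-flow formulation or column generation.

    from scipy.optimize import linprog

    # Toy network: edges e0=(s1,m), e1=(s2,m), e2=(m,t1), e3=(m,t2),
    # all of capacity 1. Commodity 1 (s1->t1) has the single path
    # [e0, e2]; commodity 2 (s2->t2) has the single path [e1, e3].
    paths = {1: [[0, 2]], 2: [[1, 3]]}
    cap = [1.0, 1.0, 1.0, 1.0]
    demand = {1: 0.7, 2: 0.8}

    vars_ = [(k, i) for k in paths for i in range(len(paths[k]))]
    # Capacity constraints: total flow on each edge <= its capacity.
    A_ub = [[1.0 if e in paths[k][i] else 0.0 for (k, i) in vars_]
            for e in range(len(cap))]
    # Demand constraints: each commodity's path flows sum to its demand.
    A_eq = [[1.0 if k == kk else 0.0 for (kk, i) in vars_] for k in paths]
    b_eq = [demand[k] for k in paths]
    res = linprog([0.0] * len(vars_), A_ub=A_ub, b_ub=cap,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    print("routable" if res.status == 0 else "not routable")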
Journal ArticleDOI
05 Mar 2015
TL;DR: It is proved that the reduced FDGs give the same network coding capacity region/bounds as the original FDGs while requiring much less computation.
Abstract: A functional dependence graph (FDG) is an important class of directed graph that captures the functional dependence relationships of a set of random variables; it is frequently used in characterising network coding capacity bounds. Since the computational complexity of such bounds usually grows exponentially with the order of the FDG, it is desirable to find the smallest FDG possible. To this end, some systematic graph reduction techniques are introduced in this study. The first reduction technique is performed on the original networks, where ‘non-essential’ edges are identified and eliminated. This is equivalent to node reduction in the corresponding FDG. In addition, the authors show that certain edges in the FDG may also be removed without affecting the functional dependence relationships of the random variables. The removal of these edges may create new opportunities to reduce the order of the FDG. It is proved that the reduced FDGs give the same network coding capacity region/bounds as the original FDGs while requiring much less computation.
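To give a flavor of this kind of reduction, here is an illustrative sketch of one elementary splice, not the authors' algorithm: a variable that is a function of a single other variable can be removed, since by composition its children depend directly on that variable, leaving the functional dependence relationships intact.

    def splice_single_parent_nodes(fdg, keep):
        # fdg maps each non-source variable to the set of variables it
        # is a function of; `keep` protects sources/sinks from removal.
        fdg = {v: set(ps) for v, ps in fdg.items()}
        changed = True
        while changed:
            changed = False
            for v in list(fdg):
                if v in keep or len(fdg[v]) != 1:
                    continue
                (p,) = fdg[v]
                for qs in fdg.values():     # rewire children of v to p
                    if v in qs:
                        qs.discard(v)
                        qs.add(p)
                del fdg[v]
                changed = True
        return fdg

    # Hypothetical FDG: X2 = f(X1), X3 = g(X1, X2); keep X1 and X3.
    fdg = {"X2": {"X1"}, "X3": {"X1", "X2"}}
    print(splice_single_parent_nodes(fdg, keep={"X1", "X3"}))
    # {'X3': {'X1'}}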
References
Book
01 Jan 1991
TL;DR: The authors examine the role of entropy, inequalities, and randomness in the design and construction of codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.

45,034 citations


Additional excerpts

  • ...7, we choose $E_d = \{(2, 3), (4, 3), (2, 5), (4, 5)\}$, $S_d = \{1, 2, 3\}$, $[\pi(1), \pi(2), \pi(3)] = [3, 1, 2]$ and the resulting graph $G_{E_d}$ is shown in Fig....


  • ...7 we choose $E_d = \{(3, 2), (3, 4), (5, 2), (5, 4)\}$, $S_d = \{2, 3\}$, $[\pi(1), \pi(2)] = [2, 3]$....


Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition — in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
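As a minimal illustration of the evidence propagation that belief networks mechanize, here is a two-node disease-test network updated with Bayes' rule; all numbers are made up for illustration.

    p_d = 0.01              # prior P(disease)
    p_t_given_d = 0.95      # sensitivity P(test+ | disease)
    p_t_given_not_d = 0.05  # false-positive rate P(test+ | no disease)

    p_t = p_t_given_d * p_d + p_t_given_not_d * (1 - p_d)
    p_d_given_t = p_t_given_d * p_d / p_t   # Bayes' rule
    print(f"P(disease | test+) = {p_d_given_t:.3f}")  # 0.161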

15,671 citations

Journal ArticleDOI
TL;DR: This work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated; by employing coding at the nodes, which the work refers to as network coding, bandwidth can in general be saved.
Abstract: We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.
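The finding is usually illustrated with the butterfly network, where a single XOR at the bottleneck node lets two sinks each recover both source bits in one network use, which routing alone cannot do. A minimal sketch (toy bits, not the paper's own notation):

    b1, b2 = 1, 0          # the two source bits to be multicast
    mid = b1 ^ b2          # coded packet on the bottleneck edge
    # Sink 1 receives b1 directly plus `mid` via the bottleneck:
    assert (b1, mid ^ b1) == (1, 0)   # recovers (b1, b2)
    # Sink 2 receives b2 directly plus `mid` via the bottleneck:
    assert (b2, mid ^ b2) == (0, 1)   # recovers (b2, b1)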

8,533 citations


"Edge-Cut Bounds on Network Coding R..." refers background in this paper

  • ...For example, it is known that linear network coding is optimal for multicasting a single source in directed networks [1], [9]....


  • ...The terminals can further perform network coding [1], [9], i....


  • ...7, we choose $E_d = \{(2, 3), (4, 3), (2, 5), (4, 5)\}$, $S_d = \{1, 2, 3\}$, $[\pi(1), \pi(2), \pi(3)] = [3, 1, 2]$ and the resulting graph $G_{E_d}$ is shown in Fig....


  • ...Network coding has been intensely studied since [1] presented a novel coding scheme that attains a cut-set bound for multicasting in networks....


Book
01 Jan 1962
TL;DR: Ford and Fulkerson as mentioned in this paper set the foundation for the study of network flow problems and developed powerful computational tools for solving and analyzing network flow models, and also furthered the understanding of linear programming.
Abstract: In this classic book, first published in 1962, L. R. Ford, Jr., and D. R. Fulkerson set the foundation for the study of network flow problems. The models and algorithms introduced in Flows in Networks are used widely today in the fields of transportation systems, manufacturing, inventory planning, image processing, and Internet traffic. The techniques presented by Ford and Fulkerson spurred the development of powerful computational tools for solving and analyzing network flow models, and also furthered the understanding of linear programming. In addition, the book helped illuminate and unify results in combinatorial mathematics while emphasizing proofs based on computationally efficient construction. Flows in Networks is rich with insights that remain relevant to current research in engineering, management, and other sciences. This landmark work belongs on the bookshelf of every researcher working with networks.
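A minimal sketch of the max-flow computation at the heart of the book, using the shortest-augmenting-path (Edmonds-Karp) refinement that postdates the 1962 text; the capacity map is a hypothetical toy example.

    from collections import defaultdict, deque

    def max_flow(cap, s, t):
        # Repeatedly push flow along shortest augmenting paths in the
        # residual graph; by max-flow min-cut the result equals the
        # minimum cut separating s from t.
        adj = defaultdict(set)
        for u in cap:
            for v in cap[u]:
                adj[u].add(v)
                adj[v].add(u)   # reverse edge allows flow cancellation
        flow = defaultdict(int)
        residual = lambda u, v: cap.get(u, {}).get(v, 0) - flow[(u, v)]
        total = 0
        while True:
            parent, queue = {s: None}, deque([s])
            while queue and t not in parent:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in parent and residual(u, v) > 0:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return total
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            push = min(residual(u, v) for u, v in path)
            for u, v in path:
                flow[(u, v)] += push
                flow[(v, u)] -= push
            total += push

    cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
    print(max_flow(cap, "s", "t"))  # 4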

4,341 citations