
Showing papers by "Damon Wischik published in 2011"


Proceedings ArticleDOI
15 Aug 2011
TL;DR: This work proposes using Multipath TCP as a replacement for TCP in large-scale data centers, as it can effectively and seamlessly use available bandwidth, giving improved throughput and better fairness on many topologies.
Abstract: The latest large-scale data centers offer higher aggregate bandwidth and robustness by creating multiple paths in the core of the network. To utilize this bandwidth requires that different flows take different paths, which poses a challenge. In short, a single-path transport seems ill-suited to such networks. We propose using Multipath TCP as a replacement for TCP in such data centers, as it can effectively and seamlessly use available bandwidth, giving improved throughput and better fairness on many topologies. We investigate what causes these benefits, teasing apart the contribution of each of the mechanisms used by MPTCP. Using MPTCP lets us rethink data center networks, with a different mindset as to the relationship between transport protocols, routing and topology. MPTCP enables topologies that single-path TCP cannot utilize. As a proof-of-concept, we present a dual-homed variant of the FatTree topology. With MPTCP, this outperforms FatTree for a wide range of workloads, but costs the same. In existing data centers, MPTCP is readily deployable, leveraging widely deployed technologies such as ECMP. We have run MPTCP on Amazon EC2 and found that it outperforms TCP by a factor of three when there is path diversity. But the biggest benefits will come when data centers are designed for multipath transports.
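The paper's MPTCP deployment predates mainstream kernel support, but Linux 5.6 and later ship an in-kernel MPTCP that applications opt into by creating a socket with IPPROTO_MPTCP. A minimal sketch of that opt-in, falling back to plain TCP when the kernel or Python build lacks support (the helper name and fallback policy are illustrative, not from the paper):

```python
import socket

# IPPROTO_MPTCP (value 262 on Linux) is exposed by Python 3.10+ on
# kernels built with CONFIG_MPTCP; fall back to the raw constant otherwise.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_socket():
    """Return an MPTCP stream socket, degrading to single-path TCP."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: plain TCP still works, just
        # without the multipath subflows the paper relies on.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)
```

An application using this helper needs no other changes; subflow creation and scheduling across ECMP paths happen inside the kernel.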

721 citations


Proceedings ArticleDOI
30 Mar 2011
TL;DR: It is shown that some 'obvious' solutions for multipath congestion control can be harmful, but that the proposed algorithm improves throughput and fairness compared to single-path TCP.
Abstract: Multipath TCP, as proposed by the IETF working group mptcp, allows a single data stream to be split across multiple paths. This has obvious benefits for reliability, and it can also lead to more efficient use of networked resources. We describe the design of a multipath congestion control algorithm, we implement it in Linux, and we evaluate it for multihomed servers, data centers and mobile clients. We show that some 'obvious' solutions for multipath congestion control can be harmful, but that our algorithm improves throughput and fairness compared to single-path TCP. Our algorithm is a drop-in replacement for TCP, and we believe it is safe to deploy.
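The harm from the most 'obvious' solution, running standard TCP independently on each subflow, can be seen in a toy synchronized-AIMD model of a shared bottleneck; the capacity and window values below are illustrative, not from the paper:

```python
# Toy fluid AIMD model: a bottleneck of capacity C shared by one
# single-path TCP flow and one multipath flow with two subflows.
# Uncoupled TCP on each subflow makes every subflow behave like an
# independent TCP, so the multipath flow grabs two of the three shares.

C = 30.0  # bottleneck capacity in packets per RTT (assumed)

def simulate(rounds=20000):
    w = [10.0, 10.0, 10.0]  # windows: [tcp, mp_subflow1, mp_subflow2]
    for _ in range(rounds):
        if sum(w) > C:                  # synchronized loss: all halve
            w = [x / 2 for x in w]
        else:                           # additive increase, 1 MSS per RTT
            w = [x + 1 for x in w]
    return w

w = simulate()
tcp_share = w[0] / sum(w)               # ~1/3
mp_share = (w[1] + w[2]) / sum(w)       # ~2/3, double its fair share
```

Under this symmetric model the multipath flow permanently holds two thirds of the link, which is exactly the unfairness the coupled algorithm is designed to remove.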

632 citations


01 Oct 2011
TL;DR: A congestion control algorithm which couples the congestion control algorithms running on different subflows by linking their increase functions, and dynamically controls the overall aggressiveness of the multipath flow is presented, which is a practical algorithm that is fair to TCP at bottlenecks while moving traffic away from congested links.
Abstract: Often endpoints are connected by multiple paths, but communications are usually restricted to a single path per connection. Resource usage within the network would be more efficient were it possible for these multiple paths to be used concurrently. Multipath TCP is a proposal to achieve multipath transport in TCP. New congestion control algorithms are needed for multipath transport protocols such as Multipath TCP, as single path algorithms have a series of issues in the multipath context. One of the prominent problems is that running existing algorithms such as TCP New Reno independently on each path would give the multipath flow more than its fair share at a bottleneck link traversed by more than one of its subflows. Further, it is desirable that a source with multiple paths available will transfer more traffic using the least congested of the paths, hence achieving resource pooling. This would increase the overall utilization of the network and also its robustness to failure. This document presents a congestion control algorithm which couples the congestion control algorithms running on different subflows by linking their increase functions, and dynamically controls the overall aggressiveness of the multipath flow. The result is a practical algorithm that is fair to TCP at bottlenecks while moving traffic away from congested links.
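The coupled increase described above (this document was later standardized as RFC 6356) can be sketched as follows; windows are in MSS units and the per-ACK form is a simplification of the byte-counting rule in the document:

```python
# Sketch of the coupled ("linked increases") algorithm of RFC 6356.
# On each ACK on subflow i, cwnd_i grows by
#     min(alpha / cwnd_total, 1 / cwnd_i)
# where alpha caps the aggregate aggressiveness of the multipath flow:
#     alpha = cwnd_total * max_i(cwnd_i / rtt_i^2) / (sum_i cwnd_i / rtt_i)^2

def lia_alpha(cwnd, rtt):
    """Aggressiveness factor coupling the subflows' increase functions."""
    total = sum(cwnd)
    best = max(w / (r * r) for w, r in zip(cwnd, rtt))
    denom = sum(w / r for w, r in zip(cwnd, rtt)) ** 2
    return total * best / denom

def increase_on_ack(cwnd, rtt, i):
    """Window increment (in MSS) for subflow i on one ACK."""
    a = lia_alpha(cwnd, rtt)
    # The min() term guarantees no subflow is ever more aggressive
    # than a standalone TCP would be on the same path.
    return min(a / sum(cwnd), 1.0 / cwnd[i])
```

For two subflows with equal windows and RTTs, alpha works out to 1/2, throttling the pair so the aggregate is no more aggressive than one TCP, while congested subflows (smaller windows) naturally receive less of the increase.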

400 citations


Journal ArticleDOI
TL;DR: It is shown that queue sizes grow linearly with time under either algorithm (a generalized version of max-weight, and the α-fair policy), and the growth rates are characterized; this characterization is used to demonstrate examples of congestion collapse.
Abstract: We consider a switched network (i.e. a queueing network in which there are constraints on which queues may be served simultaneously), in a state of overload. We analyse the behaviour of two scheduling algorithms for multihop switched networks: a generalized version of max-weight, and the α-fair policy. We show that queue sizes grow linearly with time, under either algorithm, and we characterize the growth rates. We use this characterization to demonstrate examples of congestion collapse, i.e. cases in which throughput drops as the switched network becomes more overloaded. We further show that the loss of throughput can be made arbitrarily small by the max-weight algorithm with weight function f(q) = q^α as α → 0.
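A toy fluid simulation makes the linear-growth result concrete: a single server draining one unit per slot faces total arrivals of 1.4 per slot, so whatever max-weight does, total backlog must grow at rate 0.4 per slot (the arrival rates and horizon are illustrative, not from the paper):

```python
# Fluid model of a one-server, two-queue system in overload, scheduled
# by max-weight with weight function f(q) = q**alpha: each slot, serve
# the queue with the largest weight.  Arrivals total 1.4 per slot versus
# a service capacity of 1, so the system is overloaded by 0.4 per slot.

def run(alpha=1.0, T=10000, arrivals=(0.7, 0.7)):
    q = [0.0, 0.0]
    for _ in range(T):
        q = [qi + ai for qi, ai in zip(q, arrivals)]
        i = max(range(2), key=lambda j: q[j] ** alpha)  # max-weight choice
        q[i] = max(0.0, q[i] - 1.0)                     # serve one unit
    return q

q = run()
growth_rate = sum(q) / 10000   # ~0.4: total backlog grows linearly
```

In this symmetric example no policy can avoid the linear growth; the paper's congestion-collapse examples arise in multihop networks, where a bad choice of which queues to grow wastes downstream capacity and reduces throughput.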

39 citations


Journal ArticleDOI
TL;DR: The claim of the paper is that, once the crude control of “each flow may use only one path” is done away with, some new control must be put in place, and in fact the proper control can be achieved by end systems on their own.
Abstract: Multipath transmission for the Internet—that is, allowing users to send some of their packets along one path and others along different paths—is an elegant solution still looking for the right problem. The most obvious benefit of multipath transmission is greater reliability. For example, I’d like my phone to use WiFi when it can, but seamlessly switch to cellular when needed, without disrupting my flow of data. In general, the only way to create a reliable network out of unreliable components is through redundancy, and multipath transmission is an obvious solution.

The second benefit of multipath transmission is that it gives an extra degree of flexibility in sharing the network. Just as packet switching removed the artificial constraints imposed by splitting links into circuits, so too multipath removes the artificial constraints imposed by ‘splitting’ the network’s total capacity into separate links (see the accompanying figure). Flexibility comes with dangers. By building the Internet with packet switching, we no longer had the control over congestion that circuit switching provides (crude though it may be), and this led in 1988 to Internet congestion collapse. Van Jacobson realized there needed to be a new system for controlling congestion, and he had the remarkable insight that it could be achieved by end systems on their own. The Internet has been using his transmission control protocol (TCP) largely unchanged until recently.

The flexibility offered by multipath transport also brings dangers. The claim of the following paper is that, once we do away with the crude control of “each flow may use only one path,” there should be some new control put in place—and, in fact, the proper control can be achieved by end systems on their own. That is to say, if multipath is packet switching 2.0, then it needs TCP 2.0.

Internet congestion collapse in 1988 was a powerful motivator for the quick deployment of Jacobson’s TCP. There is not yet a killer problem for which multipath congestion control is the only good solution. Perhaps we will be unlucky enough to find one. (It has been shown that simple greedy route choice by end users, combined with intelligent routing by network operators, can in theory lead to arbitrarily inefficient outcomes, but this has not been seen in practice.) Lacking a killer problem, the authors present four vignettes that illustrate the inefficiency and unfairness of a naïve approach to multipath, and that showcase the benefit of clever multipath congestion control. The niggling problems of naïve approaches to multipath could probably all be mitigated by special-case fixes such as “only use paths whose round trip times are within a factor of two of each other” or “no flow may use more than four paths at a time,” perhaps enforced by deep packet inspection. So, in effect, the authors present a choice between a single clean control architecture for multipath transmission, and a series of special-case fixes.

The naïve approach to multipath, as studied in this paper, is to simply run separate TCP congestion control on each path. The clever alternative is to couple the congestion control on different paths, with the overall effect of shifting traffic away from more-congested paths onto less-congested paths; two research groups have independently devised an appropriate form of coupling. This is the approach under exploration in the mptcp working group at the IETF, although with some concessions to graceful coexistence with existing TCP.

The differences between the two sorts of congestion control show up both in the overall throughput of the network, and also in how the network’s capacity is allocated. The authors use the framework of social welfare utility maximization to address both metrics in a unified way. This framework has been mainstream in theoretical research on congestion control for the past decade. But it is not mainstream in systems work, where more intuitive metrics such as average throughput and Jain’s fairness index hold sway, along with views like “Congestion is only a problem at access links, and if I’ve paid for two connections then I ought to be able to use two TCP flows.” These differences in language and culture have meant that the paper’s conclusions have not become systems orthodoxy. Now that multipath transport protocols are a hot topic in the network systems community, it is a good time to highlight this work, and to translate its conclusions into practical answers about systems such as data centers and multihomed mobile devices.

The authors only address congestion control and path selection for an idealized model of moderately long-lived flows. There are still important questions to answer, such as: When is a flow long enough to make it worth opening a new path? When is a path so bad it should be closed?

7 citations


Proceedings ArticleDOI
01 Sep 2011
TL;DR: The hardness of low-delay network scheduling is an artifact of explicitly avoiding interference or treating it as noise, and can be overcome by a rather simple medium access algorithm that does not require information-theoretic “block codes.”
Abstract: This paper looks at the problem of designing wireless medium access algorithms. Inter-user interference at the receivers is an important characteristic of wireless networks. We show that decoding (or canceling) this interference results in significant improvement in the system performance over protocols that either treat interference as noise, or explicitly avoid interference at the receivers by allowing at most one of the transmitters in its range to transmit. This improvement in performance is realized by means of a medium access algorithm with: (a) polynomial computational complexity per timeslot, (b) polynomially bounded expected queue-length at the transmitters, and (c) a throughput region that is at least a poly-logarithmic fraction of the largest possible throughput region under any algorithm that treats interference as noise. Thus, the hardness of low-delay network scheduling (a result by Shah, Tse and Tsitsiklis [1]) is an artifact of explicitly avoiding interference or treating it as noise, and can be overcome by a rather simple medium access algorithm that does not require information-theoretic “block codes.”
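The gain from decoding interference rather than tolerating it can be illustrated with the textbook two-user Gaussian rate formulas (the power values are illustrative, and this is standard information theory rather than the paper's specific model):

```python
from math import log2

# Two transmitters with received powers P1, P2 at a common receiver,
# noise power N.  Rates are in bits per symbol.

def rate_interference_as_noise(P1, P2, N):
    """Each user decodes against the other's signal plus noise."""
    return log2(1 + P1 / (P2 + N)), log2(1 + P2 / (P1 + N))

def rate_with_sic(P1, P2, N):
    """Successive interference cancellation, assuming P1 >= P2:
    decode the stronger signal treating the weaker as noise, subtract
    it, then decode the weaker against noise alone."""
    r_strong = log2(1 + P1 / (P2 + N))
    r_weak = log2(1 + P2 / N)
    return r_strong, r_weak
```

With P1 = P2 = 10 and N = 1, treating interference as noise caps each user below 1 bit/symbol, while cancellation lets the second user run at log2(11) bits/symbol; the paper's contribution is a medium access algorithm that realizes this kind of gain with low complexity and bounded queues.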

1 citation