Author

Barath Raghavan

Bio: Barath Raghavan is an academic researcher at the University of Southern California. The author has contributed to research in topics including the Internet and network packets, has an h-index of 25, and has co-authored 68 publications receiving 4,361 citations. Previous affiliations of Barath Raghavan include the University of California, San Diego and the University of California, Berkeley.


Papers
Proceedings ArticleDOI
28 Apr 2010
TL;DR: This paper presents Hedera, a scalable, dynamic flow scheduling system that adaptively schedules a multi-stage switching fabric to efficiently utilize aggregate network resources; it delivers bisection bandwidth that is 96% of optimal and up to 113% better than static load-balancing methods.
Abstract: Today's data centers offer tremendous aggregate bandwidth to clusters of tens of thousands of machines. However, because of limited port densities in even the highest-end switches, data center topologies typically consist of multi-rooted trees with many equal-cost paths between any given pair of hosts. Existing IP multipathing protocols usually rely on per-flow static hashing and can cause substantial bandwidth losses due to long-term collisions. In this paper, we present Hedera, a scalable, dynamic flow scheduling system that adaptively schedules a multi-stage switching fabric to efficiently utilize aggregate network resources. We describe our implementation using commodity switches and unmodified hosts, and show that for a simulated 8,192 host data center, Hedera delivers bisection bandwidth that is 96% of optimal and up to 113% better than static load-balancing methods.

1,602 citations
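The collision problem Hedera targets can be seen in a toy model of per-flow static hashing: a hash of the 5-tuple pins each flow to one of several equal-cost paths, so unlucky hash values can stack large flows onto the same path. The sketch below is illustrative only; it is not Hedera's actual scheduler, and the flow tuples, rates, and path count are invented.

```python
# Toy model of ECMP-style per-flow static hashing vs. a simple dynamic
# reassignment. Illustrative only: not Hedera's schedulers; flow tuples,
# rates, and path count are invented.
import hashlib
from collections import defaultdict

NUM_PATHS = 4  # equal-cost paths between a host pair (assumed)

def ecmp_path(flow_tuple, num_paths=NUM_PATHS):
    """Static per-flow hashing: the path depends only on the 5-tuple."""
    digest = hashlib.md5(repr(flow_tuple).encode()).hexdigest()
    return int(digest, 16) % num_paths

# (src, dst, sport, dport, proto) -> demand in Gb/s (made-up values)
flows = {
    ("10.0.0.1", "10.0.1.1", 5001, 80, "tcp"): 1.0,
    ("10.0.0.2", "10.0.1.2", 5002, 80, "tcp"): 1.0,
    ("10.0.0.3", "10.0.1.3", 5003, 80, "tcp"): 1.0,
    ("10.0.0.4", "10.0.1.4", 5004, 80, "tcp"): 1.0,
}

static_load = defaultdict(float)
for f, rate in flows.items():
    static_load[ecmp_path(f)] += rate  # hash collisions can stack flows on one path

# A greedy "dynamic" placement in the spirit of flow scheduling:
# put each flow on the currently least-loaded path.
dynamic_load = defaultdict(float)
for f, rate in sorted(flows.items(), key=lambda kv: -kv[1]):
    best = min(range(NUM_PATHS), key=lambda p: dynamic_load[p])
    dynamic_load[best] += rate

print("static hashing load per path  :", dict(static_load))
print("greedy scheduling load per path:", dict(dynamic_load))
```

With four flows and four paths, static hashing often leaves some paths idle while others carry two or more flows; the greedy placement spreads them evenly, which is the intuition behind dynamic flow scheduling.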

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This work presents Kneedle, a general approach to online and offline knee detection; it defines a knee formally for continuous functions using the mathematical concept of curvature, compares that definition against alternatives, and evaluates Kneedle's accuracy against existing algorithms on both synthetic and real data sets as well as its performance in two different applications.
Abstract: Computer systems often reach a point at which the relative cost to increase some tunable parameter is no longer worth the corresponding performance benefit. These "knees" typically represent beneficial points that system designers have long selected to best balance inherent trade-offs. While prior work largely uses ad hoc, system-specific approaches to detect knees, we present Kneedle, a general approach to online and offline knee detection that is applicable to a wide range of systems. We define a knee formally for continuous functions using the mathematical concept of curvature and compare our definition against alternatives. We then evaluate Kneedle's accuracy against existing algorithms on both synthetic and real data sets, and evaluate its performance in two different applications.

689 citations
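The curvature definition at the heart of Kneedle can be applied directly to sampled data: for y = f(x), curvature is kappa = |f''| / (1 + f'^2)^(3/2), and a knee candidate is where kappa peaks. The sketch below is a simplified stand-in for the paper's full algorithm (which also smooths and normalizes the data first), and the example curve is invented.

```python
# Curvature-based knee detection on discrete samples, in the spirit of
# Kneedle's definition kappa = |f''| / (1 + f'^2)^(3/2).
# Simplified: no smoothing or normalization as in the full algorithm;
# the example curve is invented.
import numpy as np

def knee_by_curvature(x, y):
    """Return the x-value where discrete curvature is maximal."""
    dy = np.gradient(y, x)        # first derivative f'
    d2y = np.gradient(dy, x)      # second derivative f''
    curvature = np.abs(d2y) / (1.0 + dy**2) ** 1.5
    return x[int(np.argmax(curvature))]

if __name__ == "__main__":
    x = np.linspace(0.1, 10, 200)
    y = 1.0 - np.exp(-x)          # diminishing-returns curve with a visible knee
    print("estimated knee near x =", knee_by_curvature(x, y))
```

On the raw (unnormalized) curve the maximum-curvature point sits early along the x-axis for this example; the full algorithm's normalization step affects where the knee is reported.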

Proceedings ArticleDOI
14 Nov 2011
TL;DR: This paper identifies the existing commonalities and important differences among data-oriented or content-centric network architectures, discusses some remaining research issues, and emerges skeptical (but open-minded) about the value of this approach to networking.
Abstract: There have been many recent papers on data-oriented or content-centric network architectures. Despite the voluminous literature, surprisingly little clarity is emerging as most papers focus on what differentiates them from other proposals. We begin this paper by identifying the existing commonalities and important differences in these designs, and then discuss some remaining research issues. After our review, we emerge skeptical (but open-minded) about the value of this approach to networking.

501 citations

Proceedings ArticleDOI
27 Aug 2007
TL;DR: This paper presents the design and implementation of distributed rate limiters, which work together to enforce a global rate limit across traffic aggregates at multiple sites, enabling the coordinated policing of a cloud-based service's network traffic.
Abstract: Today's cloud-based services integrate globally distributed resources into seamless computing platforms. Provisioning and accounting for the resource usage of these Internet-scale applications presents a challenging technical problem. This paper presents the design and implementation of distributed rate limiters, which work together to enforce a global rate limit across traffic aggregates at multiple sites, enabling the coordinated policing of a cloud-based service's network traffic. Our abstraction not only enforces a global limit, but also ensures that congestion-responsive transport-layer flows behave as if they traversed a single, shared limiter. We present two designs - one general purpose, and one optimized for TCP - that allow service operators to explicitly trade off between communication costs and system accuracy, efficiency, and scalability. Both designs are capable of rate limiting thousands of flows with negligible overhead (less than 3% in the tested configuration). We demonstrate that our TCP-centric design is scalable to hundreds of nodes while robust to both loss and communication delay, making it practical for deployment in nationwide service providers.

244 citations
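The coordination problem can be pictured with a much simpler scheme than the paper's designs: each site enforces only a share of the global limit and periodically recomputes that share from the demand it recently observed, so the aggregate stays under the global cap while capacity flows toward busy sites. The sketch below is a minimal illustration under invented parameters, not the paper's general-purpose or TCP-centric algorithms.

```python
# Minimal sketch of coordinated rate limiting across sites: each limiter
# gets a share of a global limit proportional to its recently observed
# demand. Invented parameters; not the paper's actual designs.
GLOBAL_LIMIT = 100.0  # Mb/s across all sites (assumed)

class SiteLimiter:
    def __init__(self, name):
        self.name = name
        self.share = 0.0            # current local limit in Mb/s
        self.observed_demand = 0.0  # traffic seen in the last interval

    def record_demand(self, mbps):
        self.observed_demand = mbps

    def admit(self, rate_mbps):
        """Admit traffic only up to the local share."""
        return min(rate_mbps, self.share)

def redistribute(limiters, global_limit=GLOBAL_LIMIT):
    """Coordination step: split the global limit in proportion to observed demand."""
    total = sum(l.observed_demand for l in limiters) or 1.0
    for l in limiters:
        l.share = global_limit * l.observed_demand / total

# One coordination round with made-up demands at three sites.
sites = [SiteLimiter("us-east"), SiteLimiter("eu-west"), SiteLimiter("ap-south")]
for site, demand in zip(sites, [80.0, 30.0, 10.0]):
    site.record_demand(demand)
redistribute(sites)
for site in sites:
    print(site.name, "share =", round(site.share, 1), "Mb/s,",
          "admitted =", round(site.admit(site.observed_demand), 1), "Mb/s")
```

In the real system, exchanging demand estimates itself costs bandwidth, which is the communication/accuracy trade-off the paper's two designs expose.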

Proceedings ArticleDOI
27 Aug 2013
TL;DR: This paper presents the design of novel loss recovery mechanisms for TCP that judiciously use redundant transmissions to minimize timeout-driven recovery and are compatible both with middleboxes and with TCP's existing congestion control and loss recovery.
Abstract: To serve users quickly, Web service providers build infrastructure closer to clients and use multi-stage transport connections. Although these changes reduce client-perceived round-trip times, TCP's current mechanisms fundamentally limit latency improvements. We performed a measurement study of a large Web service provider and found that, while connections with no loss complete close to the ideal latency of one round-trip time, TCP's timeout-driven recovery causes transfers with loss to take five times longer on average. In this paper, we present the design of novel loss recovery mechanisms for TCP that judiciously use redundant transmissions to minimize timeout-driven recovery. Proactive, Reactive, and Corrective are three qualitatively-different, easily-deployable mechanisms that (1) proactively recover from losses, (2) recover from them as quickly as possible, and (3) reconstruct packets to mask loss. Crucially, the mechanisms are compatible both with middleboxes and with TCP's existing congestion control and loss recovery. Our large-scale experiments on Google's production network that serves billions of flows demonstrate a 23% decrease in mean latency and a 47% decrease in 99th-percentile latency over today's TCP.

228 citations
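The "reconstruct packets to mask loss" idea can be illustrated with simple XOR parity over a small window of segments: one extra parity packet lets the receiver rebuild a single lost segment without waiting for a retransmission timeout. The sketch below is a generic forward-error-correction toy, not the paper's in-kernel implementation; the window size and payloads are invented.

```python
# Toy XOR parity over a window of segments: with one extra parity packet,
# a single lost segment in the window can be rebuilt without a retransmission
# timeout. Generic FEC illustration only; payloads and window size invented.
from functools import reduce

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(segments):
    """Parity packet covering a window of equal-length segments."""
    return reduce(xor_bytes, segments)

def recover(received, parity):
    """Rebuild the single missing segment (the None entry) from the parity."""
    present = [s for s in received if s is not None]
    missing = reduce(xor_bytes, present + [parity])
    return [s if s is not None else missing for s in received]

# Invented 8-byte "segments"; segment 2 is lost in transit.
window = [b"SEG0....", b"SEG1....", b"SEG2....", b"SEG3...."]
parity = make_parity(window)
received = [window[0], window[1], None, window[3]]
print(recover(received, parity)[2])  # b'SEG2....' reconstructed from parity
```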


Cited by
Journal ArticleDOI
01 Jan 2015
TL;DR: This paper presents the key building blocks of an SDN infrastructure using a bottom-up, layered approach, with an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications.
Abstract: The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.

3,589 citations
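The separation the survey centers on, control logic that computes rules and switches that only match and forward, can be caricatured in a few lines. The sketch below is a conceptual toy, not OpenFlow or any real controller API; class and method names are invented.

```python
# Conceptual toy of control/data-plane separation: a "controller" computes
# match->action rules and installs them in "switches" that only do table
# lookups. Not OpenFlow or any real controller API; names are invented.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination prefix) -> action (output port)

    def install_rule(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_ip):
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return port
        return None  # no rule: a real switch would punt the packet to the controller

class Controller:
    """Logically centralized control logic, kept separate from the switches."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, policy):
        # policy: destination prefix -> {switch name: output port}
        for prefix, ports in policy.items():
            for sw_name, port in ports.items():
                self.switches[sw_name].install_rule(prefix, port)

switches = {"s1": Switch("s1"), "s2": Switch("s2")}
ctrl = Controller(switches)
ctrl.apply_policy({"10.1.": {"s1": 2, "s2": 3}})
print("s1 forwards 10.1.0.5 out port", switches["s1"].forward("10.1.0.5"))
```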


Proceedings ArticleDOI
01 Nov 2010
TL;DR: This paper presents an empirical study of the network traffic in 10 data centers belonging to three different categories: university, enterprise campus, and cloud data centers, where the cloud category includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce-style) applications.
Abstract: Although there is tremendous interest in designing improved networks for data centers, very little is known about the network-level traffic characteristics of data centers today. In this paper, we conduct an empirical study of the network traffic in 10 data centers belonging to three different categories, including university, enterprise campus, and cloud data centers. Our definition of cloud data centers includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce-style) applications. We collect and analyze SNMP statistics, topology, and packet-level traces. We examine the range of applications deployed in these data centers and their placement, the flow-level and packet-level transmission properties of these applications, and their impact on network and link utilizations, congestion and packet drops. We describe the implications of the observed traffic patterns for data center internal traffic engineering as well as for recently proposed architectures for data center networks.

2,119 citations

Journal ArticleDOI
TL;DR: This survey presents the SDN architecture and the OpenFlow standard in particular, discusses current alternatives for implementation and testing of SDN-based protocols and services, examines current and future SDN applications, and explores promising research directions based on the SDN paradigm.
Abstract: The idea of programmable networks has recently regained considerable momentum due to the emergence of the Software-Defined Networking (SDN) paradigm. SDN, often referred to as a "radical new idea in networking", promises to dramatically simplify network management and enable innovation through network programmability. This paper surveys the state-of-the-art in programmable networks with an emphasis on SDN. We provide a historic perspective of programmable networks from early ideas to recent developments. Then we present the SDN architecture and the OpenFlow standard in particular, discuss current alternatives for implementation and testing of SDN-based protocols and services, examine current and future SDN applications, and explore promising research directions based on the SDN paradigm.

2,013 citations