SciSpace - formerly Typeset
Author

Yan Cai

Bio: Yan Cai is an academic researcher from the University of Massachusetts Amherst. The author has contributed to research in topics: Network packet & Burstiness. The author has an h-index of 6 and has co-authored 12 publications receiving 94 citations.

Papers
Proceedings ArticleDOI
14 Jun 2009
TL;DR: This work proposes a novel packet pacing mechanism that can smooth traffic bursts and shows the effectiveness of the pacer in terms of reduced network congestion and improved network throughput.
Abstract: The demand for more bandwidth has led to proposals for an all-optical network core. Due to inherent constraints of optical technology, only routers with small packet buffers are feasible to implement. In order to ensure efficient operation of such small-buffer networks, it is necessary to ensure that traffic is less bursty than in conventional networks. We propose a novel packet pacing mechanism that can smooth traffic bursts. Our theoretical analysis shows that our pacing scheme can guarantee that the queue length of routers is BIBO stable. Experimental results from our prototype implementation show the effectiveness of our pacer in terms of reduced network congestion and improved network throughput.
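To illustrate the basic idea of pacing, the sketch below holds bursty arrivals in a queue and enforces a minimum spacing between departures, so back-to-back packets leave smoothed out. The class, its parameters, and the fixed pacing gap are illustrative assumptions, not the adaptive mechanism analyzed in the paper.

```python
# Minimal pacing sketch (illustrative only): arrivals are buffered and
# released with a fixed inter-departure gap, which smooths bursts at the
# cost of intentional queueing delay.
import collections


class SimplePacer:
    def __init__(self, rate_pps):
        self.gap = 1.0 / rate_pps          # minimum spacing between departures (seconds)
        self.queue = collections.deque()   # pacing queue absorbing bursty arrivals
        self.next_departure = 0.0

    def enqueue(self, packet):
        self.queue.append(packet)

    def dequeue(self, now):
        """Release at most one packet if the pacing gap has elapsed."""
        if self.queue and now >= self.next_departure:
            self.next_departure = now + self.gap
            return self.queue.popleft()
        return None


# Usage: a burst of 5 packets arriving at t=0 leaves spaced 1 ms apart.
pacer = SimplePacer(rate_pps=1000)
for i in range(5):
    pacer.enqueue(f"pkt{i}")
now = 0.0
while pacer.queue:
    pkt = pacer.dequeue(now)
    if pkt:
        print(f"t={now * 1e3:.1f} ms  {pkt}")
    else:
        now = pacer.next_departure  # wait until the pacer allows the next send
```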

19 citations

Journal ArticleDOI
TL;DR: A Queue Length Based Pacing (QLBP) algorithm is presented that paces network traffic using a single queue, can be implemented with small computational and memory overhead, and can improve connection throughput in small-buffer networks.
Abstract: Packet losses in the network have a considerable performance impact on transport-layer throughput. For reliable data transfer, lost packets require retransmissions and thus cause very long delays. This tail of the packet delay distribution causes performance problems. There are several approaches to trading off networking resources up-front to reduce long delays for some packets (e.g., forward error correction, network coding). We propose packet pacing as an alternative that changes traffic characteristics favorably by adding intentional delay in packet transmissions. This intentional delay counters the principle of best effort but can reduce the burstiness of traffic and improve overall network operation - in particular in networks with small packet buffers. As a result, pacing improves transport-layer performance, providing a tradeoff example where small amounts of additional delay can significantly increase connection bandwidth. We present a Queue Length Based Pacing (QLBP) algorithm that paces network traffic using a single queue and that can be implemented with small computational and memory overhead. We present a detailed analysis of delay bounds and the quantitative impact of QLBP pacing on network traffic. Through simulation, we show how the proposed pacing technique can improve connection throughput in small-buffer networks.
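As a rough sketch of the queue-length-based idea, the function below maps the occupancy of a single pacing queue to a pacing rate that rises from a configured minimum toward the full link rate as the queue fills, which is what keeps the intentionally added delay bounded. The function name, its parameters, and the linear interpolation are illustrative assumptions, not the exact QLBP rate function from the paper.

```python
# Illustrative queue-length-based pacing rate (assumed linear form, not the
# paper's exact QLBP formulation): a nearly empty queue is paced gently,
# adding delay to smooth bursts, while a filling queue is drained faster so
# the added delay stays bounded.

def pacing_rate(queue_len_bytes, queue_cap_bytes, rate_min_bps, rate_max_bps):
    """Return the current pacing rate in bits per second."""
    if queue_len_bytes >= queue_cap_bytes:
        return rate_max_bps                    # full queue: send at link rate
    fill = queue_len_bytes / queue_cap_bytes   # occupancy in [0, 1)
    return rate_min_bps + fill * (rate_max_bps - rate_min_bps)


# Example: a half-full 64 kB pacing queue on a 1 Gb/s link
r = pacing_rate(32_000, 64_000, rate_min_bps=100e6, rate_max_bps=1e9)
print(f"pacing rate: {r / 1e6:.0f} Mb/s")   # -> 550 Mb/s
```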

14 citations

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This work presents a pacing system that proactively shapes traffic in the edge network to reduce burstiness and shows that it can achieve higher throughput than end-system based pacing.
Abstract: For optical packet-switching routers to be widely deployed in the Internet, the size of packet buffers on routers has to be very small. Such small-buffer networks rely on traffic with low levels of burstiness to avoid buffer overflows and packet losses. We present a pacing system that proactively shapes traffic in the edge network to reduce burstiness. Our queue length based pacing uses an adaptive pacing rate on a single queue and paces traffic indiscriminately where deployed. In this work, we show through analysis and simulation that this pacing approach introduces a bounded delay and that it effectively reduces traffic burstiness. We also show that it can achieve higher throughput than end-system based pacing.
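As a back-of-the-envelope check on the bounded-delay claim (a generic bound under assumed parameters, not the paper's analysis): if the single pacing queue holds at most B bits and the adaptive pacing rate never drops below mu_min, the extra delay added to any packet is at most B / mu_min.

```latex
% Generic worst-case bound for a rate-floored pacing queue (assumed numbers,
% not the paper's derivation).
\[
  D_{\max} \le \frac{B}{\mu_{\min}},
  \qquad \text{e.g. } B = 64\,\text{kB} = 512{,}000\ \text{bits},\
  \mu_{\min} = 100\ \text{Mb/s}
  \ \Rightarrow\ D_{\max} \le 5.12\ \text{ms}.
\]
```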

11 citations

Proceedings ArticleDOI
13 Apr 2008
TL;DR: This paper proposes a three-stage load-balancing switch along with output load-balancing to address the mis-sequencing problem and shows that the proposed scheme provides a delay guarantee bounded by the delay of an OQ switch with the same input traffic plus a constant while achieving 100% throughput for admissible traffic with a (sigma, rho)-upper constraint.
Abstract: Recently there has been a great deal of interest in load-balancing switches due to their simple architecture and high bandwidth. In this paper we propose a three-stage load-balancing switch along with output load-balancing to address the mis-sequencing problem. We show that our proposed scheme provides a delay guarantee bounded by the delay of an OQ switch with the same input traffic plus a constant while achieving 100% throughput for admissible traffic with a (sigma, rho)-upper constraint.

8 citations

Journal ArticleDOI
TL;DR: This paper addresses the mis-sequencing problem by introducing a three-stage load-balancing switch architecture enhanced with an output load-balancing mechanism that achieves a high forwarding capacity while preserving the order of packets without the need for costly online scheduling algorithms.
Abstract: There has been a great deal of interest recently in load-balancing switches due to their simple architecture and high forwarding bandwidth. Nevertheless, the mis-sequencing problem of the original load-balancing switch hinders the performance of underlying TCP applications. Several load-balancing switch designs have been proposed to address this mis-sequencing issue. They solve this mis-sequencing problem at the cost of either algorithmic complexity or special hardware requirements. In this paper, we address the mis-sequencing problem by introducing a three-stage load-balancing switch architecture enhanced with an output load-balancing mechanism. This three-stage load-balancing switch achieves a high forwarding capacity while preserving the order of packets without the need for costly online scheduling algorithms. Theoretical analyses and simulation results show that this three-stage load-balancing switch provides a transmission delay that is upper-bounded by that of an output-queued switch plus a constant that depends only on the number of input/output ports, indicating the same forwarding capacity as an output-queued switch.
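For intuition about why mis-sequencing arises in load-balancing switches, the toy sketch below spreads cells of a single flow round-robin over independent middle-stage buffers and then restores order at the output using ingress sequence numbers. All names and structures here are assumptions for illustration only; the paper's three-stage design preserves packet order by construction through output load-balancing rather than by sorting.

```python
# Toy illustration (not the paper's algorithm): round-robin spreading over a
# middle stage loses the arrival order of a flow, because the independent
# middle-stage buffers drain at different times; sequence numbers stamped at
# ingress let the output restore the order.
import collections

N_PORTS = 4  # illustrative switch size


class LoadBalancedPath:
    def __init__(self):
        self.middle = [collections.deque() for _ in range(N_PORTS)]
        self.spread_ptr = 0   # round-robin pointer at the first stage
        self.seq = 0          # per-flow sequence number stamped at ingress

    def ingress(self, cell):
        """First stage: spread cells of the flow round-robin over the middle stage."""
        self.middle[self.spread_ptr].append((self.seq, cell))
        self.seq += 1
        self.spread_ptr = (self.spread_ptr + 1) % N_PORTS

    def drain(self):
        """Middle stage: buffers drain independently, so cells leave out of order."""
        released = []
        for buf in self.middle:
            while buf:
                released.append(buf.popleft())
        return released


path = LoadBalancedPath()
for i in range(8):
    path.ingress(f"cell{i}")

out_of_order = path.drain()
print([c for _, c in out_of_order])            # e.g. cell0, cell4, cell1, cell5, ...
print([c for _, c in sorted(out_of_order)])    # ingress order restored at the output
```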

8 citations


Cited by
Proceedings ArticleDOI
23 Oct 2013
TL;DR: The performance of multi-path TCP in the wild is explored using one commercial Internet service provider and three major cellular carriers in the US to answer the following question: How much can a user benefit from using multi-path TCP over cellular and WiFi relative to using either interface alone?
Abstract: With the popularity of mobile devices and the pervasive use of cellular technology, there is widespread interest in hybrid networks and in how to achieve robustness and good performance from them. As most smart phones and mobile devices are equipped with dual interfaces (WiFi and 3G/4G), a promising approach is through the use of multi-path TCP, which leverages path diversity to improve performance and provide robust data transfers. In this paper we explore the performance of multi-path TCP in the wild, focusing on simple 2-path multi-path TCP scenarios. We seek to answer the following questions: How much can a user benefit from using multi-path TCP over cellular and WiFi relative to using either interface alone? What is the impact of flow size on average latency? What is the effect of the rate/route control algorithm on performance? We are especially interested in understanding how application level performance is affected when path characteristics (e.g., round trip times and loss rates) are diverse. We address these questions by conducting measurements using one commercial Internet service provider and three major cellular carriers in the US.

300 citations

Journal ArticleDOI
TL;DR: It is found that a combination of different traffic control techniques may be necessary at particular entities and layers in the network to improve a variety of performance metrics, and that despite significant research efforts, there are still open problems that demand further attention from the research community.
Abstract: Datacenters provide cost-effective and flexible access to scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements. This includes user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters and not to survey all existing solutions (as it is virtually impossible due to the massive body of existing research). We hope to provide readers with a wide range of options and factors while considering a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks that connect geographically dispersed datacenters, which have been receiving increasing attention recently and pose interesting and novel research problems. To measure the performance of datacenter networks, different performance metrics have been used, such as flow completion times, deadline miss rate, throughput, and fairness. Depending on the application and user requirements, some metrics may need more attention. While investigating different traffic control techniques, we point out the tradeoffs involved in terms of costs, complexity, and performance. We find that a combination of different traffic control techniques may be necessary at particular entities and layers in the network to improve the variety of performance metrics. We also find that despite significant research efforts, there are still open problems that demand further attention from the research community.

106 citations

Journal ArticleDOI
TL;DR: By proposing a quantization principle, the time-varying transition rate matrix (TRM) is quantized into a series of finite TRMs with norm-bounded uncertainties, so that the difficulties of the time-varying TRM confronted in system analysis and synthesis can be overcome.
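Read literally, the quantization idea can be sketched in notation as follows; the symbols below are assumptions for illustration and not the paper's formulation.

```latex
% Assumed sketch of the quantization principle: the time-varying transition
% rate matrix Lambda(t) is represented by one of finitely many nominal
% matrices, each with a norm-bounded uncertainty.
\[
  \Lambda(t) = \Lambda_k + \Delta_k(t), \qquad
  \lVert \Delta_k(t) \rVert \le \delta_k, \qquad k \in \{1, \dots, N\},
\]
% so analysis and synthesis can be carried out over the finite family
% {\Lambda_1, ..., \Lambda_N} with norm-bounded uncertainty instead of the
% original time-varying matrix.
```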

64 citations

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed dynamic network traffic partitioning scheme outperforms conventional ones in terms of packet distribution performance, especially robustness against malicious attacks.

46 citations