Author

Hui Zhang

Bio: Hui Zhang is an academic researcher from National University of Singapore. The author has contributed to research in topics including Computer science and Materials science. The author has an h-index of 75 and has co-authored 200 publications receiving 27,206 citations. Previous affiliations of Hui Zhang include University of California, Berkeley and Naval Postgraduate School.


Papers
Proceedings Article
01 Jan 2000
TL;DR: The potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred, and the results indicate that the performance penalties are low from both the application and the network perspectives.

2,372 citations

Journal Article
01 Oct 1995
TL;DR: Several service disciplines that are proposed in the literature to provide per-connection end-to-end performance guarantees in packet-switching networks are surveyed and a general framework for studying and comparing these disciplines is presented.
Abstract: While today's computer networks support only best-effort service, future packet-switching integrated-services networks will have to support real-time communication services that allow clients to transport information with performance guarantees expressed in terms of delay, delay jitter, throughput, and loss rate. An important issue in providing guaranteed performance service is the choice of the packet service discipline at switching nodes. In this paper, we survey several service disciplines proposed in the literature to provide per-connection end-to-end performance guarantees in packet-switching networks. We describe their mechanisms, their similarities and differences, and the performance guarantees they can provide. Various issues and tradeoffs in designing service disciplines for guaranteed performance service are discussed, and a general framework for studying and comparing these disciplines is presented.
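
As an illustration of how a switching node's service discipline apportions link bandwidth among connections, here is a minimal sketch (not taken from the paper) of a deficit-weighted round-robin scheduler, one simple rate-based discipline; the connection names, weights, and packet sizes are invented.

```python
# Minimal sketch (not from the paper): a deficit-weighted round-robin scheduler,
# one of many per-connection service disciplines a switching node could use to
# apportion link bandwidth among connections.
from collections import deque

class Connection:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight          # relative share of link bandwidth
        self.deficit = 0              # bytes this connection may still send this round
        self.queue = deque()          # backlogged packet sizes (bytes)

def drr_schedule(connections, quantum=1500, rounds=3):
    """Serve connections round-robin; each round a connection earns
    weight * quantum bytes of credit, so long-run throughput tracks the weights."""
    served = []
    for _ in range(rounds):
        for conn in connections:
            if not conn.queue:
                conn.deficit = 0
                continue
            conn.deficit += conn.weight * quantum
            while conn.queue and conn.queue[0] <= conn.deficit:
                size = conn.queue.popleft()
                conn.deficit -= size
                served.append((conn.name, size))
    return served

# Toy usage: connection A is configured to get twice the share of B.
a = Connection("A", weight=2); a.queue.extend([1500] * 6)
b = Connection("B", weight=1); b.queue.extend([1500] * 6)
print(drr_schedule([a, b]))
```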

1,226 citations

Proceedings Article
24 Mar 1996
TL;DR: It has been proven that the delay bound provided by WFQ is within one packet transmission time of that provided by GPS, and a new packet approximation algorithm of GPS, called worst-case fair weighted fair queueing (WF²Q), is proposed.
Abstract: The generalized processor sharing (GPS) discipline is proven to have two desirable properties: (a) it can provide an end-to-end bounded-delay service to a session whose traffic is constrained by a leaky bucket; (b) it can ensure fair allocation of bandwidth among all backlogged sessions regardless of whether or not their traffic is constrained. The former property is the basis for supporting guaranteed service traffic, while the latter property is important for supporting best-effort service traffic. Since GPS uses an idealized fluid model which cannot be realized in the real world, various packet approximation algorithms of GPS have been proposed. Among these, weighted fair queueing (WFQ), also known as packet generalized processor sharing (PGPS), has been considered the best in terms of accuracy. In particular, it has been proven that the delay bound provided by WFQ is within one packet transmission time of that provided by GPS. We show that, contrary to popular belief, there could be large discrepancies between the services provided by the packet WFQ system and the fluid GPS system. We argue that such a discrepancy will adversely affect many congestion control algorithms that rely on services similar to those provided by GPS. A new packet approximation algorithm of GPS called worst-case fair weighted fair queueing (WF²Q) is proposed. The service provided by WF²Q is almost identical to that of GPS, differing by no more than one maximum-size packet.
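
The key behavioral difference between WFQ and WF²Q is the packet-selection rule. The sketch below is a simplified illustration, not the authors' full algorithm; the sessions and virtual timestamps are invented. It contrasts WFQ, which serves the packet with the smallest GPS virtual finish time, with WF²Q, which restricts the choice to packets whose GPS virtual start time has already been reached.

```python
# Hedged sketch (not the authors' code): the packet-selection rules that
# distinguish WFQ from WF2Q. Each backlogged session has a head-of-line
# packet with a GPS virtual start time S and virtual finish time F.
def wfq_pick(heads):
    """WFQ: serve the packet that would finish earliest under GPS."""
    return min(heads, key=lambda p: p["finish"])

def wf2q_pick(heads, virtual_time):
    """WF2Q: consider only packets that have already started service under GPS
    (S <= current virtual time), then pick the earliest GPS finish among them."""
    eligible = [p for p in heads if p["start"] <= virtual_time]
    return min(eligible, key=lambda p: p["finish"]) if eligible else None

# Toy head-of-line packets: session A's next packet has not yet started in GPS.
heads = [
    {"session": "A", "start": 2.0, "finish": 2.5},
    {"session": "B", "start": 0.0, "finish": 3.0},
]
print(wfq_pick(heads)["session"])                      # A: earliest finish, though not started in GPS
print(wf2q_pick(heads, virtual_time=1.0)["session"])   # B: only eligible packet at virtual time 1.0
```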

1,188 citations

Proceedings Article
07 Nov 2002
TL;DR: This work proposes using coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e. round-trip propagation and transmission delay), and proposes the GNP approach, based on absolute coordinates computed from modeling the Internet as a geometric space.
Abstract: We propose using coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e. round-trip propagation and transmission delay). We study two mechanisms. The first is a previously proposed scheme, called the triangulated heuristic, which is based on relative coordinates that are simply the distances from a host to some special network nodes. We propose the second mechanism, called global network positioning (GNP), which is based on absolute coordinates computed from modeling the Internet as a geometric space. Since end hosts maintain their own coordinates, these approaches allow end hosts to compute their inter-host distances as soon as they discover each other. Moreover, coordinates are very efficient in summarizing inter-host distances, making these approaches very scalable. By performing experiments using measured Internet distance data, we show that both coordinates-based schemes are more accurate than the existing state of the art system IDMaps, and the GNP approach achieves the highest accuracy and robustness among them.
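
A rough sketch of the coordinate-fitting step behind the GNP idea: a host embeds itself in a low-dimensional geometric space by minimizing the squared error between its measured delays to a few landmarks and the Euclidean distances to those landmarks' coordinates. The landmark coordinates, dimensionality, and delay values below are invented, and the simple gradient-descent fit is only one of several ways to solve the minimization.

```python
# Illustrative sketch (simplified from the GNP idea, not the paper's code):
# embed a host into a low-dimensional geometric space by minimizing the squared
# error between measured RTTs to landmarks and Euclidean distances to the
# landmarks' coordinates. Landmark coordinates and RTTs below are made up.
import numpy as np

landmarks = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # assumed known coordinates
measured_rtt = np.array([70.0, 75.0, 40.0])                     # host's measured delays (ms)

def embed(landmarks, rtts, steps=5000, lr=0.02):
    x = landmarks.mean(axis=0)            # start at the landmark centroid
    for _ in range(steps):
        diff = x - landmarks              # vectors from each landmark to the host
        dist = np.linalg.norm(diff, axis=1) + 1e-9
        err = dist - rtts                 # embedding error per landmark
        grad = (2 * err / dist) @ diff    # gradient of sum((dist - rtt)^2)
        x = x - lr * grad
    return x

coords = embed(landmarks, measured_rtt)
# Any two hosts that hold coordinates can now estimate their RTT as the Euclidean
# distance between the coordinates, without a direct measurement.
print(coords)
```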

1,177 citations

Journal Article
TL;DR: Narada, as discussed by the authors, realizes an alternative architecture called end system multicast, in which end systems implement all multicast-related functionality, including membership management and packet replication, and self-organize into an overlay structure using a fully distributed protocol.
Abstract: The conventional wisdom has been that Internet protocol (IP) is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher layer functionality such as error, flow, and congestion control. We explore an alternative architecture that we term end system multicast, where end systems implement all multicast related functionality including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP multicast. However, the key concern is the performance penalty associated with such a model. In particular, end system multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP multicast. We study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred.
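
A conceptual sketch of the overlay approach described above (not the Narada implementation): members keep a mesh of overlay links annotated with measured latencies, and each source derives its delivery tree by running shortest-path routing over that mesh. The member names and latencies below are invented.

```python
# Conceptual sketch (not the Narada implementation): end systems form an overlay
# mesh from measured pairwise latencies, and each source derives its data-delivery
# tree by running shortest-path routing over that mesh.
import heapq

# Overlay mesh: latency (ms) of the overlay links each member chose to keep.
mesh = {
    "A": {"B": 10, "C": 40},
    "B": {"A": 10, "C": 15, "D": 30},
    "C": {"A": 40, "B": 15, "D": 20},
    "D": {"B": 30, "C": 20},
}

def source_tree(mesh, source):
    """Dijkstra over the overlay mesh: each member's parent in the source's
    shortest-path tree is the neighbor it receives packets from."""
    dist, parent = {source: 0}, {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in mesh[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return parent

# Data from A is replicated at B and forwarded to C and D over overlay links;
# the routers underneath see only unicast packets.
print(source_tree(mesh, "A"))
```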

1,048 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
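
As a toy illustration of the mail-filtering example in the passage above, the sketch below learns a filter from a handful of user-labeled messages with a tiny Naive Bayes classifier; the messages, labels, and vocabulary are invented, and a real filter would use far more data and features.

```python
# Toy illustration (not from the article): learning a mail filter from examples
# the user has already labeled. A tiny Naive Bayes over word counts.
from collections import Counter
import math

train = [
    ("win cash prize now", "spam"),
    ("cheap prize click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday?", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.lower().split())

def classify(text):
    """Pick the label maximizing log P(label) + sum log P(word | label),
    using add-one smoothing for unseen words."""
    words = text.lower().split()
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_score = None, -math.inf
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in words:
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("claim your cash prize"))    # likely "spam"
print(classify("agenda for lunch monday"))  # likely "ham"
```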

13,246 citations

01 Jan 2002

9,314 citations

Journal Article
01 Jan 2015
TL;DR: This paper presents an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications, and presents the key building blocks of an SDN infrastructure using a bottom-up, layered approach.
Abstract: The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
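
The following is a conceptual sketch of the control/data-plane split the survey describes: data-plane switches only match entries in a flow table, while a logically centralized controller computes those entries from a global topology view. The topology, class names, and the install call are invented stand-ins for a real southbound API such as OpenFlow.

```python
# Conceptual sketch of the SDN control/data-plane split; "install" stands in for
# a real southbound API and everything here is an invented toy topology.
class Switch:
    """Data plane: match destination, forward on a port; no routing logic here."""
    def __init__(self, name):
        self.name, self.flow_table = name, {}
    def install(self, dst, out_port):
        self.flow_table[dst] = out_port
    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: holds the whole topology view and programs every switch."""
    def __init__(self, links):
        self.links = links  # (switch, port) -> neighbor switch or host

    def install_path(self, switches, path, dst):
        # Program each switch on the path with the port leading to the next hop.
        for here, nxt in zip(path, path[1:]):
            port = next(p for (s, p), n in self.links.items() if s == here and n == nxt)
            switches[here].install(dst, port)

s1, s2 = Switch("s1"), Switch("s2")
links = {("s1", 1): "s2", ("s2", 1): "s1", ("s2", 2): "h1"}
ctrl = Controller(links)
ctrl.install_path({"s1": s1, "s2": s2}, ["s1", "s2", "h1"], dst="h1")
print(s1.forward("h1"), s2.forward("h1"))  # 1 2
```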

3,589 citations

Journal Article
TL;DR: Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Abstract: A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
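
A minimal sketch of Chord's core idea (not the authors' implementation): node and key identifiers live in one circular m-bit space, each key is stored at its successor node, and finger tables pointing at successor(n + 2^i) let a lookup halve the remaining distance, which is what yields the logarithmic cost. The node names and the tiny 8-bit identifier space are chosen only for illustration.

```python
# Minimal sketch of Chord's key-to-node mapping and finger tables, using a toy
# 8-bit identifier space and invented node names; not a distributed implementation.
import hashlib

M = 8  # identifier space of size 2^M, kept tiny for illustration

def ident(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

nodes = sorted(ident(f"node{i}") for i in range(6))

def successor(key_id):
    """The first node whose identifier is >= the key's, wrapping around the ring."""
    for n in nodes:
        if n >= key_id:
            return n
    return nodes[0]

def finger_table(n):
    """Node n's i-th finger points at successor(n + 2^i), i = 0..M-1."""
    return [successor((n + 2 ** i) % 2 ** M) for i in range(M)]

key = ident("some-data-item")
print("key", key, "is stored at node", successor(key))
print("fingers of node", nodes[0], "=", finger_table(nodes[0]))
```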

3,518 citations

Journal Article
TL;DR: An overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and the ant colony optimization (ACO) metaheuristic is presented.
Abstract: This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
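
A hedged sketch of the ACO metaheuristic on a tiny symmetric traveling-salesman instance (the distance matrix and parameters are invented): ants build tours biased by pheromone and inverse distance, pheromone evaporates each iteration, and shorter tours deposit more reinforcement.

```python
# Hedged sketch of the ACO metaheuristic on a tiny symmetric TSP (data invented).
import random

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]

def build_tour(alpha=1.0, beta=2.0):
    """One ant builds a tour, choosing the next city with probability proportional
    to pheromone^alpha * (1/distance)^beta."""
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = [(pheromone[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta) for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best = None
for _ in range(50):                            # iterations
    tours = [build_tour() for _ in range(10)]  # 10 ants per iteration
    for row in pheromone:                      # evaporation
        for j in range(n):
            row[j] *= 0.9
    for t in tours:                            # deposit: shorter tours leave more pheromone
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            pheromone[a][b] += 1.0 / tour_length(t)
            pheromone[b][a] += 1.0 / tour_length(t)
    best = min(tours + ([best] if best else []), key=tour_length)

print(best, tour_length(best))
```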

2,862 citations