
Showing papers on "Load balancing (computing)" published in 2003


Proceedings ArticleDOI
05 Mar 2003
TL;DR: This work examines super-peer networks in detail, gaining an understanding of their fundamental characteristics and performance tradeoffs, and presents practical guidelines and a general procedure for the design of an efficient super-peer network.
Abstract: A super-peer is a node in a peer-to-peer network that operates both as a server to a set of clients, and as an equal in a network of super-peers. Super-peer networks strike a balance between the efficiency of centralized search, and the autonomy, load balancing and robustness to attacks provided by distributed search. Furthermore, they take advantage of the heterogeneity of capabilities (e.g., bandwidth, processing power) across peers, which recent studies have shown to be enormous. Hence, new and old P2P systems like KaZaA and Gnutella are adopting super-peers in their design. Despite their growing popularity, the behavior of super-peer networks is not well understood. For example, what are the potential drawbacks of super-peer networks? How can super-peers be made more reliable? How many clients should a super-peer take on to maximize efficiency? We examine super-peer networks in detail, gaining an understanding of their fundamental characteristics and performance tradeoffs. We also present practical guidelines and a general procedure for the design of an efficient super-peer network.

905 citations


Book ChapterDOI
12 Oct 2003
TL;DR: The issues of multipath routing in MANETs are examined to support application constraints such as reliability, load-balancing, energy-conservation, and Quality-of-Service (QoS).
Abstract: Mobile ad hoc networks (MANETs) consist of a collection of wireless mobile nodes which dynamically exchange data among themselves without the reliance on a fixed base station or a wired backbone network. MANET nodes are typically distinguished by their limited power, processing, and memory resources as well as high degree of mobility. In such networks, the wireless mobile nodes may dynamically enter the network as well as leave the network. Due to the limited transmission range of wireless network nodes, multiple hops are usually needed for a node to exchange information with any other node in the network. Thus routing is a crucial issue to the design of a MANET. In this paper, we specifically examine the issues of multipath routing in MANETs. Multipath routing allows the establishment of multiple paths between a single source and single destination node. It is typically proposed in order to increase the reliability of data transmission (i.e., fault tolerance) or to provide load balancing. Load balancing is of especial importance in MANETs because of the limited bandwidth between the nodes. We also discuss the application of multipath routing to support application constraints such as reliability, load-balancing, energy-conservation, and Quality-of-Service (QoS).

525 citations


Journal ArticleDOI
01 Sep 2003
TL;DR: In this survey, the problem-solving paradigm of ACO is explicated and compared to traditional routing algorithms along the issues of routing information, routing overhead and adaptivity.
Abstract: Although an ant is a simple creature, collectively a colony of ants performs useful tasks such as finding the shortest path to a food source and sharing this information with other ants by depositing pheromone. In the field of ant colony optimization (ACO), models of collective intelligence of ants are transformed into useful optimization techniques that find applications in computer networking. In this survey, the problem-solving paradigm of ACO is explicated and compared to traditional routing algorithms along the issues of routing information, routing overhead and adaptivity. The contributions of this survey include 1) providing a comparison and critique of the state-of-the-art approaches for mitigating stagnation (a major problem in many ACO algorithms), 2) surveying and comparing three major research efforts in applying ACO to routing and load-balancing, and 3) discussing new directions and identifying open problems. The approaches for mitigating stagnation discussed include: evaporation, aging, pheromone smoothing and limiting, privileged pheromone laying and pheromone-heuristic control. The survey on ACO in routing/load-balancing includes comparison and critique of ant-based control and its ramifications, AntNet and its extensions, as well as ASGA and SynthECA. Discussions on new directions include ongoing work of the authors in applying multiple ant colony optimization in load-balancing.
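As a rough illustration of the evaporation idea mentioned among the stagnation-mitigation approaches, the following sketch shows how pheromone maintenance spreads traffic over alternative next hops. The table layout, evaporation rate, and reinforcement rule are assumptions of this sketch, not taken from the survey.

```python
# Minimal sketch of ant-based routing with pheromone evaporation, one of the
# stagnation-mitigation approaches the survey compares. The table layout, the
# evaporation rate rho, and the reinforcement rule are illustrative assumptions.

def evaporate(pheromone, rho=0.1):
    """Decay every entry so that no single next hop can dominate forever."""
    for dest in pheromone:
        for next_hop in pheromone[dest]:
            pheromone[dest][next_hop] *= (1.0 - rho)

def reinforce(pheromone, dest, next_hop, trip_time):
    """A backward ant deposits pheromone inversely proportional to trip time."""
    pheromone[dest][next_hop] += 1.0 / max(trip_time, 1e-6)

def next_hop_probabilities(pheromone, dest):
    """Forward data probabilistically, spreading load over alternative next hops."""
    total = sum(pheromone[dest].values())
    return {n: p / total for n, p in pheromone[dest].items()}

# Example: two candidate next hops toward destination "D".
table = {"D": {"n1": 0.5, "n2": 0.5}}
reinforce(table, "D", "n1", trip_time=2.0)
evaporate(table)
print(next_hop_probabilities(table, "D"))
```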

503 citations


Book ChapterDOI
21 Feb 2003
TL;DR: This paper explores the space of designing load-balancing algorithms that use the notion of “virtual servers” and presents three schemes that differ primarily in the amount of information used to decide how to re-arrange load.
Abstract: Most P2P systems that provide a DHT abstraction distribute objects among “peer nodes” by choosing random identifiers for the objects. This could result in an O(log N) imbalance. Besides, P2P systems can be highly heterogeneous, i.e. they may consist of peers that range from old desktops behind modem lines to powerful servers connected to the Internet through high-bandwidth lines. In this paper, we address the problem of load balancing in such P2P systems. We explore the space of designing load-balancing algorithms that use the notion of “virtual servers”. We present three schemes that differ primarily in the amount of information used to decide how to re-arrange load. Our simulation results show that even the simplest scheme is able to balance the load within 80% of the optimal value, while the most complex scheme is able to balance the load within 95% of the optimal value.
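To make the virtual-server idea concrete, here is a small sketch of rebalancing by moving whole virtual servers from heavy to light nodes. The greedy heaviest-to-lightest move is an assumption for illustration, not one of the paper's three schemes.

```python
# Illustrative sketch of the virtual-server idea: load moves between physical
# nodes by transferring whole virtual servers rather than re-hashing objects.
# The greedy move below is an assumption, not one of the paper's schemes.

def rebalance_once(nodes):
    """nodes: dict mapping node id -> list of virtual-server loads. One greedy move."""
    load = {n: sum(vs) for n, vs in nodes.items()}
    heavy = max(load, key=load.get)
    light = min(load, key=load.get)
    gap = load[heavy] - load[light]
    if gap <= 0 or not nodes[heavy]:
        return False
    # Pick the virtual server whose transfer best narrows the gap, and move it
    # only if the imbalance between the two nodes actually shrinks.
    vs = min(nodes[heavy], key=lambda v: abs(gap - 2 * v))
    if abs(gap - 2 * vs) >= gap:
        return False
    nodes[heavy].remove(vs)
    nodes[light].append(vs)
    return True

cluster = {"a": [5.0, 3.0, 2.0], "b": [1.0], "c": [2.0, 2.0]}
while rebalance_once(cluster):
    pass
print(cluster)
```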

473 citations


Proceedings ArticleDOI
20 Oct 2003
TL;DR: This paper describes a protocol called ReInForM to deliver packets at desired reliability at a proportionate communication cost, and shows that for uniform unit disk graphs, the number of edge-disjoint paths between nodes is equal to the average node degree with very high probability.
Abstract: Sensor networks are meant for sensing and disseminating information about the environment they sense. The criticality of a sensed phenomenon determines its importance to the end user. Hence, data dissemination in a sensor network should be information aware. Such information awareness is essential firstly to disseminate critical information more reliably and secondly to consume network resources proportional to the criticality of information. In this paper, we describe a protocol called ReInForM to deliver packets at desired reliability at a proportionate communication cost. ReInForM sends multiple copies of each packet along multiple paths from source to sink, such that data is delivered at the desired reliability. It uses the concept of dynamic packet state in the context of sensor networks, to control the number of paths required for the desired reliability, and does so using only local knowledge of channel error rates and topology. We show that for uniform unit disk graphs, the number of edge-disjoint paths between nodes is equal to the average node degree with very high probability. ReInForM utilizes this property in its randomized forwarding mechanism which results in use of all possible paths and efficient load balancing.
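A back-of-the-envelope sketch of the underlying idea: choose how many copies to send so that at least one is likely to survive. The path-independence assumption and the formula below are illustrative, not the paper's exact dynamic-packet-state computation.

```python
import math

# Sketch: pick the number of (roughly independent) paths so that the chance of
# at least one copy arriving meets the desired reliability, using only the
# local channel error rate and hop count. Independence is an assumption here.

def paths_needed(channel_error, hops, desired_reliability):
    p_path = (1.0 - channel_error) ** hops          # one copy survives its path
    if p_path >= desired_reliability:
        return 1
    if p_path <= 0.0:
        return float("inf")
    # Smallest P with 1 - (1 - p_path)**P >= desired_reliability.
    return math.ceil(math.log(1.0 - desired_reliability) / math.log(1.0 - p_path))

print(paths_needed(channel_error=0.1, hops=5, desired_reliability=0.95))
```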

456 citations


Proceedings ArticleDOI
11 May 2003
TL;DR: An algorithm is proposed to network these sensors into well-defined clusters with less energy-constrained gateway nodes acting as cluster-heads, and balance load among these gateways to improve the lifetime of the system.
Abstract: Wireless sensor networks have the potential to monitor environments for both military and civil applications. Due to inhospitable conditions these sensors are not always deployed uniformly in the area of interest. Since sensors are generally constrained in on-board energy supply, efficient management of the network is crucial to extend the life of the sensors. Sensors' energy cannot support long haul communication to reach a remote command site and thus requires many levels of hops or a gateway to forward the data on behalf of the sensor. In this paper, we propose an algorithm to network these sensors into well-defined clusters with less energy-constrained gateway nodes acting as cluster-heads, and balance load among these gateways. Simulation results show how our approach can balance the load and improve the lifetime of the system.
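A toy sketch of the load-balanced clustering idea: assign each sensor to the least-loaded gateway within range. The range model and unit cost per sensor are simplifying assumptions, not the paper's algorithm.

```python
import math

# Toy sketch: each sensor joins the least-loaded gateway it can reach.
# The disk range model and unit load per sensor are illustrative assumptions.

def cluster(sensors, gateways, comm_range):
    load = {g: 0 for g in gateways}
    assignment = {}
    for sid, (sx, sy) in sensors.items():
        reachable = [g for g, (gx, gy) in gateways.items()
                     if math.hypot(sx - gx, sy - gy) <= comm_range]
        if not reachable:
            continue                        # would need multi-hop forwarding
        g = min(reachable, key=lambda g: load[g])
        assignment[sid] = g
        load[g] += 1
    return assignment, load

sensors = {"s1": (0, 1), "s2": (1, 0), "s3": (9, 9), "s4": (2, 2)}
gateways = {"g1": (0, 0), "g2": (10, 10)}
print(cluster(sensors, gateways, comm_range=5))
```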

417 citations


Proceedings ArticleDOI
22 Apr 2003
TL;DR: The objective is to compute the feasible partitioning that results in minimum energy consumption on multiple identical processors by using variable voltage earliest-deadline-first scheduling and develops a framework where load balancing plays a major role in producing energy-efficient partitionings.
Abstract: In this paper, we address the problem of partitioning periodic real-time tasks in a multiprocessor platform by considering both feasibility and energy-awareness perspectives: our objective is to compute the feasible partitioning that results in minimum energy consumption on multiple identical processors by using variable voltage earliest-deadline-first scheduling. We show that the problem is NP-hard in the strong sense on m ≥ 2 processors even when feasibility is guaranteed a priori. Then, we develop our framework where load balancing plays a major role in producing energy-efficient partitionings. We evaluate the feasibility and energy-efficiency performances of partitioning heuristics experimentally.
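One balanced-partitioning heuristic in the spirit of the framework is worst-fit decreasing by task utilization, which keeps per-processor loads even so voltages can be scaled down. The sketch below uses the standard EDF feasibility test (utilization at most 1 per processor); presenting it as the paper's own heuristic would be an overstatement.

```python
# Sketch of worst-fit decreasing partitioning: assign each task (by utilization,
# largest first) to the currently least-loaded processor. Task format and the
# choice of this particular heuristic are assumptions of this sketch.

def worst_fit_decreasing(utilizations, num_procs):
    procs = [[] for _ in range(num_procs)]
    loads = [0.0] * num_procs
    for u in sorted(utilizations, reverse=True):
        i = min(range(num_procs), key=lambda k: loads[k])    # least-loaded proc
        if loads[i] + u > 1.0:
            return None                    # no feasible EDF partition this way
        procs[i].append(u)
        loads[i] += u
    return procs, loads

print(worst_fit_decreasing([0.5, 0.4, 0.3, 0.3, 0.2, 0.1], num_procs=2))
```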

329 citations


Book ChapterDOI
21 Feb 2003
TL;DR: This paper suggests the direct application of the “power of two choices” paradigm, whereby an item is stored at the less loaded of two (or more) random alternatives, and considers how associating a small constant number of hash values with a key can be extended to support other load balancing strategies, including load-stealing or load-shedding, as well as providing natural fault-tolerance mechanisms.
Abstract: Distributed hash tables have recently become a useful building block for a variety of distributed applications. However, current schemes based upon consistent hashing require both considerable implementation complexity and substantial storage overhead to achieve desired load balancing goals. We argue in this paper that these goals can be achieved more simply and more cost-effectively. First, we suggest the direct application of the “power of two choices” paradigm, whereby an item is stored at the less loaded of two (or more) random alternatives. We then consider how associating a small constant number of hash values with a key can naturally be extended to support other load balancing strategies, including load-stealing or load-shedding, as well as providing natural fault-tolerance mechanisms.
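A minimal sketch of the "power of two choices" insertion the paper advocates: hash the key with d hash functions and place the item on the least-loaded candidate. Node addressing by index is a simplification of this sketch.

```python
import hashlib

# Power-of-two-choices placement: d candidate nodes per key, item goes to the
# least loaded one; lookups probe the same d candidates. Addressing nodes by
# index is a simplification for illustration.

def candidates(key, num_nodes, d=2):
    return [int(hashlib.sha1(f"{i}:{key}".encode()).hexdigest(), 16) % num_nodes
            for i in range(d)]

def insert(load, key, d=2):
    """load: list of item counts per node. Returns the chosen node."""
    cands = candidates(key, len(load), d)
    node = min(cands, key=lambda n: load[n])
    load[node] += 1
    return node

def lookup_candidates(key, num_nodes, d=2):
    # A lookup probes the same d candidates, so no extra routing state is needed.
    return candidates(key, num_nodes, d)

loads = [0] * 8
for k in range(100):
    insert(loads, f"item-{k}")
print(loads)
```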

305 citations


Book ChapterDOI
21 Feb 2003
TL;DR: This paper explores a new point in design space in which increased memory usage and constant background communication overheads are tolerated to reduce file lookup times and increase stability to failures and churn.
Abstract: A peer-to-peer (p2p) distributed hash table (DHT) system allows hosts to join and fail silently (or leave), as well as to insert and retrieve files (objects). This paper explores a new point in design space in which increased memory usage and constant background communication overheads are tolerated to reduce file lookup times and increase stability to failures and churn. Our system, called Kelips, uses peer-to-peer gossip to partially replicate file index information. In Kelips, (a) under normal conditions, file lookups are resolved within 1 RPC, independent of system size, and (b) membership changes (e.g., even when a large number of nodes fail) are detected and disseminated to the system quickly. Per-node memory requirements are small in medium-sized systems. When there are failures, lookup success is ensured through query rerouting. Kelips achieves load balancing comparable to existing systems. Locality is supported by using topologically aware gossip mechanisms. Initial results of an ongoing experimental study are also discussed.

298 citations



Proceedings ArticleDOI
09 Jul 2003
TL;DR: The results reveal that, in comparison with a general single-path routing protocol, the multipath routing mechanism creates more overhead but provides better performance in congestion and capacity, provided that the route length is within a certain upper bound which is derivable.
Abstract: Multipath routing protocols that provide improved throughput and route resilience as compared with single-path routing have been explored in detail in the context of wired networks. However, the multipath routing mechanism has not been explored thoroughly in the domain of ad hoc networks. In this paper, we analyze and compare reactive single-path and multipath routing with load balance mechanisms in ad hoc networks, in terms of overhead, traffic distribution and connection throughput. The results reveal that, in comparison with a general single-path routing protocol, the multipath routing mechanism creates more overhead but provides better performance in congestion and capacity, provided that the route length is within a certain upper bound which is derivable. The analytical results are further confirmed by simulation.

Patent
30 Jun 2003
TL;DR: In this paper, a processor-accessible media include processor-executable instructions that, when executed, direct a system to perform actions including: receiving host status information from multiple hosts; and making load balancing decisions responsive to the received host status.
Abstract: In a first exemplary implementation, one or more processor-accessible media include processor-executable instructions that, when executed, direct a system to perform actions including: receiving host status information from multiple hosts; and making load balancing decisions responsive to the received host status information. In a second exemplary implementation, a system includes: session tracking infrastructure that is adapted to acquire session information; and load balancing infrastructure that is adapted to utilize the session information when routing connection requests to multiple hosts. In a third exemplary implementation, one or more processor-accessible media include processor-executable instructions that, when executed, direct a system to perform actions including: acquiring session information for multiple session contexts from one or more applications that established the multiple session contexts; and routing network traffic to the one or more applications responsive to the acquired session information.
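A hedged sketch of the first implementation described above: hosts report status, and the balancer routes each new connection to the host reporting the most available capacity. The message format and the selection rule are illustrative assumptions, not the patent's claimed mechanism.

```python
# Sketch: load balancing decisions responsive to received host status.
# The status format (a 0..1 load figure) and least-loaded selection are assumed.

class LoadBalancer:
    def __init__(self):
        self.host_status = {}              # host id -> last reported load (0..1)

    def receive_status(self, host, load):
        self.host_status[host] = load

    def route(self):
        if not self.host_status:
            raise RuntimeError("no host status received yet")
        return min(self.host_status, key=self.host_status.get)

lb = LoadBalancer()
lb.receive_status("host-a", 0.7)
lb.receive_status("host-b", 0.2)
print(lb.route())                          # -> host-b
```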

Proceedings ArticleDOI
25 Aug 2003
TL;DR: This paper considers how optics can be used to scale capacity and reduce power in a router, and describes two different implementations based on technology available within the next three years.
Abstract: Routers built around a single-stage crossbar and a centralized scheduler do not scale, and (in practice) do not provide the throughput guarantees that network operators need to make efficient use of their expensive long-haul links. In this paper we consider how optics can be used to scale capacity and reduce power in a router. We start with the promising load-balanced switch architecture proposed by C-S. Chang. This approach eliminates the scheduler, is scalable, and guarantees 100% throughput for a broad class of traffic. But several problems need to be solved to make this architecture practical: (1) Packets can be mis-sequenced, (2) Pathological periodic traffic patterns can make throughput arbitrarily small, (3) The architecture requires a rapidly configuring switch fabric, and (4) It does not work when linecards are missing or have failed. In this paper we solve each problem in turn, and describe new architectures that include our solutions. We motivate our work by designing a 100Tb/s packet-switched router arranged as 640 linecards, each operating at 160Gb/s. We describe two different implementations based on technology available within the next three years.

Proceedings ArticleDOI
09 Jul 2003
TL;DR: It is demonstrated that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest.
Abstract: Third generation code-division multiple access (CDMA) systems propose to provide packet data service through a high speed shared channel with intelligent and fast scheduling at the base-stations. In the current approach base-stations schedule independently of other base-stations. We consider scheduling schemes in which scheduling decisions are made jointly for a cluster of cells thereby enhancing performance through interference avoidance and dynamic load balancing. We consider algorithms that assume complete knowledge of the channel quality information from each of the base-stations to the terminals at the centralized scheduler as well as a two-tier scheduling strategy that assumes only the knowledge of the long term channel conditions at the centralized scheduler. We demonstrate that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest. Since the load balancing is achieved through centralized scheduling, our scheme can adapt to time-varying traffic patterns dynamically.

Book ChapterDOI
R. Levy, J. Nagarajarao, Giovanni Pacifici, A. Spreitzer, Asser N. Tantawi, Alaa Youssef
24 Mar 2003
TL;DR: The average response time is used as the performance metric for a performance management system for cluster-based Web services that supports multiple classes of Web services traffic and allocates server resources dynamically so as to maximize the expected value of a given cluster utility function in the face of fluctuating loads.
Abstract: We present an architecture and prototype implementation of a performance management system for cluster-based Web services. The system supports multiple classes of Web services traffic and allocates server resources dynamically so as to maximize the expected value of a given cluster utility function in the face of fluctuating loads. The cluster utility is a function of the performance delivered to the various classes, and this leads to Differentiated Service. In this paper we use the average response time as the performance metric. The management system is transparent: it requires no changes in the client code, the server code, or the network interface between them. The system performs three performance management tasks: resource allocation, load balancing, and server overload protection. We use two nested levels of management mechanism. The inner level centers on queuing and scheduling of request messages. The outer level is a feedback control loop that periodically adjusts the scheduling weights and server allocations of the inner level. The feedback controller is based on an approximate first-principles model of the system, with parameters derived from continuous monitoring. We focus on SOAP-based Web services. We report experimental results that show the dynamic behavior of the system.
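The outer control loop can be pictured as follows: each class's measured response time is compared to a target and its scheduling weight is nudged accordingly. The proportional rule, gain, and normalization below are assumptions of this sketch, not the paper's first-principles controller.

```python
# Sketch of a feedback step that adjusts per-class scheduling weights toward
# response-time targets. Gain, proportional rule and normalization are assumed.

def adjust_weights(weights, measured_rt, target_rt, gain=0.1):
    """weights: dict class -> scheduling weight; response times in seconds."""
    for cls in weights:
        error = (measured_rt[cls] - target_rt[cls]) / target_rt[cls]
        weights[cls] = max(0.01, weights[cls] * (1.0 + gain * error))
    total = sum(weights.values())
    return {cls: w / total for cls, w in weights.items()}   # renormalize

w = {"gold": 0.5, "silver": 0.3, "bronze": 0.2}
measured = {"gold": 0.30, "silver": 0.20, "bronze": 0.05}
target = {"gold": 0.10, "silver": 0.25, "bronze": 0.50}
print(adjust_weights(w, measured, target))
```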

Patent
07 Aug 2003
TL;DR: A communications network switch provides a method and apparatus for balancing the loading of the aggregated network links of a trunk by means of a load balancing unit, thereby increasing the data transmission rate through the trunk.
Abstract: A communications network switch includes a plurality of network ports for transmitting and receiving packets to and from network nodes via network links, each of the packets having a destination address and a source address, the switch being operative to communicate with at least one trunking network device via at least one trunk formed by a plurality of aggregated network links. The communications network switch provides a method and apparatus for balancing the loading of aggregated network links of the trunk, thereby increasing the data transmission rate through the trunk. The switch includes: a packet buffer for temporarily storing a packet received at a source port of the network ports, the packet having a source address value, and a destination address value indicating a destination node that is communicatively coupled with the switch via a data path including a trunk; a packet routing unit for determining a destination trunked port associated with the packet, the destination trunked port including a subset of the plurality of network ports, the destination trunked port being coupled to the destination node via the data path; and a load balancing unit for selecting a destination port associated with the packet from the subset of network ports; whereby transmission loading of the aggregated network links of the trunk is balanced. In varying embodiments, the load balancing unit is operative to select destination ports from the subsets of network ports as a function of source port ID values, source addresses, and destination addresses.
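The port-selection step can be illustrated as a function of the packet's source and destination addresses, so packets of one flow stay on one link while different flows spread across the aggregated links. The CRC hash below is a stand-in assumption, not the patent's actual function.

```python
import zlib

# Toy sketch: pick the member port of a trunk as a function of the source and
# destination addresses. The CRC-based hash is an illustrative assumption.

def select_trunk_port(trunk_ports, src_addr, dst_addr):
    key = f"{src_addr}->{dst_addr}".encode()
    return trunk_ports[zlib.crc32(key) % len(trunk_ports)]

trunk = [5, 6, 7, 8]                       # member ports of the aggregated trunk
print(select_trunk_port(trunk, "00:11:22:33:44:55", "66:77:88:99:aa:bb"))
```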

Proceedings ArticleDOI
19 May 2003
TL;DR: This paper presents QUEST, a QoS framework that can simultaneously achieve QoS assurances and good load balancing in SON, and provides an initial service composition and dynamic service composition, to address the problem.
Abstract: Many value-added and content delivery services are being offered via service level agreements (SLAs). These services can be interconnected to form a service overlay network (SON) over the Internet. Service composition in SON has emerged as a cost-effective approach to quickly creating new services. Previous research has addressed the reliability, adaptability, and compatibility issues for composed services. However little has been done to manage generic quality-of-service (QoS) provisioning for composed services, based on the SLA contracts of individual services. In this paper we present QUEST, a QoS assUred composEable Service infrasTructure, to address the problem. The QUEST framework provides: (1) initial service composition, which can compose a qualified service path under multiple QoS constraints (e.g., response time, availability). If multiple qualified service paths exist, QUEST chooses the best one according to the load balancing metric; and (2) dynamic service composition, which can dynamically recompose the service path to quickly recover from service outages and QoS violations. Different from the previous work, QUEST can simultaneously achieve QoS assurances and good load balancing in SON.

Book ChapterDOI
01 Jan 2003
TL;DR: This chapter addresses power conservation for clusters of workstations or PCs by developing systems that dynamically turn cluster nodes on - to be able to handle the load imposed on the system efficiently - and off - to save power under lighter load.
Abstract: In this chapter we address power conservation for clusters of workstations or PCs. Our approach is to develop systems that dynamically turn cluster nodes on - to be able to handle the load imposed on the system efficiently - and off - to save power under lighter load. The key component of our systems is an algorithm that makes cluster reconfiguration decisions by considering the total load imposed on the system and the power and performance implications of changing the current configuration. The algorithm is implemented in two common cluster-based systems: a network server and an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems.
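A threshold-style sketch of the reconfiguration decision described above: nodes are powered on when the cluster runs hot and powered off when the remaining nodes can serve the load. Capacity per node, thresholds, and hysteresis are illustrative assumptions, not the chapter's algorithm.

```python
# Sketch: decide whether to turn a cluster node on or off given total load.
# Per-node capacity, thresholds and the hysteresis check are assumed values.

def reconfigure(active_nodes, total_load, capacity_per_node,
                high=0.8, low=0.4, max_nodes=16):
    util = total_load / (active_nodes * capacity_per_node)
    if util > high and active_nodes < max_nodes:
        return active_nodes + 1            # add a node to absorb the load
    if util < low and active_nodes > 1:
        # Only power a node down if the survivors stay below the high mark.
        if total_load / ((active_nodes - 1) * capacity_per_node) < high:
            return active_nodes - 1
    return active_nodes                    # keep the current configuration

nodes = 4
for load in [120, 300, 340, 200, 90, 60]:
    nodes = reconfigure(nodes, load, capacity_per_node=100)
    print(load, "->", nodes, "nodes")
```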

Patent
Ballard C. Bare1
12 Feb 2003
TL;DR: In this paper, the authors proposed a method for disseminating MAC addresses for discovered network devices through a plurality of network switches which cooperate to enable maintaining multiple active paths between such devices.
Abstract: A method for disseminating MAC addresses for discovered network devices through a plurality of network switches which cooperate to enable maintaining multiple active paths between such devices. Where a plurality of network switches cooperate through load balancing protocols to enable simultaneous use of multiple paths between, protocols of the present invention permit newly discovered MAC addresses attached to ports of an edge switch to be disseminated through the network switches. When an edge switch detects a device having a previously unknown MAC address, a MAC address information packet is generated and disseminated from the edge switch to the other switches of the same load balance domain. The packet is preferably, in effect, broadcast using the pruned broadcast tree constructed and maintained by other protocols related to the present invention. Each intermediate switch on the broadcast tree eventually receives the MAC address information packet from a neighboring switch in the load balance domain. The received MAC address information packet is used to update MAC address tables in the receiving switch. If appropriate in accordance with the pruned broadcast tree, the received MAC address packet is forwarded from each receiving intermediate switch to other neighbor switches in the load balance domain.

Book ChapterDOI
30 Jun 2003
TL;DR: The number of steps required to reach a pure Nash Equilibrium in a load balancing scenario where each job behaves selfishly and attempts to migrate to a machine which will minimize its cost is studied.
Abstract: We study the number of steps required to reach a pure Nash Equilibrium in a load balancing scenario where each job behaves selfishly and attempts to migrate to a machine which will minimize its cost. We consider a variety of load balancing models, including identical, restricted, related and unrelated machines. Our results have a crucial dependence on the weights assigned to jobs. We consider arbitrary weights, integer weights, K distinct weights and identical (unit) weights. We look both at an arbitrary schedule (where the only restriction is that a job migrates to a machine which lowers its cost) and specific efficient schedulers (such as allowing the largest weight job to move first).
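The migration process whose convergence the paper studies can be sketched as best-response dynamics: jobs repeatedly move to a machine that lowers their cost until no job can improve, which is a pure Nash equilibrium. Identical machines and the job-picking order below are simplifying assumptions; the paper also treats restricted, related and unrelated machines and specific schedulers such as largest-weight-first.

```python
# Sketch of selfish best-response dynamics on identical machines: a job's cost
# is the load of its machine; it migrates whenever another machine would give
# it a strictly lower cost. Stops at a pure Nash equilibrium.

def best_response_steps(weights, num_machines):
    assign = {j: j % num_machines for j in range(len(weights))}
    load = [0.0] * num_machines
    for j, m in assign.items():
        load[m] += weights[j]
    steps = 0
    improved = True
    while improved:
        improved = False
        for j, m in assign.items():
            target = min(range(num_machines), key=lambda k: load[k])
            # Job j benefits if the target's load plus its own weight is lower
            # than the load it currently experiences on machine m.
            if load[target] + weights[j] < load[m]:
                load[m] -= weights[j]
                load[target] += weights[j]
                assign[j] = target
                steps += 1
                improved = True
    return steps, load

print(best_response_steps([4.0, 3.0, 2.0, 2.0, 1.0], num_machines=2))
```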

Proceedings ArticleDOI
01 May 2003
TL;DR: A load-balanced adaptive routing algorithm for torus networks, GOAL - Globally Oblivious Adaptive Locally - that provides high throughput on adversarial traffic patterns, matching or exceeding fully randomized routing and exceeding the worst-case performance of Chaos, RLB, and minimal routing by more than 40%.
Abstract: We introduce a load-balanced adaptive routing algorithm for torus networks, GOAL - Globally Oblivious Adaptive Locally - that provides high throughput on adversarial traffic patterns, matching or exceeding fully randomized routing and exceeding the worst-case performance of Chaos [2], RLB [14], and minimal routing [8] by more than 40%. GOAL also preserves locality to provide up to 4.6× the throughput of fully randomized routing [19] on local traffic. GOAL achieves global load balance by randomly choosing the direction to route in each dimension. Local load balance is then achieved by routing in the selected directions adaptively. We compare the throughput, latency, stability and hot-spot performance of GOAL to six previously published routing algorithms on six specific traffic patterns and 1,000 randomly generated permutations.
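The first step described in the abstract, randomly choosing the direction to route in each dimension, can be pictured as below. The distance-based weighting used here is one plausible choice and an assumption of this sketch, not necessarily the exact distribution the paper uses.

```python
import random

# Rough sketch of per-dimension direction selection on a k-ary torus: favor the
# shorter way but sometimes take the longer way so channel load spreads over
# both halves of each ring. Adaptive routing within the chosen quadrant (the
# second half of GOAL) is not modeled here.

def choose_directions(src, dst, k):
    """src, dst: coordinate tuples in a k-ary torus. Returns +1/-1 per dimension."""
    dirs = []
    for s, d in zip(src, dst):
        delta = (d - s) % k                # hops if we travel in the + direction
        if delta == 0:
            dirs.append(+1)                # already aligned; direction irrelevant
            continue
        p_plus = (k - delta) / k           # assumed weighting toward the short way
        dirs.append(+1 if random.random() < p_plus else -1)
    return dirs

print(choose_directions(src=(0, 0), dst=(1, 5), k=8))
```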

Proceedings ArticleDOI
01 Dec 2003
TL;DR: A node-centric algorithm that constructs a load-balanced tree in sensor networks of asymmetric architecture is designed and it is found that the algorithm achieves routing trees that are more effectively balanced than the routing based on breadth-first search (BFS) and shortest-path obtained by Dijkstra's algorithm.
Abstract: By spreading the workload across a sensor network, load balancing reduces hot spots in the sensor network and increases the energy lifetime of the sensor network. In this paper, we design a node-centric algorithm that constructs a load-balanced tree in sensor networks of asymmetric architecture. We utilize a Chebyshev Sum metric to evaluate via simulation the balance of the routing trees produced by our algorithm. We find that our algorithm achieves routing trees that are more effectively balanced than the routing based on breadth-first search (BFS) and shortest-path obtained by Dijkstra's algorithm.
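A very small sketch of the node-centric intuition: when a node attaches to the tree, it picks the candidate parent whose top-level branch currently carries the least load, so the branches under the sink stay even. The branch-load bookkeeping below is an illustrative assumption, not the paper's algorithm or its Chebyshev Sum evaluation metric.

```python
# Sketch: balanced parent selection when growing a data-gathering tree.
# All loads, the per-node load of 1, and the branch bookkeeping are assumed.

def attach(node, candidate_parents, parent_of, branch_of, branch_load, load=1):
    """candidate_parents: neighbors of `node` that are already in the tree."""
    best = min(candidate_parents, key=lambda p: branch_load[branch_of[p]])
    parent_of[node] = best
    branch_of[node] = branch_of[best]
    branch_load[branch_of[best]] += load
    return best

# Sink "s" has two first-hop children, "a" and "b", each rooting a branch.
parent_of = {"a": "s", "b": "s"}
branch_of = {"a": "a", "b": "b"}
branch_load = {"a": 1, "b": 1}
for n, neighbors in [("c", ["a"]), ("d", ["a", "b"]), ("e", ["a", "b"])]:
    attach(n, neighbors, parent_of, branch_of, branch_load)
print(parent_of, branch_load)
```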

Patent
21 Oct 2003
TL;DR: In this article, a system comprising a host system (300), a driver (302), and a plurality of host bus adapters (312-316) in communication with the driver is described.
Abstract: A system comprising a host system (300), a driver (302) in communication with a host system, and a plurality of host bus adapters (312-316) in communication with the driver. The host bus adapters provide a plurality of data transmission paths between the host system and a storage device. The driver is operable to adjust data transmission loads between the paths without burdening the operating system (318).

Proceedings ArticleDOI
02 Apr 2003
TL;DR: Simulation results show that irrespective of the routing protocol used, this approach improves the lifetime of the system, and load-balanced clustering increases the system stability and improves the communication between different nodes in the system.
Abstract: Wireless sensor networks have received increasing attention in recent years. In many military and civil applications of sensor networks, sensors are constrained in onboard energy supply and are left unattended. Energy, size and cost constraints of such sensors limit the communication range. Therefore, multi-hop wireless connectivity is required to forward data on their behalf to a remote command site. In this paper, we investigate the performance of an algorithm to network these sensors into well defined clusters with less-energy-constrained gateway nodes acting as clusterheads as well as to balance the load among these gateways. Load-balanced clustering increases the system stability and improves the communication between different nodes in the system. To evaluate the efficiency of this approach, we studied the performance of sensor networks by applying various different routing protocols. Simulation results show that irrespective of the routing protocol used, this approach improves the lifetime of the system.

Patent
07 Feb 2003
TL;DR: In this article, a controller provides access to two or more disparate networks in parallel, through direct or indirect network interfaces, through which traffic is routed through one or more other disparate networks.
Abstract: Methods, configured storage media, and systems are provided for communications using two or more disparate networks in parallel to provide load balancing across network connections, greater reliability, and/or increased security. A controller provides access to two or more disparate networks in parallel, through direct or indirect network interfaces. When one attached network fails, the failure is sensed by the controller and traffic is routed through one or more other disparate networks. When all attached disparate networks are operating, one controller preferably balances the load between them.

Patent
David L. Chavez
30 Apr 2003
TL;DR: In this paper, a load balancing method for packet-switched networks is presented, which includes the steps of: (1) providing a set of Internet Protocol (IP) addresses corresponding to a Universal Resource Locator (URL), wherein the ordering of the IP addresses in the set of IP addresses is indicative of a corresponding desirability of contacting each of the IP addresses and wherein the set of IP addresses are in a first order; (2) receiving activity-related information associated with at least one of the IP addresses; and (3) reordering the set of IP addresses to be in a second order different from the first order.
Abstract: A method for effecting load balancing in a packet-switched network is provided. In one embodiment, the method includes the steps of: (a) providing a set of Internet Protocol (IP) addresses corresponding to a Universal Resource Locator (URL), wherein the ordering of the IP addresses in the set of IP addresses is indicative of a corresponding desirability of contacting each of the IP addresses and wherein the set of IP addresses are in a first order; (b) receiving activity-related information associated with at least one of the IP addresses; and (c) reordering the set of IP addresses to be in a second order different from the first order.
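An illustrative sketch of the reordering step: the list of IP addresses for one URL is re-sorted whenever activity-related information arrives. The load metric and the ascending sort are assumptions for illustration, not the patent's claimed method.

```python
# Sketch: keep a URL's IP address list ordered by desirability, re-sorting
# whenever new activity information (here, a simple load report) arrives.

class AddressList:
    def __init__(self, url, addresses):
        self.url = url
        self.addresses = list(addresses)       # first order
        self.activity = {a: 0.0 for a in addresses}

    def report_activity(self, address, load):
        self.activity[address] = load
        # Second order: least-loaded (most desirable) address first.
        self.addresses.sort(key=lambda a: self.activity[a])

    def preferred(self):
        return self.addresses[0]

lst = AddressList("www.example.com", ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
lst.report_activity("10.0.0.1", 0.9)
lst.report_activity("10.0.0.3", 0.1)
print(lst.addresses, lst.preferred())
```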

Patent
15 Jan 2003
TL;DR: Load balancing among fast reroute backup tunnels (508, 510) in a label switched network (200) is achieved in this article, where a packing algorithm is used to associate individual label switched paths (LSPs) with individual backup tunnels.
Abstract: Load balancing among fast reroute backup tunnels (508, 510) in a label switched network (200) is achieved. M backup tunnels (508, 510) may be used to protect N parallel paths (502, 504, 406). A single backup tunnel (508, 510) may protect multiple parallel paths (502, 504, 406), saving on utilization of network resources such as router state and signaling information. A single path (502, 504, 406) may be protected by multiple backup tunnels (508, 510), assuring that bandwidth guarantees are met under failure conditions even when no one backup tunnel (508, 510) with sufficient bandwidth may be found. A packing algorithm is used to associate individual label switched paths (LSPs) with individual backup tunnels (508, 510). If an LSP cannot be assigned to a backup tunnel (508, 510), it may be either rejected, or additional bandwidth is allocated to existing backup tunnels (508, 510), or a new backup tunnel is established.
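The packing step can be pictured with a first-fit sketch: each label switched path (LSP) is assigned to the first backup tunnel with enough spare bandwidth, and otherwise rejected here (the abstract also allows growing a tunnel or adding a new one). Bandwidth figures and the first-fit policy are illustrative assumptions.

```python
# First-fit sketch of associating LSPs with backup tunnels by bandwidth.
# The policy and the numbers below are assumptions for illustration only.

def pack_lsps(lsps, tunnels):
    """lsps: dict name -> bandwidth; tunnels: dict name -> spare capacity."""
    spare = dict(tunnels)
    assignment, rejected = {}, []
    for lsp, bw in lsps.items():
        for t, free in spare.items():
            if bw <= free:
                assignment[lsp] = t
                spare[t] = free - bw
                break
        else:
            rejected.append(lsp)           # or grow a tunnel / add a new one
    return assignment, rejected, spare

lsps = {"lsp1": 40, "lsp2": 30, "lsp3": 50, "lsp4": 20}
tunnels = {"backup-a": 70, "backup-b": 60}
print(pack_lsps(lsps, tunnels))
```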

Journal ArticleDOI
TL;DR: In this paper, the authors explore the combination of DNS dispatching with redirection schemes that use centralized or distributed control on the basis of global or local state information and conclude that distributed algorithms are preferable over their centralized counterpart because they provide stable performance, take content-aware dispatching decisions, limit the percentage of redirected requests, and their implementation is much simpler than that required by centralized schemes.
Abstract: Replication of information among multiple servers is necessary to support high request rates to popular Web sites. We consider systems that maintain one interface to users, even if they consist of multiple nodes with visible IP addresses that are distributed among different networks. In these systems, first-level dispatching is achieved through the Domain Name System (DNS) during the address lookup phase. Distributed Web systems can use a request redirection mechanism as second-level dispatching because the DNS routing scheme has limited control on offered load. Redirection is always executed by the servers, but there are many alternatives that are worth investigating. We explore the combination of DNS dispatching with redirection schemes that use centralized or distributed control on the basis of global or local state information. In fully distributed schemes, DNS dispatching is carried out by simple algorithms because load sharing is taken by some redirection mechanisms that each server activates autonomously. On the other hand, in fully centralized schemes, redirection is used as a tool to enforce decisions taken by the same centralized entity that provides the first-level dispatching. We also investigate hybrid strategies. We conclude that distributed algorithms are preferable over their centralized counterpart because they provide stable performance, take content-aware dispatching decisions, limit the percentage of redirected requests, and their implementation is much simpler than that required by centralized schemes.
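The fully distributed scheme favored in the conclusion can be sketched as follows: DNS does coarse first-level dispatching, and each server autonomously redirects requests when its own load exceeds a threshold. The threshold, the choice of redirection target, and the load numbers are assumptions used only to show the control structure.

```python
import itertools
import random

# Sketch: round-robin "DNS" as first-level dispatching, plus autonomous
# server-side redirection as second-level dispatching. All values are assumed.

servers = {"s1": 0.3, "s2": 0.9, "s3": 0.5}      # locally observed utilization
rr = itertools.cycle(sorted(servers))            # simple round-robin "DNS"

def dns_dispatch():
    return next(rr)                              # first-level, coarse-grained

def maybe_redirect(server, threshold=0.8):
    """Second-level dispatching, decided autonomously by the loaded server."""
    if servers[server] <= threshold:
        return server
    lighter = [s for s in servers if servers[s] < servers[server]]
    return random.choice(lighter) if lighter else server

for _ in range(6):
    first = dns_dispatch()
    print(first, "->", maybe_redirect(first))
```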

Proceedings ArticleDOI
22 Apr 2003
TL;DR: An agent-based grid management infrastructure is coupled with a performance-driven task scheduler that has been developed for local grid load balancing, which significantly improves grid application execution performance and resource utilisation.
Abstract: Load balancing is a key concern when developing parallel and distributed computing applications. The emergence of computational grids extends this problem, where issues of cross-domain and large-scale scheduling must also be considered. In this paper an agent-based grid management infrastructure is coupled with a performance-driven task scheduler that has been developed for local grid load balancing. Each grid scheduler utilises predictive application performance data and an iterative heuristic algorithm to engineer local load balancing across multiple processing nodes. At a higher level, a hierarchy of homogeneous agents is used to represent multiple grid resources. Agents cooperate with each other to balance workload in the global grid environment using service advertisement and discovery mechanisms. A case study is included with corresponding experimental results to demonstrate that both local schedulers and agents contribute to overall grid load balancing, which significantly improves grid application execution performance and resource utilisation.

Proceedings ArticleDOI
20 Oct 2003
TL;DR: The localizer refines the overlay in a way that reflects geographic locality so as to reduce network overload and evenly balance the number of neighbors of each node in the overlay, thereby sharing the load evenly as well as improving the resilience to random node failures or disconnections.
Abstract: The growth of peer-to-peer applications on the Internet motivates interest in general purpose overlay networks. The construction of overlays connecting a large population of transient nodes poses several challenges. First, connections in the overlays should reflect the underlying network topology, in order to avoid overloading the network and to allow god application performance. Second, connectivity among active nodes of the overlay should be maintained, even in the presence of high failure rates or when a large proportion of nodes are not active. Finally, the cost of using the overlay should be spread evenly among peer nodes for fairness reasons as well as for the sake of application performance. To preserve scalability, we seek solutions to these issues that can be implemented in a fully decentralized manner and rely on local knowledge from each node. In this paper, we propose an algorithm called the localizer which addresses these three key challenges. The localizer refines the overlay in a way that reflects geographic locality so as to reduce network overload. Simultaneously, it helps to evenly balance the number of neighbors of each node in the overlay, thereby sharing the load evenly as well as improving the resilience to random node failures or disconnections. The proposed algorithm is presented and evaluated in the context of an unstructured peer-to-peer overlay network produced using the Scamp protocol. We provide a theoretical analysis of the various aspects of the algorithm. Simulation results based on a realistic network topology model confirm the analysis and demonstrate the localizer efficiency.