
Showing papers on "Load balancing (computing)" published in 2009


Proceedings ArticleDOI
16 Aug 2009
TL;DR: VL2 is a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics, and it has been demonstrated with a working prototype.
Abstract: To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2's design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2's implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds - sustaining a rate that is 94% of the maximum possible.
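
Of the three mechanisms, Valiant Load Balancing is the easiest to illustrate in isolation. The sketch below shows the idea only, not VL2's actual implementation: the switch names are hypothetical and the hash choice is an assumption; the point is that each flow is bounced off a pseudo-randomly chosen intermediate node, which spreads load evenly regardless of the traffic matrix.

```python
import hashlib

# Hypothetical intermediate-switch identifiers (illustrative only).
INTERMEDIATE_SWITCHES = ["int-1", "int-2", "int-3", "int-4"]

def pick_intermediate(flow_5tuple):
    """Valiant Load Balancing sketch: hash the flow's 5-tuple to a random-looking
    but stable choice of intermediate switch, so traffic spreads uniformly across
    paths while all packets of one flow stay on the same path (no reordering)."""
    digest = hashlib.sha256("|".join(map(str, flow_5tuple)).encode()).hexdigest()
    return INTERMEDIATE_SWITCHES[int(digest, 16) % len(INTERMEDIATE_SWITCHES)]

# Example flow: (src_ip, dst_ip, src_port, dst_port, protocol)
print(pick_intermediate(("10.0.0.5", "10.0.1.9", 52311, 80, "tcp")))
```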

2,350 citations


Proceedings ArticleDOI
16 Aug 2009
TL;DR: Experiments in the testbed demonstrate that BCube is fault tolerant, balances load, and significantly accelerates representative bandwidth-intensive applications.
Abstract: This paper presents BCube, a new network architecture specifically designed for shipping-container based, modular data centers. At the core of the BCube architecture is its server-centric network structure, where servers with multiple network ports connect to multiple layers of COTS (commodity off-the-shelf) mini-switches. Servers act as not only end hosts, but also relay nodes for each other. BCube supports various bandwidth-intensive applications by speeding up one-to-one, one-to-several, and one-to-all traffic patterns, and by providing high network capacity for all-to-all traffic. BCube exhibits graceful performance degradation as the server and/or switch failure rate increases. This property is of special importance for shipping-container data centers, since once the container is sealed and operational, it becomes very difficult to repair or replace its components. Our implementation experiences show that BCube can be seamlessly integrated with the TCP/IP protocol stack and BCube packet forwarding can be efficiently implemented in both hardware and software. Experiments in our testbed demonstrate that BCube is fault tolerant, balances load, and significantly accelerates representative bandwidth-intensive applications.

1,639 citations


Book ChapterDOI
25 Nov 2009
TL;DR: A new protocol based on gossiping is described that does scale well and provides timely detection, and is extended to discover and leverage the underlying network topology for much improved resource utilization.
Abstract: Failure Detection is valuable for system management, replication, load balancing, and other distributed services. To date, Failure Detection Services scale badly in the number of members that are being monitored. This paper describes a new protocol based on gossiping that does scale well and provides timely detection. We analyze the protocol, and then extend it to discover and leverage the underlying network topology for much improved resource utilization. We then combine it with another protocol, based on broadcast, that is used to handle partition failures.
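
The heartbeat-and-gossip mechanism the paper builds on can be sketched compactly. The class below is a toy illustration under assumed parameters (merge rule, timeout), not the protocol as specified in the paper:

```python
import time

class GossipFailureDetector:
    """Toy gossip-style failure detector: each node increments its own heartbeat,
    periodically merges tables with a randomly chosen peer, and suspects members
    whose heartbeat has not advanced within t_fail seconds."""

    def __init__(self, node_id, members, t_fail=10.0):
        self.node_id = node_id
        self.t_fail = t_fail
        self.heartbeat = {m: 0 for m in members}            # highest counter seen
        self.last_update = {m: time.time() for m in members}

    def tick(self):
        """Called locally on every gossip round."""
        self.heartbeat[self.node_id] += 1
        self.last_update[self.node_id] = time.time()

    def merge(self, remote_table):
        """Called when a gossip message carrying a peer's heartbeat table arrives."""
        for member, hb in remote_table.items():
            if hb > self.heartbeat.get(member, -1):
                self.heartbeat[member] = hb
                self.last_update[member] = time.time()

    def suspects(self):
        now = time.time()
        return [m for m, t in self.last_update.items() if now - t > self.t_fail]

a = GossipFailureDetector("A", ["A", "B", "C"])
b = GossipFailureDetector("B", ["A", "B", "C"])
b.tick()
a.merge(b.heartbeat)      # A gossips with B and learns B's fresh heartbeat
print(a.suspects())       # nothing suspected yet
```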

512 citations


Patent
31 May 2009
TL;DR: In this article, a system for commoditizing data center networking is described, which includes an interconnection topology for a data center having a plurality of servers and nodes of a network in the data center through which data packets may be routed.
Abstract: A system for commoditizing data center networking is disclosed. The system includes an interconnection topology for a data center having a plurality of servers and a plurality of nodes of a network in the data center through which data packets may be routed. The system uses a routing scheme where the routing is oblivious to the traffic pattern between nodes in the network, and wherein the interconnection topology contains a plurality of paths between one or more servers. The multipath routing may be Valiant load balancing. It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service, full mesh agility, and unregimented server capacity at low cost.

391 citations


Book
11 May 2009
TL;DR: This survey shows how to extend the primal-dual method to the setting of online algorithms and demonstrates its applicability to a wide variety of fundamental problems.
Abstract: The primal-dual method is a powerful algorithmic technique that has proved to be extremely useful for a wide variety of problems in the area of approximation algorithms for NP-hard problems. The method has its origins in the realm of exact algorithms, e.g., for matching and network flow. In the area of approximation algorithms, the primal-dual method has emerged as an important unifying design methodology, starting from the seminal work of Goemans and Williamson [60]. We show in this survey how to extend the primal-dual method to the setting of online algorithms, and show its applicability to a wide variety of fundamental problems. Among the online problems that we consider here are the weighted caching problem, generalized caching, the set-cover problem, several graph optimization problems, routing, load balancing, and the problem of allocating ad-auctions. We also show that classic online problems such as the ski rental problem and the dynamic TCP-acknowledgement problem can be solved optimally using a simple primal-dual approach. The primal-dual method has several advantages over existing methods. First, it provides a general recipe for the design and analysis of online algorithms. The linear programming formulation helps detect the difficulties of the online problem, and the analysis of the competitive ratio is direct, without a potential function appearing "out of nowhere." Finally, since the analysis is done via duality, the competitiveness of the online algorithm is with respect to an optimal fractional solution, which can be advantageous in certain scenarios.
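
Ski rental is the smallest of the problems listed and shows the flavor of the method. The sketch below follows the standard fractional primal-dual treatment of the covering LP (minimize B*x + sum_j z_j subject to x + z_j >= 1); the constant c and the update rule are the usual textbook choices, not a verbatim excerpt from the survey:

```python
def ski_rental_primal_dual(buy_cost, ski_days):
    """Fractional primal-dual ski rental: on each new ski day, rent the still
    'uncovered' fraction (1 - x) and grow the bought fraction x multiplicatively.
    The competitive ratio approaches e/(e-1) as buy_cost grows."""
    B = buy_cost
    c = (1 + 1.0 / B) ** B - 1            # normalization constant
    x = 0.0                               # primal variable: fraction of skis bought
    online_cost = 0.0
    for _ in range(ski_days):
        if x >= 1.0:
            break                         # fully bought; no further rental cost
        online_cost += 1 - x              # today's rental on the uncovered part
        x = x * (1 + 1.0 / B) + 1.0 / (c * B)
    online_cost += B * min(x, 1.0)        # pay for the bought fraction
    offline_opt = min(B, ski_days)
    return online_cost, offline_opt

print(ski_rental_primal_dual(buy_cost=10, ski_days=25))   # ratio close to e/(e-1)
```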

366 citations


Journal ArticleDOI
TL;DR: The proposed online algorithm is a simple mixture of inter- and intra-cell handover mechanisms for existing users and user association control and cell-site selection mechanisms for newly arriving users, and it uses a notion of expected throughput, rather than signal strength as in conventional systems, as the decision making metric.
Abstract: Next-generation cellular networks will provide higher cell capacity by adopting advanced physical layer techniques and broader bandwidth. Even in such networks, boundary users would suffer from low throughput due to severe intercell interference and unbalanced user distributions among cells, unless additional schemes to mitigate this problem are employed. In this paper, we tackle this problem by jointly optimizing partial frequency reuse and load-balancing schemes in a multicell network. We formulate this problem as a network-wide utility maximization problem and propose optimal offline and practical online algorithms to solve this. Our online algorithm turns out to be a simple mixture of inter- and intra-cell handover mechanisms for existing users and user association control and cell-site selection mechanisms for newly arriving users. A remarkable feature of the proposed algorithm is that it uses a notion of expected throughput as the decision making metric, as opposed to signal strength in conventional systems. Extensive simulations demonstrate that our online algorithm can not only closely approximate network-wide proportional fairness but also provide two types of gain, interference avoidance gain and load balancing gain, which yield 20~100% throughput improvement of boundary users (depending on traffic load distribution), while not penalizing total system throughput. We also demonstrate that this improvement cannot be achieved by conventional systems using universal frequency reuse and signal strength as the decision making metric.
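
The core of the decision metric is easy to sketch: a newly arriving user estimates the throughput it would actually obtain from each candidate cell (rate shared with the users already there) instead of simply picking the strongest signal. The formula below is an illustrative proportional-share estimate, not the paper's exact definition:

```python
import math

def expected_throughput(sinr_db, attached_users, bandwidth_hz=10e6):
    """Shannon-rate estimate for the candidate cell, divided by the number of
    users that would share it; a crude stand-in for 'expected throughput'."""
    sinr = 10 ** (sinr_db / 10)
    rate = bandwidth_hz * math.log2(1 + sinr)
    return rate / (attached_users + 1)

def choose_cell(candidates):
    """candidates: list of (cell_id, sinr_db, attached_users). Picks the cell
    with the best expected throughput, which need not be the strongest one."""
    return max(candidates, key=lambda c: expected_throughput(c[1], c[2]))[0]

# Cell A is stronger but heavily loaded; the load-aware metric prefers B.
print(choose_cell([("A", 18.0, 40), ("B", 10.0, 5)]))    # -> "B"
```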

335 citations


Proceedings ArticleDOI
16 Oct 2009
TL;DR: This paper presents the design and implementation of a novel memory-compression-based VM migration approach (MECOM), the first to use memory compression to provide fast, stable virtual machine migration while ensuring that virtual machine services are only slightly affected.
Abstract: Live migration of virtual machines has been a powerful tool to facilitate system maintenance, load balancing, fault tolerance, and power-saving, especially in clusters or data centers. Although pre-copy is the predominant approach in the state of the art, it is difficult for it to provide quick migration with low network overhead, because a great amount of data is transferred during migration, leading to large performance degradation of virtual machine services. This paper presents the design and implementation of a novel memory-compression-based VM migration approach (MECOM), the first to use memory compression to provide fast, stable virtual machine migration while ensuring that virtual machine services are only slightly affected. Based on memory page characteristics, we design an adaptive zero-aware compression algorithm for balancing the performance and the cost of virtual machine migration. Pages are quickly compressed in batches on the source and exactly recovered on the target. Experiments demonstrate that, compared with Xen, our system can reduce downtime by 27.1%, total migration time by 32%, and total transferred data by 68.8% on average.
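
The zero-aware, adaptive part of the compression can be pictured per memory page. The thresholds and the use of zlib below are assumptions for illustration; the paper designs its own algorithm:

```python
import zlib

PAGE_SIZE = 4096

def compress_page(page):
    """Zero-aware sketch: an all-zero page is replaced by a one-word marker;
    pages with many zero bytes get a stronger (slower) compression level,
    the rest a fast one, trading CPU cost against transferred bytes."""
    zero_bytes = page.count(0)
    if zero_bytes == PAGE_SIZE:
        return "zero", b""                    # nothing to transfer but a marker
    level = 6 if zero_bytes > PAGE_SIZE // 2 else 1
    return "zlib%d" % level, zlib.compress(page, level)

def decompress_page(kind, payload):
    if kind == "zero":
        return bytes(PAGE_SIZE)
    return zlib.decompress(payload)

page = bytes(2048) + b"\x42" * 2048           # half-zero guest page
kind, payload = compress_page(page)
assert decompress_page(kind, payload) == page # pages are recovered exactly
```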

326 citations


Journal ArticleDOI
TL;DR: A VHO decision algorithm is developed that enables a wireless access network to not only balance the overall load among all attachment points but also maximize the collective battery lifetime of mobile nodes (MNs) and maintain load balancing.
Abstract: There are currently a large variety of wireless access networks, including the emerging vehicular ad hoc networks (VANETs). A large variety of applications utilizing these networks will demand features such as real-time, high-availability, and even instantaneous high-bandwidth in some cases. Therefore, it is imperative for network service providers to make the best possible use of the combined resources of available heterogeneous networks (wireless local area networks (WLANs), Universal Mobile Telecommunications System (UMTS), VANETs, Worldwide Interoperability for Microwave Access (WiMAX), etc.) for connection support. When connections need to migrate between heterogeneous networks for performance and high-availability reasons, seamless vertical handoff (VHO) is a necessary first step. In the near future, vehicular and other mobile applications will be expected to have seamless VHO between heterogeneous access networks. With regard to VHO performance, there is a critical need to develop algorithms for connection management and optimal resource allocation for seamless mobility. In this paper, we develop a VHO decision algorithm that enables a wireless access network to not only balance the overall load among all attachment points (e.g., base stations and access points) but also maximize the collective battery lifetime of mobile nodes (MNs). In addition, when ad hoc mode is applied to 3/4G wireless data networks, VANETs, and IEEE 802.11 WLANs for a more seamless integration of heterogeneous wireless networks, we devise a route-selection algorithm for forwarding data packets to the most appropriate attachment point to maximize collective battery lifetime and maintain load balancing. Results based on a detailed performance evaluation study are also presented here to demonstrate the efficacy of the proposed algorithms.

311 citations


Journal ArticleDOI
TL;DR: A checklist is provided as a guideline so that a network designer can choose an appropriate multipath routing protocol to meet the network's application objectives.

283 citations


Journal ArticleDOI
TL;DR: This paper presents a new load balancing technique by controlling the size of WLAN cells (i.e., AP's coverage range), which is conceptually similar to cell breathing in cellular networks, and develops a set of polynomial time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP.
Abstract: Maximizing network throughput while providing fairness is one of the key challenges in wireless LANs (WLANs). This goal is typically achieved when the load of access points (APs) is balanced. Recent studies on operational WLANs, however, have shown that AP load is often substantially uneven. To alleviate such imbalance of load, several load balancing schemes have been proposed. These schemes commonly require proprietary software or hardware at the user side for controlling the user-AP association. In this paper we present a new load balancing technique by controlling the size of WLAN cells (i.e., AP's coverage range), which is conceptually similar to cell breathing in cellular networks. The proposed scheme does not require any modification either to the users or to the IEEE 802.11 standard. It only requires the ability to dynamically change the transmission power of the AP beacon messages. We develop a set of polynomial time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP. We also consider the problem of network-wide min-max load balancing. Simulation results show that the performance of the proposed method is comparable with or superior to the best existing association-based methods.
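
The mechanism can be pictured with a small greedy loop: shrink the beacon power of the most congested AP so that its edge users re-associate elsewhere, and stop when the maximum load no longer improves. This is only an illustrative loop under assumed inputs, not the paper's optimal polynomial-time algorithms:

```python
def cell_breathing(aps, users, power_levels, max_iters=50):
    """aps: AP -> index into power_levels (beacon TX power in dB).
    users: one dict per user mapping AP -> path gain (dB) at that user.
    Users associate with the strongest audible beacon (gain + TX power)."""
    def associate():
        loads = {ap: 0 for ap in aps}
        for u in users:
            best = max(aps, key=lambda ap: u[ap] + power_levels[aps[ap]])
            loads[best] += 1
        return loads

    for _ in range(max_iters):
        loads = associate()
        hot = max(loads, key=loads.get)       # most congested AP
        if aps[hot] == 0:
            break                             # cannot shrink any further
        aps[hot] -= 1                         # breathe in: lower beacon power
        if max(associate().values()) >= loads[hot]:
            aps[hot] += 1                     # no improvement: revert and stop
            break
    return aps, associate()

aps = {"AP1": 2, "AP2": 2}                    # both start at full power
power_levels = [0, 5, 10]                     # relative beacon powers (dB)
users = ([{"AP1": -50, "AP2": -90}] * 4 +     # firmly in AP1's cell
         [{"AP1": -62, "AP2": -64}] * 3 +     # boundary users
         [{"AP1": -90, "AP2": -50}])          # firmly in AP2's cell
print(cell_breathing(aps, users, power_levels))  # AP1 shrinks, loads even out
```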

215 citations


Journal ArticleDOI
31 Mar 2009
TL;DR: A new class of network architectures is presented which enables flow processing and forwarding at unprecedented flexibility and low cost.
Abstract: The Internet has seen a proliferation of specialized middlebox devices that carry out crucial network functionality such as load balancing, packet inspection and intrusion detection. Recent advances in CPU power, memory, buses and network connectivity have turned commodity PC hardware into a powerful network platform. Furthermore, commodity switch technologies have recently emerged offering the possibility to control the switching of flows in a fine-grained manner. Exploiting these new technologies, we present a new class of network architectures which enables flow processing and forwarding at unprecedented flexibility and low cost.

Proceedings ArticleDOI
14 Jun 2009
TL;DR: A mathematical framework for quantitative investigations of self-optimizing wireless networks (SON) with focus on the 3GPP Long Term Evolution (LTE) system is presented, exemplified by basic investigations on load balancing.
Abstract: We present a mathematical framework for quantitative investigations of self-optimizing wireless networks (SON) with focus on the 3GPP Long Term Evolution (LTE) system. Basic target functions, such as the signal-to-noise ratio distribution, the number of satisfied users, or energy efficiency, are derived as a figure of merit, including the impact of the adaptation of downlink transmit power, antenna tilt, and the handover parameter. The framework is exemplified by basic investigations on load balancing.

Patent
18 Nov 2009
TL;DR: In this article, a session initiation message received at a particular port is forwarded to one or more of the access servers based on the preferred access server configured for that port.
Abstract: In one embodiment, for each port of an access node in an access-based computer network, one access server of a plurality of access servers is configured as a preferred access server for that port. Upon receiving a session initiation message at a particular port, the access node forwards the session initiation message to one or more of the access servers based on the configured preferred access server for the particular port.

Proceedings ArticleDOI
25 Aug 2009
TL;DR: A simple model is designed and implemented that decreases the migration time of virtual machines through shared storage and achieves zero-downtime relocation of virtual machines by transforming them into Red Hat cluster services; a distributed load balancing algorithm, COMPARE_AND_BALANCE, based on sampling is also proposed to reach an equilibrium solution.
Abstract: EUCALYPTUS, an open source cloud-computing framework, still lacks load balancing. In this paper, we provide an implementation based on adaptive live migration of virtual machines. We design and implement a simple model which decreases the migration time of virtual machines through shared storage and achieves zero-downtime relocation of virtual machines by transforming them into Red Hat cluster services. During the migration process, we also keep the inclusion relationship between VLANs and virtual machines. We propose a distributed load balancing algorithm, COMPARE_AND_BALANCE, based on sampling to reach an equilibrium solution. The experimental results show that it converges quickly.
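
A sketch of the sampling idea behind COMPARE_AND_BALANCE, with assumed details (uniform sampling, one-VM moves, a fixed threshold); the paper's actual probabilities and cost model are not reproduced here:

```python
import random

def compare_and_balance(vm_counts, threshold=1, rounds=1000):
    """Each round, a random host compares its load with one randomly sampled
    peer and live-migrates a single VM to the peer if the difference exceeds
    the threshold; repeated sampling drives the system toward an equilibrium."""
    hosts = list(vm_counts)
    for _ in range(rounds):
        a, b = random.choice(hosts), random.choice(hosts)
        if a != b and vm_counts[a] - vm_counts[b] > threshold:
            vm_counts[a] -= 1                 # migrate one VM from a ...
            vm_counts[b] += 1                 # ... to the lighter peer b
    return vm_counts

print(compare_and_balance({"node1": 12, "node2": 3, "node3": 6}))
```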

Patent
28 May 2009
TL;DR: Techniques are described for providing load balancing functionality among multiple computing nodes, including dynamically scaling the group of nodes for which load balancing is performed; the criteria used for the dynamic scaling may be determined in various manners and based on various factors.
Abstract: Techniques are described for providing load balancing functionality among multiple computing nodes. In some situations, the provided load balancing functionality includes dynamically scaling a group of multiple computing nodes for which the load balancing is performed, such as to dynamically expand and/or shrink the quantity of computing nodes in the group based on predefined criteria. At least some of the computing nodes of a group may be part of one or more physical computer networks in one or more geographical locations under control of a user or other entity, and at least some of the dynamic scaling of the group may use one or more other computing nodes that are part of a remote computer network (e.g., a virtual computer network provided under the control of a network-accessible service). The defined criteria used for the dynamic scaling may be determined in various manners and based on various factors.
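
The dynamic-scaling part reduces to a policy like the one sketched below; the thresholds and step size are assumptions for illustration, not values from the patent:

```python
def desired_group_size(current_size, avg_utilization,
                       min_size=2, max_size=20,
                       scale_out_at=0.75, scale_in_at=0.25):
    """Grow the load-balanced group when average utilization is high, shrink
    it when utilization is low, and clamp the size to configured bounds."""
    if avg_utilization > scale_out_at:
        return min(current_size + 1, max_size)
    if avg_utilization < scale_in_at:
        return max(current_size - 1, min_size)
    return current_size

print(desired_group_size(current_size=5, avg_utilization=0.82))   # -> 6
```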

Patent
11 Aug 2009
TL;DR: A method and apparatus for serving content requests using global and local load balancing techniques is provided, where web site content is cached at two or more points of presence (POPs), each of which has at least one DNS server.
Abstract: A method and apparatus for serving content requests using global and local load balancing techniques is provided. Web site content is cached using two or more points of presence (POPs), wherein each POP has at least one DNS server. Each DNS server is associated with the same anycast IP address. A domain name resolution request is transmitted to the POP in closest network proximity for resolution based on the anycast IP address. Once the domain name resolution request is received at a particular POP, local load balancing techniques are performed to dynamically select the appropriate Web server at the POP for use in resolving the domain name resolution request. Approaches are described for handling bursts of traffic at a particular POP, security, and recovering from the failure of various components of the system.

Journal ArticleDOI
TL;DR: This paper proposes an explicit exchange of information on congestion status among neighboring satellites, dubbed "Explicit Load Balancing" (ELB) scheme, which ensures a better distribution of traffic over the entire satellite constellation.
Abstract: Non-geostationary (NGEO) satellite communication systems offer an array of advantages over their terrestrial and geostationary counterparts. They are seen as an integral part of next-generation ubiquitous communication systems. Given the non-uniform distribution of users in satellite footprints, due to several geographical and/or climatic constraints, some Inter-Satellite Links (ISLs) are expected to be heavily loaded with data packets while others remain underutilized. Such a scenario obviously leads to congestion of the heavily loaded links. It ultimately results in buffer overflows, higher queuing delays, and significant packet drops. To guarantee a better distribution of traffic among satellites, this paper proposes an explicit exchange of information on congestion status among neighboring satellites. Indeed, a satellite notifies its congestion status to its neighboring satellites. When it is about to get congested, it requests its neighboring satellites to decrease their data forwarding rates by sending them a self status notification signaling message. In response, the neighboring satellites search for less congested paths that do not include the satellite in question and communicate a portion of data, primarily destined to the satellite, via the retrieved paths. This operation avoids both congestion and packet drops at the satellite. It also ensures a better distribution of traffic over the entire satellite constellation. The proposed scheme is dubbed "Explicit Load Balancing" (ELB) scheme. While the multi-path routing concept of ELB has many advantages, it may lead to persistent packet reordering. In case of connection-oriented protocols, this phenomenon results in unnecessary shrinkage of the data transmission rate. A solution to this issue is also incorporated in the design of ELB. The interactions of ELB with mechanisms that provide different QoS by differentiating traffic (e.g., Differentiated Services) are also discussed. The good performance of ELB, in terms of better traffic distribution, higher throughput, and lower packet drops, is verified via a set of simulations using the Network Simulator (NS).

Patent
23 Feb 2009
TL;DR: A load-balancing cluster includes a switch having a plurality of ports and a plurality of servers connected to at least some of the ports of the switch, where each server is addressable by the same virtual Internet Protocol (VIP) address.
Abstract: A load-balancing cluster includes a switch having a plurality of ports; and a plurality of servers connected to at least some of the plurality of ports of the switch. Each server is addressable by the same virtual Internet Protocol (VIP) address. Each server in the cluster has a mechanism constructed and adapted to respond to connection requests at the VIP by selecting one of the plurality of servers to handle that connection, wherein the selecting is based, at least in part, on a given function of information used to request the connection; and a firewall mechanism constructed and adapted to accept all requests for the VIP address for a particular connection only on the server that has been selected to handle that particular connection. The selected server determines whether it is responsible for the request and may hand it off to another cluster member.
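
The selection rule in the claim can be sketched as a deterministic function that every cluster member evaluates independently on the connection request; the hash and field choice below are assumptions:

```python
import hashlib

def responsible_server(src_ip, src_port, cluster_size):
    """Every server behind the shared VIP sees the same connection request,
    applies the same deterministic function to information in it, and only the
    server whose index matches lets the request through its firewall."""
    key = ("%s:%d" % (src_ip, src_port)).encode()
    return int(hashlib.sha1(key).hexdigest(), 16) % cluster_size

MY_INDEX = 2                                  # this server's slot (illustrative)
if responsible_server("203.0.113.7", 44321, cluster_size=4) == MY_INDEX:
    print("accept the connection")
else:
    print("drop it; another member is responsible (or hand it off)")
```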

Journal ArticleDOI
TL;DR: It is shown that sending the traffic generated by each sensor node through multiple paths, instead of a single path, allows significant energy conservation.
Abstract: Wireless sensor networks (WSNs) require protocols that make judicious use of the limited energy capacity of the sensor nodes. In this paper, the potential performance improvement gained by balancing the traffic throughout the WSN is investigated. We show that sending the traffic generated by each sensor node through multiple paths, instead of a single path, allows significant energy conservation. A new analytical model for load-balanced systems is complemented by simulation to quantitatively evaluate the benefits of the proposed load-balancing technique. Specifically, we derive the set of paths to be used by each sensor node and the associated weights (i.e., the proportion of utilization) that maximize the network lifetime.
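
A sketch of the kind of lifetime-maximization program such path weights come from; the notation here is assumed, not the paper's exact model:

```latex
\begin{align*}
\max_{T,\,w_{i,p}} \quad & T \\
\text{s.t.} \quad & \sum_{p \in P_i} w_{i,p} = 1 \qquad \forall i
    \quad \text{(node $i$ splits its traffic over its path set $P_i$)}\\
& T \sum_{i} r_i \sum_{p \in P_i:\, n \in p} w_{i,p}\, e_n \;\le\; E_n \qquad \forall n
    \quad \text{(energy spent at node $n$ over the lifetime)}\\
& w_{i,p} \ge 0,
\end{align*}
```

where w_{i,p} is the share of node i's traffic (generated at rate r_i) sent over path p, e_n the energy to relay one packet at node n, E_n its initial energy, and T the network lifetime; the product T*w_{i,p} is usually removed by substituting per-path flow variables so the program becomes linear.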

Patent
24 Sep 2009
TL;DR: Distributed storage resources are managed based on data collected from online monitoring of the workloads on the storage resources and of the performance characteristics of the storage resources, and load metrics are calculated from the collected data and used to identify workloads that are migration candidates and storage units that are migration destinations.
Abstract: Distributed storage resources are managed based on data collected from online monitoring of workloads on the storage resources and performance characteristics of the storage resources. Load metrics are calculated from the collected data and used to identify workloads that are migration candidates and storage units that are migration destinations, so that load balancing across the distributed storage resources can be achieved.

Journal ArticleDOI
TL;DR: This paper proposes two protocols, GREES-L and GREES-M, which combine geographic routing and energy efficient routing techniques and take into account the realistic lossy wireless channel condition and the renewal capability of environmental energy supply when making routing decisions.
Abstract: Wireless sensor networks are characterized by multihop wireless lossy links and resource constrained nodes. Energy efficiency is a major concern in such networks. In this paper, we study Geographic Routing with Environmental Energy Supply (GREES) and propose two protocols, GREES-L and GREES-M, which combine geographic routing and energy efficient routing techniques and take into account the realistic lossy wireless channel condition and the renewal capability of environmental energy supply when making routing decisions. Simulation results show that GREESs are more energy efficient than the corresponding residual energy based protocols and geographic routing protocols without energy awareness. GREESs can maintain higher mean residual energy on nodes, and achieve better load balancing in terms of having smaller standard deviation of residual energy on nodes. Both GREES-L and GREES-M exhibit graceful degradation on end-to-end delay, but do not compromise the end-to-end throughput performance.

Proceedings ArticleDOI
14 Feb 2009
TL;DR: This paper introduces idempotent work stealing and presents several new algorithms that exploit the relaxed semantics to deliver better performance; the best algorithm (with LIFO extraction) outperforms existing algorithms in nearly all cases, often by significant margins.
Abstract: Load balancing is a technique which allows efficient parallelization of irregular workloads, and a key component of many applications and parallelizing runtimes. Work-stealing is a popular technique for implementing load balancing, where each parallel thread maintains its own work set of items and occasionally steals items from the sets of other threads. The conventional semantics of work stealing guarantee that each inserted task is eventually extracted exactly once. However, correctness of a wide class of applications allows for relaxed semantics, because either: i) the application already explicitly checks that no work is repeated or ii) the application can tolerate repeated work. In this paper, we introduce idempotent work stealing, and present several new algorithms that exploit the relaxed semantics to deliver better performance. The semantics of the new algorithms guarantee that each inserted task is eventually extracted at least once, instead of exactly once. On mainstream processors, algorithms for conventional work stealing require special atomic instructions or store-load memory ordering fence instructions in the owner's critical path operations. In general, these instructions are substantially slower than regular memory access instructions. By exploiting the relaxed semantics, our algorithms avoid these instructions in the owner's operations. We evaluated our algorithms using common graph problems and micro-benchmarks and compared them to well-known conventional work stealing algorithms, the THE Cilk and Chase-Lev algorithms. We found that our best algorithm (with LIFO extraction) outperforms existing algorithms in nearly all cases, and often by significant margins.
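
A Python toy cannot show the memory-fence savings that motivate the paper (the GIL hides memory ordering entirely), but it can show what the relaxed at-least-once semantics means for the application. The structure below illustrates tolerating duplicate extraction; it is not one of the paper's lock-free algorithms:

```python
import threading
from collections import deque

class IdempotentWorkSet:
    """Tasks may be handed out more than once (e.g., an owner pop racing a
    steal); a per-task 'done' set makes the duplicate execution a no-op, which
    is the property that lets the real algorithms drop the expensive fences
    from the owner's critical path."""

    def __init__(self):
        self.tasks = deque()
        self.done = set()
        self.lock = threading.Lock()          # thieves serialize; owner does not

    def put(self, task_id, fn):               # owner inserts work
        self.tasks.append((task_id, fn))

    def take_lifo(self):                      # owner extracts newest first
        return self.tasks.pop() if self.tasks else None

    def steal(self):                          # thief extracts oldest first
        with self.lock:
            return self.tasks.popleft() if self.tasks else None

    def run(self, item):
        if item is None:
            return
        task_id, fn = item
        if task_id in self.done:              # at-least-once: duplicates are harmless
            return
        self.done.add(task_id)
        fn()

ws = IdempotentWorkSet()
ws.put(1, lambda: print("task 1"))
ws.run(ws.take_lifo())
```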

Proceedings ArticleDOI
19 Apr 2009
TL;DR: Design of efficient routing schemes for multi-radio multi-channel wireless mesh networks is much more challenging than for the single-channel case; a routing metric to minimize the end-to-end delay is designed, considering not only the transmission delay at the medium access control (MAC) layer but also the queuing delay at the network layer.
Abstract: This paper studies how to select a path with the minimum cost in terms of expected end-to-end delay (EED) in a multi-radio wireless mesh network. Different from the previous efforts, the new EED metric takes the queuing delay into account, since the end-to-end delay consists of not only the transmission delay over the wireless links but also the queuing delay in the buffer. In addition to minimizing the end-to-end delay, the EED metric implies the concept of load balancing. We develop EED-based routing protocols for both single-channel and multi-channel wireless mesh networks. In particular, for the multi-radio multi-channel case, we develop a generic iterative approach to calculate a multi-radio achievable bandwidth (MRAB) for a path, taking the impacts of inter/intra-flow interference and space/channel diversity into account. The MRAB is then integrated with EED to form the metric of weighted end-to-end delay (WEED). As a byproduct of MRAB, a channel diversity coefficient can be defined to quantitatively represent the channel diversity along a given path. Both numerical analysis and simulation studies are presented to validate the performance of the routing protocol based on the EED/WEED metric, with comparison to some well-known routing metrics.
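
The abstract's central idea, counting queuing delay as part of the path cost, can be illustrated with a simple per-hop estimate. The formulas below are a sketch in the spirit of the description, not the paper's exact EED/WEED definitions:

```python
def hop_eed(loss_rate, tx_time_s, queued_packets):
    """Expected per-hop delay: an ETX-like retransmission factor times the
    per-attempt airtime (MAC-layer transmission delay), plus the time needed
    to drain the packets already sitting in the buffer (queuing delay)."""
    expected_tx = tx_time_s / (1.0 - loss_rate)
    return expected_tx + queued_packets * expected_tx

def path_eed(hops):
    """Path cost is the sum of per-hop delays, so a long queue anywhere inflates
    the metric and steers traffic away; that is where load balancing comes in."""
    return sum(hop_eed(*h) for h in hops)

# Two candidate 2-hop paths: the lossier but idle path wins.
print(path_eed([(0.1, 0.002, 5), (0.1, 0.002, 5)]))   # heavily queued path
print(path_eed([(0.3, 0.002, 0), (0.3, 0.002, 0)]))   # idle but lossier path
```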

Journal ArticleDOI
TL;DR: This work addresses the problem of making distributed reasoning scalable and load-balanced, presents the divide-conquer-swap strategy, and shows that this model converges towards completeness.

Patent
22 Jul 2009
TL;DR: In this article, the authors describe techniques for establishing an overall label switched path (LSP) for load balancing network traffic being sent across a network using a resource reservation protocol such as Resource Reservation Protocol with Traffic Engineering (RSVP-TE).
Abstract: Techniques are described for establishing an overall label switched path (LSP) for load balancing network traffic being sent across a network using a resource reservation protocol such as Resource Reservation Protocol with Traffic Engineering (RSVP-TE). The techniques include extensions to the RSVP-TE protocol that enable a router to send Path messages for establishing a tunnel that includes a plurality of sub-paths for the overall LSP. The tunnel may comprise a single RSVP-TE Label Switched Path (LSP) that is configured to load balance network traffic across different sub-paths of the RSVP-TE LSP over the network.

Patent
17 Nov 2009
TL;DR: Methods and systems are presented for providing a dynamic and real-time load factor that can be shared with other network elements; the load factor can be used to determine the relative load among a set of network elements and to distribute new session requests, as well as existing sessions, across those network elements.
Abstract: Methods and systems for providing a dynamic and real-time load factor that can be shared with other network elements are disclosed. The load factor can be used in determining the relative load among a set of network elements and in distributing new session requests as well as existing sessions on the set of network elements. The load factor can also be used for determining to which network element a user equipment is handed off. The dynamic load factor can also be shared amongst network elements, such as a mobility management entity (MME), to determine how the load is balanced among them.

Journal ArticleDOI
TL;DR: A new channel-quality based user association mechanism inspired by the operation of infrastructure-based WLANs is proposed, and it is shown that wireless mesh networks that use the proposed association mechanisms are more capable of meeting the needs of QoS-sensitive applications.
Abstract: The user association mechanism specified by the IEEE 802.11 standard does not consider the channel conditions and the AP load in the association process. Employing the mechanism in its plain form in wireless mesh networks, we may only achieve low throughput and low user transmission rates. In this paper we design a new association framework in order to provide optimal association and network performance. In this framework we propose a new channel-quality based user association mechanism inspired by the operation of the infrastructure-based WLANs. Besides, we reinforce our framework by proposing an airtime-metric based association mechanism that is aware of the uplink and downlink channel conditions as well as the communication load. We then extend the functionality of this mechanism in a cross-layer manner, taking into account information from the routing layer, in order to fit it to the operation of wireless mesh networks. Lastly, we design a hybrid association scheme that can be efficiently applied in real deployments to improve the network performance. We evaluate the performance of our system through simulations and we show that wireless mesh networks that use the proposed association mechanisms are more capable of meeting the needs of QoS-sensitive applications.

Journal ArticleDOI
01 Aug 2009
TL;DR: A novel repartitioning hypergraph model for dynamic load balancing that accounts for both communication volume in the application and migration cost to move data, in order to minimize the overall cost is presented.
Abstract: In parallel adaptive applications, the computational structure of the applications changes over time, leading to load imbalances even though the initial load distributions were balanced. To restore balance and to keep communication volume low in further iterations of the applications, dynamic load balancing (repartitioning) of the changed computational structure is required. Repartitioning differs from static load balancing (partitioning) due to the additional requirement of minimizing migration cost to move data from an existing partition to a new partition. In this paper, we present a novel repartitioning hypergraph model for dynamic load balancing that accounts for both communication volume in the application and migration cost to move data, in order to minimize the overall cost. The use of a hypergraph-based model allows us to accurately model communication costs rather than approximate them with graph-based models. We show that the new model can be realized using hypergraph partitioning with fixed vertices and describe our parallel multilevel implementation within the Zoltan load balancing toolkit. To the best of our knowledge, this is the first implementation for dynamic load balancing based on hypergraph partitioning. To demonstrate the effectiveness of our approach, we conducted experiments on a Linux cluster with 1024 processors. The results show that, in terms of reducing total cost, our new model compares favorably to the graph-based dynamic load balancing approaches, and multilevel approaches improve the repartitioning quality significantly.
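
The objective the repartitioning model captures can be stated compactly. A sketch with assumed notation, where alpha is the number of computation iterations executed between two rebalancings:

```latex
\min_{\Pi_{\text{new}}} \;\; \alpha \cdot \mathrm{comm}\!\left(\Pi_{\text{new}}\right) \;+\; \mathrm{mig}\!\left(\Pi_{\text{old}} \rightarrow \Pi_{\text{new}}\right)
```

Here comm(.) is the hypergraph cut size, i.e. the communication volume paid in every subsequent iteration, and mig(.) is the one-time volume of data moved from the old partition to the new one; representing the old assignment with fixed vertices lets a standard hypergraph partitioner minimize both terms at once.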

Journal ArticleDOI
TL;DR: The various load metrics are discussed, the principles behind several existing load balanced ad hoc routing protocols are summarized, and a qualitative comparison of the various load metrics and load balanced routing protocols is presented.
Abstract: Mobile ad hoc networks are collections of mobile nodes that can dynamically form temporary networks without the need for pre-existing network infrastructure or centralized administration. These nodes can be arbitrarily located and can move freely at any given time. Hence, the network topology can change rapidly and unpredictably. Because wireless link capacities are usually limited, congestion is possible in MANETs. Hence, balancing the load in a MANET is important since nodes with high loads will deplete their batteries quickly, thereby increasing the probability of disconnecting or partitioning the network. This article discusses the various load metrics and summarizes the principles behind several existing load balanced ad hoc routing protocols. Finally, a qualitative comparison of the various load metrics and load balanced routing protocols is presented.

Journal ArticleDOI
TL;DR: Besides proving that BUBBLE-FOS/C converges towards a local optimum, this paper develops a much faster method for the improvement of partitionings, based on a different diffusive process, which is restricted to local areas of the graph and also contains a high degree of parallelism.