
Showing papers on "Overlay network published in 2009"


Journal ArticleDOI
G. Boudreau1, J. Panicker1, Ning Guo1, Rui Chang1, Neng Wang1, S. Vrzic1 
TL;DR: Viable approaches include the use of power control, opportunistic spectrum access, intra and inter-base station interference cancellation, adaptive fractional frequency reuse, spatial antenna techniques such as MIMO and SDMA, and adaptive beamforming, as well as recent innovations in decoding algorithms.
Abstract: This article provides an overview of contemporary and forward looking inter-cell interference coordination techniques for 4G OFDM systems with a specific emphasis on implementations for LTE. Viable approaches include the use of power control, opportunistic spectrum access, intra and inter-base station interference cancellation, adaptive fractional frequency reuse, spatial antenna techniques such as MIMO and SDMA, and adaptive beamforming, as well as recent innovations in decoding algorithms. The applicability, complexity, and performance gains possible with each of these techniques based on simulations and empirical measurements will be highlighted for specific cellular topologies relevant to LTE macro, pico, and femto deployments for both standalone and overlay networks.
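One of the surveyed techniques, fractional frequency reuse, can be sketched in a few lines. This is an illustrative simplification, not the paper's scheme: the SINR threshold, the three-sub-band pattern, and the band labels are all assumptions made here for the example.

```python
# Sketch of static fractional frequency reuse (FFR). Cell-center users
# (good SINR) may be scheduled on the full band; cell-edge users are
# restricted to one of three orthogonal edge sub-bands chosen per cell,
# so edge users of neighboring cells do not collide. Threshold and band
# labels are illustrative assumptions.

EDGE_SINR_DB = 5.0          # assumed center/edge classification threshold
EDGE_SUBBANDS = ("A", "B", "C")

def allocate(cell_id: int, user_sinr_db: float):
    """Return the set of sub-bands a user may be scheduled on."""
    if user_sinr_db >= EDGE_SINR_DB:
        return set(EDGE_SUBBANDS)          # center user: full band
    return {EDGE_SUBBANDS[cell_id % 3]}    # edge user: orthogonal sub-band

# Edge users of adjacent cells 0 and 1 land on disjoint spectrum:
assert allocate(0, 2.0).isdisjoint(allocate(1, 2.0))
```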

748 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on minimizing the energy consumption of an IP over WDM network and develop efficient approaches ranging from mixed integer linear programming (MILP) models to heuristics.
Abstract: As the Internet expands in reach and capacity, the energy consumption of network equipment increases. To date, the cost of transmission and switching equipment has been considered to be the major barrier to growth of the Internet. But energy consumption rather than cost of the component equipment may eventually become a barrier to continued growth. Research efforts on "greening the Internet" have been initiated in recent years, aiming to develop energy-efficient network architectures and operational strategies so as to reduce the energy consumption of the Internet. The direct benefits of such efforts are to reduce the operational costs in the network and cut the greenhouse footprint of the network. Second, from an engineering point of view, energy efficiency will assist in reducing the thermal issues associated with heat dissipation in large data centers and switching nodes. In the present research, we concentrate on minimizing the energy consumption of an IP over WDM network. We develop efficient approaches ranging from mixed integer linear programming (MILP) models to heuristics. These approaches are based on traditional virtual-topology and traffic grooming designs. The novelty of the framework involves the definition of an energy-oriented model for the IP over WDM network, the incorporation of the physical layer issues such as energy consumption of each component and the layout of optical amplifiers in the design, etc. Extensive optimization and simulation studies indicate that the proposed energy-minimized design can significantly reduce energy consumption of the IP over WDM network, ranging from 25% to 45%. Moreover, the proposed designs can also help equalize the power consumption at each network node. This is useful for real network deployment, in which each node location may be constrained by a limited electricity power supply.
Finally, it is also interesting and useful to find that an energy-efficient network design is also a cost-efficient design because of the fact that IP router ports play a dominating role in both energy consumption and network cost in the IP over WDM network.
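The traffic-grooming idea behind the paper's heuristics can be illustrated with a toy first-fit packer: groom small IP demands onto as few lightpaths as possible, since each lightpath terminates on IP router ports, the component the paper identifies as dominating both energy and cost. The capacity value and the first-fit-decreasing rule are assumptions for this sketch, not the paper's actual algorithm.

```python
# Toy traffic-grooming sketch: pack IP demands (Gb/s) onto lightpaths
# first-fit-decreasing, then count router ports (two per lightpath) as
# an energy proxy. Capacity is an illustrative assumption.

LIGHTPATH_CAPACITY = 40  # Gb/s, assumed

def groom(demands):
    """Pack demands onto lightpaths first-fit-decreasing; return loads."""
    lightpaths = []
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(lightpaths):
            if load + d <= LIGHTPATH_CAPACITY:
                lightpaths[i] += d     # groom onto an existing lightpath
                break
        else:
            lightpaths.append(d)       # open a new lightpath
    return lightpaths

def router_ports(lightpaths):
    return 2 * len(lightpaths)         # one port at each end, energy proxy
```

Fewer lightpaths means fewer lit router ports, which is why an energy-minimized grooming design tends to be cost-efficient as well.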

487 citations


Journal ArticleDOI
TL;DR: This research shows that in more than 50% of investigated scenarios, it is better to route through the nodes recommended by Akamai than to use the direct paths, and develops low-overhead pruning algorithms that avoidAkamai-driven paths when they are not beneficial.
Abstract: To enhance Web browsing experiences, content distribution networks (CDNs) move Web content "closer" to clients by caching copies of Web objects on thousands of servers worldwide. Additionally, to minimize client download times, such systems perform extensive network and server measurements and use them to redirect clients to different servers over short time scales. In this paper, we explore techniques for inferring and exploiting network measurements performed by the largest CDN, Akamai; our objective is to locate and utilize quality Internet paths without performing extensive path probing or monitoring. Our contributions are threefold. First, we conduct a broad measurement study of Akamai's CDN. We probe Akamai's network from 140 PlanetLab (PL) vantage points for two months. We find that Akamai redirection times, while slightly higher than advertised, are sufficiently low to be useful for network control. Second, we empirically show that Akamai redirections overwhelmingly correlate with network latencies on the paths between clients and the Akamai servers. Finally, we illustrate how large-scale overlay networks can exploit Akamai redirections to identify the best detouring nodes for one-hop source routing. Our research shows that in more than 50% of investigated scenarios, it is better to route through the nodes "recommended" by Akamai than to use the direct paths. Because this is not the case for the rest of the scenarios, we develop low-overhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial. Because these Akamai nodes are part of a closed system, we provide a method for mapping Akamai-recommended paths to those in a generic overlay and demonstrate that these one-hop paths indeed outperform direct ones.
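The one-hop detouring decision described above reduces to a small comparison, which can be sketched as follows. The latency numbers and the pruning rule (fall back to the direct path whenever no relay beats it) are illustrative assumptions, not measurements from the paper.

```python
# Sketch of one-hop detour selection: route src -> relay -> dst when
# some relay beats the direct path, otherwise prune the detour and go
# direct. Latencies are illustrative assumptions.

def best_route(direct_ms, relay_ms):
    """relay_ms maps relay -> (src->relay ms, relay->dst ms).
    Returns (relay, latency); relay is None for the direct route."""
    best = (None, direct_ms)
    for relay, (a, b) in relay_ms.items():
        if a + b < best[1]:
            best = (relay, a + b)
    return best
```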

287 citations


Proceedings Article
24 Feb 2009
TL;DR: This paper concentrates on the back-end which is, to the authors' knowledge, the first commercial implementation of a scalable, high-performance content-addressable secondary storage system delivering global duplicate elimination, per-block user-selectable failure resiliency, and self-maintenance including automatic recovery from failures with data and network overlay rebuilding.
Abstract: HYDRAstor is a scalable, secondary storage solution aimed at the enterprise market. The system consists of a back-end architected as a grid of storage nodes built around a distributed hash table, and a front-end consisting of a layer of access nodes which implement a traditional file system interface and can be scaled in number for increased performance. This paper concentrates on the back-end which is, to our knowledge, the first commercial implementation of a scalable, high-performance content-addressable secondary storage system delivering global duplicate elimination, per-block user-selectable failure resiliency, and self-maintenance including automatic recovery from failures with data and network overlay rebuilding. The back-end programming model is based on an abstraction of a sea of variable-sized, content-addressed, immutable, highly-resilient data blocks organized in a DAG (directed acyclic graph). This model is exported with a low-level API allowing clients to implement new access protocols and to add them to the system on-line. The API has been validated with an implementation of the file system interface. The critical factor for meeting the design targets has been the selection of proper data organization based on redundant chains of data containers. We present this organization in detail and describe how it is used to deliver the required data services. Surprisingly, the most complex feature to deliver turned out to be on-demand data deletion, followed (not surprisingly) by the management of data consistency and integrity.
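The core content-addressing idea behind the back-end's global duplicate elimination can be shown in a few lines. This toy keeps everything in a single dict; HYDRAstor itself adds resiliency classes, a DAG of block pointers, and distributed placement, none of which is modeled here.

```python
import hashlib

# Minimal sketch of content-addressed storage with deduplication: a
# block's address is the hash of its contents, so writing identical
# data twice stores it once.

class BlockStore:
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        addr = hashlib.sha256(data).hexdigest()
        self._blocks.setdefault(addr, data)   # duplicate writes are no-ops
        return addr

    def get(self, addr: str) -> bytes:
        return self._blocks[addr]
```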

273 citations


Journal ArticleDOI
TL;DR: The paper presents extensive empirical analysis of the protocol along with theoretical analysis of certain aspects of its behavior, and describes a practical application of T-Man for building Chord distributed hash table overlays efficiently from scratch.

241 citations


Proceedings Article
22 Apr 2009
TL;DR: A suite of complexity models are developed that describe the routing design and configuration of a network in a succinct fashion, abstracting away details of the underlying configuration languages and are predictive of the issues operators face when reconfiguring their networks.
Abstract: Operator interviews and anecdotal evidence suggest that an operator's ability to manage a network decreases as the network becomes more complex. However, there is currently no way to systematically quantify how complex a network's design is nor how complexity may impact network management activities. In this paper, we develop a suite of complexity models that describe the routing design and configuration of a network in a succinct fashion, abstracting away details of the underlying configuration languages. Our models, and the complexity metrics arising from them, capture the difficulty of configuring control and data plane behaviors on routers. They also measure the inherent complexity of the reachability constraints that a network implements via its routing design. Our models simplify network design and management by facilitating comparison between alternative designs for a network. We tested our models on seven networks, including four university networks and three enterprise networks. We validated the results through interviews with the operators of five of the networks, and we show that the metrics are predictive of the issues operators face when reconfiguring their networks.

193 citations


Journal ArticleDOI
TL;DR: This paper introduces MetaCDN, a system that exploits 'Storage Cloud' resources, creating an integrated overlay network that provides a low cost, high performance CDN for content creators and consumers.

157 citations


01 Jan 2009
TL;DR: A distributed pub/sub system for scalable information dissemination can be decomposed into three functional layers: the overlay infrastructure, the event routing and the algorithm for matching events against subscriptions.
Abstract: Since the early nineties, anonymous and asynchronous dissemination of information has been a basic building block for typical distributed applications such as stock exchanges, news tickers and air-traffic control. With the advent of ubiquitous computing and of ambient intelligence, information dissemination solutions have to face challenges such as the exchange of huge amounts of information, large and dynamic numbers of participants possibly deployed over a large network (e.g. peer-to-peer systems), and mobility and scarcity of resources (e.g. mobile ad-hoc and sensor networks) [9]. Publish/subscribe (pub/sub) systems are a key technology for information dissemination. Each participant in a pub/sub communication system can take on the role of a publisher or a subscriber of information. Publishers produce information in the form of events, which is consumed by subscribers issuing subscriptions representing their interest only in specific events. The main semantic characterization of pub/sub is in the way events flow from senders to receivers: receivers are not directly targeted by publishers, but rather they are indirectly addressed according to the content of events. Thanks to this anonymity, publishers and subscribers exchange information without directly knowing each other, enabling the system to seamlessly expand to massive, Internet-scale size. Interaction between publishers and subscribers is mediated by the pub/sub system, which in general is constituted by a set of distributed nodes that coordinate among themselves in order to dispatch published events to all (and possibly only) interested subscribers. A distributed pub/sub system for scalable information dissemination can be decomposed into three functional layers: the overlay infrastructure, the event routing and the algorithm for matching events against subscriptions.
The overlay infrastructure represents the organization of the various entities that compose the system (e.g., an overlay network of dedicated servers, a peer-to-peer structured overlay, etc.), while event routing is the mechanism for dispatching information from publishers to subscribers. Event routing has to effectively exploit
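The third layer, matching events against subscriptions, can be sketched directly. The predicate representation below, `(attribute, operator, value)` triples that must all hold, is an illustrative assumption; real content-based pub/sub systems use richer subscription languages and indexed matching.

```python
import operator

# Sketch of content-based event matching: an event (a dict of
# attributes) matches a subscription if every predicate holds.

OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt}

def matches(event: dict, subscription) -> bool:
    """subscription: list of (attr, op, value) predicates, all must hold."""
    return all(
        attr in event and OPS[op](event[attr], value)
        for attr, op, value in subscription
    )
```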

102 citations


Proceedings ArticleDOI
10 Aug 2009
TL;DR: This paper presents a distributed and self-stabilizing algorithm that constructs a (variant of the) skip graph in polylogarithmic time from any initial state in which the overlay network is still weakly connected.
Abstract: Peer-to-peer systems rely on scalable overlay networks that enable efficient routing between its members. Hypercubic topologies facilitate such operations while each node only needs to connect to a small number of other nodes. In contrast to static communication networks, peer-to-peer networks allow nodes to adapt their neighbor set over time in order to react to join and leave events and failures. This paper shows how to maintain such networks in a robust manner. Concretely, we present a distributed and self-stabilizing algorithm that constructs a (variant of the) skip graph in polylogarithmic time from any initial state in which the overlay network is still weakly connected. This is an exponential improvement compared to previously known self-stabilizing algorithms for overlay networks. In addition, individual joins and leaves are handled locally and require little work.

94 citations


Proceedings ArticleDOI
25 Sep 2009
TL;DR: This paper establishes a peer-to-peer overlay over the Internet, using cellular Internet access, and presents a structure for the overlay, a prototype implementation in a simulation environment, and results that underline the feasibility of such a system in a city scenario.
Abstract: In this paper we propose a traffic information system based on the distribution of knowledge provided by the cars themselves. Prior work in this area attempted to realize this distribution via vehicular ad-hoc networks, i.e., by direct communication between cars. Such an approach faces serious problems due to capacity constraints, high data dissemination latencies, and limited initial deployment of the required technology. In this paper, we present a solution that is not based on ad-hoc networking, but is still fully decentralized. It establishes a peer-to-peer overlay over the Internet, using cellular Internet access. We present a structure for the overlay, a prototype implementation in a simulation environment, and results that underline the feasibility of such a system in a city scenario. We also provide an estimate of expected user benefits when our system is used for dynamic route guidance.

86 citations


Patent
22 Dec 2009
TL;DR: In this article, a multi-party commitment method is provided whereby a joining node uses contributions provided by contributor nodes in a peer-to-peer overlay network to generate a node identifier.
Abstract: A multi-party commitment method is provided whereby a joining node uses contributions provided by contributor nodes in a peer-to-peer overlay network to generate a node identifier. The joining node generates a first contribution and sends a join request to an introducer node (or a plurality of contributor nodes), where the join request seeks to obtain one or more contributions for generating the node identifier within an identifier space of the overlay network. A hash of the first contribution may be included as part of the join request. In response, the joining node may receive a plurality of contributions, wherein the contributions are bound to each other and the first contribution by a prior external multi-node commitment operation. The joining node can then generate its node identifier as a function of the first contribution and the received contributions. Consequently, collusion between nodes and malicious manipulation during ID generation can be frustrated.
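The commit-then-reveal flavor of the scheme can be sketched as follows: the joining node commits to its contribution by sending only its hash, and derives the node identifier from all contributions once they are bound together, so no single party can steer the resulting ID. The hash choice and the sorted-concatenation combiner below are assumptions for this sketch, not the patent's exact construction.

```python
import hashlib

# Sketch of multi-party node-ID generation: commit first, then derive
# the ID from everyone's contributions. SHA-256 and the combiner
# are illustrative assumptions.

def commitment(contribution: bytes) -> str:
    """Hash sent with the join request, binding the joiner's value."""
    return hashlib.sha256(contribution).hexdigest()

def node_id(own: bytes, others: list) -> str:
    """Derive the node ID from all contributions (order-independent)."""
    material = b"|".join([own] + sorted(others))
    return hashlib.sha256(material).hexdigest()
```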

Proceedings ArticleDOI
Yong Bai1, Juejia Zhou1, Lan Chen1
30 Nov 2009
TL;DR: This paper investigates flexible spectrum usages for LTE network that consists of overlaying macrocell and femtocell and proposes hybrid spectrum usage to take advantage of their merits.
Abstract: This paper investigates flexible spectrum usages for an LTE network that consists of overlaying macrocell and femtocell tiers. In such a networking environment, shared spectrum usage and partitioned spectrum usage are two options to be employed between the two radio tiers. After recognizing the pros and cons of these two usages, we propose hybrid spectrum usage to take advantage of their merits. In our proposal, the femtocells embedded in the macrocell are differentiated into inner and outer femtocells, which operate in partitioned spectrum usage and shared spectrum usage, respectively. Analysis and performance evaluation are given to illustrate and justify our proposed method for improving spectrum utilization in a wireless overlay network.
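The hybrid rule itself is a simple position-based decision, sketched below. The radius threshold and band names are illustrative assumptions; the paper derives the actual inner/outer differentiation from its analysis.

```python
# Sketch of hybrid spectrum usage: femtocells near the macrocell base
# station ("inner") use a partitioned sub-band to avoid strong
# cross-tier interference, while "outer" femtocells reuse the full
# shared band. Threshold and labels are illustrative assumptions.

INNER_RADIUS = 0.4  # fraction of macrocell radius, assumed

def femto_spectrum(distance_to_macro_bs: float) -> str:
    if distance_to_macro_bs < INNER_RADIUS:
        return "partitioned-subband"   # inner femtocell
    return "shared-full-band"          # outer femtocell
```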

Journal ArticleDOI
TL;DR: A new load sharing scheme for voice and elastic data services in a cellular/WLAN integrated network is proposed to effectively serve elastic data traffic and improve the multiplexing gain.
Abstract: With the interworking between a cellular network and wireless local area networks (WLANs), an essential aspect of resource management is taking advantage of the overlay network structure to efficiently share the multi-service traffic load between the interworked systems. In this study, we propose a new load sharing scheme for voice and elastic data services in a cellular/WLAN integrated network. Admission control and dynamic vertical handoff are applied to pool the free bandwidths of the two systems to effectively serve elastic data traffic and improve the multiplexing gain. To further combat the cell bandwidth limitation, data calls in the cell are served under an efficient service discipline, referred to as shortest remaining processing time (SRPT). The SRPT can well exploit the heavy-tailedness of data call size to improve the resource utilization. An accurate analytical model is developed to determine an appropriate size threshold so that data calls are properly distributed to the integrated cell and WLAN, taking into account the load conditions and traffic characteristics. It is observed from extensive simulation and numerical analysis that the new scheme significantly improves the overall system performance.
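The SRPT discipline used for data calls can be illustrated with a toy simulator that grants one unit of service per tick to the call with the least data remaining. The unit-quantum loop and the absence of arrivals are simplifications assumed here for the example.

```python
import heapq

# Sketch of shortest-remaining-processing-time (SRPT) scheduling:
# always serve the call with the least data left, which exploits
# heavy-tailed call sizes to improve resource utilization.

def srpt_completion_order(sizes):
    """sizes: {call_id: remaining size}. Returns the order in which
    calls complete when one service unit per tick goes to the
    shortest remaining job."""
    heap = [(size, cid) for cid, size in sizes.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        size, cid = heapq.heappop(heap)   # shortest remaining job
        if size <= 1:
            order.append(cid)             # call finishes this tick
        else:
            heapq.heappush(heap, (size - 1, cid))
    return order
```

With no new arrivals this reduces to serving calls in size order; the benefit over FIFO appears when large calls would otherwise block many small ones.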

Proceedings ArticleDOI
17 Aug 2009
TL;DR: In this paper, the authors present three schemes for decentralized online social networks (OSNs), where each user stores his own personal data in his own machine, which they call a Virtual Individual Server (VIS).
Abstract: Online Social Networks (OSNs) have become enormously popular. However, two aspects of many current OSNs have important implications with regards to privacy: their centralized nature and their acquisition of rights to users' data. Recent work has proposed decentralized OSNs as more privacy-preserving alternatives to the prevailing OSN model. We present three schemes for decentralized OSNs. In all three, each user stores his own personal data in his own machine, which we term a Virtual Individual Server (VIS). VISs self-organize into peer-to-peer overlay networks, one overlay per social group with which the VIS owner wishes to share information. The schemes differ in where VISs and data reside: (a) on a virtualized utility computing infrastructure in the cloud, (b) on desktop machines augmented with socially-informed data replication, and (c) on desktop machines during normal operation, with failover to a standby virtual machine in the cloud when the primary VIS becomes unavailable. We focus on tradeoffs between these schemes in the areas of privacy, cost, and availability.

Proceedings ArticleDOI
09 Nov 2009
TL;DR: This work formalizes the concept of membership concealment, discusses a number of attacks against existing systems and proposes three proof-of-concept MCON designs that resist those attacks: one that is more efficient, another that is more robust to membership churn, and a third that balances efficiency and robustness.
Abstract: We introduce the concept of membership-concealing overlay networks (MCONs), which hide the real-world identities of participants. We argue that while membership concealment is orthogonal to anonymity and censorship resistance, pseudonymous communication and censorship resistance become much easier if done over a membership-concealing network. We formalize the concept of membership concealment, discuss a number of attacks against existing systems and present real-world attack results. We then propose three proof-of-concept MCON designs that resist those attacks: one that is more efficient, another that is more robust to membership churn, and a third that balances efficiency and robustness. We show theoretical and simulation results demonstrating the feasibility and performance of our schemes.

Journal ArticleDOI
TL;DR: A new randomized self-stabilizing distributed algorithm for cluster definition in communication graphs of bounded-degree processors, designed for message passing systems in which a distributed spanning tree is defined and in which processors communicate over links of bounded capacity.

Patent
Zhefeng Yan1, Jiahao Wei1
11 Sep 2009
TL;DR: In this article, the authors proposed a P2P network system that includes multiple local overlay networks, each comprising multiple proxy service peers, and a global overlay network composed of the proxy services peers of all local overlay network.
Abstract: The present invention relates to a P2P network system. The P2P network system includes: multiple local overlay networks, each comprising multiple proxy service peers; a global overlay network composed of the proxy service peers of all local overlay networks. The proxy service peer is adapted to respond to the request of the requesting peer, query the local overlay network or global overlay network, and return the address information of the requested peer or the requested proxy service peer to the requesting peer. The present invention also relates to a proxy service peer applicable to the foregoing network system, and a method of peer interworking between P2P overlay networks based on the foregoing system. The present invention relieves the load of the proxy service peer, avoids blindness of the requesting peer in selecting the proxy service peer, and achieves load balance between proxy service peers.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper uses a game-theoretic framework in which infinitesimal users of a network select the source of content, and the traffic engineer decides how the traffic will route through the network, and forms a game and proves the existence of equilibria.
Abstract: In this paper we explore the interaction between content distribution and traffic engineering. Because a traffic engineer may be unaware of the structure of content distribution systems or overlay networks, this management of the network does not fully anticipate how traffic might change as a result of his actions. Content distribution systems that assign servers at the application level can respond very rapidly to changes in the routing of the network. Consequently, the traffic engineer's decisions may almost never be applied to the intended traffic. We use a game-theoretic framework in which infinitesimal users of a network select the source of content, and the traffic engineer decides how the traffic will route through the network. We formulate a game and prove the existence of equilibria. Additionally, we present a setting in which equilibria are socially optimal, essentially unique, and stable. Conditions under which efficiency loss may be bounded are presented, and the results are extended to the cases of general overlay networks and multiple autonomous systems.

Proceedings ArticleDOI
20 Jul 2009
TL;DR: The goal of this paper is to shed some light on the assumption that TCP is the dominant transport protocol on the Internet, by evaluating the amount of UDP and TCP traffic, in terms of flows, packets and bytes, on traces collected in the period 2002-2009 on several backbone links located in the US and Sweden.
Abstract: It is still an accepted assumption that Internet traffic is dominated by TCP. However, the rise of new streaming applications (e.g. IPTV such as PPStream, PPLive) and new P2P protocols (e.g. uTP) trying to avoid traffic shaping techniques (such as RST packet injection) are expected to increase the usage of UDP as a transport protocol. Since UDP lacks congestion control, this could potentially raise serious concerns about fairness and stability in the Internet. The goal of this paper is to shed some light on the assumption that TCP is the dominant transport protocol on the Internet. We evaluate the amount of UDP and TCP traffic, in terms of flows, packets and bytes, on traces collected in the period 2002-2009 on several backbone links located in the US and Sweden. According to our best available data, the use of UDP as a transport protocol has been gaining popularity in recent years, especially in terms of flow numbers. A first analysis suggests that most UDP flows use random high ports and carry few packets and little data. This indicates that the recent increases in UDP traffic are a side product of the general increase of P2P traffic, using random ports in order to evade detection and utilizing UDP as signaling traffic for establishing P2P overlay networks.
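The measurement at the heart of the paper, per-protocol shares of flows, packets and bytes, can be sketched from flow records. The `(proto, packets, bytes)` record format is an assumption made for this example.

```python
# Sketch of the UDP-vs-TCP share computation over flow records.
# A flow record is assumed to be a (proto, packets, bytes) tuple.

def protocol_shares(flows):
    """Returns {proto: (flow share, packet share, byte share)}."""
    totals = [len(flows), sum(f[1] for f in flows), sum(f[2] for f in flows)]
    shares = {}
    for proto in {f[0] for f in flows}:
        sub = [f for f in flows if f[0] == proto]
        counts = [len(sub), sum(f[1] for f in sub), sum(f[2] for f in sub)]
        shares[proto] = tuple(c / t for c, t in zip(counts, totals))
    return shares
```

A workload like the paper describes, many small high-port UDP flows beside a few heavy TCP flows, shows up as UDP dominating the flow share while TCP still dominates bytes.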

Book ChapterDOI
03 Jul 2009
TL;DR: A robust distributed information system which is resilient to Sybil attacks of arbitrary scale is described and how to organize heterogeneous nodes of arbitrary non-uniform capabilities in an overlay network such that the paths between any two nodes do not include nodes of lower capacities is shown.
Abstract: This paper shows how to build and maintain a distributed heap which we call SHELL. In contrast to standard heaps, our heap is oblivious in the sense that its structure only depends on the nodes currently in the network but not on the past. This allows for fast join and leave operations which is desirable in open distributed systems with high levels of churn and frequent faults. In fact, a node fault or departure can be fixed in SHELL in a constant number of communication rounds, which significantly improves the best previous bound for distributed heaps. SHELL has interesting applications. First, we describe a robust distributed information system which is resilient to Sybil attacks of arbitrary scale. Second, we show how to organize heterogeneous nodes of arbitrary non-uniform capabilities in an overlay network such that the paths between any two nodes do not include nodes of lower capacities. This property is useful, e.g., for streaming. All these features can be achieved without sacrificing scalability: our heap has a de Bruijn like topology with node degree O(log^2 n) and network diameter O(log n), n being the total number of nodes in the system.

Proceedings ArticleDOI
03 Mar 2009
TL;DR: A new system, called BGPmon, for monitoring the Border Gateway Protocol, which enables scalable real-time monitoring data distribution by allowing monitors to peer with each other and form an overlay network to provide new services and features without modifying the monitors.
Abstract: This paper presents a new system, called BGPmon, for monitoring the Border Gateway Protocol (BGP). BGP is the routing protocol for the global Internet. Monitoring BGP is important for both operations and research; a number of public and private BGP monitors are deployed and widely used. These existing monitors typically collect data using a full implementation of a BGP router. In contrast, BGPmon eliminates the unnecessary functions of route selection and data forwarding to focus solely on the monitoring function. BGPmon uses a publish/subscribe overlay network to provide real-time access to vast numbers of peers and clients. All routing events are consolidated into a single XML stream. XML allows us to add additional features such as labeling updates to allow easy identification of useful data by clients. Clients subscribe to BGPmon and receive the XML stream, performing tasks such as archiving, filtering, or real-time data analysis. BGPmon enables scalable real-time monitoring data distribution by allowing monitors to peer with each other and form an overlay network to provide new services and features without modifying the monitors. We illustrate the effectiveness of the BGPmon data using the Cyclops route monitoring system.

Proceedings ArticleDOI
02 Mar 2009
TL;DR: ProtoPeer is a peer-to-peer systems prototyping toolkit that allows for switching between the event-driven simulation and live network deployment without changing any of the application code.
Abstract: Simulators are a commonly used tool in peer-to-peer systems research. However, they may not be able to capture all the details of a system operating in a live network. Transitioning from the simulation to the actual system implementation is a non-trivial and time-consuming task. We present ProtoPeer, a peer-to-peer systems prototyping toolkit that allows for switching between the event-driven simulation and live network deployment without changing any of the application code. ProtoPeer defines a set of APIs for message passing, message queuing, timer operations as well as overlay routing and managing the overlay neighbors. Users can plug in their own custom implementations of most of the parts of ProtoPeer including custom network models for simulation and custom message passing over different network stacks. ProtoPeer is not only a framework for building systems but also for evaluating them. It has a unified system-wide infrastructure for event injection, measurement logging, measurement aggregation and managing evaluation scenarios. The simulator scales to tens of thousands of peers and gives accurate predictions closely matching the live network measurements.

Journal ArticleDOI
TL;DR: This work considers the problem of designing an efficient and robust distributed random number generator for peer-to-peer systems that is easy to implement and works even if all communication channels are public and shows that a new generator together with a light-weight rule recently proposed can keep various structured overlay networks in a robust state even under a constant fraction of adversarial peers.

Journal ArticleDOI
TL;DR: This paper proposes a novel least-biased end-to-end network diagnosis (in short, LEND) system for inferring link-level properties like loss rate and demonstrates that such diagnosis can be achieved with fine granularity and in near real-time even for reasonably large overlay networks.
Abstract: Internet fault diagnosis is extremely important for end-users, overlay network service providers (like Akamai), and even Internet service providers (ISPs). However, because link-level properties cannot be uniquely determined from end-to-end measurements, the accuracy of existing statistical diagnosis approaches is subject to uncertainty from statistical assumptions about the network. In this paper, we propose a novel least-biased end-to-end network diagnosis (in short, LEND) system for inferring link-level properties like loss rate. We define a minimal identifiable link sequence (MILS) as a link sequence of minimal length whose properties can be uniquely identified from end-to-end measurements. We also design efficient algorithms to find all the MILSs and infer their loss rates for diagnosis. Our LEND system works for any network topology and for both directed and undirected properties and incrementally adapts to network topology and property changes. It gives highly accurate estimates of the loss rates of MILSs, as indicated by both extensive simulations and Internet experiments. Furthermore, we demonstrate that such diagnosis can be achieved with fine granularity and in near real-time even for reasonably large overlay networks. Finally, LEND can supplement existing statistical inference approaches and provide a smooth tradeoff between diagnosis accuracy and granularity.

Journal ArticleDOI
16 Oct 2009
TL;DR: These findings suggest that forwarding information through complex networks, such as the Internet, is possible without the overhead of existing routing protocols, and may also find practical applications in overlay networks for tasks such as application-level routing, information sharing, and data distribution.
Abstract: We show that complex (scale-free) network topologies naturally emerge from hyperbolic metric spaces. Hyperbolic geometry facilitates maximally efficient greedy forwarding in these networks. Greedy forwarding is topology-oblivious. Nevertheless, greedy packets find their destinations with 100% probability following almost optimal shortest paths. This remarkable efficiency persists even in highly dynamic networks. Our findings suggest that forwarding information through complex networks, such as the Internet, is possible without the overhead of existing routing protocols, and may also find practical applications in overlay networks for tasks such as application-level routing, information sharing, and data distribution.
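The forwarding rule itself is simple enough to sketch: give each node polar coordinates (r, θ) in the hyperbolic plane, compute distances via the hyperbolic law of cosines, and always hand a packet to the neighbor closest to the destination. The hard part, constructing an embedding in which greedy forwarding almost never gets stuck, is the paper's contribution and is not shown here:

```python
import math

def hyperbolic_distance(a, b):
    """Distance between two points given in native polar coordinates
    (r, theta) of the hyperbolic plane, via the hyperbolic law of cosines."""
    r1, t1 = a
    r2, t2 = b
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    x = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(1.0, x))  # guard against rounding below 1.0

def greedy_route(graph, coords, src, dst):
    """Topology-oblivious greedy forwarding: repeatedly hand the packet to
    the neighbor hyperbolically closest to the destination; give up if no
    neighbor makes progress (a local minimum)."""
    path = [src]
    current = src
    while current != dst:
        nxt = min(graph[current],
                  key=lambda n: hyperbolic_distance(coords[n], coords[dst]))
        if (hyperbolic_distance(coords[nxt], coords[dst])
                >= hyperbolic_distance(coords[current], coords[dst])):
            return None  # stuck: greedy forwarding failed
        path.append(nxt)
        current = nxt
    return path
```

Note how the rule uses only local information: a node needs its neighbors' coordinates and the destination's coordinates, never a routing table.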

Patent
23 Sep 2009
TL;DR: In this paper, a plurality of transparent access points (TAPs) is described; each TAP is coupled between one or more clients or servers and a wide area network (WAN) to enable the clients to communicate with the servers, and the TAPs are interconnected via permanently established secure links.
Abstract: A method and apparatus for processing an overlay network infrastructure. In one embodiment, the method comprises a plurality of transparent access points (TAPs). Each TAP is communicably coupled between one or more clients and servers and a wide area network (WAN) to enable the one or more clients to communicate with the one or more servers, and is coupled to the other TAPs via permanently established secure links. The overlay network also comprises a controller coupled to each of the TAPs via a secure connection to configure the TAPs with information that enables each TAP to know what services are available and from which of the TAPs each of the services can be accessed.

Proceedings ArticleDOI
19 Aug 2009
TL;DR: This paper discusses how to reduce, at the macro level, the total energy consumption of multiple peer computers performing various types of programs on P2P overlay networks, and proposes models for realizing energy-efficient computation in P2P systems.
Abstract: Information systems are composed of various types of computers interconnected by various types of networks, such as wireless networks. In addition, information systems are shifting from the traditional client-server model to the peer-to-peer (P2P) model. P2P systems are composed of peer processes (peers); they are scalable and fully distributed, without centralized coordinators. It is becoming increasingly important to discuss how to reduce the total energy consumption of computers in information systems, in addition to developing algorithms that minimize computation time and memory size. Low-energy CPUs and storage devices such as SSDs are now being developed at the architecture level. In this paper, we do not discuss the hardware specification of each computer. Instead, we discuss, at the macro level, how to reduce the total energy consumption of multiple peer computers performing various types of programs on P2P overlay networks. We propose models for realizing energy-efficient computation in P2P systems. We also discuss an algorithm that allocates a process to a peer computer so that the deadline constraint is satisfied and the total energy consumption is reduced.
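A minimal sketch of such deadline-constrained, energy-aware allocation, assuming each peer is characterized by a processing speed and a power consumption rate (the simple energy model, power multiplied by execution time, is an illustrative assumption, not the paper's exact model):

```python
def allocate(process, peers, now=0.0):
    """Pick the peer that can finish the process by its deadline with the
    least energy (power rate x execution time), or None if no peer can.
    process: (computation_steps, deadline); peer: (name, speed, watts)."""
    steps, deadline = process
    best = None
    for name, speed, watts in peers:
        exec_time = steps / speed
        if now + exec_time > deadline:
            continue  # this peer would violate the deadline constraint
        energy = watts * exec_time
        if best is None or energy < best[1]:
            best = (name, energy)
    return best
```

Note that the energy-minimal peer need not be the fastest one: a slower but cooler peer wins as long as it still meets the deadline.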

Journal ArticleDOI
TL;DR: An innovative scheme, hierarchical exponential region organization (HERO), tackles the problem of accurately locating moving vehicles in real time; by bounding the maximum number of hops a query is routed, HERO guarantees to meet the real-time constraint associated with each vehicle.
Abstract: Intelligent transportation systems have become increasingly important for the public transportation in Shanghai. In response, ShanghaiGrid (SG) project aims to provide abundant intelligent transportation services to improve the traffic condition. A challenging service in SG is to accurately locate the positions of moving vehicles in real time. In this paper, we present an innovative scheme, hierarchical exponential region organization (HERO), to tackle this problem. In SG, the location information of individual vehicles is actively logged in local nodes which are distributed throughout the city. For each vehicle, HERO dynamically maintains an advantageous hierarchy on the overlay network of local nodes to conservatively update the location information only in nearby nodes. By bounding the maximum number of hops the query is routed, HERO guarantees to meet the real-time constraint associated with each vehicle. A small-scale prototype system implementation and extensive simulations based on the real road network and trace data of vehicle movements from Shanghai demonstrate the efficacy of HERO.
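The key idea, exponentially sized regions updated exponentially less often, can be sketched as follows; the base radius, the factor of 2, and the one-half update threshold are illustrative assumptions rather than HERO's exact constants:

```python
def levels_to_update(moved, base_radius, num_levels):
    """Return the hierarchy levels whose stored location must be refreshed.
    Level i covers a region of radius base_radius * 2**i; its entry is
    refreshed only once the vehicle has moved a fixed fraction (here 1/2)
    of that radius, so distant, high-level regions see exponentially fewer
    updates than nearby ones."""
    return [i for i in range(num_levels)
            if moved >= 0.5 * base_radius * 2 ** i]

def max_query_hops(num_levels):
    # A location query climbs at most one node per level of the hierarchy,
    # so the hop count is bounded by its depth -- the property HERO uses to
    # guarantee the per-vehicle real-time constraint.
    return num_levels
```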

Journal ArticleDOI
TL;DR: The ability of a selected set of local rules to foster self-organization of what is originally a random graph into a structured network is measured and it is demonstrated that an overlay rewiring process based purely on local decisions and interactions can result in efficient load-balancing without central planning.
Abstract: In this paper, we investigate the global self-aggregation dynamics arising from local decision-based rewiring of an overlay network, used as an abstraction for an autonomic service-oriented architecture. We measure the ability of a selected set of local rules to foster self-organization of what is originally a random graph into a structured network. Scalability issues with respect to the key parameters of system size and diversity are extensively discussed. Conflicting goals are introduced, in the form of a population of nodes actively seeking to acquire neighbours of a type different from their own, resulting in decreased local homogeneity. We show that a ‘secondary’ self-organization process ensues, whereby nodes spontaneously cluster according to their implicit objective. Finally, we introduce dynamic goals by making the preferred neighbour type a function of the local characteristics of a simulated workload. We demonstrate that in this context, an overlay rewiring process based purely on local decisions and interactions can result in efficient load-balancing without central planning. We conclude by discussing the implications of our findings for the design of future distributed applications, the likely influence of other factors and of extreme parameter values on the ability of the system to self-organize and the potential improvements to our framework.
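A minimal sketch of such a local, decision-based rewiring step, assuming an undirected graph stored as a dict of neighbour sets and a per-node preferred neighbour type (the names and the specific rule are illustrative, not the paper's exact protocol):

```python
import random

def rewire_step(graph, types, preferred, rng):
    """One local rewiring step: a random node drops one neighbour whose type
    differs from the node's preferred type and links instead to a random
    non-neighbour of the preferred type (if any exists). Uses only local
    information; no central planning."""
    node = rng.choice(sorted(graph))
    mismatched = [n for n in graph[node] if types[n] != preferred[node]]
    candidates = [n for n in graph
                  if n != node and n not in graph[node]
                  and types[n] == preferred[node]]
    if not mismatched or not candidates:
        return False
    old, new = rng.choice(mismatched), rng.choice(candidates)
    graph[node].remove(old); graph[old].remove(node)
    graph[node].add(new); graph[new].add(node)
    return True

def homogeneity(graph, types, preferred):
    """Fraction of edge endpoints whose neighbour matches the preference."""
    matches = total = 0
    for node, nbrs in graph.items():
        for n in nbrs:
            total += 1
            matches += types[n] == preferred[node]
    return matches / total
```

Each step preserves the total number of edges, and repeated application tends to drive up the fraction of preference-matching edges, the self-aggregation effect the paper measures.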

Journal ArticleDOI
TL;DR: HiGLOB is a general framework for global load balancing in structured peer-to-peer (P2P) systems in which each node has two key components: a histogram manager that maintains a histogram reflecting a global view of the load distribution in the system, and a load-balancing manager that redistributes load whenever the node becomes overloaded or underloaded.
Abstract: Over the past few years, peer-to-peer (P2P) systems have rapidly grown in popularity and have become a dominant means for sharing resources. In these systems, load balancing is a key challenge because nodes are often heterogeneous. While several load-balancing schemes have been proposed in the literature, these solutions are typically ad hoc, heuristic based, and localized. In this paper, we present a general framework, HiGLOB, for global load balancing in structured P2P systems. Each node in HiGLOB has two key components: 1) a histogram manager that maintains a histogram reflecting a global view of the distribution of the load in the system, and 2) a load-balancing manager that redistributes the load whenever the node becomes overloaded or underloaded. We exploit the routing metadata to partition the P2P network into nonoverlapping regions corresponding to the histogram buckets. We propose mechanisms to keep the cost of constructing and maintaining the histograms low. We further show that our scheme can control and bound the amount of load imbalance across the system. Finally, we demonstrate the effectiveness of HiGLOB by instantiating it over three existing structured P2P systems: Skip Graph, BATON, and Chord. Our experimental results indicate that our approach works well in practice.
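The overload test each node performs can be sketched from the histogram alone: with one bucket per non-overlapping region, a node estimates the global average load and compares its own load against it (the factor-of-two threshold below is an illustrative assumption, not HiGLOB's exact criterion):

```python
def load_state(own_load, histogram, ratio=2.0):
    """Classify a node against the global load distribution.
    histogram: list of (region_total_load, region_node_count) pairs, one
    bucket per non-overlapping region of the overlay. A node is overloaded
    if it carries more than `ratio` times the global average load, and
    underloaded if it carries less than 1/`ratio` of it."""
    total = sum(load for load, _ in histogram)
    nodes = sum(count for _, count in histogram)
    avg = total / nodes
    if own_load > ratio * avg:
        return 'overloaded'
    if own_load < avg / ratio:
        return 'underloaded'
    return 'balanced'
```

Because every node maintains the same bucketed view, over- and underloaded nodes can find each other and redistribute load without any central coordinator.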