
Showing papers presented at "International Conference on Computer Communications and Networks in 2006"


Proceedings ArticleDOI
01 Oct 2006
TL;DR: It is concluded that, in order to achieve the highest throughput under a typical flyover UAV flight path, both the UAV and the ground station should use omni-directional dipole antennas positioned horizontally, with their respective antenna nulls pointing in a direction perpendicular to the UAV's flight path.
Abstract: We report the measured performance of 802.11a wireless links from an unmanned aerial vehicle (UAV) to ground stations. In a set of field experiments, we record the received signal strength indicator (RSSI) and measure the raw link-layer throughput for various antenna orientations, communication distances, and ground-station elevations. By comparing the performance of 32 simultaneous pairs of UAV and ground-station configurations, we are able to conclude that, in order to achieve the highest throughput under a typical flyover UAV flight path, both the UAV and the ground station should use omni-directional dipole (as opposed to high-gain, narrow-beam) antennas positioned horizontally, with their respective antenna nulls pointing in a direction perpendicular to the UAV's flight path. In addition, a moderate amount of elevation of the ground stations can also improve performance significantly.

108 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: A new algorithm is proposed, called LERP (Lightpath Establishment with Regenerator Placement), that solves the RWA problem while guaranteeing the feasibility of the obtained solution.
Abstract: Over the last decade, numerous routing and wavelength assignment (RWA) algorithms have been developed for WDM optical network planning. Most of these algorithms neglect the feasibility of the obtained lightpaths. In this paper, we propose a new algorithm, called LERP (Lightpath Establishment with Regenerator Placement), that solves the RWA problem while guaranteeing the feasibility of the obtained solution. A lightpath is said to be admissible if the BER (Bit Error Rate) at its destination node remains acceptable (i.e., under a given threshold). In the case of BER non-admissibility, one or more electrical regenerators may be placed along the lightpath. The LERP algorithm aims at minimizing the number of regenerators necessary to guarantee the quality of transmission along the lightpath. The originality of our approach consists in considering simultaneously four physical-layer impairments, namely chromatic dispersion, polarization mode dispersion, amplified spontaneous emission, and non-linear phase shift. The efficiency of the LERP algorithm is demonstrated via a numerical comparison with one of the alternative solutions proposed in the literature. Numerical simulations have been carried out in the context of the 18-node NSF network.
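The abstract's placement idea (regenerate only when accumulated impairments would make the BER inadmissible) can be sketched as a greedy walk along the lightpath. This is a simplified illustration, not LERP itself: the per-span penalty values, the single scalar "quality budget", and the placement rule are all invented for the example.

```python
# Hypothetical greedy sketch of regenerator placement along a lightpath:
# walk span by span, accumulate a signal-quality penalty, and place a
# regenerator (resetting the penalty) whenever the admissibility budget
# would be exceeded. Penalties and the budget are illustrative only.

def place_regenerators(span_penalties, budget):
    """Return indices of spans before which a regenerator is placed."""
    placements = []
    accumulated = 0.0
    for i, penalty in enumerate(span_penalties):
        if accumulated + penalty > budget:
            placements.append(i)      # regenerate at the node before span i
            accumulated = 0.0
        accumulated += penalty
    return placements

# Toy lightpath: per-span quality penalties with an admissibility budget of 1.0.
spans = [0.3, 0.4, 0.5, 0.2, 0.6]
regens = place_regenerators(spans, budget=1.0)
```

A real scheme would track the four impairments separately and map them to BER; the greedy reset above only conveys the "place as late as possible" intuition.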

69 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: A theoretical, non-achievable lower bound of the estimation MSE under the total bit rate constraint is stated, and the proposed algorithm is shown to be quasi-optimal, within a factor of 2.2872 of this bound.
Abstract: We consider distributed parameter estimation in wireless sensor networks where a total bit rate constraint is imposed. There is a tradeoff between the number of active sensors and the quantization bit rate of each active sensor in minimizing the estimation mean square error (MSE). We first present an optimal distributed estimation algorithm for homogeneous sensor networks and introduce the concept of the equivalent 1-bit MSE function. Then, we propose a quasi-optimal distributed estimation algorithm for heterogeneous sensor networks, also based on the equivalent 1-bit MSE function, and address the upper bound of the estimation MSE of the proposed algorithm. Furthermore, a theoretical lower bound of the estimation MSE under the total bit rate constraint is stated, and it is shown that our proposed algorithm is quasi-optimal, within a factor of 2.2872 of this lower bound. Simulation results also show that the proposed algorithm achieves a significant reduction in estimation MSE compared to other uniform methods.
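The sensors-versus-bits tradeoff under a total rate constraint R = n · b can be made concrete with a toy MSE model. The model below (observation noise averaged over n sensors plus uniform quantization noise of step W/2^b) is an assumption for illustration, not the paper's equivalent 1-bit MSE function.

```python
# Toy illustration of the tradeoff between the number of active sensors n
# and the per-sensor quantization bit rate b under a total rate budget
# R >= n * b. The MSE expression is a simplification, not the paper's.

def toy_mse(n, b, sigma2=1.0, W=4.0):
    delta = W / (2 ** b)               # quantizer step for range W, b bits
    return sigma2 / n + delta ** 2 / (12 * n)

def best_allocation(R):
    """Exhaustively search all (n, b) with n * b <= R for minimum toy MSE."""
    return min(((toy_mse(n, b), n, b)
                for n in range(1, R + 1)
                for b in range(1, R // n + 1)),
               key=lambda t: t[0])     # (mse, n, b)

mse, n, b = best_allocation(24)
```

Under this particular toy model the averaging gain dominates, so the search favors many coarsely quantized sensors; the paper's model captures a richer tradeoff.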

68 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: New abstractions for information transfer and data services in the network are introduced, which overcome the constraints of the end-to-end argument that has dominated current network designs.
Abstract: Next-generation network architectures will be governed by the need for flexibility. Heterogeneous end-systems, novel communication abstractions, and security and manageability challenges will require networks to provide a broad range of services that go beyond the simple store-and-forward capabilities of today's Internet. This paper introduces new abstractions for information transfer and data services in the network, which overcome the constraints of the end-to-end argument that has dominated current network designs. The explicit separation of communication and processing allows the composition of a variety of information transfer patterns that expand the capabilities of the network to meet its next-generation challenges. We discuss implementation issues that arise in this network architecture and present several examples of how applications can utilize the proposed abstractions. This presents a first step toward a unified view of the convergence of networking, processing, and their distributed applications.

52 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: Measuring the performance impact of anycast on DNS reveals an inherent trade-off between increasing the percentage of queries answered by the closest server and the stability of the DNS zone, measured by the number of query failures and server switches.
Abstract: In this paper, we measure the performance impact of anycast on DNS. We study four top-level DNS servers to evaluate how anycast improves DNS service and compare different anycast configurations. Increased availability is one of the supposed advantages of anycast and we found that indeed the number of observed outages was smaller for anycast, suggesting that it provides a mostly stable service. On the other hand, outages can last up to multiple minutes, mainly due to slow BGP convergence. We also found that anycast indeed reduces query latency. Furthermore, depending on the anycast configuration used, 37% to 80% of the queries are directed to the closest anycast instance. Our measurements revealed an inherent trade-off between increasing the percentage of queries answered by the closest server and the stability of the DNS zone, measured by the number of query failures and server switches. We believe that these findings will help network providers to deploy anycast more effectively in the future.

41 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: This work characterizes how the introduction of a few sensor nodes with better capabilities can reduce the total number of required sensors without sacrificing coverage or broadcast reachability.
Abstract: While most existing research efforts in the area of wireless sensor networks have focused on networks with identical nodes, deploying sensors with different capabilities has become a feasible choice. In this paper, we focus on sensor networks with two types of nodes that differ in their capabilities, and discuss the effects of heterogeneity in sensing and transmission ranges on network coverage and broadcast reachability. Our work characterizes how the introduction of a few sensor nodes with better capabilities can reduce the total number of required sensors without sacrificing coverage or broadcast reachability. Analytical results are validated via simulations. This work can serve as a guideline for designing large-scale sensor networks cost-effectively. It can also be extended to more complicated heterogeneous wireless sensor networks with more than two types of sensors.
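One way to see how a few better-endowed sensors help coverage is the classical Boolean (Poisson) model; this is an assumption here, not the paper's exact derivation: with two independent Poisson deployments of densities lam1, lam2 and sensing radii r1, r2, a point is covered unless no disk of either type contains it.

```python
import math

# Classical Boolean-model coverage sketch (an assumption, not the
# paper's exact model): P(covered) = 1 - exp(-(lam1*pi*r1^2 + lam2*pi*r2^2)).

def coverage_probability(lam1, r1, lam2, r2):
    return 1.0 - math.exp(-(lam1 * math.pi * r1 ** 2 +
                            lam2 * math.pi * r2 ** 2))

# Replacing some ordinary sensors with a few long-range ones (r2 = 2*r1)
# raises coverage even though total density drops (0.009 vs 0.01).
base = coverage_probability(0.01, 5.0, 0.0, 0.0)
mixed = coverage_probability(0.008, 5.0, 0.001, 10.0)
```

The comparison only illustrates the qualitative effect the abstract describes; the paper additionally analyzes broadcast reachability, which this formula does not capture.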

34 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: The objective of this paper is to design and implement a TOE offload system that offloads the processing of TCP/IP protocols onto a custom host bus adapter; experimental results show that the offload system can provide TCP/IP transmission rates of up to 296 Mbps as receiver and 239 Mbps as sender, compared with embedded-OS-based solutions.
Abstract: With increasing network speeds over Ethernet, servers and communication systems have become burdened with the large amount of TCP/IP processing required. The main reason for the CPU bottleneck is that the TCP/IP stack is processed at a rate lower than the network speed. In recent years, the TCP/IP offload engine (TOE) has emerged as an attractive solution, which can reduce host CPU overhead and improve network performance at the same time. The objective of this paper is therefore to design and implement a TOE offload system which attempts to offload the processing of TCP/IP protocols onto our designed host bus adapter. We have also implemented our TOE acceleration hardware block and associated TCP firmware to accomplish this goal. The experimental results show that our offload system can provide TCP/IP transmission rates of up to 296 Mbps as receiver and 239 Mbps as sender compared with embedded-OS-based solutions.

34 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper presents an analysis of structured peer-to-peer systems that takes into consideration a Zipf-like request distribution, and proposes a novel approach for load balancing that takes object popularity into account, based on dynamic routing table reorganization.
Abstract: In the past few years, several DHT-based abstractions for peer-to-peer systems have been proposed. Their main characteristic is to associate nodes (peers) with objects (keys) and to construct distributed routing structures to support efficient location. These approaches partially consider the load problem by balancing the storage of objects without, however, considering lookup traffic. In this paper, we present an analysis of structured peer-to-peer systems taking into consideration a Zipf-like request distribution. Based on our analysis, we propose a novel approach for load balancing that takes object popularity into account. It is based on dynamic routing table reorganization to balance the routing load, and on caching objects to balance the request load. We can therefore significantly improve the load balancing of traffic in these systems, and consequently their scalability and performance. Results from experimental evaluation demonstrate the effectiveness of our approach.
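Why Zipf-like popularity breaks storage-only balancing can be shown in a few lines: keys are spread evenly over nodes, but requests are not. The key count, node count, and Zipf exponent below are illustrative assumptions.

```python
# Sketch of lookup-load skew under a Zipf-like request distribution:
# keys are placed uniformly over nodes, yet the node holding the
# hottest keys receives a disproportionate share of requests.

def zipf_weights(n_keys, s=1.0):
    w = [1.0 / (k ** s) for k in range(1, n_keys + 1)]
    total = sum(w)
    return [x / total for x in w]

n_keys, n_nodes = 1000, 10
weights = zipf_weights(n_keys)

# Uniform placement: key of popularity rank k+1 lives on node k mod n_nodes.
load = [0.0] * n_nodes
for k, w in enumerate(weights):
    load[k % n_nodes] += w

imbalance = max(load) / (1.0 / n_nodes)   # ratio to a perfectly even load
```

The node that happens to hold rank 1 ends up more than twice as loaded as the even share, which is the traffic imbalance the paper's routing-table reorganization and caching are meant to smooth out.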

29 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: The results show that the proposed algorithm can find a tree of a good approximation to the optimal tree and has a high degree of scalability.
Abstract: This paper considers the problem of constructing data gathering trees in a wireless sensor network for a group of sensor nodes to send collected information to a single sink node. Sensors form application-directed groups, and the sink node communicates with the group members, called source nodes, to gather the desired data using a multicast tree rooted at the sink node [7]. The data gathering tree contains the sink node, all the source nodes, and some other non-source nodes. Our goal in constructing such a data gathering tree is to minimize the number of non-source nodes included in the tree, so as to save the energy of as many non-source nodes as possible. It can be shown that this optimization problem is NP-hard. We first propose an approximation algorithm with a performance ratio of four, and then give a distributed algorithm corresponding to the approximation algorithm. Extensive simulations are performed to study the performance of the proposed algorithm. The results show that the proposed algorithm can find a tree that is a good approximation to the optimal tree, and that it has a high degree of scalability.
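The objective (include as few non-source relays as possible) can be sketched with a naive greedy heuristic: attach each source to the growing tree by a shortest hop path and count the relays pulled in. This is a hedged illustration of the problem, not the paper's 4-approximation algorithm; the topology is invented.

```python
from collections import deque

# Greedy sketch of data-gathering-tree construction (not the paper's
# 4-approximation): attach each source to the tree built so far via a
# shortest hop path, counting non-source relay nodes that get included.

def bfs_path(adj, start, targets):
    """Shortest hop path from start to the nearest node in `targets`."""
    prev = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        if u in targets:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def build_gathering_tree(adj, sink, sources):
    tree = {sink}
    for s in sources:
        if s not in tree:
            tree.update(bfs_path(adj, s, tree))
    return tree

# Toy topology: 0 is the sink, 3 and 4 are sources, 1 and 2 are relays.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 4], 3: [1], 4: [2]}
tree = build_gathering_tree(adj, 0, [3, 4])
relays = tree - {0, 3, 4}
```

Greedy attachment like this carries no worst-case guarantee; minimizing the relay count is exactly the Steiner-tree-flavored part that makes the problem NP-hard.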

24 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: A novel hierarchical GMPLS-based framework for provisioning all-optical and opto-electronic multi-domain DWDM networks and several topology abstraction schemes are proposed for aggregating domain-level state to improve routing scalability and lower inter-domain blocking.
Abstract: As DWDM networks proliferate, there is a growing need to address the issue of distributed inter-domain lightpath provisioning. Although inter-domain provisioning has been well studied for packet/cell-switching networks, the wavelength dimension presents many additional challenges. This paper develops a novel hierarchical GMPLS-based framework for provisioning all-optical and opto-electronic multi-domain DWDM networks. In particular, several topology abstraction schemes are proposed for aggregating domain-level state to improve routing scalability and lower inter-domain blocking. Inter-domain lightpath RWA and signaling schemes are also presented. Performance analysis results are presented along with directions for future work.

24 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: The results show that overlay routing has the potential to reduce the round trip time by 40 milliseconds and increase network connectivity by 7% on average, and even simple static overlay paths can reduce end-to-end delay, while the dynamic algorithm does better.
Abstract: The prosperity of various real-time applications creates significant challenges for the Internet in meeting critical end-to-end delay requirements. This paper investigates the feasibility and practical issues of using overlay routing to improve end-to-end delay performance, based on an analysis of round-trip time data collected over three months by the all-pairs-pings project between each pair of hundreds of nodes on PlanetLab. The results show: 1) overlay routing has the potential to reduce the round trip time by 40 milliseconds and increase network connectivity by 7% on average; 2) even simple static overlay paths can reduce end-to-end delay, while the dynamic algorithm does better; 3) over 80% of the shortest overlay paths have no more than 4 hops, and a simple algorithm leveraging only one relay node can efficiently take advantage of overlay routing.
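The "only one relay node" idea in the abstract reduces to a simple comparison: for a source-destination pair, check whether any single intermediate node offers a lower two-hop RTT than the direct path. The RTT matrix below is invented for illustration.

```python
# Sketch of one-relay overlay routing: compare the direct RTT with the
# best single-relay detour. RTT values (ms) are made up for illustration.

def best_one_relay(rtt, s, d):
    direct = rtt[s][d]
    best_relay, best_cost = None, direct
    for r in range(len(rtt)):
        if r in (s, d):
            continue
        cost = rtt[s][r] + rtt[r][d]
        if cost < best_cost:
            best_relay, best_cost = r, cost
    return best_relay, best_cost   # relay is None if direct is best

# Node 1 offers a shortcut around a congested direct link 0 -> 2.
rtt = [[0, 30, 100],
       [30, 0, 40],
       [100, 40, 0]]
relay, cost = best_one_relay(rtt, 0, 2)
```

Scanning every candidate relay is O(n) per pair, which is why a one-relay scheme is cheap enough to run continuously, unlike full overlay path computation.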

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A virtual small-world network is constructed by adding virtual long links to the network to reduce the chance of a protocol encountering local minima in greedy mode, and thus decrease the chance of invoking inefficient methods.
Abstract: Routing is the foremost issue in mobile ad hoc networks (MANETs). In a wireless environment characterized by small bandwidth and limited computational resources, position-based routing is attractive because it requires little communication and storage overhead. To guarantee delivery and improve performance, most position-based routing protocols, e.g. GFG, forward a message in greedy mode until the message reaches a node that has no neighbor closer to the destination, which is called a local minimum. They then switch to a less efficient mode; face routing, where the message is forwarded along the perimeter of the void, is one example. This paper tackles the void problem with two new methods. First, we construct a virtual small-world network by adding virtual long links to the network to reduce the chance of a protocol encountering local minima in greedy mode, and thus decrease the chance of invoking inefficient methods. Second, we use the virtual force method to recover from local minima without relying on face routing. We combine these two methods into our new, purely greedy routing protocol SWING. Simulation shows that SWING finds shorter routes than the state-of-the-art geometric routing protocol GOAFR, though with a longer route establishment time. More importantly, SWING is purely greedy, so it works even if position information is inaccurate, and it can be directly applied to 3D MANET models. A theoretical proof that it guarantees delivery is given.
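The local minimum ("void") that SWING's virtual long links are designed to avoid is easy to exhibit with minimal greedy geographic forwarding: the message halts when no neighbor is closer to the destination than the current node. Coordinates and topology below are invented; this is not SWING itself.

```python
import math

# Minimal greedy geographic forwarding to illustrate the local minimum
# problem: forwarding stops when no neighbor is closer to the
# destination than the current node.

def greedy_route(pos, adj, src, dst):
    path, cur = [src], src
    while cur != dst:
        nxt = min(adj[cur], key=lambda v: math.dist(pos[v], pos[dst]))
        if math.dist(pos[nxt], pos[dst]) >= math.dist(pos[cur], pos[dst]):
            return path, False        # local minimum: greedy mode fails
        path.append(nxt)
        cur = nxt
    return path, True

# Node 1 is closer to the destination 3 than node 0, but it is a dead
# end: a local minimum where GFG-style protocols switch to face routing.
pos = {0: (0, 0), 1: (1, 1), 2: (1, -1), 3: (3, 0)}
adj = {0: [1, 2], 1: [0], 2: [0], 3: []}
path, ok = greedy_route(pos, adj, 0, 3)
```

A virtual long link from node 1 toward a node near the destination would let a protocol stay in greedy mode here, which is the effect the abstract describes.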

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper presents the design and evaluation of Midas, an approach to supporting multi-attribute range queries on R-DHT that indexes multi-attribute resources using a d-to-one mapping scheme and optimizes a range query by searching only for available keys.
Abstract: R-DHT is a class of DHT whereby each node supports "read-only" accesses to its key-value pairs, but does not allow key-value pairs belonging to other nodes to be written on it. Recently, supporting efficient multi-attribute range queries on DHT has been an active area of research. This paper presents the design and evaluation of Midas, an approach to support multi-attribute range queries on R-DHT. Midas indexes multi-attribute resources using a d-to-one mapping scheme, and optimizes a range query by searching only for available keys. Our simulation results show that Midas on R-DHT achieves a higher lookup resiliency than conventional DHT, and it has a lower cost of query processing when the query selectivity is much larger than the number of query results.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: Simulation results demonstrate that the proposed congestion-triggered multipath routing scheme can effectively improve network performance by exploiting routing redundancies inherent in the network topology.
Abstract: We present a multipath routing scheme that is designed to increase throughput and alleviate congestion in networks employing shortest path routing. The multipath routing scheme consists of an algorithm to determine a set of multiple disjoint or partially disjoint paths and a mechanism for distributing traffic over a multipath route to reduce the traffic load on a congested link. The algorithm for finding multipath routes is based on shortest path routing and does not require pre-establishment of paths or support for source routing. The mechanism for multipath traffic distribution is triggered at a node when the average load on an outgoing link exceeds a threshold. Our simulation results demonstrate that the proposed congestion-triggered multipath routing scheme can effectively improve network performance by exploiting routing redundancies inherent in the network topology.
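The threshold-triggered distribution mechanism in the abstract can be sketched as a simple spillover rule: traffic stays on the shortest path until an outgoing link's load crosses a threshold, after which the excess is shifted to an alternate path. The capacity, load, and threshold values are illustrative assumptions.

```python
# Sketch of threshold-triggered multipath traffic distribution: below
# the threshold everything uses the primary (shortest) path; above it,
# the excess spills onto an alternate, (partially) disjoint path.

def distribute(load, capacity, threshold=0.8):
    """Split `load` between the primary and the alternate path."""
    limit = threshold * capacity
    if load <= limit:
        return load, 0.0
    return limit, load - limit

primary, alternate = distribute(load=95.0, capacity=100.0)
```

Keeping the trigger local to a node, as here, matches the abstract's point that no path pre-establishment or source routing is required.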

Proceedings ArticleDOI
01 Oct 2006
TL;DR: An ILP formulation to compute the working path and flow p-cycles for the current demand to minimize the working and spare capacity required by the demand is developed and a demand teardown procedure is described that accurately computes the network capacities that can be reclaimed when a demand departs the network.
Abstract: Span-protecting p-cycles have been shown to be a promising approach for survivable WDM network design because of their ability to achieve ring-like recovery speed while maintaining the capacity efficiency of a mesh-restorable network. In [12], the concept of the span-protecting p-cycle is extended to the path-segment-protecting p-cycle (flow p-cycle for short), which can protect a multi-span segment of a working path. An ILP model that computes the optimal placement of flow p-cycles for protecting a given set of demands (i.e., static traffic) is given in [12]. In this paper, we present an algorithm that uses flow p-cycles for service protection in a dynamic traffic scenario. When a demand arrives at the network, a working path needs to be selected for the demand, and a set of flow p-cycles needs to be configured to protect the demand. To utilize network capacity efficiently, we propose to reuse the existing flow p-cycles in the network to protect the current demand. Based on this idea, we develop an ILP formulation to compute the working path and flow p-cycles for the current demand so as to minimize the working and spare capacity required by the demand. We also describe a demand teardown procedure that accurately computes the network capacities that can be reclaimed when a demand departs the network. Simulation results show that flow p-cycles outperform span p-cycles to a considerable extent for dynamic traffic.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The relationship among node density, sensing range, and the possible intrusion distance before the intruder is detected by any of the sensors is established.
Abstract: Wireless sensor networks present a feasible and economical solution to some of the most challenging problems, such as intrusion detection. In this paper, we establish the relationship among node density, sensing range, and the possible intrusion distance before the intruder is detected by any of the sensors. We also extend our model to the multi-sensor joint detection case, where an event can only be detected by k (k > 1) sensors simultaneously. Furthermore, we consider the effect of node heterogeneity on intrusion detection in wireless sensor networks, where two types of sensors with different sensing ranges are deployed. All the analytical results are validated via simulations. Our analysis can provide very useful insights into choosing the network design parameters of sensor networks so that specified performance requirements can be met, thereby enhancing the acceptance of sensor networks.
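The density/range/intrusion-distance relationship has a well-known closed form under the Boolean Poisson model, which is assumed here for illustration (the paper's formulation may differ): an intruder moving straight for distance D is detected iff some sensor lies within sensing range r of its path, a "stadium" region of area 2rD + πr².

```python
import math

# Hedged analytic sketch under a Poisson deployment of density lam:
# P(undetected after distance D) = exp(-lam * (2*r*D + pi*r^2)).

def p_undetected(lam, r, D):
    return math.exp(-lam * (2 * r * D + math.pi * r ** 2))

def max_intrusion_distance(lam, r, p_target):
    """Largest D whose detection probability is still exactly p_target."""
    # Solve 1 - exp(-lam*(2*r*D + pi*r^2)) = p_target for D.
    D = (-math.log(1 - p_target) / lam - math.pi * r ** 2) / (2 * r)
    return max(D, 0.0)

# Distance an intruder can cover before being detected with prob. 0.9.
d = max_intrusion_distance(lam=0.01, r=5.0, p_target=0.9)
```

Inverting the formula this way is exactly the kind of design guidance the abstract mentions: pick density and range so a target detection probability is met within a given intrusion distance.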

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper develops a disjoint multipath routing strategy using colored trees, with the objective of minimizing the total cost of the routing paths in a network, and demonstrates through extensive simulations that the developed technique is extremely effective in optimizing the average cost of the paths.
Abstract: Multi-path routing (MPR) is an effective strategy to achieve robustness, load balancing, congestion reduction, and increased throughput by transmitting data over multiple paths. Disjoint multi-path routing (DMPR) requires the multiple paths to be link- or node-disjoint. Implementing both MPR and DMPR poses significant challenges in obtaining loop-free multiple (disjoint) paths and effectively forwarding the data over the multiple paths, the latter being significant in datagram networks. In this paper, we develop a disjoint multipath routing strategy using colored trees with the objective of minimizing the total cost of the routing paths in a network. Two trees, namely red and blue, rooted at a given drain, are formed. We demonstrate through extensive simulations that the developed technique is extremely effective in optimizing the average cost of the paths. In addition, we also observe that the developed approach minimizes the average minimum (minimum of the two paths) cost, which is lower than that obtained by earlier algorithms. The colored tree approach simply doubles the size of the routing table when two link- or node-disjoint paths to a specific node are needed.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper devises a queuing network to model NP resources and application workflows, and uses queuing theory and operational analysis to obtain performance metrics on throughput and response time, at the component level as well as at the system level.
Abstract: Network processors (NPs) are designed to provide both performance and flexibility through a parallel and programmable architecture, making them superior to general-purpose processors in performance and to hardware-based solutions in flexibility. But NPs also introduce new challenges. It is important to study the limitations of NP architectures so that one can take full advantage of NP resources to achieve the required performance for a given application. It is therefore desirable to develop a general framework for analyzing the performance of NP-based applications. This paper presents an analytical method for solving this problem. In particular, we devise a queuing network to model NP resources and application workflows. We then use queuing theory and operational analysis to obtain performance metrics on throughput and response time, among other things, at the component level as well as at the system level. We apply our performance model to SpliceNP, a TCP splicing implementation of content-aware switches on network processors presented in [10], and show that the analytical results using our models match the experimental results from the actual implementation.
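The operational-analysis style of reasoning the abstract mentions can be sketched with the basic utilization law: component utilization is throughput times service demand, and the component with the largest demand bounds system throughput. The component names and demand values below are hypothetical, not taken from the SpliceNP study.

```python
# Operational-analysis sketch (utilization law): U_i = X * D_i, and the
# system saturates when the bottleneck component reaches U_i = 1, so
# throughput is bounded by 1 / max(D_i). Demands below are invented.

def utilizations(throughput, demands):
    return {name: throughput * d for name, d in demands.items()}

def max_throughput(demands):
    return 1.0 / max(demands.values())

# Hypothetical per-packet service demands (seconds) at NP components.
demands = {"rx_microengine": 0.002, "tcp_splice": 0.005, "tx": 0.001}
x_max = max_throughput(demands)           # packets/s bound
util = utilizations(150.0, demands)       # utilizations at 150 packets/s
```

Even this crude bound identifies which microengine to parallelize first, which is the sort of architectural question the paper's full queuing-network model answers more precisely.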

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The threshold-based TCP Vegas is able to distinguish whether increases in packet delay are due to network congestion or due to burst contentions at low traffic loads, and it achieves higher throughput for a TCP connection compared to TCP Vegas and loss-based TCP implementations, such as TCP Sack.
Abstract: Due to the bufferless nature of optical burst switched (OBS) networks, contentions occur even at low traffic loads, leading to burst losses. Contention resolution schemes, such as burst retransmission and deflection, can reduce burst losses, especially at low traffic loads. However, both schemes result in additional packet delay for the packets in bursts that are retransmitted or deflected. This additional packet delay affects the performance of delay-based TCP implementations, which rely on packet delay to estimate the available bandwidth in the network and to detect network congestion. In this paper, we discuss the issues of TCP Vegas over OBS networks and propose a threshold-based TCP Vegas version that is suited to the characteristics of OBS networks. The threshold-based TCP Vegas is able to distinguish whether increases in packet delay are due to network congestion or due to burst contentions at low traffic loads. Our simulation results show that the threshold-based TCP Vegas achieves higher throughput for a TCP connection compared to TCP Vegas and loss-based TCP implementations, such as TCP Sack.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper proposes a novel tag reading protocol, Relay-MAC, which aims at reducing the information sent over the network and the energy spent in collision detection and handling by introducing deliberate sequencing at runtime, and validates its feasibility using simulation studies.
Abstract: A promising application for RFID tags is to trace valuable assets in an inventory. In such systems, the key challenge is to achieve reliable and energy-efficient tag reads. This paper proposes a novel tag reading protocol, Relay-MAC, which aims at reducing the information sent over the network and the energy spent in collision detection and handling by introducing deliberate sequencing at runtime. This paper provides an in-depth study of the design issues one may face in implementing such a protocol on RFID tags, and validates its feasibility using simulation studies. These studies clearly demonstrate that Relay-MAC can yield much better throughput and energy conservation when compared to a conventional select-and-read protocol.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The queueing performance of the random access protocol without piggyback is analyzed as non-exhaustive service of an M/G/1-type queue with set-up times, taking a binary exponential backoff mechanism into consideration.
Abstract: In this paper, we consider the queueing performance of the IEEE 802.16 random access protocol for sporadic data transmission with a binary exponential backoff algorithm. The random access protocol of IEEE 802.16 is physically based on orthogonal frequency division multiple access and code division multiple access with time division duplexing mode, i.e., multichannel and multicode slotted Aloha. In the medium access control layer, the protocol is a type of demand-assigned multiple access with and without piggyback, in which a bandwidth request may be made either before transmitting data or at the end of a data transmission. We analyze the queueing performance of the random access protocol without piggyback, which is non-exhaustive service of an M/G/1-type queue with set-up times, taking the binary exponential backoff mechanism into consideration. The performance is presented as a function of traffic load, number of subscriber stations, and retransmission probability.
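As a simplified illustration of the kind of M/G/1 analysis the abstract refers to, the Pollaczek-Khinchine formula gives the mean waiting time for a plain M/G/1 queue without set-up times; the paper's model with set-up and backoff is more involved than this sketch.

```python
# Pollaczek-Khinchine mean waiting time for a plain M/G/1 queue
# (no set-up times): W = lam * E[S^2] / (2 * (1 - rho)), rho = lam * E[S].
# This is a simpler baseline than the paper's set-up-time model.

def mg1_mean_wait(lam, E_s, E_s2):
    rho = lam * E_s
    assert rho < 1, "queue must be stable"
    return lam * E_s2 / (2.0 * (1.0 - rho))

# Deterministic service of 1 slot: E[S] = 1, E[S^2] = 1, at load 0.5.
w = mg1_mean_wait(lam=0.5, E_s=1.0, E_s2=1.0)
```

Note how the second moment E[S²] enters: exponential service with the same mean (E[S²] = 2) doubles the wait at the same load, and set-up times and backoff inflate the effective service-time moments further.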

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper presents a new technique, utilizing blossom inequalities, that can solve the wireless network design problem under the physical additive interference model, which is the more physically realistic model.
Abstract: Wireless mesh networks are emerging as the next important arena for multihop wireless networking research. Due to several characteristics of these networks, they are amenable to network capacity and resource design in the same manner as more traditional wired networks, with the important difference that wireless interference must be accounted for in the design. Studies on such network design have already appeared in the literature. In this paper, we consider this design problem. We mention previous formulations, which address the binary model of interference. We then consider the physical additive interference model, which is the more physically realistic one. So far in the literature, the additive nature of interference has been ignored in this context for simplification. We show that existing techniques are not sufficient to address this case, and go on to present a new technique, utilizing blossom inequalities, which can find solutions to this problem. Numerical results show that our approach provides good results in practice.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper presents a novel geocast routing protocol for symbolically addressed messages that can operate on simple symbolic location models, and shows how to improve the performance of message forwarding by integrating a light-weight layer 3 multicast protocol.
Abstract: Geocast, which allows for forwarding messages to hosts residing in specified geographic areas, is a promising communication paradigm with a wide range of applications. Geocast target areas can be specified either by geometric figures or by symbolic addresses, such as /usa/fl/miami/market-street. In this paper, we present a novel geocast routing protocol for symbolically addressed messages. Compared to geocast protocols based on geometric information, our protocol can operate on simple symbolic location models, and message forwarding does not require costly geometric operations. The proposed protocol is based on an overlay network that is mapped to an IP-based network infrastructure. The overlay network is structured in a hierarchical fashion to ensure a scalable global geocast service, also supporting large target areas. Although our protocol does not rely on a layer 3 multicast protocol, we also show how to improve the performance of message forwarding by integrating a light-weight layer 3 multicast protocol. Our evaluations of the protocol underline the scalability of our approach and show good routing quality, leading to short message paths.
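The reason symbolic addressing avoids geometric operations can be shown in a few lines: a hierarchical target area matches any node whose registered location has the area's address as a prefix. The address tree below is invented for illustration and is not the paper's protocol.

```python
# Sketch of symbolic-address matching for geocast: membership in a
# target area is a path-prefix test, not a geometric containment test.

def in_target_area(node_addr, target):
    node = node_addr.strip("/").split("/")
    area = target.strip("/").split("/")
    return node[:len(area)] == area

# Invented node registrations in a hierarchical symbolic location model.
nodes = {
    "a": "/usa/fl/miami/market-street",
    "b": "/usa/fl/orlando",
    "c": "/usa/ca/la",
}
receivers = sorted(n for n, addr in nodes.items()
                   if in_target_area(addr, "/usa/fl"))
```

Because each path component corresponds to one level of the overlay hierarchy, a message addressed to /usa/fl only needs to be routed down that subtree, which is what makes large target areas cheap to serve.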

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This work proposes several relay selection strategies for distributed space-time cooperative systems, in which the Alamouti scheme is used, and develops an efficient algorithm to select the optimal cooperative nodes which can maximize the instantaneous signal-to-noise ratio at the destination.
Abstract: We propose several relay selection strategies for distributed space-time cooperative systems in which the Alamouti scheme is used. Our strategies select the best two nodes among the source and all relays to assist the packet transmissions from the source to the destination. In the amplify-and-forward mode, we develop an efficient algorithm to select the optimal cooperative nodes that maximize the instantaneous signal-to-noise ratio at the destination. Under the decode-and-forward mode, our relay selection strategies depend on the successful decoding results at the relays. We design our schemes considering both cases with and without CRC. In addition, we integrate the request-to-send/clear-to-send mechanism into our proposed system and analyze the system throughput. The simulation results show that our proposed strategies can significantly improve the system throughput and achieve diversity gain compared to conventional existing schemes.
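Selecting the best relay pair can be sketched by scoring each candidate with an end-to-end effective SNR and keeping the top two. The harmonic-mean-style AF effective SNR used below is a common textbook approximation, not the paper's exact selection criterion, and the link SNRs are invented.

```python
# Toy relay-pair selection sketch: rank candidate relays by an assumed
# amplify-and-forward effective SNR and pick the best two. The scoring
# formula is a common approximation, not the paper's criterion.

def af_effective_snr(snr_sr, snr_rd):
    """Approximate end-to-end SNR of one AF relay hop pair."""
    return (snr_sr * snr_rd) / (snr_sr + snr_rd + 1.0)

def select_pair(links):
    """links: {relay: (snr_source_to_relay, snr_relay_to_dest)}"""
    scored = sorted(links, key=lambda r: af_effective_snr(*links[r]),
                    reverse=True)
    return scored[:2]

# A balanced relay (r2) beats relays with one strong and one weak hop.
links = {"r1": (10.0, 2.0), "r2": (8.0, 8.0), "r3": (1.0, 20.0)}
pair = select_pair(links)
```

The example shows why the weaker of a relay's two hops dominates its score: r3's excellent second hop cannot compensate for its poor source link.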

Proceedings ArticleDOI
01 Oct 2006
TL;DR: Conditional entropy analysis and user mobility data analysis prove that the hybrid Markov predictor achieves performance close to the order-2 Markov predictor at much lower expense, and that it can alleviate the zero-probability problem of the order-k Markov model to some extent.
Abstract: Path prediction is an important issue for QoS in wireless networks. This paper points out problems in some existing path prediction schemes, especially the state-space expansion problem of the order-k Markov predictor. It first proposes a step-k Markov model and validates its feasibility. Second, a hybrid Markov predictor model and its improved variants are put forward based on the step-k Markov model. Because the order-2 Markov model performs best among order-k Markov models, the hybrid Markov model takes the order-2 Markov model as its target. The state-space complexity of the hybrid Markov model is O(N), while that of the order-2 Markov model is O(N²); the memory demand of the hybrid Markov model is O(N²), while that of the order-2 Markov model is O(N³). Finally, conditional entropy analysis and user mobility data analysis prove that the hybrid Markov predictor achieves performance close to the order-2 Markov predictor at much lower expense. It can also alleviate the zero-probability problem of the order-k Markov model to some extent. The hybrid Markov predictor is more practical than order-k Markov predictors in WLANs.
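As background for the state-space figures above, a plain order-2 Markov location predictor (the hybrid model's target) can be sketched as follows. Note how an unseen two-location context yields no prediction, which is the zero-probability problem the abstract mentions; the sketch is illustrative and is not the paper's hybrid model:

```python
from collections import Counter, defaultdict

def train_order2(path):
    """Count transitions (loc[t-2], loc[t-1]) -> loc[t] from a visit history.
    The context space is all location pairs, hence the O(N^2) state space."""
    model = defaultdict(Counter)
    for a, b, c in zip(path, path[1:], path[2:]):
        model[(a, b)][c] += 1
    return model

def predict(model, last_two):
    """Predict the most likely next location given the last two visited.
    Returns None for an unseen context: the zero-probability problem."""
    counts = model.get(tuple(last_two))
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```

A hybrid scheme along the lines described would aim to match this predictor's accuracy while keeping only O(N) states.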

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper presents and evaluates a speculative route invalidation scheme aimed at reducing the convergence delays for large-scale failures, which collects statistics from BGP updates received at a router and identifies Autonomous Systems that are likely to be "unstable".
Abstract: The border gateway protocol (BGP) has been known to suffer from large convergence delays after failures. We have also found that the impact of a failure rises sharply with the size of the failure. In this paper we present and evaluate a speculative route invalidation scheme aimed at reducing the convergence delays for large-scale failures. Our scheme collects statistics from BGP updates received at a router and identifies Autonomous Systems (ASes) that are likely to be "unstable". Routes that contain these ASes are marked as invalid and not propagated further. This cuts down the number of invalid routes during the convergence process and results in a significant improvement in the convergence delay.
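The statistics-collection idea can be sketched with a deliberately simple instability criterion: count how often each AS appears in recently received update paths, mark ASes above a threshold as unstable, and refuse to propagate routes through them. The threshold rule is a hypothetical stand-in; the abstract does not specify the actual statistic:

```python
from collections import Counter

def unstable_ases(update_paths, threshold):
    """Hypothetical instability test: an AS seen in at least `threshold`
    distinct AS paths of recent BGP updates is flagged as unstable."""
    counts = Counter(asn for path in update_paths for asn in set(path))
    return {asn for asn, c in counts.items() if c >= threshold}

def filter_routes(routes, bad_ases):
    """Speculatively invalidate routes whose AS path touches an unstable AS,
    so they are not propagated further during convergence."""
    return [r for r in routes if not (set(r) & bad_ases)]
```

The intended effect, as in the abstract, is fewer invalid routes circulating during convergence and hence a shorter convergence delay.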

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The proposed sampling algorithm is tailored to enhance the capability of a network-based IDS at detecting denial-of-service (DoS) attacks; it adaptively reduces the volume of data analyzed by the IDS while maintaining the intrinsic self-similar characteristic of network traffic.
Abstract: There is an emerging need for the traffic processing capability of network security mechanisms, such as intrusion detection systems (IDS), to match the high throughput of today's high-bandwidth networks. Recent research has shown that the vast majority of security solutions deployed today are inadequate for processing traffic at a sufficiently high rate to keep pace with the network's bandwidth. To alleviate this problem, packet sampling schemes at the front end of network monitoring systems (such as an IDS) have been proposed. However, existing sampling algorithms are poorly suited for this task, especially because they are unable to adapt to trends in network traffic. Satisfying such a criterion requires a sampling algorithm capable of controlling its sampling rate to provide sufficient accuracy at minimal overhead. To meet this demanding goal, adaptive sampling algorithms have been proposed. In this paper, we put forth an adaptive sampling algorithm based on weighted least squares prediction. The proposed sampling algorithm is tailored to enhance the capability of a network-based IDS at detecting denial-of-service (DoS) attacks. Not only does the algorithm adaptively reduce the volume of data that would be analyzed by an IDS, but it also maintains the intrinsic self-similar characteristic of network traffic. The latter characteristic of the algorithm can be used by an IDS to detect DoS attacks, since a change in the self-similarity of network traffic is a known indicator of a DoS attack.
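A minimal sketch of sampling-rate adaptation driven by weighted least-squares prediction follows. It fits a weighted linear trend to recent traffic counts (recent samples weighted more heavily), predicts the newest sample from the earlier ones, and raises the sampling rate when the prediction error is large. The exponential weighting and the rate-update rule are illustrative assumptions, not the paper's algorithm:

```python
def wls_line(xs, ys, ws):
    """Weighted least-squares fit of y = a + b*x, closed form."""
    W = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / W
    my = sum(w * y for w, y in zip(ws, ys)) / W
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    b = num / den
    return my - b * mx, b

def adapt_rate(history, rate, lo=0.01, hi=1.0):
    """Predict the newest traffic count from earlier ones; a large
    prediction error suggests a traffic change (e.g. a DoS onset),
    so the sampling rate is raised, otherwise gently lowered."""
    xs = list(range(len(history) - 1))
    ys = history[:-1]
    ws = [0.8 ** (len(ys) - 1 - i) for i in xs]  # favor recent samples
    a, b = wls_line(xs, ys, ws)
    predicted = a + b * (len(history) - 1)
    err = abs(history[-1] - predicted) / max(abs(history[-1]), 1e-9)
    factor = 1.0 + err if err > 0.1 else 0.9
    return min(hi, max(lo, rate * factor))
```

On a steady linear trend the predictor is accurate, so the rate decays; a sudden burst inflates the error and pushes the rate back up.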

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This work considers the uplink power control problem in a single cell, multi-user, CDMA wireless data system and forms a cooperative game using the Nash bargaining solution concept, in order to determine the socially optimum solution, which is both Pareto efficient and fair.
Abstract: We consider the uplink power control problem in a single cell, multi-user, CDMA wireless data system and formulate it as a cooperative game. We use the Nash bargaining solution concept in order to determine the socially optimum solution, which is both Pareto efficient and fair. In our formulation, the base station (BS) plays the role of the arbitrator, i.e., solves the power control problem, and broadcasts the relevant information to all users in order to enforce convergence to the optimal operating point. The comparison of the cooperative scheme to the non-cooperative scheme shows a significant reduction in the transmission power of the mobile terminals.
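The Nash bargaining solution maximizes the product of the users' utility gains over their disagreement points. As a toy illustration (with hypothetical logarithmic utilities, zero disagreement points, and a shared power budget, not the paper's actual utility model), a grid search recovers the fair, Pareto-efficient split:

```python
import math

def nash_bargain(total_power, steps=1000):
    """Split a power budget between two users to maximize the Nash
    product log(1+p1) * log(1+p2), a stand-in for the arbitration
    the base station performs in the cooperative formulation."""
    best, best_split = -1.0, None
    for i in range(1, steps):
        p1 = total_power * i / steps
        p2 = total_power - p1
        prod = math.log1p(p1) * math.log1p(p2)
        if prod > best:
            best, best_split = prod, (p1, p2)
    return best_split
```

With symmetric users the Nash product is maximized by an equal split, matching the fairness property of the bargaining solution.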

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper proposes adding deadline reuse to flow aggregation networks, and revise the deadline reuse method for stateless core networks, so that guaranteed throughput can be achieved while maintaining a lower end-to-end delay bound.
Abstract: The Internet traditionally provides best effort service to all applications. While elastic applications are satisfied by this service, inelastic applications such as interactive audio and video suffer from the lack of end-to-end delay guarantees. Although guaranteed rate schedulers were developed to provide such guarantees, their scalability has been a concern because they maintain per-flow state. In an effort to reduce per-flow state, two methods have been proposed: stateless core networks and flow aggregation. Stateless core networks require no per-flow state at the routers, while flow aggregation maintains state for a small number of aggregate flows. Although flow aggregation maintains more state, it provides a lower end-to-end delay bound than stateless core networks. The original proposals of these two techniques did not provide guaranteed throughput; that is, flows could be temporarily denied service if they exceeded their reserved rates at earlier times. Recently, guaranteed throughput has been incorporated into the stateless core model through the reuse of deadlines. This is similar to the deadline reuse found in earlier stateful protocols that provide guaranteed throughput. In this paper, we propose adding deadline reuse to flow aggregation networks. In this way, guaranteed throughput can be achieved while maintaining a lower end-to-end delay bound. In addition, we revise the deadline reuse method for stateless core networks.
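Guaranteed-rate schedulers of the kind discussed here assign each packet of a flow a deadline (virtual finish time) via the standard rule F_k = max(a_k, F_{k-1}) + L_k/r, where a_k is the arrival time, L_k the packet length, and r the reserved rate. A minimal sketch of that deadline assignment (illustrative of the basic mechanism only, not of the paper's reuse scheme):

```python
def assign_deadlines(packets, rate):
    """Per-packet deadlines under the guaranteed-rate rule
    F_k = max(a_k, F_{k-1}) + L_k / r.

    packets: list of (arrival_time, length_in_bytes); rate in bytes/sec.
    Packets are then served in deadline order to bound end-to-end delay.
    """
    deadlines, prev = [], 0.0
    for arrival, length in packets:
        prev = max(arrival, prev) + length / rate
        deadlines.append(prev)
    return deadlines
```

Back-to-back arrivals accumulate deadlines at the reserved rate, while a packet arriving after the previous deadline starts a fresh busy period; "deadline reuse" schemes adjust this recursion so a flow that exceeded its rate earlier is not penalized later.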

Proceedings ArticleDOI
01 Oct 2006
TL;DR: NemC, a novel design for emulating the backbone networks of cluster-of-clusters, supports fine-grained network delay resolution while minimizing additional overhead, and can emulate low-delay, high-bandwidth backbone networks more accurately than existing emulators such as NISTNet and NetEm.
Abstract: A large number of clusters are being used in all kinds of organizations such as universities, laboratories, etc. These clusters are, however, usually independent from each other, even within the same organization or building. To provide a single image of such clusters to users and utilize them in an integrated manner, the cluster-of-clusters approach has been suggested. However, since research groups usually do not have actual backbone networks for cluster-of-clusters that can be reconfigured with respect to delay, packet loss, etc. as needed, it is not feasible to carry out practical research over realistic environments. Accordingly, the demand for an efficient way to emulate the backbone networks for cluster-of-clusters is pressing. In this paper, we suggest a novel design for emulating the backbone networks of cluster-of-clusters. The emulator, named NemC, supports fine-grained network delay resolution while minimizing additional overhead. The experimental results show that NemC can emulate low-delay and high-bandwidth backbone networks more accurately than existing emulators such as NISTNet and NetEm. We also present a case study showing the performance of MPI applications over a cluster-of-clusters environment using NemC.