
Showing papers presented at the "International Workshop on Quality of Service" in 2005


Proceedings ArticleDOI
13 Oct 2005
TL;DR: This work proposes an IDS that fits the demands and restrictions of WSNs; simulation results reveal that the proposed IDS is efficient and accurate in detecting different kinds of simulated attacks.
Abstract: Wireless sensor networks (WSNs) have many potential applications. Furthermore, in many scenarios WSNs are of interest to adversaries, and they become susceptible to some types of attacks since they are deployed in open and unprotected environments and are constituted of cheap, small devices. Preventive mechanisms can be applied to protect WSNs against some types of attacks. However, there are some attacks for which no prevention methods are known. For these cases, it is necessary to use some mechanism of intrusion detection. Besides preventing the intruder from causing damage to the network, the intrusion detection system (IDS) can acquire information related to attack techniques, helping in the development of prevention systems. In this work we propose an IDS that fits the demands and restrictions of WSNs. Simulation results reveal that the proposed IDS is efficient and accurate in detecting different kinds of simulated attacks.

460 citations


Book ChapterDOI
21 Jun 2005
TL;DR: This paper explores how a new congestion control algorithm — Rate Control Protocol (RCP) — comes much closer to emulating PS over a broad range of operating conditions, and shows that under a wide range of traffic characteristics and network conditions, RCP’s performance is very close to ideal processor sharing.
Abstract: Most congestion control algorithms try to emulate processor sharing (PS) by giving each competing flow an equal share of a bottleneck link. This approach leads to fairness, and prevents long flows from hogging resources. For example, if a set of flows with the same round trip time share a bottleneck link, TCP’s congestion control mechanism tries to achieve PS; so do most of the proposed alternatives, such as eXplicit Control Protocol (XCP). But although they emulate PS well in a static scenario when all flows are long-lived, they do not come close to PS when new flows arrive randomly and have a finite amount of data to send, as is the case in today’s Internet. Typically, flows take an order of magnitude longer to complete with TCP or XCP than with PS, suggesting large room for improvement. And so in this paper, we explore how a new congestion control algorithm — Rate Control Protocol (RCP) — comes much closer to emulating PS over a broad range of operating conditions. In RCP, a router assigns a single rate to all flows that pass through it. The router does not keep flow-state, and does no per-packet calculations. Yet we are able to show that under a wide range of traffic characteristics and network conditions, RCP’s performance is very close to ideal processor sharing.
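As a concrete illustration of the mechanism described, the published RCP update adjusts the single per-link rate once per control interval from only aggregate measurements; the sketch below follows the form of that update (the gains and the clamping bounds are illustrative choices, not the paper's exact parameters):

```python
# Sketch of the RCP-style per-router rate update (illustrative, not the
# authors' code). R: rate offered to every flow through the link;
# C: link capacity; y: measured aggregate input rate over the last interval;
# q: persistent queue size in bits; d: moving average of flows' RTTs;
# d0: length of the control interval; alpha, beta: stability gains.
def rcp_update(R, C, y, q, d, d0, alpha=0.4, beta=1.0):
    spare = alpha * (C - y) - beta * q / d   # spare bandwidth minus queue drain
    R_new = R * (1.0 + (d0 / d) * spare / C)
    return min(max(R_new, 0.001 * C), C)     # clamp to keep the rate sane
```

Note that the router keeps no per-flow state: every flow passing through is offered the same R, which is what lets RCP approximate processor sharing.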

236 citations


Proceedings ArticleDOI
13 Oct 2005
TL;DR: This paper examines threats to the security of the WiMax/802.16 broadband wireless access technology and evaluates the likelihood, impact, and risk according to the threat assessment methodology proposed by ETSI.
Abstract: This paper examines threats to the security of the WiMax/802.16 broadband wireless access technology. Threats associated with the physical layer and MAC layer are reviewed in detail. The likelihood, impact, and risk are evaluated according to a threat assessment methodology proposed by ETSI. Threats are listed and ranked according to the level of risk they represent. This work can be used to prioritize future research directions in WiMax/802.16 security.
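For readers unfamiliar with the ETSI-style methodology invoked here, a threat assessment of this kind reduces to scoring each threat on likelihood and impact and ranking by the resulting risk; a minimal sketch of that bookkeeping (the threat entries and the product-based scalarization are placeholders, not the paper's actual ratings):

```python
# Hedged sketch of an ETSI-style threat ranking. Scores are ordinal (1-3);
# the entries below are illustrative examples, not the paper's findings.
THREATS = [
    ("PHY-layer jamming (example)", 2, 2),             # (name, likelihood, impact)
    ("forged MAC management messages (example)", 3, 3),
]

def risk(likelihood, impact):
    # One common scalarization; ETSI documents use a lookup table instead.
    return likelihood * impact

for name, l, i in sorted(THREATS, key=lambda t: -risk(t[1], t[2])):
    print(f"risk={risk(l, i)}  {name}")
```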

151 citations


Proceedings ArticleDOI
13 Oct 2005
TL;DR: This work develops a realistic trust model and an architecture that supports it, and designs a solution with several types of user incentives built into the structure, to obtain a balance between privacy and audit requirements in vehicular networks.
Abstract: We investigate how to obtain a balance between privacy and audit requirements in vehicular networks. Challenging the current trend of relying on asymmetric primitives within VANETs, our investigation is a feasibility study of the use of symmetric primitives, resulting in some efficiency improvements of potential value. More specifically, we develop a realistic trust model, and an architecture that supports our solution. In order to ascertain that most users will not find it meaningful to disconnect or disable transponders, we design our solution with several types of user incentives as part of the structure. Examples of resulting features include anonymous toll collection; improved emergency response; and personalized and route-dependent traffic information.

125 citations


Book ChapterDOI
21 Jun 2005
TL;DR: It is shown that the same qualities of service can be achieved in a realistic heterogeneous backbone network in the sense that the capacity required by VLB is very close to the lower bound of total capacity needed by any architecture in order to support all traffic matrices.
Abstract: Network operators would like their network to support current and future traffic matrices, even when links and routers fail. Not surprisingly, no backbone network can do this today: It is hard to accurately measure the current matrix, and harder still to predict future ones. Even if the matrices are known, how do we know a network will support them, particularly under failures? As a result, today’s networks are designed in a somewhat ad-hoc fashion, using rules-of-thumb and crude estimates of current and future traffic. Previously we proposed the use of Valiant Load-balancing (VLB) for backbone design. It can guarantee 100% throughput to any traffic matrix, even under link and router failures. Our initial work was limited to homogeneous backbones in which routers had the same capacity. In this paper we extend our results in two ways: First, we show that the same qualities of service (guaranteed support of any traffic matrix with or without failure) can be achieved in a realistic heterogeneous backbone network; and second, we show that VLB is optimal, in the sense that the capacity required by VLB is very close to the lower bound of total capacity needed by any architecture in order to support all traffic matrices.
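To make the load-balancing idea concrete: in the homogeneous full-mesh design this paper extends, traffic entering at rate r per node is spread evenly over all N nodes in a first phase and delivered in a second, so every directed link needs only 2r/N of capacity regardless of the traffic matrix. A toy sketch under those assumptions:

```python
# Toy sketch of two-phase Valiant load balancing on an N-node full mesh
# (homogeneous case; the paper generalizes to heterogeneous routers).
import random

def vlb_path(src, dst, N):
    mid = random.randrange(N)   # phase 1: uniformly random intermediate
    return [src, mid, dst]      # phase 2: intermediate forwards to destination

def link_capacity_needed(r, N):
    # Each link carries at most r/N of phase-1 plus r/N of phase-2 traffic.
    return 2.0 * r / N
```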

104 citations


Book ChapterDOI
21 Jun 2005
TL;DR: The proposed scheme separates congestion indications from wireless packet erasures by exploiting ECN and substantially improves TCP performance even for packet loss rates up to 30%, thus extending the dynamic range and performance of TCP over networks with lossy links.
Abstract: TCP performance over wireless links suffers substantially when packet error rates increase beyond about 1%-5%. This paper proposes end-to-end mechanisms to improve TCP performance over lossy networks with potentially much higher packet loss rates. Our proposed scheme separates congestion indications from wireless packet erasures by exploiting ECN. Timeout effects due to packet erasures are combated using a dynamic and adaptive Forward Error Correction (FEC) scheme that includes adaptation of TCP's Maximum Segment Size (MSS). Proactive and reactive FEC overhead enhance TCP SACK to protect original segments and retransmissions, respectively. Dynamically changing the MSS tailors the number of segments in the window for optimal performance. SACK and timeout mechanisms are used as a last resort. ns-2 simulations show that our scheme substantially improves TCP performance even for packet loss rates up to 30%, thus extending the dynamic range and performance of TCP over networks with lossy (e.g., wireless) links.
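To illustrate the kind of proactive-redundancy computation such an adaptive FEC scheme must perform (a sketch of the general calculation, not the authors' exact adaptation law): given an estimated erasure rate p, choose the number of repair packets so that a k-packet block decodes with high probability under a systematic (k, n) erasure code.

```python
# Sketch: choose proactive FEC redundancy for a k-packet block under an
# i.i.d. erasure rate p. A (k, n) systematic erasure code decodes whenever
# at least k of the n packets arrive.
from math import comb

def block_decode_prob(k, n, p):
    q = 1.0 - p
    return sum(comb(n, i) * q**i * p**(n - i) for i in range(k, n + 1))

def repair_packets(k, p, target=0.99, n_max=64):
    for n in range(k, n_max + 1):
        if block_decode_prob(k, n, p) >= target:
            return n - k
    return n_max - k

# e.g. repair_packets(10, 0.30) gives the repairs needed at 30% loss
```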

85 citations


Book ChapterDOI
02 Feb 2005
TL;DR: A novel approach to the congestion control and resource allocation problem of elastic and real-time traffic in telecommunication networks is presented and it is shown that a utility proportional fair resource allocation also ensures utility max-min fairness for all users sharing a single path in the network.
Abstract: In this paper, we present a novel approach to the congestion control and resource allocation problem of elastic and real-time traffic in telecommunication networks. With the concept of utility functions, where each source uses a utility function to evaluate the benefit from achieving a transmission rate, we interpret the resource allocation problem as a global optimization problem. The solution to this problem is characterized by a new fairness criterion, utility proportional fairness. We argue that it is an application-level performance measure, i.e., the utility, that should be shared fairly among users. As a result of our analysis, we obtain congestion control laws at links and sources that are globally stable and provide a utility proportional fair resource allocation in equilibrium. We show that a utility proportional fair resource allocation also ensures utility max-min fairness for all users sharing a single path in the network. As a special case of our framework, we incorporate utility max-min fairness for the entire network. To implement our approach, neither per-flow state at the routers nor explicit feedback besides ECN (Explicit Congestion Notification) from the routers to the end-systems is required.
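The structure of such source/link control laws can be conveyed by the classic utility-based primal-dual loop (a textbook Kelly-style sketch shown only for orientation; the paper's laws are modified so that fairness is imposed on utilities rather than on rates):

```python
# Textbook utility-driven congestion control sketch (generic, not the
# paper's exact controllers). Sources invert marginal utility at the path
# price; links integrate excess demand into a price, signaled e.g. via ECN.

def source_rate(marginal_utility_inv, path_price):
    # Classic law: pick x with U'(x) = price, i.e. x = (U')^{-1}(price).
    return marginal_utility_inv(path_price)

def link_price_step(price, arrival_rate, capacity, gamma=0.01):
    # Integrator: price rises while demand exceeds capacity, else decays.
    return max(0.0, price + gamma * (arrival_rate - capacity))

# Example: U(x) = log x gives U'(x) = 1/x, so (U')^{-1}(p) = 1/p.
x = source_rate(lambda p: 1.0 / max(p, 1e-9), path_price=0.05)
```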

64 citations


Book ChapterDOI
21 Jun 2005
TL;DR: This paper proposes a server model, called stochastic service curve, to facilitate stochastic service guarantee analysis, and shows that with this concept several long-standing challenges can be well addressed; it also introduces the strict stochastic server, which characterizes a server's service by two stochastic processes: an ideal service process and an impairment process.
Abstract: Many communication networks such as wireless networks only provide stochastic service guarantees. For analyzing stochastic service guarantees, research efforts have been made in the past few years to develop stochastic network calculus, a probabilistic version of (min, +) deterministic network calculus. However, many challenges have made the development difficult. Some of them are closely related to server modeling, which include output characterization, concatenation property, stochastic backlog guarantee, stochastic delay guarantee, and per-flow service under aggregation. In this paper, we propose a server model, called stochastic service curve, to facilitate stochastic service guarantee analysis. We show that with the concept of stochastic service curve, these challenges can be well addressed. In addition, we introduce the strict stochastic server, which characterizes the service of the server by two stochastic processes (an ideal service process and an impairment process), to help find the stochastic service curve of a stochastic server.
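One representative way such a server model is written in the stochastic network calculus literature (hedged: the paper's exact definition may differ in details such as where the infimum and the probability sit):

```latex
% A server offers stochastic service curve \beta with bounding function
% \varepsilon if, with A(t) the cumulative arrivals and A^{*}(t) the
% cumulative departures,
P\Big\{ \inf_{0 \le s \le t}\big[A(s) + \beta(t-s)\big] - A^{*}(t) > x \Big\}
   \;\le\; \varepsilon(x), \qquad \forall\, t \ge 0,\; x \ge 0.
% The strict-server idea from the abstract: service offered over (s,t] is an
% ideal service process minus an impairment process,
S(s,t) \;=\; \hat{S}(s,t) - I(s,t).
```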

63 citations


Proceedings ArticleDOI
13 Oct 2005
TL;DR: This paper proposes approaches for route authentication and trust-based route selection to defeat attacks on the network and discusses the proposed approaches in detail, outlining possible attacks and defenses against them.
Abstract: In this paper, we consider the security of geographical forwarding (GF) -- a class of algorithms widely used in ad hoc and sensor networks. In GF, neighbors exchange their location information, and a node forwards packets to the destination by picking a neighbor that moves the packet closer to the destination. There are a number of attacks that are possible on geographic forwarding. One of the attacks is predicated on misbehaving nodes falsifying their location information. The first contribution of the paper is to propose a location verification algorithm that addresses this problem. The second contribution of the paper is to propose approaches for route authentication and trust-based route selection to defeat attacks on the network. We discuss the proposed approaches in detail, outlining possible attacks and defenses against them.
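For context, the greedy forwarding step that these attacks exploit fits in a few lines (a generic rendering; the paper's location verification and trust-based route selection sit on top of this primitive):

```python
# Generic greedy geographic forwarding: pass the packet to the neighbor whose
# claimed position is closest to the destination. A node that falsifies its
# (x, y) can attract traffic -- the motivation for location verification.
from math import dist  # Python 3.8+

def next_hop(my_pos, dst_pos, neighbors):
    """neighbors: dict of node_id -> claimed (x, y) position."""
    best = min(neighbors, key=lambda n: dist(neighbors[n], dst_pos),
               default=None)
    if best is not None and dist(neighbors[best], dst_pos) < dist(my_pos, dst_pos):
        return best
    return None  # local maximum: no neighbor makes progress
```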

60 citations


Proceedings ArticleDOI
13 Oct 2005
TL;DR: This paper proposes a generic authentication process and a new taxonomy that clarifies similarities and differences among authentication protocols reported in the literature and motivates the need for an authentication management architecture.
Abstract: Ad hoc networks, such as sensor and mobile ad hoc networks, must overcome a myriad of security challenges to realize their potential in both civil and military applications. Typically, ad hoc networks are deployed in un-trusted environments. Consequently, authentication is a precursor to any secure interactions in these networks. Recently, numerous authentication protocols have been proposed for ad hoc networks. To date, there is no common framework to evaluate these protocols. Towards developing such a framework, this paper proposes a generic authentication process and a new taxonomy that clarifies similarities and differences among authentication protocols reported in the literature. The taxonomy is based upon the role of nodes in the authentication function, establishment of credentials, and type of credentials. We also motivate the need for an authentication management architecture and discuss some open research issues.

56 citations


Proceedings ArticleDOI
13 Oct 2005
TL;DR: This paper illustrates the important role that the features of the last hop play in the context of multicast key management, and proposes several schemes to distribute keys while focusing on the topology of the last hop.
Abstract: In this paper we study multicast key management. We illustrate the important role that the features of the last hop play in the context of multicast key management. We then propose several schemes to distribute the keys while focusing on the topology of the last hop. We also show the importance of these schemes when considering last-hop wireless networks such as 3G networks. To this end, we have also considered the current proposal for multicasting in 3G networks and shown the advantages that would ensue from the proposed last-hop-sensitive key management schemes in such bandwidth-constrained networks.

Book ChapterDOI
Shansi Ren, Qun Li, Haining Wang, Xin Chen, Xiaodong Zhang
21 Jun 2005
TL;DR: An analytical model facilitates performance evaluation of sensing schedules, network deployment, and sensing scheduling protocol design; applying it, the authors design a set of sensing scheduling protocols that achieve targeted object detection quality while minimizing power consumption.
Abstract: Object detection quality and network lifetime are two conflicting aspects of a sensor network, but both are critical to many sensor applications such as military surveillance. Probabilistic coverage is an appropriate approach to balancing the conflicting design requirements of monitoring applications. Under probabilistic coverage, we present an analytical model to analyze object detection quality with respect to different network conditions and sensor scheduling schemes. Our analytical model facilitates performance evaluation of a sensing schedule, network deployment, and sensing scheduling protocol design. Applying the model to real sensor networks, we design a set of sensing scheduling protocols to achieve targeted object detection quality while minimizing power consumption. The correctness of our model and the effectiveness of the proposed protocols are validated through extensive simulation experiments.
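A worked instance of the kind of quantity such a model relates (illustrative only, not the paper's formulation): if an object is inside the range of k sensors, each awake in a given slot independently with duty cycle q, and it lingers for m slots, the detection probability and the minimum duty cycle for a target quality follow directly.

```python
# Illustrative duty-cycle vs. detection-quality trade-off (assumes
# independent wake-ups across sensors and slots; not the paper's model).
def detection_prob(k, q, m=1):
    return 1.0 - (1.0 - q) ** (k * m)

def min_duty_cycle(k, m, target):
    # Smallest per-slot wake probability meeting the detection target.
    return 1.0 - (1.0 - target) ** (1.0 / (k * m))
```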

Book ChapterDOI
02 Feb 2005
TL;DR: This model, consisting of coupled Kermack-McKendrick equations, captures both the measured scanning activity of the worm and the network limitation of its spread, i.e., the effective scan-rate per worm/infective.
Abstract: We present a simple, deterministic mathematical model for the spread of randomly scanning and bandwidth-saturating Internet worms. Such worms include Slammer and Witty, both of which spread extremely rapidly. Our model, consisting of coupled Kermack-McKendrick equations, captures both the measured scanning activity of the worm and the network limitation of its spread, i.e., the effective scan-rate per worm/infective. We fit our model to available data for the Slammer worm and demonstrate its ability to accurately represent Slammer's total scan-rate to the core.
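A minimal numerical rendering of such a model (a sketch with illustrative parameters, not the paper's fitted Slammer values): a susceptible/infected system in the Kermack-McKendrick spirit, with the per-infective scan rate capped once aggregate scanning saturates the available bandwidth.

```python
# Euler integration of a bandwidth-limited random-scanning worm.
# N: vulnerable hosts; V: size of the scanned address space;
# r0: unconstrained scans/s per infective; B: aggregate scan budget (scans/s)
# once links saturate. All parameter values below are illustrative.
def simulate(N=75_000, V=2**32, r0=4000.0, B=1e8, I0=1.0, dt=0.01, T=600.0):
    I, t, traj = I0, 0.0, []
    while t < T:
        per_worm = min(r0, B / I)              # bandwidth saturation cap
        dI = per_worm * I * (N - I) / V * dt   # random scans hitting susceptibles
        I, t = I + dI, t + dt
        traj.append((t, I))
    return traj
```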

Book ChapterDOI
Yang Su, Thomas R. Gross
21 Jun 2005
TL;DR: The Wireless eXplicit Congestion control Protocol (WXCP), a new explicit flow control protocol for wireless multi-hop networks based on XCP, is described and simulations show that WXCP outperforms current TCP implementations in terms of efficiency and fairness.
Abstract: TCP experiences serious performance degradation in wireless multi-hop networks with its probe-based, loss-driven congestion control scheme. We describe the Wireless eXplicit Congestion control Protocol (WXCP), a new explicit flow control protocol for wireless multi-hop networks based on XCP. We highlight the approaches taken by WXCP to address the difficulties faced by the current TCP implementation in wireless multi-hop networks. Simulations with ns-2 show that WXCP outperforms current TCP implementations in terms of efficiency and fairness.

Book ChapterDOI
02 Feb 2005
TL;DR: The main contribution of the paper is the analysis of the performance bottlenecks of PC-based open-source software routers and the evaluation of the solutions currently available to overcome them.
Abstract: We consider IP routers based on off-the-shelf personal computer (PC) hardware running the Linux open-source operating system. The choice of building IP routers with off-the-shelf hardware stems from the wide availability of documentation, the low cost associated with large-scale production, and the continuous evolution driven by the market. On the other hand, open-source software provides the opportunity to easily modify the router operation so as to suit every need. The main contribution of the paper is the analysis of the performance bottlenecks of PC-based open-source software routers and the evaluation of the solutions currently available to overcome them.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: The main contribution of this work is to propose a new concept: the AutoNomouS Wireless sEnsor netwoRk (ANSWER) whose mission is to provide in-situ users with secure information that enhances their context awareness.
Abstract: The main contribution of this work is to propose a new concept: the AutoNomouS Wireless sEnsor netwoRk (ANSWER) whose mission is to provide in-situ users with secure information that enhances their context awareness. ANSWER finds immediate applications to both overt and covert operations ranging from tactical battlefield surveillance to crisis management and homeland security. ANSWER is capable of performing sophisticated analyses for detecting trends and identifying unexpected, coherent, and emergent behavior.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: Because congestion-induced traffic problems in mobile ad hoc networks are hard to predict in real time, the effectiveness of "traditional" protocols based on analytical models is questionable; this paper proposes a solution based on the swarm intelligence paradigm that is better adapted to this kind of problem.
Abstract: In the last few years, the advance of multimedia applications has prompted researchers to undertake the task of routing multimedia data through MANETs. This task is rather difficult due to the highly dynamic topology of mobile ad hoc networks and their limited bandwidth. Various routing algorithms have been proposed in order to route different kinds of sources (such as voice, video, or data) with diverse traffic characteristics and Quality of Service (QoS) requirements. These algorithms must take into account significant traffic problems such as packet losses, transmission delays, delay variations, etc., caused mainly by congestion in the networks. The prediction of these problems in real time is quite difficult, making the effectiveness of "traditional" protocols based on analytical models questionable. We propose in this paper a solution based on the swarm intelligence paradigm that we find better adapted to this kind of problem.
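The swarm machinery behind such proposals usually reduces to a probabilistic next-hop choice weighted by pheromone plus a reinforcement rule; a generic ant-colony sketch (standard ACO form, not necessarily the authors' exact rules):

```python
# Generic ant-colony routing primitives (standard ACO; the paper's rules may
# differ). tau: pheromone per (node, neighbor) edge; eta: heuristic
# desirability, e.g. the inverse of a measured link delay.
import random

def choose_next_hop(tau, eta, node, neighbors, alpha=1.0, beta=2.0):
    weights = [(tau[(node, n)] ** alpha) * (eta[(node, n)] ** beta)
               for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def reinforce(tau, path, quality, rho=0.1):
    for edge in tau:                    # evaporate everywhere...
        tau[edge] *= (1.0 - rho)
    for u, v in zip(path, path[1:]):    # ...then deposit along the used path
        tau[(u, v)] += quality          # e.g. quality ~ 1 / end-to-end delay
```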

Book ChapterDOI
21 Jun 2005
TL;DR: This work introduces an overlay model (LCC-overlay) that incorporates correlated link capacities by formulating shared bottlenecks as linear capacity constraints (LCC), and shows that LCC-overlay is perfectly accurate and hence enjoys much higher quality than the inaccurate independent overlay.
Abstract: Previous work has assumed an independent model for overlay networks: a graph with independent link capacities. We introduce a model of overlays (LCC-overlay) which incorporates correlated link capacities by formulating shared bottlenecks as linear capacity constraints. We define metrics to measure overlay quality. We show that LCC-overlay is perfectly accurate and hence enjoys much better quality than the inaccurate independent overlay. We discover that even the restricted node-based LCC yields significantly better quality. We study two problems in the context of LCC-graphs: widest-path and maximum-flow. We also outline a distributed algorithm to efficiently construct an LCC-overlay.
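The effect of a linear capacity constraint is easy to see in a toy max-flow computation (hedged sketch; the values are illustrative and scipy is assumed available): two overlay links that each look like 10 units in isolation, but share a 12-unit physical bottleneck, get one joint constraint instead of two independent ones.

```python
# Toy LCC max-flow: maximize x1 + x2 for two overlay paths with individual
# capacities of 10 that share a 12-unit physical bottleneck (the LCC row).
from scipy.optimize import linprog

res = linprog(c=[-1, -1],                     # linprog minimizes, so negate
              A_ub=[[1, 0], [0, 1], [1, 1]],  # per-link caps + the shared LCC
              b_ub=[10, 10, 12],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # the independent model predicts 20; the LCC gives 12
```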

Proceedings ArticleDOI
13 Oct 2005
TL;DR: A new distributed power control protocol, Load-Aware Power Control (LAPC), is formulated that heuristically selects power levels to achieve low end-to-end latency.
Abstract: We investigate the impact of power control on latency in wireless ad-hoc networks. If transmission power is increased, interference increases, thus reducing network capacity. A node sending/relaying delay-sensitive real-time application traffic can, however, use a higher power level to reduce latency, if it considers information about load and channel contention at its neighboring nodes. Based on this observation, we formulate a new distributed power control protocol, Load-Aware Power Control (LAPC), that heuristically considers low end-to-end latency when selecting power levels. We study the performance of LAPC via simulations, varying the network density, node dispersion patterns, and traffic load. Our simulation results demonstrate that LAPC achieves an average end-to-end latency improvement of 54% over the case when nodes are transmitting at the highest power possible, and an average end-to-end latency improvement of 33% over the case when nodes are transmitting using the lowest power possible, for uniformly dispersed nodes in a lightly loaded network.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: This work presents a solution that employs the controlled access features of the 802.11e to provide per-session guaranteed quality-of-service and shows that the proposed solution outperforms other methods that are contention and priority based.
Abstract: Wireless Local Area Networks (WLANs) are being deployed at a rapid pace and in different environments. As a result, the demand for supporting a diverse range of applications over wireless access networks is becoming increasingly important. In particular, multimedia applications, such as video and voice, have specific delay and bandwidth requirements that cannot be fulfilled by the current IEEE 802.11-based WLANs. To overcome this issue, new enhancements are being introduced to the Medium Access Control (MAC) layer of the 802.11 standard under the framework of the IEEE 802.11e standard, which is still a work in progress. The 802.11e standard offers new features for supporting Quality of Service (QoS) in the MAC layer; however, it does not mandate a final solution for QoS issues and intentionally leaves it to implementers to devise their own methods using the available features. We present a solution that employs the controlled access features of 802.11e to provide per-session guaranteed quality-of-service. Our design comprises a scheduler that assigns guaranteed service times to individual sessions using a fair scheduling algorithm. We show that the proposed solution outperforms other methods that are contention and priority based.
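For context on the controlled-access primitives involved: in HCCA-style scheduling, each admitted session declares traffic parameters from which the scheduler derives a per-service-interval transmission opportunity (TXOP). The sketch below follows the general shape of the 802.11e draft's reference computation (hedged; the paper proposes its own fair scheduler rather than this reference design):

```python
# Hedged sketch of an HCCA-style per-session TXOP computation, modeled on
# the 802.11e draft's reference scheduler. rho: mean data rate (bit/s);
# L: nominal MSDU size (bits); R: PHY rate (bit/s); SI: service interval (s);
# M: maximum MSDU size (bits); O: per-frame overhead time (s).
from math import ceil

def txop_duration(rho, L, R, SI, M, O):
    n = ceil(SI * rho / L)                   # MSDUs arriving per interval
    return max(n * (L / R + O), M / R + O)   # drain them, or one max-size MSDU
```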

Proceedings ArticleDOI
13 Oct 2005
TL;DR: Results show that the proposed utility-based bandwidth adaptation scheme for multi-class traffic QoS provisioning in wireless networks is effective in both increasing cell utility and reducing the call blocking and handoff dropping probabilities of wireless networks.
Abstract: Adaptive bandwidth allocation is becoming very attractive in wireless communications since it can dynamically adjust the allocated bandwidth of ongoing calls to cope with network resource fluctuations. In this paper, we propose a utility-based bandwidth adaptation scheme for multi-class traffic QoS provisioning in wireless networks. With the proposed scheme, each call is assigned a utility function, and depending on the network load the bandwidth of ongoing calls is upgraded or degraded so that the achieved utility of each individual cell is maximized. We also take into account the negative effects of bandwidth adaptation by integrating an adaptation penalty into the utility function. Simulation experiments are carried out to evaluate the performance of the proposed scheme. Results show that our adaptive scheme is effective in both increasing cell utility and reducing the call blocking and handoff dropping probabilities of wireless networks.
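One common way to realize such upgrade/degrade decisions is a greedy marginal-utility allocation (a sketch of the general idea, not the authors' exact algorithm): each bandwidth quantum goes to the call whose utility gains most from it.

```python
# Greedy marginal-utility bandwidth allocation sketch (generic). calls maps
# a call id to its utility function U(bandwidth); degrading under overload
# is the same loop run in reverse, reclaiming the least-harmful quantum.
def allocate(calls, capacity, quantum):
    alloc = {c: 0.0 for c in calls}
    budget = capacity
    while budget >= quantum:
        gain = {c: calls[c](alloc[c] + quantum) - calls[c](alloc[c])
                for c in calls}
        best = max(gain, key=gain.get)
        if gain[best] <= 0:
            break                      # no call benefits from more bandwidth
        alloc[best] += quantum
        budget -= quantum
    return alloc
```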

Book ChapterDOI
21 Jun 2005
TL;DR: This paper proposes an alternate approach, loopless interface-specific forwarding (LISF), that averts transient loops by forwarding a packet based on both its incoming interface and destination, and proves its correctness.
Abstract: Under link-state routing protocols such as OSPF and IS-IS, when there is a change in the topology, propagation of link-state announcements, path recomputation, and updating of forwarding tables (FIBs) will all incur some delay before traffic forwarding can resume on alternate paths. During this convergence period, routers may have inconsistent views of the network, resulting in transient forwarding loops. Previous remedies proposed to address this issue enforce a certain order among the nodes in which they update their FIBs. While such approaches succeed in avoiding transient loops, they incur additional message overhead and increased convergence delay. We propose an alternate approach, loopless interface-specific forwarding (LISF), that averts transient loops by forwarding a packet based on both its incoming interface and destination. LISF requires no modifications to the existing link-state routing mechanisms. It is easily deployable with current routers since they already maintain a FIB at each interface for lookup efficiency. This paper presents the LISF approach, proves its correctness, discusses three alternative implementations of it and evaluates their performance.

Book ChapterDOI
02 Feb 2005
TL;DR: This work proposes a new CFA algorithm, called GWFD, capable of assigning flows and capacities under e2e QoS constraints; the approach maps end-user performance constraints into transport-layer performance constraints first, and then into network-layer performance constraints.
Abstract: The topological design of distributed packet-switched networks consists of finding a topology that minimizes communication costs while taking into account a number of constraints such as end-to-end quality of service (e2e QoS) and reliability. Our approach is based on exploring the solution space using metaheuristic algorithms (genetic algorithms, GA, and tabu search, TS), where candidate solutions are evaluated by solving capacity and flow assignment (CFA) problems. We propose a new CFA algorithm, called GWFD, that is capable of assigning flows and capacities under e2e QoS constraints. Our approach maps the end-user performance constraints into transport-layer performance constraints first, and then into network-layer performance constraints. A realistic representation of traffic patterns at the network layer is considered as well to design the IP network. Examples of application of the proposed design methodology show the effectiveness of our approach.

Book ChapterDOI
21 Jun 2005
TL;DR: This paper reports the first real implementation of network coding in end hosts, together with decentralized algorithms to construct routing strategies and perform random code assignment, and its experiences suggest that approaching maximum throughput with network coding is not only theoretically sound but also practically promising.
Abstract: Network coding has been recently proposed in information theory as a new dimension of the information multicast problem that helps achieve optimal transmission rate or cost. End hosts in overlay networks are natural candidates to perform network coding, due to their available computational capabilities. In this paper, we seek to bring theoretical advances in network coding to the practice of high-throughput multicast in overlay networks. We have completed the first real implementation of network coding in end hosts, as well as decentralized algorithms to construct the routing strategies and to perform random code assignment. Our experiences suggest that approaching maximum throughput with network coding is not only theoretically sound, but also practically promising. We also present a number of unique challenges in designing and realizing coded data dissemination, and corresponding solution techniques to address them.
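The end-host operation at the core of such a system is forming random linear combinations of packet blocks over a finite field; a minimal sketch over GF(2), where combining is just XOR (practical systems, plausibly including this one, use a larger field such as GF(2^8) so that random combinations are independent with high probability):

```python
# Minimal random network coding sketch over GF(2): a coded packet is the XOR
# of a random subset of the k source packets, tagged with its coefficient
# vector; a receiver decodes after collecting k linearly independent vectors.
import random

def encode(packets):  # packets: list of equal-length bytes objects
    k = len(packets)
    coeffs = [random.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[random.randrange(k)] = 1   # avoid the useless all-zero vector
    out = bytearray(len(packets[0]))
    for c, pkt in zip(coeffs, packets):
        if c:
            for i, byte in enumerate(pkt):
                out[i] ^= byte
    return coeffs, bytes(out)
```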

Book ChapterDOI
21 Jun 2005
TL;DR: A novel framework, eQoS, for monitoring and controlling client-perceived response time in Web servers is proposed, along with an adaptive fuzzy controller, STFC, for allocating server resources.
Abstract: It is important to guarantee end-to-end quality of service (QoS) under heavy-load conditions. Existing work focuses on server-side request processing time or queueing delays in the network core. In this paper, we propose a novel framework, eQoS, for monitoring and controlling client-perceived response time in Web servers. The response time is measured with respect to requests for Web pages that contain multiple embedded objects. Within the framework, we propose an adaptive fuzzy controller, STFC, for allocating server resources. It deals with the effect of process delay in resource allocation through its two-level self-tuning capabilities. Experimental results on PlanetLab and simulated networks demonstrate the effectiveness of the framework: it controls client-perceived pageview response time to within 20% of a pre-defined target. In comparison with a static fuzzy controller, experimental results show that, although the STFC performs slightly worse in the environment for which the static fuzzy controller is best tuned, its self-tuning capabilities make it perform better in all other test cases, by 25% in terms of deviation from the target response time. In addition, due to its model independence, the STFC outperforms the linear proportional-integral (PI) and adaptive PI controllers by 50% and 75%, respectively.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: This paper introduces a new approach to reforming clusters, the Smooth and Efficient Re-Clustering (SERC) protocol, based on providing a secondary clusterhead (SCH) for each clusterhead, called here the primary clusterhead (PCH).
Abstract: In most MANET clustering protocols, the clusterhead nodes take on a special role in managing routing information. However, frequent changes of clusterheads affect the performance of all the protocols that rely on them. Due to the dynamic nature of mobile nodes, their association with and disassociation from clusters perturb the stability of the network, and the problem becomes worse if these nodes are clusterheads. Eventually, clustering stability in a MANET would be significantly affected. To enhance network stability, in this paper we introduce a new approach to reforming the cluster, namely the Smooth and Efficient Re-Clustering (SERC) protocol. This approach is based on providing a secondary clusterhead (SCH) for each clusterhead, which we call here the primary clusterhead (PCH). This SCH, which is a regular member node, is identified and assigned by its PCH to be the future leader of the cluster. The SCH will be triggered to become the PCH when the former PCH can no longer be a clusterhead. Since the future clusterhead is known by the cluster members, cluster leadership will be transferred smoothly and the cluster will be reformed immediately with no need to invoke the clustering algorithm. Also, since the member nodes are associated with the cluster through its successive clusterheads, the cluster looks stable to the other clusters. Hence, the smooth clusterhead transfer from one node to another aims at increasing the cluster residence time, which will sustain the stability of the network, decrease the clustering communication overhead, and minimize the time spent by each node to join or to reform a cluster.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: A novel security system is designed around stream ciphers for their speed while maintaining much more solid and proven security; simulations show it is much faster than peer mechanisms such as WEP and CCMP.
Abstract: Motivated by the tradeoff between security and efficiency performance parameters that has been imposed on all modern wireless security protocols, we designed a novel security system that gains in both parameters. Our system is based on stream ciphers for their speed, while maintaining a much more solid and proven security. Such security strength stems from the novel deployment of permutation vectors and the data records in the regeneration of the secret key. Moreover, the involvement of the former results in an adaptive and efficient data integrity mechanism that relies on error propagation in the data stream. Simulation results show that our security protocol is much faster than peer mechanisms such as WEP and CCMP. Hence, we anticipate a great opportunity to deploy our system in environments with scarce bandwidth, which are the most vulnerable; specifically, the wireless domain.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: A fuzzy logic approach for threshold selection, named FuzzyCCG, is proposed in order to enhance congestion control; it promises to be an efficient tool for reducing the delay of multimedia applications in wireless ad hoc networks.
Abstract: This paper explores the use of fuzzy logic for threshold buffer management in wireless ad hoc networks. This exploration is useful, first, because of the dynamic nature of buffer occupancy and congestion at a node and, second, because of the uncertainty of information in wireless ad hoc networks due to network mobility. The notion of a threshold is practical for discarding data packets and adapting the traffic service depending on the occupancy of buffers. The threshold function has a significant influence on the performance of networks in terms of both average packet delay and throughput. We propose a fuzzy logic approach for threshold selection, named FuzzyCCG, in order to enhance the control of congestion. FuzzyCCG was studied under different mobility, channel, and traffic conditions. The results of simulations confirm that the proposed model can achieve low and stable end-to-end delay under different network scalability and mobility conditions. FuzzyCCG promises to be an efficient tool for reducing the delay of multimedia applications in wireless ad hoc networks.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: Several models for wireless sensor networks (WSNs), where the focus is on selecting paths to relay data from sensors to the base station, are presented and discussed.
Abstract: In this paper we present and discuss several models for wireless sensor networks (WSNs) where the focus is on selecting paths to relay data from sensors to the base station. We are concerned with high-level protocols that assume complete knowledge of the local network topology at any node and rely on basic routines to address and forward packets from node to node. More precisely, for every sensor node that needs to send data to the base station, we want to select one or more routes, involving perhaps several intermediate nodes, along which data are sent. The route(s) are chosen to satisfy quality of service (QoS) requirements expressed by the maximum delay incurred by data from the moment they are captured by sensors to the moment they reach the base station.
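In its simplest form, the route-selection problem described reduces to computing least-delay paths to the base station and keeping only those that meet the deadline; a generic sketch of that core check (not the paper's specific models, which also weigh other QoS factors):

```python
# Generic delay-constrained route feasibility: Dijkstra over per-hop delays
# (assumed symmetric here), then keep the sensors whose best path to the
# base station meets the deadline.
import heapq

def best_delays(adj, base):
    """adj: dict node -> list of (neighbor, hop_delay)."""
    delay, pq = {base: 0.0}, [(0.0, base)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > delay.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < delay.get(v, float("inf")):
                delay[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return delay

def feasible_sensors(adj, base, deadline):
    return {n for n, d in best_delays(adj, base).items() if d <= deadline}
```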

Book ChapterDOI
02 Feb 2005
TL;DR: This work proposes using the Schwarz information criterion (SIC) to detect changes in the variance structure of the wavelet decomposition and then segment the trace into pieces with homogeneous Hurst-parameter characteristics; the procedure can be extended to the stationary wavelet transform (SWT), a non-orthogonal transform that provides higher accuracy in estimating the change points.
Abstract: Network traffic exhibits fractal characteristics, such as self-similarity and long-range dependence. Traffic fractality and its associated burstiness have important consequences for the performance of computer networks, such as higher queue delays and losses than predicted by classical models. There are several estimators of the fractal parameters, and those based on the discrete wavelet transform (DWT) are the best in terms of efficiency and accuracy. The DWT estimator does not consider the possibility of changes to the fractal parameters over time. We propose using the Schwarz information criterion (SIC) to detect changes in the variance structure of the wavelet decomposition and then segmenting the trace into pieces with homogeneous characteristics for the Hurst parameter. The procedure can be extended to the stationary wavelet transform (SWT), a non-orthogonal transform that provides higher accuracy in the estimation of the change points. The SIC analysis can be performed progressively. The DWT-SIC and SWT-SIC algorithms were tested against synthetic and well-known real traffic traces, with promising results.
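The SIC machinery referred to compares a no-change model against the best single variance split of a (roughly zero-mean) wavelet-coefficient series; a hedged sketch of the standard Chen-Gupta-style statistic (constants dropped consistently on both sides):

```python
# Hedged sketch of SIC-based variance change-point detection for zero-mean
# wavelet detail coefficients. A change at k is declared when the best
# split's SIC beats the no-change SIC; applied per scale, the detected
# change points segment the trace for piecewise Hurst estimation.
import numpy as np

def sic_no_change(x):
    n = len(x)
    return n * np.log(np.mean(x**2)) + np.log(n)

def sic_split(x, k):
    n = len(x)
    return (k * np.log(np.mean(x[:k]**2))
            + (n - k) * np.log(np.mean(x[k:]**2)) + 2 * np.log(n))

def detect_change(x):
    n = len(x)
    k_best = min(range(2, n - 1), key=lambda k: sic_split(x, k))
    return k_best if sic_split(x, k_best) < sic_no_change(x) else None
```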