
Showing papers in "Wireless Networks in 2010"


Journal ArticleDOI
TL;DR: A distributed algorithm based on distributed coloring of the nodes that increases the delay by a factor of 10–70 over centralized algorithms for 1000 nodes, together with upper bounds for these schedules as a function of the total number of packets generated in the network.
Abstract: Algorithms for scheduling TDMA transmissions in multi-hop networks usually determine the smallest length conflict-free assignment of slots in which each link or node is activated at least once. This is based on the assumption that there are many independent point-to-point flows in the network. In sensor networks, however, data are often transferred from the sensor nodes to a few central data collectors. The scheduling problem is therefore to determine the smallest length conflict-free assignment of slots during which the packets generated at each node reach their destination. The conflicting node transmissions are determined based on an interference graph, which may be different from the connectivity graph due to the broadcast nature of wireless transmissions. We show that this problem is NP-complete. We first propose two centralized heuristic algorithms: one based on direct scheduling of the nodes or node-based scheduling, which is adapted from classical multi-hop scheduling algorithms for general ad hoc networks, and the other based on scheduling the levels in the routing tree before scheduling the nodes or level-based scheduling, which is a novel scheduling algorithm for many-to-one communication in sensor networks. The performance of these algorithms depends on the distribution of the nodes across the levels. We then propose a distributed algorithm based on the distributed coloring of the nodes, which increases the delay by a factor of 10–70 over centralized algorithms for 1000 nodes. We also obtain upper bounds for these schedules as a function of the total number of packets generated in the network.
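To make the scheduling notion above concrete, here is a minimal, hypothetical sketch (not the paper's node-based or level-based heuristic) of a greedy conflict-free slot assignment on an interference graph, where each node needs as many slots as the packets it must forward:

```python
# Minimal sketch: greedy conflict-free TDMA slot assignment on an
# interference graph (illustrative only, not the paper's exact heuristic).
# Nodes that interfere may not share a slot; each node needs one slot per
# packet it must transmit toward the sink.

def greedy_slot_assignment(interference, demand):
    """interference: dict node -> set of conflicting nodes
       demand: dict node -> number of slots the node needs
       returns: dict node -> list of assigned slot indices"""
    schedule = {v: [] for v in interference}
    # Schedule nodes with the highest demand first (heavier subtrees earlier).
    for v in sorted(demand, key=demand.get, reverse=True):
        busy = {s for u in interference[v] for s in schedule[u]}
        slot = 0
        while len(schedule[v]) < demand[v]:
            if slot not in busy:
                schedule[v].append(slot)
            slot += 1
    return schedule

if __name__ == "__main__":
    # Toy 4-node line network routed toward the sink through node 0.
    interference = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    demand = {0: 3, 1: 2, 2: 1, 3: 1}   # packets each node must transmit
    print(greedy_slot_assignment(interference, demand))
```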

381 citations


Journal ArticleDOI
TL;DR: This paper first breaks up existing routing strategies into a small number of common and tunable routing modules, and shows how and when a given routing module should be used, depending on the set of network characteristics exhibited by the wireless application.
Abstract: Communication networks, whether they are wired or wireless, have traditionally been assumed to be connected at least most of the time. However, emerging applications such as emergency response, special operations, smart environments, VANETs, etc. coupled with node heterogeneity and volatile links (e.g. due to wireless propagation phenomena and node mobility) will likely change the typical conditions under which networks operate. In fact, in such scenarios, networks may be mostly disconnected, i.e., most of the time, end-to-end paths connecting every node pair do not exist. To cope with frequent, long-lived disconnections, opportunistic routing techniques have been proposed in which, at every hop, a node decides whether it should forward or store-and-carry a message. Despite a growing number of such proposals, there still exists little consensus on the most suitable routing algorithm(s) in this context. One of the reasons is the large diversity of emerging wireless applications and networks exhibiting such "episodic" connectivity. These networks often have very different characteristics and requirements, making it very difficult, if not impossible, to design a routing solution that fits all. In this paper, we first break up existing routing strategies into a small number of common and tunable routing modules (e.g. message replication, coding, etc.), and then show how and when a given routing module should be used, depending on the set of network characteristics exhibited by the wireless application. We further attempt to create a taxonomy for intermittently connected networks. We try to identify generic network characteristics that are relevant to the routing process (e.g., network density, node heterogeneity, mobility patterns) and dissect different "challenged" wireless networks or applications based on these characteristics. Our goal is to identify a set of useful design guidelines that will enable one to choose an appropriate routing protocol for the application or network in hand. Finally, to demonstrate the utility of our approach, we take up some case studies of challenged wireless networks, and validate some of our routing design principles using simulations.

232 citations


Journal ArticleDOI
TL;DR: The theoretical base is established and a localization algorithm is developed for building a zero-configuration and robust indoor localization and tracking system to support location-based network services and management; empirical results show the proposed system is robust and gives accurate localization results.
Abstract: With the technical advances in ubiquitous computing and wireless networking, there has been an increasing need to capture the context information (such as the location) and to figure it into applications. In this paper, we establish the theoretical base and develop a localization algorithm for building a zero-configuration and robust indoor localization and tracking system to support location-based network services and management. The localization algorithm takes as input the on-line measurements of received signal strengths (RSSs) between 802.11 APs and between a client and its neighboring APs, and estimates the location of the client. The on-line RSS measurements among 802.11 APs are used to capture (in real-time) the effects of RF multi-path fading, temperature and humidity variations, opening and closing of doors, furniture relocation, and human mobility on the RSS measurements, and to create, based on the truncated singular value decomposition (SVD) technique, a mapping between the RSS measure and the actual geographical distance. The proposed system requires zero-configuration because the on-line calibration of the effect of wireless physical characteristics on RSS measurement is automated and no on-site survey or initial training is required to bootstrap the system. It is also quite responsive to environmental dynamics, as the impacts of physical characteristics changes have been explicitly figured in the mapping between the RSS measures and the actual geographical distances. We have implemented the proposed system with inexpensive off-the-shelf Wi-Fi hardware and sensory functions of IEEE 802.11, and carried out a detailed empirical study in our departmental building, Siebel Center for Computer Science. The empirical results show the proposed system is quite robust and gives accurate localization results.
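As a rough illustration of the role a truncated SVD can play in mapping RSS to distance, the sketch below de-noises a hypothetical AP-to-AP RSS matrix with a rank-2 SVD and fits a log-distance model; it is an assumption-laden stand-in, not the paper's formulation:

```python
# Illustrative sketch (not the paper's exact method): use a truncated SVD of
# the AP-to-AP RSS matrix to de-noise on-line measurements, then fit a
# log-distance model mapping RSS to geographical distance.
import numpy as np

def truncated_svd_denoise(rss_matrix, rank=2):
    """Keep only the top-'rank' singular components of the AP-to-AP RSS matrix."""
    U, s, Vt = np.linalg.svd(rss_matrix, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def fit_rss_to_distance(rss_pairs, dist_pairs):
    """Least-squares fit of rss = a + b*log10(d); return the inverse map d(rss)."""
    A = np.column_stack([np.ones_like(dist_pairs), np.log10(dist_pairs)])
    (a, b), *_ = np.linalg.lstsq(A, rss_pairs, rcond=None)
    return lambda rss: 10.0 ** ((rss - a) / b)

if __name__ == "__main__":
    # Hypothetical 4 APs: known pairwise distances and noisy measured RSS (dBm).
    ap_dist = np.array([[0.1, 5, 10, 15],
                        [5, 0.1, 5, 10],
                        [10, 5, 0.1, 5],
                        [15, 10, 5, 0.1]])
    rng = np.random.default_rng(0)
    ap_rss = -40 - 25 * np.log10(ap_dist) + rng.normal(0, 1, ap_dist.shape)
    denoised = truncated_svd_denoise(ap_rss, rank=2)
    rss_to_dist = fit_rss_to_distance(denoised.ravel(), ap_dist.ravel())
    print("estimated distance for -60 dBm:", rss_to_dist(-60.0))
```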

158 citations


Journal ArticleDOI
TL;DR: Hierarchical geographic multicast routing is presented, a new location-based multicast protocol that seamlessly incorporates the key design concepts of GMR and HRPM and optimizes them for wireless sensor networks by providing both forwarding efficiency (energy efficiency) as well as scalability to large networks.
Abstract: Wireless sensor networks typically comprise dense deployments of large networks of small wireless-capable sensor devices. In such networks, multicast is a fundamental routing service for efficient data dissemination required for activities such as code updates, task assignment and targeted queries. In particular, efficient multicast for sensor networks is critical due to the limited energy availability in such networks. Multicast protocols that exploit location information available from GPS or localization algorithms are more efficient and robust than other stateful protocols as they avoid the difficulty of maintaining distributed state (multicast tree). Since localization is typically already required for sensing applications, this location information can simply be reused for optimizing multicast performance at no extra cost. Recently, two protocols were proposed to optimize two orthogonal aspects of location-based multicast protocols: GMR (Sanchez et al. GMR: Geographic multicast routing for wireless sensor networks. In Proceedings of the IEEE SECON, 2006) improves the forwarding efficiency by exploiting the wireless multicast advantage, but it suffers from scalability issues when dealing with large sensor networks. On the other hand, HRPM (Das et al. Distributed hashing for scalable multicast in wireless ad hoc networks. IEEE TPDS 47(4):445–487, 2007) reduces the encoding overhead by constructing a hierarchy at virtually no maintenance cost via the use of geographic hashing, but it is energy-inefficient due to inefficiencies in forwarding data packets. In this paper, we present HGMR (hierarchical geographic multicast routing), a new location-based multicast protocol that seamlessly incorporates the key design concepts of GMR and HRPM and optimizes them for wireless sensor networks by providing both forwarding efficiency (energy efficiency) and scalability to large networks. Our simulation studies show that: (i) In an ideal environment, HGMR incurs a number of transmissions either very close to or lower than GMR, and, at the same time, an encoding overhead very close to HRPM, as the group size or the network size increases. (ii) In a realistic environment, HGMR, like HRPM, achieves a Packet Delivery Ratio (PDR) that is close to perfect and much higher than GMR. Further, HGMR has the lowest packet delivery latency among the three protocols, while incurring much fewer packet transmissions than HRPM. (iii) HGMR is equally efficient with both uniform and non-uniform group member distributions.
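The geographic-hashing idea that HGMR inherits from HRPM can be sketched as follows; the cell size, group ID and member coordinates are hypothetical, and the sketch only shows how members collapse into per-cell access points:

```python
# Rough sketch of geographic hashing in the HRPM/HGMR spirit (illustrative;
# cell size, group ID and member list are made up): members are grouped by
# the grid cell their coordinates fall in, so the source only needs to encode
# one "access point" location per cell instead of every member.
import hashlib

CELL_SIZE = 100.0  # metres per cell side (assumed decomposition granularity)

def cell_of(x, y):
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def access_point(group_id, cell):
    """Hash (group, cell) to a deterministic point inside the cell, mimicking
    geographic hashing of the cell's access point."""
    h = hashlib.sha1(f"{group_id}:{cell}".encode()).digest()
    fx, fy = h[0] / 255.0, h[1] / 255.0
    return (cell[0] * CELL_SIZE + fx * CELL_SIZE,
            cell[1] * CELL_SIZE + fy * CELL_SIZE)

if __name__ == "__main__":
    members = [(30, 40), (80, 20), (150, 60), (320, 410)]  # hypothetical coords
    by_cell = {}
    for m in members:
        by_cell.setdefault(cell_of(*m), []).append(m)
    for cell, ms in by_cell.items():
        print(cell, "members:", ms, "AP location:", access_point("g1", cell))
```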

124 citations


Journal ArticleDOI
TL;DR: A novel cluster-based trust-aware routing protocol for MANETs is proposed to protect forwarded packets from intermediary malicious nodes; it ensures the trustworthiness of cluster-heads by replacing them as soon as they become malicious and can dynamically update the packet path to avoid malicious routes.
Abstract: Routing protocols are the binding force in mobile ad hoc networks (MANETs) since they facilitate communication beyond the wireless transmission range of the nodes. However, the infrastructure-less, pervasive, and distributed nature of MANETs renders them vulnerable to security threats. In this paper, we propose a novel cluster-based trust-aware routing protocol (CBTRP) for MANETs to protect forwarded packets from intermediary malicious nodes. The proposed protocol organizes the network into one-hop disjoint clusters and then elects the most qualified and trustworthy nodes to play the role of cluster-heads that are responsible for handling all the routing activities. The proposed CBTRP continuously ensures the trustworthiness of cluster-heads by replacing them as soon as they become malicious and can dynamically update the packet path to avoid malicious routes. We have implemented and simulated the proposed protocol and then evaluated its performance compared to the cluster-based routing protocol (CBRP) as well as the 2ACK approach. Comparisons and analysis have shown the effectiveness of our proposed scheme.

105 citations


Journal ArticleDOI
TL;DR: The virtual topology strategy is improved and heuristic algorithms are introduced to satisfy the QoS requirements of MLSN users; simulation results show that the heuristic routing algorithms provide stronger QoS guarantees than the shortest-path-first algorithm in terms of packet loss rate, link congestion and call blocking.
Abstract: Due to the recent developments in wireless technology and electronics, it is feasible to develop pervasive algorithms for satellite environments. Multi-Layered Satellite Networks (MLSNs) that consist of low earth orbit and medium earth orbit satellites are becoming increasingly important since they have higher coverage and better service than single-layered satellite networks. One of the challenges in MLSNs is the development of specialized and efficient routing algorithms. In this paper, we improve the virtual topology strategy and introduce heuristic algorithms to satisfy the QoS requirements of MLSN users. The QoS requirements considered in this paper are end-to-end delay, link utilization, bandwidth, and packet loss rate. Satisfying these QoS requirements is a multi-parameter optimization problem that is known to be NP-complete. As a solution, three typical heuristic algorithms, the Ant Colony Algorithm, Tabu Search Algorithm and Genetic Algorithm, are applied in the routing scheme in order to reduce packet loss, link congestion and call blocking. Simulation results show that the heuristic routing algorithms provide stronger QoS guarantees than the shortest-path-first algorithm in terms of packet loss rate, link congestion and call blocking.

83 citations


Journal ArticleDOI
TL;DR: This paper explores the problem of efficiently designing a multihop wireless backhaul to connect multiple wireless access points to a wired gateway, and provides a generalized link activation framework for scheduling packets over this wirelessBackhaul, such that any existing wireline scheduling policy can be implemented locally at each node of the wirelessbackhaul.
Abstract: As wireless access technologies improve in data rates, the problem focus is shifting towards providing adequate backhaul from the wireless access points to the Internet. Existing wired backhaul technologies such as copper wires running at DSL, T1, or T3 speeds can be expensive to install or lease, and are becoming a performance bottleneck as wireless access speeds increase. Longhaul, non-line-of-sight wireless technologies such as WiMAX (802.16) hold the promise of enabling a high speed wireless backhaul as a cost-effective alternative. However, the biggest challenge in building a wireless backhaul is achieving guaranteed performance (throughput and delay) that is typically provided by a wired backhaul. This paper explores the problem of efficiently designing a multihop wireless backhaul to connect multiple wireless access points to a wired gateway. In particular, we provide a generalized link activation framework for scheduling packets over this wireless backhaul, such that any existing wireline scheduling policy can be implemented locally at each node of the wireless backhaul. We also present techniques for determining good interference-free routes within our scheduling framework, given the link rates and cross-link interference information. When a multihop wireline scheduler with worst case delay bounds (such as WFQ or Coordinated EDF) is implemented over the wireless backhaul, we show that our scheduling and routing framework guarantees approximately twice the delay of the corresponding wireline topology. Finally, we present simulation results to demonstrate the low delays achieved using our framework.

75 citations


Journal ArticleDOI
TL;DR: A novel environmental monitoring system with a focus on overall system architecture for seamless integration of wired and wireless sensors for long-term, remote, and near-real-time monitoring and a new WSN-based soil moisture monitoring system is developed and deployed to support hydrologic monitoring and modeling research.
Abstract: Wireless sensor networks (WSNs) have great potential to revolutionize many science and engineering domains. We present a novel environmental monitoring system with a focus on overall system architecture for seamless integration of wired and wireless sensors for long-term, remote, and near-real-time monitoring. We also present a unified framework for sensor data collection, management, visualization, dissemination, and exchange, conforming to the new Sensor Web Enablement standard. Some initial field testing results are also presented. The monitoring system is being integrated into the Texas Environmental Observatory infrastructure for long-term operation. As part of the integrated system, a new WSN-based soil moisture monitoring system is developed and deployed to support hydrologic monitoring and modeling research. This work represents a significant contribution to the empirical study of the emerging WSN technology. We address many practical issues in real-world application scenarios that are often neglected in the existing WSN research.

73 citations


Journal ArticleDOI
TL;DR: It is shown that for any fixed k, there can be no k-local routing algorithm that guarantees delivery on all unit ball graphs, and guaranteed delivery is possible if the nodes of the unit ball graph are contained in a slab of thickness $$1/\sqrt{2}$$.
Abstract: We study the problem of routing in three-dimensional ad hoc networks. We are interested in routing algorithms that guarantee delivery and are k-local, i.e., each intermediate node v's routing decision only depends on knowledge of the labels of the source and destination nodes, of the subgraph induced by nodes within distance k of v, and of the neighbour of v from which the message was received. We model a three-dimensional ad hoc network by a unit ball graph, where nodes are points in three-dimensional space, and for each node v, there is an edge between v and every node u contained in the unit-radius ball centred at v. The question of whether there is a simple local routing algorithm that guarantees delivery in unit ball graphs has been open for some time. In this paper, we answer this question in the negative: we show that for any fixed k, there can be no k-local routing algorithm that guarantees delivery on all unit ball graphs. This result is in contrast with the two-dimensional case, where 1-local routing algorithms that guarantee delivery are known. Specifically, we show that guaranteed delivery is possible if the nodes of the unit ball graph are contained in a slab of thickness $$1/\sqrt{2}.$$ However, there is no k-local routing algorithm that guarantees delivery for the class of unit ball graphs contained in thicker slabs, i.e., slabs of thickness $$1/\sqrt{2} + \epsilon$$ for some $$ \epsilon > 0.$$ The algorithm for routing in thin slabs derives from a transformation of unit ball graphs contained in thin slabs into quasi unit disc graphs, which yields a 2-local routing algorithm. We also show several results that further elaborate on the relationship between these two classes of graphs.
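A small sketch of the model itself (not of any routing algorithm) may help: it builds a unit ball graph from 3-D points and checks whether the point set fits in a slab of thickness $$1/\sqrt{2}$$, the regime in which the paper shows guaranteed-delivery local routing is possible. The point coordinates are hypothetical:

```python
# Sketch of the unit ball graph model and the thin-slab condition from the
# abstract (illustrative points; not the paper's routing algorithm).
import itertools
import math

def unit_ball_graph(points):
    """Edge between every pair of points at Euclidean distance <= 1."""
    edges = []
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if math.dist(p, q) <= 1.0:
            edges.append((i, j))
    return edges

def fits_in_thin_slab(points, axis=2):
    """True if the extent along the given axis is at most 1/sqrt(2)."""
    coords = [p[axis] for p in points]
    return max(coords) - min(coords) <= 1 / math.sqrt(2)

if __name__ == "__main__":
    pts = [(0, 0, 0.0), (0.8, 0.1, 0.3), (1.5, 0.2, 0.6), (2.1, 0.0, 0.1)]
    print("edges:", unit_ball_graph(pts))
    print("thin slab (z-axis):", fits_in_thin_slab(pts))
```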

70 citations


Journal ArticleDOI
TL;DR: This paper proposes a secure encrypted-data aggregation scheme for wireless sensor networks that eliminates redundant sensor readings without using encryption, maintains data secrecy and privacy during transmission, and can be practically implemented on off-the-shelf sensor platforms.
Abstract: This paper proposes a secure encrypted-data aggregation scheme for wireless sensor networks. Our design for data aggregation eliminates redundant sensor readings without using encryption and maintains data secrecy and privacy during transmission. Conventional aggregation functions operate when readings are received in plaintext. If readings are encrypted, aggregation requires decryption, creating extra overhead and key management issues. In contrast to conventional schemes, our proposed scheme provides security and privacy, and duplicate instances of original readings will be aggregated into a single packet. Our scheme is resilient to known-plaintext attacks, chosen-plaintext attacks, ciphertext-only attacks and man-in-the-middle attacks. Our experiments show that our proposed aggregation method significantly reduces communication overhead and can be practically implemented on off-the-shelf sensor platforms.

59 citations


Journal ArticleDOI
TL;DR: A learning automata-based data aggregation method is proposed for sensor networks in which the environment's changes cannot be predicted beforehand; the results show that the proposed method outperforms three existing methods, especially when the environment is highly dynamic.
Abstract: One way to reduce energy consumption in wireless sensor networks is to reduce the number of packets being transmitted in the network. As sensor networks are usually deployed with a number of redundant nodes (to overcome the problem of node failures which is common in such networks), many nodes may have almost the same information which can be aggregated in intermediate nodes, and hence reduce the number of transmitted packets. Aggregation ratio is maximized if data packets of all nodes having almost the same information are aggregated together. For this to occur, each node should forward its packets along a path on which a maximum number of nodes with almost the same information as the information of the sending node exist. In many real scenarios, such a path does not remain the same for the whole network lifetime and changes from time to time. These changes may result from changes occurring in the environment in which the sensor network resides and usually cannot be predicted beforehand. In this paper, a learning automata-based data aggregation method is proposed for sensor networks in which the environment's changes cannot be predicted beforehand. In the proposed method, each node in the network is equipped with a learning automaton. These learning automata in the network collectively learn the path of aggregation with maximum aggregation ratio for each node for transmitting its packets toward the sink. To evaluate the performance of the proposed method, computer simulations have been conducted and the results are compared with the results of three existing methods. The results have shown that the proposed method outperforms all these methods, especially when the environment is highly dynamic.
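For readers unfamiliar with learning automata, the sketch below shows the kind of linear reward-inaction update a node-resident automaton could use to reinforce the next hop whose path yields good aggregation; the action names and the reward signal are hypothetical, and the paper's exact automaton scheme may differ:

```python
# Hedged sketch: a linear reward-inaction (L_R-I) learning automaton over
# candidate next hops. Hop names and the reward condition are hypothetical.
import random

class LearningAutomaton:
    def __init__(self, actions, reward_step=0.1):
        self.actions = list(actions)
        self.p = [1.0 / len(actions)] * len(actions)   # action probabilities
        self.a = reward_step

    def choose(self):
        return random.choices(range(len(self.actions)), weights=self.p)[0]

    def reward(self, chosen):
        # L_R-I: move probability mass toward the rewarded action;
        # on penalty, probabilities are left unchanged (inaction).
        for i in range(len(self.p)):
            if i == chosen:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.a)

if __name__ == "__main__":
    la = LearningAutomaton(["next_hop_A", "next_hop_B", "next_hop_C"])
    for _ in range(200):
        i = la.choose()
        # Pretend next_hop_B consistently yields the best aggregation ratio.
        if la.actions[i] == "next_hop_B":
            la.reward(i)
    print(dict(zip(la.actions, (round(x, 3) for x in la.p))))
```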

Journal ArticleDOI
TL;DR: A MAC protocol that comprehensively addresses deafness and the directional hidden terminal problem using a single channel and single radio interface is designed and evaluated through detailed simulation studies; results indicate that the protocol can effectively address both problems and increase network performance.
Abstract: We address the deafness and directional hidden terminal problems that occur when MAC protocols are designed for directional antenna-based wireless multi-hop networks. Deafness occurs when the transmitter fails to communicate to its intended receiver, because the receiver's antenna is oriented in a different direction. The directional hidden terminal problem occurs when the transmitter fails to hear a prior RTS/CTS exchange between another pair of nodes and causes a collision by initiating a transmission to the receiver of the ongoing communication. Though directional antennas offer better spatial reuse, these problems can have a serious impact on network performance. In this paper, we study various scenarios in which these problems can occur and design a MAC protocol that solves them comprehensively using only a single channel and single radio interface. Current solutions in the literature either do not address these issues comprehensively or use more than one radio/channel to solve them. We evaluate our protocol using detailed simulation studies. Simulation results indicate that our protocol can effectively address deafness and the directional hidden terminal problem and increase network performance.

Journal ArticleDOI
TL;DR: A trust management model that can uniformly support the needs of nodes with highly diverse network roles and capabilities, by exploiting the pre-deployment knowledge on the network topology and the information flows, and by allowing for flexibility in the trust establishment process is proposed.
Abstract: Wireless sensor networks are characterised by the distributed nature of their operation and the resource constraints on the nodes. Trust management schemes that are targeted at sensor networks need to be lightweight in terms of computational and communication requirements, yet powerful in terms of flexibility in managing trust between nodes of heterogeneous deployments. In this paper, we propose a trust management model that can uniformly support the needs of nodes with highly diverse network roles and capabilities, by exploiting the pre-deployment knowledge on the network topology and the information flows, and by allowing for flexibility in the trust establishment process. The model is hybrid, combining aspects from certificate-based and behaviour-based approaches on trust establishment on common evaluation processes and metrics. It enables controlled trust evolution based on network pre-configuration, and controlled trust revocation through the propagation of behaviour evaluation results made available by supervision networks. The proposed model and trust metrics have been validated through simulation. The results and analysis demonstrate its effectiveness in managing the trust relationships between nodes and clusters, while distributing the computational cost of trust evaluation operations.

Journal ArticleDOI
TL;DR: A hybrid system is proposed which combines the energy efficient and statistically reliable transport (eESRT) protocol with the implicit and explicit ARQ (ieARQ) protocol, and which adaptively switches between the eESRT and ieARQ mechanisms according to a dynamic hop threshold H_sw proposed in this work.
Abstract: Typical wireless sensor network deployments are expected to be in unattended terrains where link packet error rate may be as high as 70% and path length could be up to tens of hops. In coping with such harsh conditions, we introduce a new notion of statistical reliability to achieve a balance between data reliability and energy consumption. Under this new paradigm, the energy efficiency of a comprehensive set of statistically reliable data delivery protocols is analyzed. Based on the insight gained, we propose a hybrid system which combines the energy efficient and statistically reliable transport (eESRT) protocol with the implicit and explicit ARQ (ieARQ) protocol. This hybrid system adaptively switches between the eESRT and ieARQ mechanisms according to a dynamic hop threshold H_sw proposed in this work. Simulation and experiment results confirm our theoretical findings and demonstrate the advantages of the hybrid system in boosting energy efficiency, reducing end-to-end delay, and in overcoming the "avalanche" effect.
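A toy illustration of threshold-based switching between the two mechanisms is given below; note that the abstract does not say which mechanism applies on which side of H_sw, so the mapping chosen here (eESRT for short paths, ieARQ beyond the threshold) and the fixed threshold value are assumptions:

```python
# Toy sketch of switching on a hop threshold H_sw (the paper derives H_sw
# dynamically; the mapping of mechanisms to sides of the threshold is assumed).
def select_transport(path_length_hops, h_sw):
    return "eESRT" if path_length_hops <= h_sw else "ieARQ"

if __name__ == "__main__":
    h_sw = 6  # hypothetical threshold value
    for hops in (2, 6, 7, 15):
        print(hops, "hops ->", select_transport(hops, h_sw))
```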

Journal ArticleDOI
TL;DR: In this article, the authors propose a distributed algorithm for the autonomous deployment of mobile sensors called Push & Pull, which does not require any prior knowledge of the operating conditions or any manual tuning of key parameters.
Abstract: Mobile sensor networks are important for several strategic applications devoted to monitoring critical areas. In such hostile scenarios, sensors cannot be deployed manually and are either sent from a safe location or dropped from an aircraft. Mobile devices permit a dynamic deployment reconfiguration that improves the coverage in terms of completeness and uniformity. In this paper we propose a distributed algorithm for the autonomous deployment of mobile sensors called Push & Pull. According to our proposal, movement decisions are made by each sensor on the basis of locally available information and do not require any prior knowledge of the operating conditions or any manual tuning of key parameters. We formally prove that, when a sufficient number of sensors are available, our approach guarantees a complete and uniform coverage. Furthermore, we demonstrate that the algorithm execution always terminates, preventing movement oscillations. Numerous simulations show that our algorithm reaches a complete coverage within reasonable time with moderate energy consumption, even when the target area has irregular shapes. Performance comparisons between Push & Pull and one of the most widely recognized algorithms show how the former can efficiently reach a more uniform and complete coverage under a wide range of working scenarios.

Journal ArticleDOI
TL;DR: In this paper, the authors propose an optimal distributed network selection scheme in heterogeneous wireless networks considering multimedia application layer QoS, where the integrated network is formulated as a restless bandit system and the optimal network selection policy is shown to be indexable.
Abstract: The complementary characteristics of different wireless networks make it attractive to integrate a wide range of radio access technologies. Most previous work on integrating heterogeneous wireless networks concentrates on network layer quality of service (QoS), such as blocking probability and utilization, as design criteria. However, from a user's point of view, application layer QoS, such as multimedia distortion, is an important issue. In this paper, we propose an optimal distributed network selection scheme in heterogeneous wireless networks considering multimedia application layer QoS. Specifically, we formulate the integrated network as a restless bandit system. With this stochastic optimization formulation, the optimal network selection policy is indexable, meaning that the network with the lowest index should be selected. The proposed scheme is applicable to both tight coupling and loose coupling scenarios in the integration of heterogeneous wireless networks. Simulation results are presented to illustrate the performance of the proposed scheme.
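The index-policy idea can be illustrated with the hedged sketch below: each candidate network receives an index and the network with the lowest index is selected. The index function here (a weighted combination of assumed distortion and load figures) is purely illustrative; the paper derives its indices from the restless bandit model:

```python
# Illustrative index-based network selection (the index function and the
# candidate figures are made up, not the paper's derived indices).
def network_index(expected_distortion, load, w_distortion=1.0, w_load=0.5):
    return w_distortion * expected_distortion + w_load * load

def select_network(candidates):
    """candidates: dict name -> (expected_distortion, load); pick lowest index."""
    return min(candidates, key=lambda n: network_index(*candidates[n]))

if __name__ == "__main__":
    nets = {"WLAN": (0.12, 0.70), "UMTS": (0.20, 0.30), "GPRS": (0.45, 0.10)}
    print("selected:", select_network(nets))
```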

Journal ArticleDOI
TL;DR: The proposed heuristic algorithm adapts methods usually applied in network design problems to the specific requirements of sensor networks, suggesting an appropriate number of MAs that minimizes the overall data fusion cost and constructing near-optimal itineraries for each of them.
Abstract: In wireless sensor networks (WSNs), a large amount of redundant sensory traffic is produced due to massive node density and diverse node placement. This depletes scarce network resources such as bandwidth and energy, thus decreasing the lifetime of the sensor network. Recently, the mobile agent (MA) paradigm has been proposed as a solution to overcome these problems. The MA approach accounts for performing data processing and making data aggregation decisions at nodes rather than bringing data back to a central processor (sink). Using this approach, redundant sensory data is eliminated. In this article, we consider the problem of calculating near-optimal routes for MAs that incrementally fuse the data as they visit the nodes in a WSN. The order of visited nodes (the agent's itinerary) affects not only the quality but also the overall cost of data fusion. Our proposed heuristic algorithm adapts methods usually applied in network design problems to the specific requirements of sensor networks. It computes an approximate solution to the problem by suggesting an appropriate number of MAs that minimizes the overall data fusion cost and constructing near-optimal itineraries for each of them. The performance gain of our algorithm over alternative approaches, in terms of both cost and task completion latency, is demonstrated by a quantitative evaluation and also in simulated environments through a Java-based tool.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed location-tracking algorithm using KF with the RFID-assisted scheme can achieve a high degree of location accuracy (i.e., more than 90% of the estimated positions have error distances of less than 2.1 m).
Abstract: This paper presents adaptive algorithms for estimating the location of a mobile terminal (MT) based on radio propagation modeling (RPM), Kalman filtering (KF), and radio-frequency identification (RFID) assisting for indoor wireless local area networks (WLANs). The location of the MT of the extended KF positioning algorithm is extracted from the constant-speed trajectory and the radio propagation model. The observation information of the KF tracker is extracted from the empirical and RPM positioning methods. Specifically, a sensor-assisted method employs an RFID system to adapt the sequential selection cluster algorithm. As compared with the empirical method, not only can the RPM algorithm reduce the number of training data points and perform on-line calibration in the signal space, but the RPM and KF algorithms can alleviate the problem of aliasing. In addition, the KF tracker with the RFID-assisted scheme can calibrate the location estimation and improve the corner effect. Experimental results demonstrate that the proposed location-tracking algorithm using KF with the RFID-assisted scheme can achieve a high degree of location accuracy (i.e., more than 90% of the estimated positions have error distances of less than 2.1 m).
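To illustrate the Kalman filtering machinery the tracker builds on, here is a minimal 1-D constant-velocity filter fed with toy position fixes; the paper's filter is two-dimensional, constant-speed, and driven by RPM/RFID observations, so treat this only as a sketch with assumed noise parameters:

```python
# Minimal constant-velocity Kalman filter sketch for 1-D position tracking
# (toy values; not the paper's extended KF or its observation model).
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])          # state transition: [position, velocity]
H = np.array([[1, 0]])                   # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[4.0]])                    # measurement noise covariance (assumed)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position observation z (e.g., an RSS/RFID-derived fix)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    x = np.array([[0.0], [0.0]])
    P = np.eye(2)
    for z in [1.2, 2.1, 2.9, 4.2, 5.0]:  # noisy position fixes (metres)
        x, P = kf_step(x, P, np.array([[z]]))
    print("estimated position/velocity:", x.ravel())
```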

Journal ArticleDOI
TL;DR: An analytical model of the time it takes for the master in a piconet to discover one slave is given and it is shown that, even in the absence of packet interference, the discovery time can be long in some instances.
Abstract: Device discovery and connection establishment are fundamental to communication between two Bluetooth (BT) devices. In this paper, we give an analytical model of the time it takes for the master in a piconet to discover one slave. We show that, even in the absence of packet interference, the discovery time can be long in some instances. We have simulated the discovery protocol by actually implementing it to validate the analytical model. By means of simulations, we show how discovery time is affected by (i) the presence of multiple potential slaves, and (ii) changes in the maximum backoff limit. Using simulation studies we observed the effectiveness of two proposed improvements to device discovery, namely, (i) avoiding repetitions of the A and B trains before a train switch, and (ii) eliminating the idea of random backoff, or reducing the backoff limit. We show that discovery time can be reduced by avoiding repetitions of the A and B trains before a train switch. However, complete elimination of the random backoff is not a good idea, as discovery time will be too long when the number of BT devices is large. Instead, choosing a small backoff limit of 250–300 slots is highly effective in reducing discovery time even in the presence of a large number (say, 50) of potential slaves.

Journal ArticleDOI
TL;DR: The distributed adaptive sleep scheduling algorithm (DASSA) does not require location information of sensors while maintaining connectivity and satisfying a user defined coverage target and attains network lifetimes up to 92% of the centralized solution and it achieves significantly longer lifetimes compared with the DGT algorithm.
Abstract: One of the most important design objectives in wireless sensor networks (WSN) is minimizing the energy consumption since these networks are expected to operate in harsh conditions where the recharging of batteries is impractical, if not impossible. The sleep scheduling mechanism allows sensors to sleep intermittently in order to reduce energy consumption and extend network lifetime. In applications where 100% coverage of the network field is not crucial, allowing the coverage to drop below full coverage while keeping above a predetermined threshold, i.e., partial coverage, can further increase the network lifetime. In this paper, we develop the distributed adaptive sleep scheduling algorithm (DASSA) for WSNs with partial coverage. DASSA does not require location information of sensors while maintaining connectivity and satisfying a user defined coverage target. In DASSA, nodes use the residual energy levels and feedback from the sink for scheduling the activity of their neighbors. This feedback mechanism reduces the randomness in scheduling that would otherwise occur due to the absence of location information. The performance of DASSA is compared with an integer linear programming (ILP) based centralized sleep scheduling algorithm (CSSA), which is devised to find the maximum number of rounds the network can survive assuming that the location information of all sensors is available. DASSA is also compared with the decentralized DGT algorithm. DASSA attains network lifetimes up to 92% of the centralized solution and it achieves significantly longer lifetimes compared with the DGT algorithm.

Journal ArticleDOI
TL;DR: Interestingly, the experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, with predictable dynamics at least.
Abstract: The assessment of routing protocols for mobile wireless networks is a difficult task, because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered as deterministic, which may make them easier to study. Recently, a graph theoretic model, the evolving graphs, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there is no study on the use of such theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of the evolving graph theory in the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving graph-based routing protocol, and then to use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, with predictable dynamics at least. In order to make this model widely applicable, however, some practical issues still have to be addressed and incorporated into the model, such as adaptive algorithms. We also discuss such issues in this paper, as a result of our experience.
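The evolving-graph notion of a foremost (earliest-arrival) journey, which underlies such a benchmark, can be sketched as follows; the link schedule and the unit traversal time are hypothetical:

```python
# Small sketch of the evolving-graph idea: edges exist only during known time
# intervals, and a "foremost journey" reaches the target as early as possible.
import heapq

def foremost_journey(edges, source, target, t_start=0):
    """edges: dict (u, v) -> list of (open, close) intervals when the link is up,
       sorted in time. Returns the earliest arrival time at target, assuming a
       unit traversal time per hop, or None if unreachable."""
    adjacency = {}
    for (u, v), intervals in edges.items():
        adjacency.setdefault(u, []).append((v, intervals))
    best = {source: t_start}
    heap = [(t_start, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v, intervals in adjacency.get(u, []):
            for open_t, close_t in intervals:
                depart = max(t, open_t)
                if depart + 1 <= close_t:            # link must stay up to cross it
                    arrive = depart + 1
                    if arrive < best.get(v, float("inf")):
                        best[v] = arrive
                        heapq.heappush(heap, (arrive, v))
                    break
    return None

if __name__ == "__main__":
    schedule = {("A", "B"): [(0, 4)], ("B", "C"): [(6, 9)], ("A", "C"): [(20, 25)]}
    print("earliest arrival at C:", foremost_journey(schedule, "A", "C"))
```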

Journal ArticleDOI
TL;DR: Simulations of different scenarios show that adding such a classifier to TCP can improve the throughput of TCP substantially in wired/wireless networks without compromising TCP-friendliness in both wired and wireless environments.
Abstract: TCP is suboptimal in heterogeneous wired/wireless networks because it reacts in the same way to losses due to congestion and losses due to link errors. In this paper, we propose to improve TCP performance in wired/wireless networks by endowing it with a classifier that can distinguish packet loss causes. In contrast to other proposals, we change neither TCP's congestion control nor TCP's error recovery. A packet loss whose cause is classified as link error will simply be ignored by TCP's congestion control and recovered as usual, while a packet loss classified as congestion loss will trigger both mechanisms as usual. To build our classification algorithm, a database of pre-classified losses is gathered by simulating a large set of random network conditions, and classification models are automatically built from this database by using supervised learning methods. Several learning algorithms are compared for this task. Our simulations of different scenarios show that adding such a classifier to TCP can improve the throughput of TCP substantially in wired/wireless networks without compromising TCP-friendliness in both wired and wireless environments.
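As a stand-in for the classifier described above, the sketch below trains a decision tree on a synthetic, labelled database of losses with made-up features; the paper builds its database from large sets of simulated network conditions and compares several learning algorithms:

```python
# Illustrative loss classifier: a decision tree trained on synthetic,
# pre-labelled losses (features, labels and data are made up; the paper
# derives them from ns-2 simulations and compares multiple learners).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 1000
# Hypothetical per-loss features: one-way delay relative to its minimum,
# and packet inter-arrival jitter around the loss.
congestion = np.column_stack([rng.normal(1.8, 0.3, n), rng.normal(0.9, 0.2, n)])
link_error = np.column_stack([rng.normal(1.1, 0.2, n), rng.normal(0.3, 0.1, n)])
X = np.vstack([congestion, link_error])
y = np.array([1] * n + [0] * n)          # 1 = congestion loss, 0 = link error

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
# A new loss with low relative delay and low jitter -> likely a link error,
# so TCP would skip the congestion-window reduction for it.
print("congestion loss?", bool(clf.predict([[1.0, 0.25]])[0]))
```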

Journal ArticleDOI
TL;DR: A probabilistic system for auto-diagnosis in the radio access part of wireless networks is presented, comprising a model and a method, and techniques are proposed for the automatic learning of the model parameters.
Abstract: Self-management is essential for Beyond 3G (B3G) systems, where the existence of multiple access technologies (GSM, GPRS, UMTS, WLAN, etc.) will complicate network operation. Diagnosis, that is, fault identification, is the most difficult task in automatic fault management. This paper presents a probabilistic system for auto-diagnosis in the radio access part of wireless networks, which comprises a model and a method. The parameters of the model are thresholds for the discretization of Key Performance Indicators (KPIs) and probabilities. In this paper, some techniques are proposed for the automatic learning of those model parameters. In order to support the theoretical concepts, experimental results are examined, based on data from a live network. It has been shown that calculating parameters from network statistics, instead of having them defined by diagnosis experts, greatly increases the performance of the diagnosis system. In addition, the proposed techniques enhance the results obtained with continuous diagnosis models previously presented in the literature.

Journal ArticleDOI
TL;DR: The FastScan scheme is proposed, which reduces the scanning delay by using a client-based database; the net handoff delay is reduced to as low as 20 ms for IEEE 802.11b networks.
Abstract: IEEE 802.11 Wireless LANs are increasingly being used in enterprise environments for broadband access. Such large-scale IEEE 802.11 WLAN deployments imply the need for client mobility support; a mobile station has to be "handed off" from one Access Point to another. Seamless handoff is possible for data traffic, which is not affected much by the handoff delay. However, voice traffic has stringent QoS requirements and cannot tolerate more than 50 ms net handoff delay. The basic IEEE 802.11 handoff scheme (implemented in Layers 1 & 2) only achieves a handoff delay of 300 ms at best, leading to disrupted connectivity and call dropping. The delay incurred in scanning for APs across channels contributes to 90% of the total handoff delay. In this paper, the FastScan scheme is proposed, which reduces the scanning delay by using a client-based database. The net handoff delay is reduced to as low as 20 ms for IEEE 802.11b networks. We next suggest "Enhanced FastScan" that uses the direction and relative position of the client with respect to the current AP to satisfy the latency constraint in IEEE 802.11a scenarios, which have significantly higher scanning delays due to the larger number of channels. The proposed schemes do not need any changes in the infrastructure (access points) and require only a single radio and a small cache memory at the client side.

Journal ArticleDOI
TL;DR: The model is in better agreement with simulation results than other models and can be used to compute the signaling overhead during route maintenance of unicast and multicast routing protocols for mobile ad-hoc networks.
Abstract: In this paper, we present a model that estimates the time duration of routes formed by several intermediate nodes in mobile multi-hop ad-hoc networks. First, we analyze a 3-node route, where only the intermediate node is moving while the source and destination nodes remain static. From this case, we show how route duration is affected by the initial position of the intermediate node and the size of the region where it is located. We also consider a second case where all nodes of 3-node routes are mobile. Based on extensive analysis of these routes, we determine the PDF of route duration under two different mobility models. This PDF can be determined by either analytical or statistical methods. The main contribution of this paper is that the time duration of a route formed by N intermediate nodes can be accurately computed by considering the minimum route duration of a set of N routes of 3 nodes each. Simulation work was conducted using the NS-2 network simulator to verify the accuracy of the proposed model and to compare it with other proposals found in the literature. We show that our model is in better agreement with simulation results as compared with other models. Results from this work can be used to compute the signaling overhead during route maintenance of unicast and multicast routing protocols for mobile ad-hoc networks. Similarly, because route duration decreases with route length, this study can be used to scale the network size up/down.
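The minimum-of-3-node-routes idea can be written out explicitly if one assumes the N constituent 3-node route durations are independent and identically distributed with CDF $$F_3(t)$$ (an independence assumption not stated in the abstract; the paper obtains the 3-node distribution from its mobility models):

```latex
% Hedged formalization of the minimum-route-duration argument above.
\[
  T_N = \min\{T_3^{(1)}, \ldots, T_3^{(N)}\},
  \qquad
  F_{T_N}(t) = 1 - \bigl(1 - F_3(t)\bigr)^{N},
  \qquad
  f_{T_N}(t) = N \bigl(1 - F_3(t)\bigr)^{N-1} f_3(t).
\]
```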

Journal ArticleDOI
TL;DR: This paper undertook a detailed characterization of 802.11 link-level behavior using commercial 802.
Abstract: Since wireless signals propagate through the ether, they are significantly affected by attenuation, fading, multipath, and interference. As a result, it is difficult to measure and understand fundamental wireless network behavior. This creates a challenge for both network researchers, who often rely on simulators to evaluate their work, and network managers, who need to deploy and optimize operational networks. Given the complexity of wireless networks, both communities often rely on simplifying rules, which frequently have not been validated using today's wireless radios. In this paper, we undertake a detailed characterization of 802.11 link-level behavior using commercial 802.11 cards. Our study uses a wireless testbed that provides signal propagation emulation, giving us complete control over the signal environment. In addition, we use our measurements to analyze the performance of an operational wireless network. Our work contributes to a more accurate understanding of link-level behavior and enables the development of more accurate wireless network simulators.

Journal ArticleDOI
TL;DR: TCP CERL successfully attacks the well-known performance degradation issue of TCP over channels subject to random losses by distinguishing random losses from congestion losses based on a dynamically set threshold value.
Abstract: In this paper, we propose and verify a modified version of TCP Reno that we call TCP Congestion Control Enhancement for Random Loss (CERL). We compare the performance of TCP CERL, using simulations conducted in ns-2, to the following other TCP variants: TCP Reno, TCP NewReno, TCP Vegas, TCP WestwoodNR and TCP Veno. TCP CERL is a sender-side modification of TCP Reno. It improves the performance of TCP in wireless networks subject to random losses. It utilizes the RTT measurements made throughout the duration of the connection to estimate the queue length of the link, and then estimates the congestion status. By distinguishing random losses from congestion losses based on a dynamically set threshold value, TCP CERL successfully attacks the well-known performance degradation issue of TCP over channels subject to random losses. Unlike other TCP variants, TCP CERL doesn't reduce the congestion window and slow start threshold when random loss is detected. It is very simple to implement, yet provides a significant throughput gain over the other TCP variants mentioned above. In single connection tests, TCP CERL achieved throughput gains of 175%, 153%, 85%, 64% and 88% over TCP Reno, TCP NewReno, TCP Vegas, TCP WestwoodNR and TCP Veno, respectively. In tests with multiple coexisting connections, TCP CERL achieved throughput improvements of 211%, 226%, 123%, 70% and 199% over TCP Reno, TCP NewReno, TCP Vegas, TCP WestwoodNR and TCP Veno, respectively.
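In the spirit of the description above (and not necessarily with CERL's exact formulas), the sketch below estimates the bottleneck backlog from RTT inflation and classifies a loss as congestion-related only when the backlog exceeds a fraction of the largest backlog observed; the 0.55 factor and the variable names are assumptions:

```python
# Hedged sketch of RTT-based backlog estimation and threshold classification
# (parameter values and formulas are illustrative, not CERL's exact design).
class LossClassifier:
    def __init__(self, threshold_fraction=0.55):
        self.rtt_min = float("inf")
        self.max_backlog = 0.0
        self.frac = threshold_fraction

    def on_rtt_sample(self, rtt, cwnd):
        self.rtt_min = min(self.rtt_min, rtt)
        # Packets queued at the bottleneck, estimated from RTT inflation.
        backlog = cwnd * (rtt - self.rtt_min) / rtt
        self.max_backlog = max(self.max_backlog, backlog)
        return backlog

    def classify_loss(self, current_backlog):
        threshold = self.frac * self.max_backlog
        return "congestion" if current_backlog > threshold else "random"

if __name__ == "__main__":
    clf = LossClassifier()
    samples = [(0.100, 10), (0.105, 12), (0.160, 20), (0.110, 20)]  # (RTT s, cwnd)
    backlog = 0.0
    for rtt, cwnd in samples:
        backlog = clf.on_rtt_sample(rtt, cwnd)
    # A "random" verdict means the congestion window would not be reduced.
    print("loss classified as:", clf.classify_loss(backlog))
```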

Journal ArticleDOI
TL;DR: This work presents a distributed algorithm for assigning minimum possible power to all the nodes in a static wireless network such that the resultant network topology is k-connected, and extends the topology control algorithm from static networks to networks having mobile nodes.
Abstract: In wireless multi-hop and ad-hoc networks, minimizing power consumption and at the same time maintaining desired properties of the network topology is of prime importance. In this work, we present a distributed algorithm for assigning minimum possible power to all the nodes in a static wireless network such that the resultant network topology is k-connected. In this algorithm, a node collects the location and maximum power information from all nodes in its vicinity, and then adjusts the power of these nodes in such a way that it can reach all of them through k optimal vertex-disjoint paths. The algorithm ensures k-connectivity in the final topology provided the topology induced when all nodes transmit with their maximum power is k-connected. We extend our topology control algorithm from static networks to networks having mobile nodes. We present proof of correctness for our algorithm for both static and mobile scenarios, and through extensive simulation we present its behavior.
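As an illustration of the property the algorithm maintains (rather than of its distributed, per-node procedure), the sketch below builds the topology induced by a common transmit range over hypothetical node positions and checks k-connectivity with networkx:

```python
# Illustrative k-connectivity check of an induced topology (not the paper's
# distributed power-assignment algorithm; positions and ranges are made up).
import itertools
import math
import networkx as nx

def induced_topology(positions, tx_range):
    G = nx.Graph()
    G.add_nodes_from(range(len(positions)))
    for i, j in itertools.combinations(range(len(positions)), 2):
        if math.dist(positions[i], positions[j]) <= tx_range:
            G.add_edge(i, j)
    return G

def min_range_for_k_connectivity(positions, k, step=1.0, max_range=200.0):
    """Smallest common transmit range (searched in 'step' increments) whose
    induced topology is k-connected, or None if max_range is insufficient."""
    r = step
    while r <= max_range:
        if nx.node_connectivity(induced_topology(positions, r)) >= k:
            return r
        r += step
    return None

if __name__ == "__main__":
    pts = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]   # hypothetical layout
    print("smallest common range for 2-connectivity:",
          min_range_for_k_connectivity(pts, k=2))
```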

Journal ArticleDOI
TL;DR: In this article, a node distribution-based localization (NDBL) algorithm is proposed for low-cost and low-rate wireless sensors, where each node adaptively chooses neighboring nodes, updates its position estimate by minimizing a local cost function, and then passes this updated position to neighboring nodes.
Abstract: Distributed localization algorithms are required for large-scale wireless sensor network applications. In this paper, we introduce an efficient algorithm, termed node distribution-based localization (NDBL), which emphasizes simple refinement and low system-load for low-cost and low-rate wireless sensors. Each node adaptively chooses neighboring nodes, updates its position estimate by minimizing a local cost-function, and then passes this updated position to neighboring nodes. This update process uses a node distribution that has the same density per unit area as large-scale networks. Neighbor nodes are selected from the range in which the strength of received signals is greater than an experimentally based threshold. Based on results of a MATLAB simulation, the proposed algorithm was more accurate than trilateration and less complex than multi-dimensional scaling. Numerically, the mean distance error of the NDBL algorithm is 1.08---5.51 less than that of distributed weighted multi-dimensional scaling (dwMDS). Implementation of the algorithm using MicaZ with TinyOS-2.x confirmed the practicality of the proposed algorithm.

Journal ArticleDOI
TL;DR: This paper studies the joint relay node assignment and power allocation problem, which aims to minimize the total power consumption of the network while providing efficient bandwidth service, and presents a polynomial-time algorithm, JRPA, to optimally solve this problem.
Abstract: In recent years, cooperative communication has been shown to be a promising technology for improving spatial diversity without additional equipment or antennas. With this communication paradigm, energy can be saved by effective relay assignment and power allocation while achieving the required bandwidth for each transmission pair. Thus, this paper studies the joint relay node assignment and power allocation problem, which aims to minimize the total power consumption of the network while providing efficient bandwidth service. We first analyze the minimum power consumption under the bandwidth requirement for different communication modes. Based on the analytical results, we present a polynomial-time algorithm, JRPA, to optimally solve this problem. The algorithm first constructs a weighted bipartite graph G based on the given transmission pairs and relay nodes. Then, we adopt the KM method to find a saturated matching M, and assign the relay nodes to the transmission pairs based on the matching. The optimality of the algorithm is also proved. The simulation results show that the JRPA algorithm can save about 34.2% and 18.9% of the power consumption compared with the direct transmission and ORA schemes, respectively, in many situations.
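The matching step can be sketched with a standard Hungarian/KM solver: build a cost matrix whose entry (i, j) is the minimum power needed if relay j serves transmission pair i, then find the minimum-total-power assignment. The cost values below are made up; the paper derives them from its bandwidth-requirement analysis:

```python
# Hedged sketch of the bipartite-matching step using the Hungarian/KM method
# (cost values are hypothetical, not the paper's derived minimum powers).
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: transmission pairs, columns: candidate relay nodes (assumed costs, mW).
power_cost = np.array([
    [120.0,  95.0, 140.0],
    [ 80.0, 110.0,  70.0],
    [150.0, 130.0, 160.0],
])

rows, cols = linear_sum_assignment(power_cost)   # minimum-total-power matching
for pair, relay in zip(rows, cols):
    print(f"pair {pair} -> relay {relay} ({power_cost[pair, relay]} mW)")
print("total power:", power_cost[rows, cols].sum())
```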