
Showing papers in "Wireless Networks in 2008"


Journal ArticleDOI
TL;DR: This paper addresses the following relay sensor placement problem: given the set of duty sensors in the plane and the upper bound of the transmission range, compute the minimum number of relay sensors such that the induced topology by all sensors is globally connected.
Abstract: This paper addresses the following relay sensor placement problem: given a set of duty sensors in the plane and an upper bound on the transmission range, compute the minimum number of relay sensors such that the topology induced by all sensors is globally connected. This problem is motivated by the practical tradeoff among performance, lifetime, and cost when designing sensor networks. In our study, this problem is modeled as an NP-hard network optimization problem named Steiner Minimum Tree with Minimum number of Steiner Points and bounded edge length (SMT-MSP). We propose two approximation algorithms and conduct a detailed performance analysis. The first algorithm has a performance ratio of 3 and the second a performance ratio of 2.5.
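
The SMT-MSP formulation above can be made concrete with a small sketch. This is not the paper's 3- or 2.5-ratio algorithm, but the simpler "steinerized MST" baseline such ratios are usually compared against: build a minimum spanning tree over the duty sensors, then subdivide every edge longer than the range bound R with evenly spaced relays.

```python
import math

def steinerized_mst_relays(points, R):
    """Place relay nodes by subdividing MST edges longer than R.

    A simple baseline for SMT-MSP (not the paper's 2.5-approximation):
    build the Euclidean MST with Prim's algorithm, then put
    ceil(d/R) - 1 evenly spaced relays on each edge of length d > R.
    """
    n = len(points)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Prim's algorithm over the complete Euclidean graph.
    in_tree = {0}
    edges = []
    best = {i: (dist(points[0], points[i]), 0) for i in range(1, n)}
    while len(in_tree) < n:
        j = min(best, key=lambda i: best[i][0])
        d, parent = best.pop(j)
        in_tree.add(j)
        edges.append((parent, j, d))
        for i in best:
            d2 = dist(points[j], points[i])
            if d2 < best[i][0]:
                best[i] = (d2, j)
    relays = []
    for u, v, d in edges:
        k = math.ceil(d / R) - 1          # relays needed on this edge
        for s in range(1, k + 1):
            t = s / (k + 1)
            relays.append((points[u][0] + t * (points[v][0] - points[u][0]),
                           points[u][1] + t * (points[v][1] - points[u][1])))
    return relays

pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 2.5)]
print(len(steinerized_mst_relays(pts, 1.0)))  # 4: two relays per long MST edge
```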

476 citations


Journal ArticleDOI
TL;DR: This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities.
Abstract: This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities. More specifically, for WSNs that comprise a large number of statically placed sensor nodes transmitting data to a collection point (the sink), we show that by controlling the sink movements we can obtain remarkable lifetime improvements. In order to determine sink movements, we first define a Mixed Integer Linear Programming (MILP) analytical model whose solution determines the sink routes that maximize network lifetime. Our contribution expands further by defining the first heuristics for controlled sink movements that are fully distributed and localized. Our Greedy Maximum Residual Energy (GMRE) heuristic moves the sink from its current location to a new site as if drawn toward the area where nodes have the highest residual energy. We also introduce a simple distributed mobility scheme (Random Movement, or RM) according to which the sink moves uncontrolled and randomly throughout the network. The different mobility schemes are compared through extensive ns2-based simulations in networks with different node deployments, data routing protocols, and constraints on the sink movements. In all considered scenarios, we observe that moving the sink always increases network lifetime. In particular, our experiments show that controlling the mobility of the sink leads to remarkable improvements, as high as sixfold compared to having the sink statically (and optimally) placed, and as high as twofold compared to uncontrolled mobility.
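
A minimal sketch of the GMRE move rule described above, under the assumption that candidate sink sites form a graph and that each site is scored by the residual energy of the nodes around it (how that score is gathered in a distributed way is not shown):

```python
def gmre_next_site(current, adjacency, residual_energy):
    """Greedy Maximum Residual Energy (GMRE) move rule, sketched:
    the sink relocates to the adjacent site whose surrounding nodes
    hold the most residual energy, but only if that beats staying put.
    residual_energy[s] is the summed residual energy of nodes near
    site s (the notion of 'near' is an assumption of this sketch)."""
    best = max(adjacency[current], key=lambda s: residual_energy[s],
               default=current)
    if residual_energy.get(best, 0.0) > residual_energy.get(current, 0.0):
        return best
    return current

adj = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
energy = {"A": 5.0, "B": 9.0, "C": 7.0}
print(gmre_next_site("A", adj, energy))  # moves to "B"
```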

393 citations


Journal ArticleDOI
TL;DR: The Horus system identifies different causes for the wireless channel variations and addresses them and uses location-clustering techniques to reduce the computational requirements of the algorithm, which helps in supporting a larger number of users by running the algorithm at the clients.
Abstract: We present the design and implementation of the Horus WLAN location determination system. The design of the Horus system aims at satisfying two goals: high accuracy and low computational requirements. The Horus system identifies different causes for the wireless channel variations and addresses them to achieve its high accuracy. It uses location-clustering techniques to reduce the computational requirements of the algorithm. The lightweight Horus algorithm helps in supporting a larger number of users by running the algorithm at the clients. We discuss the different components of the Horus system and evaluate its performance on two testbeds. Our results show that the Horus system achieves its goal. It has an error of less than 0.6 meter on the average and its computational requirements are more than an order of magnitude better than other WLAN location determination systems. Moreover, the techniques developed in the context of the Horus system are general and can be applied to other WLAN location determination systems to enhance their accuracy. We also report lessons learned from experimenting with the Horus system and provide directions for future work.
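
Setting the clustering machinery aside, the probabilistic core of a Horus-style fingerprinting system can be sketched as follows (the radio-map layout and the per-AP independence assumption are illustrative, not the exact Horus design):

```python
def most_likely_location(observation, radio_map):
    """Probabilistic WLAN fingerprinting in the spirit of Horus:
    radio_map[loc][ap] maps a quantized RSS value to its observed
    probability at loc; the estimate is the location maximizing the
    joint likelihood of the observed per-AP signal strengths
    (access points are assumed independent, as is common in
    fingerprinting systems)."""
    def likelihood(loc):
        p = 1.0
        for ap, rss in observation.items():
            # Small floor smooths over RSS values never seen in training.
            p *= radio_map[loc].get(ap, {}).get(rss, 1e-9)
        return p
    return max(radio_map, key=likelihood)

rmap = {
    "roomA": {"ap1": {-40: 0.7, -50: 0.3}, "ap2": {-60: 0.8, -70: 0.2}},
    "roomB": {"ap1": {-50: 0.6, -60: 0.4}, "ap2": {-70: 0.9}},
}
print(most_likely_location({"ap1": -50, "ap2": -70}, rmap))  # "roomB"
```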

342 citations


Journal ArticleDOI
TL;DR: This paper considers three kinds of deployments for a sensor network on a unit square--a √n×√n grid, random uniform (with n points), and Poisson (with density n)--and claims that the critical value of the function npπr²/log(np) is 1 for the event of k-coverage of every point.
Abstract: Sensor networks are often desired to last many times longer than the active lifetime of individual sensors. This is usually achieved by putting sensors to sleep for most of their lifetime. On the other hand, event monitoring applications require guaranteed k-coverage of the protected region at all times. As a result, determining the appropriate number of sensors to deploy that achieves both goals simultaneously becomes a challenging problem. In this paper, we consider three kinds of deployments for a sensor network on a unit square--a √n × √n grid, random uniform (with n points), and Poisson (with density n). In all three deployments, each sensor is active with probability p, independently of the others. We then claim that the critical value of the function npπr²/log(np) is 1 for the event of k-coverage of every point. We also provide an upper bound on the window of this phase transition. Although the conditions for the three deployments are similar, we obtain sharper bounds for the random deployments than for the grid deployment, owing to boundary effects. We also provide corrections to previously published results. Finally, we use simulation to show the usefulness of our analysis in real deployment scenarios.
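
Taking the claimed critical value at face value, the threshold sensing radius follows directly by solving npπr²/log(np) = 1 for r; a small sketch:

```python
import math

def critical_radius(n, p):
    """Sensing radius at the phase transition implied by the claim
    that n*p*pi*r**2 / log(n*p) has critical value 1: solving for r
    gives r* = sqrt(log(n*p) / (n*p*pi)). Radii noticeably above r*
    should yield k-coverage w.h.p.; noticeably below, coverage fails."""
    active = n * p                     # expected number of active sensors
    return math.sqrt(math.log(active) / (active * math.pi))

r = critical_radius(10_000, 0.5)
print(round(r, 4))  # ≈ 0.0233 for 5000 expected active sensors
```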

254 citations


Journal ArticleDOI
TL;DR: This paper utilizes the multiple paths between source and sink pairs for QoS provisioning and converts the optimization problem, formulated as a probabilistic program, into a deterministic linear program, which is much easier and more convenient to solve.
Abstract: Sensor nodes are densely deployed to accomplish various applications because of their inexpensive cost and small size. Depending on the application, the traffic in a wireless sensor network may mix time-sensitive packets and reliability-demanding packets. Therefore, QoS routing is an important issue in wireless sensor networks. Our goal is to provide soft QoS to different packets, as path information is not readily available in wireless networks. In this paper, we utilize the multiple paths between source and sink pairs for QoS provisioning. Unlike end-to-end QoS schemes, soft QoS mapped onto the links of a path is provided based on local link state information. By estimating and approximating path quality, the traditionally NP-complete QoS problem can be transformed into a more tractable one. The idea is to formulate the optimization problem as a probabilistic program and then, using an approximation technique, convert it into a deterministic linear program, which is much easier and more convenient to solve. More importantly, the resulting solution also solves the original probabilistic program. Simulation results demonstrate the effectiveness of our approach.
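
One standard way to turn a probabilistic (chance-constrained) delay requirement into a deterministic number per candidate path is the normal approximation below. This is a hedged illustration of the general idea, not necessarily the paper's exact transformation:

```python
from math import sqrt
from statistics import NormalDist

def deterministic_delay_bound(link_means, link_vars, alpha):
    """Chance-constraint conversion sketch (not necessarily the
    paper's exact technique): with independent, roughly Gaussian
    link delays, Pr(total delay <= D) >= alpha is equivalent to
        sum(mu_i) + z_alpha * sqrt(sum(var_i)) <= D,
    a deterministic number per candidate path, so path selection
    reduces to an ordinary (integer) linear program."""
    z = NormalDist().inv_cdf(alpha)    # alpha-quantile of the standard normal
    return sum(link_means) + z * sqrt(sum(link_vars))

# A 3-link path: mean delays 2.0, 3.0, 1.5 and variances 0.25, 0.16, 0.09.
bound = deterministic_delay_bound([2.0, 3.0, 1.5], [0.25, 0.16, 0.09], 0.95)
print(round(bound, 3))  # ≈ 7.663: feasible iff the deadline D >= 7.663
```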

239 citations


Journal ArticleDOI
TL;DR: In this article, a scalable cross-layer framework is proposed to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level coverage based on load, throughput, and channel measurements.
Abstract: We investigate a wireless system of multiple cells, each having a downlink shared channel in support of high-speed packet data services. In practice, such a system consists of hierarchically organized entities including a central server, Base Stations (BSs), and Mobile Stations (MSs). Our goal is to improve global resource utilization and reduce regional congestion given asymmetric arrivals and departures of mobile users, a goal requiring load balancing among multiple cells. For this purpose, we propose a scalable cross-layer framework to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level cell coverage based on load, throughput, and channel measurements. In this framework, an opportunistic scheduling algorithm--the weighted Alpha-Rule--exploits the gain of multiuser diversity in each cell independently, trading aggregate (mean) down-link throughput for fairness and minimum rate guarantees among MSs. Each MS adapts to its channel dynamics and the load fluctuations in neighboring cells, in accordance with MSs' mobility or their arrival and departure, by initiating load-aware handoff and cell-site selection. The central server adjusts schedulers of all cells to coordinate their coverage by prompting cell breathing or distributed MS handoffs. Across the whole system, BSs and MSs constantly monitor their load, throughput, or channel quality in order to facilitate the overall system coordination. Our specific contributions in such a framework are highlighted by the minimum-rate guaranteed weighted Alpha-Rule scheduling, the load-aware MS handoff/cell-site selection, and the Media Access Control (MAC)-layer cell breathing. Our evaluations show that the proposed framework can improve global resource utilization and load balancing, resulting in a smaller blocking rate of MS arrivals without extra resources while the aggregate throughput remains roughly the same or improved at the hot-spots. 
Our simulation tests also show that the coordinated system is robust to dynamic load fluctuations and is scalable to both the system dimension and the size of MS population.
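
The weighted Alpha-Rule at the heart of the framework can be sketched as a per-slot index policy (the minimum-rate guarantees described above need extra token machinery not shown here):

```python
def alpha_rule_pick(rates, avg_throughput, weights, alpha):
    """Weighted alpha-rule scheduling, sketched: in each slot serve
    the user maximizing w_i * r_i / R_i**alpha, where r_i is the
    user's current feasible rate and R_i its smoothed average
    throughput. alpha = 0 is pure max-rate, alpha = 1 is
    proportional fairness; larger alpha trades aggregate throughput
    for fairness, matching the tradeoff described in the abstract."""
    return max(rates, key=lambda i: weights[i] * rates[i] / avg_throughput[i] ** alpha)

rates = {"u1": 12.0, "u2": 6.0}   # current channel-dependent rates
avg = {"u1": 8.0, "u2": 2.0}      # smoothed average throughputs
w = {"u1": 1.0, "u2": 1.0}
print(alpha_rule_pick(rates, avg, w, 0.0))  # "u1": pure max-rate
print(alpha_rule_pick(rates, avg, w, 1.0))  # "u2": proportional fairness
```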

177 citations


Journal ArticleDOI
TL;DR: An algorithmic model for wireless ad hoc and sensor networks that aims to be sufficiently close to reality as to represent practical realworld networks while at the same time being concise enough to promote strong theoretical results is studied.
Abstract: In this paper, we study an algorithmic model for wireless ad hoc and sensor networks that aims to be sufficiently close to reality as to represent practical real-world networks while at the same time being concise enough to promote strong theoretical results. The quasi unit disk graph model contains all edges shorter than a parameter d between 0 and 1 and no edges longer than 1. We show that--in comparison to the cost known for unit disk graphs--the complexity results of geographic routing in this model contain the additional factor 1/d². We prove that in quasi unit disk graphs flooding is an asymptotically message-optimal routing technique, we provide a geographic routing algorithm that is most efficient in dense networks, and we show that classic geographic routing is possible with the same asymptotic performance guarantees as for unit disk graphs if d ≥ 1/√2.

156 citations


Journal ArticleDOI
TL;DR: This paper investigates mathematical programming models for supporting the decisions on where to install new base stations and how to select their configuration so as to find a trade-off between maximizing coverage and minimizing costs, and proposes a Tabu Search algorithm which provides good solutions within a reasonable computing time.
Abstract: Radio planning and coverage optimization are critical issues for service providers and vendors that are deploying third generation mobile networks and need to control coverage as well as the huge costs involved. Due to the peculiarities of the Code Division Multiple Access (CDMA) scheme used in 3G cellular systems like UMTS and CDMA2000, network planning cannot be based only on signal predictions, and the approach relying on classical set covering formulations adopted for second generation systems is not appropriate. In this paper we investigate mathematical programming models for supporting the decisions on where to install new base stations and how to select their configuration (antenna height and tilt, sector orientations, maximum emission power, pilot signal, etc.) so as to find a trade-off between maximizing coverage and minimizing costs. The overall model takes into account signal-quality constraints in both uplink and downlink directions, as well as the power control mechanism and the pilot signal. Since even small and simplified instances of this NP-hard problem are beyond the reach of state-of-the-art techniques for mixed integer programming, we propose a Tabu Search algorithm which provides good solutions within a reasonable computing time. Computational results obtained for realistic instances, generated according to classical propagation models, with different traffic scenarios (voice and data) are reported and discussed.

129 citations


Journal ArticleDOI
TL;DR: An analytical performance model for a network in which the sensors are at the tips of a star topology, and the sensors need to transmit their measurements to the hub node so that certain objectives for packet delay and packet discard are met is provided.
Abstract: One class of applications envisaged for the IEEE 802.15.4 LR-WPAN (low data rate--wireless personal area network) standard is wireless sensor networks for monitoring and control applications. In this paper we provide an analytical performance model for a network in which the sensors are at the tips of a star topology, and the sensors need to transmit their measurements to the hub node so that certain objectives for packet delay and packet discard are met. We first carry out a saturation throughput analysis of the system; i.e., it is assumed that each sensor has an infinite backlog of packets and the throughput of the system is sought. After a careful analysis of the CSMA/CA MAC that is employed in the standard, and after making a certain decoupling approximation, we identify an embedded Markov renewal process, whose analysis yields a fixed point equation, from whose solution the saturation throughput can be calculated. We validate our model against ns2 simulations (using an IEEE 802.15.4 module developed by Zheng [14]). We find that with the default back-off parameters the saturation throughput decreases sharply with increasing number of nodes. We use our analytical model to study the problem and we propose alternative back-off parameters that prevent the drop in throughput. We then show how the saturation analysis can be used to obtain an analytical model for the finite arrival rate case. This finite load model captures very well the qualitative behavior of the system, and also provides a good approximation to the packet discard probability, and the throughput. For the default parameters, the finite load throughput is found to first increase and then decrease with increasing load. We find that for typical performance objectives (mean delay and packet discard) the packet discard probability would constrain the system capacity. 
Finally, we show how to derive a node lifetime analysis using various rates and probabilities obtained from our performance analysis model.

114 citations


Journal ArticleDOI
TL;DR: This paper proposes a very simple Cross-Layer Energy Manager (XEM) that dynamically tunes its energy-saving strategy depending on the application behavior and key network parameters and reduces the energy consumption of an additional 20–96% with respect to the standard PSM.
Abstract: Nowadays Wi-Fi is the most mature technology for wireless Internet access. Despite the large (and ever increasing) diffusion of Wi-Fi hotspots, energy limitations of mobile devices are still an issue. To deal with this, the 802.11 standard includes a Power-Saving Mode (PSM), but not much attention has been devoted by the research community to understanding its performance in depth. This paper helps fill that gap. We focus on a typical Wi-Fi hotspot scenario and assess the dependence of PSM behavior on several key parameters such as the packet loss probability, the Round Trip Time, and the number of users within the hotspot. We show that during traffic bursts PSM saves up to 90% of the energy spent when no energy management is used, while introducing only a limited additional delay. Unfortunately, in the case of long inactivity periods between bursts, PSM is not the optimal solution for energy management. We thus propose a very simple Cross-Layer Energy Manager (XEM) that dynamically tunes its energy-saving strategy depending on the application behavior and key network parameters. XEM does not require any modification to the applications or to the 802.11 standard, and can thus be easily integrated into current Wi-Fi devices. Depending on the network traffic pattern, XEM reduces energy consumption by an additional 20-96% with respect to the standard PSM.

86 citations


Journal ArticleDOI
TL;DR: Two decentralized dynamic power control algorithms are obtained: primal and dual power update, and their global stability is established utilizing both classical Lyapunov theory and the passivity framework using a Lagrangian relaxation approach.
Abstract: We study power control in multicell CDMA wireless networks as a team optimization problem where each mobile attains at the minimum its individual fixed target SIR level and beyond that optimizes its transmission power level according to its individual preferences. We derive conditions under which the power control problem admits a unique feasible solution. Using a Lagrangian relaxation approach similar to [10] we obtain two decentralized dynamic power control algorithms: primal and dual power update, and establish their global stability utilizing both classical Lyapunov theory and the passivity framework [14]. We show that the robustness results of passivity studies [8, 9] as well as most of the stability and robustness analyses in the literature [10] are applicable to the power control problem considered. In addition, some of the basic principles of call admission control are investigated from the perspective of the model adopted in this paper. We illustrate the proposed power control schemes through simulations.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed approach outperforms the IEEE 802.11 power saving mechanism in terms of throughput and the amount of energy consumed.
Abstract: This paper presents an optimization of the power saving mechanism in the Distributed Coordination Function (DCF) in an Independent Basic Service Set (IBSS) of the IEEE 802.11 standard. In the power saving mode specified for DCF, time is divided into so-called beacon intervals. At the start of each beacon interval, each node in the power saving mode periodically wakes up for a duration called the ATIM window. Nodes are required to be synchronized to ensure that all nodes wake up at the same time. During the ATIM window, the nodes exchange control packets to determine whether they need to stay awake for the rest of the beacon interval. The size of the ATIM window has a significant impact on the energy saving and throughput achieved by the nodes. This paper proposes an adaptive mechanism to dynamically choose a suitable ATIM window size. We also allow nodes to stay awake for only a fraction of the beacon interval following the ATIM window. In contrast, the IEEE 802.11 DCF mode requires nodes to stay awake either for the entire beacon interval following the ATIM window or not at all. Simulation results show that the proposed approach outperforms the IEEE 802.11 power saving mechanism in terms of throughput and the amount of energy consumed.

Journal ArticleDOI
TL;DR: Simulation results show that LARDAR has lower routing cost and collision than other protocols, and guarantees that the areas of route rediscovery will never exceed twice the entire network.
Abstract: One possible way to assist routing in a Mobile Ad Hoc Network (MANET) is to use geographical location information provided by positioning devices such as the Global Positioning System (GPS). Instead of blindly searching for a route in the entire network, a position-based routing protocol uses the location information of mobile nodes to confine the route search to a smaller estimated zone. The smaller the search space, the less routing overhead and broadcast-storm traffic occur. In this paper, we propose a location-based routing protocol called LARDAR. Three important characteristics of our protocol improve its performance. First, we use the location information of the destination node to predict a smaller triangular or rectangular request zone that covers the destination's last known position. The smaller route discovery space reduces route request traffic and the probability of collision. Second, to adapt the precision of the estimated request zone and reduce the search range, we apply a dynamic request-zone adaptation technique that lets intermediate nodes use the destination's location information to redefine a more precise request zone. Finally, an increasing-exclusive search approach redoes route discovery with a progressively increasing search angle when a previous attempt fails. This progressively enlarged, exclusive search helps reduce routing overhead and guarantees that the area of route rediscovery never exceeds twice the entire network. Simulation results show that LARDAR has lower routing cost and fewer collisions than other protocols.

Journal ArticleDOI
TL;DR: This paper proposes and analyze an incremental localized version of the Broadcast Incremental Power protocol, and provides experimental results showing that this new protocol obtains very good results for low densities, and is almost as efficient as BIP for higher densities.
Abstract: We investigate broadcasting and energy preservation in ad hoc networks. One of the best-known algorithms, the Broadcast Incremental Power (BIP) protocol, constructs an efficient spanning tree rooted at a given node. It offers very good results in terms of energy savings, but its computation is centralized, which is a real problem in ad hoc networks. Distributed versions have been proposed, but they require a huge transmission overhead for information exchange. Other localized protocols have been proposed, but none of them has ever reached the performance of BIP. In this paper, we propose and analyze an incremental localized version of this protocol. In our method, the packet is sent from node to node based on local BIP trees computed by each node in the broadcasting chain. Local trees are constructed within the k-hop neighborhood of nodes, based on information provided by previous nodes, so that a global broadcasting structure is incrementally built as the message propagates through the network. Only the source node computes an initially empty tree to initiate the process. Discussion and results are provided in which we argue that k = 2 is the best compromise for efficiency. We also discuss potential conflicts that can arise from the incremental process. We finally provide experimental results showing that this new protocol obtains very good results at low densities, and is almost as efficient as BIP at higher densities.
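
For reference, the centralized BIP baseline that the localized version approximates can be sketched in a few lines (an illustrative O(n³) implementation; power to reach distance d is taken as d**alpha):

```python
import math

def bip_tree(nodes, source=0, alpha=2):
    """Centralized Broadcast Incremental Power (BIP) sketch: grow a
    broadcast tree from `source`, each step adding the uncovered node
    that is cheapest in *incremental* power, i.e. how much some
    covered node must raise its transmit power (distance**alpha)
    beyond its current level to also reach the new node."""
    power = {i: 0.0 for i in range(len(nodes))}   # current tx power per node
    covered = {source}
    parent = {}
    while len(covered) < len(nodes):
        best = None
        for u in covered:
            for v in range(len(nodes)):
                if v in covered:
                    continue
                need = math.dist(nodes[u], nodes[v]) ** alpha
                inc = need - power[u]             # extra power u would spend
                if best is None or inc < best[0]:
                    best = (inc, u, v, need)
        inc, u, v, need = best
        power[u] = max(power[u], need)
        covered.add(v)
        parent[v] = u
    return parent, sum(power.values())

# Three collinear nodes: relaying through the middle node (total power 2)
# beats the source broadcasting directly to the far node (power 4).
nodes = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
parent, total = bip_tree(nodes)
print(parent, total)  # {1: 0, 2: 1} 2.0
```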

Journal ArticleDOI
TL;DR: The results show that the exploitation of second order cost information in SARA substantially increases the goodness of the selected paths with respect to fully localized greedy routing.
Abstract: The main goal of this paper is to provide routing-table-free online algorithms for wireless sensor networks (WSNs) to select cost (e.g., node residual energies) and delay efficient paths. As basic information to drive the routing process, both node costs and hop count distances are considered. Particular emphasis is given to greedy routing schemes, due to their suitability for resource constrained and highly dynamic networks. For what concerns greedy forwarding, we present the Statistically Assisted Routing Algorithm (SARA), where forwarding decisions are driven by statistical information on the costs of the nodes within coverage and in the second order neighborhood. By analysis, we prove that an optimal online policy exists, we derive its form and we exploit it as the core of SARA. Besides greedy techniques, sub-optimal algorithms where node costs can be partially propagated through the network are also presented. These techniques are based on real time learning LRTA algorithms which, through an initial exploratory phase, converge to quasi globally optimal paths. All the proposed schemes are then compared by simulation against globally optimal solutions, discussing the involved trade-offs and possible performance gains. The results show that the exploitation of second order cost information in SARA substantially increases the goodness of the selected paths with respect to fully localized greedy routing. Finally, the path quality can be further increased by LRTA schemes, whose convergence can be considerably enhanced by properly setting real time search parameters. However, these solutions fail in highly dynamic scenarios as they are unable to adapt the search process to time varying costs.

Journal ArticleDOI
TL;DR: This work considers the problem of transmission scheduling of data over a wireless fading channel with hard deadline constraints and obtains optimal solutions, based on dynamic programming (DP), and tractable approximate heuristics in both cases.
Abstract: We consider the problem of transmission scheduling of data over a wireless fading channel with hard deadline constraints. Our system consists of N users, each with a fixed amount of data that must be served by a common deadline. Given that, for each user, the channel fade state determines the throughput per unit of energy expended, our objective is to minimize the overall expected energy consumption while satisfying the deadline constraint. We consider both a linear and a strictly convex rate-power curve and obtain optimal solutions, based on dynamic programming (DP), and tractable approximate heuristics in both cases. For the special non-fading channel case with convex rate-power curve, an optimal solution is obtained based on the Shortest Path formulation. In the case of a linear rate-power curve, our DP solution has a nice "threshold" form; while for the convex rate-power curve we are able to obtain a heuristic algorithm with comparable performance with that of the optimal scheduling scheme.
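
The intuition behind the convex, non-fading case is Jensen's inequality: with a strictly convex rate-power curve, spreading the bits evenly over the available slots minimizes energy, and any burstier schedule of the same total costs more. A toy check with an illustrative r² power curve (not the paper's fading DP):

```python
def energy(schedule, power_of_rate=lambda r: r ** 2):
    """Total energy of a per-slot rate schedule under a strictly
    convex rate-power curve (r**2 here is just an illustrative
    choice). By Jensen's inequality, serving B bits over T slots at
    the constant rate B/T minimizes this sum."""
    return sum(power_of_rate(r) for r in schedule)

B, T = 12.0, 4
even = [B / T] * T              # 3, 3, 3, 3
bursty = [6.0, 6.0, 0.0, 0.0]   # same 12 bits, front-loaded
print(energy(even), energy(bursty))  # 36.0 72.0: the even schedule wins
```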

Journal ArticleDOI
TL;DR: Thorough empirical evaluation using the ns2 simulator with up to 675 mobile nodes shows that Octopus, a fault-tolerant and efficient position-based routing protocol, achieves excellent fault-tolerance at a modest overhead: when all nodes intermittently disconnect and reconnect, Octopus achieves the same high reliability as when all nodes are constantly up.
Abstract: Mobile ad-hoc networks (MANETs) are failure-prone environments; it is common for mobile wireless nodes to intermittently disconnect from the network, e.g., due to signal blockage. This paper focuses on withstanding such failures in large MANETs: we present Octopus, a fault-tolerant and efficient position-based routing protocol. Fault-tolerance is achieved by employing redundancy, i.e., storing the location of each node at many other nodes, and by keeping frequently refreshed soft state. At the same time, Octopus achieves a low location update overhead by employing a novel aggregation technique, whereby a single packet updates the location of many nodes at many other nodes. Octopus is highly scalable: for a fixed node density, the number of location update packets sent does not grow with the network size. And when the density increases, the overhead drops. Thorough empirical evaluation using the ns2 simulator with up to 675 mobile nodes shows that Octopus achieves excellent fault-tolerance at a modest overhead: when all nodes intermittently disconnect and reconnect, Octopus achieves the same high reliability as when all nodes are constantly up.

Journal ArticleDOI
TL;DR: The Minimum Flow Maximum Residual (MFMR) routing algorithm over the Routing Set boundaries is proposed in order to better utilize the capacity of the system by distributing the load over the shortest path alternatives of the System.
Abstract: Satellite networks are used as backup networks to terrestrial communication systems. In this work, we seek a routing strategy over dynamic satellite systems that better utilizes the capacity of the network. Satellite networks are not affected by natural disasters, so they can be used widely during and after disasters. The Minimum Flow Maximum Residual (MFMR) routing algorithm over Routing Set boundaries is proposed in order to better utilize the capacity of the system by distributing the load over its shortest-path alternatives. We model the satellite network as having finitely many states and formulate the problem using the Finite State Automaton concept along with an earth-fixed cell strategy on a virtual satellite network model. The routing problem in satellite networks has previously been studied in the literature, and it is conjectured to be NP-hard. The online and offline problems are stated and the MFMR algorithm is described in detail. The algorithm is compared with alternatives by simulating the network in Opnet Modeler. Finally, a performance analysis of different scenarios is given.

Journal ArticleDOI
TL;DR: The results show that the two-hop co-channel separations often assumed for sensor and ad hoc networks are not sufficient to guarantee communications and provide theoretical basis for channel spatial reuse and medium access control for WSN and also serve as a guideline for how channel assignment algorithms should allocate channels.
Abstract: Wireless sensor networks (WSN) are formed by network-enabled sensors spatially randomly distributed over an area. Because the number of nodes in the WSNs is usually large, channel reuse must be applied, keeping co-channel nodes sufficiently separated geographically to achieve satisfactory SIR level. The most efficient channel reuse configuration for WSN has been determined and the worst-interference scenario has been identified. For this channel reuse pattern and worst-case scenario, the minimum co-channel separation distance consistent with an SIR level constraint is derived. Our results show that the two-hop co-channel separations often assumed for sensor and ad hoc networks are not sufficient to guarantee communications. Minimum co-channel separation curves given various parameters are also presented. The results in this paper provide theoretical basis for channel spatial reuse and medium access control for WSNs and also serve as a guideline for how channel assignment algorithms should allocate channels. Furthermore, because the derived co-channel separation is a function of the sensor transmission radius, it also provides a connection between network data transport capacity planning and network topology control which is administered by varying transmission powers.
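
The headline point above, that two-hop co-channel separation is insufficient, can be sanity-checked with a drastically simplified single-interferer model (not the paper's full worst-case derivation): the receiver sits one transmission radius R from its sender, and a co-channel transmitter placed sep_hops·R from the sender is only (sep_hops − 1)·R from the receiver.

```python
import math

def worst_case_sir_db(sep_hops, gamma=4.0):
    """Back-of-the-envelope SIR for a single co-channel interferer
    under path-loss exponent gamma: with the desired link at distance
    R and the interferer at (sep_hops - 1)*R from the receiver,
    SIR = ((sep_hops - 1) * R / R) ** gamma, independent of R.
    This toy model is an assumption of the sketch, not the paper's
    worst-interference scenario."""
    return 10 * math.log10((sep_hops - 1) ** gamma)

print(round(worst_case_sir_db(2), 1))  # 0.0 dB: two-hop separation gives no margin at all
print(round(worst_case_sir_db(4), 1))  # 19.1 dB at four-hop separation
```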

Journal ArticleDOI
TL;DR: In this article, the authors describe possible denial-of-service attacks on access points in infrastructure wireless networks using the 802.11b protocol; only commodity hardware and software components are needed to carry out such attacks.
Abstract: We describe possible denial of service attacks on access points in infrastructure wireless networks using the 802.11b protocol. To carry out such attacks, only commodity hardware and software components are required.

Journal ArticleDOI
TL;DR: To consider the path of a mobile entity which includes turns, this work develops a new mobicast routing protocol, called the variant-egg-based mobicast (VE-mobicast) routing protocol, by utilizing the adaptive variant-egg shape of the forwarding zone to achieve high predictive accuracy.
Abstract: In this paper, we present a new "spatiotemporal multicast", called a "mobicast", protocol for supporting applications which require spatiotemporal coordination in sensornets. The spatiotemporal character of a mobicast is to forward a mobicast message to all sensor nodes that will be present at time t in some geographic zone (called the forwarding zone) Z, where both the location and shape of the forwarding zone are a function of time over some interval (tstart, tend). The mobicast is constructed of a series of forwarding zones over different intervals (tstart, tend), and only sensor nodes located in the forwarding zone in the time interval (tstart, tend) should be awake in order to save power and extend the network lifetime. Existing protocols for a spatiotemporal variant of a multicast system were designed to support a forwarding zone that moves at a constant velocity, v, in sensornets. To consider the path of a mobile entity which includes turns, this work mainly develops a new mobicast routing protocol, called the variant-egg-based mobicast (VE-mobicast) routing protocol, by utilizing the adaptive variant-egg shape of the forwarding zone to achieve high predictive accuracy. To illustrate the performance achievement, a mathematical analysis is conducted and simulation results are examined.

Journal ArticleDOI
TL;DR: A concatenation algorithm which groups IP layer packets prior to transmission, called PAC-IP, enables packet-based fairness in medium access and includes a QoS support module handling delay-sensitive traffic demands.
Abstract: Wireless local area networks experience performance degradation in the presence of small packets. The main reason for this is the large overhead added at the physical and link layers. This paper proposes a concatenation algorithm, called PAC-IP, which groups IP layer packets prior to transmission. As a result, the overhead added at the physical and link layers is shared among the grouped packets. Along with the performance improvement, PAC-IP enables packet-based fairness in medium access and includes a QoS support module handling delay-sensitive traffic demands. The performance of the proposed algorithm is evaluated through both simulations and an experimental WLAN testbed environment covering the single-hop and the widespread infrastructure network scenarios. The obtained results underline significant performance enhancement under different operating scenarios and channel conditions.
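A minimal sketch of the concatenation idea, assuming greedy FIFO grouping and a hypothetical 2-byte per-packet subheader (the paper's exact frame format is not specified here):

```python
from collections import deque

def concatenate(queue, mtu, per_pkt_hdr=2):
    """Greedily pack queued IP packets into one link-layer frame so the
    PHY/MAC overhead is paid once per frame instead of once per packet.
    per_pkt_hdr is a hypothetical per-packet subheader that lets the
    receiver delimit the concatenated packets."""
    frame, size = [], 0
    while queue and size + len(queue[0]) + per_pkt_hdr <= mtu:
        pkt = queue.popleft()
        frame.append(pkt)
        size += len(pkt) + per_pkt_hdr
    return frame

# 30 small 100-byte packets, 2304-byte MTU: 22 packets share one
# frame's worth of PHY/link overhead instead of paying it 22 times.
queue = deque(bytes(100) for _ in range(30))
frame = concatenate(queue, mtu=2304)
```

With per-packet overhead of dozens of bytes at the physical and link layers, amortizing it over ~20 small packets is where the reported gains come from.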

Journal ArticleDOI
TL;DR: A model where users communicate over a set of parallel multi-access fading channels, as in an orthogonal frequency division multiple access (OFDMA) system, is considered, where an optimal policy is characterized which maximizes the system throughput and also gives a simpler sub-optimal policy.
Abstract: In this paper we develop distributed approaches for power allocation and scheduling in wireless access networks. We consider a model where users communicate over a set of parallel multi-access fading channels, as in an orthogonal frequency division multiple access (OFDMA) system. At each time, each user must decide which channels to transmit on and how to allocate its power over these channels. We give distributed power allocation and scheduling policies, where each user's actions depend only on knowledge of their own channel gains. Assuming a collision model for each channel, we characterize an optimal policy which maximizes the system throughput and also give a simpler sub-optimal policy. Both policies are shown to have the optimal scaling behavior in several asymptotic regimes.
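One way to picture such a distributed policy (a simplified sketch, not the paper's optimal policy): each user looks only at its own channel gains, picks its best channel, and transmits only when that gain clears a threshold; under the collision model a channel carries data only if exactly one user chose it.

```python
from collections import Counter

def threshold_policy(gains, tau):
    """Each user transmits on its best channel only when that channel's
    gain is at least tau; decisions use only the user's own gains."""
    choices = {}
    for user, g in gains.items():
        best = max(range(len(g)), key=lambda c: g[c])
        if g[best] >= tau:
            choices[user] = best
    return choices

def successful_users(choices):
    """Collision model: a transmission succeeds only if its channel
    was chosen by exactly one user."""
    load = Counter(choices.values())
    return {u for u, c in choices.items() if load[c] == 1}

# User 'c' stays silent (its gains are below the threshold), so
# 'a' and 'b' each get a channel to themselves and both succeed.
gains = {'a': [0.9, 0.1], 'b': [0.2, 0.8], 'c': [0.05, 0.1]}
ok = successful_users(threshold_policy(gains, tau=0.5))
```

The threshold plays the role of backing off users with poor channels, which is what lets purely local decisions avoid most collisions.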

Journal ArticleDOI
TL;DR: Two new heuristics are proposed which exploit a novel characterization of optimal solutions for the special case of two channels and data items of uniform lengths, called Greedy+ and Dlinear, which have been tested on benchmarks whose popularities are characterized by Zipf distributions.
Abstract: The problem of data broadcasting over multiple channels consists in partitioning data among channels, depending on data popularities, and then cyclically transmitting them over each channel so that the average waiting time of the clients is minimized. Such a problem is known to be solvable in polynomial time for uniform length data items, while it is computationally intractable for non-uniform length data items. In this paper, two new heuristics are proposed which exploit a novel characterization of optimal solutions for the special case of two channels and data items of uniform lengths. Sub-optimal solutions for the most general case of an arbitrary number of channels and data items of non-uniform lengths are provided. The first heuristic, called Greedy+, combines the novel characterization with the known greedy approach, while the second heuristic, called Dlinear, combines the same characterization with the dynamic programming technique. Such heuristics have been tested on benchmarks whose popularities are characterized by Zipf distributions, as well as on a wider set of benchmarks. The experimental tests reveal that Dlinear finds optimal solutions almost always, with good running times. Greedy+ is faster and scales well when the input parameters change, while providing solutions which are close to the optimum.
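For two channels and uniform-length items, a characterization of the kind mentioned above says an optimal solution splits the items, sorted by popularity, into two consecutive groups. A toy version under a simplified cost model (the average wait on a channel cycling through k unit-length items is k/2):

```python
def best_two_channel_split(probs):
    """Return (average wait, size of first group) for the best split of
    items, sorted by decreasing popularity, into two consecutive groups.
    A channel holding k unit-length items imposes an average wait of k/2
    on requests for its items."""
    probs = sorted(probs, reverse=True)
    n = len(probs)
    best_cost, best_s = float('inf'), 1
    for s in range(1, n):  # first s items on channel 1, rest on channel 2
        cost = sum(probs[:s]) * s / 2 + sum(probs[s:]) * (n - s) / 2
        if cost < best_cost:
            best_cost, best_s = cost, s
    return best_cost, best_s

# Skewed (Zipf-like) popularities reward giving the hot items a
# short, fast-cycling channel of their own.
cost, s = best_two_channel_split([0.1, 0.5, 0.25, 0.15])
```

Scanning all consecutive splits is linear in the number of items once they are sorted, which is the intuition behind combining this characterization with greedy and dynamic-programming schemes for more channels.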

Journal ArticleDOI
TL;DR: This study presents a novel power saving mechanism, called PIANO (paging via another radio), for the integration of heterogeneous wireless networks, and further applies the proposed methods to implement a cellular/VoWLAN dual-mode system.
Abstract: The integration of cellular and VoIP over WLAN (VoWLAN) systems has recently attracted considerable interest from both academia and industry. A cellular/VoWLAN dual-mode system enables users to access a low-cost VoIP service in a WLAN hotspot and switch to a wide-area cellular system where no WLAN is available. Unfortunately, cellular/VoWLAN dual-mode mobiles suffer from a power consumption problem that has become one of the major concerns for commercial deployment of the dual-mode service. In this study, we present a novel power saving mechanism, called PIANO (paging via another radio), for the integration of heterogeneous wireless networks, and further apply the proposed methods to implement a cellular/VoWLAN dual-mode system. Based on the proposed mechanisms, a dual-mode mobile can completely switch off its WLAN interface, leaving only the cellular interface awake to listen for paging messages. When a mobile receives a paging message on its cellular interface, it wakes up the WLAN interface and responds to connection requests via the WLAN. Therefore, a dual-mode mobile reduces power consumption by turning off the WLAN interface while idle, and can still receive VoWLAN services. Measurement results based on the prototype system demonstrate that the proposed methods significantly extend the standby hours of a dual-mode mobile.

Journal ArticleDOI
TL;DR: This paper defines routing optimality using different metrics such as path length, energy consumption along the path, and energy aware load balancing among the nodes, and proposes a framework of Self-Healing and Optimizing Routing Techniques (SHORT) for mobile ad hoc networks.
Abstract: On demand routing protocols provide scalable and cost-effective solutions for packet routing in mobile wireless ad hoc networks. The paths generated by these protocols may deviate far from the optimal because of the lack of knowledge about the global topology and the mobility of nodes. Routing optimality affects network performance and energy consumption, especially when the load is high. In this paper, we define routing optimality using different metrics such as path length, energy consumption along the path, and energy aware load balancing among the nodes. We then propose a framework of Self-Healing and Optimizing Routing Techniques (SHORT) for mobile ad hoc networks. While using SHORT, all the neighboring nodes monitor the route and try to optimize it if and when a better local subpath is available. Thus SHORT enhances performance in terms of bandwidth and latency without incurring any significant additional cost. In addition, SHORT can also be used to determine paths that result in low energy consumption or to optimize the residual battery power. Thus, we have detailed two broad classes of SHORT algorithms: Path-Aware SHORT and Energy-Aware SHORT. Finally, we evaluate SHORT using the ns-2 simulator. The results demonstrate that the performance of existing routing schemes can be significantly improved using the proposed SHORT algorithms.
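The path-aware flavor of local optimization can be pictured as a shortcut check: whenever a node on the route has a direct link to a farther-downstream node, the hops in between are bypassed. A sketch under assumed representations (a route as a list of node ids, neighbors as adjacency sets), not the protocol's actual message exchange:

```python
def shorten_route(route, neighbors):
    """Walk the route; from each node, jump directly to the farthest
    downstream node that is already a one-hop neighbor, dropping the
    intermediate hops."""
    i, out = 0, []
    while i < len(route):
        out.append(route[i])
        i = max((k for k in range(i + 1, len(route))
                 if route[k] in neighbors[route[i]]),
                default=i + 1)
    return out

# Node mobility has made 'c' a direct neighbor of 'a', so the stale
# hop through 'b' is bypassed.
neighbors = {'a': {'b', 'c'}, 'b': {'a', 'c'},
             'c': {'a', 'b', 'd'}, 'd': {'c'}}
shortcut = shorten_route(['a', 'b', 'c', 'd'], neighbors)
```

In the actual protocol this knowledge comes from overhearing packets in promiscuous mode rather than from a global adjacency table; the sketch only shows the optimization step itself.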

Journal ArticleDOI
TL;DR: This paper considers the case where the primary license holder is a GSM-based cellular carrier and shows that the proposed sharing scheme works well even with simple admission control and primitive frequency assignment algorithms, and that imprecise location information does not significantly undermine the performance of the scheme.
Abstract: Most wireless systems receive a license that gives them exclusive access to a block of spectrum. Exclusivity guarantees adequate quality of service, but it also leads to inefficient use of spectrum. Even when the license holder is idle, no other device can use the spectrum. This paper explores an alternative paradigm for secondary access to spectrum, where a secondary device can transmit when and only when the primary license holder grants permission. In this spectrum usage paradigm, each secondary device makes a request for temporary access to spectrum by providing a primary license holder with information such as its required bandwidth, its required signal to interference ratio, its transmit power, and its location, which is essential for a primary license holder in making an admission decision. This explicit coordination makes it possible to protect the quality of service of both primary and secondary, while gaining the efficiency of spectrum sharing. In this paper, we consider the case where the primary license holder is a GSM-based cellular carrier. We show that our proposed sharing scheme works well even with simple admission control and primitive frequency assignment algorithms. Moreover, imprecise location information does not significantly undermine the performance of our scheme. We also demonstrate that our scheme is attractive to a license holder by showing that a cellular carrier can profit from offering a secondary device access to spectrum at a price lower than it would normally charge a cellular call.

Journal ArticleDOI
TL;DR: This paper considers the problem of minimizing pilot power subject to a coverage constraint, and presents a linear-integer mathematical formulation for the problem, which is able to find near-optimal solutions with a reasonable amount of computing effort for large networks.
Abstract: Pilot power management is an important issue for efficient resource utilization in WCDMA networks. In this paper, we consider the problem of minimizing pilot power subject to a coverage constraint. The constraint can be used to model various levels of coverage requirement, among which full coverage is a special case. The pilot power minimization problem is NP-hard, as it generalizes the set covering problem. Our solution approach for this problem consists of mathematical programming models and methods. We present a linear-integer mathematical formulation for the problem. To solve the problem for large-scale networks, we propose a column generation method embedded into an iterative rounding procedure. We apply the proposed method to a range of test networks originated from realistic network planning scenarios, and compare the results to those obtained by two ad hoc approaches. The numerical experiments show that our algorithm is able to find near-optimal solutions with a reasonable amount of computing effort for large networks. Moreover, optimized pilot power considerably outperforms the ad hoc approaches, demonstrating that efficient pilot power management is an important component of radio resource optimization. As another part of our numerical study, we examine the trade-off between service coverage and pilot power consumption.
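Since the problem generalizes set covering, a plain greedy set-cover pass gives a rough baseline to contrast with the column-generation method above (a sketch only; the candidates here are hypothetical (pilot power, covered area bins) options, one per cell and power level):

```python
def greedy_pilot_cover(candidates, bins):
    """Pick (power, coverage) options until every bin is covered,
    each time taking the option with the most newly covered bins
    per unit of pilot power."""
    uncovered = set(bins)
    chosen, total_power = [], 0.0
    while uncovered:
        power, cov = max(candidates,
                         key=lambda c: len(uncovered & c[1]) / c[0])
        gain = uncovered & cov
        if not gain:          # remaining bins cannot be covered
            break
        chosen.append((power, cov))
        total_power += power
        uncovered -= gain
    return chosen, total_power, uncovered

# Two cheap options together (2.5 power units) beat the single
# expensive one (3.0) for covering all four bins.
candidates = [(1.0, {1, 2}), (1.5, {2, 3, 4}), (3.0, {1, 2, 3, 4})]
chosen, total, left = greedy_pilot_cover(candidates, bins={1, 2, 3, 4})
```

Greedy covering of this kind carries the classical logarithmic approximation guarantee, which is why the paper's LP-based method is needed to get near-optimal pilot power on large networks.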

Journal ArticleDOI
TL;DR: This paper introduces a flexible progressive coding framework for 3D meshes, which can be adapted to the different conditions imposed by wired and wireless channels at the bitstream level by avoiding the computationally complex steps of transcoding between networks.
Abstract: With the evolution of mobile networks and the popularization of mobile devices, the demand for multimedia services and 3D graphics applications on resource-limited devices continues to grow. Most work on multimedia transmission focuses on bit errors and packet losses due to the fading channel environment of a wireless network, and error-resilient multimedia that can adapt to varying wireless conditions is a significant research topic. Current solutions for transmitting multimedia across different networks include some type of transcoder, where the source is partially or fully decoded and re-encoded to suit the network conditions. This paper introduces a flexible progressive coding framework for 3D meshes, which can be adapted at the bitstream level to the different conditions imposed by wired and wireless channels, thereby avoiding the computationally complex transcoding steps that could deteriorate decoded model quality. The framework also allows graceful degradation of model quality when the network conditions are poor due to congestion or deep fades.

Journal ArticleDOI
TL;DR: An analytical model is proposed to address the issue of how often batch rekeying should be performed and it is demonstrated that an optimal rekey interval exists for each scheme.
Abstract: Advances in wireless communications and mobile computing have led to the emergence of group communications and applications over wireless. In many of these group interactions, new members can join and current members can leave at any time, and existing members must communicate securely to achieve application-specific missions or network-specific functionality. Since wireless networks are resource-constrained, a key challenge is to provide secure and efficient group communication mechanisms that satisfy application requirements while minimizing the communication cost. Instead of individual rekeying, i.e., performing a rekey operation right after each join or leave request, periodic batch rekeying has been proposed to alleviate rekeying overhead in resource-constrained wireless networks. In this paper, we propose an analytical model to address the issue of how often batch rekeying should be performed. We propose threshold-based batch rekeying schemes and demonstrate that an optimal rekey interval exists for each scheme. We further compare these schemes to identify the best scheme that can minimize the communication cost of rekeying while satisfying application requirements when given a set of parameter values characterizing the operational and environmental conditions of the system. In a highly dynamic wireless environment in which the system parameter values change at runtime, our work may be used to adapt the rekeying interval accordingly.
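The existence of an optimal rekey interval can be illustrated with a toy cost model (an assumption for illustration, not the paper's analytical model): a fixed rekeying cost amortized over the interval, plus an exposure cost from departed-but-not-yet-rekeyed members that grows linearly within it.

```python
import math

def optimal_rekey_interval(rekey_cost, exposure_rate):
    """Per-unit-time cost  C(T) = rekey_cost / T + exposure_rate * T / 2
    is convex in T and minimized at  T* = sqrt(2 * rekey_cost / exposure_rate)."""
    return math.sqrt(2.0 * rekey_cost / exposure_rate)

def cost(T, rekey_cost, exposure_rate):
    """Total cost rate for rekeying every T time units."""
    return rekey_cost / T + exposure_rate * T / 2.0

T_star = optimal_rekey_interval(8.0, 4.0)  # -> 2.0
```

Rekeying too often wastes communication on key updates; rekeying too rarely lengthens the vulnerability window. Any model with these two opposing terms yields an interior optimum, which is the qualitative result the paper establishes for its threshold-based schemes.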