scispace - formally typeset

Showing papers in "Center for Embedded Network Sensing in 2004"


Journal Article
TL;DR: This work proposes an algorithm based on artificial potential fields which is distributed, scalable and does not require a prior map of the environment to maximize the area coverage of a mobile sensor network.
Abstract: We consider the problem of self-deployment of a mobile sensor network. We are interested in a deployment strategy that maximizes the area coverage of the network with the constraint that each of the nodes has at least K neighbors, where K is a user-specified parameter. We propose an algorithm based on artificial potential fields which is distributed, scalable and does not require a prior map of the environment. Simulations establish that the resulting networks have the required degree with a high probability, are well connected and achieve good coverage. We present analytical results for the coverage achievable by uniform random and symmetrically tiled network configurations and use these to evaluate the performance of our algorithm.
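The abstract's potential-field idea can be sketched in a few lines: neighbors within communication range repel one another to spread coverage, while a node holding fewer than K neighbors is attracted back toward the ones it has. This is a hypothetical toy (the update rule, constants, and synchronous stepping are illustrative assumptions), not the paper's actual controller.

```python
import math

def potential_step(positions, k=3, comm_range=2.0, gain=0.01):
    """One synchronous update of the artificial potential field: each node
    moves along the net force from its neighbors. Neighbors repel (spreading
    coverage), but a node with fewer than k neighbors is attracted back
    toward them instead, preserving the required degree. All constants are
    illustrative."""
    new_positions = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        neighbors = []
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if 1e-9 < d < comm_range:
                neighbors.append((xj, yj, d))
        attract = len(neighbors) < k   # too few neighbors: pull inward
        for xj, yj, d in neighbors:
            mag = (1.0 / d ** 2) * (-1.0 if attract else 1.0)
            fx += mag * (xi - xj) / d  # repulsive force points away from j
            fy += mag * (yi - yj) / d
        new_positions.append((xi + gain * fx, yi + gain * fy))
    return new_positions
```

With four tightly clustered nodes and k=3, every node keeps its required degree, so repulsion dominates and the cluster spreads out.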

521 citations


Journal Article
TL;DR: In this paper, the authors exploit mobility to develop a fluid infrastructure: mobile components are deliberately built into the system infrastructure for enabling specific functionality that is very hard to achieve using other methods.
Abstract: Computer networks have historically considered support for mobile devices as an extra overhead to be borne by the system. Recently however, researchers have proposed methods by which the network can take advantage of mobile components. We exploit mobility to develop a fluid infrastructure: mobile components are deliberately built into the system infrastructure for enabling specific functionality that is very hard to achieve using other methods. Built-in intelligence helps our system adapt to run time dynamics when pursuing pre-defined performance objectives. Our approach yields significant advantages for energy constrained systems, sparsely deployed networks, delay tolerant networks, and in security sensitive situations. We first show why our approach is advantageous in terms of network lifetime and data fidelity. Second, we present adaptive algorithms that are used to control mobility. Third, we design the communication protocol supporting a fluid infrastructure and long sleep durations on energy-constrained devices. Our algorithms are not based on abstract radio range models or idealized unobstructed environments but founded on real world behavior of wireless devices. We implement a prototype system in which infrastructure components move autonomously to carry out important networking tasks. The prototype is used to validate and evaluate our suggested mobility control methods.

373 citations


Journal Article
TL;DR: A harvesting theory for determining performance in energy harvesting systems and a localized algorithm for increasing the performance of a distributed system by adapting the process scheduling to the spatio-temporal characteristics of the environmental energy in the distributed system.
Abstract: Performance Aware Tasking for Environmentally Powered Sensor Networks. Aman Kansal, Dunny Potter and Mani B. Srivastava, Department of Electrical Engineering, University of California, Los Angeles, CA, USA. {kansal,dpotter,mbs}@ee.ucla.edu. The use of environmental energy is now emerging as a feasible energy source for embedded and wireless computing systems, such as sensor networks, where manual recharging or replacement of batteries is not practical. However, energy supply from environmental sources is highly variable with time. Further, for a distributed system, the energy available at its various locations will be different. These variations strongly influence the way in which environmental energy is used. We present a harvesting theory for determining performance in such systems. First, we present a model for characterizing environmental sources. Second, we state and prove two harvesting theorems that help determine the sustainable performance level from a particular source. This theory leads to practical techniques for scheduling processes in energy harvesting systems. Third, we present our implementation of a real embedded system that runs on solar energy and uses our harvesting techniques. The system adjusts its performance level in response to available resources. Fourth, we propose a localized algorithm for increasing the performance of a distributed system by adapting the process scheduling to the spatio-temporal characteristics of the environmental energy in the distributed system. While our theoretical intuition is based on certain abstractions, all the scheduling methods we present are motivated solely by the experimental behavior and resource constraints of practical sensor networking systems.
INTRODUCTION. Several prototypes and research efforts have demonstrated the usefulness of sensor networks [1, 2, 3] for a wide variety of applications, spanning from defense [4] and education [5, 6] to science [7, 8] and arts and entertainment [9]. However, energy supply still remains one of the open challenges in such systems, because unfettered deployment rules out traditional wall socket supplies, and batteries with acceptable form factor and cost constraints do not yield the lifetimes desired by most applications. One method to improve the battery lifetime of such systems is to supplement the battery supply with environmental energy. Several technologies exist to extract energy from the environment, such as solar, thermal, optical, kinetic, and vibrational energy [10, 11, 12, 13, 14, 15]. However, system-level methods to efficiently exploit these resources for optimal performance are lacking. Sensor networks are expected to be deployed for several mission critical tasks and operate unattended for extended durations. This makes performance awareness crucial. Environmental sources are highly variable. A key concern then is ensuring a desired level of performance even as the source varies. In distributed systems, not only does the energy source vary in time, but the energy available at different locations, and thus at different nodes of the sensor network, also differs. Energy consumption at different nodes may not be uniform either. In this situation, the performance can be improved by scheduling tasks according to the spatio-temporal characteristics of energy availability. The problem, then, is to find scheduling mechanisms which can adapt the performance to the available energy profile. We address the problems mentioned above, both analytically and in experiments on our custom designed harvesting hardware.
Categories and Subject Descriptors: C.2.4 [Computer Systems Organization]: Computer Communication Networks—Distributed Systems; C.4 [Computer Systems Organization]: Performance of Systems; G.m [Mathematics of Computing]: Miscellaneous. General Terms: Performance, Theory, Algorithms, Experimentation. Keywords: energy harvesting, process scheduling, performance guarantees.

1.1 Contributions of this paper. This paper makes several contributions towards achieving sustainable performance in systems using energy harvesting facilities. First, we develop an analytically tractable characterization for energy sources that can be used for deriving performance bounds. This is a very flexible model which can handle a wide variety of energy sources, ranging from natural ones like solar energy to robotic energy delivery. Next, we propose a harvesting theory that helps to determine performance levels given the energy source classification. This theory aims to answer questions such as the following. What is the minimum latency for a particular application in a given energy environment? What performance level can a system achieve if it must survive eternally (until its hardware gets outdated or damaged) from environmental sources? What additional resources may be needed if a particular quality of service must be achieved?

SIGMETRICS/Performance'04, June 12-16, 2004, New York, NY, USA.
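The practical upshot of the harvesting theorems, matching consumption to what the source can sustain, can be sketched as a duty-cycling loop. Everything here (the smoothing constant, the power numbers, the function name) is an illustrative assumption, not the paper's model.

```python
def sustainable_duty_cycle(harvest_trace, p_active, p_sleep,
                           battery=50.0, capacity=100.0):
    """Pick each slot's duty cycle so expected power draw tracks an
    exponentially weighted estimate of harvested power; the battery absorbs
    the mismatch. A sketch of performance scaling under a variable source,
    not the paper's actual harvesting theorems."""
    est = harvest_trace[0]
    duties, level = [], battery
    for h in harvest_trace:
        est = 0.7 * est + 0.3 * h            # smooth the variable source
        # duty d solves d*p_active + (1-d)*p_sleep = est, clamped to [0, 1]
        d = (est - p_sleep) / (p_active - p_sleep)
        d = max(0.0, min(1.0, d))
        draw = d * p_active + (1 - d) * p_sleep
        level = max(0.0, min(capacity, level + h - draw))
        duties.append(d)
    return duties, level
```

Under a constant source the loop settles at the duty cycle whose draw exactly equals the harvested power, so the battery level stays flat, which is the sustainable operating point the theory targets.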

251 citations


Journal Article
TL;DR: In this article, the authors develop tools specifically to support heterogeneous systems, as well as to support the measurement and visualization of operational systems that is critical to addressing the inevitable problems that crop up in deployment.
Abstract: Recently deployed Wireless Sensor Network systems (WSNs) are increasingly following heterogeneous designs, incorporating a mixture of elements with widely varying capabilities. The development and deployment of WSNs rides heavily on the availability of simulation, emulation, visualization and analysis support. In this work, we develop tools specifically to support heterogeneous systems, as well as to support the measurement and visualization of operational systems that is critical to addressing the inevitable problems that crop up in deployment. Our system differs from related systems in three key ways: in its ability to simulate and emulate heterogeneous systems in their entirety, in its extensive support for integration and interoperability between motes and microservers, and in its unified set of tools that capture, view, and analyze real time debugging information from simulations, emulations, and deployments.

193 citations


Journal Article
TL;DR: This work presents an approach, inspired by bacterial chemotaxis, for robots to navigate to sources using gradient measurements and a simple actuation strategy (biasing a random walk), and shows how such an approach could be used for boundary finding.
Abstract: Locating gradient sources and tracking them over time has important applications to environmental monitoring and studies of the ecosystem. We present an approach, inspired by bacterial chemotaxis, for robots to navigate to sources using gradient measurements and a simple actuation strategy (biasing a random walk). Extensive simulations show the efficacy of the approach in varied conditions including multiple sources, dissipative sources, and noisy sensors and actuators. We also show how such an approach could be used for boundary finding. We validate our approach by testing it on a small robot (the robomote) in a phototaxis experiment. A comparison of our approach with gradient descent shows that while gradient descent is faster, our approach is better suited for boundary coverage, and performs better in the presence of multiple and dissipative sources.
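The run-and-tumble strategy described above can be sketched directly: keep the current heading while the sensed signal improves, and re-orient at random when it worsens. The function name, step length, and tumble rule details are assumptions for illustration, not the paper's controller.

```python
import random, math

def chemotaxis_walk(field, start, steps=800, step_len=0.5, seed=1):
    """Biased random walk toward a scalar source: hold the heading while the
    reading improves (run), re-orient uniformly at random when it worsens
    (tumble). `field` maps (x, y) to signal strength; a sketch of the
    bacterial strategy, not the paper's exact algorithm."""
    rng = random.Random(seed)
    x, y = start
    heading = rng.uniform(0, 2 * math.pi)
    last = field(x, y)
    for _ in range(steps):
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        now = field(x, y)
        if now < last:                      # reading got worse: tumble
            heading = rng.uniform(0, 2 * math.pi)
        last = now
    return x, y
```

Running it against a light source at (10, 10), modeled as negative distance, mimics the phototaxis experiment: the walk drifts toward the brightest point despite taking random headings.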

175 citations


Journal Article
TL;DR: In this article, the authors describe an embedded networked sensor architecture that merges sensing and articulation with adaptive algorithms that are responsive to both variability in environmental phenomena discovered by the mobile sensors and to discrete events discovered by static sensors.
Abstract: Monitoring of environmental phenomena with embedded networked sensing confronts the challenges of both unpredictable variability in the spatial distribution of phenomena, coupled with demands for a high spatial sampling rate in three dimensions. For example, low distortion mapping of critical solar radiation properties in forest environments may require two-dimensional spatial sampling rates of greater than 10 samples/m² over transects exceeding 1000 m². Clearly, adequate sampling coverage of such a transect requires an impractically large number of sensing nodes. This paper describes a new approach where the deployment of a combination of autonomous-articulated and static sensor nodes enables sufficient spatiotemporal sampling density over large transects to meet a general set of environmental mapping demands. To achieve this we have developed an embedded networked sensor architecture that merges sensing and articulation with adaptive algorithms that are responsive to both variability in environmental phenomena discovered by the mobile sensors and to discrete events discovered by static sensors. We begin by describing the class of important driving applications, the statistical foundations for this new approach, and task allocation. We then describe our experimental implementation of adaptive, event aware, exploration algorithms, which exploit our wireless, articulated sensors operating with deterministic motion over large areas. Results of experimental measurements and the relationship among sampling methods, event arrival rate, and sampling performance are presented.
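The statistical intuition above, sample variable regions more densely, can be sketched with a Neyman-style allocation: a mobile node's sampling budget is split across transect segments in proportion to each segment's estimated standard deviation. The function and its parameters are illustrative assumptions, not the deployed task-allocation algorithm.

```python
import math

def allocate_samples(variances, budget):
    """Split a mobile node's sampling budget across transect segments in
    proportion to each segment's estimated standard deviation, so regions
    with more variable phenomena are sampled more densely (Neyman-style
    allocation; a sketch, not the paper's scheme)."""
    total = sum(math.sqrt(v) for v in variances)
    return [max(1, round(budget * math.sqrt(v) / total)) for v in variances]
```

For example, a segment with 16x the variance of another gets 4x the samples, reflecting the square-root weighting.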

166 citations


Journal Article
TL;DR: In this article, the authors describe an algorithm for robot navigation using a sensor network embedded in the environment, which obviates the need for a map or localization on the part of the robot.
Abstract: We describe an algorithm for robot navigation using a sensor network embedded in the environment. Sensor nodes act as signposts for the robot to follow, thus obviating the need for a map or localization on the part of the robot. Navigation directions are computed within the network (not on the robot) using value iteration. Using small low-power radios, the robot communicates with nodes in the network locally, and makes navigation decisions based on which node it is near. An algorithm based on processing of radio signal strength data was developed so the robot could successfully decide which node neighborhood it belonged to. Extensive experiments with a robot and a sensor network confirm the validity of the approach.
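The value-iteration signpost idea can be illustrated on a small graph with unit hop costs: each node's value converges to its hop distance to the goal, and the robot greedily moves to the neighbor with the smallest value. This sketch computes the values centrally for clarity, whereas the paper computes them within the network itself.

```python
def navigation_field(adjacency, goal):
    """Value iteration over the sensor-network graph with unit edge costs:
    relax each node's value (hop distance to goal) until nothing changes,
    then route the robot to the lowest-valued neighbor. A minimal sketch of
    the signpost idea."""
    INF = float("inf")
    value = {n: INF for n in adjacency}
    value[goal] = 0
    changed = True
    while changed:                       # relax until values are fixed
        changed = False
        for n, nbrs in adjacency.items():
            best = min((value[m] + 1 for m in nbrs), default=INF)
            if best < value[n]:
                value[n] = best
                changed = True
    def next_hop(n):
        # the robot follows the neighbor with the smallest value
        return min(adjacency[n], key=lambda m: value[m])
    return value, next_hop
```

On a four-node graph, the robot at 'a' is directed through 'c' rather than 'b' because 'c' is one hop closer to the goal.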

148 citations


Journal Article
TL;DR: A simple temporal compression scheme designed specifically to be used by Mica motes for the compaction of microclimate data that compresses data up to 20-to-1 while introducing error on the order of the sensor hardware’s specified margin of error.
Abstract: Since the inception of sensor networks, in-network processing has been touted as the enabling technology for long-lived deployments. Radio communication is the overriding consumer of energy in such networks. Therefore, data reduction before transmission, either by compression or feature extraction, will directly and significantly increase network lifetime. In many cases, it is premature to begin implementing feature extraction techniques. Users do not yet understand in what forms interesting data will appear and consequently can’t risk automatically discarding what they presume to be uninteresting. Moreover, computer scientists are only beginning to develop algorithms to collect spatially distributed features in situ. Even for the many applications where all of the data must be transported out of the network, data may be compressed before transport, so long as the chosen compression technique can operate under the stringent resource constraints of low-power nodes and induces only tolerable errors. This paper evaluates a simple temporal compression scheme designed specifically to be used by Mica motes for the compaction of microclimate data. The algorithm makes use of the observation that over a small enough window of time, samples of microclimate data are linear. It finds such windows and generates a series of line segments that accurately represent the data. It compresses data up to 20-to-1 while introducing error on the order of the sensor hardware’s specified margin of error. Furthermore, it is simple, consumes little CPU, and requires very little storage when compared to other compression techniques. This paper describes the technique and results using a dataset from a one-year microclimate deployment.
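A scheme in the same spirit (not the paper's exact algorithm) greedily extends each line segment while every buffered sample stays within an error bound of the segment's chord, emitting only the breakpoints:

```python
def compress(samples, eps):
    """Greedy piecewise-linear compaction: extend the current segment as
    long as every interior sample lies within `eps` of the straight line
    joining the segment's endpoints. Returns (index, value) breakpoints;
    a sketch in the spirit of the paper's scheme."""
    if len(samples) < 3:
        return list(enumerate(samples))
    out = [(0, samples[0])]
    start = 0
    for end in range(2, len(samples)):
        x0, y0 = start, samples[start]
        slope = (samples[end] - y0) / (end - x0)
        # does the chord start..end still fit every interior sample?
        ok = all(abs(y0 + slope * (i - x0) - samples[i]) <= eps
                 for i in range(start + 1, end))
        if not ok:
            out.append((end - 1, samples[end - 1]))
            start = end - 1
    out.append((len(samples) - 1, samples[-1]))
    return out

def decompress(segments, n):
    """Linearly interpolate the breakpoints back to n samples."""
    vals = [0.0] * n
    for (i0, v0), (i1, v1) in zip(segments, segments[1:]):
        for i in range(i0, i1 + 1):
            vals[i] = v0 + (v1 - v0) * (i - i0) / (i1 - i0)
    return vals
```

A ten-sample linear ramp collapses to two breakpoints (a 5-to-1 reduction on even this tiny window), and a triangle wave keeps exactly one breakpoint per slope change.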

135 citations


Journal Article
TL;DR: In this chapter, Elson and Estrin open with a fictional account of the Great Quake of 2053 in Southern California, a quake surpassing magnitude 8 on the Richter scale, to motivate sensor networks as a bridge to the physical world.
Abstract: Chapter 1. SENSOR NETWORKS: A BRIDGE TO THE PHYSICAL WORLD. Jeremy Elson and Deborah Estrin, Center for Embedded Networked Sensing, University of California, Los Angeles, CA 90095. {jelson,destrin}@cs.ucla.edu. The Quake. It was in the early afternoon of an otherwise unremarkable Thursday that the Great Quake of 2053 hit Southern California. The earth began to rupture several miles under the surface of an uninhabited part of the Mojave desert. Decades of pent-up energy were violently released, sending huge shear waves speeding toward greater Los Angeles. Home to some 38 million people, the potential for epic disaster might be only seconds away. The quake was enormous, even by California standards, as its magnitude surpassed 8 on the Richter scale. Residents had long ago understood such an event was possible. This area was well known for its seismic activity, and had been heavily instrumented by scientists for more than a century. The earliest data collection had been primitive, of course. In the 1960’s, seismometers were isolated devices, each simply recording observations to tape for months at a time. Once or twice a year, seismologists of that era would spend weeks traveling to each site, collecting the full tapes and replacing them with fresh blanks. If they were lucky, each tape would contain data from the entire six months since their last visit. Sometimes, they would instead discover only a few hours of data had been recorded before the device had malfunctioned. But, despite the process being so impractical, the data gathered were invaluable—revealing more about the Earth’s internal structure than had ever been known before. By the turn of the century, the situation had improved considerably. Many seismometers were connected to the Internet and could deliver a continuous stream of data to scientists, nearly in real-time. Experts

122 citations



Journal Article
TL;DR: An on-line algorithm for simultaneous localization and mapping is proposed that differentiates static and dynamic parts of the environment and represents them appropriately on the map, showing how this differentiation and SLAM can be mutually beneficial.
Abstract: We propose an on-line algorithm for simultaneous localization and mapping of dynamic environments. Our algorithm is capable of differentiating static and dynamic parts of the environment and representing them appropriately on the map. Our approach is based on maintaining two occupancy grids. One grid models the static parts of the environment, and the other models the dynamic parts of the environment. The union of the two provides a complete description of the environment over time. We also maintain a third map containing information about static landmarks detected in the environment. These landmarks provide the robot with localization. Results in simulation and with physical robots show the efficiency of our approach and show how the differentiation of dynamic and static entities in the environment and SLAM can be mutually beneficial.
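The two-grid idea can be caricatured with per-cell occupancy frequencies over a stack of scans: persistently occupied cells land in the static grid, intermittently occupied cells in the dynamic grid. The thresholds and the batch formulation are illustrative assumptions; the paper maintains its grids on-line.

```python
def classify_cells(scans, static_frac=0.9, dynamic_frac=0.1):
    """Classify grid cells from a stack of binary occupancy scans: cells
    occupied in at least `static_frac` of scans go to the static grid,
    cells occupied only occasionally (at least `dynamic_frac`) go to the
    dynamic grid. A toy sketch of the paper's two-grid representation;
    thresholds are illustrative."""
    n = len(scans)
    rows, cols = len(scans[0]), len(scans[0][0])
    static_grid = [[0] * cols for _ in range(rows)]
    dynamic_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            frac = sum(s[r][c] for s in scans) / n
            if frac >= static_frac:
                static_grid[r][c] = 1       # e.g. a wall
            elif frac >= dynamic_frac:
                dynamic_grid[r][c] = 1      # e.g. a passing person
    return static_grid, dynamic_grid
```

The union of the two grids then describes the environment over time, as in the abstract.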

Journal Article
TL;DR: In this paper, the authors present a systematic exploration of the tradeoffs of combinations of link-layer retransmission, blacklisting, and end-to-end routing metrics, quantifying the effects of each of these three techniques.
Abstract: Unpredictable and heterogeneous links in a wireless sensor network require techniques to avoid low delivery rate and high delivery cost. Three commonly used techniques to help discover high quality paths include (1) link-layer retransmission, (2) blacklisting bad links, and (3) end-to-end routing metrics. Using simulation and testbed experiments, we present the first systematic exploration of the tradeoffs of combinations of these approaches, quantifying the effects of each of these three techniques. We identify several key results: One is that per-hop retransmission (ARQ) is a necessary addition to any other mechanism if reliable data delivery is a goal. Additional interactions between the services are more subtle. First, in a multihop network, either blacklisting or reliability metrics like ETX can provide consistent high-reliability paths when added to ARQ. Second, at higher deployment densities, blacklisting has a lower routing overhead than ETX. But at lower densities, blacklisting becomes less stable as the network partitions. These results are consistent across both simulation and testbed experiments. We conclude that ETX with retransmissions is the best choice in general, but that blacklisting may be worth considering at higher densities, either with or without ETX.
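ETX, one of the routing metrics compared above, has a standard definition: the expected number of transmissions (including retransmissions) needed to deliver a packet over a link with forward delivery ratio d_f and reverse (ACK) delivery ratio d_r, with a path's metric being the sum over its links. A minimal sketch:

```python
def etx(df, dr):
    """Expected transmission count for one link: a packet succeeds when
    both the forward data frame (prob. df) and the reverse ACK (prob. dr)
    get through, so ETX = 1 / (df * dr). Assumes df * dr > 0."""
    return 1.0 / (df * dr)

def path_etx(links):
    """Route metric: sum of per-link ETX values; the route with the
    smallest total expected transmissions wins."""
    return sum(etx(df, dr) for df, dr in links)
```

This shows why ETX pairs naturally with ARQ: two decent hops (0.9 delivery each way) can beat a single lossy hop, because the lossy link's retransmissions cost more than the extra hop.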

Journal Article
TL;DR: This paper describes an ongoing project investigating embedded networked sensing for structural health monitoring applications with the vision of many low-power sensor “motes” embedded throughout the structure with a smaller number of nodes that can provide local excitation.
Abstract: This paper describes an ongoing project investigating embedded networked sensing for structural health monitoring applications. The vision is of many low-power sensor “motes” embedded throughout the structure with a smaller number of nodes that can provide local excitation. The challenge is to develop both the networking algorithms to reliably communicate within the network, and distributed algorithms to monitor the state of the structure. A wireless data acquisition network is described, including the methods of storing and transmitting the data. A damage detection scheme is described that uses extremely low transmission bandwidth, and is shown to be effective in detecting damage in a simulated structure. Finally, a large-scale structural testbed that is being used for this project is described. The outcome of this work-in-progress is expected to be strong recommendations and algorithms for distributed wireless sensor/actuator structural health monitoring networks.

Journal Article
TL;DR: Voronoi scoping, a distributed algorithm to constrain the dissemination of messages from different sinks, is proposed; it has the property that a query originated by a given sink is forwarded only to the nodes for which that sink is the closest (under the chosen metric).
Abstract: In a data-gathering sensor network with multiple sinks, it is often unnecessary and redundant for each sink to flood the entire network with its queries. We propose a simple scoping scheme with the property that a query originated at a sink will be forwarded only to the subset of nodes for whom that sink is the closest sink.
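The scoping rule reduces to a simple predicate once each node knows its hop distance to every sink (obtained, one might assume, from per-sink beacon floods): forward a sink's query only if that sink is the node's closest. A sketch:

```python
def voronoi_scope(hop_counts):
    """Assign each node to its closest sink (ties broken by sink id), so
    that no sink's query floods the whole network. `hop_counts[node][sink]`
    would come from sink beacon floods in a real deployment; this is a
    sketch of the scoping rule, computed centrally for clarity."""
    return {node: min(dists, key=lambda s: (dists[s], s))
            for node, dists in hop_counts.items()}

def should_forward(node, sink, assignment):
    """A node relays a sink's query only inside that sink's scope."""
    return assignment[node] == sink
```

Each node forwards queries for exactly one sink, so the scopes partition the network into Voronoi-like cells around the sinks.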

Journal Article
TL;DR: Results show the effective and robust operation of the proposed algorithms and their implementations on a real-time acoustical wireless testbed and a novel virtual array model applicable to the AML-DOA estimation method is proposed for reverberant scenarios.
Abstract: Wireless sensor networks have been attracting increasing research interest given the recent advances in microelectronics, array processing, and wireless networking. Consisting of a large collection of small, wireless, low-cost, integrated sensing, computing, and communicating nodes capable of performing various demanding collaborative space-time processing tasks, wireless sensor network technology poses various unique design challenges, particularly for real-time operation. In this paper, we review the Approximate Maximum-Likelihood (AML) method for source localization and direction-of-arrival (DOA) estimation. Then, we consider the use of the least-squares (LS) method applied to DOA bearing crossings to perform source localization. A novel virtual array model applicable to the AML-DOA estimation method is proposed for reverberant scenarios. Details on the wireless acoustical testbed are given. We consider the use of Compaq iPAQ 3760s, which are handheld, battery-powered devices normally meant to be used as personal organizers (PDAs), as sensor nodes. The iPAQs provide a reasonable balance of cost, availability, and functionality. Each has a built-in StrongARM processor, microphone, codec for acoustic acquisition and processing, and a PCMCIA bus for external IEEE 802.11b wireless cards for radio communication. The iPAQs form a distributed sensor network to perform real-time acoustical beamforming. Computational times and associated real-time processing tasks are described. Field measured results for linear, triangular, and square subarrays in free-space and reverberant scenarios are presented. These results show the effective and robust operation of the proposed algorithms and their implementations on a real-time acoustical wireless testbed.
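The least-squares bearing-crossing step can be sketched in closed form: each sensor at p_i with estimated DOA theta_i contributes the line constraint n_i . x = n_i . p_i, with unit normal n_i = (-sin theta_i, cos theta_i), and stacking these constraints gives 2x2 normal equations. This is a generic LS formulation consistent with the abstract's description, not code from the paper.

```python
import math

def ls_bearing_fix(sensors, bearings):
    """Least-squares crossing of DOA bearings: each sensor position p with
    bearing theta defines a line whose unit normal is (-sin t, cos t);
    accumulate A = sum(n n^T) and b = sum(n (n . p)), then solve the 2x2
    normal equations by Cramer's rule. A sketch of the LS step applied
    after per-node DOA estimation."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), th in zip(sensors, bearings):
        nx, ny = -math.sin(th), math.cos(th)
        d = nx * px + ny * py            # line offset n . p
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * d;   b2 += ny * d
    det = a11 * a22 - a12 * a12          # assumes bearings are not all parallel
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)
```

With noise-free bearings from three sensors toward a common source, the LS fix recovers the source exactly; with noisy bearings it returns the point minimizing the summed squared distances to the bearing lines.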

Journal Article
TL;DR: Wisden, as mentioned in this paper, is a wireless sensor network for structural data acquisition which uses a hybrid of end-to-end and hop-by-hop recovery, and low-overhead data time-stamping that does not require global clock synchronization.
Abstract: A Wireless Sensor Network For Structural Monitoring. Ning Xu, Sumit Rangwala, Alan Broad, Krishna Kant Chintalapudi, Deepak Ganesan, Ramesh Govindan, Deborah Estrin. Computer Science Department, University of Southern California {nxu, srangwal, chintala, ramesh}@usc.edu; Computer Science Department, University of California, Los Angeles {deepak, destrin}@cs.ucla.edu; Crossbow Technology Inc. abroad@xbow.com. Structural monitoring—the collection and analysis of structural response to ambient or forced excitation—is an important application of networked embedded sensing with significant commercial potential. The first generation of sensor networks for structural monitoring are likely to be data acquisition systems that collect data at a single node for centralized processing. In this paper, we discuss the design and evaluation of a wireless sensor network system (called Wisden) for structural data acquisition. Wisden incorporates two novel mechanisms: reliable data transport using a hybrid of end-to-end and hop-by-hop recovery, and low-overhead data time-stamping that does not require global clock synchronization. We also study the applicability of wavelet-based compression techniques to overcome the bandwidth limitations imposed by low-power wireless radios. We describe our implementation of these mechanisms on the Mica-2 motes and evaluate the performance of our implementation. We also report experiences from deploying Wisden on a large structure. General Terms: Reliability, Design. Keywords: Sensor Network, Structural Health Monitoring, Wisden. This material is based upon work supported by the National Science Foundation under Grants No. 0121778 (Center for Embedded Networked Systems) and 0325875 (ITR: Structural Health Monitoring Using Local Excitations and Dense Sensing). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF). SenSys'04, November 3-5, 2004, Baltimore, Maryland, USA.

INTRODUCTION. Structural health monitoring systems seek to detect and localize damage in buildings, bridges, ships, and aircraft. The design of such systems is an active and well-established area of research. When built, such systems would infer the existence and location of damage by measuring structural response to ambient or forced excitation. Wireless sensor networks are a natural candidate for structural health monitoring systems, since they enable dense in-situ sensing and simplify deployment of instrumentation. However, techniques for damage assessment are quite complex, and practical wireless networked structural health monitoring systems are several years away. Wireless sensor networks do have a more immediate role to play in structural monitoring. Advances in structural engineering depend upon the availability of many detailed data sets that record the response of different structures to ambient vibration (caused, for example, by earthquakes, wind, or passing vehicles) or forced excitation (delivered by large special-purpose shakers). Currently, structural engineers use wired or single-hop wireless data acquisition systems to acquire such data sets. These systems consist of a device that collects and stores vibration measurements from a small number of sensors. However, power and wiring constraints imposed by these systems can increase the cost of acquiring these data sets, impose significant setup delays, and limit the number and location of sensors. Wireless sensor networks can help address these issues. In this paper, we describe the design of Wisden, a wireless sensor network system for structural-response data acquisition. Wisden continuously collects structural response data from a multi-hop network of sensor nodes, and displays and stores the data at a base station. Wisden can be thought of as a first-generation wireless structural monitoring system; it incorporates some in-network processing, but later systems will move more processing into the network once the precise structural monitoring applications are better understood. In being essentially a data collection system, Wisden resembles other early sensor networks such as those being deployed for habitat monitoring [10]. While the architecture of Wisden is simple—a base station centrally collecting data—its design is a bit more challenging than that of other sensor networks built to date. Structural response data is generated at higher data rates than most sensing applications

Journal Article
TL;DR: An optimal admission control policy and a post-admission policing mechanism at the node-level are presented, which can achieve up to 48% increase in user rewards compared to the absence of energy management, for a variety of application mixes.
Abstract: Node-level Energy Management for Sensor Networks in the Presence of Multiple Applications. Athanassios Boulis and Mani B. Srivastava, Networked and Embedded Systems Laboratory (NESL), EE Department, University of California at Los Angeles; email: {boulis, mbs}@ee.ucla.edu. Energy-related research in wireless ad hoc sensor networks (WASNs) focuses on energy-saving techniques at the application, protocol, service, or hardware level. Little has been done to manage the finite amount of energy available to a given (possibly optimally designed) set of applications, protocols, and hardware. Given multiple candidate applications (i.e., distributed algorithms in a WASN) with different energy costs and different user rewards, how does one manage a finite energy amount? Where does one spend energy so as to maximize the useful work done (i.e., maximize user rewards)? We formulate the problem at the node level, using system-level hints from the applications. To tackle the central problem we first identify the energy consumption patterns of applications in WASNs, propose ways to measure the energy consumption of individual applications in real time, and solve the problem of estimating the extra energy consumption that a new application brings to a set of executing applications. With these tools at our disposal, and by properly abstracting the problem, we present an optimal admission control policy and a post-admission policing mechanism at the node level.
The admission policy can achieve up to a 48% increase in user rewards, compared to the absence of energy management, for a variety of application mixes. 1. Introduction. Wireless ad-hoc sensor networks (WASNs) are the main representative of pervasive computing in large-scale physical environments. Networks of large numbers of cheap, small-form-factor wireless devices, embedded in the physical world, may be used for applications such as premise security and surveillance, environmental habitat monitoring, condition-based maintenance, battlefields, etc. Most research work in WASNs revolves around energy, focusing predominantly on energy-saving problems. The energy source in each sensor node is limited to the initial battery charge, and replenishing the battery charge is infeasible or so costly that it outweighs any benefits drawn from the WASN. Sustainable energy sources, such as solar power, ambient vibrations, and acoustic signals, have yet to be proven realizable and efficient in today's WASNs [1][5][8]. Consequently, most research efforts use energy consumption as one of their efficiency metrics: applications, protocols, services, and hardware are designed to reduce energy consumption while maintaining their functionality. While these efforts are absolutely necessary for the evolution of WASNs into something more than academic research, they are not the only viewpoint on energy-related issues in WASNs. WASNs are currently envisioned to have long life spans, servicing many transient users with different needs. This vision is supported by a series of frameworks that try to make WASNs dynamically programmable and generally open to transient users [4][6][9]. Multiple different requests for physical information, arriving at different times, translate into multiple different applications running concurrently in the network, while requests for execution of new applications are constantly received. This setting poses the question: given a finite energy amount and an unknown sequence of application requests (chosen from a set of candidate applications with known occurrence probabilities, energy costs, and user rewards/penalties), how does one accept or reject applications into the network in order to maximize overall user rewards? The terms requests for information, application requests, applications, and distributed algorithms are used interchangeably in the text; these are the items handled (i.e., that undergo admission control and policing) in order to maximize rewards for all users. At its core this is an operations research problem (as are so many other problems in engineering); the difficulty lies mainly in the formulation. Do we consider the finite energy amount at the node level or at the system/network level? What is an application's energy cost, and how is it measured? How are user rewards defined? Section 2 argues that a pure system-level approach, although yielding optimal results, is unrealistic, as it requires each application to have full knowledge of every other application in the WASN (which contradicts the notion of transient WASN users) or to pay huge traffic costs. This work was supported in part by the Office of Naval Research under the AINS research program. Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom'03), 0-7695-1895/03 $17.00 © 2003 IEEE.
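The paper's accept/reject question has a natural dynamic-programming reading: with a discrete energy budget and known class probabilities, costs, and rewards, value iteration over (requests remaining, residual energy) yields an optimal admission policy. The sketch below is a hedged illustration of that idea with hypothetical application classes, not the authors' actual formulation:

```python
def optimal_admission(apps, energy, horizon):
    """Value iteration over (requests remaining, residual energy).

    apps: list of (probability, energy_cost, reward) per candidate class.
    Returns the expected-reward table V and an accept/reject policy
    keyed by (time step, residual energy, class index).
    """
    V = [[0.0] * (energy + 1) for _ in range(horizon + 1)]
    policy = {}
    for t in range(horizon - 1, -1, -1):
        for e in range(energy + 1):
            total = 0.0
            for i, (p, c, r) in enumerate(apps):
                reject = V[t + 1][e]
                # accepting spends c energy now and earns r
                accept = r + V[t + 1][e - c] if c <= e else float("-inf")
                policy[(t, e, i)] = accept > reject
                total += p * max(accept, reject)
            V[t][e] = total
    return V, policy
```

Note how the policy can rationally reject a feasible low-reward request in order to keep energy in reserve for a likelier high-reward one, which is exactly the behavior a greedy admit-if-affordable rule misses.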

Journal Article
TL;DR: In this article, the authors proposed a Bayesian method to analyze the lower bound of localization uncertainty in sensor networks, given the location and sensing uncertainty of individual sensors, the method computes the minimum entropy target location distribution estimated by the network of sensors.
Abstract: Localization is a key application for sensor networks. We propose a Bayesian method to analyze the lower bound of localization uncertainty in sensor networks. Given the location and sensing uncertainty of individual sensors, the method computes the minimum-entropy target location distribution estimated by the network of sensors. We define the Bayesian bound (BB) as the covariance of this distribution, which is compared with the Cramér-Rao bound (CRB) through simulations. When the observation uncertainty is Gaussian, the BB equals the CRB. The BB is much simpler to derive than the CRB when sensing models are complex. We also characterize the localization uncertainty attributable to the sensor network topology and the sensor observation type through analysis of the minimum entropy and the CRB. Given the sensor network topology and the sensor observation type, these characteristics can be used to approximately predict where the target can be located relatively accurately.
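The computation behind a Bayesian bound of this kind can be sketched on a grid: form the posterior over target location from noisy observations, then read off its covariance and entropy. This is a minimal 1-D sketch assuming Gaussian range observations (the function name and setup are hypothetical, not the paper's code):

```python
import math

def bayes_bound_1d(sensors, ranges, sigma, grid):
    """Grid posterior over a 1-D target location from noisy range
    readings, plus its variance (the covariance in 1-D) and its
    entropy in nats."""
    post = []
    for x in grid:
        like = 1.0
        for s, r in zip(sensors, ranges):
            d = abs(x - s)  # true range if the target were at x
            like *= math.exp(-((r - d) ** 2) / (2 * sigma ** 2))
        post.append(like)
    z = sum(post)
    post = [p / z for p in post]
    mean = sum(x * p for x, p in zip(grid, post))
    var = sum((x - mean) ** 2 * p for x, p in zip(grid, post))
    ent = -sum(p * math.log(p) for p in post if p > 0)
    return mean, var, ent
```

With two sensors at 0 and 10 observing ranges 3 and 7, the posterior peaks near position 3, and its variance is the kind of quantity the BB captures.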

Journal Article
TL;DR: This paper envisions the use of a new design dimension to enhance the sustainability of sensor networks: controlled mobility, which can alleviate resource limitations and improve system performance by adapting to deployment demands.
Abstract: A key challenge in sensor networks is ensuring the sustainability of the system at the required performance level, in an autonomous manner. Sustainability is a major concern because of severe resource constraints in terms of energy, bandwidth, and sensing capabilities in the system. In this paper, we envision the use of a new design dimension to enhance the sustainability of sensor networks: the use of controlled mobility. We argue that this capability can alleviate resource limitations and improve system performance by adapting to deployment demands. While opportunistic use of external mobility has been considered before, the use of controlled mobility is largely unexplored. We also outline the research issues associated with effectively utilizing this new design dimension. Two system prototypes are described as first steps towards realizing the proposed vision.

Journal Article
TL;DR: In this paper, the authors describe the design and construction of an underwater sensor actuator network to detect extreme temperature gradients, motivated by the fact that regions of sharp temperature change (thermoclines) are a breeding ground for certain marine microorganisms.
Abstract: We describe the design and construction of an underwater sensor actuator network to detect extreme temperature gradients. We are motivated by the fact that regions of sharp temperature change (thermoclines) are a breeding ground for certain marine microorganisms. We present a distributed algorithm using local communication based on binary search to find a thermocline by using a mobile sensor network. Simulations and experiments using a mote test bed demonstrate the validity of this approach. We also discuss the improvement in energy efficiency using a submarine robot as a data mule. Comparisons between experimental data with and without the data mule show that there are considerable energy savings in the sensor network due to the data mule.
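The binary-search idea for locating a thermocline can be sketched simply: halve a depth interval, sample the midpoint, and keep the half with the larger temperature change. This is a hedged single-sensor sketch (the paper's algorithm is distributed across a mobile network; the function and profile here are illustrative only):

```python
def find_thermocline(temp_at, lo, hi, tol=0.1):
    """Binary search for the depth of maximum temperature gradient.
    temp_at(depth) -> temperature; assumes one sharp gradient in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        # The thermocline lies in the half of the interval across which
        # the temperature changes the most.
        if abs(temp_at(lo) - temp_at(mid)) >= abs(temp_at(mid) - temp_at(hi)):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0
```

Each iteration halves the search interval, so a mobile sensor needs only O(log(range/tol)) repositioning moves rather than a full depth sweep.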

Journal Article
TL;DR: In this paper, a reputation-based framework for sensor networks where nodes maintain reputation for other nodes and use it to evaluate their trustworthiness has been proposed, which provides a scalable, diverse and a generalized approach for countering all types of misbehavior resulting from malicious and faulty nodes.
Abstract: The traditional approach to providing network security has been to borrow tools from cryptography and authentication. However, we argue that the conventional view of security based on cryptography alone is not sufficient for the unique characteristics and novel misbehaviors encountered in sensor networks. Fundamental to this is the observation that cryptography cannot prevent malicious or non-malicious insertion of data from internal adversaries or faulty nodes. We believe that, in general, tools from different domains such as economics, statistics, and data analysis will have to be combined with cryptography for the development of trustworthy sensor networks. Following this approach, we propose a reputation-based framework for sensor networks where nodes maintain reputation for other nodes and use it to evaluate their trustworthiness. The framework is modularized; we analyze each building block in detail in this paper. We show that this framework provides a scalable, diverse, and generalized approach for countering all types of misbehavior resulting from malicious and faulty nodes.
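One common way to maintain per-neighbor reputation (assumed here for illustration; the abstract does not specify the exact update rule) is a beta-distribution update over cooperative and non-cooperative observations, with trust taken as the expected value:

```python
class Reputation:
    """Beta-reputation sketch: alpha counts cooperative observations,
    beta counts non-cooperative ones; trust is the posterior mean."""

    def __init__(self):
        self.alpha = 1.0  # uniform prior: one pseudo-observation each way
        self.beta = 1.0

    def observe(self, cooperative):
        if cooperative:
            self.alpha += 1
        else:
            self.beta += 1

    def trust(self):
        return self.alpha / (self.alpha + self.beta)

    def trustworthy(self, threshold=0.5):
        return self.trust() >= threshold
```

A fresh neighbor starts at trust 0.5, and repeated misbehavior drags its trust below any reasonable threshold, after which its reports can be down-weighted or ignored.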

Journal Article
TL;DR: A preliminary design and evaluation of Sympathy, a debugging tool for pre-deployment sensor networks motivated by Ruan and Pai's DeBox system, is presented, along with the idea of correlating seemingly unrelated events, and providing context for these events, in order to track down bugs and their root causes.
Abstract: Sympathy: A Debugging System for Sensor Networks. Nithya Ramanathan, Eddie Kohler, Lewis Girod, and Deborah Estrin, UCLA Center for Embedded Network Sensing; {nithya, kohler, girod, destrin}@cs.ucla.edu. I. INTRODUCTION. Sensor networks, networks of small, resource-constrained wireless devices embedded in a dynamic physical environment, have led to new algorithm, protocol, and operating system designs [1], [2]. Interactions between sensor hardware, protocols, and environmental characteristics are impossible to predict, so sensor network application design is an iterative process between debugging and deployment [3]. Current debugging techniques fall short for systems which contain bugs characteristic of both distributed and embedded systems. Such bugs can be difficult to track because they are often multicausal, non-repeatable, and timing-sensitive, with ephemeral triggers such as race conditions, decisions based on asynchronous changes in distributed state, or interactions with the physical environment. Furthermore, it is a challenge to extract debugging information from a running system without introducing the probing effect (alteration of normal behavior due to instrumentation) or draining excessive energy. This paper presents a preliminary design and evaluation of Sympathy, a debugging tool for pre-deployment sensor networks motivated by Ruan and Pai's DeBox system [4]. Sympathy consists of mechanisms for collecting system performance metrics with minimal memory overhead; mechanisms for recognizing events based on these metrics; and a system for collecting events and their spatio-temporal context. Sympathy introduces the idea of correlating seemingly unrelated events, and providing context for these events, in order to track down bugs and find their root causes.
Using Sympathy we have begun to distill out the important metrics, events, and generic correlators that help find bugs quickly, and to transmit this data in ways that minimize energy consumption and probing effects. This process is ongoing. Our current contribution, then, is a tool that can be used for pre-deployment debugging, and for analysis of the role of a debugging tool in the entire design process. Eventually, Sympathy will be part of a system that can aid in debugging sensor networks both pre- and post-deployment. Below we present a case study that demonstrates our current contributions by showing how Sympathy was used to debug a failure in tiny diffusion. In related work, [5] and [6] address the data collection aspects of post-deployment debugging, but focus on the mechanism to gather statistics instead of their content. Our work is complementary, since Sympathy is so far mostly concerned with content: discovering the most useful metrics to collect. Simulations and visualization tools are also helpful, but do not capture historical context or aid in determining the cause of a failure. While log files can provide context to a failure, they often contain excessive data which can obfuscate important events. Sympathy distinguishes itself from passive data logging approaches by proactively collecting and highlighting potentially relevant events and their context in order to aid in isolating their causes. II. ARCHITECTURE. Sympathy's general architecture is as follows: Sympathy collects metrics from all nodes and watches the metrics for indications of events, which are metric changes that often indicate important changes in application state. On inferring an event, Sympathy: 1) stores all metrics it has collected from the past 200 time units for the node causing the trigger, providing temporal context;
2) stores all metrics it has collected from the past 200 time units for the nodes neighboring the node where the event was detected, providing spatial context; 3) prints event and context information to a log file, which can aid in correlating events; and 4) calls applications interested in the event. The version of Sympathy described here collects four metrics: neighbor lists, link quality, nodes' top two choices for next hop, and associated next-hop path loss. It watches for two types of events based on these metrics, namely missing or isolated nodes and changes in route selection, neighbor lists, or link quality. III. EVALUATION. To demonstrate Sympathy's potential as a debugging tool, we ran it with a nesC implementation of tiny diffusion [7], a routing algorithm based on directed diffusion [8]. In tiny diffusion, nodes periodically flood neighbor beacons (to calculate link quality), neighbor lists and associated link qualities (to identify asymmetric links), and gradients which carry a node's next hop and projected path loss (to determine a node's next hop). We debugged this system pre-deployment, using simulations on a 14-node network that ran for two hours. Our goal was to determine why tiny diffusion had been experiencing loss rates an order of magnitude higher than expected in data delivery to the sink. After the first run, using the events triggered in Sympathy, we saw nodes change their next-hop selection approximately every 170 seconds. Sympathy aided over traditional debugging techniques by highlighting the frequent changes in next-hop selection and providing spatial correlation, which revealed that
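The collect-and-correlate loop described above can be sketched in a few lines: keep a bounded window of recent metric samples per node and, when a watched metric (here, the next hop) changes, return that node's history together with its neighbors' history as temporal and spatial context. All names here are hypothetical, a sketch of the architecture rather than Sympathy's implementation:

```python
from collections import defaultdict, deque

class SympathySketch:
    """Toy version of the collect-and-correlate loop: keep the last
    `window` metric samples per node and, on a next-hop change, return
    that node's history plus its neighbors' (temporal + spatial context)."""

    def __init__(self, window=200):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.last_next_hop = {}

    def record(self, node, metrics, neighbors):
        self.history[node].append(metrics)
        prev = self.last_next_hop.get(node)
        self.last_next_hop[node] = metrics["next_hop"]
        if prev is not None and prev != metrics["next_hop"]:
            # Event detected: gather context for the node and its neighbors.
            context = {n: list(self.history[n]) for n in [node] + list(neighbors)}
            return {"event": "route_change", "node": node, "context": context}
        return None
```

The bounded `deque` mirrors the minimal-memory-overhead goal: old samples fall off automatically, so instrumentation cost stays constant regardless of run length.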

Journal Article
TL;DR: This paper compares two schemes, one that uses an application-level hierarchy (ALH) and another that uses a router-assisted hierarchy (RAH), and finds that the qualitative performance of ALH is comparable to RAH.
Abstract: IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 12, NO. 3, JUNE 2004. A Comparison of Application-Level and Router-Assisted Hierarchical Schemes for Reliable Multicast. Pavlin Radoslavov, Christos Papadopoulos, Member, IEEE, Ramesh Govindan, and Deborah Estrin, Fellow, IEEE. Abstract—One approach to achieving scalability in reliable multicast is to use a hierarchy. A hierarchy can be established at the application level, or by using router-assist. With router-assist we have more fine-grained control over the placement of error-recovery functionality; therefore, a hierarchy produced by assistance from the routers is expected to have better performance. In this paper, we test this hypothesis by comparing two schemes, one that uses an application-level hierarchy (ALH) and another that uses a router-assisted hierarchy (RAH). Contrary to our expectations, we find that the qualitative performance of ALH is comparable to RAH. We do not model the overhead of creating the hierarchy nor the cost of adding router-assist to the network; therefore, our conclusions inform rather than close the debate of which approach is better. Index Terms—Reliable multicast, router-assist for reliable multicast. I. INTRODUCTION. Reliable multicast has received significant attention recently in the research literature [1]–[8]. The key design challenge for reliable multicast is scalable recovery of losses. The two main impediments to scale are implosion and exposure. Implosion occurs when, in the absence of coordination, the loss of a packet triggers simultaneous redundant messages (requests and/or retransmissions) from many receivers. In large multicast groups, these messages may swamp the sender, the network, or even other receivers. Exposure wastes resources by delivering a retransmitted message to receivers which have not experienced loss.
Another challenge that arises in the design of reliable multicast is long recovery latency, which may result from suppression mechanisms introduced to solve the implosion problem. Latency can have a significant effect on application utility and on the amount of buffering required for retransmissions. One popular class of solutions is hierarchical data recovery. In these schemes, participants are organized into a hierarchy. By limiting the scope of recovery data and control messages between parents and children in the hierarchy, both implosion and exposure can be substantially reduced. Hierarchies introduce a latency penalty, but that is proportional to the depth of the hierarchy. The biggest challenge with hierarchical solutions is the construction and maintenance of the hierarchy, especially for dynamic groups. For optimal efficiency, the recovery hierarchy must be congruent with the actual underlying multicast tree. Divergence of these structures can lead to inefficiencies when children select parents who are located downstream in the multicast tree. (Manuscript received April 12, 2002; revised May 28, 2003; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor S. Paul. This work was supported by the National Science Foundation under Cooperative Agreement NCR-9321043. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. P. Radoslavov is with the International Computer Science Institute, Berkeley, CA 94704 USA, e-mail: pavlin@icsi.berkeley.edu. C. Papadopoulos and R. Govindan are with the Computer Science Department, University of Southern California, Los Angeles, CA 90089-0781 USA, e-mail: christos@isi.edu; ramesh@usc.edu. D. Estrin is with the Department of Computer Science, University of California, Los Angeles, CA 90095 USA, e-mail: destrin@cs.ucla.edu. Digital Object Identifier 10.1109/TNET.2004.828950.)
One approach, exemplified by the reliable multicast transport protocol (RMTP) [3], is to use manual configuration or application-level mechanisms to construct and maintain the hierarchy. Manual hierarchy construction techniques rely either on complete or partial (e.g., where the border routers are) knowledge of the topology. Automated hierarchy construction techniques rely on dynamically discovering tree structure, either explicitly by tracing tree paths [6], or implicitly by using techniques based on expanding ring search. Once a hierarchy is formed, children recursively recover losses from their parents in the hierarchy by sending explicit negative acknowledgments. Another approach, exemplified by light-weight multicast services (LMS) [2], proposes to use minimal router support not only to make informed parent/child allocation, but also to adapt the hierarchy under dynamic conditions. PGM [9] is another example of a router-assisted approach. In some of these router-assisted schemes, hierarchy construction is achieved by routers keeping minimal information about parents for downstream receivers, then carefully forwarding loss recovery control and data messages to minimize implosion and exposure. In these schemes, hierarchy construction requires little explicit mechanism at the application level, at the expense of adding router functionality. Because of this, one would expect these router-assisted hierarchies (Section II-B) to differ from the application-level hierarchies (Section II-A) in two ways: 1) router-assisted hierarchies are finer-grained, that is, they have many more "internal nodes" in the hierarchy; and 2) they are more congruent to the underlying multicast tree. It is then natural to ask, as we do in this paper: is the performance of application-level hierarchies qualitatively different than that of router-assisted hierarchies? To our knowledge, this question has not been addressed before.
We study this question by evaluating two specific schemes: LMS and an RMTP-like scheme. (Footnote 1: Congruency is achieved when the virtual hierarchy and the underlying multicast tree coincide.) 1063-6692/04 $20.00 © 2004 IEEE.
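The implosion benefit of scoping recovery to a hierarchy can be made concrete with a toy count (an illustration, not a result from the paper): in flat recovery every loser NACKs the sender directly, while in hierarchical recovery each affected parent forwards at most one aggregated NACK upstream.

```python
def recovery_messages(losers, parents):
    """Messages arriving at the sender after one loss event.

    Flat recovery: every loser NACKs the sender directly.
    Hierarchical recovery: losers NACK their parent, and each affected
    parent forwards a single aggregated NACK toward the sender.
    """
    flat = len(losers)
    hierarchical = len({parents[n] for n in losers})
    return flat, hierarchical
```

With 100 receivers losing a packet under 10 parents, the sender sees 100 messages flat but only 10 hierarchically, which is the implosion reduction the abstract describes (exposure shrinks analogously, since retransmissions stay within affected subtrees).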

Journal Article
TL;DR: The effects of sensor noise and correlation in the sensor readings are explicitly modelled, and the question of how much data should be transmitted from multiple sensors, such that only useful information is exchanged and energy or bandwidth are not wasted on redundant data, is addressed.
Abstract: Sensor networks collect data at multiple distributed nodes and transfer the acquired information to points of interest. The raw data collected by each individual sensor is typically not of interest. Instead, a reduced representation of the measured phenomenon is to be generated. Multiple readings, however, add to the information about the phenomenon by providing its description at multiple points in space for distributed phenomena and multiple perspectives for a localized phenomenon. We also note that sensor readings have noise, and multiple readings can help mitigate the effect of this noise. Thus, while all the sensor readings need not be communicated, enough data must be exchanged to reliably reproduce the phenomenon. Considering the above effects, it becomes important to determine how much data should be transmitted from multiple sensors such that only useful information is exchanged and energy or bandwidth are not wasted on redundant data. We address this question using information theoretic techniques. The effects of sensor noise and correlation in the sensor readings are explicitly modelled.
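The core information-theoretic point, that correlated readings carry less joint information than independent ones, can be illustrated with the standard differential entropy of jointly Gaussian readings, h = ½ ln((2πe)^n det Σ). This is a hedged illustration of the general principle, not the paper's specific model:

```python
import math

def joint_entropy_two_gaussians(sigma, rho):
    """Differential entropy (nats) of two jointly Gaussian sensor
    readings with equal variance sigma^2 and correlation rho:
    h = 0.5 * ln((2*pi*e)^2 * det(Sigma)), det(Sigma) = sigma^4*(1-rho^2)."""
    det = sigma ** 4 * (1 - rho ** 2)
    return 0.5 * math.log((2 * math.pi * math.e) ** 2 * det)
```

As rho grows toward 1, det Σ shrinks and the joint entropy falls, so a second highly correlated sensor adds little new information and transmitting its full reading wastes energy and bandwidth.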

Journal Article
TL;DR: A nonlinear programming-based control algorithm is proposed to optimize irrigation scheduling subject to contaminant transport constraints and a networked sensor array is being designed for deployment at an agricultural research plot.
Abstract: An issue associated with agricultural irrigation using reclaimed wastewater is the potential threat to underlying groundwater quality. A prime example is nitrate, which serves as a fertilizing agent but has the potential to leach into groundwater. In order to balance water reuse and groundwater protection, intelligent irrigation management and monitoring systems are required for such water reuse systems. In this work, a nonlinear programming-based control algorithm is proposed to optimize irrigation scheduling subject to contaminant transport constraints. In support of the algorithmic developments, a networked sensor array is being designed for deployment at an agricultural research plot. This array will supply real-time field information about water infiltration and distribution, nitrate propagation, and heat transport, to the irrigation scheduling algorithm. The control scheme (measurement, decision, and action) will be continuously updated using on-line feedback from sensors. The simulator on which the management algorithm depends is a one-dimensional form of the Richards equation coupled to energy and solute transport mass balances.
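The measure-decide-act loop above, schedule as much irrigation as the crop demands while keeping predicted nitrate leaching under a budget, can be sketched as a toy constrained scheduler. This is a deliberately simplified stand-in (a linear leaching fraction instead of the Richards-equation simulator the paper uses; all names and parameters are hypothetical):

```python
def schedule_irrigation(demand, leach_rate, leach_limit, step=0.1):
    """Toy constrained scheduler: apply the largest irrigation depth per
    period that approaches crop demand while keeping total predicted
    nitrate leaching under leach_limit.

    leach_rate: fraction of each applied depth assumed to leach.
    """
    schedule = []
    leached = 0.0
    for d in demand:
        amount = d
        # back off until the leaching budget is respected
        while amount > 0 and leached + leach_rate * amount > leach_limit:
            amount = round(amount - step, 10)
        schedule.append(max(amount, 0.0))
        leached += leach_rate * max(amount, 0.0)
    return schedule, leached
```

In the real system this decision would be re-solved each cycle from on-line sensor feedback about infiltration and nitrate propagation, rather than from a fixed leaching fraction.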

Journal Article
TL;DR: Energy supply is a major challenge for sensor network sustainability, and several changes are required in system design to support harvested energy, such as the capability to learn environmental energy levels in addition to residual battery charge.
Abstract: Energy supply is a major challenge for sensor network sustainability. A feasible alternative for enabling long-term self-sustained deployments is to supplement or replace the battery supplies with environmentally harvested resources, such as solar power in outdoor environments. However, several changes are required in system design to support harvested energy, such as the capability to learn the environmental energy levels in addition to the residual battery charge. Further, the environmental energy opportunity varies significantly across space and time. The required taskload should be allocated among multiple nodes according to their energy availability, to extract maximum performance from the system.
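The allocation principle stated above, give each node a share of the taskload proportional to its energy availability, can be sketched directly. This is a hedged sketch (the proportional rule and largest-remainder rounding are assumptions for illustration, not the paper's algorithm):

```python
def allocate_tasks(total_tasks, residual, harvest):
    """Split a shared taskload among nodes in proportion to each node's
    available energy (residual battery plus predicted harvest)."""
    avail = [r + h for r, h in zip(residual, harvest)]
    total = sum(avail)
    shares = [total_tasks * a / total for a in avail]
    # round to integers while preserving the total (largest-remainder method)
    base = [int(s) for s in shares]
    rem = total_tasks - sum(base)
    order = sorted(range(len(shares)), key=lambda i: shares[i] - base[i],
                   reverse=True)
    for i in order[:rem]:
        base[i] += 1
    return base
```

A node with good solar exposure thus absorbs more of the workload, letting poorly placed nodes conserve their batteries, which is the spatial-variability point the abstract makes.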

Journal Article
TL;DR: Embedding responsibility for privacy into radio-frequency identification tags and other information technology designed to network the physical world is discussed in this article, where the authors propose a framework for protecting the privacy of those the tags identify.
Abstract: Embed responsibility for privacy into radio-frequency identification tags and other information technology designed to network the physical world.

Journal Article
TL;DR: The design of ad-hoc localization systems that use range together with either precise or imprecise bearing information is examined, along with how deployment density, anchor density, and ranging error impact the performance of such systems.
Abstract: Ad-Hoc Localization Using Ranging and Sectoring. Krishna Kant Chintalapudi, Amit Dhariwal, Ramesh Govindan, Gaurav Sukhatme, Computer Science Department, University of Southern California, Los Angeles, California, USA, 90007. Abstract—Ad-hoc localization systems enable nodes in a sensor network to fix their positions in a global coordinate system using a relatively small number of anchor nodes that know their position through external means (e.g., GPS). Because location information provides context to sensed data, such systems are a critical component of many sensor networks and have therefore received a fair amount of recent attention in the sensor networks literature. The efficacy of these systems is a function of the density of deployment and of anchor nodes, as well as the error in distance estimation (ranging) between nodes. In this paper, we examine how these factors impact the performance of the system. This examination lays the groundwork for the main question we consider in this paper: can the ability to estimate bearing to neighboring nodes greatly increase the performance of ad-hoc localization systems? We discuss the design of ad-hoc localization systems that use range together with either bearing or imprecise bearing (such as sectoring) information, and evaluate these systems using analysis and simulation. I. INTRODUCTION. Sensor network localization has been an active area of research for the last few years. For sensor networks, and more generally for networks of embedded systems, the ability of nodes to determine their position through automatic means is recognized as an essential capability. The community has made great strides in ranging technologies, systems for infrastructure-based localization, and algorithms and techniques for ad-hoc localization (Section II). This last class is the subject of this paper.
In an ad-hoc localization system, nodes determine their position in a common coordinate system using a number of anchor nodes that already know their location (through some external means, such as GPS [1]) in that coordinate system. These systems assume all nodes possess a ranging capability (the ability to estimate distances to other nodes). Using their range estimates, nodes use one of several distributed position fixing techniques to determine their positions in the coordinate system. There are two characteristics that are highly desirable in a distributed ad-hoc localization system; in fact, we assume that these are design requirements for such systems. (Footnote 1: Our focus in this paper is on distributed ad-hoc localization systems. Henceforth, when we use the term "ad-hoc localization system", or simply "localization system", we mean this class of systems.) The first requirement is that the performance of such a system be relatively insensitive to anchor placement, as long as the anchors are not placed in a degenerate configuration. From a sensor network perspective, this is desirable since it may often be difficult to engineer anchor placements in the environments in which these networks are deployed. Another way of saying this is that an ad-hoc localization system permits unplanned anchor placement. A second requirement is that relatively few anchors be necessary for obtaining good localization performance. This requirement is motivated by the fact that in some environments it may be difficult to obtain position estimates through external means (e.g., because GPS signals can be significantly attenuated or obstructed by foliage). We argue that ad-hoc localization systems should work well with an order of magnitude fewer anchors than nodes. This rule of thumb is motivated by a systems argument: if one in two or three nodes is required to be an anchor, it will significantly constrain the deployment of such a system.
We begin this paper (Section III) by evaluating the performance of a range of ad-hoc localization techniques proposed in the literature. The performance of ad-hoc localization depends upon several factors: the accuracy of ranging, the density of node placement, the relative density (fraction) of anchors, as well as the particular position fixing schemes in use. Using both analysis and extensive simulations, we find that ad-hoc localization systems begin to perform acceptably only at node densities well beyond the density required for network connectivity. We find this to be rather pessimistic; being required to deploy more resources to get a component of a system working seems undesirable from an architectural standpoint. Moreover, we argue that this is a fundamental limitation of ad-hoc localization systems that use ranging devices only, rather than a shortcoming that can be remedied by designing better localization schemes. We then consider whether adding the ability to estimate bearing to neighboring nodes can qualitatively improve the performance of ad-hoc localization schemes (Section IV). We show that there exists a highly accurate position fixing scheme that uses both range and bearing information in order to localize nodes, at node densities comparable to that required for network connectivity. (Footnote 2: This is an important and unique contribution of the paper; the only other piece of work that uses bearing information [2] does not consider the estimation of node positions jointly using range and bearing.) This is obviously an idealization, since it is unclear if accurate bearing estimation devices can be built at the form factors and energy levels that sensor network nodes require. What is more feasible, perhaps, is the ability to approximately detect bearing. Guided by this observation, we examine whether devices that enable nodes to place neighbors within sectors can enable acceptable ad-hoc localization performance at node densities that are sufficient for connectivity (Section V).
We show that there exists a simple iterative scheme
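The reason range-plus-bearing is so much more powerful than range alone is geometric: a single anchor's range and bearing fully determine a node's position, whereas ranges alone need three non-degenerate anchors in 2-D. A minimal sketch of this position fix (illustrative only, not the paper's joint estimation scheme):

```python
import math

def fix_position(anchor, rng, bearing):
    """Position fix from one anchor, given the measured range and the
    bearing (radians, from the +x axis at the anchor toward the node)."""
    ax, ay = anchor
    return ax + rng * math.cos(bearing), ay + rng * math.sin(bearing)

def fix_from_many(anchors, rngs, bearings):
    """Average the single-anchor fixes to damp ranging/bearing noise."""
    pts = [fix_position(a, r, b) for a, r, b in zip(anchors, rngs, bearings)]
    n = len(pts)
    return sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n
```

With sectoring, the bearing would be known only to within a sector, so a practical scheme must iterate over coarse bearing constraints rather than plug in an exact angle as above.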

Journal Article
TL;DR: In this article, an overview of energy-centric sensor node design techniques that enable designers to significantly extend system and network lifetime is presented, which can be obtained by eliminating energy inefficiencies from all aspects of the sensor node, ranging from the hardware platform to the operating system, network protocols and application software.
Abstract: The battery driven nature of wireless sensor networks, combined with the need for operational lifetimes of months to years, mandates that energy efficiency be treated as a metric of utmost priority while designing these distributed sensing systems. This chapter presents an overview of energy-centric sensor node design techniques that enable designers to significantly extend system and network lifetime. Such extensions to battery life can only be obtained by eliminating energy inefficiencies from all aspects of the sensor node, ranging from the hardware platform to the operating system, network protocols, and application software.

Journal Article
TL;DR: This work proposes two specific algorithms; the first one follows TCP's congestion avoidance algorithm and adjusts the transmission rate when a collision occurs, while the second one shifts packet transmission times to minimize collisions.
Abstract: Wireless sensor networks are characterized by collections of small, low-power nodes that collect information about the physical world. Concurrent transmissions caused by the well-known hidden terminal problem result in collisions and packet corruption. Since corrupted packets must be retransmitted, collisions add an additional burden to the already energy-constrained system. In this paper, we present an application-based approach to collision avoidance. We propose two specific algorithms; the first one follows TCP's congestion avoidance algorithm and adjusts the transmission rate when a collision occurs, while the second one shifts packet transmission times to minimize collisions. We evaluated both algorithms through simulations, and our results show that our approach can reduce the number of collision-induced retransmissions by a factor of 8 and the energy consumption by up to 50%.
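The first algorithm's TCP-style rule is additive increase, multiplicative decrease (AIMD): grow the transmission rate slowly while packets get through, and halve it when a collision is detected. A hedged sketch of that control loop, with constants chosen only for illustration:

```python
def aimd_rate(events, rate=1.0, max_rate=10.0, increase=0.5):
    """TCP-style rate control for packet transmission: additive increase
    after a collision-free interval, multiplicative decrease on collision.

    events: per-interval booleans, True if a collision occurred.
    Returns the rate after each interval.
    """
    trace = []
    for collided in events:
        if collided:
            rate = max(rate / 2.0, 0.1)   # back off sharply
        else:
            rate = min(rate + increase, max_rate)  # probe gently upward
        trace.append(rate)
    return trace
```

The asymmetry is deliberate: sharp back-off drains contention quickly after a hidden-terminal collision, while the gentle probe keeps throughput from collapsing permanently.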