
Showing papers in "Center for Embedded Network Sensing in 2003"


Journal Article
TL;DR: Zhao and Govindan as mentioned in this paper report a systematic medium-scale (up to sixty nodes) measurement of packet delivery in dense wireless sensor networks across three environments, with implications for the design and evaluation of routing and medium-access protocols.
Abstract: Understanding Packet Delivery Performance in Dense Wireless Sensor Networks. Jerry Zhao and Ramesh Govindan, Computer Science Department, University of Southern California, Los Angeles, CA 90089-0781. ABSTRACT: Wireless sensor networks promise fine-grain monitoring in a wide variety of environments. Many of these environments (e.g., indoor environments or habitats) can be harsh for wireless communication. From a networking perspective, the most basic aspect of wireless communication is the packet delivery performance: the spatio-temporal characteristics of packet loss, and its environmental dependence. These factors will deeply impact the performance of data acquisition from these networks. In this paper, we report on a systematic medium-scale (up to sixty nodes) measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot. Our findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.
Categories and Subject Descriptors: C.2.1 [Network Architecture and Design]: Wireless communication; C.4 [Performance of Systems]: Performance attributes, Measurement techniques. General Terms: Measurement, Experimentation. Keywords: Low power radio, Packet loss, Performance measurement.
1. INTRODUCTION: Wireless communication has the reputation of being notoriously unpredictable. The quality of wireless communication depends on the environment, the part of the frequency spectrum under use, the particular modulation schemes under use, and possibly on the communicating devices themselves. Communication quality can vary dramatically over time, and has been reputed to change with slight spatial displacements. All of these are true to a greater degree for ad-hoc (or infrastructure-less) communication than for wireless communication to a base station. Given this, and the paucity of large-scale deployments, it is perhaps not surprising that there have been no medium to large-scale measurements of ad-hoc wireless systems; one expects measurement studies to reveal high variability in performance, and one suspects that such studies will be non-representative. Wireless sensor networks [5, 7] are predicated on ad-hoc wireless communications. Perhaps more than other ad-hoc wireless systems, these networks can expect highly variable wireless communication. They will be deployed in harsh, inaccessible environments which, almost by definition, will exhibit significant multi-path communication. Many of the current sensor platforms use low-power radios which do not have enough frequency diversity to reject multi-path propagation. Finally, these networks will be fairly densely deployed (on the order of tens of nodes within communication range). Given the potential impact of these networks, and despite the anecdotal evidence of variability in wireless communication, we argue that it is imperative that we get a quantitative understanding of wireless communication in sensor networks, however imperfect. Our paper is a first attempt at this. Using up to 60 Mica motes, we systematically evaluate the most basic aspect of wireless communication in a sensor network: packet delivery. Particularly for energy-constrained networks, packet delivery performance is important, since it translates to network lifetime. Sensor networks are predicated on using low-power RF transceivers in a multi-hop fashion. Multiple short hops can be more energy-efficient than one single hop over a long-range link. Poor cumulative packet delivery performance across multiple hops may degrade performance of data transport and expend significant energy. Depending on the kind of application, it might significantly undermine application-level performance. Finally, understanding the dynamic range of packet delivery performance (and the extent and time-varying nature of this performance) is important for evaluating almost all sensor network communication protocols. We study packet delivery performance at two layers of the communication stack (Section 3). At the physical layer and in the absence of interfering transmissions, packet delivery…
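The core per-link metric in such a measurement study is the packet reception rate. As a minimal sketch (illustrative only, not the authors' analysis code), assuming each sender stamps its probe packets with a monotonically increasing sequence number, a receiver's log can be reduced to per-sender reception rates like this:

```python
from collections import defaultdict

def packet_reception_rates(log, packets_sent):
    """Compute per-sender packet reception rate (PRR) at one receiver.

    log          -- iterable of (sender_id, sequence_number) tuples actually received
    packets_sent -- number of packets each sender transmitted during the experiment
    """
    received = defaultdict(set)
    for sender, seq in log:
        received[sender].add(seq)          # duplicates count only once
    return {sender: len(seqs) / packets_sent for sender, seqs in received.items()}

# Example: node 7 delivered 180 of 200 probes (PRR 0.9), node 3 every other one (PRR 0.5).
example_log = [(7, i) for i in range(180)] + [(3, i) for i in range(0, 200, 2)]
print(packet_reception_rates(example_log, packets_sent=200))
```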

1,330 citations


Journal Article
TL;DR: In this article, a service model for time synchronization is proposed to better support the broad range of application requirements seen in sensor networks, while meeting the unique resource constraints found in such systems.
Abstract: Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is a critical piece of infrastructure in any distributed system, but wireless sensor networks make particularly extensive use of synchronized time. Almost any form of sensor data fusion or coordinated actuation requires synchronized physical time for reasoning about events in the physical world. However, while the clock accuracy and precision requirements are often stricter in sensor networks than in traditional distributed systems, energy and channel constraints limit the resources available to meet these goals. New approaches to time synchronization can better support the broad range of application requirements seen in sensor networks, while meeting the unique resource constraints found in such systems. We first describe the design principles we have found useful in this problem space: tiered and multi-modal architectures are a better fit than a single solution forced to solve all problems; tunable methods allow synchronization to be more finely tailored to the problem at hand; peer-to-peer synchronization eliminates the problems associated with maintaining a global timescale. We propose a new service model for time synchronization that provides a much more natural expression of these techniques: explicit timestamp conversions. We describe the implementation and characterization of several synchronization methods that exemplify our design principles. Reference-Broadcast Synchronization achieves high precision at low energy cost by leveraging the broadcast property inherent to wireless communication. A novel multi-hop algorithm allows RBS timescales to be federated across broadcast domains. Post-Facto Synchronization can make systems significantly more efficient by relaxing the traditional constraint that clocks must be kept in continuous synchrony. Finally, we describe our experience in applying our new methods to the implementation of a number of research and commercial sensor network applications.
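For intuition, the receiver-to-receiver comparison underlying Reference-Broadcast Synchronization can be pictured as a least-squares fit. The sketch below is a hedged illustration (not the RBS implementation): it assumes two receivers have logged local reception times for the same set of reference broadcasts, and derives the offset and skew needed for explicit timestamp conversion between them.

```python
def rbs_offset_and_skew(times_a, times_b):
    """Estimate clock offset and relative skew of node B with respect to node A.

    times_a, times_b -- local reception timestamps (seconds) recorded by two
    receivers for the same sequence of reference broadcasts.  A least-squares
    fit of times_b against times_a gives skew (slope) and offset (intercept),
    which is the essence of receiver-receiver synchronization.
    """
    n = len(times_a)
    mean_a = sum(times_a) / n
    mean_b = sum(times_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(times_a, times_b))
    var = sum((a - mean_a) ** 2 for a in times_a)
    skew = cov / var
    offset = mean_b - skew * mean_a
    return offset, skew

def convert_timestamp(t_a, offset, skew):
    """Convert a timestamp taken on node A's clock into node B's timescale."""
    return skew * t_a + offset
```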

663 citations


Journal Article
TL;DR: In this article, Stann et al. present RMST (Reliable Multi-Segment Transport), a new transport layer for Directed Diffusion, which provides guaranteed delivery and fragmentation/reassembly for applications that require them.
Abstract: Appearing in the 1st IEEE International Workshop on Sensor Net Protocols and Applications (SNPA), Anchorage, Alaska, USA, May 11, 2003. RMST: Reliable Data Transport in Sensor Networks. Fred Stann and John Heidemann, USC/Information Sciences Institute. Abstract – Reliable data transport in wireless sensor networks is a multifaceted problem influenced by the physical, MAC, network, and transport layers. Because sensor networks are subject to strict resource constraints and are deployed by single organizations, they encourage revisiting traditional layering and are less bound by standardized placement of services such as reliability. This paper presents analysis and experiments resulting in specific recommendations for implementing reliable data transport in sensor nets. To explore reliability at the transport layer, we present RMST (Reliable Multi-Segment Transport), a new transport layer for Directed Diffusion. RMST provides guaranteed delivery and fragmentation/reassembly for applications that require them. RMST is a selective NACK-based protocol that can be configured for in-network caching and repair.
1 Introduction: Wireless sensor networks provide an economical, fully distributed, sensing and computing solution for environments where conventional networks are impractical. This paper explores the design decisions related to providing reliable data transport in sensor nets. The reliable data transport problem in sensor nets is multi-faceted. The emphasis on energy conservation in sensor nets implies that poor paths should not be artificially bolstered via mechanisms such as MAC layer ARQ during route discovery and path selection [1]. Path maintenance, on the other hand, benefits from well-engineered recovery either at the MAC layer or the transport layer, or both. Recovery should not be costly, however, since many applications in sensor nets are impervious to occasional packet loss, relying on the regular delivery of coarse-grained event descriptions. Other applications require loss detection and repair. These aspects of reliable data transport include the provision of guaranteed delivery and fragmentation/reassembly of data entities larger than the network MTU. Sensor networks have different constraints than traditional wired nets. First, energy constraints are paramount in sensor networks since nodes can often not be recharged, so any wasted energy shortens their useful lifetime [2]. Second, these energy constraints, plus relatively low wireless bandwidths, make in-network processing both feasible and desirable [3]. Third, because nodes in sensor networks are usually collaborating towards a common task, rather than representing independent users, optimization of the shared network focuses on throughput rather than fairness. Finally, because sensor networks are often deployed by a single organization with inexpensive hardware, there is less need for interoperability with existing standards. For all of these reasons, sensor networks provide an environment that encourages rethinking the structure of traditional communications protocols. The main contribution is an evaluation of the placement of reliability for data transport at different levels of the protocol stack. We consider implementing reliability in the MAC, transport layer, application, and combinations of these. We conclude that reliability is important at the MAC layer and the transport layer. MAC-level reliability is important not just to provide hop-by-hop error recovery for the transport layer, but also because it is needed for route discovery and maintenance. (This conclusion differs from previous studies in reliability for sensor nets that did not simulate routing [4].) Second, we have developed RMST (Reliable Multi-Segment Transport), a new transport layer, in order to understand the role of in-network processing for reliable data transfer. RMST benefits from diffusion routing, adding minimal additional control traffic. RMST guarantees delivery, even when multiple hops exhibit very high error rates.
2 Architectural Choices: There are a number of key areas to consider when engineering reliability for sensor nets. Many current sensor networks exhibit high loss rates compared to wired networks (2% to 30% to immediate neighbors) [1,5,6]. While error detection and correction at the physical layer are important, approaches at the MAC layer and higher adapt well to the very wide range of loss rates seen in sensor networks and are the focus of this paper. MAC layer protocols can ameliorate PHY layer unreliability, and transport layers can guarantee delivery. An important question for this paper is the trade-off between implementation of reliability at the MAC layer (i.e., hop-to-hop) vs. the transport layer, which has traditionally been concerned with end-to-end reliability. Because sensor net applications are distributed, we also considered implementing reliability at the application layer. Our goal is to minimize the cost of repair in terms of transmission.
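To make the selective-NACK idea concrete, here is a hypothetical receiver-side sketch (an illustration of the mechanism, not RMST's Directed Diffusion filter code): fragments of a data entity are cached as they arrive, and only the missing fragment numbers are requested for repair.

```python
class FragmentReassembler:
    """Track fragments of one data entity and generate selective NACKs.

    Hypothetical sketch of RMST-style receiver state: fragments are cached as
    they arrive, and a NACK listing missing fragment numbers is produced on a
    timer so an upstream cache (or the source) can repair only the holes.
    """
    def __init__(self, total_fragments):
        self.total = total_fragments
        self.fragments = {}                 # fragment_no -> payload bytes

    def receive(self, fragment_no, payload):
        self.fragments[fragment_no] = payload

    def missing(self):
        return [i for i in range(self.total) if i not in self.fragments]

    def nack(self):
        """Return a repair request, or None if the entity is complete."""
        holes = self.missing()
        return {"type": "NACK", "missing": holes} if holes else None

    def reassemble(self):
        """Return the full data entity once every fragment has arrived."""
        if self.missing():
            return None
        return b"".join(self.fragments[i] for i in range(self.total))
```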

650 citations


Journal Article
TL;DR: In this paper, an approach to using individual In2O3 nanowire transistors as chemical sensors working at room temperature is presented; the devices exhibited significantly improved chemical sensing performance compared to existing solid-state sensors in many aspects, such as the sensitivity, the selectivity, the response time, and the lowest detectable concentrations.
Abstract: We present an approach to use individual In2O3 nanowire transistors as chemical sensors working at room temperature. Upon exposure to a small amount of NO2 or NH3, the nanowire transistors showed a decrease in conductance of up to six or five orders of magnitude, respectively, as well as substantial shifts in the threshold gate voltage. These devices exhibited significantly improved chemical sensing performance compared to existing solid-state sensors in many aspects, such as the sensitivity, the selectivity, the response time, and the lowest detectable concentrations. Furthermore, the recovery time of our devices can be shortened to just 30 s by illuminating the devices with UV light in vacuum.

421 citations


ReportDOI
TL;DR: This paper presents Multihop Over-the-Air Programming (MOAP), a code distribution mechanism specifically targeted for Mica-2 Motes and shows that a very simple windowed retransmission tracking scheme is nearly as effective as arbitrary repairs and yet is much better suited to energy and memory constrained embedded systems.
Abstract: Wireless sensor networks consist of collections of small, low-power nodes that interface or interact with the physical environment. The ability to add new functionality or perform software maintenance without having to physically reach each individual node is already an essential service, even at the limited scale at which current sensor networks are deployed. TinyOS supports single-hop over-the-air reprogramming today, but the need to reprogram sensors in a multi-hop network will become particularly critical as sensor networks mature and move toward larger deployment sizes. In this paper we present Multihop Over-the-Air Programming (MOAP), a code distribution mechanism specifically targeted for Mica-2 Motes. We discuss and analyze the design goals, constraints, choices and optimizations focusing in particular on dissemination strategies and retransmission policies. We have implemented MOAP on Mica-2 motes and we evaluate that implementation using both emulation and testbed experiments. We show that our dissemination mechanism obtains a 60-90% performance improvement in terms of required transmissions compared to flooding. We also show that a very simple windowed retransmission tracking scheme is nearly as effective as arbitrary repairs and yet is much better suited to energy and memory constrained embedded systems.
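The windowed retransmission tracking idea can be sketched as follows. This is a hedged illustration rather than the MOAP implementation: it assumes code segments are disseminated roughly in order and that only holes inside a small fixed-size window are eligible for repair, which bounds the receiver's state regardless of image size.

```python
class SlidingWindowTracker:
    """Track lost code segments using a fixed-size sliding window.

    Hypothetical sketch: segments received out of order are remembered only
    until the window slides past the corresponding holes, so memory stays
    bounded on a constrained node.
    """
    def __init__(self, window_size=8):
        self.window_size = window_size
        self.base = 0                # next segment expected in order
        self.received = set()        # out-of-order segments >= base

    def receive(self, seg_no):
        if seg_no < self.base:
            return                   # duplicate of an already-completed segment
        self.received.add(seg_no)
        while self.base in self.received:
            self.received.discard(self.base)
            self.base += 1           # slide over contiguous segments

    def retransmission_request(self):
        """Missing segments inside the current window, oldest first."""
        if not self.received:
            return []
        horizon = min(max(self.received), self.base + self.window_size)
        return [s for s in range(self.base, horizon) if s not in self.received]

tracker = SlidingWindowTracker()
for seg in (0, 1, 2, 4, 5):          # segment 3 was lost
    tracker.receive(seg)
print(tracker.retransmission_request())   # -> [3]
```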

336 citations



Journal Article
TL;DR: In this paper, Boulis et al. present a distributed estimation algorithm that can be applied to explore the energy-accuracy subspace for a sub-class of periodic aggregation problems, together with extensive simulation results that validate the approach.
Abstract: Aggregation in Sensor Networks: An Energy-Accuracy Trade-off. Athanassios Boulis, Saurabh Ganeriwal, and Mani B. Srivastava, Networked and Embedded Systems Lab, EE Department, University of California at Los Angeles. Abstract – Wireless ad hoc sensor networks (WASNs) are in need of the study of useful applications that will help researchers view them as distributed physically coupled systems, a collective that estimates the physical environment, and not just energy-limited ad hoc networks. We develop this perspective using a large and interesting class of WASN applications called aggregation applications. In particular, we consider the challenging periodic aggregation problem where the WASN provides the user with periodic estimates of the environment, as opposed to simpler and previously studied snapshot aggregation problems. In periodic aggregation our approach allows the spatial-temporal correlation among values sensed at the various nodes to be exploited towards energy-efficient estimation of the aggregated value of interest. Our approach also creates a system-level energy vs. accuracy knob whereby the more estimation error the user can tolerate, the less energy is consumed. We present a distributed estimation algorithm that can be applied to explore the energy-accuracy subspace for a sub-class of periodic aggregation problems, and present extensive simulation results that validate our approach. The resulting algorithm, apart from being more flexible in the energy-accuracy subspace and more robust, can also bring considerable energy savings for a typical accuracy requirement (five-fold decrease in energy consumption for 5% estimation error) compared to repeated snapshot aggregations. Keywords: sensor networks, aggregation applications, distributed estimation, energy vs. accuracy trade-off.
I. INTRODUCTION: The technological advances in embedded computers, sensors, and radios have led to the emergence of wireless ad-hoc sensor networks (WASNs) as a new class of system with uses in diverse and useful applications. Indeed, the early papers in the area [6][7][13][15] talk about the vision of cheap self-organizing ad-hoc networks that are able to perform a higher-level sensing task through the collaboration of a large number of cheaper and resource-constrained wireless sensor nodes. Leveraging numerous sensing devices placed close to the actual physical phenomena, the information that such networks can provide is more accurate and richer than the information provided by a system of few, expensive, state-of-the-art sensing devices. Since WASNs operate largely unattended, often in environments where the access cost of deploying or maintaining nodes is high, a key problem in designing WASNs is how to prolong their useful lifetime by conserving energy. Consequently, a large fraction of research in WASNs has been dedicated to aspects of the energy-efficiency problem. The original vision and promise of WASNs was that multiple nodes collectively perform the sensing task requested by the users and communicate the results to the users. However, most of the research so far has simply viewed WASNs as just another kind of wireless ad hoc network, albeit one composed of nodes that are more energy-constrained and whose data sources are sensors. So, for example, much work has focused on issues such as energy-efficient MAC and ad hoc routing protocols to realize the needed point-to-point and point-to-multipoint communication patterns in WASNs. But little has been done to develop an understanding of a WASN as a collective or an aggregate where sensor nodes collaborate to jointly estimate the desired answer about the sensed environment. In part this is because not many actual applications useful to the end-user have been studied. The only notable exception is the target-tracking problem, which has drawn attention from several research groups. Otherwise, the applications that have been examined are usually toy scenarios used to showcase the abilities of protocols and programming frameworks (e.g., [10]), or very specific applications examined for the sake of some energy-saving technique (e.g., [11]). In this research we have made a first attempt at exploring and understanding the performance of a WASN as a collective that performs a sensing task. We examine a general class of WASN applications that we call aggregation applications, where the desired answer depends on the sensed value at multiple nodes. In particular, we explore the energy vs. accuracy subspace, i.e., how much energy savings one can get by relaxing some accuracy requirements, and vice versa. We propose an algorithm that exploits this trade-off and jointly considers networking and signal processing issues to create a distributed estimation mechanism.
A. Aggregation Applications: Many of the examples and simple applications presented in WASN research are based around some kind of aggregation function. The most popular and simple examples of aggregation functions are maximum and average. That is, a user may be interested in knowing the max (or average) of a value in the WASN or in some restricted area of the WASN. If this function needs to be performed once, we refer to it as snapshot aggregation. If the user needs an update in periodic intervals we refer to it as periodic aggregation. The snapshot aggregation problem is trivial for a single static user. The user sends a request to flood the sensor network (or the area of interest). Upon reception of a request…
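As a toy illustration of the energy vs. accuracy knob (not the paper's estimation algorithm), a node can suppress its periodic report whenever its reading stays within the user's error tolerance of the last value the aggregation point has seen; a looser tolerance then translates directly into fewer transmissions.

```python
def periodic_reports(readings, tolerance):
    """Decide, per sampling period, whether a node must transmit.

    readings  -- the node's sensed values, one per period
    tolerance -- estimation error the application is willing to absorb

    Returns (transmissions, values_as_seen_by_aggregator).  The aggregator
    reuses the last reported value whenever the node stays silent, so the
    energy spent (number of transmissions) falls as `tolerance` grows.
    """
    last_reported = None
    transmissions = 0
    seen = []
    for value in readings:
        if last_reported is None or abs(value - last_reported) > tolerance:
            last_reported = value
            transmissions += 1
        seen.append(last_reported)
    return transmissions, seen

readings = [20.0, 20.1, 20.3, 21.2, 21.3, 25.0, 25.1]
print(periodic_reports(readings, tolerance=0.5))   # fewer sends than periods
```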

301 citations


Journal Article
TL;DR: In this article, the authors propose a novel and efficient mechanism for obtaining information in sensor networks which they refer to as acquire, where an active query is forwarded through the network, and intermediate nodes use cached local information (within a look-ahead of d hops) in order to partially resolve the query.
Abstract: We propose a novel and efficient mechanism for obtaining information in sensor networks which we refer to as acquire. In acquire an active query is forwarded through the network, and intermediate nodes use cached local information (within a look-ahead of d hops) in order to partially resolve the query. When the query is fully resolved, a completed response is sent directly back to the querying node. We take a mathematical modeling approach to calculate the energy costs associated with acquire. The models permit us to characterize analytically the impact of critical parameters, and compare the performance of acquire with respect to alternatives such as flooding-based querying (FBQ) and expanding ring search (ERS). We show that with optimal parameter settings, depending on the update frequency, acquire obtains order of magnitude reduction over FBQ and a potential reduction of 60% over ERS in consumed energy.
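A schematic sketch of the active-query idea follows. It is a simplified illustration under assumed data structures (the names and the next-hop choice are not from the paper's model): each node is assumed to cache recent records from its d-hop look-ahead neighborhood, resolves as much of the query as it can locally, and hands the remainder to an unvisited node.

```python
def acquire_query(start_node, predicate, amount_needed, lookahead, cache):
    """Forward an active query until enough matching data has been collected.

    start_node    -- node currently carrying the query
    predicate     -- function deciding whether a cached record resolves part of the query
    amount_needed -- number of matching records that fully resolve the query
    lookahead     -- node -> set of nodes within the d-hop look-ahead
    cache         -- node -> list of records cached locally (own + d-hop neighborhood data)
    """
    collected, visited, current = [], set(), start_node
    while len(collected) < amount_needed and current is not None:
        visited.add(current)
        # partially resolve the query from the d-hop look-ahead cache
        collected += [r for r in cache.get(current, []) if predicate(r)]
        # hand the partially resolved query to a node not yet visited
        frontier = [n for n in lookahead.get(current, set()) if n not in visited]
        current = frontier[0] if frontier else None
    # once resolved, a completed response is sent straight back to the querier
    return collected[:amount_needed]
```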

271 citations


Journal Article
TL;DR: In this article, a two-phase post-deployment calibration technique for large-scale, dense sensor de-ployment is presented, where the first phase derives relative calibration relationships between pairs of co-located sensors, while the second phase maximizes the consistency of the pair-wise calibration func- tions among groups of sensor nodes.
Abstract: Numerous factors contribute to errors in sensor measurements. In order to be useful, any sensor device must be calibrated to adjust its accuracy against the expected measurement scale. In large-scale sensor networks, calibration will be an exceptionally difficult task since sensor nodes are often not easily accessible and manual device-by-device calibration is intractable. In this paper, we present a two-phase post-deployment calibration technique for large-scale, dense sensor deployments. In its first phase, the algorithm derives relative calibration relationships between pairs of co-located sensors, while in the second phase, it maximizes the consistency of the pair-wise calibration functions among groups of sensor nodes. The key idea in the first phase is to use temporal correlation of signals received at neighboring sensors when the signals are highly correlated (i.e., sensors are observing the same phenomenon) to derive the function relating their bias in amplitude. We formulate the second phase as an optimization problem and present an algorithm suitable for localized implementation. We evaluate the performance of the first phase of the algorithm using empirical and simulated data.
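The first phase can be pictured as a gain/offset fit between the time series of two co-located sensors, trusted only when the signals are strongly correlated. The sketch below is an illustration of that idea under a simple linear-bias assumption, not the paper's algorithm:

```python
import numpy as np

def relative_calibration(x, y, min_correlation=0.9):
    """Fit y ≈ gain * x + offset for two co-located sensors.

    The fit is only trusted when the two signals are highly correlated,
    i.e. the sensors are plausibly observing the same phenomenon.
    Returns (gain, offset), or None if the correlation test fails.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if abs(np.corrcoef(x, y)[0, 1]) < min_correlation:
        return None
    gain, offset = np.polyfit(x, y, 1)      # least-squares linear fit
    return gain, offset
```

The second phase would then adjust these pairwise functions so that they are mutually consistent across groups of nodes, which the paper formulates as an optimization problem.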

264 citations


Journal Article
TL;DR: In this paper, a simple, practical, and static correlation-unaware clustering scheme that satisfies a min-max near-optimality condition is presented, and the implication for system design is that a static correlation unaware scheme can perform as well as sophisticated adaptive schemes for joint routing and compression.
Abstract: The efficacy of data aggregation in sensor networks is a function of the degree of spatial correlation in the sensed phenomenon. The recent literature has examined a variety of schemes that achieve greater data aggregation by routing data with regard to the underlying spatial correlation. A well known conclusion from these papers is that the nature of optimal routing with compression depends on the correlation level. In this article we show the existence of a simple, practical, and static correlation-unaware clustering scheme that satisfies a min-max near-optimality condition. The implication for system design is that a static correlation-unaware scheme can perform as well as sophisticated adaptive schemes for joint routing and compression.

252 citations


Journal Article
TL;DR: In this paper, the authors describe SCALE, a software tool to make radio connectivity measurements and present results of using SCALE with Mica 1 and 2 in three different environments under systematically varied conditions.
Abstract: This paper describes SCALE, a software tool for making radio connectivity measurements, and presents results of using SCALE with Mica 1 and Mica 2 motes in three different environments under systematically varied conditions.

Journal Article
TL;DR: Two topology control protocols are presented that extend the lifetime of dense ad hoc networks while preserving connectivity, the ability for nodes to reach each other, by identifying redundant nodes and turning their radios off.
Abstract: In wireless ad hoc networks and sensor networks, energy use is in many cases the most important constraint since it corresponds directly to operational lifetime. This paper presents two topology control protocols that extend the lifetime of dense ad hoc networks while preserving connectivity, the ability for nodes to reach each other. Our protocols conserve energy by identifying redundant nodes and turning their radios off. Geographic Adaptive Fidelity (GAF) identifies redundant nodes by their physical location and a conservative estimate of radio range. Cluster-based Energy Conservation (CEC) directly observes radio connectivity to determine redundancy and so can be more aggressive at identifying duplication and more robust to radio fading. We evaluate these protocols through analysis, extensive simulations, and experimental results in two wireless testbeds, showing that the protocols are robust to variance in node mobility, radio propagation, node deployment density, and other factors.
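GAF's notion of redundancy is easy to illustrate with its virtual grid. The sketch below assumes the usual GAF cell side of r/√5 for nominal radio range r, so that any node in one cell can reach any node in a horizontally or vertically adjacent cell; all but one node per cell can then sleep (CEC, by contrast, observes actual connectivity rather than location):

```python
import math

def gaf_cell(x, y, radio_range):
    """Return the virtual-grid cell that a node at (x, y) belongs to.

    A cell side of radio_range / sqrt(5) guarantees any node in a cell can
    reach any node in an adjacent (non-diagonal) cell, so keeping one node
    awake per cell preserves connectivity.
    """
    side = radio_range / math.sqrt(5)
    return (int(x // side), int(y // side))

def redundant_nodes(nodes, radio_range):
    """Group nodes by cell; all but one node per cell may turn its radio off."""
    cells = {}
    for node_id, (x, y) in nodes.items():
        cells.setdefault(gaf_cell(x, y, radio_range), []).append(node_id)
    return {cell: members[1:] for cell, members in cells.items()}

nodes = {"a": (1.0, 1.0), "b": (2.0, 1.5), "c": (30.0, 30.0)}
print(redundant_nodes(nodes, radio_range=10.0))   # "b" is redundant with "a"
```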


Journal Article
TL;DR: In this paper, the focus is on nanoassembly by manipulation with scanning probe microscopes (SPMs), which is a relatively well established process for prototyping nanosystems.
Abstract: Author(s): Requicha, Ari | Abstract: Nanorobotics encompasses the design, fabrication, and programming of robots with overall dimensions below a few micrometers, and the programmable assembly of nanoscale objects. Nanorobots are quintessential nanoelectromechanical systems (NEMS) and raise all the important issues that must be addressed in NEMS design: sensing, actuation, control, communications, power, and interfacing across spatial scales and between the organic/inorganic and biotic/abiotic realms. Nanorobots are expected to have revolutionary applications in such areas as environmental monitoring and health care.This paper begins by discussing nanorobot construction, which is still at an embryonic stage. The emphasis is on nanomachines, an area which has seen a spate of rapid progress over the last few years. Nanoactuators will be essential components of future NEMS.The paper's focus then changes to nanoassembly by manipulation with scanning probe microscopes (SPMs), which is a relatively well established process for prototyping nanosystems. Prototyping of nanodevices and systems is important for design validation, parameter optimization and sensitivity studies. Nanomanipulation also has applications in repair and modification of nanostructures built by other means. High-throughput SPM manipulation may be achieved by using multitip arrays.Experimental results are presented which show that interactive SPM manipulation can be used to accurately and reliably position molecular-sized components. These can then be linked by chemical or physical means to form subassemblies, which in turn can be further manipulated. Applications in building wires, single-electron transistors, and nanowaveguides are presented.

Journal Article
TL;DR: In this article, the authors study the behavior of the Cramér-Rao bound in carefully controlled scenarios, both to provide design-time guidance by revealing the error trends associated with deployment and to provide a benchmark for the performance evaluation of existing localization algorithms.
Abstract: Ad-hoc localization in multihop setups is a vital component of numerous sensor network applications. Although considerable effort has been invested in the development of multihop localization protocols, to the best of our knowledge the sensitivity of localization to its different setup parameters (network density, ranging-system measurement error, and beacon density) that are usually known prior to deployment has not been systematically studied. In an effort to reveal the trends and to gain better understanding of the error behavior in various deployment patterns, in this paper we study the Cramér-Rao bound behavior in carefully controlled scenarios. This analysis has a dual purpose: first, to provide valuable design-time suggestions by revealing the error trends associated with deployment, and second, to provide a benchmark for the performance evaluation of existing localization algorithms.
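For intuition, the bound has a compact form in the classic range-measurement setting. Assuming independent Gaussian ranging noise with variance σ² to N beacons seen at bearings θ_i from the unknown node (a standard textbook setup, not necessarily the paper's exact scenario), the Fisher information matrix and the resulting bound on mean squared location error are:

```latex
J(\mathbf{p}) \;=\; \frac{1}{\sigma^{2}} \sum_{i=1}^{N}
\begin{pmatrix}
  \cos^{2}\theta_i & \cos\theta_i \sin\theta_i \\
  \cos\theta_i \sin\theta_i & \sin^{2}\theta_i
\end{pmatrix},
\qquad
\mathbb{E}\!\left[\lVert \hat{\mathbf{p}} - \mathbf{p} \rVert^{2}\right]
\;\ge\; \operatorname{tr}\!\left( J(\mathbf{p})^{-1} \right).
```

More beacons and a wider angular spread enlarge J and therefore shrink the bound, which is exactly the kind of deployment-time trend such an analysis exposes.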

Journal Article
TL;DR: This paper describes the implementation of a Linux-based wireless networked acoustic sensor array testbed, utilizing commercially available iPAQs with built-in microphones, codecs, and microprocessors, plus wireless Ethernet cards, to perform acoustic source localization.
Abstract: Advances in microelectronics, array processing, and wireless networking have motivated the analysis and design of low-cost integrated sensing, computing, and communicating nodes capable of performing various demanding collaborative space–time processing tasks. In this paper, we consider the problem of coherent acoustic sensor array processing and localization on distributed wireless sensor networks. We first introduce some basic concepts of beamforming and localization for wide-band acoustic sources. A review of various known localization algorithms based on time-delay followed by least-squares estimations, as well as the maximum-likelihood method, is given. Issues related to practical implementation of coherent array processing, including the need for fine-grain time synchronization, are discussed. Then we describe the implementation of a Linux-based wireless networked acoustic sensor array testbed, utilizing commercially available iPAQs with built-in microphones, codecs, and microprocessors, plus wireless Ethernet cards, to perform acoustic source localization. Various field-measured results using two localization algorithms show the effectiveness of the proposed testbed. An extensive list of references related to this work is also included.
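As a compact illustration of the time-delay, least-squares approach (a generic TDOA solver sketch, not the testbed's code), the source position can be estimated by minimizing the mismatch between measured and predicted inter-microphone delays; the positions, delays, and 343 m/s sound speed below are assumptions for the example:

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

def locate_source(mic_positions, tdoas, x0=(0.0, 0.0)):
    """Least-squares acoustic source localization from time differences of arrival.

    mic_positions -- array of shape (M, 2), microphone coordinates in metres
    tdoas         -- array of shape (M,), arrival time at each mic minus that at mic 0
    """
    mics = np.asarray(mic_positions, dtype=float)

    def residuals(p):
        dists = np.linalg.norm(mics - p, axis=1)
        predicted = (dists - dists[0]) / SPEED_OF_SOUND
        return predicted - np.asarray(tdoas)

    return least_squares(residuals, x0).x

# Example: four nodes at the corners of a 10 m square, source at (3, 4).
mics = [(0, 0), (10, 0), (10, 10), (0, 10)]
true_source = np.array([3.0, 4.0])
d = np.linalg.norm(np.asarray(mics) - true_source, axis=1)
print(locate_source(mics, (d - d[0]) / SPEED_OF_SOUND))   # ~ [3. 4.]
```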

Journal Article
TL;DR: In this paper, single-crystalline In2O3 nanowires were synthesized and then utilized to construct field-effect transistors consisting of individual nanowires. These nanowire transistors exhibited n-type semiconductor characteristics with well-defined linear and saturation regimes.
Abstract: Single-crystalline In2O3 nanowires were synthesized and then utilized to construct field-effect transistors consisting of individual nanowires. These nanowire transistors exhibited nice n-type semiconductor characteristics with well-defined linear and saturation regimes, and on/off ratios as high as 104 were observed at room temperature. The temperature dependence of the conductance revealed thermal emission as the dominating transport mechanism. Oxygen molecules adsorbed on the nanowire surface were found to have profound effects, as manifested by a substantial improvement of the device performance in high vacuum. Our work paved the way for In2O3 nanowires to be used as nanoelectronic building blocks and nanosensors.

Journal Article
TL;DR: A survey of techniques for energy-efficient on-chip communication at different levels of the communication design hierarchy is presented, including circuit-level techniques such as low-voltage signaling; architecture-level techniques such as communication architecture selection and bus isolation; system-level techniques such as communication-based power management and dynamic voltage scaling for interconnects; and network-level techniques such as error-resilient encoding for packetized on-chip communication.
Abstract: Interconnects have been shown to be a dominant source of energy consumption in modern day System-on-Chip (SoC) designs. With a large (and growing) number of electronic systems being designed with battery considerations in mind, minimizing the energy consumed in on-chip interconnects becomes crucial. Further, the use of nanometer technologies is making it increasingly important to consider reliability issues during the design of SoC communication architectures. Continued supply voltage scaling has led to decreased noise margins, making interconnects more susceptible to noise sources such as crosstalk, power supply noise, radiation induced defects, etc. The resulting transient faults cause the interconnect to behave as an unreliable transport medium for data signals. Therefore, fault tolerant communication mechanism, such as Automatic Repeat Request (ARQ), Forward Error Correction (FEC), etc., which have been widely used in the networking community, are likely to percolate to the SoC domain. This paper presents a survey of techniques for energy efficient on-chip communication. Techniques operating at different levels of the communication design hierarchy are described, including circuit-level techniques, such as low voltage signaling, architecture-level techniques, such as communication architecture selection and bus isolation, system-level techniques, such as communication based power management and dynamic voltage scaling for interconnects, and network-level techniques, such as error resilient encoding for packetized on-chip communication. Emerging technologies, such as Code Division Multiple Access (CDMA) based buses, and wireless interconnects are also surveyed.

Journal Article
TL;DR: This paper introduces a new approach to mobile-node adaptive sampling, with the objective of minimizing the error between the actual and reconstructed spatiotemporal behavior of environmental variables while minimizing the required motion, in NIMS environmental robotics.
Abstract: This paper introduces NIMS (Networked InfoMechanical Systems) and describes a new semantics of adaptive sampling for environmental robotics to cope with irregularities of the observed phenomena.

Journal Article
TL;DR: Wang et al. as mentioned in this paper investigated task decomposition and collaboration in a two-tiered sensor network for habitat monitoring, where each macronode combines data collected by multiple micronodes for target classification and localization.
Abstract: Preprocessing in a Tiered Sensor Network for Habitat Monitoring. Hanbiao Wang, Deborah Estrin, and Lewis Girod, UCLA Computer Science Department, Los Angeles, CA. We investigate task decomposition and collaboration in a two-tiered sensor network for habitat monitoring. The system recognizes and localizes a specified type of birdcalls. The system has a few powerful macronodes in the first tier, and many less powerful micronodes in the second tier. Each macronode combines data collected by multiple micronodes for target classification and localization. We describe two types of lightweight preprocessing, which significantly reduce data transmission from micronodes to macronodes. Micronodes classify events according to their cross-zero rates and discard irrelevant events. Data about events of interest is reduced and compressed before being transmitted to macronodes for target localization. Preliminary experiments illustrate the effectiveness of event filtering and data reduction at micronodes. Keywords and phrases: sensor network, collaborative signal processing, tiered architecture, classification, data reduction, data compression.
INTRODUCTION: Recent advances in wireless networks, low-power circuit design, and microelectromechanical systems (MEMS) will enable pervasive sensing and will revolutionize the way in which we understand the physical world [1]. Extensive work has been done to address many aspects of wireless sensor network design, including low-power schemes [2, 3, 4], self-configuration [5], localization [6, 7, 8, 9, 10, 11], time synchronization [12, 13], data dissemination [14, 15, 16], and query processing [17]. This paper builds upon earlier work to address task decomposition and collaboration among nodes. Although hardware for sensor network nodes will become smaller, cheaper, more powerful, and more energy-efficient, technological advances will never obviate the need to make trade-offs. Cerpa et al. [18] described a tiered hardware platform for habitat monitoring applications. Smaller, less capable nodes are used to exploit spatial diversity, while more powerful nodes combine and process the micronode sensing data. Although details of task decomposition and collaboration clearly depend on the specific characteristics of applications, we hope to identify some common principles that can be applied to tiered sensor networks across various applications. We use birdcall recognition and localization as a case study of task decomposition and collaboration. In this context, we demonstrate two types of micronode preprocessing. Distributed detection algorithms and beamforming algorithms will not be discussed in detail in this paper although they are fundamental building blocks for our application. The rest of the paper is organized as follows. Section 2 presents a two-tiered sensor network for habitat monitoring and the task decomposition and collaboration between tiers. Sections 3 and 4 illustrate two types of micronode preprocessing. Section 5 presents the preliminary results of data reduction and compression experiments. Section 6 is a brief description of related work. Section 7 concludes this paper.
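The micronode filter is straightforward to picture: a cross-zero (zero-crossing) rate is cheap to compute on a constrained node and separates candidate birdcalls from uninteresting events. The sketch below is a hedged illustration with placeholder thresholds, not the system's classifier:

```python
def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / max(len(samples) - 1, 1)

def is_candidate_birdcall(samples, low=0.05, high=0.35):
    """Keep an event only if its zero-crossing rate falls in a plausible band.

    The thresholds here are placeholders; a real deployment would tune them to
    the target species.  Events failing the test are discarded locally, so no
    data needs to be sent to the macronode.
    """
    return low <= zero_crossing_rate(samples) <= high
```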

Journal Article
TL;DR: A model of optimally precise and globally consistent clock synchronization, based on Reference-Broadcast Synchronization (RBS).
Abstract: A model of optimally precise and globally consistent clock synchronization, based on Reference-Broadcast Synchronization (RBS).

Journal Article
TL;DR: EmStar is described, a Linux-based software framework for wireless sensor network development that spans pure simulation, hybrid, and in-situ deployment modes, together with a modular programming model that allows domain knowledge in one module to affect the modules around it without sacrificing the advantages of reusability and abstraction that strict layering provides.
Abstract: Recently, increasing research attention has been directed toward wireless sensor networks: collections of small, low-power nodes, physically situated in the environment, that can intelligently deliver high-level sensing results to the user. As the community has moved into more complex design efforts—large-scale, longlived systems that truly require self-organization and adaptivity to the environment—a number of important software design issues have arisen. The data reduction process is critical for meeting energy and channel capacity constraints by preventing raw sensor time-series from being delivered. However, the lack of raw data prevents the data reduction process itself from being evaluated. Simulation is difficult to apply; the network’s physical situatedness makes it sensitive to subtleties of sensors and wireless communication channels that are difficult to model. A second problem that arises is that the traditional layered protocol stack, designed to emphasize conceptual abstraction and reusability, has too high of an efficiency cost in this domain where efficiency is paramount. In this paper, we describe EmStar, a Linux-based software framework that addresses these issues. EmStar’s novel execution environment encompasses pure simulation, true in-situ deployment, and a hybrid mode that combines simulation with real wireless communication and sensors situated in the environment. Each of these modes run the same code and use the same configuration files, allowing developers to seamlessly iterate between the convenience of simulation and the reality afforded by physically situated devices. We also describe a modular programming model that allows domain knowledge in one module to affect the modules around it, without sacrificing the advantages of reusability and abstraction that strict layering provides. Using several case studies, we show how EmStar has been applied to building real sensor network services.


Journal Article
TL;DR: In this article, the authors consider the joint optimization of sensor placement and transmission structure for data gathering, where a given number of nodes need to be placed in a field such that the sensed data can be reconstructed at a sink within specified distortion bounds while minimizing the energy consumed for communication.
Abstract: We consider the joint optimization of sensor placement and transmission structure for data gathering, where a given number of nodes need to be placed in a field such that the sensed data can be reconstructed at a sink within specified distortion bounds while minimizing the energy consumed for communication.

Journal Article
TL;DR: An efficient minimalist algorithm is presented which assumes that global information is not available to the robot (neither a map, nor GPS), and which has a cover time better than O(n log n), where the n markers that are deployed form the vertices of a regular graph.
Abstract: We study the problem of exploring an unknown environment using a single robot. The environment is large enough (and possibly dynamic) that constant motion by the robot is needed to cover the environment. We term this the dynamic coverage problem. We present an efficient minimalist algorithm which assumes that global information is not available to the robot (neither a map, nor GPS). Our algorithm uses markers which the robot drops off as signposts to aid exploration. We conjecture that our algorithm has a cover time better than O(n log n), where the n markers that are deployed form the vertices of a regular graph. We provide experimental evidence in support of this conjecture. We show empirically that the performance of our algorithm on graphs is similar to its performance in simulation.
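One way to picture marker-guided exploration (a simplified abstraction on a graph of already-deployed markers, not necessarily the paper's exact rule) is a walk that always moves to the least recently visited adjacent marker, keeping the robot in constant motion while spreading visits evenly:

```python
def dynamic_coverage(graph, start, steps):
    """Walk a marker graph for `steps` moves, always choosing the
    least-recently-visited neighbouring marker.

    graph -- dict: marker -> list of adjacent markers
    Returns the sequence of markers visited.
    """
    last_visit = {start: 0}
    path, current = [start], start
    for t in range(1, steps + 1):
        # never-visited neighbours (timestamp -1) win, then the oldest visit
        current = min(graph[current], key=lambda m: last_visit.get(m, -1))
        last_visit[current] = t
        path.append(current)
    return path

# Example: a 3-regular graph (triangular prism) covered by one robot.
prism = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
         3: [0, 4, 5], 4: [1, 3, 5], 5: [2, 3, 4]}
print(dynamic_coverage(prism, start=0, steps=12))
```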

Journal Article
TL;DR: In this paper, the authors explored the latency and energy tradeoffs introduced by the heterogeneity of sensor nodes in the network.
Abstract: Explored the latency and energy tradeoffs introduced by the heterogeneity of sensor nodes in the network.

Journal Article
TL;DR: In this paper, the chemical gating effect of organic molecules and biomolecules with amino or nitro groups was investigated and attributed to the amino groups carried by the bio species.
Abstract: In2O3 nanowire transistors were used to investigate the chemical gating effect of organic molecules and biomolecules with amino or nitro groups. The nanowire conductance changed dramatically after adsorption of these molecules. Specifically, amino groups in organic molecules such as butylamine, donated electrons to In2O3 nanowires and thus led to enhanced carrier concentrations and conductance, whereas molecules with nitro groups such as butyl nitrite made In2O3 nanowires less conductive by withdrawing electrons. In addition, intrananowire junctions created by partial exposure of the nanowire device to butyl nitrite were investigated, and pronounced rectifying current–voltage characteristics were obtained. Furthermore, chemical gating by low-density lipoprotein cholesterol, the offending agent in coronary heart diseases, was also observed and attributed to the amino groups carried by the bio species.

Journal Article
TL;DR: In this article, the authors describe the development and testing of a sensitive and selective potentiometric nitrate microsensor based on doped polypyrrole films, utilizing 6-7 µm carbon fibers as a substrate for pyrrole electropolymerization.
Abstract: This work describes the development and testing of a sensitive and selective potentiometric nitrate microsensor based on doped polypyrrole films. Utilizing 6–7 µm carbon fibers as a substrate for pyrrole electropolymerization allowed fabrication of flexible, miniature and inexpensive sensors for in situ monitoring of nitrate. The sensors have a rapid response (several seconds) and in their characteristics are competitive with expensive commercial nitrate ion selective electrodes (ISE), exhibiting Nernstian behavior (slopes 54 ± 1 mV per log cycle of nitrate concentration (n = 8), at T = 22 °C), a linear response to nitrate concentrations spanning three orders of magnitude (0.1–10⁻⁴ M, or 6200–6.2 ppm of NO₃⁻), and a detection limit of (3 ± 1) × 10⁻⁵ M (1.25–2.5 ppm). After a 2-month period, the response was unchanged, and after 4.5 months, one version of the electrode continued to exhibit significant sensitivity to nitrate. Several polypyrrole nitrate microsensors were embedded sequentially downstream from a point source of nitrate solution in an intermediate-scale physical groundwater model as a test of their performance under simulated environmental conditions. The microsensors responded appropriately to the approaching nitrate solution front, demonstrating dispersion and attenuation of the nitrate concentrations that increased with distance from the source.
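For context, the reported slope is close to the ideal Nernstian response for a monovalent ion at room temperature; a quick back-of-the-envelope check (standard electrochemistry, not data from the paper):

```latex
S \;=\; \frac{2.303\,R\,T}{z\,F}
  \;=\; \frac{2.303 \times 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 295\ \mathrm{K}}
             {1 \times 96485\ \mathrm{C\,mol^{-1}}}
  \;\approx\; 58.5\ \mathrm{mV\ per\ decade},
```

so the measured 54 ± 1 mV per log cycle is within roughly 10% of the ideal Nernstian slope at 22 °C.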

Journal Article
TL;DR: Timing-sync Protocol for Sensor Networks (TPSN) as mentioned in this paper provides network-wide time synchronization and can synchronize a pair of neighboring motes to an average accuracy of better than 20 μs.
Abstract: Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are timestamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks. In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. In the first step, a hierarchical structure is established in the network, and then a pairwise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20 μs. We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes. We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable.
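The pairwise step performed along each edge of the hierarchy is the classical two-way message exchange: node A sends at time T1 (A's clock), node B receives at T2 and replies at T3 (B's clock), and A receives the reply at T4. Assuming symmetric propagation and processing delay, the clock offset Δ and the one-way delay d follow as:

```latex
\Delta \;=\; \frac{(T_2 - T_1) - (T_4 - T_3)}{2},
\qquad
d \;=\; \frac{(T_2 - T_1) + (T_4 - T_3)}{2}.
```

Node A then corrects its clock by Δ; applying this exchange along every edge of the hierarchical structure brings the whole network onto the reference node's timescale.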

Journal Article
TL;DR: In this article, the authors demonstrate the use of in-network wavelet-based summarization and progressive aging of summaries in support of long-term querying in storage- and communication-constrained networks.
Abstract: Wireless sensor networks enable dense sensing of the environment, offering unprecedented opportunities for observing the physical world. Centralized data collection and analysis adversely impact sensor node lifetime. Previous sensor network research has, therefore, focused on in-network aggregation and query processing, but has done so for applications where the features of interest are known a priori. When features are not known a priori, as is the case with many scientific applications in dense sensor arrays, efficient support for multi-resolution storage and iterative, drill-down queries is essential. Our system demonstrates the use of in-network wavelet-based summarization and progressive aging of summaries in support of long-term querying in storage- and communication-constrained networks. We evaluate the performance of our Linux implementation and show that it achieves: (a) low communication overhead for multi-resolution summarization, (b) highly efficient drill-down search over such summaries, and (c) efficient use of network storage capacity through load-balancing and progressive aging of summaries.
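A toy sketch of one node's wavelet summarization (a plain Haar transform is assumed here for illustration, not the system's coder): coarse summaries keep only the approximation and the coarsest detail levels, so summaries can be progressively aged by dropping finer levels as the data grows older.

```python
def haar_transform(signal):
    """Full Haar wavelet decomposition of a length-2^k signal.

    Returns (approximation, details), where details[0] is the coarsest detail
    level and details[-1] the finest.
    """
    details = []
    approx = list(signal)
    while len(approx) > 1:
        pairs = list(zip(approx[0::2], approx[1::2]))
        details.append([(a - b) / 2 for a, b in pairs])
        approx = [(a + b) / 2 for a, b in pairs]
    return approx, details[::-1]

def aged_summary(signal, levels_to_keep):
    """Age a summary by keeping only the approximation and coarsest detail levels."""
    approx, details = haar_transform(signal)
    return approx, details[:levels_to_keep]

readings = [21, 22, 24, 23, 30, 31, 29, 28]          # 2^3 samples
print(aged_summary(readings, levels_to_keep=1))       # coarse summary only
```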