
Sensor deployment and target localization in distributed sensor networks

01 Feb 2004-ACM Transactions on Embedded Computing Systems (ACM)-Vol. 3, Iss: 1, pp 61-91
TL;DR: A virtual force algorithm (VFA) is proposed as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors to improve the coverage of cluster-based distributed sensor networks.
Abstract: The effectiveness of cluster-based distributed sensor networks depends to a large extent on the coverage provided by the sensor deployment. We propose a virtual force algorithm (VFA) as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors. For a given number of sensors, the VFA algorithm attempts to maximize the sensor field coverage. A judicious combination of attractive and repulsive forces is used to determine the new sensor locations that improve the coverage. Once the effective sensor positions are identified, a one-time movement with energy consideration incorporated is carried out, that is, the sensors are redeployed, to these positions. We also propose a novel probabilistic target localization algorithm that is executed by the cluster head. The localization results are used by the cluster head to query only a few sensors (out of those that report the presence of a target) for more detailed information. Simulation results are presented to demonstrate the effectiveness of the proposed approach.

Summary (5 min read)

1. INTRODUCTION

  • Distributed sensor networks (DSNs) are important for a number of strategic applications such as coordinated target detection, surveillance, and localization.
  • The authors present the virtual force algorithm (VFA) as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors.
  • The VFA algorithm is based on disk packing theory [Locateli and Raber 2002] and the virtual force field concept from robotics [Howard et al. 2002].
  • Based on the information received from the sensors and the knowledge of the sensor deployment within the cluster, the cluster head executes a probabilistic scoring-based localization algorithm to determine the likely position of the target.
  • The dimensions of the grid provide a measure of the sensor field.

3.1 Preliminaries

  • For a cluster-based sensor network architecture, the authors make the following assumptions: after the initial random deployment, all sensor nodes are able to communicate with the cluster head.
  • The cluster head is responsible for executing the VFA algorithm and for managing the one-time movement of sensors to the desired locations.
  • On the other hand, if a pair of sensors is too far apart from each other (once again, a predetermined threshold is used here), they exert positive forces on each other.
  • This ensures that a globally uniform sensor placement is achieved.
  • Figure 1 also illustrates the translation of a distance response from a sensor to the confidence level as a probability value about this sensor response.

3.2 Virtual Forces

  • The authors now describe the virtual forces and virtual force calculation in the VFA algorithm.
  • If more detailed information about the obstacles and preferential coverage areas is available, the parameters governing the magnitude and direction (i.e., attractive or repulsive) of these forces can be chosen accordingly.
  • The threshold distance dth controls how close sensors get to each other.
  • When sensor detection areas overlap, the closer the sensors are to each other, the higher is the coverage probability for grid points in the overlapped areas.
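The threshold rule described above can be sketched in code. This is an illustrative reconstruction, not the authors' exact formulation: the function name `virtual_force`, the linear force magnitudes, and the weights `w_a` and `w_r` are assumptions; only the sign rule around the threshold distance d_th comes from the paper.

```python
import math

def virtual_force(si, sj, d_th, w_a=1.0, w_r=1.0):
    """Illustrative pairwise virtual force exerted on sensor si by sensor sj.

    Returns an (fx, fy) vector: attractive (toward sj) when the pair is
    farther apart than the threshold d_th, repulsive (away from sj) when
    closer, and zero exactly at the threshold.
    """
    dx, dy = sj[0] - si[0], sj[1] - si[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return (0.0, 0.0)          # coincident sensors: no defined direction
    ux, uy = dx / d, dy / d        # unit vector from si toward sj
    if d > d_th:                   # too far apart: attract
        mag = w_a * (d - d_th)
    elif d < d_th:                 # too close: repel (negative = away from sj)
        mag = -w_r * (d_th - d)
    else:
        mag = 0.0
    return (mag * ux, mag * uy)
```

Summing this force over all other sensors (and over obstacle/preferential-area sources, with the appropriate sign) gives the net virtual force that determines a sensor's next virtual position.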

3.3 Energy Constraint on the VFA Algorithm

  • In order to prolong battery life, the distances between the initial and final positions of the sensors are limited in the repositioning phase.
  • The authors use dmax(si) to denote the maximum distance that sensor si can move in the repositioning phase.
  • The cluster head uses the VFA algorithm to find appropriate sensor node locations based on the coverage requirements.
  • No movements are performed during the execution of the VFA algorithm.
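The energy-constrained one-time movement can be sketched as follows. The helper name `reposition` is an assumption; the clipping-to-budget behavior mirrors the role the paper assigns to dmax(si).

```python
import math

def reposition(initial, target, d_max):
    """Move a sensor from its initial position toward the VFA-computed target,
    but no farther than its movement budget d_max allows (a sketch of the
    one-time, energy-constrained redeployment; d_max plays the role of
    dmax(si) in the paper)."""
    dx, dy = target[0] - initial[0], target[1] - initial[1]
    d = math.hypot(dx, dy)
    if d <= d_max:
        return target                    # budget suffices: reach the target
    scale = d_max / d                    # otherwise stop on the segment
    return (initial[0] + dx * scale, initial[1] + dy * scale)
```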

3.4 Procedural Description of the VFA Algorithm

  • Figure 7 shows the data structure of the VFA algorithm, and Figure 8 shows the implementation details in pseudocode form.
  • Due to the granularity of the grid, and because the actual coverage is evaluated by the number of grid points that are adequately covered, the convergence of the VFA algorithm is controlled by a threshold value; let c denote the grid coverage at the current iteration of the VFA algorithm.
  • For the binary sensor detection model without the energy constraint, the upper bound on coverage, denoted c̄, is kπr² grid points; for the probabilistic sensor detection model, or for the binary model with the energy constraint, c is checked for saturation by defining c̄ as the average of the coverage ratios over the last 5 (or 10) iterations.
  • Since these specific scenarios are extremely unlikely for random deployment, they are not considered in this paper.
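The stopping rule above can be sketched as a small helper. This is a hedged reconstruction: the function name, the tolerance `eps`, and the exact saturation test are assumptions; the two cases (analytic upper bound for the binary model, moving-average saturation otherwise) follow the description in the summary.

```python
def converged(coverage_history, c_bar=None, window=5, eps=1e-3):
    """Sketch of the VFA stopping rule. With the binary model and no energy
    constraint, stop once coverage reaches the analytic upper bound c_bar
    (k*pi*r^2 grid points). Otherwise, declare saturation when the current
    coverage is within eps of the average of the last `window` iterations."""
    current = coverage_history[-1]
    if c_bar is not None:
        return current >= c_bar
    if len(coverage_history) < window:
        return False
    avg = sum(coverage_history[-window:]) / window
    return abs(current - avg) < eps
```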

4. TARGET LOCALIZATION

  • In order to conserve power and bandwidth, the message from the sensor to the cluster head is kept very small; in fact, the presence or absence of a target can be encoded in just one bit.
  • Detailed information such as detection strength level, imagery, and time series data is stored in the local memory and provided to the cluster head upon subsequent queries.
  • Based on the information received from the sensors within the cluster, the cluster head executes a probabilistic localization algorithm to determine candidate target locations, and it then queries the sensor(s) in the vicinity of the target.

4.1 Detection Probability Table

  • After the VFA algorithm is used to determine the final sensor locations, the cluster head generates a detection probability table for each grid point.
  • The binary string 110 denotes the possibility that s1 and s2 report a target but s3 does not report a target.
  • For each such possibility d1d2d3 (d1, d2, d3 ∈ {0, 1}) for a grid point, the authors calculate the conditional probabilities that the cluster head receives d1d2d3 given that a target is present at that grid point.
  • Note that the probability table generation is only a one-time cost.
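The table construction can be sketched in code, assuming (as the conditional-probability formulation suggests) that sensors detect a target independently. The function name `detection_table` is hypothetical; `p_detect[i]` stands for sensor i's detection probability at the grid point, e.g. the c_xy value from the sensor model.

```python
from itertools import product

def detection_table(p_detect):
    """Conditional probabilities P(d1...dk | target at this grid point) for
    every possible report pattern, assuming independent sensor detections.
    For pattern 110, for example, the entry is p1 * p2 * (1 - p3)."""
    table = {}
    for bits in product((0, 1), repeat=len(p_detect)):
        prob = 1.0
        for d, p in zip(bits, p_detect):
            prob *= p if d == 1 else (1.0 - p)
        table[''.join(map(str, bits))] = prob
    return table
```

Since the entries cover all 2^k patterns, they sum to 1 for each grid point; the table is computed once, after deployment.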

4.2 Score-Based Ranking

  • After the probability table is generated for all the grid points, localization is done by the cluster head if a target is detected by one or more sensors.
  • When, at time instant t, the cluster head receives positive event messages from k(t) sensors, it uses the grid point probability table to determine which of these sensors are most suitable to be queried for more detailed information.
  • The authors are using wxy(t) to filter out grid points that are not likely to be close to the actual target location.
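The score-based ranking can be sketched as a table lookup over all grid points. This is an illustrative simplification: the paper's weighting term w_xy(t) is approximated here by a plain probability threshold that filters out unlikely grid points, and the function and parameter names are assumptions.

```python
def score_grid_points(tables, report, threshold=0.0):
    """Rank grid points by how well the received report pattern matches each
    point's precomputed probability table. `tables` maps a grid point to its
    detection probability table; `report` is the received bit string."""
    scores = {}
    for point, table in tables.items():
        s = table.get(report, 0.0)
        if s > threshold:            # crude stand-in for the w_xy(t) filter
            scores[point] = s
    # highest score first: most likely target locations
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```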

4.3 Selection of Sensors to Query

  • To select the sensors to query based on the event reports and the localization procedure, the authors first note that for time instant t, if kmax ≥ krep(t), then all reporting sensors can be queried.
  • When this happens, the authors calculate the score concentration by averaging the scores of the grid points in the vicinity of the highest-score grid point.
  • The grid point with the highest score (or the score concentration) is the most likely current target location.
  • The selected sensors provide enough information in the subsequent stage to facilitate target identification.
  • The results show that Sq(t) matches S̄q(t) in many cases.
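The selection step can be sketched as follows. This is a hedged reconstruction: the rule "query all reporters if the budget allows, otherwise the reporters closest to the best-scoring grid point" is an assumption consistent with the bullets above, and the function name is hypothetical.

```python
import math

def select_sensors(reporting, best_point, k_max):
    """Choose which reporting sensors to query. If the query budget k_max
    covers all k_rep(t) reporting sensors, query them all; otherwise query
    the k_max reporters closest to the highest-score grid point, i.e. those
    most likely to hold detailed target data."""
    if k_max >= len(reporting):
        return list(reporting)
    return sorted(reporting,
                  key=lambda s: math.hypot(s[0] - best_point[0],
                                           s[1] - best_point[1]))[:k_max]
```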

4.4 Evaluation of Energy Savings

  • The authors next evaluate the energy saved by the proposed probabilistic localization approach.
  • The parameters T1, T2, and T3 denote the lengths of time involved in the transmission and reception, which are directly proportional to the sizes of data for yes/no messages, control messages to query sensors, and the detailed sensor data transmitted to the cluster head.
  • Also, E is monotonically nondecreasing with time.
  • Figure 12 shows the energy saved for the target trace in Figure 10.
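The energy accounting can be illustrated with a small helper. This is a sketch under stated assumptions: every reporting sensor would otherwise have been queried (a T2-length control message) and would have sent its detailed data (T3), so querying only a subset saves the difference; the function name and the proportionality constant (transceiver power, omitted here) are assumptions.

```python
def energy_saved(k_rep, k_queried, t2, t3):
    """Energy saved at one detection event, up to a constant power factor:
    (k_rep - k_queried) sensors are spared both the query (t2) and the
    detailed-data transmission (t3). Summing this over events gives a
    cumulative E that is monotonically nondecreasing, since each term >= 0."""
    return (k_rep - k_queried) * (t2 + t3)
```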

4.5 Procedural Description for Target Localization

  • Figure 13 shows the pseudocode of the procedure to generate the probability table for each grid point.
  • For an n by m grid with k sensors, the computational complexity involved in generating the probability table is O(nm·2^k), since in the worst case all k sensors can detect a target at a grid point.
  • Therefore, the computational complexity of the probabilistic localization algorithm is also O(nm·2^k).
  • Even though the worst-case complexity of the localization procedure is exponential in k, in practice the procedure executes in far less time, since the number of sensors that can effectively detect a target at a given grid point is quite small.

5. SIMULATION RESULTS

  • The authors first present simulation results obtained using the VFA algorithm.
  • The simulation results for the probabilistic localization algorithm are then presented using the sensor locations from the VFA algorithm as inputs.
  • TargetTrace starts from tstart and ends at tend, with a time unit of 1.
  • The deployment requirements include the maximum improvement of coverage over random deployment, the coverage for preferential areas, and the avoidance of obstacles.

5.1 Case Study 1: Binary Sensor Detection Model

  • Figures 15–18 present simulation results based on the binary sensor detection model.
  • Figure 16 shows the final sensor positions determined by the VFA algorithm.
  • For the binary sensor detection model, an upper bound on the coverage is given by the ratio of the sum of the circle areas (corresponding to sensors) to the total area of the sensor field.
  • For their example, this upper bound evaluates to 0.628 and it is achieved after 28 iterations of the VFA algorithm.

5.2 Case Study 2: Probabilistic Sensor Detection Model

  • Figures 19–21 present simulation results for the probabilistic sensor model.
  • Figure 20 shows the final sensor positions determined by the VFA algorithm.
  • The improvement of coverage during the execution of the VFA algorithm is also shown.
  • For the probabilistic sensor detection model, even though a large number of grid points are covered, the number of grid points whose coverage probability exceeds the required level is smaller.

5.3 Case Study 3: Sensor Field with a Preferential Area and an Obstacle

  • Preferential areas should be covered first; they are therefore modeled as attractive force sources in the VFA algorithm.
  • Figure 25 shows the virtual movement traces of all sensors during the execution of the VFA algorithm.
  • For case study 2, the VFA algorithm took only 3 min to complete 50 iterations.
  • CPU time is important because sensor redeployment should not take excessive time.

5.4 Case Study 4: Probability-Based Target Localization

  • The authors evaluate the localization algorithm using the results produced by the VFA algorithm in the sensor deployment stage.
  • The authors assume that a maximum of two sensors can be selected for querying by the cluster head.
  • There are a total of 82 such moves in the simulated target movement trace.
  • The parameter E(t) shows the energy saved by the localization algorithm for the detection event at time instant t.
  • Figure 29 shows the estimated target location based on the grid point with the highest score.

5.5 Discussion

  • From the simulation results, the authors see that the VFA algorithm improves the sensor field coverage considerably compared to random sensor placement.
  • The results of the proposed energy-conserving target localization method also show that considerable energy is saved in localizing a target.
  • The authors found that the algorithm converged more rapidly for their case studies if wR ≫ wA.
  • The sensor placement strategy is centralized at the cluster level since every cluster head makes redeployment decisions for the nodes in its cluster.
  • The VFA algorithm, however, is also applicable for alternative location indicators, distance measures, and models of preferential areas and obstacles.

6. CONCLUSION

  • The authors have proposed the virtual force algorithm (VFA) as a practical approach for sensor deployment.
  • The authors have also shown that the proposed probabilistic localization algorithm can significantly reduce the energy consumption for target detection and location.
  • The VFA algorithm can be made more efficient if it is provided with the theoretical bounds on the number of sensors needed to achieve a given coverage threshold.
  • Finally, in future work, the authors will examine continuous coordinate systems instead of discrete coordinate systems.




Sensor Deployment and Target Localization
in Distributed Sensor Networks
YI ZOU and KRISHNENDU CHAKRABARTY
Duke University
The effectiveness of cluster-based distributed sensor networks depends to a large extent on the coverage provided by the sensor deployment. We propose a virtual force algorithm (VFA) as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors. For a given number of sensors, the VFA algorithm attempts to maximize the sensor field coverage. A judicious combination of attractive and repulsive forces is used to determine the new sensor locations that improve the coverage. Once the effective sensor positions are identified, a one-time movement with energy consideration incorporated is carried out, that is, the sensors are redeployed to these positions. We also propose a novel probabilistic target localization algorithm that is executed by the cluster head. The localization results are used by the cluster head to query only a few sensors (out of those that report the presence of a target) for more detailed information. Simulation results are presented to demonstrate the effectiveness of the proposed approach.
Categories and Subject Descriptors: C.2.1 [Computer-Communication Networks]: Network
Architecture and Design—distributed networks; wireless communication; C.2.4 [Computer-
Communication Networks]: Distributed Systems—distributed applications; C.3 [Special-
Purpose and Application-Based Systems]: Real-time and Embedded Systems
General Terms: Algorithms, Performance, Management
Additional Key Words and Phrases: Cluster-based sensor networks, cluster head, sensor field
coverage, sensor placement, virtual force
1. INTRODUCTION
Distributed sensor networks (DSNs) are important for a number of strategic
applications such as coordinated target detection, surveillance, and localiza-
tion. The effectiveness of DSNs is determined to a large extent by the coverage
provided by the sensor deployment. The positioning of sensors affects cover-
age, communication cost, and resource management. In this paper, we focus on
sensor placement strategies that maximize the coverage for a given number of
sensors within a cluster in cluster-based DSNs.
As an initial deployment step, a random placement of sensors in the target area (sensor field) is often desirable, especially if no a priori knowledge of the terrain is available. Random deployment is also practical in military applications,
Authors’ address: Department of Electrical and Computer Engineering, Hudson Hall, PO Box 90291, Duke University, Durham, NC 27708; email: {yz1,krish}@ee.duke.edu.
Permission to make digital/hard copy of part of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.
© 2004 ACM 1539-9087/04/0200-0061 $5.00
ACM Transactions on Embedded Computing Systems, Vol. 3, No. 1, February 2004, Pages 61–91.

where DSNs are initially established by dropping or throwing sensors into the sensor field. However, random deployment does not always lead to effective coverage, especially if the sensors are overly clustered and there is a small concentration of sensors in certain parts of the sensor field. The key idea of this paper is that the coverage provided by a random deployment can be improved using a force-directed algorithm. We present the virtual force algorithm (VFA) as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors. The VFA algorithm is based on disk packing theory [Locateli and Raber 2002] and the virtual force field concept from robotics [Howard et al. 2002]. For a given number of sensors, VFA attempts to maximize the sensor field coverage using a combination of attractive and repulsive forces. During the execution of the force-directed VFA algorithm, sensors do not physically move but a sequence of virtual motion paths is determined for the randomly placed sensors. Once the effective sensor positions are identified, a one-time movement is carried out to redeploy the sensors at these positions. Energy constraints are also included in the sensor repositioning algorithm.
We also propose a novel target localization approach based on a two-step communication protocol between the cluster head and the sensors within the cluster. Since the energy consumption in DSNs increases significantly during periods of activity, which may be triggered, for example, by a moving target [Bhardwaj and Chandrakasan 2002], we propose an energy-conserving method for target localization in cluster-based DSNs. In the first step, sensors detecting a target report the event to the cluster head. The amount of information transmitted to the cluster head is limited; in order to save power and bandwidth, the sensor only reports the presence of a target, and it does not transmit detailed information such as signal strength, confidence level in the detection, imagery, or time series data. Based on the information received from the sensor and the knowledge of the sensor deployment within the cluster, the cluster head executes a probabilistic scoring-based localization algorithm to determine the likely position of the target. The cluster head subsequently queries a subset of sensors that are in the vicinity of these likely target positions.
The sensor field is represented by a two-dimensional grid. The dimensions of the grid provide a measure of the sensor field. The granularity of the grid, that is, the distance between grid points, can be adjusted to trade off the computation time of the VFA algorithm against the effectiveness of the coverage measure. The detection by each sensor is modeled as a circle on the two-dimensional grid. The center of the circle denotes the sensor, while the radius denotes the detection range of the sensor. We first consider a binary detection model in which a target is detected (not detected) with complete certainty by the sensor if a target is inside (outside) its circle. The binary model facilitates the understanding of the VFA model. We then investigate a realistic probabilistic model in which the probability that the sensor detects a target depends on the relative position of the target within the circle. The details of the probabilistic model are presented in Section 1.
The organization of the paper is as follows. In Section 2, we review prior research on topics related to sensor deployment in DSNs. In Section 3, we present details of the VFA algorithm. In Section 4, we present the target localization

algorithm that is executed by the cluster head. In Section 5, we present simulation results using the proposed sensor deployment strategy for various situations. Section 6 presents conclusions and outlines directions for future work.
2. RELATED PRIOR WORK
Sensor deployment problems have been studied in a variety of contexts [Brooks and Iyengar 1997; Iyengar et al. 1995; Qi et al. 2001; Varshney 1996]. In the area of adaptive beacon placement and spatial localization, a number of techniques have been proposed for both fine-grained and coarse-grained localization [Bulusu et al. 2001; Heidemann and Bulusu 2001].
Sensor deployment and sensor planning for military applications are described in Musman et al. [1997], where a general sensor model is used to detect elusive targets in the battlefield. The sensor model is characterized by a window, which includes physical sensor model parameters, sensor location, terrain characteristics, and the data collected in a certain period of time. The sensor coverage analysis is based on a hypothesis of possible target movements and sensor attributes. This analysis generates all possible routes of target movements. Bayesian networks are used to calculate the probability that a certain target is detected in a particular area during particular time intervals. However, the proposed DSN framework in Musman et al. [1997] requires a great deal of a priori knowledge about possible targets. Hence, it is not applicable in scenarios where there is no information about potential targets in the environment.
The deployment of sensors for coverage of the sensor field has been considered for multi-robot exploration [Howard et al. 2002]. Each robot can be viewed as a sensor node in such systems. An incremental deployment algorithm is used in which sensor nodes are deployed one by one in an adaptive fashion. Each new deployment of a sensor is based on the sensed information from sensors deployed earlier. The first sensor is placed randomly. A drawback of this approach is that it is computationally expensive. As the number of sensors increases, each new deployment results in a relatively large amount of computation.
The problem of evaluating the coverage provided by a given placement of
sensors is discussed in Meguerdichian et al. [2001]. The major concern here is
the self-localization of sensor nodes; sensor nodes are considered to be highly
mobile and they move frequently. An optimal polynomial-time algorithm that
uses graph theory and computational geometry constructs is used to determine
the best-case and the worst-case coverage.
Radar and sonar coverage also present several related challenges [Priyantha et al. 2000]. Radar and sonar netting optimization are of great importance for detection and tracking in a surveillance area. Based on the measured radar cross-sections and the coverage diagrams for the different radars, the authors in Priyantha et al. [2000] propose a method for optimally locating the radars to achieve satisfactory surveillance with limited radar resources.
Sensor placement on two- and three-dimensional grids has been formulated as a combinatorial optimization problem, and solved using integer linear programming in Chakrabarty et al. [2001, 2002]. This approach suffers from two main drawbacks. First, computational complexity makes the approach

infeasible for large problem instances. Second, the grid coverage approach relies on perfect sensor detection, that is, a sensor is expected to yield a binary yes/no detection outcome in every case. However, because of the inherent uncertainty associated with sensor readings, sensor detection must be modeled probabilistically [Dhillon et al. 2002]. A probabilistic optimization framework for minimizing the number of sensors for a two-dimensional grid has been proposed recently [Dhillon et al. 2002]. This algorithm attempts to maximize the average coverage
of the grid points. Finally, there exists a close resemblance between the sensor placement problem and the art gallery problem (AGP) addressed by the art gallery theorem [O'Rourke 1987]. The AGP can be informally stated as that of determining the minimum number of guards required to cover the interior of an art gallery. (The interior of the art gallery is represented by a polygon.) The AGP has been solved optimally in two dimensions and shown to be NP-hard in the three-dimensional case. Several variants of the AGP have been studied in the literature, including mobile guards, exterior visibility, and polygons with holes. Other related work includes the placement of a given number of sensors to reduce communication cost [Kasetkasem and Varshney 2001] and optimal sensor placement for a given target distribution [Penny 1998].
Our proposed algorithm differs from prior methods in several ways. First, we consider both the binary sensor detection model and the probabilistic detection model to handle sensors with both high and low detection accuracy. Second, the amount of computation is limited, since we perform a one-time computation and sensor locations are determined at the same time for all the sensor nodes. Third, our approach improves upon an initial random placement, which offers a practical sensor deployment solution. Finally, we investigate the relationship between sensor placement within a cluster and target localization by the cluster head in an effort to conserve energy whenever there are activities in the DSN.
3. VIRTUAL FORCE ALGORITHM
In this section, we describe the underlying assumptions and the virtual force
algorithm (VFA).
3.1 Preliminaries
For a cluster-based sensor network architecture, we make the following as-
sumptions:
—After the initial random deployment, all sensor nodes are able to communicate with the cluster head. This communication is necessary only for the transmission of the new locations to the nodes. This is done only once per node and does not require a large amount of data to be transferred; therefore, the energy consumed for this purpose is ignored.
—The cluster head is responsible for executing the VFA algorithm and managing the one-time movement of sensors to the desired locations.

—In order to minimize the network traffic and conserve energy, sensors only send a yes/no notification message to the cluster head when a target is detected. The cluster head intelligently queries a subset of sensors to gather more detailed target information.
The VFA algorithm combines the ideas of potential field [Howard et al. 2002] and disk packing [Locateli and Raber 2002]. In the sensor field, each sensor behaves as a source of force for all other sensors. This force can be either positive (attractive) or negative (repulsive). If two sensors are placed too close to each other, the closeness being measured by a predetermined threshold, they exert negative forces on each other. This ensures that the sensors are not overly clustered, leading to poor coverage in other parts of the sensor field. On the other hand, if a pair of sensors is too far apart from each other (once again, a predetermined threshold is used here), they exert positive forces on each other. This ensures that a globally uniform sensor placement is achieved.
Consider an n by m sensor field grid, and assume that there are k sensors deployed in the random deployment stage. Each sensor has a detection range r. Assume sensor s_i is deployed at point (x_i, y_i). For any point P at (x, y), we denote the Euclidean distance between s_i and P as d(s_i, P), that is, d(s_i, P) = \sqrt{(x_i - x)^2 + (y_i - y)^2}. Equation (1) shows the binary sensor model [Chakrabarty et al. 2001, 2002] that expresses the coverage c_{xy}(s_i) of a grid point P by sensor s_i:

    c_{xy}(s_i) = \begin{cases} 1, & \text{if } d(s_i, P) < r \\ 0, & \text{otherwise} \end{cases}   (1)
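Equation (1)'s binary model is straightforward to express in code; this sketch assumes Euclidean distance on the grid, and the function name is an illustration, not the paper's notation.

```python
import math

def coverage_binary(sensor, point, r):
    """Binary sensor model of Equation (1): a grid point is covered with
    certainty iff it lies strictly within detection range r of the sensor."""
    d = math.hypot(sensor[0] - point[0], sensor[1] - point[1])
    return 1 if d < r else 0
```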
The binary sensor model assumes that sensor readings have no associated uncertainty. In reality, sensor detections are imprecise; hence, the coverage c_{xy}(s_i) needs to be expressed in probabilistic terms. In this work, we assume the sensor model given by Equation (2), which is motivated in part by Elfes [1990]. Our approach can also be used with alternative sensor models that are based on radio signal propagation models in which signal strength decays as a power of the distance [Rappaport 1996]; the sensor placement and localization algorithms are independent of the sensor models.
    c_{xy}(s_i) = \begin{cases} 0, & \text{if } r + r_e \le d(s_i, P) \\ e^{-\lambda a^{\beta}}, & \text{if } r - r_e < d(s_i, P) < r + r_e \\ 1, & \text{if } r - r_e \ge d(s_i, P) \end{cases}   (2)

where r_e (r_e < r) is a measure of the uncertainty in sensor detection, a = d(s_i, P) - (r - r_e), and \lambda and \beta are parameters that measure the detection probability when a target is at a distance greater than r - r_e but within r + r_e of the sensor. This model reflects the behavior of range sensing devices such as infrared and ultrasound sensors. The probabilistic sensor detection model is shown in Figure 1. Note that distances are measured in units of grid points. Figure 1 also illustrates the translation of a distance response from a sensor to the confidence level as a probability value about this sensor response. Different values of the parameters \lambda and \beta yield different translations reflected by
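The probabilistic model of Equation (2) can be sketched directly from its three cases; only the function and parameter names are assumptions here, the formula follows the equation as reconstructed above.

```python
import math

def coverage_prob(sensor, point, r, r_e, lam, beta):
    """Probabilistic sensor model of Equation (2): certain detection inside
    r - r_e, no detection beyond r + r_e, and exponentially decaying
    probability exp(-lam * a**beta) in the uncertainty band, where
    a = d(s_i, P) - (r - r_e)."""
    d = math.hypot(sensor[0] - point[0], sensor[1] - point[1])
    if d >= r + r_e:
        return 0.0
    if d <= r - r_e:
        return 1.0
    a = d - (r - r_e)
    return math.exp(-lam * a ** beta)
```

The model degenerates to the binary model as r_e approaches 0; larger lam or beta makes the probability fall off faster inside the uncertainty band.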

Citations
Journal ArticleDOI
TL;DR: Several state-of-the-art algorithms and techniques are presented and compared that aim to address the coverage-connectivity issue in wireless sensor networks.

508 citations


Cites background or methods from "Sensor deployment and target locali..."

  • ...Based on the probabilistic sensing model, the notion of probabilistic coverage [74] of a point P(xi , yi ) by a sensor si is defined as follows:...


  • ...This binary disc sensing model can be extended to a more realistic one, called the probabilistic sensing model [74], as illustrated in Fig....


  • ...Similar to the potential field-based approach, a sensor deployment technique based on virtual forces is proposed in [74] and [73] to increase the area coverage after an initial random deployment....


  • ...Since a point might be covered by multiple sensors at the same time, each contributing a certain value of coverage, the concept of total coverage of a point is also defined as follows [74]....


Journal ArticleDOI
TL;DR: This article surveys research progress made to address various coverage problems in sensor networks, and state the basic Coverage problems in each category, and review representative solution approaches in the literature.
Abstract: Sensor networks, which consist of sensor nodes each capable of sensing environment and transmitting data, have lots of applications in battlefield surveillance, environmental monitoring, industrial diagnostics, etc. Coverage which is one of the most important performance metrics for sensor networks reflects how well a sensor field is monitored. Individual sensor coverage models are dependent on the sensing functions of different types of sensors, while network-wide sensing coverage is a collective performance measure for geographically distributed sensor nodes. This article surveys research progress made to address various coverage problems in sensor networks. We first provide discussions on sensor coverage models and design issues. The coverage problems in sensor networks can be classified into three categories according to the subject to be covered. We state the basic coverage problems in each category, and review representative solution approaches in the literature. We also provide comments and discussions on some extensions and variants of these basic coverage problems.

507 citations


Cites background or methods from "Sensor deployment and target locali..."

  • ...2002; Dhillon and Chakrabarty 2003; Zou and Chakrabarty 2004b; Zhang et al. 2006; Stolkin et al. 2007; Stolkin and Florescu 2009]....

    [...]

  • ...Another truncated attenuated disk model [Zou and Chakrabarty 2004a] is defined as ....

    [...]

  • ...Many variants of the simple GREEDY-SET-COVER algorithm have been proposed to solve various node placement problems [Dhillon et al. 2002; Dhillon and Chakrabarty 2003; Zou and Chakrabarty 2004b; Wang and Zhong 2006; Xu and Sahni 2007; Fang and Wang 2008; Wang 2008]....

    [...]

Journal ArticleDOI
TL;DR: This article reviews research activities in WSN and surveys recently developed CPS platforms and systems, including health care, navigation, rescue, intelligent transportation, social networking, and gaming applications.

323 citations


Additional excerpts

  • ...Virtual force [70] Deploy nodes ✓ ✓ ✓...

    [...]

Journal ArticleDOI
TL;DR: In this article, the authors provide a tutorial and survey of recent research and development efforts addressing this issue by using the technique of multi-objective optimization (MOO), and elaborate on various prevalent approaches conceived for MOO, such as the family of mathematical programming-based scalarization methods, and a variety of other advanced optimization techniques.
Abstract: Wireless sensor networks (WSNs) have attracted substantial research interest, especially in the context of performing monitoring and surveillance tasks. However, it is challenging to strike compelling tradeoffs amongst the various conflicting optimization criteria, such as the network’s energy dissipation, packet-loss rate, coverage, and lifetime. This paper provides a tutorial and survey of recent research and development efforts addressing this issue by using the technique of multi-objective optimization (MOO). First, we provide an overview of the main optimization objectives used in WSNs. Then, we elaborate on various prevalent approaches conceived for MOO, such as the family of mathematical programming-based scalarization methods, the family of heuristics/metaheuristics-based optimization algorithms, and a variety of other advanced optimization techniques. Furthermore, we summarize a range of recent studies of MOO in the context of WSNs, which are intended to provide useful guidelines for researchers to understand the referenced literature. Finally, we discuss a range of open problems to be tackled by future research.

311 citations

Journal ArticleDOI
TL;DR: This paper introduces the maximum coverage deployment problem in wireless sensor networks and analyzes the properties of the problem and its solution space to propose an efficient genetic algorithm using a novel normalization method.
Abstract: Sensor networks have a lot of applications such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment and therefore, we need a more intelligent way for sensor deployment. We found that the phenotype space of the problem is a quotient space of the genotype space in a mathematical view. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithms could be further improved by combining with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice faster, but also showed significant performance improvement in quality.

295 citations
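The Monte Carlo evaluation function described in the abstract above can be sketched as follows, assuming a binary disc sensing model. The function name and parameters are illustrative, not taken from the cited paper; the idea is simply to estimate the covered fraction of the field by random sampling.

```python
import math
import random

def mc_coverage(sensors, r, width, height, n_samples=10000, seed=0):
    """Monte Carlo estimate of the coverage ratio (sketch): sample
    random points in a width x height field and count the fraction
    that lie within sensing range r of at least one sensor."""
    rng = random.Random(seed)  # fixed seed for reproducible estimates
    hit = 0
    for _ in range(n_samples):
        px, py = rng.uniform(0, width), rng.uniform(0, height)
        if any(math.hypot(px - sx, py - sy) <= r for sx, sy in sensors):
            hit += 1
    return hit / n_samples
```

As the cited paper notes, the sample count trades evaluation time against estimate quality, so a small sample size can be used early in the search and grown for later generations.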

References
Book
15 Jan 1996
TL;DR: Wireless Communications: Principles and Practice, Second Edition is the definitive modern text for wireless communications technology and system design as discussed by the authors, which covers the fundamental issues impacting all wireless networks and reviews virtually every important new wireless standard and technological development, offering especially comprehensive coverage of the 3G systems and wireless local area networks (WLANs).
Abstract: From the Publisher: The indispensable guide to wireless communications—now fully revised and updated! Wireless Communications: Principles and Practice, Second Edition is the definitive modern text for wireless communications technology and system design. Building on his classic first edition, Theodore S. Rappaport covers the fundamental issues impacting all wireless networks and reviews virtually every important new wireless standard and technological development, offering especially comprehensive coverage of the 3G systems and wireless local area networks (WLANs) that will transform communications in the coming years. Rappaport illustrates each key concept with practical examples, thoroughly explained and solved step by step. Coverage includes: An overview of key wireless technologies: voice, data, cordless, paging, fixed and mobile broadband wireless systems, and beyond Wireless system design fundamentals: channel assignment, handoffs, trunking efficiency, interference, frequency reuse, capacity planning, large-scale fading, and more Path loss, small-scale fading, multipath, reflection, diffraction, scattering, shadowing, spatial-temporal channel modeling, and microcell/indoor propagation Modulation, equalization, diversity, channel coding, and speech coding New wireless LAN technologies: IEEE 802.11a/b, HIPERLAN, BRAN, and other alternatives New 3G air interface standards, including W-CDMA, cdma2000, GPRS, UMTS, and EDGE Bluetooth wearable computers, fixed wireless and Local Multipoint Distribution Service (LMDS), and other advanced technologies Updated glossary of abbreviations and acronyms, and a thorough list of references Dozens of new examples and end-of-chapter problems Whether you're a communications/network professional, manager, researcher, or student, Wireless Communications: Principles and Practice, Second Edition gives you an in-depth understanding of the state of the art in wireless technology—today's and tomorrow's.

17,102 citations

Proceedings ArticleDOI
01 Aug 2000
TL;DR: The randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy are described.
Abstract: This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration.

4,123 citations

Proceedings ArticleDOI
22 Apr 2001
TL;DR: This work establishes the main highlight of the paper-optimal polynomial time worst and average case algorithm for coverage calculation, which answers the questions about quality of service (surveillance) that can be provided by a particular sensor network.
Abstract: Wireless ad-hoc sensor networks have recently emerged as a premier research topic. They have great long-term economic potential, ability to transform our lives, and pose many new system-building challenges. Sensor networks also pose a number of new conceptual and optimization problems. Some, such as location, deployment, and tracking, are fundamental issues, in that many applications rely on them for needed information. We address one of the fundamental problems, namely coverage. Coverage in general, answers the questions about quality of service (surveillance) that can be provided by a particular sensor network. We first define the coverage problem from several points of view including deterministic, statistical, worst and best case, and present examples in each domain. By combining the computational geometry and graph theoretic techniques, specifically the Voronoi diagram and graph search algorithms, we establish the main highlight of the paper-optimal polynomial time worst and average case algorithm for coverage calculation. We also present comprehensive experimental results and discuss future research directions related to coverage in sensor networks.

1,837 citations

Book
05 Dec 1996
TL;DR: This book discusses distributed detection systems, Bayesian Detection Theory, Information Theory and Distributed Hypothesis Testing, and the role of data compression in the development of knowledge representation.
Abstract: 1 Introduction.- 1.1 Distributed Detection Systems.- 1.2 Outline of the Book.- 2 Elements of Detection Theory.- 2.1 Introduction.- 2.2 Bayesian Detection Theory.- 2.3 Minimax Detection.- 2.4 Neyman-Pearson Test.- 2.5 Sequential Detection.- 2.6 Constant False Alarm Rate (CFAR) Detection.- 2.7 Locally Optimum Detection.- 3 Distributed Bayesian Detection: Parallel Fusion Network.- 3.1 Introduction.- 3.2 Distributed Detection Without Fusion.- 3.3 Design of Fusion Rules.- 3.4 Detection with Parallel Fusion Network.- 4 Distributed Bayesian Detection: Other Network Topologies.- 4.1 Introduction.- 4.2 The Serial Network.- 4.3 Tree Networks.- 4.4 Detection Networks with Feedback.- 4.5 Generalized Formulation for Detection Networks.- 5 Distributed Detection with False Alarm Rate Constraints.- 5.1 Introduction.- 5.2 Distributed Neyman-Pearson Detection.- 5.3 Distributed CFAR Detection.- 5.4 Distributed Detection of Weak Signals.- 6 Distributed Sequential Detection.- 6.1 Introduction.- 6.2 Sequential Test Performed at the Sensors.- 6.3 Sequential Test Performed at the Fusion Center.- 7 Information Theory and Distributed Hypothesis Testing.- 7.1 Introduction.- 7.2 Distributed Detection Based on Information Theoretic Criterion.- 7.3 Multiterminal Detection with Data Compression.- Selected Bibliography.

1,785 citations

Book
01 Jan 1987
TL;DR: This book develops the theory of art gallery problems, covering polygon partitions, orthogonal polygons, mobile guards, visibility algorithms, and minimal guard covers, with extensions to three dimensions.
Abstract: Polygon partitions Orthogonal polygons Mobile guards Miscellaneous shapes Holes Exterior visibility Visibility groups Visibility algorithms Minimal guard covers Three-dimensions and miscellany.

1,547 citations


"Sensor deployment and target locali..." refers background in this paper

  • ...Finally, there exists a close resemblance between the sensor placement problem and the art gallery problem (AGP) addressed by the art gallery theorem [O’Rourke 1987]....

    [...]

Frequently Asked Questions (12)
Q1. What are the contributions in "Sensor deployment and target localization in distributed sensor networks" ?

The effectiveness of cluster-based distributed sensor networks depends to a large extent on the coverage provided by the sensor deployment. The authors propose a virtual force algorithm (VFA) as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors. The authors also propose a novel probabilistic target localization algorithm that is executed by the cluster head. The localization results are used by the cluster head to query only a few sensors (out of those that report the presence of a target) for more detailed information.

Their future work will be focused on overcoming the current limitations of the VFA algorithm. Since the current target localization algorithm considers only one target in the sensor field, it is necessary to extend the proposed approach to facilitate scenarios for multiple objects localization. Extensions to nonmobile sensor nodes and situations of sensor node failures will also be considered in future work. The VFA algorithm can be made more efficient if it is provided with the theoretical bounds on the number of sensors needed to achieve a given coverage threshold. 
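One iteration of the virtual force idea summarized above can be sketched as follows. This is a simplified illustration under assumed parameters: a single spacing threshold `D_TH` and unit force gains, both placeholders rather than the paper's values, and the obstacle and preferential-area forces the VFA also models are omitted.

```python
import math

D_TH = 4.0  # assumed desired inter-sensor spacing (illustrative)

def net_force(i, sensors):
    """Sum pairwise virtual forces on sensor i: attractive toward
    sensors farther than D_TH, repulsive from sensors closer than
    D_TH. Magnitude grows linearly with the spacing error."""
    fx = fy = 0.0
    xi, yi = sensors[i]
    for j, (xj, yj) in enumerate(sensors):
        if j == i:
            continue
        d = math.hypot(xj - xi, yj - yi)
        if d == 0:
            continue  # coincident sensors: no defined direction
        mag = (d - D_TH)  # positive pulls together, negative pushes apart
        fx += mag * (xj - xi) / d
        fy += mag * (yj - yi) / d
    return fx, fy

def vfa_step(sensors, step=0.2):
    """Move every sensor a small virtual step along its net force.
    In the VFA the moves stay virtual during iteration; only one
    physical relocation is performed once positions are final."""
    return [(x + step * net_force(i, sensors)[0],
             y + step * net_force(i, sensors)[1])
            for i, (x, y) in enumerate(sensors)]
```

Iterating `vfa_step` spreads crowded sensors apart and draws distant ones together, which is the mechanism the paper uses to raise coverage before the single energy-aware redeployment move.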

In order to conserve power and bandwidth, the message from the sensor to the cluster head is kept very small; in fact, the presence or absence of a target can be encoded in just one bit. 

Because of the inherent uncertainty associated with sensor readings, sensor detection must be modeled probabilistically [Dhillon et al. 2002]. 

For the binary sensor detection model without the energy constraint, the upper bound value, denoted c̄, is kπr²; for the probabilistic sensor detection model or the binary sensor detection model with the energy constraint, c(loops) is checked for saturation by defining c̄ as the average of the coverage ratios over the last 5 (or 10) iterations. 
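The saturation check described in this snippet can be sketched directly. The window size and tolerance below are assumed placeholders; the paper's text mentions averaging over the last 5 or 10 iterations but does not fix a tolerance here.

```python
def saturated(history, window=5, tol=1e-3):
    """Check whether the coverage ratio has saturated: compare the
    latest coverage value against the mean of the last `window`
    iterations (tol is an assumed convergence tolerance)."""
    if len(history) < window:
        return False  # not enough iterations to judge
    recent = history[-window:]
    c_bar = sum(recent) / window
    return abs(history[-1] - c_bar) < tol
```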

For a cluster-based sensor network architecture, the authors make the following assumptions: after the initial random deployment, all sensor nodes are able to communicate with the cluster head. 

The effectiveness of cluster-based distributed sensor networks depends to a large extent on the coverage provided by the sensor deployment. 

The target starts to move at t = t_start from the grid point marked as “Start” and finishes at t = t_end at the grid point marked as “End.” 

After the VFA algorithm is used to determine the final sensor locations, the cluster head generates a detection probability table for each grid point. 

Since the term (1 − c_x,y(s_i))(1 − c_x,y(s_j)) expresses the probability that neither s_i nor s_j covers the grid point at (x, y), the probability that the grid point (x, y) is covered is given by Equation (5). 

For the binary sensor detection model, an upper bound on the coverage is given by the ratio of the sum of the circle areas (corresponding to sensors) to the total area of the sensor field. 

Note that in both cases, the coverage is effective only if the total area kπr² that can be covered with the k sensors exceeds the area of the grid.
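The upper-bound check stated in these last two snippets amounts to a one-line ratio; a minimal sketch, with the function name chosen here for illustration:

```python
import math

def coverage_upper_bound(k, r, width, height):
    """Upper bound on the coverage ratio for the binary sensing
    model: the summed disc area k*pi*r^2 over the field area,
    capped at 1 since a ratio above 1 just means the discs could
    in principle blanket the whole grid."""
    return min(1.0, k * math.pi * r * r / (width * height))
```

Coverage is effective in the sense of the text exactly when this ratio reaches 1 before capping, i.e. when kπr² exceeds the grid area.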