Computing Aggregates for Monitoring Wireless
Sensor Networks
Jerry Zhao and Ramesh Govindan
Department of Computer Science
University of Southern California
Los Angeles, CA 90089
Email: {zhaoy,ramesh}@usc.edu
Deborah Estrin
Department of Computer Science
University of California, Los Angeles
Los Angeles, CA 90095
Email: destrin@cs.ucla.edu
Abstract—Wireless sensor networks involve very large numbers of small, low-power, wireless devices. Given their unattended nature, and their potential applications in harsh environments, we need a monitoring infrastructure that indicates system failures and resource depletion. In this paper, we briefly describe an architecture for sensor network monitoring, then focus on one aspect of this architecture: continuously computing aggregates (sum, average, count) of network properties (loss rates, energy levels, packet counts, etc.). Our contributions are two-fold. First, we propose a novel tree construction algorithm that enables energy-efficient computation of some classes of aggregates. Second, we show through actual implementation and experiments that wireless communication artifacts in even relatively benign environments can significantly impact the computation of these aggregate properties. In some cases, without careful attention to detail, the relative error in the computed aggregates can be as much as 50%. However, by carefully discarding links with heavy packet loss and asymmetry, we can improve accuracy by an order of magnitude.
I. INTRODUCTION
Wireless sensor networks will consist of large numbers of
small, battery-powered, wireless sensors. Deployed in an ad-
hoc fashion, those sensors will coordinate to monitor physical
environments at fine temporal and spatial scales [1]–[3]. Wire-
less sensor networks will be autonomously deployed in large
numbers. Energy-efficiency is a key design criterion for these
sensor networks.
A monitoring infrastructure will be a crucial component of a
deployed sensor network. Such an infrastructure indicates node
failures, resource depletion, and other abnormalities. Our first
contribution is an architecture for sensor network monitoring
infrastructures, one that consists of three classes of software.
The first class of software continuously collects aggregates
of network properties (we call them network digests) in the
background. Triggered by sudden changes in these properties,
scans can be invoked to provide global, yet aggregated, views
of system state. Such views can indicate the location of per-
formance problems or impending failure within the network.
Dumps can then be used to collect detailed node state to debug
the problem. These three pieces of software are invoked at
different spatial and temporal scales, and will allow accurate,
yet low-overhead sensor network monitoring.
Our second contribution is the design of protocols to contin-
uously compute network digests. Abstractly, a digest is defined
by a digest function f(v_1, v_2, ..., v_n), where v_i is the value contributed by each node i. In this paper, we consider v_i's that represent some aspect of network operation: node energy level, degree of connectivity, volume of traffic seen, etc. A key property of the class of aggregates we are interested in is decomposability [4], [5]. f is decomposable by a function g if it can be expressed as:

f(v_1, ..., v_n) = g(f(v_1, ..., v_k), f(v_{k+1}, ..., v_n))
Decomposable digest functions include min, max, average, and
count. The median, for example, is not decomposable.
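(To see why, note that the medians of {1, 2, 3} and {100, 101} are 2 and 100.5, yet the median of the combined set {1, 2, 3, 100, 101} is 3; the partial results alone do not determine the overall result.)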
Aggregation has been discussed in different contexts such
as large scale databases [6], active networks [7], and wireless
sensor network applications [5], [8], [9]. However, computing
digests for sensor networks poses unique design challenges.
Digests are computed continuously and from the entire net-
work. Furthermore, computing digests represents background
activity and not the sensing task done by the applications (in
contrast to queries that compute the average temperature of
a region, for example). Finally, prior aggregation schemes
have been designed to deliver aggregates on-demand to a
small number of users outside the network; we argue that
digests should be continuously distributed throughout the
entire network. This will allow users low-latency access to
digests from any node within the network. In addition, it may
also enable applications to tailor their performance based on
the values of digests (e.g., shift to a different mode of operation
when the average energy level falls below a certain threshold).
A. Our Approach
These observations lead to two key constraints in the design
of protocols for digest computation. First, digest protocols
must be aggressively energy-efficient, far more so than other
components of the system. Second, because there isn't a natural initiator for a digest (e.g., a user node), the routing structures for digest computations must be autonomously derived.
To achieve aggressive energy-efficiency, we propose to pig-
gyback digest computation messages on neighbor-to-neighbor
communication. We observe that many proposed sensor net-
work protocols for medium access [10] and for topology
control [11], [12] include periodic beaconing. Digests, being

small by definition, can easily be piggybacked on such com-
munication. While not itself a new idea, this approach seems
almost necessary to achieve very low energy expenditures
for digest computation. This approach trades off latency for
energy savings. We quantify this trade-off in a later section.
We observe that some decomposable digest functions like min and max can be computed using a technique we call digest diffusion¹. For example, suppose we are interested in computing a digest that represents the value of minimum energy at any node in the network; call this value E_min. Each node periodically broadcasts to its neighbors (e.g., by piggybacking on other messages) its own energy, as well as its current estimate of E_min. Each node also sets its estimate of E_min to the lower of its own energy level and the lowest among the estimates heard from its neighbors. After a few iterations (intuitively, a number proportional to the network diameter), all nodes converge to the right E_min. In other words, E_min diffuses out to the network. (This is, of course, a simplified description. We describe our protocol more fully in Section IV.)
Thus, digest diffusion can be used to evaluate one class of
digests, and satisfy two important requirements we discussed
above. First, every node ends up with an estimate of the
digest. Second, these computations do not need to be explicitly
initiated by some external action (e.g. by injecting a query into
the system).
Not all digest functions can be computed using iterative
diffusing computations. For example, the average and count
functions, being non-idempotent, can be particularly sensitive
to duplicates. Our simple diffusing computation can easily
deliver duplicate data to a node. To compute this class of
digests, though, we observe that computing a min or a max
using an iterative diffusing computation results in a tree that
spans the entire network. As we show in Section IV, we can
use this tree to compute this class of digests, by propagating
digest values to the root of the tree. Note that this tree is not constructed by user initiation, but as a by-product of computing a min or a max digest².
Finally, we note that some digests are particularly sensitive to packet loss. Count is an example of this. That packet loss on a wireless link can be significant is anecdotally well known. However, not much work has gone into quantifying the extent of loss, until recently. Morris et al. show the prevalence of links with heavy loss and asymmetry in 802.11 environments [13]. Our own experiments confirm this in Section V for a network of motes (the sensor node platform). Even in fairly benign environments, we observe widespread and time-varying occurrences of heavy link loss and asymmetry. We also demonstrate that simple implementations of the count digest can exhibit severe error in these environments. We then show that a careful implementation that selectively avoids links with heavy loss and asymmetry can improve the accuracy of count computation, sometimes by an order of magnitude.

¹ This is in contrast to directed diffusion [8], a data-centric routing paradigm.
² The notion here is that the network will be continuously computing several kinds of digests, depending on the needs of the particular deployment. We think at least one of them will be a min or max digest, and that can form the basis for computing other digests.

Fig. 1. Monitoring wireless sensor networks
To our knowledge, this paper is the first to articulate an
architecture for sensor network monitoring. Our paper fleshes
out a very practical implementation of one component of this
architecture, discussing how real-world artifacts can seriously
impact the performance of the monitoring system.
B. Paper Organization
The rest of the paper is structured as follows. The next two sections describe an infrastructure to monitor wireless sensor networks in detail, and give a brief definition of aggregate network properties. Section IV describes our approach that enables energy-efficient computation of aggregate properties. Section V describes link quality estimation and rejection algorithms to reduce the negative impact of packet loss on the performance of the computation process. The performance of our design is evaluated by implementation on a testbed and a simulator in Sections VI and VII. We conclude with related work and a discussion of the strengths and limitations of our approach.
II. MONITORING WIRELESS SENSOR NETWORKS: AN
ARCHITECTURE
While the main focus of this paper is a specific set of
diagnostic tools for sensor networks (digests), we describe in
this section our vision for how these tools fit into a coherent
architecture for monitoring sensor networks. This architecture,
which is quite different from the classical SNMP [14] archi-
tectural model (centralized collection of per-device statistics),
is motivated by the need for energy-efficient communication
in sensor networks.
Our architecture is distinguished by three levels of monitor-
ing, where each level consists of a class of tools. Each level is
distinguished from the next in the spatial or temporal scale at
which the corresponding tools are invoked. This is illustrated
in Figure 1.
The first component consists of tools such as dump. Upon a user's request, dump collects detailed node state or logs over the network for diagnosis. For example, we could dump the raw temperature readings from some sensors to debug the collaborative event detection algorithm between nearby nodes. Dump can be implemented as an application on top of directed diffusion [8]. Because the amount of data per node may be large, dump should be invoked only at small spatial scales (i.e., from a few nodes), and only when there is a reasonable certainty of a problem at those nodes.
To guide system administrators to the location of problems,
we envision a second class of tools that we call scans. Scans represent abstracted views of resource consumption throughout the entire network, or throughout a significant section
of the network. Thus, this class of tools has a significantly
greater spatial extent than dumps. One example of a scan is
the escan [15]. To compute an escan, a special user-gateway
node initiates collection of node state, for instance residual
energy supply level, from every node in the system. Instead of
delivering the raw data to the user node, escan computation takes
advantage of in-network aggregation. Residual energy level
data from individual nodes are combined into more compact
forms, if and only if those nodes are nearby and have similar
energy levels. By pushing the data processing into the network,
escan constructs an approximate system-wide view of energy
supply levels with much less communication cost compared
to centralized collection. From such a global view, users are
able to isolate those nodes upon which they can invoke tools
such as dump.
Clearly, the energy cost of collecting an escan can be
significant, and our third class of tools, digests, can help
alert users to error conditions (partitions, node deaths) within
the network. As we have described before, a digest is an
aggregate of some network property. For example, the size of the network, i.e., the number of nodes, can indicate several system health conditions: a sudden drop in the network size can be taken as a hint of massive node failure or network partitioning. In this paper, we show how to collect aggregates efficiently, accurately, and continuously. Digests, like escans,
also span the entire network, or a large spatial extent. However,
unlike escans, they are continuously computed. Digests are not
intended to isolate network problems, merely to tell users when
to invoke network-wide scans.
III. DEFINITIONS, ASSUMPTIONS, AND MODELS
We assume that the sensor network consists of n nodes
deployed in an ad-hoc manner. Nodes have unique identifiers.
Nodes may crash due to failures or resource depletion and
new nodes may join the network. Nodes are static or move
infrequently. Each node can communicate with its neighbors
within a certain range. Communication between nodes may be
lost due to noise or collision. We do not assume a specific
MAC or routing protocol, but do assume the radio capability
to broadcast messages to neighbors.
Recall that a digest function is denoted by f(v_1, v_2, ..., v_n), where v_i is the value contributed by sensor node i. Additionally, f is decomposable by a function g:

f(v_1, v_2, ..., v_n) = g(f(v_1, ..., v_k), f(v_{k+1}, ..., v_n))
A decomposable digest function is one in which the final result can be calculated from partial results. The values v may either be scalars or vectors. For example, to compute the average residual energy supply level in the network, we can define v = <s, c> and the aggregate function

g_{AVG}(v_1, v_2) = <v_1.s + v_2.s, v_1.c + v_2.c>

where s is the sum of energy levels and c is the node count. The average value is derived from the final result as v.s/v.c.
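To make the decomposition concrete, the following is a minimal sketch (not from the paper; the names AvgPartial and g_avg are ours) of how <s, c> values combine pairwise and how the average is recovered at the end:

```python
from typing import NamedTuple

class AvgPartial(NamedTuple):
    s: float  # sum of residual energy levels aggregated so far
    c: int    # count of nodes aggregated so far

def g_avg(v1: AvgPartial, v2: AvgPartial) -> AvgPartial:
    # Decomposability: partial results combine component-wise,
    # regardless of how the nodes were partitioned.
    return AvgPartial(v1.s + v2.s, v1.c + v2.c)

# Each node contributes <its own energy level, 1>.
readings = [AvgPartial(e, 1) for e in (3.2, 2.7, 4.1, 1.9)]

total = readings[0]
for r in readings[1:]:
    total = g_avg(total, r)

print(total.s / total.c)  # final result v.s / v.c -> 2.975
```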
The problem of digest computation is: each node i provides a value v_i as its contribution to the digest function f, where v_i may change over time. The goal of the digest computation mechanism is for each node in the network to contain a continuous estimate of the current value of f. In this paper, we limit the digest functions we consider to V_MAX, V_AVG, V_SUM, and V_CNT, which respectively denote the maximum, average, and sum of v_1, v_2, ..., v_n, and the number of nodes in the network.
There is a specific rationale for our choice of digest functions, since these functions are qualitatively different from each other. Using terminology from [5], a digest function is monotonic if and only if, when two partial results r_1 and r_2 are combined by a function r = g(r_1, r_2), the result r satisfies r ≥ r_i for i = 1, 2, for an ordering relationship ≥. It is exemplary if the final result can be determined from one single contribution value. In our set of digest functions, V_MAX is monotonic and exemplary, while V_CNT is monotonic but not exemplary, and V_SUM and V_AVG may not be monotonic (if negative values are allowed) and are certainly not exemplary. Finally, as we shall argue later, the loss sensitivity of V_AVG may be different from that of V_MAX.
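As an illustration of these definitions (our own sketch, not code from the paper), the combining functions for V_MAX and V_CNT can be written as follows; max always returns one of its inputs (exemplary), whereas a combined count cannot be recognized from any single node's contribution of 1 (not exemplary):

```python
def g_max(r1, r2):
    # Monotonic: the result is >= each partial result.
    # Exemplary: the result always equals one of the contributed values.
    return max(r1, r2)

def g_cnt(r1, r2):
    # Monotonic for non-negative counts, but not exemplary:
    # the final count cannot be read off a single contribution.
    return r1 + r2

assert g_max(3, 7) == 7                        # equals one contribution
assert g_cnt(2, 3) == 5 and g_cnt(2, 3) >= 3   # grows monotonically
```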
IV. COMPUTING DIGESTS
In this section, we discuss techniques for computing digest
functions for sensor network monitoring.
A naive, centralized, approach to compute digest functions
is to have each node send its value to a designated head node
H. H computes the final result from all the values received.
This approach does not scale well with network size. First,
there is possible message implosion at nodes near H. Second,
it can incur a heavy processing workload at H to aggregate values from all nodes. Third, H represents a single point of failure.
Our approach leverages in-network aggregation. Each node
computes a partial result of the digest function, and passes
that result to other neighboring nodes (we describe the exact
technique in the next two sections). For this, we leverage the
fact that our digest functions are all decomposable. In-network
aggregation has better energy-efficiency characteristics; com-
munication overhead is less, and the computation is evenly
distributed.
A standard way of computing these digest functions using
in-network processing is to use a hierarchy and propagate the
digest up to the root, computing partial values along the way.
Such an approach is exemplified by that of Gupta et al. [4], where node location is leveraged to construct a
“Grid Box” hierarchy. However, their approach for computing
aggregates requires leader election within grid boxes, and other
maintenance overhead. One requirement for our monitoring
application is that digest computation has to be aggressively
energy-conserving. Another approach, with similar drawbacks
from the perspective of monitoring, is the idea of recursive
clustering elections [16], [17].
Instead of using more heavyweight hierarchy and clustering
techniques, we use a two-pronged approach for computing
digests.
• We note that some of our digests can be computed by a scheme we call digest diffusion.
• Digest diffusion implicitly builds a tree. We use this tree to compute digest functions by propagating partial results up the tree towards the root.
We now describe these in some more detail.
A. Digest Diffusion
We note that monotonic and exemplary digest functions can be computed efficiently by localized information exchanges between one-hop neighbors. We call this technique digest diffusion. We now describe digest diffusion for V_MAX. Initially, each node i sets its perceived maximum value m_i = v_i, source of maximum s_i = i, hop distance h_i = 0, and periodically sends a tuple M = (m_i, s_i, h_i) to its neighbors. Upon receiving a message (m_j, s_j, h_j) from neighboring node j with m_j > m_i, node i sets m_i = m_j, s_i = s_j, h_i = h_j + 1, and parent p_i = j. If m_j = m_i, it further checks if s_j > s_i, which guarantees strict monotonicity. Node i may switch its parent node from j to node k when k provides the same maximum value but a shorter hop distance h_k < h_j. Gradually, within O(d/(1 − p)) steps (d is the diameter of the network, p is the packet loss rate per link), all nodes agree on a node s with the maximum value v.
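The per-node update rule above can be summarized in the following sketch (our own illustration under simplified assumptions; the Node class and on_beacon handler are hypothetical names, not the paper's implementation):

```python
class Node:
    def __init__(self, node_id, value):
        self.id = node_id
        self.m = value      # perceived maximum m_i, initially v_i
        self.s = node_id    # source of the maximum s_i, initially i
        self.h = 0          # hop distance h_i to that source
        self.parent = None  # parent p_i in the implicit digest tree

    def beacon(self):
        # Tuple M = (m_i, s_i, h_i), piggybacked on periodic
        # neighbor-to-neighbor traffic.
        return (self.m, self.s, self.h)

    def on_beacon(self, j, m_j, s_j, h_j):
        # Adopt a strictly larger maximum (ties broken by source id,
        # which keeps the ordering strict) and record the sender as parent.
        if (m_j, s_j) > (self.m, self.s):
            self.m, self.s, self.h, self.parent = m_j, s_j, h_j + 1, j
        # Same maximum via a shorter path: switch parent to reduce hops.
        elif (m_j, s_j) == (self.m, self.s) and h_j + 1 < self.h:
            self.h, self.parent = h_j + 1, j
```

Repeated exchanges of these tuples drive every node's estimate toward the global maximum, and the recorded parents form the digest tree discussed next.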
This fusion-based approach is simple but efficient. It is fully distributed and requires no base-station or user node to initiate the computation. The computation converges in a time proportional to the network diameter. It is energy-efficient and scales well with network size since the overhead at each node is constant over time. The information exchanged between neighbors is small and can easily be piggybacked³ on other neighbor-to-neighbor communication (e.g., beacons sent by MAC protocols or protocols for topology adaptation⁴).
³ Of course, if the network is continuously computing many digests in parallel, then piggybacking does not make sense. In that situation, one can combine the information required from several digests into one message and achieve similar amortization benefits.
⁴ The advantage of piggybacking is that it can avoid the header and framing costs associated with sending the information in a separate packet. In addition, in sensor networks, waiting to piggyback the information on other transmissions can save the cost of turning radios on and off (e.g., if the MAC layer has turned off the radio for power saving) compared to sending the information immediately.

B. Computing Other Digests

However, digest diffusion cannot be used to compute non-exemplary digests, such as V_AVG. One of the fundamental reasons is that when a node tries to aggregate the V_AVG partial results from its neighbors, it is difficult to determine if there are any overlaps between those results. For example, in Figure 4, node E tries to aggregate the partial results for V_AVG from C and D. However, without explicit knowledge of whether the values from A and B have been accounted for by C, by D, or by both, it is impossible for E to aggregate correctly.
We note that digest diffusion implicitly constructs a tree whose root is the node that contributes to the value of the exemplary digest (e.g., the node that has the maximum value in a V_MAX digest). Digest diffusion also computes a parent p_i for each node i (see Section IV-A). We call this tree the digest tree.
Other digest functions can be computed easily on this tree. For example, with the aggregation tree from the V_MAX computation, it is straightforward to compute V_AVG: node i periodically calculates a partial result from the most recent reports from its children c_1, c_2, ..., c_k, for node count

n_i = \sum_{j=1}^{k} n_{c_j} + 1

and average value

a_i = \frac{\sum_{j=1}^{k} n_{c_j} \cdot a_{c_j} + v_i}{n_i}

It then sends out <a_i, n_i> to its parent p_i along the tree. Hop by hop, the partial results are propagated up to the root, where the final result of V_AVG is calculated. It takes O(d/(1 − p)) time to converge on the correct result, given that the tree structure is stable. We may further reduce communication cost by incrementally updating the partial digests. Only those subtrees that have nodes whose values have changed beyond a certain threshold need to send their partial results. Finally, in a similar fashion, the root can propagate a computed digest down the tree so that all nodes can maintain a current estimate of the digest.
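A minimal sketch of this per-node aggregation step (our own illustration; the function name and report format are assumptions, not the paper's code):

```python
def aggregate_avg(own_value, child_reports):
    """Combine the children's <a, n> reports with this node's own value.

    child_reports holds (a_c, n_c) pairs, one per child; the return value
    is the <a_i, n_i> pair the node sends to its parent p_i.
    """
    n_i = sum(n_c for _, n_c in child_reports) + 1
    a_i = (sum(a_c * n_c for a_c, n_c in child_reports) + own_value) / n_i
    return a_i, n_i

# A node with value 4.0 whose children report <2.0 over 3 nodes> and
# <5.0 over 2 nodes> forwards <3.33..., 6> to its parent.
print(aggregate_avg(4.0, [(2.0, 3), (5.0, 2)]))
```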
The digest tree construction process is fully distributed and robust. The tree migrates adaptively when the current root fails, since digest diffusion will try to find the new value for V_MAX. Not all metrics are suitable for constructing the aggregation tree. For example, the maximum node link degree is a bad choice because the node with the maximum degree (maximum number of neighbors) may change frequently over time. A stable tree can avoid short-term errors in the computed digest values caused by root switching. A digest tree based on the maximum coarse-grained residual energy level of a node tends to remain stable over a relatively long time period. When the current root node is exhausted, the protocol changes the root of the tree to the next most energy-rich node in the network.
C. Digest Tree Maintenance
Maintenance of the digest tree against topology changes such as node failure and addition is also combined within the process of updating V_MAX: each node periodically broadcasts a message M = (m, s, h) for updating V_MAX every T_0
