
Evaluation of Queueing Policies and Forwarding Strategies for Routing in
Intermittently Connected Networks
Anders Lindgren, Kaustubh S. Phanse
Department of Computer Science and Electrical Engineering
Lule˚a University of Technology
SE-971 87 Lule˚a, Sweden
{dugdale,kphanse}@sm.luth.se
Abstract
Delay tolerant networking (DTN), and more specifically the sub-
set known as intermittently connected networking, is emerging
as a solution for supporting asynchronous data transfers in chal-
lenging environments where a fully connected end-to-end path
between a source and destination may never exist. Message de-
livery in such networks is enabled via scheduled or opportunis-
tic communication based on transitive local connectivity among
nodes influenced by factors such as node mobility. Given the in-
herently store-and-forward and opportunistic nature of the DTN
architecture, the choice of buffer management policies and mes-
sage forwarding strategies can have a major impact on system
performance. In this paper, we propose and evaluate different
combinations of queueing policies and forwarding strategies for
intermittently connected networks. We show that a probabilistic
routing approach along with the correct choice of buffer man-
agement policy and forwarding strategy can result in significant
performance improvements in terms of message delivery, over-
head, and end-to-end delay.
1 Introduction
There are many harsh and challenging environments, for exam-
ple, deep space communication [6], digital content delivery in
rural areas with under-developed infrastructure [13, 15], wildlife
and habitat monitoring [9, 14], where traditional networking so-
lutions are not viable and where even extensions such as ad hoc
networking do not provide a definite solution. These environ-
ments are typically characterized by disruption of communica-
tion links leading to frequent and long durations of network par-
titioning, long delays, limited resources, and heterogeneity.
Several solutions have been proposed to handle routing in such
networks. These range from pure epidemic routing [17], where
each message is “flooded” through the network to reach as large
part of the network as possible (thus also reaching its destina-
tion), to more sophisticated mechanisms. Such mechanisms can
make use of special knowledge of the scenario [1, 9, 13, 14] or
can be probabilistic routing protocols [11] that try to establish the
probability that a certain node will be able to deliver a message to
its destination based on previous events. However, an important
issue that has been largely disregarded in previous work is the
impact of buffer management policies and forwarding strategies
on the performance of the communication system. This is crucial
given the inherently store-and-forward nature of the DTN archi-
tecture, and is a critical piece of the tradeoffs that exist between
the overhead of random opportunistic policies and the ability to
exploit knowledge. In this paper, we propose a set of queueing
policies and forwarding strategies. We compare the performance
of two routing protocols in presence of different combinations of
the proposed policies.
The rest of the paper is organized as follows. Section 2 de-
scribes some related work on routing protocols for intermittently
connected networks and gives the background to the work pre-
sented in this paper. In Section 3 we propose some queueing
policies and forwarding strategies, which are later evaluated in
this paper. Section 4 describes the simulation setup used for the
evaluations, while Section 5 presents the results of our perfor-
mance evaluation. We summarize our conclusions and discuss
future work in Section 6.
2 Routing Protocols for Intermittently
Connected Networks
2.1 Epidemic Routing
Vahdat and Becker present a routing protocol for intermittently
connected networks called Epidemic Routing [17]. This proto-
col relies on the theory of epidemic algorithms by doing pair-
wise exchange of messages between nodes as they come in

contact with each other to eventually deliver messages to their
destination. Hosts buffer messages if no path to the destination
is currently available. An index of these messages, called a sum-
mary vector, is kept by the nodes, and when two nodes meet they
exchange summary vectors. After this exchange, each node can
determine if the other node has any message not previously re-
ceived by it. In that case, the node requests the messages from
the other node. The message exchange is illustrated in Figure 1.
This means that as long as buffer space is available, messages
will spread like an epidemic of a disease through the network as
nodes meet and “infect” each other.
Figure 1: Epidemic Routing message exchange between nodes A
and B (summary vector exchange, message request, message
transfer).
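The exchange described above can be sketched in a few lines. The Node class, message identifiers, and the anti_entropy helper below are illustrative, not part of the paper's simulator:

```python
# Sketch of the Epidemic Routing summary-vector exchange described
# above. Names and message IDs are illustrative.

class Node:
    def __init__(self, name):
        self.name = name
        self.buffer = {}  # message id -> message payload

    def summary_vector(self):
        # Index of the messages currently buffered at this node.
        return set(self.buffer)

    def request_missing(self, peer_summary):
        # IDs the peer holds that this node has not seen yet.
        return peer_summary - self.summary_vector()

def anti_entropy(a, b):
    """One pair-wise exchange: A and B swap summary vectors, then
    each requests and copies the messages it is missing."""
    want_a = a.request_missing(b.summary_vector())
    want_b = b.request_missing(a.summary_vector())
    for mid in want_a:
        a.buffer[mid] = b.buffer[mid]
    for mid in want_b:
        b.buffer[mid] = a.buffer[mid]

a, b = Node("A"), Node("B")
a.buffer = {1: "m1", 2: "m2"}
b.buffer = {2: "m2", 3: "m3"}
anti_entropy(a, b)
# After the exchange both nodes hold messages 1, 2 and 3.
```

As long as buffers allow, repeated pair-wise exchanges of this kind spread every message "epidemically" through the network.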
2.2 PRoPHET
If buffer space and bandwidth are unlimited, Epidemic Routing
is likely to be good at message delivery. This is, however, not
the case in reality, where resources such as bandwidth and buffer
space are constrained. Furthermore, in real life, nodes often fol-
low a non-random mobility pattern. To leverage mobility and
use scarce resources efficiently, we have previously proposed the
Probabilistic Routing Protocol using History of Encounters and
Transitivity (PRoPHET) [10, 11].
To accomplish this, a probabilistic metric called delivery pre-
dictability, P(A,B) ∈ [0, 1], is established at every node A for
each known destination B. This indicates how likely it is that
this node A will be able to deliver a message to a particular des-
tination B and is used in deciding which messages are being
transferred between two nodes as they meet. Instead of just ex-
changing summary vectors upon encounter, nodes also exchange
information about the P -values they have for known destina-
tions. This information is used to update the probability informa-
tion at the receiving node, and is also used to determine whether
or not to forward a particular message to the encountered node.
This decision is made according to the forwarding strategy in use
(such as those described in Section 3.2).
The calculation of the delivery predictabilities has three parts.
Whenever a node is encountered, the delivery predictability for
that node is updated as shown in Eq. 1. This is done so that
a node has higher delivery predictability for nodes that are fre-
quently encountered than for those that are seldom encoun-
tered.
P(A,B) = P(A,B)_old + (1 − P(A,B)_old) × P_init    (1)
When two nodes exchange information about other nodes they
know about, this information is used to update the delivery pre-
dictabilities for other nodes as shown in Eq. 2. Here, node A
can update its delivery predictability for each node C that B has
knowledge of using the transitive property of PRoPHET.
P(A,C) = P(A,C)_old + (1 − P(A,C)_old) × P(A,B) × P(B,C) × β    (2)
Finally, there is an aging of the delivery predictabilities, so they
are periodically updated as shown in Eq. 3.
P(A,B) = P(A,B)_old × γ^k    (3)
In the equations above, P_init ∈ (0, 1], γ ∈ (0, 1), and β ∈ [0, 1]
are configurable parameters of the PRoPHET protocol, and k is
the number of time units that have passed since the value was
last aged.
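The three updates in Eqs. 1-3 can be sketched directly. The parameter values follow Table 2; the shared dictionary of P-values is a simplification (in the protocol each node keeps its own table), and the function names are hypothetical:

```python
# Sketch of the PRoPHET delivery-predictability updates (Eqs. 1-3).
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98

def on_encounter(P, a, b):
    # Eq. 1: a direct encounter raises P(a,b) toward 1.
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1 - old) * P_INIT

def transitive_update(P, a, b, c):
    # Eq. 2: a learns about c through b's predictability for c.
    old = P.get((a, c), 0.0)
    P[(a, c)] = old + (1 - old) * P.get((a, b), 0.0) * P.get((b, c), 0.0) * BETA

def age(P, a, b, k):
    # Eq. 3: decay P(a,b) by gamma^k after k time units.
    P[(a, b)] = P.get((a, b), 0.0) * GAMMA ** k

P = {}
on_encounter(P, "A", "B")        # P(A,B) becomes 0.75
on_encounter(P, "B", "C")        # B's own value, kept in the same dict here
transitive_update(P, "A", "B", "C")
age(P, "A", "B", 10)
```

With these values, the transitive update gives P(A,C) = 0.75 × 0.75 × 0.25 = 0.140625, and aging scales P(A,B) by 0.98^10.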
3 Queueing Policies and Forwarding
Strategies
Nodes may have to buffer messages for a long time and in case
of congestion decide which messages to drop from its queue.
They also have to decide which messages to forward to another
node that is encountered. In this section we describe the different
queueing policies and forwarding strategies used in this paper for
the evaluation in Section 5.
3.1 Queueing Policies
We use the following queue management policies, which define
which message should be dropped if the buffer is full when a
new message has to be accommodated.
FIFO First in first out Handle the queue in FIFO order.
The message that first entered the queue is the first message
to be dropped.
MOFO Evict most forwarded first In an attempt to maxi-
mize the dispersion of messages through the network, this
policy requires that the routing agent keeps track of the
number of times each message has been forwarded. The
message that has been forwarded the largest number of
times is the first to be dropped, thus giving messages that
have been forwarded fewer times more chances of getting
forwarded.

MOPR Evict most favorably forwarded first Every node
keeps a value FP (initialized to zero) for each message
in its queue. Each time the message is forwarded, FP
is updated according to Eq. 4, where P is the delivery
predictability the receiving node has for the message's
destination.
FP = FP_old + P    (4)
The message with the highest FP value is the first to be
dropped. MOPR can be considered a weighted version
of MOFO, where instead of increasing a counter by one
each time a message is forwarded, it is increased by the
delivery predictability of the other node for the destination.
SHLI Evict shortest life time first In the DTN architecture
[2], each message has a timeout value which specifies when
it is no longer useful and should be deleted. If this policy
is used, the message with the shortest remaining life time is
the first to be dropped.
LEPR Evict least probable first Since the node is least
likely to deliver a message for which it has a low P -value,
drop the message for which the node has the lowest P -
value.
More than one queueing policy may be combined in an or-
dered set, where the first policy is used primarily, the second
policy used only if there is a need to tie-break between messages
with the same eviction priority assigned by the primary policy,
and so on. As an example, one could select the queueing policy
to be {MOFO; SHLI; FIFO}, which would start by dropping the
message that has been forwarded the largest number of times.
If more than one message has been forwarded the same number
of times, the one with the shortest remaining life time will be
dropped, and in case of another tie, the FIFO policy will be used
to drop the message first received. The use of multiple queueing
policies is out of the scope of this paper, and while it is important
future work, we consider only one queueing policy at a time in the
performance evaluation in this paper.
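The ordered-set example {MOFO; SHLI; FIFO} above can be sketched as a single composite eviction key per message. The Message fields and helper names below are illustrative, not from the paper's simulator:

```python
# Sketch of lexicographic tie-breaking for the composite queueing
# policy {MOFO; SHLI; FIFO}. A larger key means "drop first".
from dataclasses import dataclass

@dataclass
class Message:
    arrival: float         # time the message entered the queue (FIFO)
    remaining_life: float  # remaining lifetime (SHLI)
    forwards: int = 0      # times forwarded so far (MOFO)

def eviction_key(m):
    # Primary: most forwarded first. Tie-break 1: shortest remaining
    # lifetime first. Tie-break 2: earliest arrival first.
    return (m.forwards, -m.remaining_life, -m.arrival)

def drop_one(queue):
    """Remove and return the message the composite policy evicts."""
    victim = max(queue, key=eviction_key)
    queue.remove(victim)
    return victim

queue = [
    Message(arrival=1.0, remaining_life=50.0, forwards=3),
    Message(arrival=2.0, remaining_life=20.0, forwards=3),
    Message(arrival=3.0, remaining_life=90.0, forwards=1),
]
victim = drop_one(queue)
# The two most-forwarded messages tie; the shorter remaining
# lifetime (20.0) breaks the tie, so that message is evicted.
```

Dropping a single policy from the tuple recovers the corresponding simpler policy, which is why such sets compose naturally.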
3.2 Forwarding Strategies
During the information exchange phase, nodes need to decide on
which messages they wish to exchange with the peering node.
Finite bandwidth and unexpected interruptions may not allow a
node to transmit all the messages it would like to forward. In
such cases, the order in which the messages are transmitted is
important. This section defines the forwarding strategies that we
use in our evaluation. Note that if the node being encountered is
the destination of any of the messages being carried, those mes-
sages should be delivered to the destination irrespective of the
forwarding strategy being used. Nodes do not delete messages
after forwarding them as long as there is sufficient buffer space
available (since it might encounter a better node, or even the fi-
nal destination of the message in the future), unless the node to
which a message was forwarded was its destination.
We use the following notation in our discussions below. A
and B are the nodes that meet, and the strategies are described as
they should be followed by node A. The destination node is D.
P(X,Y) denotes the delivery predictability that a node X has for
a destination Y.
GRTR Forward the message only if P(B,D) > P(A,D).
When two nodes meet, a message is sent to the other node if
the delivery predictability for the destination of the message
is higher at the other node.
GRTRSort Select messages in descending order of the value of
P(B,D) − P(A,D). Forward the message only if P(B,D) >
P(A,D).
This strategy is similar to GRTR, but it processes the mes-
sages in the message queue in a different way. While
GRTR scans the queue in a linear way, starting by decid-
ing whether or not to forward the first message, and the
continuing like that through the queue, this strategy looks
at the difference in P -values for each message between the
two nodes, and forwards the messages with the largest dif-
ference first. This allows a node to transmit messages with
most improvement in delivery predictability first.
GRTRMax Select messages in descending order of P(B,D).
Forward the message only if P(B,D) > P(A,D).
This strategy begins by considering the messages for which
the encountered node has the highest delivery predictability.
The motivation for doing this is the same as in GRTRSort,
but based on the idea that it is better to give messages to
nodes with high absolute delivery predictabilities, instead
of trying to maximize the improvement.
COIN Forward the message only if X > 0.5, where X ∼
U(0, 1) is a uniformly distributed random variable.
This strategy is similar to the ordinary Epidemic Routing,
but to reduce the number of transfers, there is a “coin toss”
that determines if a message should be forwarded or not. As
this strategy does not consider the delivery predictabilities
in making its decision, it will not be used in a PRoPHET
system. Nonetheless, it serves as an interesting benchmark
to compare the performance of our proposed delivery pre-
dictability estimation in [11] to that of a simple random
pruning of Epidemic Routing.
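As a rough sketch, the four strategies can each be expressed as a filter plus an ordering over the message queue. The dictionary-based P-tables, the identification of messages by their destination, and the function names below are assumptions for illustration, not the paper's implementation:

```python
# Sketch of GRTR, GRTRSort, GRTRMax and COIN as (filter, ordering)
# pairs. Messages are identified by destination; P_a and P_b map a
# destination D to P(A,D) and P(B,D) respectively.
import random

def grtr(P_a, P_b, queue):
    # GRTR: forward in queue order whenever the peer is a better carrier.
    return [m for m in queue if P_b.get(m, 0.0) > P_a.get(m, 0.0)]

def grtr_sort(P_a, P_b, queue):
    # GRTRSort: same filter, largest improvement P(B,D) - P(A,D) first.
    cand = grtr(P_a, P_b, queue)
    return sorted(cand, key=lambda m: P_b[m] - P_a.get(m, 0.0), reverse=True)

def grtr_max(P_a, P_b, queue):
    # GRTRMax: same filter, highest absolute P(B,D) first.
    cand = grtr(P_a, P_b, queue)
    return sorted(cand, key=lambda m: P_b[m], reverse=True)

def coin(queue, rng=random):
    # COIN: forward each message on a fair coin toss.
    return [m for m in queue if rng.random() > 0.5]

queue = ["D1", "D2", "D3"]
P_a = {"D1": 0.2, "D2": 0.1, "D3": 0.9}
P_b = {"D1": 0.5, "D2": 0.8, "D3": 0.4}
# GRTRSort sends D2 (gain 0.7) before D1 (gain 0.3); D3 is withheld
# because this node is already the better carrier for it.
```

Delivery to an encountered node that is itself the destination would bypass these functions entirely, per the note above.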

GRTRSort and GRTRMax require a node to reorder the mes-
sages in its queue for every encounter. This being a regular sort-
ing problem, the worst case complexity is O(m log m), where
m is the number of messages in the queue [3]. By using good
data structures, this could be reduced to O(n log n), where n is
the number of destinations for which there are messages in the
queue. Given that the number of nodes in a typical intermittently
connected network will range from tens of nodes to at most a
few thousands, the computational overhead should not pose any
problem.
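The reduction from sorting m messages to sorting n destinations can be sketched by bucketing the queue by destination: all messages bound for the same D share the key P(B,D) − P(A,D). The helper names below are illustrative:

```python
# Sketch of per-destination bucketing for GRTRSort: only the n
# distinct destinations are sorted, not the m individual messages.
from collections import defaultdict

def grtr_sort_bucketed(P_a, P_b, queue, dest_of):
    buckets = defaultdict(list)
    for m in queue:
        buckets[dest_of(m)].append(m)
    # Sort the n destinations by improvement in delivery predictability,
    # keeping only those where the peer is the better carrier.
    order = sorted(
        (d for d in buckets if P_b.get(d, 0.0) > P_a.get(d, 0.0)),
        key=lambda d: P_b[d] - P_a.get(d, 0.0),
        reverse=True,
    )
    return [m for d in order for m in buckets[d]]

queue = [("m1", "D1"), ("m2", "D2"), ("m3", "D1")]
P_a = {"D1": 0.2, "D2": 0.1}
P_b = {"D1": 0.5, "D2": 0.8}
out = grtr_sort_bucketed(P_a, P_b, queue, dest_of=lambda m: m[1])
# D2 (gain 0.7) precedes D1 (gain 0.3): m2, then m1 and m3.
```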
4 Simulation Setup
In our evaluation, we have used a simple high level simulator
written in Java. The simulator abstracts away the lower layers
and focuses on the operation of the routing protocol. The nodes
use a wireless communication channel that has a range of 100
meters, and nodes are able to transmit one message every sec-
ond. We compare the different queueing policies and forwarding
strategies presented in Section 3 for PRoPHET and Epidemic
Routing.
To ensure that the results of the simulations are useful, it is
important that the models that are used are realistic. Considering
the transitive kind of communication studied in this paper, it is
particularly important to select a good mobility model. In some
of the previous work, the authors have used random waypoint
mobility or variations of it [16, 19], mobility that has been con-
strained to some fixed schedule [13, 18], or mobility data gath-
ered from some real-life measurements [8]. In our simulations
we have used the community mobility model developed in [11],
inspired by the mobility of nodes in a Saami community [4, 15].
As we want to ensure that this model gives us a good represen-
tation of real mobility, we evaluated it using the same tests used
in previous work that has been done on characterizing properties
of human mobility [5]. Doing these tests, we could see that the
properties of the community model are relatively close to real mo-
bility and a definite improvement over previous models such as
the random way-point mobility model.
Figure 2: Community mobility model (a 3000 m × 1500 m area
with communities C1-C11 and the gathering place G).
We consider a 3000m×1500m area as shown in Figure 2. This
area is divided into 12 subareas; 11 communities (C1-C11), and
one “gathering place” (G). Each node has one home community
that it is more likely to visit than other places. For each commu-
nity there are 10 nodes that recognize it as their home commu-
nity. In each community, and at the gathering place, there is also
a stationary “gateway” (randomly placed within the community)
that intermittently generates traffic destined for other communi-
ties. The mobility in this scenario is such that nodes select a des-
tination and a speed (randomly chosen between 10 and 30 m/s).
On reaching the destination, the node pauses for a while, then se-
lects a new destination and speed. The destinations are selected
such that if a node is within its home community, there is a higher
probability that it will go to the gathering place as compared to
other places, and if it is away from its home community, it is very
likely that it will return to the home community. Table 1 shows
the probabilities of different destinations being chosen depend-
ing on the current location of a node. To ensure that our results
are valid in scenarios with other topology and mobility patterns,
we have also tested the protocol in a scenario where the random
way-point mobility model [7] was used. The results achieved in
those tests showed the same trends as the ones presented in this
paper, so we believe that our results are valid in more general
scenarios than the one presented in this paper.
Table 1: Destination selection probabilities
From \ To Home Gathering place Elsewhere
Home - 0.8 0.2
Elsewhere 0.9 - 0.1
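The selection rule in Table 1 can be sketched as a small sampling function; the location labels and the use of a seeded RNG below are illustrative:

```python
# Sketch of destination selection per Table 1: from home, the
# gathering place is chosen with probability 0.8 and some other
# place with 0.2; away from home, the node returns home with
# probability 0.9 and goes elsewhere with 0.1.
import random

def next_destination(at_home, rng=random):
    """Return 'home', 'gathering', or 'elsewhere' per Table 1."""
    x = rng.random()
    if at_home:
        return "gathering" if x < 0.8 else "elsewhere"
    return "home" if x < 0.9 else "elsewhere"

rng = random.Random(42)
trips = [next_destination(at_home=False, rng=rng) for _ in range(10000)]
# Roughly 90% of trips started away from home lead back home.
```

This home-biased selection is what concentrates encounters within communities and at the gathering place, the structure PRoPHET's predictabilities are meant to exploit.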
Every ten seconds, two randomly chosen community gate-
ways generate a message for a gateway at another community or
at the gathering place. Five seconds after each such message gen-
eration, two randomly chosen mobile nodes generate a message
to a randomly chosen destination. After 3000 seconds the mes-
sage generation ceases (after a total of 1200 messages have been
generated) and the simulation is run for another 8000 seconds to
allow messages to be delivered. A warm up period of 500 sec-
onds is used in the beginning of the simulations before message
generation commences, to allow the delivery predictabilities of
PRoPHET to initialize. Table 2 shows the values for PRoPHET
parameters kept fixed in our simulations. These values were cho-
sen based on previous experience with the protocol.
Table 2: Parameter settings
Parameter    P_init    β       γ
Value        0.75      0.25    0.98

We compare the performance of the various forwarding strate-
gies and queueing policies (and the combinations therein) using
the following four metrics. The message delivery ability, i.e. the
number of messages delivered to their respective destinations, is
a primary metric. Applications using this kind of communication
should be delay-tolerant, but it is still of interest to consider
the message delivery delay to find out how much time it takes a
message to be delivered. Finally, we also study the overhead and
weighted overhead that is incurred by the message exchanges be-
tween nodes. The overhead is calculated as the total number of
message exchanges between nodes, and the weighted overhead
is the number of message exchanges between nodes per mes-
sage that is delivered to its destination. The size of the delivery
predictability information is linearly bounded by the number of
nodes in the network (which in most realistic cases will be fairly
small), and will be the same regardless of the queueing policy
and forwarding strategy used. Thus, this is not considered in the
overhead calculation.
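The two overhead metrics follow directly from their definitions above; the counts used here are made-up examples:

```python
# Sketch of the overhead metrics: overhead is the total number of
# message exchanges between nodes, and weighted overhead is the
# number of exchanges per message delivered to its destination.
def overhead_metrics(exchanges, delivered):
    overhead = exchanges
    weighted = exchanges / delivered if delivered else float("inf")
    return overhead, weighted

o, w = overhead_metrics(exchanges=4800, delivered=960)
# 4800 exchanges at 960 deliveries: 5.0 exchanges per delivery.
```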
To avoid any effects that improper settings of the life time of
messages may have on the results of the evaluation (determining
the proper life times for messages is out of scope for this paper),
messages in our simulations never time out. As the SHLI queue-
ing policy makes its drop decisions based on the remaining life
time of messages, we consider each message to have a life time
that is longer than the simulation time.
The queueing policies LEPR and MOPR require computation
of delivery predictabilities; thus, these policies are not used in
conjunction with the COIN and Epidemic forwarding strategies,
which do not use delivery predictability in making forwarding
decisions. As the delivery predictability calculations are very
central in the operation of PRoPHET, and as Epidemic Routing
and COIN are mostly included for comparison, it is still important
to include these queueing policies in the evaluations where it is
possible.
5 Results
The results presented here are averages from 10 simulation runs.
We have verified that the results presented are statistically signif-
icant at a 95% confidence level (with Bonferroni correction for
multiple comparisons) using pairwise t-tests [12]. Unless oth-
erwise stated, the x axis in the graphs shows the queue size (the
number of messages a node can buffer) in the nodes, and the y
axis shows the different metrics as outlined above.
In Figure 3, the number of delivered messages for the differ-
ent simulations is shown. At first, a general observation can be
made from these graphs. It can be seen that for all the differ-
ent queueing policies, performance is better for the forwarding
strategies using delivery predictabilities as compared to COIN or
Epidemic forwarding. This validates the original assumption we
had when designing the protocol about the utility of the probabil-
ity calculations. It shows that while a simple random pruning of
Epidemic Routing does give some performance enhancements, it
is not able to achieve the same level of performance as when us-
ing the delivery predictabilities as defined in PRoPHET. Among
the various queueing policies, the MOFO policy gives the best
performance for all different queue sizes considered. At larger
queue sizes, the difference between MOFO and some of the other
policies decreases. By dropping messages that have already
been forwarded to many other nodes, MOFO makes sure that the
messages dropped are the ones that have been spread most into
the network. This explains the good performance achieved by
MOFO as it increases the probability that a message will be able
to find its way to the destination, and also reduces the risk that a
message is dropped without being forwarded even once. The fact
that MOPR performs worse than MOFO is, however, a sign
that it might still be possible to make improvements
in the delivery predictability calculations to make them estimate
the actual delivery probability even better, as that should intu-
itively result in good performance for MOPR. As the focus for
this work is on forwarding strategies and queueing policies, such
estimation improvements will be considered in future work. The
FIFO queueing policy performed well, especially in the presence of
forwarding strategies that scan messages for transmission from
the head of the queue. For these strategies, given the limited con-
tact opportunity between nodes, messages that are at the head of
the queue are likely forwarded more times than those at the tail of
the queue. As FIFO drops messages from the head of the queue,
messages being dropped are thus also most likely to have been
forwarded a larger number of times, which makes this similar to
using the MOFO policy.
As mentioned above, the benefit of using delivery predictabil-
ities can be clearly seen in the graphs, and it can also be seen
that the added complexity of GRTRMax and, even more so,
GRTRSort pays off. The forwarding strategies that are some-
what more advanced than the simple GRTR get better perfor-
mance, especially GRTRSort which in conjunction with the
MOFO queueing policy gives the best performance. A result
that may seem surprising at first is that the LEPR queueing pol-
icy performs poorly in terms of delivery and delay. Intuitively,
dropping messages that the node is not very probable to deliver
should be a good way to manage the queue. One problem with
this is however that as nodes treat all messages equally, there is
a high risk of messages being dropped at the source or close to it
due to low delivery predictabilities in that region of the network
when only few or even no forwards have been done. Further in-
vestigation of the distribution of the number of times a message
is forwarded before getting dropped by its source revealed that

References
Introduction to Algorithms
Dynamic Source Routing in Ad Hoc Wireless Networks
Epidemic routing for partially-connected ad hoc networks
Spray and wait: an efficient routing scheme for intermittently connected mobile networks
Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet
Frequently Asked Questions
Q1. What have the authors contributed in "Evaluation of queueing policies and forwarding strategies for routing in intermittently connected networks"?

In this paper, the authors propose and evaluate different combinations of queueing policies and forwarding strategies for intermittently connected networks. They show that a probabilistic routing approach, along with the correct choice of buffer management policy and forwarding strategy, can result in significant performance improvements in terms of message delivery, overhead and end-to-end delay.

The authors plan to extend this work by addressing the issues of congestion control in DTNs. More effort will also be put into further understanding the effects of topology, mobility, and the ability to make good delivery predictability estimates.