Proceedings ArticleDOI

The dynamic behavior of a data dissemination protocol for network programming at scale

03 Nov 2004-pp 81-94
TL;DR: It appears very hard to significantly improve upon the rate obtained by Deluge and it is argued that the rates obtained for dissemination are inherently lower than that for single path propagation.
Abstract: To support network programming, we present Deluge, a reliable data dissemination protocol for propagating large data objects from one or more source nodes to many other nodes over a multihop, wireless sensor network. Deluge builds from prior work in density-aware, epidemic maintenance protocols. Using both a real-world deployment and simulation, we show that Deluge can reliably disseminate data to all nodes and characterize its overall performance. On Mica2-dot nodes, Deluge can push nearly 90 bytes/second, one-ninth the maximum transmission rate of the radio supported under TinyOS. Control messages are limited to 18% of all transmissions. At scale, the protocol exposes interesting propagation dynamics only hinted at by previous dissemination work. A simple model is also derived which describes the limits of data propagation in wireless networks. Finally, we argue that the rates obtained for dissemination are inherently lower than that for single path propagation. It appears very hard to significantly improve upon the rate obtained by Deluge and we identify establishing a tight lower bound as an open problem.

Summary (4 min read)

1. INTRODUCTION

  • Wireless sensor networks (WSNs) represent a new class of computing with large numbers of resource-constrained computing nodes cooperating on essentially a single application.
  • These factors suggest that network programming (the programming of nodes by disseminating code over the network) is required for the success of WSNs.
  • Third, the authors develop a simple model of Deluge’s propagation behavior and use it to identify different factors which limit the overall bandwidth of any multihop communication protocol.
  • When nodes are not up-to-date, the broadcast rate is reduced, but is otherwise increased up to a specified limit.
  • Section 7 discusses future directions, and this paper concludes with Section 8.

3. DELUGE

  • Deluge is an epidemic protocol and operates as a state machine where each node follows a set of strictly local rules to achieve a desired global behavior: the quick, reliable dissemination of large data objects to many nodes.
  • In its most basic form, each node occasionally advertises the most recent version of the data object it has available to any nodes that can hear its local broadcast.
  • The first is its density-aware capability, where redundant advertisement and request messages are suppressed to minimize contention.
  • It may continue making requests if sufficient progress is being made.
  • Finally, Deluge emphasizes the use of spatial multiplexing to allow parallel transfers of data.
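The density-aware maintenance behavior described above can be sketched as a small Python model. This is an illustrative sketch, not Deluge's actual nesC implementation; the suppression threshold k and the choice of transmit slots in the second half of the interval are assumptions:

```python
import random

def advertisement_round(num_nodes, k=1, seed=0):
    """One advertisement round in a single-hop cell: each node picks a
    random transmit slot in the second half of the interval and listens
    first; a node suppresses its own broadcast once it has overheard k
    advertisements identical to its own (density-aware suppression)."""
    rng = random.Random(seed)
    # (transmit_time, node_id) pairs, assuming all nodes hear each other
    schedule = sorted((rng.uniform(0.5, 1.0), n) for n in range(num_nodes))
    heard = 0
    transmitters = []
    for _, node in schedule:
        if heard >= k:
            continue  # redundant advertisement suppressed
        transmitters.append(node)
        heard += 1
    return transmitters

# Per round, the cell sends only k advertisements regardless of density.
assert len(advertisement_round(5)) == 1
assert len(advertisement_round(100, k=2)) == 2
```

The point of the sketch is that control traffic per cell stays roughly constant as density grows, which is what keeps control overhead low in dense deployments.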

3.1 Data Representation

  • To manage the large size, Deluge divides the data object into fixed-size pages.
  • Instead, Deluge fragments the data object into P pages, each of size Spage = N · Spkt, where N is a fixed number of packets per page, as shown in Figure 1.
  • Both packets and pages include 16-bit cyclic redundancy checks (CRCs).
  • A complete description of the object is defined by the tuple (v,a), called the object profile, and is stored in non-volatile storage along with the data it represents.
  • A node receiving an object profile for a newer version uses the age-vector to determine which pages need updating.
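The page/packet layout can be illustrated with a short Python sketch. The packet size, the packets-per-page count, and the use of a CRC-32 masked to 16 bits (standing in for the paper's CRC-16) are assumptions for illustration:

```python
import zlib

SPKT = 23   # payload bytes per packet (illustrative value)
N = 48      # packets per page, so Spage = N * SPKT

def paginate(data: bytes):
    """Fragment a data object into pages of N fixed-size packets,
    zero-padding the final page."""
    spage = N * SPKT
    padded = data + b"\x00" * (-len(data) % spage)
    return [[padded[p + i:p + i + SPKT] for i in range(0, spage, SPKT)]
            for p in range(0, len(padded), spage)]

def object_profile(version: int, pages):
    """Object profile (v, a): the version number plus one entry per page.
    The summary does not give the age-vector encoding, so a per-page
    16-bit checksum stands in here."""
    return (version, [zlib.crc32(b"".join(pkts)) & 0xFFFF for pkts in pages])

obj = bytes(range(256)) * 20           # a 5120-byte example object
pages = paginate(obj)
assert len(pages) == 5                 # ceil(5120 / (48 * 23)) pages
assert all(len(pkt) == SPKT for page in pages for pkt in page)
```

Fixed-size pages are what make the per-page age vector and per-page requests possible: a receiver can track completeness page by page instead of over the whole object.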

3.2 The Protocol

  • A node operates in one of three states at any time: MAINTAIN, RX, or TX.
  • A node in the MAINTAIN state is responsible for ensuring that all nodes within communication range have (i) the newest version of the object profile and (ii) all available data for the newest version.
  • In Deluge, nodes request data from a single node S at a time.
  • A node in the RX state is responsible for actively requesting the remaining packets required to complete page p = γ + 1, the first incomplete page.
  • Nodes delay subsequent requests until they detect a period of silence equal to ω packet transmit times to help ensure the completion of data transmissions before requests are made.
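The request rule can be sketched as a pure decision function; the function name, signature, and parameter values below are illustrative, not Deluge's actual interface:

```python
def next_request(gamma, received, packets_per_page, last_activity, now,
                 omega, t_pkt):
    """Decide whether an RX-state node should send a request for page
    gamma + 1 (the first incomplete page). A request is deferred until
    the channel has been silent for omega packet transmit times, so that
    in-progress data transmissions can complete first."""
    if now - last_activity < omega * t_pkt:
        return None  # still hearing traffic: wait for silence
    missing = [i for i in range(packets_per_page) if i not in received]
    if not missing:
        return None  # page gamma + 1 is already complete
    return ("REQUEST", gamma + 1, missing)

req = next_request(gamma=2, received={0, 1, 3}, packets_per_page=5,
                   last_activity=0.0, now=1.0, omega=8, t_pkt=0.02)
assert req == ("REQUEST", 3, [2, 4])
```

Requesting only the missing packets of one page at a time keeps requests small and bounds the state a sender needs to service them.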

3.3 Design Space

  • The design space for data dissemination protocols is large and includes: methods for suppressing redundant control and data messages, selection of nodes to transmit data, use of forward error-correction (FEC), the fragmentation of data for spatial multiplexing, use of link quality estimates or other metrics to improve decisions, among others.
  • Due to space limitations, the authors briefly mention some of their findings.
  • The authors also tested Deluge with only request suppression enabled, and it again performed poorly, confirming the results presented by SPIN-RL's authors showing it performing worse than naive flooding in lossy network models.
  • The authors experimented with suppressing the transmission of data packets if k redundant data packets were overheard while in TX, where lower values of k represented more aggressive suppression.
  • It was interesting to see that FEC improved performance in sparse networks while harming performance in dense networks.

4. EVALUATION METHODOLOGY

  • The metrics the authors use to evaluate Deluge are driven by the primary motivation for this work: network programming.
  • The authors list the metrics they consider, ordered from highest to lowest priority.
  • In deployments, any interruption to their primary service caused by network programming should be minimized.
  • To evaluate and investigate the behavior of Deluge, the authors use two separate mechanisms.
  • The second is TOSSIM, a bit-level node simulator designed specifically for the TinyOS platform [7].

4.1 TinyOS Hardware

  • To gather empirical data, the authors use a testbed composed of Mica2-dots, a TinyOS supported hardware platform.
  • The authors fully implemented Deluge in nesC and ran experiments using the Mica2-dot hardware platform with 75 nodes deployed non-uniformly in a 150’ by 100’ office environment [13].
  • With the source placed at one corner, the diameter of the network is about five hops.
  • To instrument a backchannel, the authors used specialized hardware to create a UART to TCP bridge, allowing nodes to transmit and receive messages over TCP through an Ethernet adapter.
  • At the end of each experiment, the authors verify the integrity of the data object via the TCP connection.

4.2 TOSSIM

  • In addition to hardware experiments, the authors use TOSSIM, a discrete-event network simulator, to investigate and evaluate Deluge on networks of much greater scale and with differing structures.
  • It is very useful for exposing overall behavior.
  • Capturing sufficient detail when simulating the communication between nodes is essential since the behavior of dissemination protocols can be highly sensitive to low level factors.
  • Each bit-error rate is independently chosen based on the distance between u and v and a random distribution derived from empirical data collected on Mica nodes.
  • TOSSIM treats the transmission of each individual bit as an event.
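Under TOSSIM's independent bit-error model, per-packet loss follows directly from a link's bit-error rate. A small sketch (the 36-byte packet size is an illustrative TinyOS-like value, not taken from the paper):

```python
def packet_loss_prob(ber: float, packet_bytes: int) -> float:
    """Probability that at least one bit of a packet is corrupted when
    each of its bits flips independently with probability `ber`."""
    return 1.0 - (1.0 - ber) ** (8 * packet_bytes)

# A modest bit-error rate already makes whole-packet loss common,
# which is why dissemination protocols are so sensitive to link quality.
assert packet_loss_prob(0.0, 36) == 0.0
assert round(packet_loss_prob(1e-3, 36), 2) == 0.25
```

This compounding of bit errors into packet losses is what the summary means by dissemination behavior being "highly sensitive to low level factors."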

5.1 Empirical Results

  • The procedure of each experiment follows that of a normal deployment scenario.
  • In its initial state, all but one of the nodes are deployed and operating in the steady-state, meaning that they have the same version v of the object profile and the same set of completed pages for version v.
  • The narrow range of completion times is a direct result of spatial multiplexing: the spread in completion times is the time needed to flush the pipeline, which is essentially the time to disseminate a single page.
  • Again, these low numbers show the effectiveness of suppression on request transmissions.
  • On average, a node receives about 3.35 times the minimum number of required data packets, and the median is about 3.34 times the minimum.

5.2 Simulation Results

  • While the empirical results are promising, the authors were unable to experiment with networks of large scale since the testbed configuration does not scale easily.
  • In the next section, the authors investigate the propagation dynamics of Deluge to see why density affects performance.
  • Notice that the propagation speeds up as it approaches the edge of the network.
  • The expected time for receiving advertisements from nodes closer to the source is given by E[TrAdv] = E[NtPkt] · (τl/2) · (1 + E[NSupp]) (4), where E[NSupp] specifies the expected number of times that an advertisement from hop h is suppressed by hop h − 1 before transmitting an advertisement (in the linear case, E[NSupp] = 1).
  • With a greater density, a larger delay period reduces any collisions caused by the hidden terminal problem.
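Equation (4) above can be evaluated directly; the parameter values in the assertions are illustrative, not measured:

```python
def expected_adv_wait(e_ntpkt, tau_l, e_nsupp=1.0):
    """E[TrAdv] = E[NtPkt] * (tau_l / 2) * (1 + E[NSupp]): the expected
    time for a node to receive an advertisement from the previous hop.
    In the linear topology the summary gives E[NSupp] = 1."""
    return e_ntpkt * (tau_l / 2.0) * (1.0 + e_nsupp)

# Raising E[NSupp] from 1 to 3 doubles the expected wait, illustrating
# how advertisement suppression slows hop-to-hop progress.
assert expected_adv_wait(e_ntpkt=1.0, tau_l=2.0, e_nsupp=1.0) == 2.0
assert expected_adv_wait(e_ntpkt=1.0, tau_l=2.0, e_nsupp=3.0) == 4.0
```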

6. CURRENT STATUS

  • The fully implemented Deluge protocol is included as a core part of TinyOS 1.1.8 to support network programming on the Mica2, Mica2-dot, MicaZ, and Telos platforms, the latter two using 2.4 GHz, IEEE 802.15.4 radios.
  • The current implementation consumes 84 bytes of RAM, of which 43 bytes is a message buffer.
  • In addition to the features described in this paper, Deluge supports multiple objects on each node by including an object summary of each object in the advertisement.
  • TOSBoot programs a node with the factory installed object on request by the user (through a radio command or a physical gesture) or if the node experiences failures.
  • This factory installed object is crucial for node reliability by providing a solid software base that allows user interaction without programming the node using a physical connection.

7. LOOKING FORWARD

  • While Deluge provides good, robust performance, there is room for potential improvement.
  • The increased performance along the edge introduces an interesting concept for disseminating large data objects.
  • In some cases, the authors were able to improve performance.
  • Spatial multiplexing limits a node’s broadcast rate to no greater than one-third the maximum rate due to the single-channel, broadcast medium.
  • While previous work has hinted at similar behaviors in real deployments, the effects are too complex to state anything conclusive and only small amounts of data are disseminated.
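The one-third bound above comes from channel reuse: concurrent page transfers on a single broadcast channel must be spaced several hops apart to avoid interference. A trivial sketch of the bound (the three-hop reuse distance is inferred from the summary's one-third figure, not stated directly):

```python
def max_data_rate_fraction(reuse_distance_hops=3):
    """On a single shared channel, transfers closer than
    `reuse_distance_hops` hops interfere, so any one node can be
    transmitting data at most 1/reuse_distance_hops of the time."""
    return 1.0 / reuse_distance_hops

assert max_data_rate_fraction() == 1.0 / 3.0
```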

8. CONCLUSIONS

  • The authors presented Deluge, a reliable data dissemination protocol for propagating large data objects from one or more source nodes to many other nodes over a multihop WSN.
  • Control messages are limited to 18% of all transmissions.
  • At scale, Deluge exposes propagation dynamics only hinted at by previous work, showing the impact of the hidden terminal problem on dissemination.
  • Finally, the authors argued that dissemination is inherently slower than single path propagation and identified establishing a tight lower bound as an open problem.
  • It appears very hard to significantly improve upon the rate obtained by Deluge.


Figures (12)


Citations
Proceedings ArticleDOI
24 Apr 2005
TL;DR: Telos is the latest in a line of motes developed by UC Berkeley to enable wireless sensor network (WSN) research. It is a new mote designed from scratch based on experience with previous mote generations, with three major goals to enable experimentation: minimal power consumption, ease of use, and increased software and hardware robustness.
Abstract: We present Telos, an ultra low power wireless sensor module ("mote") for research and experimentation. Telos is the latest in a line of motes developed by UC Berkeley to enable wireless sensor network (WSN) research. It is a new mote design built from scratch based on experiences with previous mote generations. Telos' new design consists of three major goals to enable experimentation: minimal power consumption, easy to use, and increased software and hardware robustness. We discuss how hardware components are selected and integrated in order to achieve these goals. Using a Texas Instruments MSP430 microcontroller, Chipcon IEEE 802.15.4-compliant radio, and USB, Telos' power profile is almost one-tenth the consumption of previous mote platforms while providing greater performance and throughput. It eliminates programming and support boards, while enabling experimentation with WSNs in both lab, testbed, and deployment settings.

2,115 citations

Proceedings ArticleDOI
06 Nov 2006
TL;DR: An approach to time rectification of the acquired signals that can recover accurate timing despite failures of the underlying time synchronization protocol is described.
Abstract: We present a science-centric evaluation of a 19-day sensor network deployment at Reventador, an active volcano in Ecuador. Each of the 16 sensors continuously sampled seismic and acoustic data at 100 Hz. Nodes used an event-detection algorithm to trigger on interesting volcanic activity and initiate reliable data transfer to the base station. During the deployment, the network recorded 229 earthquakes, eruptions, and other seismoacoustic events.The science requirements of reliable data collection, accurate event detection, and high timing precision drive sensor networks in new directions for geophysical monitoring. The main contribution of this paper is an evaluation of the sensor network as a scientific instrument, holding it to the standards of existing instrumentation in terms of data fidelity (the quality and accuracy of the recorded signals) and yield (the quantity of the captured data). We describe an approach to time rectification of the acquired signals that can recover accurate timing despite failures of the underlying time synchronization protocol. In addition, we perform a detailed study of the sensor network's data using a direct comparison to a standalone data logger, as well as an investigation of seismic and acoustic wave arrival times across the network.

731 citations

Journal Article
TL;DR: In this article, Stann et al. present RMST (Reliable Multi-Segment Transport), a new transport layer for Directed Diffusion, which provides guaranteed delivery and fragmentation/reassembly for applications that require them.
Abstract: Appearing in 1st IEEE International Workshop on Sensor Net Protocols and Applications (SNPA). Anchorage, Alaska, USA. May 11, 2003. RMST: Reliable Data Transport in Sensor Networks Fred Stann, John Heidemann Abstract – Reliable data transport in wireless sensor networks is a multifaceted problem influenced by the physical, MAC, network, and transport layers. Because sensor networks are subject to strict resource constraints and are deployed by single organizations, they encourage revisiting traditional layering and are less bound by standardized placement of services such as reliability. This paper presents analysis and experiments resulting in specific recommendations for implementing reliable data transport in sensor nets. To explore reliability at the transport layer, we present RMST (Reliable Multi- Segment Transport), a new transport layer for Directed Diffusion. RMST provides guaranteed delivery and fragmentation/reassembly for applications that require them. RMST is a selective NACK-based protocol that can be configured for in-network caching and repair. Second, these energy constraints, plus relatively low wireless bandwidths, make in-network processing both feasible and desirable [3]. Third, because nodes in sensor networks are usually collaborating towards a common task, rather than representing independent users, optimization of the shared network focuses on throughput rather than fairness. Finally, because sensor networks are often deployed by a single organization with inexpensive hardware, there is less need for interoperability with existing standards. For all of these reasons, sensor networks provide an environment that encourages rethinking the structure of traditional communications protocols. The main contribution is an evaluation of the placement of reliability for data transport at different levels of the protocol stack. We consider implementing reliability in the MAC, transport layer, application, and combinations of these. 
We conclude that reliability is important at the MAC layer and the transport layer. MAC-level reliability is important not just to provide hop-by-hop error recovery for the transport layer, but also because it is needed for route discovery and maintenance. (This conclusion differs from previous studies in reliability for sensor nets that did not simulate routing. [4]) Second, we have developed RMST (Reliable Multi-Segment Transport), a new transport layer, in order to understand the role of in- network processing for reliable data transfer. RMST benefits from diffusion routing, adding minimal additional control traffic. RMST guarantees delivery, even when multiple hops exhibit very high error rates. 1 Introduction Wireless sensor networks provide an economical, fully distributed, sensing and computing solution for environments where conventional networks are impractical. This paper explores the design decisions related to providing reliable data transport in sensor nets. The reliable data transport problem in sensor nets is multi-faceted. The emphasis on energy conservation in sensor nets implies that poor paths should not be artificially bolstered via mechanisms such as MAC layer ARQ during route discovery and path selection [1]. Path maintenance, on the other hand, benefits from well- engineered recovery either at the MAC layer or the transport layer, or both. Recovery should not be costly however, since many applications in sensor nets are impervious to occasional packet loss, relying on the regular delivery of coarse-grained event descriptions. Other applications require loss detection and repair. These aspects of reliable data transport include the provision of guaranteed delivery and fragmentation/ reassembly of data entities larger than the network MTU. Sensor networks have different constraints than traditional wired nets. 
First, energy constraints are paramount in sensor networks since nodes can often not be recharged, so any wasted energy shortens their useful lifetime [2]. This work was supported by DARPA under grant DABT63-99-1-0011 as part of the SCAADS project, and was also made possible in part due to support from Intel Corporation and Xerox Corporation. Fred Stann and John Heidemann are with USC/Information Sciences Institute, 4676 Admiralty Way, Marina Del Rey, CA, USA E-mail: fstann@usc.edu, johnh@isi.edu. 2 Architectural Choices There are a number of key areas to consider when engineering reliability for sensor nets. Many current sensor networks exhibit high loss rates compared to wired networks (2% to 30% to immediate neighbors)[1,5,6]. While error detection and correction at the physical layer are important, approaches at the MAC layer and higher adapt well to the very wide range of loss rates seen in sensor networks and are the focus of this paper. MAC layer protocols can ameliorate PHY layer unreliability, and transport layers can guarantee delivery. An important question for this paper is the trade off between implementation of reliability at the MAC layer (i.e. hop to hop) vs. the Transport layer, which has traditionally been concerned with end-to-end reliability. Because sensor net applications are distributed, we also considered implementing reliability at the application layer. Our goal is to minimize the cost of repair in terms of transmission.

650 citations

Proceedings ArticleDOI
06 Jun 2005
TL;DR: SOS, a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum, is presented, and its long term total usage is nearly identical to that of systems such as Maté and TinyOS.
Abstract: Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. They must use little energy and be robust to environmental conditions, while also providing common services that make it easy to write applications. In TinyOS, the current state of the art in sensor node operating systems, reusable components implement common services, but each node runs a single statically-linked system image, making it hard to run multiple applications or incrementally update applications. We present SOS, a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum. SOS consists of dynamically-loaded modules and a common kernel, which implements messaging, dynamic memory, and module loading and unloading, among other services. Modules are not processes: they are scheduled cooperatively and there is no memory protection. Nevertheless, the system protects against common module bugs using techniques such as typed entry points, watchdog timers, and primitive resource garbage collection. Individual modules can be added and removed with minimal system interruption. We describe SOS's design and implementation, discuss tradeoffs, and compare it with TinyOS and with the Maté virtual machine. Our evaluation shows that despite the dynamic nature of SOS and its higher-level kernel interface, its long term total usage is nearly identical to that of systems such as Maté and TinyOS.

582 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of denial-of-service threats and countermeasures considering wireless sensor platforms' resource constraints as well as the denial of sleep attack, which targets a battery-powered device's energy supply.
Abstract: This survey of denial-of-service threats and countermeasures considers wireless sensor platforms' resource constraints as well as the denial-of-sleep attack, which targets a battery-powered device's energy supply. Here, we update the survey of denial-of-service threats with current threats and countermeasures.In particular, we more thoroughly explore the denial-of-sleep attack, which specifically targets the energy-efficient protocols unique to sensor network deployments. We start by exploring such networks' characteristics and then discuss how researchers have adapted general security mechanisms to account for these characteristics.

488 citations

References
Proceedings ArticleDOI
01 Aug 1999
TL;DR: This paper proposes several schemes to reduce redundant rebroadcasts and differentiate the timing of rebroadcasts to alleviate the broadcast storm problem, which is shown to be serious through analyses and simulations.
Abstract: Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate the timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.

3,819 citations

Proceedings ArticleDOI
05 Nov 2003
TL;DR: TOSSIM, a simulator for TinyOS wireless sensor networks can capture network behavior at a high fidelity while scaling to thousands of nodes, by using a probabilistic bit error model for the network.
Abstract: Accurate and scalable simulation has historically been a key enabling factor for systems research. We present TOSSIM, a simulator for TinyOS wireless sensor networks. By exploiting the sensor network domain and TinyOS's design, TOSSIM can capture network behavior at a high fidelity while scaling to thousands of nodes. By using a probabilistic bit error model for the network, TOSSIM remains simple and efficient, but expressive enough to capture a wide range of network interactions. Using TOSSIM, we have discovered several bugs in TinyOS, ranging from network bit-level MAC interactions to queue overflows in an ad-hoc routing protocol. Through these and other evaluations, we show that detailed, scalable sensor network simulation is possible.

2,281 citations

Proceedings ArticleDOI
01 Dec 1987
TL;DR: This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency.
Abstract: When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency.

1,958 citations


"The dynamic behavior of a data diss..." refers background in this paper

  • ...propose an epidemic algorithm based on randomly chosen point-to-point interactions for managing replicated databases that is robust to unpredictable communication failures [1]....


Journal ArticleDOI
TL;DR: This paper proposes several schemes to reduce redundant rebroadcasts and differentiate the timing of rebroadcasts to alleviate the broadcast storm problem, which is shown to be serious through analyses and simulations.
Abstract: Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate the timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.

1,411 citations


"The dynamic behavior of a data diss..." refers background in this paper

  • ...For data dissemination in wireless networks, naive retransmission of broadcasts can lead to the broadcast storm problem, where redundancy, contention, and collisions impair performance and reliability [ 9 ]....


Proceedings ArticleDOI
01 Oct 2002
TL;DR: Maté's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.
Abstract: Composed of tens of thousands of tiny devices with very limited resources ("motes"), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost.We present Mate, a tiny communication-centric virtual machine designed for sensor networks. Mate's high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Mate's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.

1,217 citations


"The dynamic behavior of a data diss..." refers background in this paper

  • ...A simple model is also derived which describes the limits of data propagation in wireless networks....


Frequently Asked Questions (19)
Q1. What are the contributions in "The dynamic behavior of a data dissemination protocol for network programming at scale" ?

To support network programming, the authors present Deluge, a reliable data dissemination protocol for propagating large data objects from one or more source nodes to many other nodes over a multihop, wireless sensor network. Using both a real-world deployment and simulation, the authors show that Deluge can reliably disseminate data to all nodes and characterize its overall performance. 

Because the cost of end-to-end repair is exponential with the path length, both protocols emphasize hop-by-hop error recovery where loss detection and recovery is limited to a small number of hops (ideally one). 

By opening up sockets to each node from a desktop computer, the authors timestamp each UART message with precision on the order of milliseconds and track the propagation of each page. 

Spatial multiplexing limits a node’s broadcast rate to no greater than one-third the maximum rate due to the single-channel, broadcast medium. 

One might suggest that starting the propagation in the center might help to eliminate the behavior of following the edge and also decrease the propagation time by about half. 

The authors experimented with suppressing the transmission of data packets if k redundant data packets were overheard while in TX, where lower values of k represented more aggressive suppression. 

One cause is Deluge’s depth-first tendency, where propagation of a single page along good links is not blocked by delays caused by poor links. 

By doubling τr, the propagation rate along the diagonal improves by about 2.7 times while the propagation rate along the edge remains nearly identical, leading to an improvement in overall propagation performance. 

Even though placing the source at the center effectively reduces the diameter by about half, Deluge is unable to take advantage of the quick edges since nodes in the center experience a greater number of collisions. 

The expected time required to transmit just the data packets is E[Ttx] = E[NtPkt] · TtPkt · N (2), where TtPkt is the transmission time for a single packet. 

The only trigger that causes R to request data from S is the receipt of an advertisement stating the availability of a needed page. 

The rate of requests from R will decrease with the decreasing advertisement rate in the steady state since S will not know that R is not up-to-date. 

The main advantage with simulating at the bit-level is that the transmission and reception of bits govern the actions of each layer, rather than modeling each layer with its own set of parameters. 

The authors used TOSSIM to evaluate and investigate the behavior of Deluge with network sizes on the order of hundreds of nodes and tens of hops. 

For the linear case, the simulations show that Deluge takes about 40 seconds to disseminate each page to 152 nodes across 15 hops. 

Because requests are unicast to the node that most recently advertised, it is unlikely for many senders in a region to begin transmitting data. 

With this topology, the propagation behaves as expected: the propagation progresses at a fairly constant rate in a nice wavefront pattern from corner to corner. 

This structured approach should improve the propagation time for a single page, but inhibits the use of pipelining since it is more difficult to minimize interference between transfers of different pages. 

Using the transmission count information on advertisements, the authors plot a histogram of the average advertisement rate for each node by dividing the count by the completion time.