scispace - formally typeset
Topic

Packet loss

About: Packet loss is a research topic. Over the lifetime, 21,235 publications have been published within this topic, receiving 302,453 citations.


Papers
Journal ArticleDOI
TL;DR: With an optimized all-to-all routine that sends the data in an ordered fashion, it is shown that packet loss can be prevented completely for any number of multi-CPU nodes, and GROMACS scaling improves dramatically, even for switches that lack flow control.
Abstract: We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and what prerequisites are necessary for decent scaling even on such clusters with only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet switched clusters, the scaling typically breaks down when more than two computer nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is identified as the all-to-all communication that is required every time step. During such an all-to-all communication step, a huge number of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements. For 16 CPUs, e.g., a speedup of 11 has been achieved. However, for more nodes this mechanism also fails. With an optimized all-to-all routine that sends the data in an ordered fashion, we show that it is possible to completely prevent packet loss for any number of multi-CPU nodes. Thus, GROMACS scaling improves dramatically, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that optimum all-to-all performance depends critically on how the nodes are connected to the switch's ports. This is also demonstrated for the example of the Car-Parrinello MD code.
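The ordered all-to-all idea can be illustrated with a minimal Python sketch (an assumption-laden simulation of the scheduling principle, not the actual GROMACS/MPI code): in phase p, rank i sends only to rank (i + p) mod N, so every phase is a perfect matching of senders to receivers and no single switch port is ever flooded by simultaneous messages.

```python
def ordered_alltoall_schedule(n_ranks):
    """Build an ordered all-to-all schedule: in phase p, rank i sends
    to rank (i + p) % n_ranks.  Each phase pairs every sender with a
    distinct receiver, so no receiver port gets more than one message
    at a time (avoiding the burst that causes TCP packet loss)."""
    phases = []
    for p in range(n_ranks):
        phases.append([(i, (i + p) % n_ranks) for i in range(n_ranks)])
    return phases

# Sanity check: in every phase each rank receives exactly one message.
for phase in ordered_alltoall_schedule(8):
    receivers = [dst for _, dst in phase]
    assert len(set(receivers)) == len(receivers)
```

Across all N phases, every (sender, receiver) pair occurs exactly once, so the routine still completes a full all-to-all exchange.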

121 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: Experimental results show that FloodDefender can efficiently mitigate SDN-aimed DoS attacks, incurring less than 0.5% CPU computation to handle attack traffic, with only an 18 ms packet delay and a 5% packet loss rate under attack.
Abstract: The separated control and data planes in software-defined networking (SDN), with their high programmability, introduce a more flexible way to manage and control network traffic. However, SDN will experience long packet delay and a high packet loss rate when the communication link between the two planes is jammed by SDN-aimed DoS attacks generating massive numbers of table-miss packets. In this paper, we propose FloodDefender, an efficient and protocol-independent defense framework for SDN/OpenFlow networks to mitigate DoS attacks. It stands between the controller platform and other controller apps, and can protect both the data and control plane resources by leveraging three new techniques: table-miss engineering to prevent the communication bandwidth from being exhausted; packet filtering to identify attack traffic and save computational resources of the control plane; and flow rule management to eliminate most of the useless flow entries in the switch flow table. All designs of FloodDefender conform to the OpenFlow policy, requiring no additional devices. We implement a prototype of FloodDefender and evaluate its performance in both software and hardware environments. Experimental results show that FloodDefender can efficiently mitigate SDN-aimed DoS attacks, incurring less than 0.5% CPU computation to handle attack traffic, with only an 18 ms packet delay and a 5% packet loss rate under attack.
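The bandwidth-protection idea behind table-miss engineering can be sketched generically with a token bucket that caps how many table-miss packets reach the controller per second; this is a hypothetical illustration of rate-limiting the control channel, not FloodDefender's actual mechanism (the paper detours table-miss traffic rather than simply dropping it).

```python
class TokenBucket:
    """Minimal token bucket: admit a table-miss packet toward the
    controller only if a token is available; refill at `rate` tokens
    per second, up to `capacity` tokens."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # burst size
        self.tokens = capacity
        self.last = 0.0           # timestamp of last check (seconds)

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 20 simultaneous table-miss packets against capacity 10:
# only the first 10 are admitted; the rest would be detoured/dropped,
# keeping the control channel from being exhausted.
tb = TokenBucket(rate=100, capacity=10)
admitted = sum(tb.allow(0.0) for _ in range(20))
```

The same filter admits normal traffic untouched as long as its arrival rate stays below the refill rate.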

120 citations

Proceedings ArticleDOI
05 Dec 2006
TL;DR: This paper presents FireFly, a time-synchronized sensor network platform for real-time data streaming across multiple hops, and implements RT-Link, a TDMA-based link layer protocol for message exchange on well-defined time slots and pipelining along multiple hops.
Abstract: Wireless sensor networks have traditionally focused on low duty-cycle applications where sensor data are reported periodically, on the order of seconds or even longer. This is due to typically slow changes in physical variables, the need to keep node costs low, and the goal of extending battery lifetime. However, there is a growing need to support real-time streaming of audio and/or low-rate video even in wireless sensor networks, for use in emergency situations and short-term intruder detection. In this paper, we present FireFly, a time-synchronized sensor network platform for real-time data streaming across multiple hops. FireFly is composed of several integrated layers, including specialized low-cost hardware, a sensor network operating system, a real-time link layer, and network scheduling, which together provide efficient support for applications with timing constraints. In order to achieve high end-to-end throughput, bounded latency, and predictable lifetime, we employ hardware-based time synchronization. Multiple tasks, including audio sampling, networking, and sensor reading, are scheduled using the Nano-RK RTOS. We have implemented RT-Link, a TDMA-based link layer protocol for message exchange in well-defined time slots and pipelining along multiple hops. We use this platform to support 2-way audio streaming concurrently with sensing tasks. For interactive voice, we investigate TDMA-based slot scheduling with balanced bi-directional latency while meeting audio timeliness requirements. Finally, we describe our experimental deployment of 42 nodes in a coal mine, and present measurements of the end-to-end throughput, jitter, packet loss, and voice quality.
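The pipelining-over-TDMA idea can be sketched for the simplest topology, a linear multi-hop chain (a toy schedule under an assumed 2-hop interference model, not RT-Link's actual slot assignment): give node i transmit slot i mod 3, so a packet advances one slot per hop while any two nodes sharing a slot are at least 3 hops apart and do not interfere.

```python
def chain_tdma_slots(n_nodes, reuse=3):
    """Assign TDMA transmit slots along a multi-hop chain.  With a
    spatial reuse distance of 3 (assumed 2-hop interference range),
    nodes i and i+3 may share a slot, and a packet forwarded hop by
    hop advances exactly one slot per hop -- i.e., it is pipelined."""
    return [i % reuse for i in range(n_nodes)]

slots = chain_tdma_slots(7)
# No two nodes within 2 hops of each other transmit in the same slot.
assert all(slots[i] != slots[j]
           for i in range(7) for j in range(i + 1, min(i + 3, 7)))
```

With this reuse pattern, the chain sustains one new packet every 3 slots regardless of its length, instead of one packet per end-to-end traversal.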

120 citations

Proceedings ArticleDOI
28 Sep 2002
TL;DR: Network survivability is perceived as a composite measure consisting of both network failure duration and failure impact on the network, and the excess packet loss due to failures is taken as the survivability performance measure.
Abstract: Network survivability reflects the ability of a network to continue to function during and after failures. Our purpose in this paper is to propose a quantitative approach to evaluating network survivability. We perceive network survivability as a composite measure consisting of both network failure duration and failure impact on the network. A wireless ad-hoc network is analyzed as an example, and the excess packet loss due to failures (ELF) is taken as the survivability performance measure. To obtain ELF, we adopt a two-phase approach consisting of steady-state availability analysis and transient performance analysis. Assuming the Markov property for the system, this measure is obtained by solving a set of Markov models. By utilizing other analysis paradigms, our approach may also be applied to study the survivability performance of more complex systems.
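The two-phase idea can be illustrated on the simplest possible availability model, a toy stand-in for the paper's set of Markov models: a single up/down chain with failure rate lambda and repair rate mu has steady-state unavailability lambda / (lambda + mu), and if every packet offered during downtime is lost, the long-run excess loss rate is the offered packet rate times that unavailability. All rates below are hypothetical.

```python
def excess_loss_rate(fail_rate, repair_rate, pkt_rate):
    """Two-state up/down Markov model.  Phase 1 (availability):
    steady-state unavailability is lam / (lam + mu).  Phase 2
    (performance): assume every offered packet is lost while down,
    so the excess packet loss rate is pkt_rate * unavailability.
    A toy version of the paper's ELF computation."""
    unavail = fail_rate / (fail_rate + repair_rate)
    return pkt_rate * unavail

# Hypothetical numbers: one failure per 1000 s on average, mean
# repair time 10 s, 100 pkt/s offered load.
elf = excess_loss_rate(0.001, 0.1, 100.0)   # ~0.99 pkt/s lost to failures
```

Faster repair (larger mu) shrinks the unavailability and hence the excess loss, which is the qualitative trade-off the survivability measure captures.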

120 citations

Proceedings ArticleDOI
25 Jun 2013
TL;DR: This study shows that forecasting the short-term performance in cellular networks is possible in part due to the channel estimation scheme on the device and the radio resource scheduling algorithm at the base station, and develops a system interface called PROTEUS, which passively collects current network performance, such as throughput, loss, and one-way delay, and then uses regression trees to forecast future network performance.
Abstract: Real-time communication (RTC) applications such as VoIP, video conferencing, and online gaming are flourishing. To adapt and deliver good performance, these applications require accurate estimations of short-term network performance metrics, e.g., loss rate, one-way delay, and throughput. However, the wide variation in mobile cellular network performance makes running RTC applications on these networks problematic. To address this issue, various performance adaptation techniques have been proposed, but one common problem of such techniques is that they only adjust application behavior reactively after performance degradation is visible. Thus, proactive adaptation based on accurate short-term, fine-grained network performance prediction can be a preferred alternative that benefits RTC applications. In this study, we show that forecasting the short-term performance in cellular networks is possible in part due to the channel estimation scheme on the device and the radio resource scheduling algorithm at the base station. We develop a system interface called PROTEUS, which passively collects current network performance, such as throughput, loss, and one-way delay, and then uses regression trees to forecast future network performance. PROTEUS successfully predicts the occurrence of packet loss within a 0.5 s time window for 98% of the time windows and the occurrence of long one-way delay for 97% of the time windows. We also demonstrate how PROTEUS can be integrated with RTC applications to significantly improve the perceptual quality. In particular, we increase the peak signal-to-noise ratio of a video conferencing application by up to 15 dB and reduce the perceptual delay in a gaming application by up to 4 s.
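The regression-tree forecasting idea can be sketched with a one-level tree (a decision stump) on a single feature; this is a deliberately minimal pure-Python illustration with hypothetical window data, not PROTEUS itself, which learns full regression trees over many passively collected features.

```python
def fit_stump(xs, ys):
    """Fit a one-level regression tree (decision stump) on a single
    feature: pick the split threshold minimizing total squared error
    and predict the mean of each side."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for k in range(1, len(xs)):
        left = [ys[order[i]] for i in range(k)]
        right = [ys[order[i]] for i in range(k, len(xs))]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        thr = (xs[order[k - 1]] + xs[order[k]]) / 2
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x < thr else mr

# Hypothetical training windows: past-window throughput (Mbit/s)
# mapped to next-window loss rate (%).  Low throughput foreshadows
# high loss in this made-up data.
thr_mbps = [1.0, 1.2, 1.1, 5.0, 5.2, 4.9]
loss_pct = [6.0, 5.5, 6.5, 0.5, 0.4, 0.6]
predict = fit_stump(thr_mbps, loss_pct)
```

A real forecaster would grow the tree recursively over many features (throughput, delay, RSSI, ...) per 0.5 s window, but the split-selection step is the same.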

120 citations


Network Information
Related Topics (5)

Network packet: 159.7K papers, 2.2M citations (96% related)
Wireless ad hoc network: 49K papers, 1.1M citations (96% related)
Wireless network: 122.5K papers, 2.1M citations (95% related)
Wireless sensor network: 142K papers, 2.4M citations (94% related)
Key distribution in wireless sensor networks: 59.2K papers, 1.2M citations (93% related)
Performance Metrics

No. of papers in this topic in previous years:

Year   Papers
2023   133
2022   325
2021   694
2020   846
2019   1,033
2018   993