Topic

Packet loss

About: Packet loss is a research topic. Over its lifetime, 21,235 publications have been published within this topic, receiving 302,453 citations.


Papers
Proceedings ArticleDOI
01 Dec 2009
TL;DR: Considers a discrete-time stochastic system where sensor measurements are sent over a network to the controller, and shows that the joint optimization of scheduling and control can be separated into three subproblems: an optimal regulator problem, an estimation problem and an optimal scheduling problem.
Abstract: In this paper, we consider a discrete-time stochastic system where sensor measurements are sent over a network to the controller. The design objective is a non-classical multicriterion optimization problem over a finite horizon, where the cost function consists of a linear quadratic cost reflecting the control performance and a communication cost penalizing information exchange between sensor and controller. It is shown that the joint optimization of scheduling and control can be separated into three subproblems: an optimal regulator problem, an estimation problem and an optimal scheduling problem. The obtained results are extended to TCP-like networks with random packet loss. In the proposed framework, we consider three classes of schedulers, a purely randomized, a deterministic and a state-dependent scheme, and compare their performance in a numerical example.
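As a rough illustration of the separation described above, the following sketch (not the paper's formulation; the scalar plant, noise levels, send threshold, and loss probability are all assumed for the example) runs a certainty-equivalent LQR controller on an estimate that is refreshed only when a state-dependent scheduler transmits and the lossy TCP-like link actually delivers the measurement:

```python
# Minimal scalar sketch: regulator gain, estimator, and scheduler rule are
# computed/applied separately, mirroring the three subproblems above.
# All numeric values below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
a, b, q, r = 1.2, 1.0, 1.0, 0.1      # scalar plant x+ = a*x + b*u + w, LQ weights q, r
W, V = 0.1, 0.05                     # process / measurement noise variances
loss_prob = 0.2                      # i.i.d. packet loss on the sensor-to-controller link
send_threshold = 0.3                 # state-dependent scheduler threshold (assumed)

# Steady-state LQR gain from the scalar Riccati recursion (regulator subproblem).
P = q
for _ in range(500):
    P = q + a * P * a - (a * P * b) ** 2 / (r + b * P * b)
L = (a * P * b) / (r + b * P * b)

x, x_hat, M = 1.0, 0.0, 1.0          # true state, estimate, estimator error variance
for k in range(50):
    y = x + rng.normal(0.0, np.sqrt(V))              # sensor measurement
    sent = abs(y - x_hat) > send_threshold           # scheduling subproblem (assumed rule)
    arrived = sent and rng.random() > loss_prob      # TCP-like link with random loss
    if arrived:                                      # measurement update only on arrival
        K = M / (M + V)
        x_hat, M = x_hat + K * (y - x_hat), (1.0 - K) * M
    u = -L * x_hat                                   # certainty-equivalent control
    x = a * x + b * u + rng.normal(0.0, np.sqrt(W))  # plant update
    x_hat, M = a * x_hat + b * u, a * M * a + W      # estimator time update
print(f"final |x| = {abs(x):.3f}")
```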

97 citations

Book ChapterDOI
04 Sep 2001
TL;DR: This paper revisits and challenges the dogma that TCP is an undesirable choice for streaming multimedia, video in particular, and questions the marginal benefit of changing TCP's service model given the presence of congestion avoidance.
Abstract: In this paper, we revisit and challenge the dogma that TCP is an undesirable choice for streaming multimedia, video in particular. For some time, the common view held that neither TCP nor UDP, the Internet's main transport protocols, is adequate for video applications. UDP's service model doesn't provide enough support to the application while TCP's provides too much. Consequently, numerous research works proposed new transport protocols with alternate service models as more suitable for video. For example, such service models might provide higher reliability than UDP but not the full reliability of TCP. More recently, study of Internet dynamics has shown that TCP's stature as the predominant protocol persists. Through some combination of accident and design, TCP's congestion avoidance mechanism seems essential to the Internet's scalability and stability. Research on modeling TCP dynamics in order to effectively define the notion of TCP-friendly congestion avoidance is very active. Meanwhile, proposals for video-oriented transport protocols continue to appear, but they now generally include TCP-friendly congestion avoidance. Our concern is over the marginal benefit of changing TCP's service model, given the presence of congestion avoidance. As a position paper, our contribution will not be in the form of final answers, but our hope is to convince the reader of the merit in reexamining the question: do applications need a replacement for TCP in order to do streaming video?
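The TCP-friendly modeling mentioned above boils down to an equation relating achievable throughput to loss event rate and round-trip time. A minimal sketch of the throughput formula used by TFRC-style congestion control (RFC 5348) is shown below; the parameter values are illustrative only:

```python
# Hedged sketch of the TCP-friendly throughput equation (RFC 5348): it
# estimates the rate a conforming TCP flow would achieve for a given loss
# event rate p and round-trip time R.  Defaults below are illustrative.
from math import sqrt

def tcp_friendly_rate(s=1460, R=0.1, p=0.01, b=1):
    """Approximate TCP throughput in bytes/second.

    s: segment size (bytes), R: round-trip time (s),
    p: loss event rate, b: packets acknowledged per ACK.
    """
    t_rto = 4 * R  # common simplification for the retransmission timeout
    denom = R * sqrt(2 * b * p / 3) + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return s / denom

# Example: 10 ms RTT vs 100 ms RTT at 1% loss.
print(tcp_friendly_rate(R=0.01), tcp_friendly_rate(R=0.1))
```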

97 citations

Proceedings ArticleDOI
01 Oct 2001
TL;DR: This work proposes to improve the tradeoff among delay, late loss rate, and speech quality using multi-stream transmission of real-time voice over the Internet, where multiple redundant descriptions of the voice stream are sent over independent network paths.
Abstract: The quality of real-time voice communication over best-effort networks is mainly determined by the delay and loss characteristics observed along the network path. Excessive playout buffering at the receiver is prohibitive and significantly delayed packets have to be discarded and considered as late loss. We propose to improve the tradeoff among delay, late loss rate, and speech quality using multi-stream transmission of real-time voice over the Internet, where multiple redundant descriptions of the voice stream are sent over independent network paths. Scheduling the playout of the received voice packets is based on a novel multi-stream adaptive playout scheduling technique that uses a Lagrangian cost function to trade delay versus loss. Experiments over the Internet suggest largely uncorrelated packet erasure and delay jitter characteristics for different network paths which leads to a noticeable path diversity gain. We observe significant reductions in mean end-to-end latency and loss rates as well as improved speech quality when compared to FEC protected single-path transmission at the same data rate. In addition to our Internet measurements, we analyze the performance of the proposed multi-path voice communication scheme using the ns network simulator for different network topologies, including shared network links.
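As a toy illustration of the Lagrangian delay/loss trade-off described above, the sketch below (not the paper's scheduler; the gamma-distributed delays, the two-path minimum, and the multiplier lam are assumptions for the example) picks the playout deadline that minimizes deadline plus lam times the late-loss rate, for single-path and two-path reception:

```python
# Minimal sketch: choose a playout deadline d minimizing J(d) = d + lam * P(delay > d).
# With multi-path (redundant) transmission, the effective delay of a packet is the
# minimum over its copies, which shifts the optimal deadline earlier.
import numpy as np

rng = np.random.default_rng(1)
path1 = rng.gamma(shape=2.0, scale=20.0, size=5000)   # per-path one-way delays (ms), assumed
path2 = rng.gamma(shape=2.0, scale=25.0, size=5000)
effective = np.minimum(path1, path2)                  # first copy to arrive wins

def best_deadline(delays, lam=200.0):
    """Scan candidate deadlines and return the Lagrangian-optimal one (ms)."""
    candidates = np.percentile(delays, np.arange(50, 100, 0.5))
    costs = [d + lam * np.mean(delays > d) for d in candidates]
    return candidates[int(np.argmin(costs))]

d_single = best_deadline(path1)
d_multi = best_deadline(effective)
print(f"deadline single-path: {d_single:.1f} ms, multi-path: {d_multi:.1f} ms")
```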

97 citations

Patent
22 Oct 1998
TL;DR: In this patent, a forward error/erasure correction (FEC) code is applied to bytes in the high priority partition, and the protected high priority bytes are combined with the low priority partition bytes for transmission over the packet network to a receiver/decoder.
Abstract: In order to transmit an inter-frame coded video signal, such as an MPEG-coded video signal, over a packet-based network such as the Internet, the video signal associated with at least one video frame is split (102, 402) into a high priority partition and a low priority partition. A systematic forward error/erasure correction (FEC) code (108), such as a Reed Solomon (n,k) code, is then applied to bytes in the high priority partition. The forward error/erasure corrected high priority partition bytes and the low priority partition bytes are then combined (110) into n packets for transmission over the packet network to a receiver/decoder. Each of the n transmitted packets contains a combination of both high priority partition data bytes and low priority partition information bytes. In k of those packets the high priority partition data bytes are all high priority partition information bytes, and in n-k of those packets all the high priority partition data bytes are parity bytes produced by the FEC coding. More specifically, for each high priority partition byte position within the n packets, the forward error/erasure correction code is applied using one high priority partition information byte from the same byte position in each of those k packets to determine n-k parity bytes, which are arranged, one byte per packet, in the n-k packets containing high priority partition parity bytes. If up to n-k packets are lost in transmission over the packet network to the receiver (500, 600), then the high priority partition bytes in such lost packets can be recovered by applying FEC decoding (506) to the high priority partition bytes in the received packets. The most visually significant information is thus protected against packet loss over the network.
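The core construction applies the FEC code column-wise, one byte position at a time, across the high priority bytes of k packets. The sketch below illustrates that column-wise idea with a single XOR parity packet, i.e. a (k+1, k) code that recovers one lost packet, rather than the Reed Solomon (n,k) code of the patent; packet contents are made up for the example:

```python
# Simplified stand-in for the patent's column-wise packet FEC: one XOR parity
# packet over k equal-length data packets lets the receiver rebuild any single
# lost packet.  A real Reed Solomon (n,k) code tolerates up to n-k losses.
def xor_parity(packets):
    """Return one parity packet whose byte i is the XOR of byte i of every data packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, lost_index, parity):
    """Rebuild the single lost packet by XOR-ing the parity with every surviving packet."""
    missing = bytearray(parity)
    for idx, pkt in enumerate(received):
        if idx == lost_index:
            continue
        for i, byte in enumerate(pkt):
            missing[i] ^= byte
    return bytes(missing)

# k equal-length "high priority partition" payloads (contents are illustrative).
k_packets = [b"HPP-bytes-0", b"HPP-bytes-1", b"HPP-bytes-2"]
parity_pkt = xor_parity(k_packets)

# Simulate losing packet 1 in the network and recovering it at the receiver.
received = list(k_packets)
received[1] = None
rebuilt = recover(received, lost_index=1, parity=parity_pkt)
assert rebuilt == k_packets[1]
print("recovered:", rebuilt)
```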

96 citations

Journal ArticleDOI
TL;DR: A combination of deep reinforcement learning (DRL) and a long short-term memory (LSTM) network is adopted to accelerate the convergence speed of the algorithm, and the quality of experience (QoE) is introduced to evaluate the results of UAV information sharing.
Abstract: Formation flight of multiple unmanned aerial vehicles (UAVs) can improve the mission success probability compared with a single vehicle. Dynamic spectrum interaction solves the problem of orderly communication among multiple UAVs under limited bandwidth via spectrum interaction between UAVs. By introducing a reinforcement learning algorithm, UAVs can obtain the optimal strategy by continuously interacting with the environment. In this paper, two types of UAV formation communication methods are studied. One method allows for information sharing between two UAVs in the same time slot. The other method adopts a dynamic time slot allocation scheme in which UAVs use time slots alternately to realize information sharing. The quality of experience (QoE) is introduced to evaluate the results of UAV information sharing, and a priority-based M/G/1 queuing model is used to evaluate the packet loss of the UAVs. In terms of algorithms, a combination of deep reinforcement learning (DRL) and a long short-term memory (LSTM) network is adopted to accelerate the convergence speed of the algorithm. The experimental results show that, compared with the Q-learning and deep Q-network (DQN) methods, the proposed method achieves faster convergence and better performance with respect to the throughput rate.
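The M/G/1 analysis referenced above can be illustrated with the Pollaczek-Khinchine mean-value formula. The sketch below uses illustrative arrival and service parameters and covers only the mean queueing delay; the paper's priority classes and loss evaluation would require a finite-buffer (e.g. M/G/1/K) model:

```python
# Sketch of the Pollaczek-Khinchine mean waiting time for an M/G/1 queue,
# the kind of model used above to reason about queueing at a UAV.
# lam and the service-time moments below are illustrative assumptions.
def mg1_mean_wait(lam, mean_s, second_moment_s):
    """Mean waiting time in queue: W_q = lam * E[S^2] / (2 * (1 - rho))."""
    rho = lam * mean_s
    assert rho < 1, "queue is unstable when rho >= 1"
    return lam * second_moment_s / (2 * (1 - rho))

# Example: 800 packets/s arriving, deterministic 1 ms service (E[S^2] = E[S]^2).
lam = 800.0
mean_s = 0.001
wq = mg1_mean_wait(lam, mean_s, second_moment_s=mean_s ** 2)
print(f"utilization rho = {lam * mean_s:.2f}, mean queueing delay = {wq * 1e3:.2f} ms")
```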

96 citations


Network Information
Related Topics (5)
Network packet
159.7K papers, 2.2M citations
96% related
Wireless ad hoc network
49K papers, 1.1M citations
96% related
Wireless network
122.5K papers, 2.1M citations
95% related
Wireless sensor network
142K papers, 2.4M citations
94% related
Key distribution in wireless sensor networks
59.2K papers, 1.2M citations
93% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    133
2022    325
2021    694
2020    846
2019    1,033
2018    993