Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 7,278 publications have been published within this topic, receiving 115,409 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
18 Apr 2015
TL;DR: This work tested local latency in a variety of real-world gaming scenarios and carried out a controlled study focusing on targeting and tracking activities in an FPS game with varying degrees of local latency, showing that local latency is a real and substantial problem -- but games can mitigate the problem with appropriate compensation methods.
Abstract: Real-time games such as first-person shooters (FPS) are sensitive to even small amounts of lag. The effects of network latency have been studied, but less is known about local latency, the lag caused by input devices and displays. While local latency is important to gamers, we do not know how it affects aiming performance and whether we can reduce its negative effects. To explore these issues, we tested local latency in a variety of real-world gaming scenarios and carried out a controlled study focusing on targeting and tracking activities in an FPS game with varying degrees of local latency. In addition, we tested the ability of a lag compensation technique (based on aim assistance) to mitigate the negative effects. Our study found real-world local latencies ranging from 23 to 243 ms, which cause significant and substantial degradation in performance (even for latencies as low as 41 ms). The study also showed that our compensation technique worked extremely well, reducing the problems caused by lag in the case of targeting, and removing the problem altogether in the case of tracking. Our work shows that local latency is a real and substantial problem -- but games can mitigate the problem with appropriate compensation methods.
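The abstract names the compensation technique only as aim assistance. As one way to picture it, here is a minimal, hypothetical sketch of latency-scaled aim assistance; the function name, gain, and pull rule are illustrative assumptions, not the paper's actual method:

```python
import math

def aim_assist(cursor, target, target_radius, latency_ms, gain=0.015):
    """Pull the cursor slightly toward a nearby target, scaled by the
    measured local latency. Purely illustrative; the paper's actual
    compensation parameters are not specified here."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    # Only assist when the cursor is already near the target.
    if dist == 0 or dist > 3 * target_radius:
        return cursor
    # Stronger pull at higher local latency (e.g. 41-243 ms), fading
    # out as the cursor gets farther from the target.
    pull = min(1.0, gain * latency_ms) * (1 - dist / (3 * target_radius))
    return (cursor[0] + pull * dx, cursor[1] + pull * dy)
```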

68 citations

Patent
28 Nov 2001
TL;DR: In this patent, the authors propose a method for proposing at least one transmission rate change, comprising calculating a plurality of latency values, computing at least one derivative-based proposed change from those latency values, and proposing a rate change selected from the derivative-based proposed changes.
Abstract: A method for proposing at least one transmission rate change including calculating a plurality of latency values, computing at least one derivative-based proposed change from the plurality of latency values, and proposing a rate change selected from the at least one derivative-based proposed change. Also provided is a system (Fig. 1) including a rate controller (410) controlling the transmission rate of data between two stations over a network and a rate reporter (430) in communication with the rate controller (410).
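To make the derivative-based idea concrete, here is a minimal sketch under assumed details: a finite-difference slope over a window of latency samples and a proportional gain (the gain, clamping, and sign convention are illustrative assumptions, not the patent's claims):

```python
def propose_rate_change(latencies, timestamps, current_rate, gain=0.25):
    """Propose a transmission-rate change from the trend (derivative)
    of recent latency samples. A rough sketch of the idea; all
    constants are illustrative."""
    if len(latencies) < 2:
        return 0.0
    # Finite-difference estimate of d(latency)/dt over the window.
    dt = timestamps[-1] - timestamps[0]
    slope = (latencies[-1] - latencies[0]) / dt if dt > 0 else 0.0
    # Rising latency -> propose slowing down; falling -> speed up.
    return -gain * slope * current_rate
```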

68 citations

Posted Content
TL;DR: A general redundancy strategy is designed that achieves a good latency-cost trade-off for an arbitrary service time distribution and generalizes and extends some results in the analysis of fork-join queues.
Abstract: In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective method to combat the variability in response time of individual servers, and reduce latency. But adding redundancy may result in higher cost of computing resources, as well as an increase in queueing delay due to higher traffic load. This work helps understand when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare different redundancy strategies in terms of the number of redundant tasks, and time when they are issued and canceled. We get the insight that the log-concavity of the task service time creates a dichotomy of when adding redundancy helps. If the service time distribution is log-convex (i.e. log of the tail probability is convex) then adding maximum redundancy reduces both latency and cost. And if it is log-concave (i.e. log of the tail probability is concave), then less redundancy, and early cancellation of redundant tasks is more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues.
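A small Monte Carlo sketch can illustrate the trade-off the paper analyzes. The model below assumes no queueing and that redundant copies are canceled the moment the first copy finishes; the exponential service time used here is the boundary case between log-convex and log-concave tails:

```python
import random

def replicated_latency_cost(sample_service_time, r, n_trials=10000):
    """Monte Carlo estimate of mean latency and mean computing cost
    when a task runs on r servers and the earliest copy wins.
    Cost counts total server time: all r copies run until the winner
    finishes and are then canceled. A simplified illustration only."""
    lat_sum = cost_sum = 0.0
    for _ in range(n_trials):
        t = min(sample_service_time() for _ in range(r))
        lat_sum += t          # earliest copy determines latency
        cost_sum += r * t     # r copies each ran for time t
    return lat_sum / n_trials, cost_sum / n_trials

# Exponential service (rate 1) sits at the log-convex/log-concave
# boundary: replication cuts latency (E[min] = 1/r) while expected
# cost r * E[min] stays near 1.
service = lambda: random.expovariate(1.0)
for r in (1, 2, 4):
    latency, cost = replicated_latency_cost(service, r)
    print(f"r={r}: latency={latency:.3f}, cost={cost:.3f}")
```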

68 citations

Proceedings ArticleDOI
23 Oct 2013
TL;DR: For accurate delay measurement, this paper proposes to replace the ping tool with an adaptation of paris-traceroute which supports delay and jitter estimation, without being biased by per-flow network load balancing.
Abstract: Monitoring Internet performance and measuring user quality of experience are drawing increased attention from both research and industry. To match this interest, large-scale measurement infrastructures have been constructed. We believe that this effort must be combined with a critical review and calibration of the tools being used to measure performance. In this paper, we analyze the suitability of ping for delay measurement. By performing several experiments on different source and destination pairs, we found cases in which ping gave very poor estimates of delay and jitter as they might be experienced by an application. In those cases, delay was heavily dependent on the flow identifier, even if only one IP path was used. For accurate delay measurement, we propose to replace the ping tool with an adaptation of paris-traceroute which supports delay and jitter estimation, without being biased by per-flow network load balancing.
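The bias the authors describe comes from routers hashing the flow identifier (the 5-tuple) to pick among parallel paths. A rough way to observe it, assuming a reachable host and no firewall interference, is to probe with different source ports and compare delays; this is only a conceptual sketch, not the authors' paris-traceroute adaptation, which instead keeps the flow identifier constant across a whole traceroute:

```python
import socket, time

def connect_rtt(host, port, src_port):
    """Measure TCP connect time from a fixed source port, so the probe
    maps to one specific flow (and hence one load-balanced path)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", src_port))
    s.settimeout(2.0)
    t0 = time.monotonic()
    try:
        s.connect((host, port))
        return (time.monotonic() - t0) * 1000.0  # milliseconds
    finally:
        s.close()

# Different source ports = different flow identifiers; systematically
# different delays suggest per-flow load balancing onto distinct paths.
for sp in range(33000, 33005):
    print(sp, round(connect_rtt("example.org", 80, sp), 1), "ms")
```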

68 citations

Patent
03 Apr 2007
TL;DR: In this patent, a mechanism is disclosed for determining a congestion metric for a path in a network, in which a latency value for a particular path may be determined by exchanging latency packets with another component.
Abstract: A mechanism is disclosed for determining a congestion metric for a path in a network. In one implementation, a congestion metric for a path includes one or more latency values and one or more latency variation values. A latency value for a path may be determined by exchanging latency packets with another component. For example, to determine the latency for a particular path, a first component may send a latency request packet to a second component via the particular path. In response, the second component may send a latency response packet back to the first component. Based upon timestamp information in the latency response packet, the latency on the particular path may be determined. From a plurality of such latencies, a latency variation may be determined. Taken individually or together, the latency value(s) and the latency variation value(s) provide an indication of how congested the particular path currently is.
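The exchange reduces to a familiar pattern: timestamp a probe, wait for the reply, and aggregate. The patent derives latency from timestamp information carried in the response packet; the sketch below simplifies this by timing the round trip locally, and the send_request callable is an assumed placeholder interface, not a real API:

```python
import statistics, time

def measure_path(send_request, n=10):
    """Estimate a path's latency and latency variation by timing n
    request/response exchanges. send_request is a placeholder that
    sends a latency request over the path and blocks until the
    latency response arrives."""
    rtts = []
    for _ in range(n):
        t0 = time.monotonic()
        send_request()
        rtts.append((time.monotonic() - t0) * 1000.0)  # milliseconds
    latency = statistics.mean(rtts)
    variation = statistics.pstdev(rtts)  # spread of samples ~ jitter
    # Taken individually or together, high latency and high variation
    # indicate how congested the path currently is.
    return latency, variation
```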

68 citations


Network Information
Related Topics (5)
The Internet: 213.2K papers, 3.8M citations (75% related)
Node (networking): 158.3K papers, 1.7M citations (75% related)
Wireless: 133.4K papers, 1.9M citations (74% related)
Server: 79.5K papers, 1.4M citations (74% related)
Network packet: 159.7K papers, 2.2M citations (74% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    2
2021    485
2020    529
2019    533
2018    500
2017    405