Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over the lifetime, 3729 publications have been published within this topic receiving 39210 citations. The topic is also known as: lag.


Papers
15 Oct 2003
TL;DR: This paper describes how all the asynchronous overhead can be completely removed by instead running the entire coherence protocol in the requesting processor, and how this technique is applicable to both page-based and fine-grain software shared memory.
Abstract: Software implementations of shared memory still lag far behind the performance of hardware-based shared memory implementations and are not viable options for most fine-grain shared-memory applications. The major source of their inefficiency is the cost of interrupt-based asynchronous protocol processing, not the actual network latency. As the raw hardware latency of inter-node communication decreases, the asynchronous overhead becomes more dominant in the communication. Elaborate schemes, involving dedicated hardware and/or dedicated protocol processors, have been suggested to cut this overhead. This paper describes how all of the asynchronous overhead can be completely removed by instead running the entire coherence protocol in the requesting processor. This not only removes the asynchronous overhead but also makes use of a processor that would otherwise stall. The technique is applicable to both page-based and fine-grain software shared memory. Our proof-of-concept implementation, DSZOOM-EMU, is a fine-grained software-based shared memory. It demonstrates a protocol-handling overhead below a microsecond for all the actions involved in a remote load operation, compared with roughly ten microseconds for the fastest implementation to date. The all-software protocol assumes only some basic low-level primitives in the cluster interconnect. Based on a remote atomic and simple remote put/get operations, the requesting processor can assume the role of the directory agent, traditionally played by a remote protocol agent in the home node in other implementations. The implementation is thread-safe and allows all processors in a node to perform remote operations simultaneously.
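The directory-at-the-requester idea above can be illustrated with a short sketch. This is a minimal, self-contained simulation that assumes only the primitives the abstract names (a remote atomic plus remote put/get); the interconnect calls are simulated with dictionaries, and all names, states, and the directory encoding are illustrative assumptions rather than DSZOOM-EMU's actual code.

CACHE_LINE = 64
LOCKED = -1  # sentinel stored in a directory entry while a requester holds it

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.directory = {}   # addr -> set of sharer node ids (meaningful at the home node)
        self.memory = {}      # addr -> cache-line bytes

def remote_cas_lock(home, addr):
    # Stand-in for a remote atomic: read the directory entry and lock it in one step.
    old = home.directory.get(addr, set())
    if old == LOCKED:
        return LOCKED
    home.directory[addr] = LOCKED
    return old

def remote_get(node, addr):
    # Stand-in for a remote get of one cache line.
    return node.memory.get(addr, bytes(CACHE_LINE))

def remote_put(home, addr, value):
    # Stand-in for a remote put that writes the new entry and thereby unlocks it.
    home.directory[addr] = value

def remote_load(requester, home, addr):
    # 1. The requesting processor locks the directory entry at the home node itself,
    #    acting as the directory agent instead of interrupting a remote protocol agent.
    while True:
        sharers = remote_cas_lock(home, addr)
        if sharers != LOCKED:
            break
    # 2. Fetch the line (for a simple read miss the home node's copy suffices here).
    line = remote_get(home, addr)
    requester.memory[addr] = line
    # 3. Record the requester as a sharer and release the entry with a remote put.
    remote_put(home, addr, sharers | {requester.id})
    return line

# Usage: node 0 is the home node for address 0x1000; node 1 performs a remote load.
home, req = Node(0), Node(1)
home.memory[0x1000] = b"x" * CACHE_LINE
assert remote_load(req, home, 0x1000) == b"x" * CACHE_LINE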

13 citations

01 Jan 2004
TL;DR: To the knowledge, this is the first peer-to-peer protocol that is both cheat-proof and maintains the low latency required by interactive, real-time games.
Abstract: In this paper, we describe a new protocol for ordering events in peer-to-peer games that is provably cheat-proof. We describe how we can optimize this protocol to react to changing delays and congestion in the network. We validate our protocol through simulations and demonstrate its feasibility as a real-time, interactive protocol. To our knowledge, this is the first peer-to-peer protocol that is both cheat-proof and able to maintain the low latency required by interactive, real-time games.
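For readers unfamiliar with cheat-proof event ordering, the sketch below shows the commit-then-reveal step that the lockstep family of peer-to-peer protocols builds on. It illustrates only that general idea under assumed names; the specific ordering protocol and its congestion-adaptation rules in the paper above may differ.

import hashlib, os

def commit(move: bytes):
    """Return (commitment, salt). Peers exchange commitments before any move is revealed."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + move).hexdigest(), salt

def verify(commitment: str, salt: bytes, move: bytes) -> bool:
    """Check that a revealed move matches the commitment sent earlier in the round."""
    return hashlib.sha256(salt + move).hexdigest() == commitment

# One round between two peers: both commit first, then reveal. Because each peer's move
# is fixed (hashed) before it sees the other's move, neither peer can adapt its move to
# the opponent's, which is the core cheat this family of protocols prevents.
c_a, s_a = commit(b"move-left")
c_b, s_b = commit(b"fire")
# ... commitments c_a and c_b are exchanged over the network here ...
assert verify(c_a, s_a, b"move-left") and verify(c_b, s_b, b"fire")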

13 citations

Journal ArticleDOI
Chen Tian1, Bo Li1, Liulan Qin1, Jiaqi Zheng1, Jie Yang1, Wei Wang1, Guihai Chen1, Wanchun Dou1 
TL;DR: P-PFC monitors the derivative of buffer occupancy, predicts when PFC would be triggered, and proactively issues the PFC pause in advance; buffer usage is thereby kept at a low level, so tail latency can be controlled.
Abstract: Remote Direct Memory Access (RDMA) technology is rapidly changing the landscape of today's datacenter applications. Congestion control for RDMA networking is a critical challenge. As an end-to-end layer 3 congestion control mechanism, Datacenter QCN (DCQCN) alleviates the unfairness and head-of-the-line blocking problems of Priority-based Flow Control (PFC). However, a lossless network does not guarantee low latency even with DCQCN enabled: when network congestion happens, switch queues still build up due to the response latency of end-to-end solutions. In this article, we propose Predictive PFC (P-PFC) to reduce tail latency in RDMA networks. P-PFC monitors the derivative of buffer occupancy, predicts when PFC would be triggered, and proactively issues the PFC pause in advance. The benefit is that buffer usage can be maintained at a low level, so tail latency can be controlled. Preliminary evaluation results demonstrate that in many scenarios P-PFC can reduce tail latency to less than half of that under standard PFC, without hurting throughput or average latency. According to our experiments, P-PFC can also protect innocent flows better than standard PFC. To the best of our knowledge, this is the first work to use the derivative of buffer occupancy to improve PFC in lossless RDMA networks.
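As a concrete illustration of the predict-and-pause-early idea, here is a minimal sketch. The switch hooks (read_buffer_bytes, send_pfc_pause) are hypothetical placeholders, and the threshold, prediction horizon, and sampling interval are assumed values, not parameters from the paper.

import time

XOFF_THRESHOLD = 512 * 1024   # buffer level at which standard PFC would pause (assumed)
HORIZON_S      = 0.000_2      # how far ahead to extrapolate buffer growth (assumed)
SAMPLE_S       = 0.000_05     # polling interval for buffer occupancy (assumed)

def predictive_pfc_loop(read_buffer_bytes, send_pfc_pause):
    """Pause proactively when the buffer's growth rate says the XOFF threshold is near."""
    prev = read_buffer_bytes()
    while True:
        time.sleep(SAMPLE_S)
        cur = read_buffer_bytes()
        derivative = (cur - prev) / SAMPLE_S       # bytes per second of buffer growth
        predicted = cur + derivative * HORIZON_S   # extrapolated occupancy at the horizon
        if predicted >= XOFF_THRESHOLD:
            send_pfc_pause()                       # pause before the queue actually builds up
        prev = cur

# Usage with stub callables standing in for real switch hooks (runs forever):
# predictive_pfc_loop(lambda: 0, lambda: print("PAUSE"))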

13 citations

Journal ArticleDOI
TL;DR: Smart Hierarchical Network (SHN) is proposed as a reliable dynamic Fog design structure based on a Software Defined Artificial Neural Network (SD-ANN), which features a congestion-aware neural switch model with an embedded predictive receding horizon for intelligent congestion management.
Abstract: Cloud data centers used for High Performance Computing (HPC) with volatile Internet of Things (IoT) devices absolutely require zero-speed switching/low latency, normalized throughput, network stability...

13 citations

Proceedings ArticleDOI
01 Aug 2019
TL;DR: A heuristic, resource-efficient, latency-aware dynamic MC algorithm is proposed which activates MC selectively so that its benefits are harnessed for critical users while minimizing the corresponding resource usage.
Abstract: Multi-connectivity (MC) with packet duplication, where the same data packet is duplicated and transmitted from multiple transmitters, is proposed in 5G New Radio as a reliability enhancement feature. However, it is found to be resource inefficient, since radio resources from more than one transmitter are required to serve a single user. Improving the performance-enhancement vs. resource-utilization tradeoff of MC is therefore a key design challenge. This work proposes a heuristic, resource-efficient, latency-aware dynamic MC algorithm which activates MC selectively so that its benefits are harnessed for critical users while the corresponding resource usage is minimized. Numerical results indicate that the proposed algorithm can deliver the outage performance gains of legacy MC schemes while requiring up to 45% fewer resources.
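To make the selective-activation idea concrete, here is a minimal sketch that turns packet duplication on only for users whose recent tail delay approaches their latency budget. The criticality test, thresholds, and all names are illustrative assumptions, not the paper's actual algorithm.

from dataclasses import dataclass, field
from statistics import quantiles

@dataclass
class User:
    latency_budget_ms: float               # delay bound this user must meet
    recent_delays_ms: list = field(default_factory=list)
    mc_active: bool = False                # whether packet duplication is currently on

def update_mc(user: User, margin: float = 0.8) -> bool:
    """Turn packet duplication on only when the user's tail delay nears its budget."""
    if len(user.recent_delays_ms) < 10:
        return user.mc_active                              # too few samples; keep current mode
    p95 = quantiles(user.recent_delays_ms, n=20)[18]       # rough 95th-percentile delay
    user.mc_active = p95 > margin * user.latency_budget_ms
    return user.mc_active

# Usage: a user with a 10 ms budget whose recent delays creep toward the bound.
u = User(latency_budget_ms=10.0, recent_delays_ms=[6, 7, 7, 8, 9, 9, 9, 10, 11, 12])
print(update_mc(u))   # True -> duplicate this user's packets via a second transmitter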

13 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations, 92% related
Server: 79.5K papers, 1.4M citations, 91% related
Wireless: 133.4K papers, 1.9M citations, 90% related
Wireless sensor network: 142K papers, 2.4M citations, 90% related
Wireless network: 122.5K papers, 2.1M citations, 90% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227