scispace - formally typeset
Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over the lifetime, 3729 publications have been published within this topic receiving 39210 citations. The topic is also known as: lag.


Papers
01 Jan 1996
TL;DR: The design in this note describes a new ADI that provides lower latency in common cases and is still easy to implement, while retaining many opportunities for customization to any advanced capabilities that the underlying hardware may support.
Abstract: In this paper we describe an abstract device interface (ADI) that may be used to efficiently implement the Message Passing Interface (MPI). After experience with a first-generation ADI that made certain assumptions about the devices and tradeoffs in the design, it has become clear that, particularly on systems with low-latency communication, the first-generation ADI design imposes too much additional latency. In addition, the first-generation design is awkward for heterogeneous systems, complex for noncontiguous messaging, and inadequate at error handling. The design in this note describes a new ADI that provides lower latency in common cases and is still easy to implement, while retaining many opportunities for customization to any advanced capabilities that the underlying hardware may support.

39 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a global approximation of flow wave travel time to assess the utility of existing and future low-latency/near-real-time satellite products, with an emphasis on the forthcoming SWOT satellite mission.
Abstract: Earth-orbiting satellites provide valuable observations of upstream river conditions worldwide. These observations can be used in real-time applications like early flood warning systems and reservoir operations, provided they are made available to users with sufficient lead time. Yet the temporal requirements for access to satellite-based river data remain uncharacterized for time-sensitive applications. Here we present a global approximation of flow wave travel time to assess the utility of existing and future low-latency/near-real-time satellite products, with an emphasis on the forthcoming SWOT satellite mission. We apply a kinematic wave model to a global hydrography data set and find that global flow waves traveling at their maximum speed take a median travel time of 6, 4, and 3 days to reach their basin terminus, the next downstream city, and the next downstream dam, respectively. Our findings suggest that a recently proposed ≤2-day data latency for a low-latency SWOT product is potentially useful for real-time river applications.

Plain Language Summary: Satellites can provide upstream conditions for early flood warning systems, reservoir operations, and other river management applications. This information is most useful for time-sensitive applications if it is made available before an observed upstream flood reaches a downstream point of interest, like a basin outlet, city, or dam. Here we characterize the time it takes floods to travel down Earth’s rivers in an effort to assess the time required for satellite data to be downloaded, processed, and made accessible to users. We find that making satellite data available within a recently proposed ≤2-day time period will make the data potentially useful for flood mitigation and other water management applications.

39 citations
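The core calculation behind the abstract above, comparing flow wave travel time against satellite product latency, can be sketched as follows. This is an illustrative simplification, not the paper's actual model; the reach lengths and wave celerity below are hypothetical example values.

```python
# Sketch: how long does a kinematic flow wave take to reach a downstream
# point of interest, and does a given satellite data latency beat it?
# All numbers here are made-up examples, not values from the paper.

def travel_time_days(reach_lengths_km, wave_speed_m_s):
    """Total travel time (in days) along a chain of river reaches,
    assuming a single constant wave celerity."""
    total_m = sum(reach_lengths_km) * 1000.0
    return total_m / wave_speed_m_s / 86400.0

# Example: a 1500 km path to the basin terminus at a maximum celerity of 3 m/s.
reaches_km = [400, 600, 500]
t = travel_time_days(reaches_km, wave_speed_m_s=3.0)
print(f"travel time: {t:.1f} days")

# A low-latency product is potentially useful for early warning when its
# delay is shorter than the wave's travel time to the point of interest.
product_latency_days = 2.0
print("usable for early warning:", product_latency_days < t)
```

In this toy form the comparison is a single inequality; the paper's contribution is doing it globally, over a hydrography data set, with per-reach celerities.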

Proceedings ArticleDOI
01 Sep 2006
TL;DR: This paper presents B-MAC+, an enhanced version of a widely adopted MAC protocol, and it is shown that substantial improvements, in terms of network lifetime, can be reached over the original protocol.
Abstract: Applications designed for event driven monitoring represent a challenging class of applications for wireless sensor networks. They are a special kind of monitoring applications, since they usually need low data rates, but also require mechanisms for low latency and asynchronous communication. In this paper we will focus on optimizations at the MAC layer that enable low energy consumption when contention-based protocols are adopted. We present B-MAC+, an enhanced version of a widely adopted MAC protocol, and we show that substantial improvements, in terms of network lifetime, can be reached over the original protocol.

39 citations
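The lifetime trade-off behind low-power-listening MAC protocols such as B-MAC can be sketched as below: nodes sleep most of the time and briefly sample the channel every check interval, so average current is dominated by the duty cycle. This is a generic back-of-the-envelope model, not B-MAC+'s actual optimization, and every numeric parameter is an assumed example.

```python
# Sketch of the energy/lifetime trade-off in a duty-cycled (low-power
# listening) MAC. Parameter values are illustrative assumptions only.

def avg_current_ma(check_interval_s, sample_time_s=0.003,
                   sleep_ma=0.001, listen_ma=20.0):
    """Average radio current for an idle listener that wakes for
    sample_time_s once every check_interval_s."""
    duty = sample_time_s / check_interval_s
    return duty * listen_ma + (1 - duty) * sleep_ma

def lifetime_days(check_interval_s, battery_mah=2500.0):
    """Idle-listening lifetime estimate on a given battery."""
    return battery_mah / avg_current_ma(check_interval_s) / 24.0

for interval in (0.1, 0.5, 1.0):
    print(f"check every {interval:>4} s -> ~{lifetime_days(interval):.0f} days")
```

Longer check intervals extend lifetime but force senders to transmit longer preambles (raising latency and transmit energy), which is exactly the tension that MAC-layer optimizations like B-MAC+ target.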

Journal ArticleDOI
25 May 2018
TL;DR: An overview of 5G requirements as specified by 3GPP SA1 is presented; basic requirements that are new for 5G are discussed, and 5G performance requirements are provided.
Abstract: This paper presents an overview of 5G requirements as specified by 3GPP SA1. The main drivers for 5G were the requirement to provide more capacity and higher data rates and the requirement to support different ‘vertical’ sectors with ultra-reliable and low latency communication. The paper discusses basic requirements that are new for 5G and provides 5G performance requirements. The paper also discusses a number of vertical sectors that have influenced the 5G requirements work (V2X, mission critical, railway communication) and gives an overview of developments in 3GPP SA1 that will likely influence 5G specifications in the future.

39 citations

Journal ArticleDOI
TL;DR: A digital-twin-enabled model-based scheme is proposed to achieve intelligent clock synchronization that reduces the resource consumption associated with distributed synchronization in fast-changing IIoT environments; a significant enhancement in clock accuracy is accomplished with dramatically reduced communication resource consumption in networks with different packet delay variations.
Abstract: Tight cooperation among distributively connected equipment and infrastructures of an Industrial-Internet-of-Things (IIoT) system hinges on low latency data exchange and accurate time synchronization within sophisticated networks. However, the temperature-induced clock drift in connected industry facilities constitutes a fundamental challenge for conventional synchronization techniques due to dynamic industrial environments. Furthermore, the variation of packet delivery latency in IIoT networks hinders the reliability of time information exchange, leading to deteriorated clock synchronization performance in terms of synchronization accuracy and network resource consumption. In this article, a digital-twin-enabled model-based scheme is proposed to achieve intelligent clock synchronization while reducing the resource consumption associated with distributed synchronization in fast-changing IIoT environments. By leveraging the digital-twin-enabled clock models at remote locations, the interactions required among distributed IIoT facilities to achieve synchronization are dramatically reduced. Modeling the virtual clock in advance of clock calibrations helps to characterize each clock so that its behavior under dynamic operating environments is predictable, which helps avoid excessive synchronization-related timestamp exchange. An edge-cloud collaborative architecture is also developed to enhance overall system efficiency during the development of remote digital-twin models. Simulation results demonstrate that the proposed scheme can create an accurate virtual model remotely for each local clock according to the information gathered. Meanwhile, a significant enhancement in clock accuracy is accomplished with dramatically reduced communication resource consumption in networks with different packet delay variations.

39 citations
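The idea of a remote clock model that predicts drift and only triggers resynchronization when needed can be sketched as below. This is a minimal assumed illustration of model-based synchronization in general (a linear offset-plus-drift clock model), not the paper's digital-twin algorithm; the class name, drift value, and error bound are all hypothetical.

```python
# Sketch: a remote "virtual clock" tracks a linear model of a local clock
# (offset + drift) and requests a timestamp exchange only when the
# predicted offset exceeds an accuracy bound. Illustrative only.

class VirtualClock:
    def __init__(self, offset_s=0.0, drift_ppm=0.0):
        self.offset_s = offset_s      # offset measured at the last sync
        self.drift_ppm = drift_ppm    # estimated drift rate (parts per million)
        self.last_sync_s = 0.0        # reference time of the last sync

    def predict_offset(self, t_s):
        """Predicted offset of the physical clock at reference time t_s."""
        return self.offset_s + self.drift_ppm * 1e-6 * (t_s - self.last_sync_s)

    def needs_sync(self, t_s, bound_s=0.001):
        """True if the predicted error exceeds the accuracy bound (1 ms)."""
        return abs(self.predict_offset(t_s)) > bound_s

clk = VirtualClock(offset_s=0.0, drift_ppm=10.0)  # assume 10 ppm drift
print(clk.needs_sync(50.0))    # predicted |offset| = 0.5 ms -> False
print(clk.needs_sync(200.0))   # predicted |offset| = 2.0 ms -> True
```

The payoff mirrors the abstract's claim: while the model's prediction stays within bounds, no timestamp exchange is needed, so synchronization traffic drops without sacrificing accuracy.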


Network Information
Related Topics (5)
- Network packet: 159.7K papers, 2.2M citations (92% related)
- Server: 79.5K papers, 1.4M citations (91% related)
- Wireless: 133.4K papers, 1.9M citations (90% related)
- Wireless sensor network: 142K papers, 2.4M citations (90% related)
- Wireless network: 122.5K papers, 2.1M citations (90% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2022  10
2021  692
2020  481
2019  389
2018  366
2017  227