Latency (engineering)
About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications have been published within this topic, receiving 39,210 citations. The topic is also known as: lag.
Papers published on a yearly basis
Papers
02 Jun 2010
TL;DR: A system and method for scheduling transmissions in a low-latency sensor network: information associated with at least two network devices is received, a schedule entry is determined for each device based on that information, and at least part of the schedule is transmitted to each device.
Abstract: A system and method for low latency sensor network schedules transmissions in the network is described herein. In some embodiments of the technology, information associated with at least two network devices is received. Each network device can be associated with at least one event in a sequence of events. A first schedule entry in a schedule can be determined for each of the at least two network devices based on the received information. At least a part of the schedule can be transmitted to each of the at least two network devices.
39 citations
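The scheduling flow in the abstract above can be sketched as a minimal round-robin slot assignment. Everything here is a hypothetical illustration, not the patent's actual method: `build_schedule`, the event names, and the 10 ms slot length are all assumptions.

```python
# Hypothetical sketch of the flow described in the abstract:
# collect per-device info, derive one schedule entry per device,
# then hand each device its slice of the schedule.

def build_schedule(device_info, slot_ms=10):
    """device_info: dict mapping device id -> list of its events.
    Returns a dict mapping device id -> (slot_start_ms, events)."""
    schedule = {}
    for i, (dev, events) in enumerate(sorted(device_info.items())):
        # One time slot per device, assigned round-robin.
        schedule[dev] = (i * slot_ms, events)
    return schedule

info = {"sensor_b": ["sample", "transmit"], "sensor_a": ["sample"]}
sched = build_schedule(info)
# sensor_a sorts first, so it gets the slot starting at 0 ms.
print(sched["sensor_a"])  # (0, ['sample'])
print(sched["sensor_b"])  # (10, ['sample', 'transmit'])
```

In a real deployment each device would receive only its own entry, which is what "at least a part of the schedule can be transmitted" suggests.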
18 Mar 2018
TL;DR: It is argued that measuring and controlling latency based on average values taken at a few time intervals is not enough to ensure the required timeliness behavior; latency jitter also needs to be considered when designing experiences for Virtual Reality.
Abstract: Low latency is a fundamental requirement for Virtual Reality (VR) systems to reduce the potential risks of cybersickness and to increase effectiveness, efficiency and user experience. In contrast to the effects of uniform latency degradation, the influence of latency jitter on user experience in VR is not well researched, although today's consumer VR systems are vulnerable in this respect. In this work we report on the impact of latency jitter on cybersickness in HMD-based VR environments. Test subjects were given a search task in Virtual Reality, provoking both head rotation and translation. One group experienced artificially added latency jitter in the tracking data of their head-mounted display. The introduced jitter pattern was a replication of real-world latency behavior extracted and analyzed from an existing example VR system. The effects of the introduced latency jitter were measured through self-reports via the simulator sickness questionnaire (SSQ) and by taking physiological measurements. We found a significant increase in self-reported simulator sickness. We therefore argue that measuring and controlling latency based on average values taken at a few time intervals is not enough to ensure the required timeliness behavior; latency jitter also needs to be considered when designing experiences for Virtual Reality.
39 citations
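The paper's core argument, that averages hide jitter, can be illustrated numerically: two latency traces with identical means but very different variability. The trace values below are hypothetical, and standard deviation is used here as one simple jitter measure, not the paper's metric.

```python
import statistics

# Two hypothetical motion-to-photon latency traces (ms), same mean:
steady = [20, 20, 20, 20, 20, 20]
jittery = [5, 35, 5, 35, 5, 35]

# Averaged at a few intervals, the traces look identical:
print(statistics.mean(steady), statistics.mean(jittery))  # 20 20

# The standard deviation tells them apart immediately:
print(statistics.stdev(steady))   # 0.0
print(statistics.stdev(jittery))  # ~16.4
```

A monitoring scheme that only samples averages would pass both traces, even though the second one is exactly the kind of jitter the study links to increased simulator sickness.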
TL;DR: This article proposes a resource representation scheme, allowing each ED to expose its resource information to the supervisor of the edge node through the mobile EC application programming interfaces proposed by the European Telecommunications Standards Institute.
Abstract: Low-latency IoT applications, such as autonomous vehicles, augmented/virtual reality devices, and security applications, require high computation resources to make decisions on the fly. However, these kinds of applications cannot tolerate offloading their tasks to be processed on a cloud infrastructure due to the experienced latency. Therefore, edge computing (EC) is introduced to enable low latency by moving task processing closer to the users at the edge of the network. The edge of the network is characterized by the heterogeneity of the edge devices (EDs) forming it; thus, it is crucial to devise novel solutions that take into account the different physical resources of each ED. In this article, we propose a resource representation scheme, allowing each ED to expose its resource information to the supervisor of the edge node through the mobile EC application programming interfaces proposed by the European Telecommunications Standards Institute. The information about the ED resources is exposed to the supervisor of the edge node each time a resource allocation is required. To this end, we leverage a Lyapunov optimization framework to dynamically allocate resources at the EDs. To test our proposed model, we performed intensive theoretical and experimental simulations on a testbed to validate the proposed scheme and its impact on different system parameters. The simulations have shown that our proposed approach outperforms other benchmark approaches and provides low latency and optimal resource consumption.
39 citations
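The Lyapunov optimization the article leverages can be sketched in its standard drift-plus-penalty form: at each slot, pick the allocation minimizing V·cost − Q·service, then update the backlog queue. The cost model, candidate set, arrival pattern, and V below are hypothetical placeholders, not the paper's actual formulation.

```python
# Textbook drift-plus-penalty sketch (standard Lyapunov optimization),
# not the article's exact formulation. Q is the task backlog queue;
# V trades off resource cost against queue stability.

def allocate(Q, candidates, cost, V=1.0):
    # Choose the service rate b minimizing V*cost(b) - Q*b.
    return min(candidates, key=lambda b: V * cost(b) - Q * b)

def step(Q, arrivals, candidates, cost, V=1.0):
    b = allocate(Q, candidates, cost, V)
    # Queue update: serve up to b tasks, then add new arrivals.
    return max(Q - b, 0) + arrivals, b

cost = lambda b: b ** 2       # hypothetical convex resource cost
candidates = [0, 1, 2, 3, 4]  # hypothetical service-rate choices

Q = 0
for a in [3, 3, 3, 3]:        # constant hypothetical arrivals
    Q, b = step(Q, a, candidates, cost)
print(Q)  # -> 6 (backlog stabilizes instead of growing unboundedly)
```

The appeal of this framework for edge resource allocation is that each per-slot decision uses only the current queue state, so no forecast of future arrivals is needed.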
10 Apr 2016
TL;DR: This paper identifies the key deficiency of prior solutions and uses this insight to motivate the design of Trinity, a simple, practical yet effective solution that achieves bandwidth guarantees, work conservation and low latency simultaneously in the cloud.
Abstract: Today's cloud is shared among multiple tenants running different applications, and a desirable multi-tenant datacenter network infrastructure should provide bandwidth guarantees for throughput-intensive applications, low latency for latency-sensitive short messages, as well as work conservation to fully utilize the network bandwidth. Despite significant efforts in recent years, none of the existing solutions achieves these three properties simultaneously. In this paper, we identify the key deficiency of prior solutions and use this insight to motivate our design of Trinity, a simple, practical yet effective solution that achieves bandwidth guarantees, work conservation and low latency simultaneously in the cloud. We implement Trinity using existing commodity hardware and demonstrate its superior performance over prior solutions using testbed experiments.
38 citations