
Latency (engineering)

About: Latency (engineering) is a research topic. Over the lifetime, 3,729 publications have been published within this topic, receiving 39,210 citations. The topic is also known as: lag.


Papers
Posted Content
TL;DR: A terahertz (THz) cellular network is considered for providing high-rate VR services, enabling successful visual perception; a tractable expression for the system's reliability is derived as a function of the THz network parameters.
Abstract: Guaranteeing ultra reliable low latency communications (URLLC) with high data rates for virtual reality (VR) services is a key challenge to enable a dual VR perception: visual and haptic. In this paper, a terahertz (THz) cellular network is considered to provide high-rate VR services, thus enabling a successful visual perception. For this network, guaranteeing URLLC with high rates requires overcoming the uncertainty stemming from the THz channel. To this end, the achievable reliability and latency of VR services over THz links are characterized. In particular, a novel expression for the probability distribution function of the transmission delay is derived as a function of the system parameters. Subsequently, the end-to-end (E2E) delay distribution that takes into account both processing and transmission delay is found and a tractable expression of the reliability of the system is derived as a function of the THz network parameters such as the molecular absorption loss and noise, the transmitted power, and the distance between the VR user and its respective small base station (SBS). Numerical results show the effects of various system parameters such as the bandwidth and the region of non-negligible interference on the reliability of the system. In particular, the results show that THz can deliver rates up to 16.4 Gbps and a reliability of 99.999% (with a delay threshold of 30 ms) provided that the impact of the molecular absorption on the THz links, which substantially limits the communication range of the SBS, is alleviated by densifying the network accordingly.
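The reliability figure quoted above is, in essence, P(E2E delay ≤ threshold). A minimal Monte Carlo sketch of that computation, assuming (purely for illustration) an exponentially distributed transmission delay plus a fixed processing delay rather than the paper's derived THz delay distribution:

```python
import random

def simulate_reliability(mean_tx_delay_ms, proc_delay_ms, threshold_ms,
                         n_trials=100_000, seed=0):
    """Estimate reliability = P(E2E delay <= threshold).

    The exponential transmission-delay model and all parameter values
    here are illustrative assumptions; the paper derives the actual
    delay distribution from THz channel parameters such as molecular
    absorption loss and transmit power.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # E2E delay = random transmission delay + fixed processing delay
        e2e = rng.expovariate(1.0 / mean_tx_delay_ms) + proc_delay_ms
        if e2e <= threshold_ms:
            hits += 1
    return hits / n_trials

# Example: 2 ms mean transmission delay, 5 ms processing, 30 ms budget
r = simulate_reliability(2.0, 5.0, 30.0)
```

Under these toy parameters the delay budget is rarely violated, so the estimate lands very close to 1; pushing the mean transmission delay up (e.g. by widening the SBS distance) drives the estimate down.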

34 citations

Journal ArticleDOI
TL;DR: DeepSlicing as mentioned in this paper is a collaborative and adaptive inference system that adapts to various CNNs and supports customized fine-grained scheduling by partitioning both model and data.
Abstract: The booming of Convolutional Neural Networks (CNNs) has empowered many computer-vision applications. Due to their stringent requirements for computing resources, substantial research has been conducted on how to optimize their deployment and execution on resource-constrained devices. However, previous works have several weaknesses, including limited support for various CNN structures, fixed scheduling strategies, overlapped computations, and high synchronization overheads. In this article, we present DeepSlicing, a collaborative and adaptive inference system that adapts to various CNNs and supports customized, flexible, fine-grained scheduling. As a built-in functionality, DeepSlicing supports typical CNNs including GoogLeNet and ResNet. By partitioning both model and data, we also design an efficient scheduler, the Proportional Synchronized Scheduler (PSS), which achieves a trade-off between computation and synchronization. We have implemented DeepSlicing in PyTorch on a testbed with real-world edge settings consisting of 8 heterogeneous Raspberry Pis. The results indicate that DeepSlicing with PSS dramatically outperforms existing systems, e.g., inference latency and memory footprint are reduced by up to 5.79× and 14.72×, respectively.
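The data-partitioning idea behind such collaborative inference can be sketched as splitting a feature map's rows across workers, with a small "halo" of overlap so boundary convolutions still see their neighbours. This is a simplified illustration, not DeepSlicing's actual PSS scheduling logic; the function name and halo size are assumptions:

```python
def partition_rows(height, n_workers, overlap):
    """Split feature-map rows into per-worker slices with halo overlap.

    Returns (start, stop) row bounds per worker. Each slice is extended
    by `overlap` rows on each side (clamped to the map edges) so that a
    convolution at a slice boundary has the input rows it needs.
    """
    base = height // n_workers
    parts = []
    for i in range(n_workers):
        start = i * base
        # Last worker absorbs the remainder rows
        stop = height if i == n_workers - 1 else (i + 1) * base
        parts.append((max(0, start - overlap), min(height, stop + overlap)))
    return parts

# Example: a 224-row feature map split across 4 workers with a 1-row halo
parts = partition_rows(224, 4, overlap=1)
```

The overlap trades a little redundant computation for the elimination of a synchronization round between neighbouring workers — the same computation/synchronization tension the PSS scheduler balances.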

34 citations

Book ChapterDOI
26 Sep 2016
TL;DR: The results of network modelling with a random graph model are presented; the model generates a random graph with a given number of vertices and builds its minimum spanning tree using Prim's algorithm.
Abstract: The constantly growing number of wireless devices leads to increasing maintenance complexity and rising requirements on mobile access services. Following this, the paper discusses perspectives, challenges and services of 5th-generation wireless systems, as well as direct device-to-device (D2D) communication technology, which can provide energy-efficient, high-throughput and low-latency transmission services between end users. Due to these expected benefits, part of the network traffic between mobile terminals can be transmitted directly between the terminals via an established D2D connection, without utilizing an infrastructure link. In order to analyse how frequently such direct connectivity can be used, it is important to estimate the probability of D2D communication for an arbitrary pair of mobile nodes. In this paper, we present the results of network modelling using a random graph model. The model was implemented as a simulation program in C# which generates a random graph with a given number of vertices and creates the minimum spanning tree (MST) using Prim's algorithm. All our results and practical findings are summarized at the end of this manuscript.
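The MST step the abstract describes is standard Prim's algorithm. A minimal sketch (in Python rather than the paper's C#, and with an illustrative toy graph, since the simulation program itself is not public):

```python
import heapq

def prim_mst(n, edges):
    """Total weight of a minimum spanning tree via Prim's algorithm.

    n: number of vertices labelled 0..n-1; edges: iterable of
    (u, v, weight) tuples for an undirected, connected graph.
    """
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited = [False] * n
    heap = [(0, 0)]          # (edge weight, vertex); grow from vertex 0
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue         # stale entry: u already joined the tree
        visited[u] = True
        total += w
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

# Toy 4-vertex graph: the MST uses edges of weight 1, 2 and 3
w = prim_mst(4, [(0, 1, 1), (1, 2, 2), (2, 3, 3), (0, 3, 10), (0, 2, 5)])
```

In the paper's setting, the MST over a random geometric placement of nodes would give the backbone of cheapest D2D links connecting all terminals.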

34 citations

Proceedings ArticleDOI
27 Feb 2016
TL;DR: The design and evaluation of an adaptive mutual exclusion scheme (AHMCS lock), which employs several orthogonal strategies---a hierarchical MCS (HMCS) lock for high throughput under high contention, Lamport's fast path approach for low latency under low contention, an adaptation mechanism that employs hysteresis to balance latency and throughput under moderate contention, and hardware transactional memory for lowest latency in the absence of contention.
Abstract: Over the last decade, the growing use of cache-coherent NUMA architectures has spurred the development of numerous locality-preserving mutual exclusion algorithms. NUMA-aware locks such as HCLH, HMCS, and cohort locks exploit locality of reference among nearby threads to deliver high lock throughput under high contention. However, the hierarchical nature of these locality-aware locks increases latency, which reduces the throughput of uncontended or lightly-contended critical sections. To date, no lock design for NUMA systems has delivered both low latency under low contention and high throughput under high contention. In this paper, we describe the design and evaluation of an adaptive mutual exclusion scheme (AHMCS lock), which employs several orthogonal strategies---a hierarchical MCS (HMCS) lock for high throughput under high contention, Lamport's fast path approach for low latency under low contention, an adaptation mechanism that employs hysteresis to balance latency and throughput under moderate contention, and hardware transactional memory for lowest latency in the absence of contention. The result is a top-performing lock that has most properties of an ideal mutual exclusion algorithm. AHMCS exploits the strengths of multiple contention management techniques to deliver high performance over a broad range of contention levels. Our empirical evaluations demonstrate the effectiveness of AHMCS over prior art.
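The hysteresis-based adaptation mentioned above can be sketched as a mode selector that only switches lock paths when observed contention crosses asymmetric thresholds, so moderate contention does not cause thrashing between paths. The thresholds, class name, and contention measure below are illustrative assumptions, not AHMCS's actual parameters:

```python
class AdaptiveLockPolicy:
    """Choose between a fast path and a hierarchical path with hysteresis.

    Asymmetric enter/exit thresholds mean the policy must see clearly
    high contention before paying the hierarchical path's latency, and
    clearly low contention before giving up its throughput.
    """
    def __init__(self, enter_hier=8, exit_hier=2):
        self.enter_hier = enter_hier   # switch to hierarchical at/above this
        self.exit_hier = exit_hier     # switch back to fast at/below this
        self.mode = "fast"

    def update(self, waiters):
        """waiters: queue length observed at the last lock acquisition."""
        if self.mode == "fast" and waiters >= self.enter_hier:
            self.mode = "hierarchical"
        elif self.mode == "hierarchical" and waiters <= self.exit_hier:
            self.mode = "fast"
        return self.mode

p = AdaptiveLockPolicy()
modes = [p.update(w) for w in [1, 9, 5, 5, 2, 1]]
```

Note how the observation `5` keeps the policy in hierarchical mode on the way down but would not have moved it there from fast mode — that gap is the hysteresis.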

34 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigated the content service provision of information-centric vehicular networks (ICVNs) from the aspect of mobile edge caching, considering the dynamic driving-related context information.
Abstract: In this paper, the content service provision of information-centric vehicular networks (ICVNs) is investigated from the aspect of mobile edge caching, considering dynamic driving-related context information. To provide up-to-date information with low latency, two schemes are designed for cache update and content delivery at the roadside units (RSUs). The roadside-unit-centric (RSUC) scheme decouples cache update and content delivery through bandwidth splitting, where the cached content items are updated regularly in a round-robin manner. The request-adaptive (ReA) scheme updates the cached content items upon user requests with certain probabilities. The performance of both proposed schemes is analyzed, whereby the average age of information (AoI) and service latency are derived in closed form. Surprisingly, the AoI-latency trade-off does not always exist, and frequent cache updates can degrade both metrics. Thus, the RSUC and ReA schemes are further optimized to balance AoI and latency. Extensive simulations are conducted with the SUMO and OMNeT++ simulators, and the results show that the proposed schemes can reduce service latency by up to 80% while guaranteeing content freshness in heavily loaded ICVNs.
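For round-robin refresh as in the RSUC scheme, the steady-state AoI of any one item is a sawtooth: it drops to zero when refreshed and climbs until its next turn. A minimal sketch of the resulting mean age, under the simplifying assumptions (mine, not the paper's closed-form derivation) that updates are evenly spaced and delivery delay is ignored:

```python
def average_aoi_round_robin(n_items, update_interval):
    """Average age of information under round-robin cache refresh.

    With n_items refreshed one per update_interval, each item is
    refreshed every n_items * update_interval seconds; its age rises
    linearly from 0 to that cycle length, so the time-average age is
    half the cycle.
    """
    cycle = n_items * update_interval
    return cycle / 2

# Example: 10 cached items, one refreshed every 0.5 s -> 2.5 s mean AoI
aoi = average_aoi_round_robin(10, 0.5)
```

The formula also shows the tension the paper exploits: shortening the update interval lowers AoI but consumes bandwidth that could serve requests, which is why over-frequent updates can hurt both AoI and latency.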

34 citations


Network Information
Related Topics (5)
- Network packet: 159.7K papers, 2.2M citations (92% related)
- Server: 79.5K papers, 1.4M citations (91% related)
- Wireless: 133.4K papers, 1.9M citations (90% related)
- Wireless sensor network: 142K papers, 2.4M citations (90% related)
- Wireless network: 122.5K papers, 2.1M citations (90% related)
Performance Metrics

No. of papers in the topic in previous years:
- 2022: 10
- 2021: 692
- 2020: 481
- 2019: 389
- 2018: 366
- 2017: 227