
Latency (engineering)

About: Latency (engineering) is a research topic. Over its lifetime, 3,729 publications on this topic have received 39,210 citations. The topic is also known as: lag.


Papers
Proceedings ArticleDOI
TL;DR: The proposed approach is expected to achieve the scalability and flexibility to support more than 10 Tbps link bandwidth and more than 100,000 endpoints with 40 WDM channels, and to offer direct end-to-end optical paths enabling low latency at the “speed of light”.
Abstract: This paper discusses how to realize an optical circuit switching interconnect capable of more than 10 Tbps link bandwidth and scalability beyond 100,000 endpoints. To sustain continuous performance improvement of datacenters and high-performance computers, a high-capacity, low-latency interconnect network is essential. To handle such large-bandwidth interconnect networks with low energy consumption, optical switching technologies will become indispensable. This paper first examines the scaling of the energy consumption of optical circuit switching networks based on state-of-the-art silicon photonics switch technology. Second, to achieve Tbps-class link bandwidth, WDM transmission technology and a shared WDM light source mechanism named the “wavelength bank” are introduced. Because the light source is shared, each optical transceiver does not have to carry individual light sources, which enables simple WDM transceivers built with cost-efficient silicon photonics technologies. Then a new optical switch control approach that reduces the control overhead time is discussed. In the proposed approach, the optical data plane itself represents the path destination, which enables a simple distributed-like control procedure. The proposed approach is expected to achieve the scalability and flexibility to support more than 10 Tbps link bandwidth and more than 100,000 endpoints with 40 WDM channels. The proposed interconnect architecture offers direct end-to-end optical paths, enabling low latencies at the “speed of light”. The paper also discusses some of the challenges that must be resolved to practically realize future large-bandwidth optical interconnect networks.
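A quick back-of-envelope check of the stated capacity figures (the per-channel rate is derived here, not quoted from the paper): reaching 10 Tbps over 40 WDM channels implies each channel must carry 250 Gbps.

```python
# Derived per-channel rate for the stated link bandwidth:
# 10 Tbps spread over 40 WDM channels.
TARGET_LINK_BW_TBPS = 10
WDM_CHANNELS = 40

per_channel_gbps = TARGET_LINK_BW_TBPS * 1000 / WDM_CHANNELS
print(per_channel_gbps)  # 250.0
```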

12 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: Analytic and simulation results compare the proposed scheme with its half-duplex (HD) counterpart under the same transmitter establishment criteria, showing the achievable gain of the FD-D2D scheme in video content delivery in terms of sum throughput and latency.
Abstract: Growing demand for video services is the main driver of increasing traffic in wireless cellular data networks. Wireless video distribution schemes have recently been proposed to offload data via Device-to-Device (D2D) communications. These offloading schemes increase capacity and reduce end-to-end delay in cellular networks, helping to serve the dramatically increasing demand for high-quality video. In this paper, we propose a new scheme for video distribution over cellular networks that exploits full-duplex (FD) D2D communication in two scenarios. In the first, two nodes exchange their desired video files simultaneously with each other; in the second, each node can concurrently transmit to and receive from two different nodes. In the latter case, an intermediate transceiver can serve one or multiple users' file requests while capturing its desired file from another device in the vicinity. Analytic and simulation results are used to compare the proposed scheme with its half-duplex (HD) counterpart under the same transmitter establishment criteria, showing the achievable gain of the FD-D2D scheme in video content delivery in terms of sum throughput and latency.
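The intuition behind the FD gain in the two-node exchange scenario can be sketched with an idealized model (all numbers and the perfect self-interference-cancellation assumption are illustrative, not results from the paper): under HD the two transfers must time-share the channel, while under FD they run simultaneously.

```python
# Idealized two-node file exchange: per-link rate R, perfect
# self-interference cancellation for FD, strict time-sharing for HD.
R = 50e6          # link rate in bit/s (assumed)
FILE_BITS = 1e9   # each node wants a 1 Gbit video file (assumed)

# Half-duplex: the two transfers alternate on the channel.
hd_latency = 2 * FILE_BITS / R
# Full-duplex: both transfers run at the same time.
fd_latency = FILE_BITS / R

print(hd_latency, fd_latency)  # 40.0 20.0
```

Under these assumptions FD halves the exchange latency (equivalently, doubles sum throughput); in practice residual self-interference reduces the gain below 2x.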

12 citations

Patent
22 Jul 2015
TL;DR: One or more processors determine which flows in the flow tables of delivery devices are affected by the failure of the first delivery device, by accessing an index of flow mappings maintained in cache storage.
Abstract: One or more processors receive a notification that a first delivery device, of a plurality of delivery devices of an OpenFlow network, has failed to deliver a data packet. One or more processors determine the flows of flow tables belonging to delivery devices that are affected as a result of the failure of the first delivery device. The flows are determined by accessing an index of flow mappings maintained in cache storage, in which at least one of the affected flows includes a pattern of information fields and actions that matches the pattern of information fields and actions of the data packet. One or more processors then send instructions to the delivery devices of the network to perform an asynchronous activity on the respective flow tables of the delivery devices that include the flows affected by the failure of the first delivery device.
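The cached flow-mapping index can be sketched as follows (a hypothetical data layout, not the patent's actual structure): mapping each delivery device to the flows that traverse it lets a failure resolve to the affected flows with one lookup, instead of scanning every flow table.

```python
from collections import defaultdict

# device id -> ids of flows whose path traverses that device
flow_index = defaultdict(list)

def register_flow(flow_id, path):
    """Record that `flow_id` traverses each delivery device on `path`."""
    for device in path:
        flow_index[device].append(flow_id)

def affected_flows(failed_device):
    """Flows that must be repaired when `failed_device` fails."""
    return flow_index.get(failed_device, [])

register_flow("f1", ["sw1", "sw2", "sw3"])
register_flow("f2", ["sw2", "sw4"])
print(affected_flows("sw2"))  # ['f1', 'f2']
```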

12 citations

Proceedings ArticleDOI
23 Oct 2007
TL;DR: The proposed optimization significantly reduces the control traffic for low data traffic intensity in the network and increases protocol scalability for large networks without compromising the low latency of proactive protocols.
Abstract: Many applications, such as disaster response and military applications, call for proactive maintenance of links and routes in Mobile Ad hoc NETworks (MANETs) to ensure low latency during data delivery. The goal of this paper is to minimize the energy wasted in the network on high control traffic, which restricts the scalability and applicability of such protocols, without trading off low latency. We categorize the proactive protocols based on the periodic route and link maintenance operations they perform, and analytically derive the optimum periods for these operations in the different protocol classes. The analysis takes into account data traffic intensity, link dynamics, application reliability requirements, and the size of the network. The proposed optimization significantly reduces the control traffic at low data traffic intensity and increases protocol scalability for large networks without compromising the low latency of proactive protocols.
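The shape of such a period optimization can be illustrated with a toy cost model (entirely hypothetical, not the paper's derivation): each control update costs energy c and is paid 1/T times per second, while route staleness adds a penalty growing linearly with the period T at rate k; the total cost c/T + k*T is minimized at T* = sqrt(c/k).

```python
import math

c = 4.0   # energy per control update (assumed units)
k = 1.0   # staleness penalty per second of period (assumed units)

# Minimizer of f(T) = c/T + k*T, from setting f'(T) = -c/T**2 + k = 0.
t_opt = math.sqrt(c / k)
print(t_opt)  # 2.0
```

The real derivation additionally folds in data traffic intensity, link dynamics, and reliability requirements, but the trade-off has this same overhead-versus-staleness form.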

12 citations

Proceedings ArticleDOI
10 May 2016
TL;DR: The results show that a large number of small edge servers is preferable to a few larger ones for reducing player latency, which supports approaches that allow deploying virtual server instances in the back-haul.
Abstract: Free-to-play models, game streaming, and eSports have driven the recent growth in online gaming's popularity. At the forefront are multiplayer online battle arenas, which gain high popularity by offering a competitive format that is easy to access and requires cooperation and team play. These games rely heavily on fast player reactions, which makes latency the key performance indicator of such applications. To obtain low latency, this paper proposes moving game servers close to players, towards the edge of the network. The performance of such a mechanism depends strongly on the geographic distribution of players. By analyzing match histories and statistics, we develop models for the arrival process and location of game requests. This allows us to evaluate the performance of edge server resource migration policies in an event-based simulation. Our results show that a large number of edge servers is preferable to a few larger edge servers for reducing player latency. This supports approaches that allow deploying virtual server instances in the back-haul.
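The many-small-versus-few-large finding can be illustrated with a synthetic geometry experiment (uniform player placement on a unit square, not the paper's request models): players connect to the nearest of n randomly placed edge servers, and the mean player-to-server distance, a rough proxy for latency, shrinks as n grows.

```python
import random
random.seed(0)

def mean_nearest_distance(n_servers, n_players=2000):
    """Mean distance from a uniform random player to its nearest server."""
    servers = [(random.random(), random.random()) for _ in range(n_servers)]
    total = 0.0
    for _ in range(n_players):
        px, py = random.random(), random.random()
        total += min(((px - sx) ** 2 + (py - sy) ** 2) ** 0.5
                     for sx, sy in servers)
    return total / n_players

few, many = mean_nearest_distance(2), mean_nearest_distance(50)
print(few > many)  # True: more edge servers -> lower mean distance
```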

12 citations


Network Information
Related Topics (5)
Network packet
159.7K papers, 2.2M citations
92% related
Server
79.5K papers, 1.4M citations
91% related
Wireless
133.4K papers, 1.9M citations
90% related
Wireless sensor network
142K papers, 2.4M citations
90% related
Wireless network
122.5K papers, 2.1M citations
90% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2022	10
2021	692
2020	481
2019	389
2018	366
2017	227