scispace - formally typeset
Topic

Latency (engineering)

About: Latency (engineering) is a research topic. Over the lifetime, 3729 publications have been published within this topic receiving 39210 citations. The topic is also known as: lag.


Papers
Journal ArticleDOI
TL;DR: This analysis aims to introduce the QKD blocks as a pillar of the quantum-safe security framework of the 5G/B5G-oriented fronthaul infrastructure and finds that for the dark fiber case, secret keys can be distilled at fiber lengths much longer than the maximum fiber fronthaul distance corresponding to the round-trip latency barrier.
Abstract: A research contribution focusing on the Quantum Key Distribution (QKD)-enabled solutions assisting in the security framework of an optical 5G fronthaul segment is presented. We thoroughly investigate the integration of a BB84-QKD link, operating at telecom band, delivering quantum keys for the Advanced Encryption Standard (AES)-256 encryption engines of a packetized fronthaul layer interconnecting multiple 5G terminal nodes. Secure Key Rate calculations are studied for both dedicated and shared fiber configurations to identify the attack surface of AES-encrypted data links in each deployment scenario. We also propose a converged fiber-wireless scenario, exploiting a mesh networking extension operated by mmWave wireless links. In addition to the quantum layer performance, emphasis is placed on the strict requirements of 5G-oriented optical edge segments, such as the latency and the availability of quantum keys. We find that for the dark fiber case, secret keys can be distilled at fiber lengths much longer than the maximum fiber fronthaul distance corresponding to the round-trip latency barrier, for both P2P and P2MP topologies. On the contrary, the inelastic Raman scattering makes the simultaneous transmission of quantum and classical signals much more challenging. To counteract the contamination of noise photons, a resilient classical/QKD coexistence scheme is adopted. Motivated by the recent advancements in quantum technology roadmap, our analysis aims to introduce the QKD blocks as a pillar of the quantum-safe security framework of the 5G/B5G-oriented fronthaul infrastructure.
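The secure-key-rate trade-off the abstract describes can be illustrated with a minimal sketch: the sifted-key rate falls exponentially with fiber length, and the asymptotic BB84 bound determines what fraction of it survives privacy amplification at a given QBER. The 0.2 dB/km attenuation and the 1 - 2*h2(e) bound are standard textbook values, not parameters taken from this paper.

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_secret_fraction(qber: float) -> float:
    """Asymptotic BB84 lower bound: at most 1 - 2*h2(e) of the sifted key survives."""
    return max(0.0, 1.0 - 2.0 * h2(qber))

def sifted_rate_after_fiber(raw_rate_hz: float, length_km: float,
                            atten_db_per_km: float = 0.2) -> float:
    """Detected sifted-key rate after fiber loss (dark-fiber case, no Raman noise)."""
    transmittance = 10 ** (-atten_db_per_km * length_km / 10)
    return raw_rate_hz * transmittance * 0.5  # factor 1/2 for basis sifting

# Illustrative numbers: 1 MHz raw rate over 50 km of dark fiber at 2% QBER
key_rate = sifted_rate_after_fiber(1e6, 50) * bb84_secret_fraction(0.02)
```

Under this toy model the secret-key rate stays positive as long as the QBER is below roughly 11%, which is why the dark-fiber case distills keys well past the latency-limited fronthaul distance, while Raman noise from co-propagating classical channels erodes the margin.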

26 citations

Proceedings ArticleDOI
16 Jun 2020
TL;DR: In this article, the authors propose an architectural change, where each long bitline in DRAM and NVM is split into two segments by an isolation transistor, which enables non-uniform accesses within each memory type (i.e., intra-memory asymmetry), leading to performance and reliability trade-offs in the DRAM-NVM hybrid memory.
Abstract: Modern computing systems are embracing hybrid memory comprising DRAM and non-volatile memory (NVM) to combine the best properties of both memory technologies, achieving low latency, high reliability, and high density. A prominent characteristic of DRAM-NVM hybrid memory is that NVM access latency is much higher than DRAM access latency. We call this inter-memory asymmetry. We observe that parasitic components on a long bitline are a major source of high latency in both DRAM and NVM, and a significant factor contributing to high-voltage operations in NVM, which impact their reliability. We propose an architectural change, where each long bitline in DRAM and NVM is split into two segments by an isolation transistor. One segment can be accessed with lower latency and operating voltage than the other. By introducing tiers, we enable non-uniform accesses within each memory type (which we call intra-memory asymmetry), leading to performance and reliability trade-offs in DRAM-NVM hybrid memory. We show that our hybrid tiered-memory architecture has a tremendous potential to improve performance and reliability, if exploited by an efficient page management policy at the operating system (OS). Modern OSes are already aware of inter-memory asymmetry. They migrate pages between the two memory types during program execution, starting from an initial allocation of the page to a randomly-selected free physical address in the memory. We extend existing OS awareness in three ways. First, we exploit both inter- and intra-memory asymmetries to allocate and migrate memory pages between the tiers in DRAM and NVM. Second, we improve the OS's page allocation decisions by predicting the access intensity of a newly-referenced memory page in a program and placing it in a matching tier during its initial allocation. This minimizes page migrations during program execution, lowering the performance overhead.
Third, we propose a solution to migrate pages between the tiers of the same memory without transferring data over the memory channel, minimizing channel occupancy and improving performance. Our overall approach, which we call MNEME, to enable and exploit asymmetries in DRAM-NVM hybrid tiered memory improves both performance and reliability for both single-core and multi-programmed workloads.
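The tier-aware initial-allocation idea can be sketched as follows: four tiers (near/far bitline segments in DRAM and NVM) and an allocator that maps a page's predicted access intensity to a tier. The tier latencies and the hot/cold thresholds below are illustrative numbers, not MNEME's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    read_latency_ns: int

# Fastest to slowest: the isolated near segment of each bitline has the
# lower latency within its memory type (latencies are made-up examples).
TIERS = [
    Tier("DRAM-near", 30),
    Tier("DRAM-far", 50),
    Tier("NVM-near", 120),
    Tier("NVM-far", 200),
]

def place_page(predicted_accesses_per_ms: float) -> Tier:
    """Map predicted access intensity to a tier at initial allocation,
    hottest pages to the fastest segment (illustrative thresholds)."""
    if predicted_accesses_per_ms >= 100:
        return TIERS[0]
    if predicted_accesses_per_ms >= 10:
        return TIERS[1]
    if predicted_accesses_per_ms >= 1:
        return TIERS[2]
    return TIERS[3]
```

Placing a page in a matching tier up front is what reduces later migrations; the paper's third contribution then makes the remaining intra-memory migrations cheap by keeping them off the memory channel.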

26 citations

Proceedings ArticleDOI
15 Feb 2018
TL;DR: In this paper, an open-source FPGA-based configurable architecture for arbitrary packet parsing to be used in SDN networks is presented, which is pipelined and entirely modeled using templated C++ classes.
Abstract: Packet parsing is a key step in SDN-aware devices. Packet parsers in SDN networks need to be both reconfigurable and fast, to support the evolving network protocols and the increasing multi-gigabit data rates. The combination of packet processing languages with FPGAs seems to be the perfect match for these requirements. In this work, we develop an open-source FPGA-based configurable architecture for arbitrary packet parsing to be used in SDN networks. We generate low latency and high-speed streaming packet parsers directly from a packet processing program. Our architecture is pipelined and entirely modeled using templated C++ classes. The pipeline layout is derived from a parser graph that corresponds to a P4 code after a series of graph transformation rounds. The RTL code is generated from the C++ description using Xilinx Vivado HLS and synthesized with Xilinx Vivado. Our architecture achieves a 100 Gbit/s data rate in a Xilinx Virtex-7 FPGA while reducing the latency by 45% and the LUT usage by 40% compared to the state-of-the-art.
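A parser graph of the kind the pipeline is derived from can be sketched in a few lines: each node names a header, its length, and a transition keyed on one field. The protocol numbers are real (EtherType 0x0800, IP protocol 6/17), but the graph encoding itself is an illustration, not the paper's P4-derived layout.

```python
import struct

# node: (header_len_bytes, key_offset_in_header, key_format, transitions)
PARSER_GRAPH = {
    "ethernet": (14, 12, ">H", {0x0800: "ipv4"}),
    "ipv4":     (20,  9, ">B", {6: "tcp", 17: "udp"}),
    "tcp":      (20, None, None, {}),
    "udp":      ( 8, None, None, {}),
}

def parse(packet: bytes) -> list[str]:
    """Walk the parser graph, returning the sequence of recognized headers."""
    node, offset, path = "ethernet", 0, []
    while node is not None:
        hdr_len, key_off, key_fmt, transitions = PARSER_GRAPH[node]
        if offset + hdr_len > len(packet):
            break  # truncated header: stop parsing
        path.append(node)
        nxt = None
        if key_fmt is not None:
            (key,) = struct.unpack_from(key_fmt, packet, offset + key_off)
            nxt = transitions.get(key)
        offset += hdr_len
        node = nxt
    return path
```

In the hardware version, each graph node becomes a pipeline stage operating on the streaming packet window, which is what makes the parser both reconfigurable (regenerate from a new P4 program) and fast (one header decision per stage per cycle).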

26 citations

Patent
11 Sep 1998
TL;DR: In this paper, a DSP system is provided for performing FFT computations with low latency by parallel processing of complex data points through a plurality of butterfly FFT execution units.
Abstract: A DSP system is provided for performing FFT computations with low latency by parallel processing of complex data points through a plurality of butterfly FFT execution units. The system simplifies the circuitry required by employing a single address generator for all of the memory units coupled to like ports on each execution unit. All RAM's connected to, for example, the A ports of a plurality of DSP's will be addressed by a single address generator. Similarly, all RAM's connected to the B ports of a plurality of DSP's will be addressed by a single address generator. Simple one-port RAM memory is suitable for use with the invention.
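The regularity that makes a single address generator per port sufficient is visible in the textbook radix-2 algorithm: within each stage, every butterfly pairs addresses (i, i + half) with the same stride, so all execution units can be driven by one generated sequence. The sketch below is a standard iterative FFT, not the patent's circuit.

```python
import cmath

def butterfly(a: complex, b: complex, w: complex) -> tuple[complex, complex]:
    """One radix-2 butterfly: (a + w*b, a - w*b)."""
    t = w * b
    return a + t, a - t

def fft(x: list[complex]) -> list[complex]:
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    # bit-reversal permutation (decimation in time)
    x = list(x)
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # stages: note every butterfly in a stage uses the same (i, i + half)
    # address pattern, the regularity a shared address generator exploits
    size = 2
    while size <= n:
        half = size // 2
        wstep = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1 + 0j
            for i in range(start, start + half):
                x[i], x[i + half] = butterfly(x[i], x[i + half], w)
                w *= wstep
        size *= 2
    return x
```

Because reads and writes within a stage follow this fixed pattern, simple one-port RAMs addressed by a shared generator suffice, which is the simplification the patent claims.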

26 citations

Proceedings ArticleDOI
19 Apr 2010
TL;DR: This work demonstrates that, by using best-in-class commodity hardware, algorithmic innovations and careful design, it is possible to obtain the performance of custom-designed hardware solutions by providing low latency, high bandwidth and the flexibility of commodity components in a single framework.
Abstract: This paper presents and evaluates the performance of a prototype of an on-line OPRA data feed decoder. Our work demonstrates that, by using best-in-class commodity hardware, algorithmic innovations and careful design, it is possible to obtain the performance of custom-designed hardware solutions. Our prototype system integrates the latest Intel Nehalem processors and Myricom 10 Gigabit Ethernet technologies with an innovative algorithmic design based on the DotStar compilation tool. The resulting system can provide low latency, high bandwidth and the flexibility of commodity components in a single framework, with an end-to-end latency of less than four microseconds and an OPRA feed processing rate of almost 3 million messages per second per core, with a packet payload of only 256 bytes.
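A quick back-of-the-envelope check puts the reported figures in context: the message rate and payload size restate the paper's numbers, while the bandwidth conversion (ignoring framing overhead) is ours.

```python
def feed_bandwidth_gbps(msgs_per_sec: float, payload_bytes: int) -> float:
    """Wire bandwidth implied by a message rate, ignoring framing overhead."""
    return msgs_per_sec * payload_bytes * 8 / 1e9

# ~3 million 256-byte messages per second per core, as reported
per_core = feed_bandwidth_gbps(3e6, 256)  # about 6.1 Gbit/s per core
```

At roughly 6.1 Gbit/s of payload per core, two cores are enough to saturate the 10 Gigabit Ethernet link used in the prototype, which is why the commodity design can match custom hardware.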

26 citations


Network Information
Related Topics (5)
Network packet
159.7K papers, 2.2M citations
92% related
Server
79.5K papers, 1.4M citations
91% related
Wireless
133.4K papers, 1.9M citations
90% related
Wireless sensor network
142K papers, 2.4M citations
90% related
Wireless network
122.5K papers, 2.1M citations
90% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    10
2021    692
2020    481
2019    389
2018    366
2017    227