
Showing papers on "Latency (engineering)" published in 2018


Posted Content
TL;DR: A generic and effective Temporal Shift Module (TSM) that can achieve the performance of 3D CNN but maintain 2D CNN's complexity, and that is extended to the online setting, enabling real-time low-latency online video recognition and video object detection.
Abstract: The explosive growth in video streaming gives rise to challenges in performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN's complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extend TSM to the online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranked first on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13 ms and 35 ms, respectively, for online video recognition. The code is available at: this https URL.
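The core channel-shift operation is simple enough to sketch. Below is a minimal NumPy illustration of shifting a slice of channels one step forward and another slice one step backward along the temporal dimension, assuming a [T, C, H, W] clip layout; the 1/8 shift fraction and the layout are assumptions for this sketch, not the module's exact configuration.

```python
import numpy as np

def temporal_shift(frames, shift_fraction=0.125):
    """Illustrative temporal shift over a clip of shape [T, C, H, W].

    One slice of channels is shifted forward in time, another backward;
    vacated positions are zero-filled. Fraction and layout are assumptions.
    """
    T, C, H, W = frames.shape
    fold = int(C * shift_fraction)
    out = np.zeros_like(frames)
    out[1:, :fold] = frames[:-1, :fold]                   # shift forward in time
    out[:-1, fold:2 * fold] = frames[1:, fold:2 * fold]   # shift backward in time
    out[:, 2 * fold:] = frames[:, 2 * fold:]              # remaining channels untouched
    return out

clip = np.random.rand(8, 64, 56, 56).astype(np.float32)   # 8 frames, 64 channels
shifted = temporal_shift(clip)
print(shifted.shape)
```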

721 citations


Journal ArticleDOI
TL;DR: In an interactive VR gaming arcade case study, it is shown that a smart network design that leverages the use of mmWave communication, edge computing, and proactive caching can achieve the future vision of VR over wireless.
Abstract: VR is expected to be one of the killer applications in 5G networks. However, many technical bottlenecks and challenges need to be overcome to facilitate its wide adoption. In particular, VR requirements in terms of high throughput, low latency, and reliable communication call for innovative solutions and fundamental research cutting across several disciplines. In view of the above, this article discusses the challenges and enablers for ultra-reliable and low-latency VR. Furthermore, in an interactive VR gaming arcade case study, we show that a smart network design that leverages the use of mmWave communication, edge computing, and proactive caching can achieve the future vision of VR over wireless.

405 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the various sources of end-to-end delay of current wireless networks by taking 4G LTE as an example and propose and evaluate several techniques to reduce the end-to-end latency from the perspectives of error control coding, signal processing, and radio resource management.
Abstract: Fifth-generation cellular mobile networks are expected to support mission critical URLLC services in addition to enhanced mobile broadband applications. This article first introduces three emerging mission critical applications of URLLC and identifies their requirements on end-to-end latency and reliability. We then investigate the various sources of end-to-end delay of current wireless networks by taking 4G LTE as an example. Then we propose and evaluate several techniques to reduce the end-to-end latency from the perspectives of error control coding, signal processing, and radio resource management. We also briefly discuss other network design approaches with the potential for further latency reduction.
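To make the notion of "sources of end-to-end delay" concrete, here is a toy latency budget in Python; the component names mirror the categories discussed above (radio resource management, signal processing, error control), but every numeric value is an illustrative placeholder rather than a figure from the article.

```python
# Illustrative end-to-end latency budget for an LTE-like uplink path.
# Every number here is a placeholder for illustration, not a measurement.
delay_budget_ms = {
    "scheduling_request_and_grant": 10.0,  # radio resource management
    "transmission_and_processing": 4.0,    # signal processing at UE/eNB
    "harq_retransmissions": 8.0,           # error control (one retransmission)
    "core_and_backhaul": 5.0,
}

total = sum(delay_budget_ms.values())
print(f"Illustrative end-to-end delay: {total:.1f} ms")
for name, value in delay_budget_ms.items():
    print(f"  {name}: {value:.1f} ms ({100 * value / total:.0f}%)")
```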

232 citations


Journal ArticleDOI
TL;DR: This work considers the incorporation of a globally centralized software-defined network (SDN) and edge computing (EC) in IIoT and demonstrates that the proposed scheme outperforms the related methods in terms of average time delay, goodput, throughput, PDD, and download time.
Abstract: In recent years, the smart factory in the context of Industry 4.0 and the industrial Internet of Things (IIoT) has become a hot topic for both academia and industry. In IIoT systems, there is an increasing requirement to exchange data with different delay flows among different smart devices. However, there are few studies on this topic. To overcome the limitations of traditional methods and address this problem, we consider the incorporation of a globally centralized software-defined network (SDN) and edge computing (EC) in IIoT. We propose an adaptive transmission architecture with SDN and EC for IIoT. According to their latency constraints, data streams are divided into two groups: 1) ordinary and 2) emergent streams. In the low-deadline situation, a coarse-grained transmission path algorithm is provided by finding all paths that meet the time constraints in a hierarchical Internet of Things (IoT). Then, by employing the path difference degree (PDD), an optimum routing path is selected considering the aggregation of time deadline, traffic load balance, and energy consumption. In the high-deadline situation, if the coarse-grained strategy cannot handle the situation, a fine-grained scheme is adopted to establish an effective transmission path using an adaptive power method to achieve low latency. Finally, the performance of the proposed strategy is evaluated by simulation. The results demonstrate that the proposed scheme outperforms the related methods in terms of average time delay, goodput, throughput, PDD, and download time. Thus, the proposed method provides a better solution for IIoT data transmission.

204 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed the use of temporal convolution, in the form of time-delay neural network (TDNN) layers, along with unidirectional LSTM layers to limit the latency to 200 ms.
Abstract: Bidirectional long short-term memory (BLSTM) acoustic models provide a significant word error rate reduction compared to their unidirectional counterpart, as they model both the past and future temporal contexts. However, it is nontrivial to deploy bidirectional acoustic models for online speech recognition due to an increase in latency. In this letter, we propose the use of temporal convolution, in the form of time-delay neural network (TDNN) layers, along with unidirectional LSTM layers to limit the latency to 200 ms. This architecture has been shown to outperform the state-of-the-art low frame rate (LFR) BLSTM models. We further improve these LFR BLSTM acoustic models by operating them at higher frame rates at lower layers and show that the proposed model performs similarly to these mixed frame rate BLSTMs. We present results on the Switchboard 300 h LVCSR task and on the AMI LVCSR task in its three microphone conditions.
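A quick sanity check on where a latency cap like 200 ms comes from: with unidirectional LSTMs, the only algorithmic lookahead is the future context consumed by the TDNN layers, so latency is roughly the total right context multiplied by the frame shift. The frame shift and per-layer contexts below are illustrative placeholders, not the paper's exact configuration.

```python
# Lookahead-induced latency for a TDNN-LSTM stack with unidirectional LSTMs:
# only the TDNN layers' right context delays the output. Numbers are
# illustrative placeholders, not the exact configuration from the paper.
frame_shift_ms = 30                       # LFR-style output frame shift
right_context_per_layer = [3, 3, 0]       # future frames seen by each TDNN layer
latency_ms = sum(right_context_per_layer) * frame_shift_ms
print(latency_ms, "ms of algorithmic lookahead")   # 180 ms, under a 200 ms cap
```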

181 citations


Proceedings ArticleDOI
07 Aug 2018
TL;DR: Homa as discussed by the authors uses in-network priority queues to ensure low latency for short messages; priority allocation is managed dynamically by each receiver and integrated with a receiver-driven flow control mechanism.
Abstract: Homa is a new transport protocol for datacenter networks. It provides exceptionally low latency, especially for workloads with a high volume of very short messages, and it also supports large messages and high network utilization. Homa uses in-network priority queues to ensure low latency for short messages; priority allocation is managed dynamically by each receiver and integrated with a receiver-driven flow control mechanism. Homa also uses controlled overcommitment of receiver downlinks to ensure efficient bandwidth utilization at high load. Our implementation of Homa delivers 99th percentile round-trip times less than 15 μs for short messages on a 10 Gbps network running at 80% load. These latencies are almost 100x lower than the best published measurements of an implementation. In simulations, Homa's latency is roughly equal to pFabric and significantly better than pHost, PIAS, and NDP for almost all message sizes and workloads. Homa can also sustain higher network loads than pFabric, pHost, or PIAS.
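Below is a toy, receiver-side sketch of the two mechanisms named above: messages with fewer remaining bytes are mapped to higher in-network priority levels, and the receiver paces senders by issuing grants. The packet size, number of priority levels, and grant rule are assumptions chosen for illustration, not Homa's actual policy.

```python
# Toy receiver-driven scheduler: the message with the fewest remaining bytes
# gets the highest priority level and the next grant. All constants and the
# grant rule are assumptions for this sketch, not Homa's actual algorithm.
PACKET_BYTES = 1500
NUM_PRIORITIES = 8

class Receiver:
    def __init__(self):
        self.inbound = {}  # message id -> remaining bytes

    def on_new_message(self, msg_id, length_bytes):
        self.inbound[msg_id] = length_bytes

    def on_packet(self, msg_id):
        self.inbound[msg_id] = max(0, self.inbound[msg_id] - PACKET_BYTES)
        if self.inbound[msg_id] == 0:
            del self.inbound[msg_id]

    def next_grant(self):
        """Grant the message with the fewest remaining bytes and map it to a
        priority level (0 = highest)."""
        if not self.inbound:
            return None
        msg_id, remaining = min(self.inbound.items(), key=lambda kv: kv[1])
        rank = sorted(self.inbound.values()).index(remaining)
        priority = min(rank, NUM_PRIORITIES - 1)
        return msg_id, PACKET_BYTES, priority

rx = Receiver()
rx.on_new_message("A", 3000)
rx.on_new_message("B", 60000)
print(rx.next_grant())  # ('A', 1500, 0): shortest message gets top priority
```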

159 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose to use coding to seamlessly distribute coded payload and redundancy data across multiple available communication interfaces, and formulate an optimization problem to find the payload allocation weights that maximize the reliability at specific target latency values.
Abstract: An important ingredient of the future 5G systems will be ultra-reliable low-latency communication (URLLC). A way to offer URLLC without intervention in the baseband/PHY layer design is to use interface diversity and integrate multiple communication interfaces, each interface based on a different technology. In this paper, we propose to use coding to seamlessly distribute coded payload and redundancy data across multiple available communication interfaces. We formulate an optimization problem to find the payload allocation weights that maximize the reliability at specific target latency values. In order to estimate the performance in terms of latency and reliability of such an integrated communication system, we propose an analysis framework that combines traditional reliability models with technology-specific latency probability distributions. Our model can account for failure correlation among interfaces/technologies. By considering different scenarios, we find that the optimized strategies can in some cases significantly outperform strategies based on $k$-out-of-$n$ erasure codes, where the latter do not account for the characteristics of the different interfaces. The model has been validated through simulation and is supported by experimental results.
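As a point of reference for the k-out-of-n baseline mentioned above, the sketch below computes the probability that at least k of n independent interfaces deliver their share within the latency target. The per-interface success probabilities are assumed illustrative values, not results from the paper.

```python
from itertools import combinations

def k_out_of_n_reliability(p_success, k):
    """Probability that at least k of the n interfaces deliver their share
    within the latency target, assuming independent interfaces."""
    n = len(p_success)
    total = 0.0
    for m in range(k, n + 1):
        for idx in combinations(range(n), m):
            prob = 1.0
            for i in range(n):
                prob *= p_success[i] if i in idx else (1.0 - p_success[i])
            total += prob
    return total

# Assumed per-interface probabilities of meeting a latency target, e.g. for
# LTE, Wi-Fi, and a wired link (illustrative values only).
p = [0.95, 0.90, 0.99]
print(k_out_of_n_reliability(p, k=2))  # payload recoverable from any 2 of 3
print(k_out_of_n_reliability(p, k=3))  # all 3 shares required
```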

135 citations


Posted Content
TL;DR: A broad perspective is provided on the fundamental tradeoffs in URLLC, as well as the principles used in building access protocols, and the importance of the proper statistical methodology for designing and assessing extremely high-reliability levels is touched on.
Abstract: The future connectivity landscape and, notably, the 5G wireless systems will feature Ultra-Reliable Low Latency Communication (URLLC). The coupling of high reliability and low latency requirements in URLLC use cases makes the wireless access design very challenging, in terms of both the protocol design and of the associated transmission techniques. This paper aims to provide a broad perspective on the fundamental tradeoffs in URLLC as well as the principles used in building access protocols. Two specific technologies are considered in the context of URLLC: massive MIMO and multi-connectivity, also termed interface diversity. The paper also touches upon the important question of the proper statistical methodology for designing and assessing extremely high reliability levels.

110 citations


Journal ArticleDOI
TL;DR: A dynamic resource allocation framework that consists of a fast heuristic-based incremental allocation mechanism that dynamically performs resource allocation and a reoptimization algorithm that periodically adjusts allocation to maintain a near-optimal MEC operational cost over time is proposed.
Abstract: Mobile edge-cloud (MEC) aims to support low latency mobile services by bringing remote cloud services nearer to mobile users. However, in order to deal with dynamic workloads, MEC is deployed in a large number of fixed-location micro-clouds, leading to resource wastage during stable/low workload periods. Limiting the number of micro-clouds improves resource utilization and saves operational costs, but it risks service performance degradation when nearby micro-clouds lack sufficient physical capacity during peak times. To efficiently support services with low latency requirements under varying workload conditions, we adopt the emerging network function virtualization (NFV)-enabled MEC, which offers new flexibility in hosting MEC services in any virtualized network node, e.g., access points, routers, etc. This flexibility overcomes the limitations imposed by fixed-location solutions, providing new freedom in terms of MEC service-hosting locations. In this paper, we address the questions of where and when to allocate resources, as well as how many resources to allocate, among NFV-enabled MECs, such that both the low latency requirements of mobile services and MEC cost efficiency are achieved. We propose a dynamic resource allocation framework that consists of a fast heuristic-based incremental allocation mechanism that dynamically performs resource allocation and a reoptimization algorithm that periodically adjusts allocation to maintain a near-optimal MEC operational cost over time. We show through extensive simulations that our flexible framework always manages to allocate sufficient resources in time to guarantee continuous satisfaction of applications' low latency requirements. At the same time, our proposal saves up to 33% of cost in comparison to existing fixed-location MEC solutions.

94 citations


Proceedings Article
01 Jan 2018
TL;DR: Salsify is a system for real-time Internet video transmission that achieves 3.9× lower delay and 2.7 dB higher visual quality on average when compared with five existing systems: FaceTime, Hangouts, Skype, and WebRTC with and without scalable video coding.
Abstract: We present Salsify, a system for real-time Internet video transmission that achieves 3.9× lower delay and 2.7 dB higher visual quality on average when compared with five existing systems: FaceTime, Hangouts, Skype, and WebRTC with and without scalable video coding. Salsify achieves these gains through a joint design of the video codec and transport protocol that features a tighter integration between these components. The design includes three major improvements. First, Salsify's transport protocol is video-aware and accounts for the fact that video encoders send data in bursts rather than "full throttle." Second, Salsify's video codec exposes its internal state to the application, allowing it to be saved and restored. Salsify uses this to explore two compression levels for every frame, sending the frame that best matches the network conditions after compression. Third, Salsify combines the video codec's and transport protocol's control loops so that both components run in lockstep, and frames are encoded when the network can accommodate them. This improves responsiveness on variable network paths.
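A toy rendering of the per-frame decision described above: compress each frame at two levels and transmit whichever best matches what the transport estimates the path can absorb. The encoder stub and all sizes below are assumptions for this sketch, not Salsify's actual codec or interface.

```python
# Toy version of the per-frame decision: compress each frame at two quality
# levels and send whichever best matches the transport's current estimate of
# the network. The encoder stub and numbers are assumptions for this sketch.
def encode(frame, quality):
    # Placeholder: higher quality -> larger compressed frame.
    return {"quality": quality, "size_bytes": int(len(frame) * quality)}

def pick_frame_to_send(frame, budget_bytes, lower_q=0.3, higher_q=0.6):
    lo = encode(frame, lower_q)
    hi = encode(frame, higher_q)
    if hi["size_bytes"] <= budget_bytes:
        return hi          # network can absorb the better-looking frame
    if lo["size_bytes"] <= budget_bytes:
        return lo          # fall back to the smaller frame
    return None            # skip this frame; the path cannot accommodate it

raw_frame = bytes(100_000)   # pretend camera frame
network_budget = 40_000      # bytes the transport estimates it may send now
print(pick_frame_to_send(raw_frame, network_budget))
```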

Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, the authors leverage machine learning tools and propose a novel solution for reliability and latency challenges in mmWave MIMO systems, where the base stations learn how to predict that a certain link will experience blockage in the next few time frames using their observations of adopted beamforming vectors.
Abstract: The sensitivity of millimeter wave (mmWave) signals to blockages is a fundamental challenge for mobile mmWave communication systems. The sudden blockage of the line-of-sight (LOS) link between the base station and the mobile user normally leads to disconnecting the communication session, which highly impacts the system reliability. Further, reconnecting the user to another LOS base station incurs high beam training overhead and a critical latency problem. In this paper, we leverage machine learning tools and propose a novel solution for these reliability and latency challenges in mmWave MIMO systems. In the developed solution, the base stations learn how to predict that a certain link will experience blockage in the next few time frames using their observations of adopted beamforming vectors. This allows the serving base station to proactively hand over the user to another base station with a highly probable LOS link. Simulation results show that the developed deep learning based strategy successfully predicts blockage/hand-off in close to 95% of the cases. This reduces the probability of communication session disconnection, which ensures high reliability and low latency in mobile mmWave systems.
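A minimal sketch of the supervised setup implied above: features are a short history of serving beam indices, and the label marks whether blockage occurs within the next few frames. The window lengths and the simple logistic-regression stand-in are assumptions; the paper trains a deep learning model on real beamforming observations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed setup: beam_log[t] is the serving beam index at frame t and
# blocked[t] marks frames where the LOS link was blocked. Window lengths and
# the simple classifier are stand-ins; the paper uses a deep model.
def make_dataset(beam_log, blocked, history=8, horizon=3):
    X, y = [], []
    for t in range(history, len(beam_log) - horizon):
        X.append(beam_log[t - history:t])
        y.append(int(any(blocked[t:t + horizon])))
    return np.array(X, dtype=float), np.array(y)

rng = np.random.default_rng(0)
beam_log = rng.integers(0, 64, size=2000)   # synthetic beam indices
blocked = rng.random(2000) < 0.05           # synthetic blockage events

X, y = make_dataset(beam_log, blocked)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted blockage probability:", model.predict_proba(X[:1])[0, 1])
```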

Posted Content
30 Dec 2018
TL;DR: In this paper, a federated edge learning (FEEL) framework is proposed, where edge-server and on-device learning are synchronized to train a model without violating user-data privacy.
Abstract: The popularity of mobile devices results in the availability of enormous data and computational resources at the network edge. To leverage the data and resources, a new machine learning paradigm, called edge learning, has emerged where learning algorithms are deployed at the edge for providing fast and intelligent services to mobile users. While computing speeds are advancing rapidly, communication latency is becoming the bottleneck of fast edge learning. To address this issue, this work focuses on designing a low latency multi-access scheme for edge learning. We consider a popular framework, federated edge learning (FEEL), where edge-server and on-device learning are synchronized to train a model without violating user-data privacy. It is proposed that model updates simultaneously transmitted by devices over broadband channels should be analog aggregated "over-the-air" by exploiting the superposition property of a multi-access channel. Thereby, "interference" is harnessed to provide fast implementation of the model aggregation. This results in a dramatic latency reduction compared with traditional orthogonal access (i.e., OFDMA). In this work, the performance of FEEL is characterized for a single-cell random network. First, due to the power alignment between devices required for aggregation, a fundamental tradeoff is shown to exist between update reliability and the expected update-truncation ratio. This motivates the design of an opportunistic scheduling scheme for FEEL that selects devices within a distance threshold. This scheme is shown, using real datasets, to yield satisfactory learning performance in the presence of high mobility. Second, the multi-access latency of both the proposed analog aggregation and the OFDMA scheme is analyzed. Their ratio, which quantifies the latency reduction of the former, is proved to scale almost linearly with the device population.
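A small NumPy sketch of the analog "over-the-air" aggregation idea: devices invert their channels so that simultaneously transmitted updates superimpose into (approximately) their sum at the edge server, and devices below a channel-quality threshold are truncated, which is the reliability/truncation tradeoff noted above. The threshold, power, and noise values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
num_devices, dim = 20, 1000
updates = rng.normal(size=(num_devices, dim))     # local model updates
h = rng.rayleigh(scale=1.0, size=num_devices)     # channel gain magnitudes

# Power alignment by channel inversion; devices with very weak channels are
# truncated (excluded). Threshold and noise level are illustrative assumptions.
threshold, rho, noise_std = 0.3, 1.0, 0.05
active = h > threshold
tx = (np.sqrt(rho) / h[active])[:, None] * updates[active]

# The multi-access channel superimposes all simultaneous transmissions.
received = (h[active][:, None] * tx).sum(axis=0) + noise_std * rng.normal(size=dim)
aggregate = received / (np.sqrt(rho) * active.sum())  # estimate of the average update

true_average = updates[active].mean(axis=0)
print("aggregation error:", np.linalg.norm(aggregate - true_average) / np.sqrt(dim))
```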

Journal ArticleDOI
TL;DR: A new clustering-based reliable low-latency multipath routing (CRLLR) scheme is proposed by employing the Ant Colony Optimization (ACO) technique; it outperforms AQRV and T-AOMDV in terms of overall latency and reliability at the expense of slightly higher energy consumption.

Proceedings ArticleDOI
15 Oct 2018
TL;DR: Jaguar, a mobile Augmented Reality system that features accurate, low-latency, and large-scale object recognition along with flexible, robust, and context-aware tracking, is presented; it seamlessly integrates marker-less object tracking offered by recently released AR development tools.
Abstract: In this paper, we present the design, implementation and evaluation of Jaguar, a mobile Augmented Reality (AR) system that features accurate, low-latency, and large-scale object recognition and flexible, robust, and context-aware tracking. Jaguar pushes the limit of mobile AR's end-to-end latency by leveraging hardware acceleration with GPUs on edge cloud. Another distinctive aspect of Jaguar is that it seamlessly integrates marker-less object tracking offered by the recently released AR development tools (e.g., ARCore and ARKit) into its design. Indeed, some approaches used in Jaguar have been studied before in a standalone manner, e.g., it is known that cloud offloading can significantly decrease the computational latency of AR. However, the question of whether the combination of marker-less tracking, cloud offloading and GPU acceleration would satisfy the desired end-to-end latency of mobile AR (i.e., the interval of camera frames) has not been eloquently addressed yet. We demonstrate via a prototype implementation of our proposed holistic solution that Jaguar reduces the end-to-end latency to ~33 ms. It also achieves accurate six degrees of freedom tracking and 97% recognition accuracy for a dataset with 10,000 images.
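For context, the ~33 ms target is simply the camera frame interval at 30 fps; the budget split below is purely illustrative and not a measurement from Jaguar.

```python
# The ~33 ms end-to-end target equals the camera frame interval at 30 fps.
frame_interval_ms = 1000 / 30
print(f"frame interval: {frame_interval_ms:.1f} ms")

# Illustrative split of that budget for an edge-offloaded AR pipeline; the
# component values are placeholders, not measurements from the paper.
budget_ms = {"uplink frame transfer": 10, "GPU recognition on edge": 15, "result return + render": 8}
print("budgeted total:", sum(budget_ms.values()), "ms")
```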

Journal ArticleDOI
TL;DR: Simulation results show that the proposed methods can save 40%–70% bandwidth compared with the conventional method that is not aware of burstiness, while guaranteeing the delay and reliability requirements.
Abstract: The Tactile Internet that will enable humans to remotely control objects in real time by tactile sense has recently drawn significant attention from both academic and industrial communities. Ensuring ultra-reliable and low-latency communications with limited bandwidth is crucial for Tactile Internet. Recent studies found that the packet arrival processes in Tactile Internet are very bursty. This observation enables us to design a spectrally efficient resource management protocol to meet the stringent delay and reliability requirements while minimizing the bandwidth usage. In this paper, both model-based and data-driven unsupervised learning methods are applied in classifying the packet arrival process of each user into high or low traffic states, so that we can design efficient bandwidth reservation schemes accordingly. However, when the traffic-state classification is inaccurate, it is very challenging to satisfy the ultra-high reliability requirement. To tackle this problem, we formulate an optimization problem to minimize the reserved bandwidth subject to the delay and reliability requirements by taking into account the classification errors. Simulation results show that the proposed methods can save 40%-70% bandwidth compared with the conventional method that is not aware of burstiness, while guaranteeing the delay and reliability requirements. Our results are further validated by the practical packet arrival processes acquired from experiments using a real tactile hardware device.
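A minimal sketch of the two-state idea described above: cluster per-window packet counts into low and high traffic states with a tiny 1-D two-means loop and reserve bandwidth per state. Both the clustering and the reservation rule are stand-ins chosen for brevity, not the paper's model-based or data-driven methods.

```python
import numpy as np

def two_state_labels(packet_counts, iters=50):
    """Tiny 1-D 2-means: split per-window packet counts into low/high states."""
    lo, hi = packet_counts.min(), packet_counts.max()
    for _ in range(iters):
        assign = np.abs(packet_counts - lo) > np.abs(packet_counts - hi)  # True = high
        lo, hi = packet_counts[~assign].mean(), packet_counts[assign].mean()
    return assign

rng = np.random.default_rng(2)
# Synthetic bursty arrivals: mostly quiet windows with occasional bursts.
counts = np.where(rng.random(500) < 0.2, rng.poisson(40, 500), rng.poisson(2, 500))
state = two_state_labels(counts)

# Illustrative reservation rule: provision for the high state only while in it.
reserved = np.where(state, counts[state].max(), counts[~state].max())
print("fraction of windows in high state:", state.mean())
print("average reserved units vs. always-peak:", reserved.mean(), counts.max())
```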

Proceedings ArticleDOI
01 Jan 2018
TL;DR: This demo shows an implementation of a VR game with the capability to move game servers across the world without service interruption, illustrating why the Mobile Edge Cloud will play a key role in 5G networking.
Abstract: The future 5G mobile communication network aims to provide many more use cases, not only for people but also for connecting machines. Some of these applications, like VR/AR and automation, call for low latencies, which cannot be achieved by aggregated data centers. Also, for VR/AR, offloading of computation will be a key element to bring new experiences to mobile devices. In order to fulfill the requirements of low latency, computation offloading, and mobility, the Mobile Edge Cloud will play a key role in 5G networking. This demo shows an implementation of a VR game with the capability to move game servers across the world without service interruption.

Proceedings ArticleDOI
21 Mar 2018
TL;DR: A novel control framework for stochastic optimization is proposed based on the Lyapunov drift-plus-penalty method that enables the system to minimize power, maintain slice isolation, and provide reliable and low latency end-to-end communication for RLL slices.
Abstract: Network slicing is an emerging technique for providing resources to diverse wireless services with heterogeneous quality-of-service needs. However, beyond satisfying end-to-end requirements of network users, network slicing needs to also provide isolation between slices so as to prevent one slice's faults and congestion from affecting other slices. In this paper, the problem of network slicing is studied in the context of a wireless system having a time-varying number of users that require two types of slices: reliable low latency (RLL) and self-managed (capacity limited) slices. To address this problem, a novel control framework for stochastic optimization is proposed based on the Lyapunov drift-plus-penalty method. This new framework enables the system to minimize power, maintain slice isolation, and provide reliable and low latency end-to-end communication for RLL slices. Simulation results show that the proposed approach can maintain the system's reliability while providing effective slice isolation in the event of sudden changes in the network environment.
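For reference, the generic Lyapunov drift-plus-penalty step that such a framework builds on, in its textbook form; the paper's exact queues, penalties, and constraints differ, so this is only an orientation aid.

```latex
% Generic drift-plus-penalty step (textbook form, not the paper's exact
% formulation). Q_i(t) are queue backlogs, p(t) the power cost, and V >= 0
% trades power against backlog (and hence latency).
L(\mathbf{Q}(t)) = \tfrac{1}{2}\sum_i Q_i(t)^2,
\qquad
\Delta(t) \triangleq \mathbb{E}\big[L(\mathbf{Q}(t+1)) - L(\mathbf{Q}(t)) \,\big|\, \mathbf{Q}(t)\big].

\text{Each slot, choose the control action that minimizes }\;
\Delta(t) + V\,\mathbb{E}\big[p(t) \,\big|\, \mathbf{Q}(t)\big].
```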

Journal ArticleDOI
TL;DR: In this paper, the authors investigated two strategies to reduce the communication delay in future wireless networks: traffic dispersion and network densification, and a hybrid scheme that combines these two strategies is also considered.
Abstract: Low latency is critical for many applications in wireless communications, e.g., vehicle-to-vehicle, multimedia, and industrial control networks. Meanwhile, with its capability of providing multi-gigabit-per-second rates, millimeter-wave (mm-wave) communication has recently attracted substantial research interest. This paper investigates two strategies to reduce the communication delay in future wireless networks: traffic dispersion and network densification. A hybrid scheme that combines these two strategies is also considered. The probabilistic delay and effective capacity are used to evaluate performance. For probabilistic delay, the violation probability of delay, i.e., the probability that the delay exceeds a given tolerance level, is characterized in terms of upper bounds, which are derived by applying stochastic network calculus theory. In addition, to characterize the maximum affordable arrival traffic for mm-wave systems, the effective capacity, i.e., the service capability with a given quality-of-service requirement, is studied. The derived bounds on the probabilistic delay and effective capacity are validated through simulations. These numerical results show that, for a given sum power budget, traffic dispersion, network densification, and the hybrid scheme exhibit different potentials to reduce the end-to-end communication delay. For instance, traffic dispersion outperforms network densification when a high sum power budget and arrival rate are given, while it could be the worst option otherwise. Furthermore, it is revealed that increasing the number of independent paths and/or relay density is always beneficial, while the performance gain depends jointly on the arrival rate and sum power. Therefore, a proper transmission scheme should be selected to optimize the delay performance, according to the given conditions on arrival traffic and system service capability.
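For reference, the standard definition of effective capacity used above, together with the usual exponential approximation of the delay-violation probability; these are the textbook forms, not necessarily the exact bounds derived in the paper.

```latex
% Effective capacity, with S(t) the cumulative service in bits by time t and
% theta > 0 the QoS exponent; the delay approximation below is the usual
% textbook form under a constant arrival rate mu = E_C(theta).
E_C(\theta) = -\lim_{t \to \infty} \frac{1}{\theta t}
              \log \mathbb{E}\!\left[e^{-\theta S(t)}\right],
\qquad
\Pr\{D > d_{\max}\} \approx \gamma(\theta)\, e^{-\theta\, \mu\, d_{\max}}.
```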

Proceedings ArticleDOI
20 May 2018
TL;DR: The fundamental necessary constraints of anonymous communication (AC) protocols offer a guideline not only for improving existing AC systems but also for designing novel AC protocols with non-traditional bandwidth and latency overhead choices.
Abstract: This work investigates the fundamental constraints of anonymous communication (AC) protocols. We analyze the relationship between bandwidth overhead, latency overhead, and sender anonymity or recipient anonymity against the global passive (network-level) adversary. We confirm the trilemma that an AC protocol can only achieve two out of the following three properties: strong anonymity (i.e., anonymity up to a negligible chance), low bandwidth overhead, and low latency overhead. We further study anonymity against a stronger global passive adversary that can additionally passively compromise some of the AC protocol nodes. For a given number of compromised nodes, we derive necessary constraints between bandwidth and latency overhead whose violation make it impossible for an AC protocol to achieve strong anonymity. We analyze prominent AC protocols from the literature and depict to which extent those satisfy our necessary constraints. Our fundamental necessary constraints offer a guideline not only for improving existing AC systems but also for designing novel AC protocols with non-traditional bandwidth and latency overhead choices.

Journal ArticleDOI
TL;DR: A validation test-bed and experimental results over multiple topologies are presented to demonstrate the scalability and performance improvements achieved by the proposed dynamic control plane management procedures when the controller CPU, or the availability or throughput of in-band control channels, becomes a bottleneck.
Abstract: As SDN migrates to wide area networks and 5G core networks, a scalable, highly reliable, low latency distributed control plane becomes a key factor that differentiates operator solutions for network control and management. In order to meet the high reliability and low latency requirements under a time-varying volume of control traffic, the distributed control plane, consisting of multiple controllers and a combination of out-of-band and in-band control channels, needs to be managed dynamically. To this effect, we propose a novel programmable distributed control plane architecture with a dynamically managed in-band control network, where in-band mode switches communicate with their controllers over a virtual overlay to the data plane with dynamic topology. We dynamically manage the number of controllers, the switches and control flows assigned to each controller, and the traffic over control channels, achieving both controller and control traffic load balancing. We introduce the "control flow table" (rules embedded in the flow table of a switch to manage in-band control flows) in order to implement the proposed distributed dynamic control plane. We propose methods for off-loading congested controllers and congested in-band control channels using control flow tables. A validation test-bed and experimental results over multiple topologies are presented to demonstrate the scalability and performance improvements achieved by the proposed dynamic control plane management procedures when the controller CPU, or the availability or throughput of in-band control channels, becomes a bottleneck.

Journal ArticleDOI
TL;DR: In this paper, the authors present a global approximation of flow wave travel time to assess the utility of existing and future low-latency/near-real-time satellite products, with an emphasis on the forthcoming SWOT satellite mission.
Abstract: Earth-orbiting satellites provide valuable observations of upstream river conditions worldwide. These observations can be used in real-time applications like early flood warning systems and reservoir operations, provided they are made available to users with sufficient lead time. Yet the temporal requirements for access to satellite-based river data remain uncharacterized for time-sensitive applications. Here we present a global approximation of flow wave travel time to assess the utility of existing and future low-latency/near-real-time satellite products, with an emphasis on the forthcoming SWOT satellite mission. We apply a kinematic wave model to a global hydrography data set and find that global flow waves traveling at their maximum speed take a median travel time of 6, 4, and 3 days to reach their basin terminus, the next downstream city, and the next downstream dam, respectively. Our findings suggest that a recently proposed ≤2-day data latency for a low-latency SWOT product is potentially useful for real-time river applications. Plain Language Summary: Satellites can provide upstream conditions for early flood warning systems, reservoir operations, and other river management applications. This information is most useful for time-sensitive applications if it is made available before an observed upstream flood reaches a downstream point of interest, like a basin outlet, city, or dam. Here we characterize the time it takes floods to travel down Earth's rivers in an effort to assess the time required for satellite data to be downloaded, processed, and made accessible to users. We find that making satellite data available within a recently proposed ≤2-day time period will make the data potentially useful for flood mitigation and other water management applications.
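A back-of-the-envelope version of the travel-time calculation: for a kinematic wave in a wide channel, wave celerity is roughly 5/3 of the flow velocity (a textbook Manning-friction result), so travel time is reach length divided by celerity. The length and velocity below are illustrative, not values from the study's global hydrography data.

```python
# Back-of-the-envelope kinematic-wave travel time. The 5/3 factor is the
# textbook wide-rectangular-channel result; the length and velocity values
# are illustrative, not taken from the study's data set.
def travel_time_days(reach_length_km, flow_velocity_m_s):
    celerity = (5.0 / 3.0) * flow_velocity_m_s        # kinematic wave celerity, m/s
    return (reach_length_km * 1000.0) / celerity / 86400.0

print(f"{travel_time_days(reach_length_km=800, flow_velocity_m_s=1.5):.1f} days")
```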

Journal ArticleDOI
25 May 2018
TL;DR: An overview of 5G requirements as specified by 3GPP SA1 is presented; basic requirements that are new for 5G are discussed, and 5G performance requirements are provided.
Abstract: This paper presents an overview of 5G requirements as specified by 3GPP SA1. The main drivers for 5G were the requirement to provide more capacity and higher data rates and the requirement to support different ‘vertical’ sectors with ultra-reliable and low latency communication. The paper discusses basic requirements that are new for 5G and provides 5G performance requirements. The paper also discusses a number of vertical sectors that have influenced the 5G requirements work (V2X, mission critical, railway communication) and gives an overview of developments in 3GPP SA1 that will likely influence 5G specifications in the future.

Proceedings ArticleDOI
18 Mar 2018
TL;DR: It is argued that measuring and controlling latency based on average values taken at a few time intervals is not enough to assure the required timeliness behavior, and that latency jitter needs to be considered when designing experiences for Virtual Reality.
Abstract: Low latency is a fundamental requirement for Virtual Reality (VR) systems to reduce the potential risks of cybersickness and to increase effectiveness, efficiency, and user experience. In contrast to the effects of uniform latency degradation, the influence of latency jitter on user experience in VR is not well researched, although today's consumer VR systems are vulnerable in this respect. In this work we report on the impact of latency jitter on cybersickness in HMD-based VR environments. Test subjects were given a search task in Virtual Reality, provoking both head rotation and translation. One group experienced artificially added latency jitter in the tracking data of their head-mounted display. The introduced jitter pattern was a replication of real-world latency behavior extracted and analyzed from an existing example VR system. The effects of the introduced latency jitter were measured based on self-reports using the simulator sickness questionnaire (SSQ) and by taking physiological measurements. We found a significant increase in self-reported simulator sickness. We therefore argue that measuring and controlling latency based on average values taken at a few time intervals is not enough to assure the required timeliness behavior, and that latency jitter needs to be considered when designing experiences for Virtual Reality.

Proceedings ArticleDOI
26 Jun 2018
TL;DR: This paper classifies V2X use-cases and their requirements in order to identify cellular network technologies able to support them, and a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
Abstract: Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra Reliable Low Latency Communications (URLLC) applications with cellular networks, especially when safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use-cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail the future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are finally detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing this, a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
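A first-order view of why the random access procedure becomes a bottleneck: if N devices contend in the same opportunity and each picks one of M preambles uniformly at random, the chance that a tagged device's preamble collides grows quickly with N. This idealized closed form is a standard model, not the paper's simulation; the 54 contention preambles used below is a typical LTE configuration assumed here.

```python
# Probability that a tagged device's randomly chosen preamble is also picked
# by at least one of the other contenders (idealized, single RA opportunity;
# 54 contention-based preambles is a typical LTE configuration, assumed here).
def collision_probability(num_devices, num_preambles=54):
    return 1.0 - (1.0 - 1.0 / num_preambles) ** (num_devices - 1)

for n in (10, 50, 200):
    print(n, f"{collision_probability(n):.2%}")
```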

Journal ArticleDOI
TL;DR: A novel cellular core network architecture, SoftBox, is proposed, combining software-defined networking and network function virtualization to achieve greater flexibility, efficiency, and scalability compared to today's cellular core.
Abstract: We propose a novel cellular core network architecture, SoftBox, combining software-defined networking and network function virtualization to achieve greater flexibility, efficiency, and scalability compared to today's cellular core. Aligned with 5G use cases, SoftBox enables the creation of customized, low latency, and signaling-efficient services on a per user equipment (UE) basis. SoftBox consolidates network policies needed for processing each UE's data and signaling traffic into a light-weight, in-network, and per-UE agent. We design a number of mobility-aware techniques to further optimize: 1) the resource usage of agents; 2) the forwarding rules and updates needed for steering a UE's traffic through its agent; 3) the migration costs of agents needed to ensure their proximity to mobile UEs; and 4) the complexity of distributing the LTE mobility function on agents. Extensive evaluations demonstrate the scalability, performance, and flexibility of the SoftBox design. For example, basic SoftBox has 86%, 51%, and 87% lower signaling overheads, data plane delay, and CPU core usage, respectively, than two open source EPC systems. Moreover, our optimizations efficiently cut different types of data and control plane loads in the basic SoftBox by 51%–98%.

Journal ArticleDOI
TL;DR: As 5G technology is expected to offer super-broadband mobile services with new features such as low latency for critical applications and low-power operation for Internet of Things (IoT) applications, it will use wireless interfaces for small or spot cells with a new RF resource, the high band.
Abstract: As fifth-generation (5G) technology is expected to offer super-broadband mobile services with new features such as low latency for critical applications and low-power operation for Internet of Things (IoT) applications, it will use wireless interfaces for small or spot cells with a new RF resource, the high band (frequencies above 6 GHz), in addition to the low band (below 6 GHz) on macro and small cells coexisting with conventional mobile technologies [1]-[4]. Another feature of 5G wireless interfaces is more precise control of radio waves in space division through the use of multi-antenna technologies, such as massive multiple-input/multiple-output (MIMO) or beamforming [5].

Journal ArticleDOI
TL;DR: The Hipoλaos switch provides sub-μs latency and high throughput by utilizing distributed control and optical feed-forward buffering in a modified Spanke architecture; the architecture's scalability up to 1024 × 1024 designs is discussed, along with a power consumption analysis and a roadmap toward an integrated version of the switch.
Abstract: The emergence of disaggregation in data center (DC) architectures as a way to increase resource utilization introduces significant challenges to the DC switching infrastructure, which has to ensure high bandwidth and low-latency communication along with high-radix connectivity. This paper examines the network requirements in disaggregated systems and reviews the credentials of state-of-the-art high-radix optical switch architectures. We also demonstrate a novel optical packet switch design, called Hipoλaos, that satisfies these requirements by combining N-port broadcast-and-select and N × N arrayed waveguide grating router-based forwarding schemes in N²-port connectivity configurations. The Hipoλaos switch provides sub-μs latency and high throughput performance by utilizing distributed control and optical feed-forward buffering in a modified Spanke architecture. Feasibility of a 256-port Hipoλaos layout with a four-packet buffering stage is experimentally demonstrated at 10 Gb/s, revealing error-free performance with a mean power penalty of 2.19 dB. Simulation analysis is carried out for a 256-node system and eight traffic profiles; this analysis reveals a low latency value of only 605 ns, with throughput reaching 85% even when employing only two-packet buffers. Finally, the architecture's scalability up to 1024 × 1024 designs is discussed, along with a power consumption analysis and a roadmap toward an integrated version of the switch.

Journal ArticleDOI
TL;DR: The flow of data traffic has vastly increased because of the implementation of the Internet of Things, and there is a tradeoff between the level of security and the communication overhead.
Abstract: The flow of data traffic has vastly increased because of the implementation of the Internet of Things (IoT), and security is a top need for Internet communication. The encryption-decryption process ensures that data access is restricted to legitimate users. But there is a tradeoff between the level of security and the communication overhead.