
Showing papers by "Bell Labs" published in 2018


Journal ArticleDOI
24 Sep 2018-Nature
TL;DR: Monolithically integrated lithium niobate electro-optic modulators that feature a CMOS-compatible drive voltage, support data rates up to 210 gigabits per second and show an on-chip optical loss of less than 0.5 decibels are demonstrated.
Abstract: Electro-optic modulators translate high-speed electronic signals into the optical domain and are critical components in modern telecommunication networks1,2 and microwave-photonic systems3,4. They are also expected to be building blocks for emerging applications such as quantum photonics5,6 and non-reciprocal optics7,8. All of these applications require chip-scale electro-optic modulators that operate at voltages compatible with complementary metal–oxide–semiconductor (CMOS) technology, have ultra-high electro-optic bandwidths and feature very low optical losses. Integrated modulator platforms based on materials such as silicon, indium phosphide or polymers have not yet been able to meet these requirements simultaneously because of the intrinsic limitations of the materials used. On the other hand, lithium niobate electro-optic modulators, the workhorse of the optoelectronic industry for decades9, have been challenging to integrate on-chip because of difficulties in microstructuring lithium niobate. The current generation of lithium niobate modulators are bulky, expensive, limited in bandwidth and require high drive voltages, and thus are unable to reach the full potential of the material. Here we overcome these limitations and demonstrate monolithically integrated lithium niobate electro-optic modulators that feature a CMOS-compatible drive voltage, support data rates up to 210 gigabits per second and show an on-chip optical loss of less than 0.5 decibels. We achieve this by engineering the microwave and photonic circuits to achieve high electro-optical efficiencies, ultra-low optical losses and group-velocity matching simultaneously. Our scalable modulator devices could provide cost-effective, low-power and ultra-high-speed solutions for next-generation optical communication networks and microwave photonic systems. Furthermore, our approach could lead to large-scale ultra-low-loss photonic circuits that are reconfigurable on a picosecond timescale, enabling a wide range of quantum and classical applications5,10,11 including feed-forward photonic quantum computation. Chip-scale lithium niobate electro-optic modulators that rapidly convert electrical to optical signals and use CMOS-compatible voltages could prove useful in optical communication networks, microwave photonic systems and photonic computation.
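
To make the drive-voltage requirement concrete, below is a minimal numpy sketch of the textbook Mach-Zehnder modulator power transfer function. The half-wave voltage Vπ ≈ 1 V is an illustrative value (a modulator becomes CMOS-drivable when Vπ approaches logic levels), not a figure from the paper.

```python
import numpy as np

def mzm_power_transmission(v_drive, v_pi, bias=0.5):
    """Textbook Mach-Zehnder modulator power transfer function.

    T(V) = cos^2(pi/2 * (V/V_pi + bias)), with `bias` in units of V_pi
    (0.5 = quadrature point, where the response is most linear).
    """
    return np.cos(0.5 * np.pi * (v_drive / v_pi + bias)) ** 2

# Illustrative numbers only: a ~1 V half-wave voltage is what allows a
# modulator to be driven directly from CMOS logic without an RF amplifier.
v = np.linspace(-1.0, 1.0, 5)          # drive-voltage sweep [V]
print(mzm_power_transmission(v, v_pi=1.0))
```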

1,358 citations


Book
03 Jan 2018
TL;DR: This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area.
Abstract: Massive multiple-input multiple-output (MIMO) is one of the most promising technologies for the next generation of wireless communication networks because it has the potential to provide game-changing improvements in spectral efficiency (SE) and energy efficiency (EE). This monograph summarizes many years of research insights in a clear and self-contained way and provides the reader with the necessary knowledge and mathematical tools to carry out independent research in this area. Starting from a rigorous definition of Massive MIMO, the monograph covers the important aspects of channel estimation, SE, EE, hardware efficiency (HE), and various practical deployment considerations. From the beginning, a very general, yet tractable, canonical system model with spatial channel correlation is introduced. This model is used to realistically assess the SE and EE, and is later extended to also include the impact of hardware impairments. Owing to this rigorous modeling approach, a lot of classic "wisdom" about Massive MIMO, based on too simplistic system models, is shown to be questionable.
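
The monograph's canonical system model is built on spatially correlated Rayleigh fading, h ~ CN(0, R). The sketch below generates such channels with the common exponential correlation model and estimates a simple single-user ergodic SE; the correlation coefficient, SNR, and array size are illustrative choices, not values from the text.

```python
import numpy as np

def exp_correlation_matrix(m, r):
    """Exponential correlation model for a ULA: R[i, j] = r**|i - j|."""
    idx = np.arange(m)
    return r ** np.abs(idx[:, None] - idx[None, :])

def correlated_rayleigh(m, R, n_samples):
    """Draw h ~ CN(0, R) via a matrix square root of R."""
    L = np.linalg.cholesky(R + 1e-12 * np.eye(m))
    w = (np.random.randn(m, n_samples) + 1j * np.random.randn(m, n_samples)) / np.sqrt(2)
    return L @ w

M, snr = 64, 10.0                       # antennas and linear SNR (illustrative)
R = exp_correlation_matrix(M, 0.7)
h = correlated_rayleigh(M, R, 10000)
# Ergodic SE of single-user MRC with perfect CSI: E[log2(1 + snr * ||h||^2)]
se = np.mean(np.log2(1 + snr * np.sum(np.abs(h) ** 2, axis=0)))
print(f"ergodic SE ~ {se:.1f} bit/s/Hz")
```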

1,352 citations


Journal ArticleDOI
TL;DR: This paper outlines the diverse use cases and network requirements of network slicing; the pre-slicing era, including RAN sharing; and end-to-end orchestration and management encompassing the radio access, transport, and core networks.
Abstract: Network slicing has been identified as the backbone of the rapidly evolving 5G technology. However, as its consolidation and standardization progress, no literature comprehensively discusses its key principles, enablers, and research challenges. This paper elaborates on network slicing from an end-to-end perspective, detailing its historical heritage, principal concepts, enabling technologies and solutions, as well as the current standardization efforts. In particular, it overviews the diverse use cases and network requirements of network slicing; the pre-slicing era, including RAN sharing; and the end-to-end orchestration and management, encompassing the radio access, transport network and the core network. This paper also provides details of specific slicing solutions for each part of the 5G system. Finally, this paper identifies a number of open research challenges and provides recommendations toward potential solutions.

766 citations


Journal ArticleDOI
TL;DR: This paper builds, trains, and runs a complete communications system solely composed of NNs using unsynchronized off-the-shelf software-defined radios and open-source deep learning software libraries, and proposes a two-step learning procedure based on the idea of transfer learning that circumvents the challenges of training such a system over actual channels.
Abstract: End-to-end learning of communications systems is a fascinating novel concept that has so far only been validated by simulations for block-based transmissions. It allows learning of transmitter and receiver implementations as deep neural networks (NNs) that are optimized for an arbitrary differentiable end-to-end performance metric, e.g., block error rate (BLER). In this paper, we demonstrate that over-the-air transmissions are possible: We build, train, and run a complete communications system solely composed of NNs using unsynchronized off-the-shelf software-defined radios and open-source deep learning software libraries. We extend the existing ideas toward continuous data transmission, which eases their current restriction to short block lengths but also entails the issue of receiver synchronization. We overcome this problem by introducing a frame synchronization module based on another NN. A comparison of the BLER performance of the "learned" system with that of a practical baseline shows competitive performance within approximately 1 dB, even without extensive hyperparameter tuning. We identify several practical challenges of training such a system over actual channels, in particular, the missing channel gradient, and propose a two-step learning procedure based on the idea of transfer learning that circumvents this issue.
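
For readers unfamiliar with the underlying idea, here is a minimal, self-contained sketch of an end-to-end autoencoder over a simulated AWGN channel (the differentiable stand-in used for initial training). The over-the-air SDR loop, transfer-learning step, and frame-synchronization NN from the paper are out of scope here, and all layer sizes and the noise level are illustrative.

```python
import torch
import torch.nn as nn

M, n = 16, 4                      # 16 messages over 4 complex channel uses (2n reals)
tx = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, 2 * n))
rx = nn.Sequential(nn.Linear(2 * n, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam(list(tx.parameters()) + list(rx.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
sigma = 0.1                       # AWGN noise std (illustrative SNR)

for step in range(2000):
    msgs = torch.randint(0, M, (256,))
    x = tx(nn.functional.one_hot(msgs, M).float())
    x = x / x.norm(dim=1, keepdim=True) * n ** 0.5   # average power constraint
    y = x + sigma * torch.randn_like(x)              # differentiable AWGN channel
    loss = loss_fn(rx(y), msgs)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                                # quick BLER check
    msgs = torch.randint(0, M, (10000,))
    x = tx(nn.functional.one_hot(msgs, M).float())
    x = x / x.norm(dim=1, keepdim=True) * n ** 0.5
    y = x + sigma * torch.randn_like(x)
    print("BLER:", (rx(y).argmax(1) != msgs).float().mean().item())
```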

757 citations


Journal ArticleDOI
TL;DR: Focusing on the optical transport and switching layer, aspects of large-scale spatial multiplexing, massive opto-electronic arrays and holistic optics-electronics-DSP integration, as well as optical node architectures for switching and multiplexing of spatial and spectral superchannels are covered.
Abstract: Celebrating the 20th anniversary of Optics Express, this paper reviews the evolution of optical fiber communication systems, and through a look at the previous 20 years attempts to extrapolate fiber-optic technology needs and potential solution paths over the coming 20 years. Well aware that 20-year extrapolations are inherently associated with great uncertainties, we still hope that taking a significantly longer-term view than most texts in this field will provide the reader with a broader perspective and will encourage the much needed out-of-the-box thinking to solve the very significant technology scaling problems ahead of us. Focusing on the optical transport and switching layer, we cover aspects of large-scale spatial multiplexing, massive opto-electronic arrays and holistic optics-electronics-DSP integration, as well as optical node architectures for switching and multiplexing of spatial and spectral superchannels.

498 citations


Journal ArticleDOI
TL;DR: In an interactive VR gaming arcade case study, it is shown that a smart network design that leverages the use of mmWave communication, edge computing, and proactive caching can achieve the future vision of VR over wireless.
Abstract: VR is expected to be one of the killer applications in 5G networks. However, many technical bottlenecks and challenges need to be overcome to facilitate its wide adoption. In particular, VR requirements in terms of high throughput, low latency, and reliable communication call for innovative solutions and fundamental research cutting across several disciplines. In view of the above, this article discusses the challenges and enablers for ultra-reliable and low-latency VR. Furthermore, in an interactive VR gaming arcade case study, we show that a smart network design that leverages the use of mmWave communication, edge computing, and proactive caching can achieve the future vision of VR over wireless.

405 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that with multicell MMSE precoding/combining and a tiny amount of spatial channel correlation or large-scale fading variations over the array, the capacity increases without bound as the number of antennas increases, even under pilot contamination.
Abstract: The capacity of cellular networks can be improved by the unprecedented array gain and spatial multiplexing offered by Massive MIMO. Since its inception, the coherent interference caused by pilot contamination has been believed to create a finite capacity limit, as the number of antennas goes to infinity. In this paper, we prove that this is incorrect and an artifact from using simplistic channel models and suboptimal precoding/combining schemes. We show that with multicell MMSE precoding/combining and a tiny amount of spatial channel correlation or large-scale fading variations over the array, the capacity increases without bound as the number of antennas increases, even under pilot contamination. More precisely, the result holds when the channel covariance matrices of the contaminating users are asymptotically linearly independent, which is generally the case. If also the diagonals of the covariance matrices are linearly independent, it is sufficient to know these diagonals (and not the full covariance matrices) to achieve an unlimited asymptotic capacity.

358 citations


Journal ArticleDOI
TL;DR: In this article, an end-to-end deep learning-based optimization of optical fiber communication systems is proposed to achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold.
Abstract: In this paper, we implement an optical fiber communication system as an end-to-end deep neural network, including the complete chain of transmitter, channel model, and receiver. This approach enables the optimization of the transceiver in a single end-to-end process. We illustrate the benefits of this method by applying it to intensity modulation/direct detection (IM/DD) systems and show that we can achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold. We model all componentry of the transmitter and receiver, as well as the fiber channel, and apply deep learning to find transmitter and receiver configurations minimizing the symbol error rate. We propose and verify in simulations a training method that yields robust and flexible transceivers that allow—without reconfiguration—reliable transmission over a large range of link dispersions. The results from end-to-end deep learning are successfully verified for the first time in an experiment. In particular, we achieve information rates of 42 Gb/s below the HD-FEC threshold at distances beyond 40 km. We find that our results outperform conventional IM/DD solutions based on two- and four-level pulse amplitude modulation with feedforward equalization at the receiver. Our study is the first step toward end-to-end deep learning based optimization of optical fiber communication systems.
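
The conventional baseline that the learned transceiver is compared against is PAM with feedforward equalization at the receiver. As a point of reference, here is a small numpy sketch of an LMS-adapted feedforward equalizer recovering PAM4 over a toy dispersive channel; the channel taps, step size, and tap count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-3, -1, 1, 3], dtype=float)      # PAM4 alphabet
sym = rng.choice(levels, 20000)
chan = np.array([0.2, 1.0, 0.35])                   # toy dispersive channel (illustrative)
rx = np.convolve(sym, chan, mode="same") + 0.05 * rng.standard_normal(sym.size)

# LMS feedforward equalizer: w <- w + mu * e * x
n_taps, mu = 11, 1e-3
w = np.zeros(n_taps); w[n_taps // 2] = 1.0          # center-spike initialization
errs = 0
for k in range(n_taps, sym.size):
    x = rx[k - n_taps:k][::-1]                      # tapped delay line
    y = w @ x
    d = sym[k - 1 - n_taps // 2]                    # training symbol (decision delay)
    w += mu * (d - y) * x
    if k > 5000:                                    # count errors after convergence
        errs += levels[np.argmin(abs(levels - y))] != d
print("symbol error rate:", errs / (sym.size - 5001))
```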

274 citations


Posted Content
TL;DR: This article comprehensively surveys the types of consumer UAVs currently available off-the-shelf, the interference issues and potential solutions addressed by standardization bodies for serving aerial users with the existing terrestrial BSs, and the cyber-physical security of UAV-assisted cellular communications.
Abstract: The rapid growth of consumer Unmanned Aerial Vehicles (UAVs) is creating promising new business opportunities for cellular operators. On the one hand, UAVs can be connected to cellular networks as new types of user equipment, therefore generating significant revenues for the operators that can guarantee their stringent service requirements. On the other hand, UAVs offer the unprecedented opportunity to realize UAV-mounted flying base stations that can dynamically reposition themselves to boost coverage, spectral efficiency, and user quality of experience. Indeed, the standardization bodies are currently exploring possibilities for serving commercial UAVs with cellular networks. Industries are beginning to trial early prototypes of flying base stations or user equipments, while academia is in full swing researching mathematical and algorithmic solutions to address interesting new problems arising from flying nodes in cellular networks. In this article, we provide a comprehensive survey of all of these developments promoting smooth integration of UAVs into cellular networks. Specifically, we survey (i) the types of consumer UAVs currently available off-the-shelf, (ii) the interference issues and potential solutions addressed by standardization bodies for serving aerial users with the existing terrestrial base stations, (iii) the challenges and opportunities for assisting cellular communications with UAV-based flying relays and base stations, (iv) the ongoing prototyping and test bed activities, (v) the new regulations being developed to manage the commercial use of UAVs, and (vi) the cyber-physical security of UAV-assisted cellular communications.

243 citations


Proceedings Article
01 Jan 2018
TL;DR: This paper presents SAND, a new serverless computing system that provides lower latency, better resource efficiency, and more elasticity than existing serverless platforms by introducing two key techniques: 1) application-level sandboxing, and 2) a hierarchical message bus.
Abstract: Serverless computing has emerged as a new cloud computing paradigm, where an application consists of individual functions that can be separately managed and executed. However, existing serverless platforms normally isolate and execute functions in separate containers, and do not exploit the interactions among functions for performance. These practices lead to high startup delays for function executions and inefficient resource usage. This paper presents SAND, a new serverless computing system that provides lower latency, better resource efficiency and more elasticity than existing serverless platforms. To achieve these properties, SAND introduces two key techniques: 1) application-level sandboxing, and 2) a hierarchical message bus. We have implemented and deployed a complete SAND system. Our results show that SAND outperforms the state-of-the-art serverless platforms significantly. For example, in a commonly-used image processing application, SAND achieves a 43% speedup compared to Apache OpenWhisk.
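
The two techniques can be illustrated with a toy: run all functions of one application inside a single sandbox (here, one process hosting threads) and route chained invocations over a local message bus (a queue) instead of across containers. This is only a conceptual sketch, not SAND's actual implementation; the pipeline stages are hypothetical.

```python
import queue
import threading

# Toy illustration of application-level sandboxing: all functions of one
# application run inside a single "sandbox" (here: one process), and chained
# invocations travel over a local message bus (a queue) rather than through
# a remote, cross-container bus.
local_bus = queue.Queue()

def resize(event):                       # hypothetical image-pipeline stages
    return f"resized({event})"

def annotate(event):
    return f"annotated({event})"

PIPELINE = [resize, annotate]
done = threading.Event()

def worker():
    while True:
        stage, event = local_bus.get()
        result = PIPELINE[stage](event)
        if stage + 1 < len(PIPELINE):
            local_bus.put((stage + 1, result))  # local hop: no container start-up
        else:
            print("pipeline result:", result)
            done.set()

threading.Thread(target=worker, daemon=True).start()
local_bus.put((0, "cat.jpg"))
done.wait(timeout=5)
```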

234 citations


Journal ArticleDOI
08 Jan 2018
TL;DR: This paper studies the benefits of adopting deep learning algorithms for interpreting user activity and context as captured by multi-sensor wearable systems, evaluating four variations of deep neural networks based either on fully-connected Deep Neural Networks (DNNs) or Convolutional Neural Networks (CNNs).
Abstract: Wearables and mobile devices see the world through the lens of half a dozen low-power sensors, such as barometers, accelerometers, microphones and proximity detectors. But differences between sensors, ranging from sampling rates and discrete versus continuous data to the data type itself, make principled approaches to integrating these streams challenging. How, for example, is barometric pressure best combined with an audio sample to infer if a user is in a car, plane or bike? Critically for applications, how successfully sensor devices are able to maximize the information contained across these multi-modal sensor streams often dictates the fidelity at which they can track user behaviors and context changes. This paper studies the benefits of adopting deep learning algorithms for interpreting user activity and context as captured by multi-sensor systems. Specifically, we focus on four variations of deep neural networks that are based either on fully-connected Deep Neural Networks (DNNs) or Convolutional Neural Networks (CNNs). Two of these architectures follow conventional deep models by performing feature representation learning from a concatenation of sensor types. This classic approach is contrasted with a promising deep model variant characterized by modality-specific partitions of the architecture to maximize intra-modality learning. Our exploration represents the first time these architectures have been evaluated for multimodal deep learning under wearable data; for the convolutional layers within this architecture, it represents an entirely novel architecture. Experiments show these generic multimodal neural network models compete well with a rich variety of conventional hand-designed shallow methods (including feature extraction and classifier construction) and task-specific modeling pipelines, across a wide range of sensor types and inference tasks (four different datasets). Although the training and inference overhead of these multimodal deep approaches is in some cases appreciable, we also demonstrate that on-device mobile and wearable execution is feasible and not a barrier to adoption. This study is carefully constructed to focus on multimodal aspects of wearable data modeling for deep learning by providing a wide range of empirical observations, which we expect to have considerable value in the community. We summarize our observations into a series of practitioner rules-of-thumb and lessons learned that can guide the usage of multimodal deep learning for activity and context detection.
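
A minimal PyTorch sketch of the modality-specific variant described above: one small convolutional branch per sensor stream, concatenated and fused by shared fully-connected layers. The sensor choices (accelerometer and gyroscope), window length, and layer sizes are illustrative assumptions, not the paper's exact architectures.

```python
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    """Modality-specific branches (one per sensor) followed by shared
    fusion layers. Input sizes are illustrative: 3-axis accelerometer
    and gyroscope windows of 128 samples each."""
    def __init__(self, n_classes=6, win=128):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(4), nn.Flatten())
        self.accel, self.gyro = branch(), branch()
        feat = 32 * (win // 16) * 2          # per-branch features, two branches
        self.fusion = nn.Sequential(nn.Linear(feat, 128), nn.ReLU(),
                                    nn.Linear(128, n_classes))
    def forward(self, accel, gyro):
        z = torch.cat([self.accel(accel), self.gyro(gyro)], dim=1)
        return self.fusion(z)

net = MultimodalNet()
logits = net(torch.randn(8, 3, 128), torch.randn(8, 3, 128))
print(logits.shape)   # (8, 6)
```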

Journal ArticleDOI
TL;DR: In this article, the authors present an overview of different physical and medium access techniques to address the problem of a massive number of access attempts in mMTC and discuss the protocol performance of these solutions in a common evaluation framework.
Abstract: The fifth generation of cellular communication systems is foreseen to enable a multitude of new applications and use cases with very different requirements. A new 5G multi-service air interface needs to enhance broadband performance as well as provide new levels of reliability, latency, and supported number of users. In this paper, we focus on the massive Machine Type Communications (mMTC) service within a multi-service air interface. Specifically, we present an overview of different physical and medium access techniques to address the problem of a massive number of access attempts in mMTC and discuss the protocol performance of these solutions in a common evaluation framework.
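
As a baseline for the massive-access problem, the sketch below Monte Carlo simulates the canonical multi-channel slotted random-access setting: each device activates with a small probability and picks a preamble/channel at random, succeeding only if no other active device picked the same one. All numbers are illustrative; the analytic check is the standard (1 - p/n_ch)^(N-1) success probability.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dev, p, n_ch, n_slots = 1000, 0.005, 50, 2000    # illustrative mMTC numbers

# Each active device picks one of n_ch preambles/channels uniformly at random;
# a transmission succeeds only if it is alone on its channel (no collision).
active = rng.random((n_slots, n_dev)) < p
choice = rng.integers(0, n_ch, (n_slots, n_dev))
succ = tot = 0
for a, c in zip(active, choice):
    picks = c[a]
    vals, counts = np.unique(picks, return_counts=True)
    succ += np.sum(counts == 1)
    tot += picks.size
print("simulated success prob.:", succ / tot)
# Analytic approximation: a device succeeds if none of the other n_dev - 1
# devices is simultaneously active on its channel.
print("analytic approximation:", (1 - p / n_ch) ** (n_dev - 1))
```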

Journal ArticleDOI
31 May 2018-Nature
TL;DR: In this paper, the authors demonstrate a reprogrammable photonic chip as a versatile simulation platform for a range of quantum dynamic behaviour in different molecules, including H2CS, SO3, HNCO, HFHF, N4 and P4.
Abstract: Advances in control techniques for vibrational quantum states in molecules present new challenges for modelling such systems, which could be amenable to quantum simulation methods. Here, by exploiting a natural mapping between vibrations in molecules and photons in waveguides, we demonstrate a reprogrammable photonic chip as a versatile simulation platform for a range of quantum dynamic behaviour in different molecules. We begin by simulating the time evolution of vibrational excitations in the harmonic approximation for several four-atom molecules, including H2CS, SO3, HNCO, HFHF, N4 and P4. We then simulate coherent and dephased energy transport in the simplest model of the peptide bond in proteins, N-methylacetamide, and simulate thermal relaxation and the effect of anharmonicities in H2O. Finally, we use multi-photon statistics with a feedback control algorithm to iteratively identify quantum states that increase a particular dissociation pathway of NH3. These methods point to powerful new simulation tools for molecular quantum dynamics and the field of femtochemistry.
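
The mapping rests on the fact that, in the harmonic approximation, a single vibrational excitation evolves linearly under the mode Hamiltonian, c(t) = exp(-iHt) c(0), which is exactly the unitary a programmable waveguide array implements on single photons. A minimal two-mode sketch, with frequencies and coupling that are illustrative rather than molecular values:

```python
import numpy as np
from scipy.linalg import expm

# Single-excitation dynamics in the harmonic approximation: the amplitude
# vector over localized vibrational modes evolves as c(t) = exp(-i H t) c(0),
# the same unitary a photonic waveguide array applies to a single photon.
H = np.array([[1.00, 0.08],     # mode frequencies on the diagonal,
              [0.08, 1.05]])    # inter-mode coupling off-diagonal (illustrative)

c0 = np.array([1.0, 0.0], dtype=complex)   # excitation starts in mode 0
for t in (0.0, 10.0, 20.0, 40.0):
    c = expm(-1j * H * t) @ c0
    print(f"t={t:5.1f}  mode populations: {np.abs(c) ** 2}")
```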

Journal ArticleDOI
TL;DR: It is shown that graphene-based integrated photonics could enable ultrahigh spatial bandwidth density, low power consumption for board connectivity and connectivity between data centres, access networks and metropolitan, core, regional and long-haul optical communications.
Abstract: Graphene is an ideal material for optoelectronic applications. Its photonic properties give several advantages and complementarities over Si photonics. For example, graphene enables both electro-absorption and electro-refraction modulation with an electro-optical index change exceeding 10⁻³. It can be used for optical add–drop multiplexing with voltage control, eliminating the current dissipation used for the thermal detuning of microresonators, and for thermoelectric-based ultrafast optical detectors that generate a voltage without transimpedance amplifiers. Here, we present our vision for graphene-based integrated photonics. We review graphene-based transceivers and compare them with existing technologies. Strategies for improving power consumption, manufacturability and wafer-scale integration are addressed. We outline a roadmap of the technological requirements to meet the demands of the datacom and telecom markets. We show that graphene-based integrated photonics could enable ultrahigh spatial bandwidth density, low power consumption for board connectivity and connectivity between data centres, access networks and metropolitan, core, regional and long-haul optical communications.

Proceedings ArticleDOI
25 Jun 2018
TL;DR: This work extends the idea of end-to-end learning of communications systems through deep neural network (NN)-based autoencoders to orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) and shows that the proposed scheme can be realized with state-of-the-art deep learning software libraries as transmitter and receiver solely consist of differentiable layers required for gradient-based training.
Abstract: We extend the idea of end-to-end learning of communications systems through deep neural network (NN)-based autoencoders to orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP). Our implementation has the same benefits as a conventional OFDM system, namely single-tap equalization and robustness against sampling synchronization errors, which turned out to be one of the major challenges in previous single-carrier implementations. This enables reliable communication over multipath channels and makes the communication scheme suitable for commodity hardware with imprecise oscillators. We show that the proposed scheme can be realized with state-of-the-art deep learning software libraries as transmitter and receiver solely consist of differentiable layers required for gradient-based training. We compare the performance of the autoencoder-based system against that of a state-of-the-art OFDM baseline over frequency-selective fading channels. Finally, the impact of a non-linear amplifier is investigated and we show that the autoencoder inherently learns how to deal with such hardware impairments.
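
The classical mechanism the autoencoder inherits is worth spelling out: with a cyclic prefix at least as long as the channel memory, the linear channel becomes a circular convolution, so the FFT diagonalizes it and one complex tap per subcarrier suffices. A noise-free numpy sketch with illustrative channel taps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_fft, cp = 64, 16
h = np.array([1.0, 0.45 + 0.2j, 0.2])          # toy multipath channel (illustrative)

# QPSK on each subcarrier
bits = rng.integers(0, 2, (2, n_fft))
X = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

x = np.fft.ifft(X) * np.sqrt(n_fft)            # OFDM modulation
x_cp = np.concatenate([x[-cp:], x])            # add cyclic prefix
y_cp = np.convolve(x_cp, h)[: x_cp.size]       # channel (linear convolution)
y = y_cp[cp:]                                  # remove CP -> circular convolution
Y = np.fft.fft(y) / np.sqrt(n_fft)

H = np.fft.fft(h, n_fft)                       # channel frequency response
X_hat = Y / H                                  # single-tap equalization per subcarrier
print("max symbol error:", np.max(np.abs(X_hat - X)))   # ~1e-15 (noise-free)
```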

Journal ArticleDOI
TL;DR: This paper investigates the performance of aerial radio connectivity in a typical rural area network deployment using extensive channel measurements and system simulations, and introduces and evaluates a novel downlink inter-cell interference coordination mechanism applied to the aerial command and control traffic.
Abstract: Widely deployed cellular networks are an attractive solution to provide large scale radio connectivity to unmanned aerial vehicles. One main prerequisite is that co-existence and optimal performance for both aerial and terrestrial users can be provided. Today’s cellular networks are, however, not designed for aerial coverage, and deployments are primarily optimized to provide good service for terrestrial users. These considerations, in combination with the strict regulatory requirements, lead to extensive research and standardization efforts to ensure that the current cellular networks can enable reliable operation of aerial vehicles in various deployment scenarios. In this paper, we investigate the performance of aerial radio connectivity in a typical rural area network deployment using extensive channel measurements and system simulations. First, we highlight that downlink and uplink radio interference play a key role, and yield relatively poor performance for the aerial traffic, when load is high in the network. Second, we analyze two potential terminal side interference mitigation solutions: interference cancellation and antenna beam selection. We show that each of these can improve the overall, aerial and terrestrial, system performance to a certain degree, with up to 30% throughput gain, and an increase in the reliability of the aerial radio connectivity to over 99%. Further, we introduce and evaluate a novel downlink inter-cell interference coordination mechanism applied to the aerial command and control traffic. Our proposed coordination mechanism is shown to provide the required aerial downlink performance at the cost of 10% capacity degradation in the serving and interfering cells.

Proceedings ArticleDOI
12 Oct 2018
TL;DR: From the system level simulation results in an urban macro environment, it can be observed that effective multi-cell cooperation, more specifically soft combining, can lead to a significant gain in terms of URLLC capacity.
Abstract: The upcoming fifth generation (5G) wireless communication system is expected to support a broad range of newly emerging applications on top of the regular cellular mobile broadband services. One of the key usage scenarios in the scope of 5G is ultra-reliable and low-latency communications (URLLC). Among the active researchers from both academia and industry, one common view is that URLLC will play an essential role in providing connectivity for the new services and applications from vertical domains, such as factory automation, autonomous driving and so on. The most important key performance indicators (KPIs) related to URLLC are latency, reliability and availability. In this paper, after a brief discussion of the design challenges related to URLLC use cases, we present an overview of the available technology components from 3GPP Rel-15 and potential ones from Rel-16. In addition, coordinated multi-cell resource allocation methods are studied. From the system-level simulation results in an urban macro environment, it can be observed that effective multi-cell cooperation, more specifically soft combining, can lead to a significant gain in terms of URLLC capacity.

Journal ArticleDOI
TL;DR: Energy-efficiency improvements in core networks obtained as a result of work carried out by the GreenTouch consortium over a five-year period are discussed, and an experimental demonstration that illustrates the feasibility of energy-efficient content distribution in IP/WDM networks is implemented.
Abstract: In this paper, we discuss energy-efficiency improvements in core networks obtained as a result of work carried out by the GreenTouch consortium over a five-year period. A number of techniques that yield substantial energy savings in core networks were introduced, including (i) the use of improved network components with lower power consumption, (ii) putting idle components into sleep mode, (iii) optically bypassing intermediate routers, (iv) the use of mixed line rates, (v) placing resources for protection into a low power state when idle, (vi) optimization of the network physical topology, and (vii) the optimization of distributed clouds for content distribution and network equipment virtualization. These techniques are recommended as the main energy-efficiency improvement measures for 2020 core networks. A mixed integer linear programming optimization model combining all the aforementioned techniques was built to minimize energy consumption in the core network. We consider group 1 nations' traffic and place this traffic on a US continental network represented by the AT&T network topology. The projections of the 2020 equipment power consumption are based on two scenarios: a business as usual (BAU) scenario and a GreenTouch (GT) (i.e., BAU + GT) scenario. The results show that the 2020 BAU scenario improves the network energy efficiency by a factor of 4.23× compared with the 2010 network as a result of the reduction in the network equipment power consumption. Considering the 2020 BAU + GT network, the network equipment improvements alone reduce network power by a factor of 20× compared with the 2010 network. Including all the BAU + GT energy-efficiency techniques yields a total energy efficiency improvement of 315×. We have also implemented an experimental demonstration that illustrates the feasibility of energy-efficient content distribution in IP/WDM networks.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed radio resource management scheme can reduce the interference from V2V communication to CUEs and ensure the latency and reliability requirements of V2V communication.
Abstract: By leveraging direct device-to-device interaction, LTE vehicle-to-vehicle (V2V) communication becomes a promising solution to meet the stringent requirements of vehicular communication. In this paper, we propose jointly optimizing the radio resource, power allocation, and modulation/coding schemes of the V2V communications, in order to guarantee the latency and reliability requirements of vehicular user equipments (VUEs) while maximizing the information rate of cellular user equipment (CUE). To ensure the solvability of this optimization problem, the packet latency constraint is first transformed into a data rate constraint based on random network analysis by adopting the Poisson distribution model for the packet arrival process of each VUE. Then, utilizing the Lagrange dual decomposition and binary search, a resource management algorithm is proposed to find the optimal solution of joint optimization problem with reasonable complexity. Simulation results show that the proposed radio resource management scheme can reduce the interference from V2V communication to CUEs and ensure the latency and reliability requirements of V2V communication.
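
The Lagrange-dual-plus-binary-search pattern can be illustrated on the simplest instance of this problem family: sum-rate maximization under a total power budget, where bisecting on the dual variable yields the water-filling allocation. This is a generic sketch of the solution pattern, not the paper's exact V2V formulation; the gains and budget are illustrative.

```python
import numpy as np

def waterfilling_by_bisection(gains, p_total, tol=1e-9):
    """Maximize sum(log(1 + p_i * g_i)) s.t. sum(p_i) <= p_total.
    The Lagrangian gives p_i(lam) = max(0, 1/lam - 1/g_i); bisect on lam
    until the power budget is met -- the generic dual-decomposition +
    binary-search pattern."""
    lo, hi = 1e-12, max(gains)            # bracket for the dual variable
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        p = np.maximum(0.0, 1.0 / lam - 1.0 / gains)
        if p.sum() > p_total:
            lo = lam                      # using too much power -> raise lam
        else:
            hi = lam
    return p

gains = np.array([2.0, 1.0, 0.25])        # illustrative channel gains
p = waterfilling_by_bisection(gains, p_total=3.0)
print("allocation:", p, " total:", p.sum())
```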

Proceedings ArticleDOI
01 Oct 2018
TL;DR: In this paper, an end-to-end RL-based autoencoder is proposed to learn a communication system over any type of channel without prior assumptions; the approach is demonstrated on additive white Gaussian noise (AWGN) and Rayleigh block-fading (RBF) channels.
Abstract: The idea of end-to-end learning of communications systems through neural network (NN)-based autoencoders has the shortcoming that it requires a differentiable channel model. We present in this paper a novel learning algorithm which alleviates this problem. The algorithm iterates between supervised training of the receiver and reinforcement learning (RL)-based training of the transmitter. We demonstrate that this approach works as well as fully supervised methods on additive white Gaussian noise (AWGN) and Rayleigh block-fading (RBF) channels. Surprisingly, while our method converges slower on AWGN channels than supervised training, it converges faster on RBF channels. Our results are a first step towards learning of communications systems over any type of channel without prior assumptions.
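
The core trick can be sketched compactly: the transmitter perturbs its output with Gaussian exploration noise, the channel is treated as a black box, and the receiver feeds back only a per-sample loss, from which a REINFORCE-style gradient estimate is formed. Below is a toy numpy version that learns a 4-point constellation over AWGN; all sizes, step sizes, and the fixed nearest-neighbor receiver are illustrative simplifications of the paper's alternating scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
M, sigma_ch, sigma_pi, lr = 4, 0.3, 0.05, 0.05
const = rng.standard_normal((M, 2)) * 0.5      # trainable constellation (2 reals = 1 complex)

for step in range(4000):
    m = rng.integers(0, M, 256)
    x = const[m]
    xp = x + sigma_pi * rng.standard_normal(x.shape)   # Gaussian exploration (policy)
    y = xp + sigma_ch * rng.standard_normal(x.shape)   # black-box channel: no gradient
    # Receiver feedback: per-sample loss (here a 0/1 symbol error from a
    # nearest-neighbor receiver); only this scalar travels back to the TX.
    d = ((y[:, None, :] - const[None, :, :]) ** 2).sum(-1)
    loss = (d.argmin(1) != m).astype(float)
    # REINFORCE gradient estimate for a Gaussian policy with mean x
    g = (loss[:, None] * (xp - x)) / sigma_pi ** 2
    np.add.at(const, m, -lr * g / m.size)
    const /= np.sqrt((const ** 2).sum(1).mean())       # unit average power

print("learned constellation:\n", const)
```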

Journal ArticleDOI
TL;DR: In this paper, the authors use machine learning and AI-assisted trading to predict the short-term evolution of the cryptocurrency market and show that simple trading strategies assisted by state-of-the-art machine learning algorithms outperform standard benchmarks.
Abstract: Machine learning and AI-assisted trading have attracted growing interest for the past few years. Here, we use this approach to test the hypothesis that the inefficiency of the cryptocurrency market can be exploited to generate abnormal profits. We analyse daily data for cryptocurrencies for the period between Nov. 2015 and Apr. 2018. We show that simple trading strategies assisted by state-of-the-art machine learning algorithms outperform standard benchmarks. Our results show that nontrivial, but ultimately simple, algorithmic mechanisms can help anticipate the short-term evolution of the cryptocurrency market.

Journal ArticleDOI
TL;DR: In this paper, the authors review experimental demonstrations of Kramers-Kronig (KK) based direct detection systems with high per-carrier interface rates, high spectral efficiencies, and ∼100-km reach.
Abstract: In this paper, we review in detail experimental demonstrations of Kramers–Kronig (KK) based direct detection systems with high per-carrier interface rates, high spectral efficiencies, and ∼100-km reach. Two realizations of KK-based receivers are summarized, including single-polarization and dual-polarization versions. Critical aspects of the KK receiver, such as the carrier-to-signal power ratio and receiver bandwidth limitations, are discussed. We show 220-Gb/s single-diode detection and 4 × 240-Gb/s dual-polarization (dual-diode) detection in a WDM system at 5.3 bits/s/Hz spectral efficiency.
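
The KK phase-retrieval step itself is compact enough to sketch: when a strong carrier makes the single-sideband optical field minimum-phase, the photocurrent I = |E|² determines the phase via a Hilbert transform, φ = H{½ ln I}, so one photodiode recovers the full field. The toy below uses an illustrative synthetic SSB signal and carrier power; the small residual error comes from finite oversampling.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n = 4096
# Build a single-sideband (analytic) data signal: only low positive-frequency
# bins are populated, mimicking the SSB signal a KK receiver expects.
S = np.zeros(n, dtype=complex)
S[1:n // 8] = rng.standard_normal(n // 8 - 1) + 1j * rng.standard_normal(n // 8 - 1)
s = np.fft.ifft(S)
s *= 0.2 / np.max(np.abs(s))                 # keep signal well below the carrier

A = 1.0                                      # carrier; CSPR chosen so A > |s(t)|
E = A + s                                    #  -> E(t) is minimum-phase
I = np.abs(E) ** 2                           # single photodiode: intensity only

# Kramers-Kronig phase retrieval: phi(t) = Hilbert{ 0.5 * ln I(t) }
phi = np.imag(hilbert(0.5 * np.log(I)))
E_hat = np.sqrt(I) * np.exp(1j * phi)
print("max field reconstruction error:", np.max(np.abs(E_hat - E)))
```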

Journal ArticleDOI
TL;DR: This article considers challenges and proposes state-of-the-art solutions covering different aspects of the radio interface and system-level simulation results are presented, showing how the proposed techniques can work in harmony in order to fulfill the ambitious latency and reliability requirements of upcoming URLLC applications.
Abstract: URLLC have the potential to enable a new range of applications and services: from wireless control and automation in industrial environments to self-driving vehicles. 5G wireless systems are faced by different challenges for supporting URLLC. Some of the challenges, particularly in the downlink direction, are related to the reliability requirements for both data and control channels, the need for accurate and flexible link adaptation, reducing the processing time of data retransmissions, and the multiplexing of URLLC with other services. This article considers these challenges and proposes state-of-the-art solutions covering different aspects of the radio interface. In addition, system-level simulation results are presented, showing how the proposed techniques can work in harmony in order to fulfill the ambitious latency and reliability requirements of upcoming URLLC applications.

Journal ArticleDOI
TL;DR: Connor is a novel graph encryption scheme that enables approximate CSD querying over encrypted graphs; it is built on an efficient, tree-based ciphertext comparison protocol and makes use of symmetric-key primitives and somewhat homomorphic encryption, making it computationally efficient.
Abstract: Constrained shortest distance (CSD) querying is one of the fundamental graph query primitives, which finds the shortest distance from an origin to a destination in a graph with a constraint that the total cost does not exceed a given threshold. CSD querying has a wide range of applications, such as routing in telecommunications and transportation. With an increasing prevalence of cloud computing paradigm, graph owners desire to outsource their graphs to cloud servers. In order to protect sensitive information, these graphs are usually encrypted before being outsourced to the cloud. This, however, imposes a great challenge to CSD querying over encrypted graphs. Since performing constraint filtering is an intractable task, existing work mainly focuses on unconstrained shortest distance queries. CSD querying over encrypted graphs remains an open research problem. In this paper, we propose Connor , a novel graph encryption scheme that enables approximate CSD querying. Connor is built based on an efficient, tree-based ciphertext comparison protocol, and makes use of symmetric-key primitives and the somewhat homomorphic encryption, making it computationally efficient. Using Connor , a graph owner can first encrypt privacy-sensitive graphs and then outsource them to the cloud server, achieving the necessary privacy without losing the ability of querying. Extensive experiments with real-world data sets demonstrate the effectiveness and efficiency of the proposed graph encryption scheme.
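
For readers unfamiliar with the query primitive being protected, here is a plaintext sketch of CSD itself: a Dijkstra search over (node, accumulated-cost) states that returns the shortest distance whose total cost stays within a budget. The toy graph is hypothetical; Connor answers this kind of query approximately over encrypted graphs.

```python
import heapq

def constrained_shortest_distance(graph, src, dst, cost_budget):
    """Shortest distance from src to dst s.t. total cost <= cost_budget.
    graph[u] = list of (v, distance, cost). Plaintext version of the CSD
    primitive; Dijkstra over (node, cost-used) states."""
    best = {}                                  # (node, cost) -> best distance
    pq = [(0.0, 0.0, src)]                     # (distance, cost, node)
    while pq:
        dist, cost, u = heapq.heappop(pq)
        if u == dst:
            return dist                        # first pop of dst is optimal
        if best.get((u, cost), float("inf")) < dist:
            continue                           # stale heap entry
        for v, d, c in graph.get(u, []):
            nd, nc = dist + d, cost + c
            if nc <= cost_budget and nd < best.get((v, nc), float("inf")):
                best[(v, nc)] = nd
                heapq.heappush(pq, (nd, nc, v))
    return None                                # no route within the budget

# Toy road network: edges carry (distance, toll-cost)
G = {"a": [("b", 1, 5), ("c", 4, 1)], "b": [("d", 1, 5)], "c": [("d", 4, 1)]}
print(constrained_shortest_distance(G, "a", "d", cost_budget=10))  # 2 (via b)
print(constrained_shortest_distance(G, "a", "d", cost_budget=2))   # 8 (via c)
```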

Journal ArticleDOI
TL;DR: eSense, an in-ear multisensory stereo device for personal-scale behavior analytics, could help accelerate the understanding of a wide range of human activities in a nonintrusive manner.
Abstract: The rise of consumer wearables promises to have a profound impact on people's lives by going beyond counting steps. Wearables such as eSense, an in-ear multisensory stereo device for personal-scale behavior analytics, could help accelerate our understanding of a wide range of human activities in a nonintrusive manner.

Journal ArticleDOI
TL;DR: A joint link adaptation and resource allocation policy is proposed that dynamically adjusts the block error probability of URLLC small payload transmissions in accordance with the instantaneous experienced load per cell; the conditions most appropriate for dynamic multiplexing of URLLC and eMBB traffic in the upcoming 5G systems are also identified.
Abstract: This paper presents solutions for efficient multiplexing of ultra-reliable low-latency communications (URLLC) and enhanced mobile broadband (eMBB) traffic on a shared channel. This scenario presents multiple challenges in terms of radio resource scheduling, link adaptation, and inter-cell interference, which are identified and addressed throughout this paper. We propose a joint link adaptation and resource allocation policy that dynamically adjusts the block error probability of URLLC small payload transmissions in accordance with the instantaneous experienced load per cell. Extensive system-level simulations of the downlink performance show promising gains of this technique, reducing the URLLC latency from 1.3 to 1 ms at the 99.999th percentile, with less than 10% degradation of the eMBB throughput performance as compared with conventional scheduling policies. Moreover, an exhaustive sensitivity analysis is conducted to determine the URLLC and eMBB performance under different offered loads, URLLC payload sizes, and link adaptation and scheduling strategies. The presented results give valuable insights on the maximum URLLC offered traffic load that can be tolerated while still satisfying the URLLC requirements, as well as on the conditions that are more appropriate for dynamic multiplexing of URLLC and eMBB traffic in the upcoming 5G systems.

Journal ArticleDOI
TL;DR: It is observed that soft biometrics is a valuable complement to the face modality in unconstrained scenarios, with relative improvements up to 40%/15% in the verification performance when using manual/automatic soft biometrics estimation.
Abstract: The role of soft biometrics to enhance person recognition systems in unconstrained scenarios has not been extensively studied. Here, we explore the utility of the following modalities: gender, ethnicity, age, glasses, beard, and moustache. We consider two assumptions: 1) manual estimation of soft biometrics and 2) automatic estimation from two commercial off-the-shelf systems (COTS). All experiments are reported using the labeled faces in the wild (LFW) database. First, we study the discrimination capabilities of soft biometrics standalone. Then, experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning. We observe that soft biometrics is a valuable complement to the face modality in unconstrained scenarios, with relative improvements up to 40%/15% in the verification performance when using manual/automatic soft biometrics estimation. Results are reproducible as we make public our manual annotations and COTS outputs of soft biometrics over LFW, as well as the face recognition scores.

Journal ArticleDOI
TL;DR: A contention-based transmission scheme aimed at users with small payloads is proposed; it reduces collision probability by transmitting multiple copies of the same packet and achieves target reliability within the latency window.
Abstract: We consider sporadic ultra-reliable and low-latency communications in uplink 5G cellular systems. Reliable low-latency access for randomly emerging packet transmissions cannot be guaranteed in current wireless systems. To achieve the goals of low latency and high reliability simultaneously, we propose a contention-based transmission scheme aimed at users with small payloads. We seek to reduce collision probability by considering multiple transmissions of the same packet for reliable reception. We find the optimal number of consecutive multiple transmissions that reduces collisions and achieves target reliability within the latency window. Performance is analyzed with a frame structure planned for 5G cellular systems. Results are compared with the default multi-channel slotted ALOHA access scheme.
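
The scheme's core trade-off is easy to reproduce in a Monte Carlo toy: each user places K copies of its packet in random slots of a latency window, and delivery succeeds if at least one copy lands collision-free; more copies combat collisions but also add load, so an intermediate K is optimal. User counts and window size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_slots, trials = 8, 40, 4000   # users per window, window size (illustrative)

def delivery_prob(k):
    ok = 0
    for _ in range(trials):
        # Each user places k copies of its packet in distinct random slots.
        picks = [rng.choice(n_slots, size=k, replace=False) for _ in range(n_users)]
        occupancy = np.bincount(np.concatenate(picks), minlength=n_slots)
        # User 0 is delivered if at least one of its copies is alone in its slot.
        ok += np.any(occupancy[picks[0]] == 1)
    return ok / trials

for k in (1, 2, 3, 4, 6, 8):
    print(f"K={k}: P(delivery within window) = {delivery_prob(k):.3f}")
```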

Journal ArticleDOI
TL;DR: A realistic side-by-side comparison of two network deployments – a present-day cellular infrastructure versus a next-generation massive MIMO system – is presented, bridging the gap between the 3GPP standardization status quo and the more forward-looking research.
Abstract: The purpose of this paper is to bestow the reader with a timely study of UAV cellular communications, bridging the gap between the 3GPP standardization status quo and the more forward-looking research. Special emphasis is placed on the downlink command and control (C&C) channel for aerial users. Among the findings: over a 10 MHz bandwidth, and for UAV heights of up to 300 m, massive MIMO networks can support 100 kbps C&C channels; however, supporting UAV C&C channels can considerably affect the performance of ground users on account of severe pilot contamination, unless suitable power control policies are in place.

Proceedings ArticleDOI
17 Jun 2018
TL;DR: This work proposes a novel coding strategy, named entangled polynomial code, designing intermediate computations at the workers in order to minimize the recovery threshold, and characterizes the optimal recovery threshold among all linear coding strategies within a factor of 2 using bilinear complexity.
Abstract: Consider massive matrix multiplication, a problem that underlies many data analytic applications, in a large-scale distributed system comprising a group of workers. We target the stragglers' delay performance bottleneck, which is due to the unpredictable latency in waiting for slowest nodes (or stragglers) to finish their tasks. We propose a novel coding strategy, named entangled polynomial code, designing intermediate computations at the workers in order to minimize the recovery threshold (i.e., the number of workers that we need to wait for in order to compute the final output). We prove the optimality of entangled polynomial code in several cases, and show that it provides order-wise improvement over the conventional schemes for straggler mitigation. Furthermore, we characterize the optimal recovery threshold among all linear coding strategies within a factor of 2 using bilinear complexity, by developing an improved version of the entangled polynomial code.
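
The flavor of the construction can be conveyed with the simpler polynomial-code idea that entangled polynomial codes generalize, shown here for coded matrix-vector multiplication: row-blocks of A are encoded as evaluations of a matrix polynomial, so the master can interpolate A·b from any m worker results and ignore stragglers beyond that. Block counts and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, rows, cols, n_workers = 4, 8, 16, 7        # m row-blocks, 7 workers, 3 stragglers tolerated

A = rng.standard_normal((m * rows, cols))
b = rng.standard_normal(cols)
blocks = np.split(A, m)                       # A_0 ... A_{m-1}

# Encode: worker i gets A~(x_i) = sum_j A_j * x_i**j and computes A~(x_i) @ b,
# an evaluation of the vector polynomial with coefficients A_j @ b.
xs = np.linspace(-1, 1, n_workers)            # distinct evaluation points
results = {i: sum(Aj * xs[i] ** j for j, Aj in enumerate(blocks)) @ b
           for i in range(n_workers)}

# Decode from ANY m of the n_workers results (here: pretend 3 straggled).
alive = [0, 2, 5, 6]
V = np.vander(xs[alive], m, increasing=True)  # Vandermonde of surviving points
coeffs = np.linalg.solve(V, np.array([results[i] for i in alive]))
print("decode error:", np.max(np.abs(np.concatenate(coeffs) - A @ b)))
```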