
Showing papers by "Bell Labs" published in 2019


Journal ArticleDOI
TL;DR: This paper overviews the current research efforts on smart radio environments, the enabling technologies to realize them in practice, the need for new communication-theoretic models for their analysis and design, and the long-term and open research issues to be solved towards their massive deployment.
Abstract: Future wireless networks are expected to constitute a distributed intelligent wireless communications, sensing, and computing platform, which will have the challenging requirement of interconnecting the physical and digital worlds in a seamless and sustainable manner. Currently, two main factors prevent wireless network operators from building such networks: (1) the lack of control of the wireless environment, whose impact on the radio waves cannot be customized, and (2) the current operation of wireless radios, which consume a lot of power because new signals are generated whenever data has to be transmitted. In this paper, we challenge the usual “more data needs more power and emission of radio waves” status quo, and motivate that future wireless networks necessitate a smart radio environment: a transformative wireless concept, where the environmental objects are coated with artificial thin films of electromagnetic and reconfigurable material (that are referred to as reconfigurable intelligent meta-surfaces), which are capable of sensing the environment and of applying customized transformations to the radio waves. Smart radio environments have the potential to provide future wireless networks with uninterrupted wireless connectivity, and with the capability of transmitting data without generating new signals but recycling existing radio waves. We will discuss, in particular, two major types of reconfigurable intelligent meta-surfaces applied to wireless networks. The first type of meta-surfaces will be embedded into, e.g., walls, and will be directly controlled by the wireless network operators via a software controller in order to shape the radio waves for, e.g., improving the network coverage. The second type of meta-surfaces will be embedded into objects, e.g., smart t-shirts with sensors for health monitoring, and will backscatter the radio waves generated by cellular base stations in order to report their sensed data to mobile phones. 
These functionalities will enable wireless network operators to offer new services without the emission of additional radio waves, but by recycling those already existing for other purposes. This paper overviews the current research efforts on smart radio environments, the enabling technologies to realize them in practice, the need for new communication-theoretic models for their analysis and design, and the long-term and open research issues to be solved towards their massive deployment. In a nutshell, this paper is focused on discussing how the availability of reconfigurable intelligent meta-surfaces will allow wireless network operators to redesign common and well-known network communication paradigms.

1,504 citations
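
The phase-customization idea behind these reconfigurable surfaces can be illustrated with a toy single-user link. The sketch below is my own illustration, not from the paper (channel sizes and values are made up); it uses the standard closed-form rule in which each element's phase shift cancels the phase of its cascaded channel, so all reflected paths add coherently at the receiver.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of reflecting elements (hypothetical)

# Random complex channels: transmitter -> surface (h), surface -> receiver (g)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Each element applies a unit-modulus phase shift; the classic closed-form
# choice aligns every cascaded path: theta_n = -(arg h_n + arg g_n).
theta = -(np.angle(h) + np.angle(g))
aligned = np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

# Baseline: uncontrolled (random) phases, as for an ordinary scatterer.
random_phase = np.abs(np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g)) ** 2

print(f"received power, random phases : {random_phase:.2f}")
print(f"received power, aligned phases: {aligned:.2f}")
```

With aligned phases the received power equals (sum of |h_n||g_n|)^2, i.e., it grows quadratically with the number of elements, which is the coverage-improvement mechanism the abstract alludes to.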


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive survey of all of these developments promoting smooth integration of UAVs into cellular networks, including the types of consumer UAV currently available off-the-shelf, the interference issues and potential solutions addressed by standardization bodies for serving aerial users with the existing terrestrial BSs, challenges and opportunities for assisting cellular communications with UAV-based flying relays and BSs.
Abstract: The rapid growth of consumer unmanned aerial vehicles (UAVs) is creating promising new business opportunities for cellular operators. On the one hand, UAVs can be connected to cellular networks as new types of user equipment, therefore generating significant revenues for the operators that can guarantee their stringent service requirements. On the other hand, UAVs offer the unprecedented opportunity to realize UAV-mounted flying base stations (BSs) that can dynamically reposition themselves to boost coverage, spectral efficiency, and user quality of experience. Indeed, the standardization bodies are currently exploring possibilities for serving commercial UAVs with cellular networks. Industries are beginning to trial early prototypes of flying BSs or user equipments, while academia is in full swing researching mathematical and algorithmic solutions to address interesting new problems arising from flying nodes in cellular networks. In this paper, we provide a comprehensive survey of all of these developments promoting smooth integration of UAVs into cellular networks. Specifically, we survey: 1) the types of consumer UAVs currently available off-the-shelf; 2) the interference issues and potential solutions addressed by standardization bodies for serving aerial users with the existing terrestrial BSs; 3) the challenges and opportunities for assisting cellular communications with UAV-based flying relays and BSs; 4) the ongoing prototyping and test bed activities; 5) the new regulations being developed to manage the commercial use of UAVs; and 6) the cyber-physical security of UAV-assisted cellular communications.

667 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explain how the first chapter of the massive MIMO research saga has come to an end, while the story has just begun, and outline five new massive antenna array related research directions.

556 citations


Journal ArticleDOI
TL;DR: The Rel-16 features and outlook towards Rel-17 and beyond are discussed and new features to further expand the applicability of the 5G System to new markets and use cases are introduced.
Abstract: The 5G System is being developed and enhanced to provide unparalleled connectivity to connect everyone and everything, everywhere. The first version of the 5G System, based on the Release 15 (“Rel-15”) version of the specifications developed by 3GPP, comprising the 5G Core (5GC) and 5G New Radio (NR) with 5G User Equipment (UE), is currently being deployed commercially throughout the world both at sub-6 GHz and at mmWave frequencies. Concurrently, the second phase of 5G is being standardized by 3GPP in the Release 16 (“Rel-16”) version of the specifications which will be completed by March 2020. While the main focus of Rel-15 was on enhanced mobile broadband services, the focus of Rel-16 is on new features for URLLC (Ultra-Reliable Low Latency Communication) and Industrial IoT, including Time Sensitive Communication (TSC), enhanced Location Services, and support for Non-Public Networks (NPNs). In addition, some crucial new features, such as NR on unlicensed bands (NR-U), Integrated Access & Backhaul (IAB) and NR Vehicle-to-X (V2X), are also being introduced as part of Rel-16, as well as enhancements for massive MIMO, wireless and wireline convergence, the Service Based Architecture (SBA) and Network Slicing. Finally, the number of use cases, types of connectivity and users, and applications running on top of 5G networks, are all expected to increase dramatically, thus motivating additional security features to counter security threats which are expected to increase in number, scale and variety. In this paper, we discuss the Rel-16 features and provide an outlook towards Rel-17 and beyond, covering both new features and enhancements of existing features. 5G Evolution will focus on three main areas: enhancements to features introduced in Rel-15 and Rel-16, features that are needed for operational enhancements, and new features to further expand the applicability of the 5G System to new markets and use cases.

532 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a multi-plane light conversion scheme for large number of spatial modes in a scalable fashion, where the number of phase plates required scales with the dimensionality of the transformation.
Abstract: Multi-plane light conversion is a method of performing spatial basis transformations using cascaded phase plates separated by Fourier transforms or free-space propagation. In general, the number of phase plates required scales with the dimensionality (total number of modes) of the transformation. This is a practical limitation of the technique as it relates to scaling to large mode counts. First, requiring many planes increases the complexity of the optical system itself, making it difficult to implement; second, even a very small loss per plane grows exponentially as more and more planes are added, causing a theoretically lossless optical system to be far from lossless in practice. Spatial basis transformations of particular interest are those which take a set of spatial modes that exist in the same or similar space and transform them into an array of spatially separated spots, analogous to the operation performed by a diffraction grating in the wavelength domain or a polarizing beamsplitter in the polarization domain. Decomposing the Laguerre-Gaussian, Hermite-Gaussian, or related bases into an array of spots are examples of this and are relevant to many areas of light propagation in free space and optical fibre. In this paper we present our work on designing multi-plane light conversion devices capable of operating on large numbers of spatial modes in a scalable fashion.

266 citations
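
The loss-scaling argument above is simple compounding: if each plane transmits a fraction (1 − ε) of the light, a cascade of N planes transmits (1 − ε)^N. A short sketch with an assumed (not measured) per-plane loss:

```python
# Total transmission of a cascade of N phase plates when each plane has a
# small fractional loss. The 0.5% per-plane figure is illustrative only.
def cascade_transmission(loss_per_plane: float, n_planes: int) -> float:
    return (1.0 - loss_per_plane) ** n_planes

for n in (7, 25, 100):
    t = cascade_transmission(0.005, n)
    print(f"{n:3d} planes -> {t * 100:5.1f}% transmitted")
```

Even a seemingly negligible per-plane loss becomes significant at the plane counts that a naive (one plane per mode) design would demand, which motivates the scalable designs the paper pursues.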


Journal ArticleDOI
Junho Cho1, Peter J. Winzer1
TL;DR: The optimal parameters of PCS and FEC are derived that maximize the IR for both ideal and non-ideal PCS, and key assumptions are carefully revisited to avoid plausible pitfalls in practice.
Abstract: We review probabilistic constellation shaping (PCS), which has been a key enabler for several recent record-setting optical fiber communications experiments. PCS provides both fine-grained rate adaptability and energy efficiency (sensitivity) gains. We discuss the reasons for the fundamentally better performance of PCS over other constellation shaping techniques that also achieve rate adaptability, such as time-division hybrid modulation, and examine in detail the impact of sub-optimum shaping and forward error correction (FEC) on PCS systems. As performance metrics for systems with PCS, we compare information-theoretic measures such as mutual information (MI), generalized MI (GMI), and normalized GMI, which enable optimization and quantification of the information rate (IR) that can be achieved by PCS and FEC. We derive the optimal parameters of PCS and FEC that maximize the IR for both ideal and non-ideal PCS and FEC. To avoid plausible pitfalls in practice, we carefully revisit key assumptions that are typically made for ideal PCS and FEC systems.

255 citations
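
A common way to realize the shaping discussed above (standard in the PCS literature, though the numbers here are illustrative) is to draw amplitude levels from a Maxwell-Boltzmann distribution, trading entropy (rate) for lower mean symbol energy:

```python
import numpy as np

# Maxwell-Boltzmann shaping over the amplitudes of 8-ASK (one quadrature of
# 64-QAM). The rate parameter nu is an illustrative choice.
amps = np.array([1, 3, 5, 7], dtype=float)

def mb_distribution(nu: float) -> np.ndarray:
    w = np.exp(-nu * amps ** 2)   # favor low-energy amplitudes
    return w / w.sum()

def entropy_bits(p: np.ndarray) -> float:
    return float(-(p * np.log2(p)).sum())

p_uni = np.full(4, 0.25)          # unshaped (uniform) signaling
p_mb = mb_distribution(0.03)      # shaped signaling

# Shaping lowers the mean symbol energy at the cost of a lower entropy (rate).
print("uniform: H =", entropy_bits(p_uni), " E =", (p_uni * amps ** 2).sum())
print("shaped : H =", entropy_bits(p_mb), " E =", (p_mb * amps ** 2).sum())
```

Sweeping nu traces out the fine-grained rate adaptability the abstract highlights: nu = 0 recovers uniform signaling, while larger nu buys energy efficiency at reduced rate.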


Journal ArticleDOI
TL;DR: A joint model is built to integrate the nonlocal self-similarity of video/hyperspectral frames and the rank minimization approach with the SCI sensing process and an alternating minimization algorithm is developed to solve the non-convex problem of SCI reconstruction.
Abstract: Snapshot compressive imaging (SCI) refers to compressive imaging systems where multiple frames are mapped into a single measurement, with video compressive imaging and hyperspectral compressive imaging as two representative applications. Though exciting results of high-speed videos and hyperspectral images have been demonstrated, the poor reconstruction quality precludes SCI from wide applications. This paper aims to boost the reconstruction quality of SCI by exploiting the high-dimensional structure in the desired signal. We build a joint model to integrate the nonlocal self-similarity of video/hyperspectral frames and the rank minimization approach with the SCI sensing process. Following this, an alternating minimization algorithm is developed to solve this non-convex problem. We further investigate the special structure of the sampling process in SCI to tackle the computational workload and memory issues in SCI reconstruction. Both simulation and real data (captured by four different SCI cameras) results demonstrate that our proposed algorithm leads to significant improvements compared with current state-of-the-art algorithms. We hope our results will encourage researchers and engineers to pursue compressive imaging further for real applications.

244 citations
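
The sensing process this model builds on can be written in one line: each frame is modulated by a binary mask and the modulated frames are summed into a single 2-D measurement. A minimal sketch of that forward model (toy sizes, random masks, not the authors' reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, T = 8, 8, 4  # tiny frame size and number of frames per snapshot

video = rng.random((H, W, T))            # frames to be compressed
masks = rng.integers(0, 2, (H, W, T))    # binary modulation mask per frame

# One 2-D measurement encodes all T frames:  y = sum_t  M_t * X_t
y = (masks * video).sum(axis=2)
print(y.shape)  # (8, 8): T frames collapsed into a single snapshot
```

Reconstruction then inverts this highly underdetermined map, which is where the paper's nonlocal self-similarity and rank-minimization priors come in.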


Journal ArticleDOI
TL;DR: In this paper, a segregated network composite of carbon nanotubes with a range of lithium storage materials (for example, silicon, graphite and metal oxide particles) suppresses mechanical instabilities by toughening the composite, allowing the fabrication of high-performance electrodes with thicknesses of up to 800μm.
Abstract: Increasing the energy storage capability of lithium-ion batteries necessitates maximization of their areal capacity. This requires thick electrodes performing at near-theoretical specific capacity. However, achievable electrode thicknesses are restricted by mechanical instabilities, with high-thickness performance limited by the attainable electrode conductivity. Here we show that forming a segregated network composite of carbon nanotubes with a range of lithium storage materials (for example, silicon, graphite and metal oxide particles) suppresses mechanical instabilities by toughening the composite, allowing the fabrication of high-performance electrodes with thicknesses of up to 800 μm. Such composite electrodes display conductivities up to 1 × 104 S m−1 and low charge-transfer resistances, allowing fast charge-delivery and enabling near-theoretical specific capacities, even for thick electrodes. The combination of high thickness and specific capacity leads to areal capacities of up to 45 and 30 mAh cm−2 for anodes and cathodes, respectively. Combining optimized composite anodes and cathodes yields full cells with state-of-the-art areal capacities (29 mAh cm−2) and specific/volumetric energies (480 Wh kg−1 and 1,600 Wh l−1). While thicker battery electrodes are in high demand to maximize energy density, mechanical instability is a major hurdle in their fabrication. Here the authors report that segregated carbon nanotube networks enable thick, high-capacity electrodes for a range of materials including Si and NMC.

233 citations
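
The headline areal capacities can be sanity-checked with the identity areal capacity = areal mass loading × specific capacity, where loading follows from thickness and electrode density. The numbers below are illustrative guesses, not values reported in the paper:

```python
# Back-of-envelope check of an areal capacity from electrode thickness,
# density and specific capacity. All inputs are illustrative assumptions.
def areal_capacity_mah_cm2(thickness_um: float,
                           density_g_cm3: float,
                           specific_capacity_mah_g: float) -> float:
    loading_g_cm2 = thickness_um * 1e-4 * density_g_cm3  # cm * g/cm^3
    return loading_g_cm2 * specific_capacity_mah_g

# e.g. an 800 um electrode at an assumed 1.0 g/cm^3 and 400 mAh/g
print(areal_capacity_mah_cm2(800, 1.0, 400))  # -> 32.0 mAh/cm^2
```

This shows why thickness and near-theoretical specific capacity must be achieved together: areal capacities of tens of mAh/cm^2 require both.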


Journal ArticleDOI
TL;DR: In this paper, the authors present a vision for graphene-based integrated photonics and a roadmap of the technological requirements to meet the demands of the datacom and telecom markets.
Abstract: Graphene is an ideal material for optoelectronic applications. Its photonic properties give several advantages and complementarities over Si photonics. For example, graphene enables both electro-absorption and electro-refraction modulation with an electro-optical index change exceeding 10^-3. It can be used for optical add-drop multiplexing with voltage control, eliminating the current dissipation used for the thermal detuning of microresonators, and for thermoelectric-based ultrafast optical detectors that generate a voltage without transimpedance amplifiers. Here, we present our vision for graphene-based integrated photonics. We review graphene-based transceivers and compare them with existing technologies. Strategies for improving power consumption, manufacturability and wafer-scale integration are addressed. We outline a roadmap of the technological requirements to meet the demands of the datacom and telecom markets. We show that graphene-based integrated photonics could enable ultrahigh spatial bandwidth density and low power consumption for board connectivity and connectivity between data centres, access networks and metropolitan, core, regional and long-haul optical communications.

223 citations


Journal ArticleDOI
TL;DR: The results indicate that operating an IoT device at a temperature of −20 °C can shorten its life by about half, and with a 10% improvement in receiver sensitivity, NB-IoT 882 MHz and LoRaWAN can increase coverage by up to 398% and 142%, respectively, without adverse effects on the energy requirements.
Abstract: The rapid growth of the Internet of Things (IoT) in the current decade has led to the development of a multitude of new access technologies targeted at low-power, wide-area networks (LP-WANs). However, this has also created another challenge pertaining to technology selection. This paper reviews the performance of LP-WAN technologies for IoT, including design choices and their implications. We consider Sigfox, LoRaWAN, WavIoT, random phase multiple access (RPMA), narrowband IoT (NB-IoT), as well as LTE-M, and assess their performance in terms of signal propagation, coverage and energy conservation. The comparative analyses presented in this paper are based on available data sheets and simulation results. A sensitivity analysis is also conducted to evaluate network performance in response to variations in system design parameters. Results show that each of RPMA, NB-IoT, and LTE-M incurs at least 9 dB additional path loss relative to Sigfox and LoRaWAN. This paper further reveals that with a 10% improvement in receiver sensitivity, NB-IoT 882 MHz and LoRaWAN can increase coverage by up to 398% and 142%, respectively, without adverse effects on the energy requirements. Finally, extreme weather conditions can significantly reduce the active network life of LP-WANs. In particular, the results indicate that operating an IoT device at a temperature of −20 °C can shorten its life by about half: 53% for WavIoT, LoRaWAN, Sigfox, NB-IoT, and RPMA, and 48% for LTE-M, compared with an environmental temperature of 40 °C.

204 citations
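
The coverage gains quoted above follow from link-budget arithmetic: under a log-distance path-loss model with exponent n, an extra Δ dB of link margin (e.g., from better receiver sensitivity) multiplies the achievable range by 10^(Δ/(10n)). A sketch with assumed exponents (the paper's channel models may differ):

```python
# Extra link margin (dB) -> range multiplier under a log-distance path-loss
# model with exponent n: d2/d1 = 10 ** (delta_db / (10 * n)).
def range_multiplier(delta_db: float, n: float) -> float:
    return 10 ** (delta_db / (10 * n))

# Illustrative: a 6 dB sensitivity gain in free space (n = 2) versus a
# cluttered urban channel (n = 3.5).
for n in (2.0, 3.5):
    print(f"n = {n}: range x {range_multiplier(6.0, n):.2f}")
```

The same arithmetic explains why identical sensitivity improvements yield very different coverage gains for different technologies and deployment environments.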


Posted Content
TL;DR: In this paper, the authors explain how the first chapter of the massive MIMO research saga has come to an end, while the story has just begun, and outline five new massive antenna array related research directions.
Abstract: Massive MIMO (multiple-input multiple-output) is no longer a "wild" or "promising" concept for future cellular networks - in 2018 it became a reality. Base stations (BSs) with 64 fully digital transceiver chains were commercially deployed in several countries, the key ingredients of Massive MIMO have made it into the 5G standard, the signal processing methods required to achieve unprecedented spectral efficiency have been developed, and the limitation due to pilot contamination has been resolved. Even the development of fully digital Massive MIMO arrays for mmWave frequencies - once viewed as prohibitively complicated and costly - is well underway. In a few years, Massive MIMO with fully digital transceivers will be a mainstream feature at both sub-6 GHz and mmWave frequencies. In this paper, we explain how the first chapter of the Massive MIMO research saga has come to an end, while the story has just begun. The coming wide-scale deployment of BSs with massive antenna arrays opens the door to a brand new world where spatial processing capabilities are omnipresent. In addition to mobile broadband services, the antennas can be used for other communication applications, such as low-power machine-type or ultra-reliable communications, as well as non-communication applications such as radar, sensing and positioning. We outline five new Massive MIMO related research directions: Extremely large aperture arrays, Holographic Massive MIMO, Six-dimensional positioning, Large-scale MIMO radar, and Intelligent Massive MIMO.

Journal ArticleDOI
TL;DR: A novel DT prototype is developed to analyze the requirements of communication in a mission-critical application such as mobile-network-supported remote surgery, along with the cybersecurity technologies needed to develop the DT architecture.
Abstract: The concept of digital twin (DT) has emerged to enable the benefits of future paradigms such as the industrial Internet of Things and Industry 4.0. The idea is to make every data source and control interface description related to a product or process available through a single interface, for auto-discovery and automated communication establishment. However, designing the architecture of a DT to serve every future application is an ambitious task. Therefore, prototyping systems for specific applications are required to design the DT incrementally. We developed a novel DT prototype to analyze the requirements of communication in a mission-critical application such as mobile-network-supported remote surgery. Such operations require low latency and high levels of security and reliability, and therefore are a perfect subject for analyzing DT communication and cybersecurity. The system comprised a robotic arm and an HTC Vive virtual reality (VR) system connected over a 4G mobile network. More than 70 test users were employed to assess the system. To address the cybersecurity of the system, we incorporated a network manipulation module to test the effect of network outages and attacks; we studied state-of-the-art practices and their utilization within DTs. The capability of the system for actual remote surgery is limited by the capabilities of the VR system and insufficient feedback from the robot. However, simulations and research on remote surgeries could be conducted with the system. As a result, we propose ideas for communication establishment and the necessary cybersecurity technologies that will help in developing the DT architecture. Furthermore, we concluded that developing the DT requires cross-disciplinary development in several different engineering fields. Each field makes use of its own tools and methods, which do not always fit together perfectly. This is a potentially major obstacle in the realization of Industry 4.0 and similar concepts.

Posted Content
TL;DR: Three new multiple antenna technologies that can play key roles in beyond 5G networks are surveyed: cell-free massive MIMO, beamspace massive MIMO, and intelligent reflecting surfaces.
Abstract: Multiple antenna technologies have attracted large research interest for several decades and have gradually made their way into mainstream communication systems. Two main benefits are adaptive beamforming gains and spatial multiplexing, leading to high data rates per user and per cell, especially when large antenna arrays are used. Now that multiple antenna technology has become a key component of the fifth-generation (5G) networks, it is time for the research community to look for new multiple antenna applications to meet the immensely higher data rate, reliability, and traffic demands in the beyond 5G era. We need radically new approaches to achieve orders-of-magnitude improvements in these metrics, and this will be connected to large technical challenges, many of which are yet to be identified. In this paper, we survey three new multiple antenna related research directions that might play a key role in beyond 5G networks: cell-free massive multiple-input multiple-output (MIMO), beamspace massive MIMO, and intelligent reflecting surfaces. More specifically, the fundamental motivation and key characteristics of these new technologies are introduced. Recent technical progress is also presented. Finally, we provide a list of other prospective future research directions.

Proceedings ArticleDOI
01 Apr 2019
TL;DR: In this paper, the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with multidimensional (storage-computation-communication) constraints is studied.
Abstract: The proliferation of innovative mobile services such as augmented reality, networked gaming, and autonomous driving has spurred a growing need for low-latency access to computing resources that cannot be met solely by existing centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an effective solution to meet the demand for low-latency services by enabling the execution of computing tasks at the network-periphery, in proximity to end-users. While a number of recent studies have addressed the problem of determining the execution of service tasks and the routing of user requests to corresponding edge servers, the focus has primarily been on the efficient utilization of computing resources, neglecting the fact that non-trivial amounts of data need to be stored to enable service execution, and that many emerging services exhibit asymmetric bandwidth requirements. To fill this gap, we study the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with multidimensional (storage-computation-communication) constraints. We show that this problem generalizes several problems in literature and propose an algorithm that achieves close-to-optimal performance using randomized rounding. Evaluation results demonstrate that our approach can effectively utilize the available resources to maximize the number of requests served by low-latency edge cloud servers.
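
The randomized-rounding step the abstract mentions can be sketched as follows: a relaxed (fractional) placement is read as a probability distribution and sampled until a capacity-feasible integral placement emerges. Everything below (matrix values, capacity, retry loop) is an illustrative simplification of the technique, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fractional LP solution: x[s, e] = fraction of service s placed on edge
# node e (made-up values; a real solution would come from an LP solver).
x = np.array([[0.7, 0.3],
              [0.2, 0.8],
              [0.5, 0.5]])

def round_placement(x, capacity, rng):
    """Sample one node per service from its fractional row, retrying until
    every edge node hosts at most `capacity` services."""
    while True:
        choice = np.array([rng.choice(x.shape[1], p=row) for row in x])
        counts = np.bincount(choice, minlength=x.shape[1])
        if (counts <= capacity).all():
            return choice

placement = round_placement(x, capacity=2, rng=rng)
print(placement)  # edge node chosen for each of the 3 services
```

Because each service lands on node e with probability x[s, e], the expected value of the rounded solution tracks the LP optimum, which is the intuition behind the close-to-optimal guarantee.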

Journal ArticleDOI
TL;DR: An energy supply architecture is proposed to satisfy the different energy demands of miners under different consensus protocols, and energy allocation is formulated as a Stackelberg game, solved by backward induction to achieve an optimal profit strategy for both microgrids and miners in IoT.
Abstract: Currently, blockchain technology is widely used due to its support of transaction trust and security in the next-generation society. Using the Internet of Things (IoT) to mine makes blockchain more ubiquitous and decentralized, which has become a main development trend of blockchain. However, the limited resources of existing IoT cannot satisfy the high requirements of on-demand energy consumption in the mining process in a decentralized way. To address this, we propose a decentralized on-demand energy supply approach based on microgrids to provide decentralized on-demand energy for mining on IoT devices. First, an energy supply architecture is proposed to satisfy the different energy demands of miners in response to different consensus protocols. Then, we formulate the energy allocation as a Stackelberg game and apply backward induction to achieve an optimal profit strategy for both microgrids and miners in IoT. The simulation results show the fairness and incentive of the proposed approach.
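
The Stackelberg structure can be made concrete with a toy one-leader, one-follower version: the microgrid (leader) posts an energy price, the miner (follower) best-responds with a demand, and backward induction lets the leader optimize against that reaction. The utility forms and constants below are illustrative, not the paper's model:

```python
import numpy as np

# Toy single-leader / single-follower Stackelberg energy-pricing game,
# solved by backward induction. Utilities and constants are illustrative.
a, c = 10.0, 1.0  # miner's valuation parameter; microgrid's unit energy cost

def miner_best_response(price: float) -> float:
    # Follower maximizes a*ln(1 + q) - price*q  =>  q* = a/price - 1 (if > 0)
    return max(a / price - 1.0, 0.0)

def leader_profit(price: float) -> float:
    # Leader's profit, anticipating the follower's reaction
    q = miner_best_response(price)
    return (price - c) * q

# Backward induction: optimize the leader's price against the follower's
# best response (grid search; analytically the optimum is sqrt(a*c)).
prices = np.linspace(1.01, a, 2000)
best = prices[np.argmax([leader_profit(p) for p in prices])]
print(best, miner_best_response(best))
```

Substituting the follower's reaction into the leader's profit is exactly the backward-induction step; here it reduces a two-player game to a one-dimensional optimization.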

Journal ArticleDOI
17 Jul 2019
TL;DR: A novel Spatial-Temporal Attention (STA) approach to tackle the large-scale person reidentification task in videos that fully exploits those discriminative parts of one target person in both spatial and temporal dimensions and results in a 2-D attention score matrix.
Abstract: In this work, we propose a novel Spatial-Temporal Attention (STA) approach to tackle the large-scale person re-identification task in videos. Different from most existing methods, which simply compute representations of video clips using frame-level aggregation (e.g., average pooling), the proposed STA adopts a more effective way of producing robust clip-level feature representations. Concretely, our STA fully exploits the discriminative parts of one target person in both spatial and temporal dimensions, which results in a 2-D attention score matrix, computed via inter-frame regularization, that measures the importance of spatial parts across different frames. Thus, a more robust clip-level feature representation can be generated by a weighted sum operation guided by the mined 2-D attention score matrix. In this way, challenging cases for video-based person re-identification such as pose variation and partial occlusion can be well tackled by the STA. We conduct extensive experiments on two large-scale benchmarks, i.e., MARS and DukeMTMC-VideoReID. In particular, the mAP reaches 87.7% on MARS, significantly outperforming the state of the art by a large margin of more than 11.6%.
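
The clip-level aggregation described above reduces to: score every (frame, spatial part) cell, normalize the scores across frames, and take a weighted sum. The sketch below mimics that pipeline with random features and a feature-norm score standing in for the learned attention network (an illustrative stand-in, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)
T, P, D = 4, 3, 8  # frames per clip, spatial parts per frame, feature dim

feats = rng.random((T, P, D))  # part-level features (stand-ins for CNN output)

# Stand-in for the 2-D attention score matrix: score each (frame, part) cell
# by its feature norm, then normalize across frames per part so the scores
# compare the same body part between frames.
scores = np.linalg.norm(feats, axis=2)              # (T, P)
scores = scores / scores.sum(axis=0, keepdims=True)

# Clip-level representation: attention-weighted sum over frames, per part.
clip_feat = (scores[:, :, None] * feats).sum(axis=0)  # (P, D)
print(clip_feat.shape)
```

An occluded part in one frame gets a low score and thus contributes little to the clip feature, which is how the method copes with partial occlusion.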

Proceedings ArticleDOI
01 Oct 2019
TL;DR: The λ-net, which reconstructs hyperspectral images from a single shot measurement, can finish the reconstruction task within sub-seconds instead of hours taken by the most recently proposed DeSCI algorithm, thus speeding up the reconstruction >1000 times.
Abstract: We propose the λ-net, which reconstructs hyperspectral images (e.g., with 24 spectral channels) from a single shot measurement. This task is usually termed snapshot compressive spectral imaging (SCI), which enjoys low cost, low bandwidth and a high-speed sensing rate by capturing the three-dimensional (3D) signal, i.e., (x, y, λ), using a 2D snapshot. Though proposed more than a decade ago, the poor quality and low speed of reconstruction algorithms preclude wide applications of SCI. To address this challenge, in this paper, we develop a dual-stage generative model to reconstruct the desired 3D signal in SCI, dubbed λ-net. Results on both simulation and real datasets demonstrate the significant advantages of λ-net, which leads to >4 dB improvement in PSNR for real-mask-in-the-loop simulation data compared to the current state-of-the-art. Furthermore, λ-net can finish the reconstruction task within sub-seconds instead of the hours taken by the most recently proposed DeSCI algorithm, thus speeding up the reconstruction >1000 times.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an end-to-end learning algorithm that enables training of communication systems with an unknown channel model or with non-differentiable components by iterating between training the receiver using the true gradient, and training the transmitter using an approximation of the gradient.
Abstract: The idea of end-to-end learning of communication systems through neural network (NN)-based autoencoders has the shortcoming that it requires a differentiable channel model. We present in this paper a novel learning algorithm which alleviates this problem. The algorithm enables training of communication systems with an unknown channel model or with non-differentiable components. It iterates between training of the receiver using the true gradient, and training of the transmitter using an approximation of the gradient. We show that this approach works as well as model-based training for a variety of channels and tasks. Moreover, we demonstrate the algorithm’s practical viability through hardware implementation on software defined radios (SDRs) where it achieves state-of-the-art performance over a coaxial cable and wireless channel.
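
The transmitter-side trick can be demonstrated on a toy scalar system: the channel is a black box we can only sample, so the transmitter's gradient is approximated with a score-function (perturbation) estimator, in the spirit of the alternating scheme described above. The channel model, loss, and constants below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Black-box channel: we can sample it, but not differentiate through it.
def channel(x):
    return 0.5 * x + rng.normal(scale=0.05, size=np.shape(x))

target, w, lr, sigma = 1.0, 0.1, 0.05, 0.1

# Transmitter update without a channel gradient: perturb the transmitted
# symbol, observe the receiver-side loss, and correlate loss with the
# perturbation (score-function estimator with a batch baseline).
for _ in range(400):
    eps = rng.normal(scale=sigma, size=64)            # batch of perturbations
    loss = (channel(w + eps) - 0.5 * target) ** 2     # toy receiver loss
    grad_est = np.mean((loss - loss.mean()) * eps) / sigma ** 2
    w -= lr * grad_est

print(w)  # drifts toward target = 1.0 despite no channel gradient
```

The estimator is unbiased for the gradient of the expected loss, so the transmitter converges to the same point that model-based backpropagation would reach, which is the paper's central claim in miniature.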

Proceedings ArticleDOI
01 Oct 2019
TL;DR: A deep tensor ADMM-Net for video SCI systems that provides high-quality decoding in seconds with comparable visual results with the state-of-the-art methods but in much shorter running time is proposed.
Abstract: Snapshot compressive imaging (SCI) systems have been developed to capture high-dimensional (≥3) signals using low-dimensional off-the-shelf sensors, i.e., mapping multiple video frames into a single measurement frame. One key module of an SCI system is an accurate decoder that recovers the original video frames. However, existing model-based decoding algorithms require exhaustive parameter tuning with prior knowledge and cannot support practical applications due to the extremely long running time. In this paper, we propose a deep tensor ADMM-Net for video SCI systems that provides high-quality decoding in seconds. Firstly, we start with a standard tensor ADMM algorithm, unfold its inference iterations into a layer-wise structure, and design a deep neural network based on tensor operations. Secondly, instead of relying on a pre-specified sparse representation domain, the network learns the domain of low-rank tensors through stochastic gradient descent. It is worth noting that the proposed deep tensor ADMM-Net has potentially mathematical interpretations. On public video data, the simulation results show the proposed method achieves an average 0.8 to 2.5 dB improvement in PSNR and 0.07 to 0.1 in SSIM, and 1500× to 3600× speedups over the state-of-the-art methods. On real data captured by SCI cameras, the experimental results show comparable visual results with the state-of-the-art methods but in much shorter running time.
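
The "unfold iterations into layers" idea can be shown on a simpler cousin of tensor ADMM: a fixed number of ISTA-style sparse-recovery iterations, one per "layer", each with its own threshold parameter (hand-picked here; in a learned network they would be trained). This is an illustrative analogue, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 20, 40, 3  # measurements, signal length, sparsity

A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

def soft(v, t):
    # Soft-thresholding: the proximal step of the l1 penalty
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# "Unfolded" recovery: a fixed number of proximal-gradient iterations, one
# layer per iteration, each with its own threshold parameter.
thresholds = np.linspace(0.1, 0.01, 15)
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for t in thresholds:
    x = soft(x + step * A.T @ (y - A @ x), t)

print(np.linalg.norm(y - A @ x))  # residual shrinks over the layers
```

Replacing the hand-picked thresholds (and, in the paper, the representation domain itself) with trained parameters is precisely what turns an iterative solver into a fast feed-forward decoder.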

Posted Content
TL;DR: This work studies the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with multidimensional (storage-computation-communication) constraints and proposes an algorithm that achieves close-to-optimal performance using randomized rounding.
Abstract: The proliferation of innovative mobile services such as augmented reality, networked gaming, and autonomous driving has spurred a growing need for low-latency access to computing resources that cannot be met solely by existing centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an effective solution to meet the demand for low-latency services by enabling the execution of computing tasks at the network-periphery, in proximity to end-users. While a number of recent studies have addressed the problem of determining the execution of service tasks and the routing of user requests to corresponding edge servers, the focus has primarily been on the efficient utilization of computing resources, neglecting the fact that non-trivial amounts of data need to be stored to enable service execution, and that many emerging services exhibit asymmetric bandwidth requirements. To fill this gap, we study the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with multidimensional (storage-computation-communication) constraints. We show that this problem generalizes several problems in the literature and propose an algorithm that achieves close-to-optimal performance using randomized rounding. Evaluation results demonstrate that our approach can effectively utilize the available resources to maximize the number of requests served by low-latency edge cloud servers.
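As a hedged illustration of the randomized-rounding step (the fractional matrix and the cloud fallback below are made-up assumptions, not the paper's formulation): given a fractional LP solution assigning requests to edge servers, each request is rounded independently according to its fractional values, so the expected number of edge-served requests matches the LP objective.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fractional LP solution: x_frac[i, j] is the fraction of
# request i assigned to edge server j (row sums <= 1; the remainder is
# served by the remote cloud). Values are invented for illustration.
x_frac = np.array([
    [0.7, 0.3],
    [0.2, 0.5],
    [0.0, 0.9],
])

def round_requests(x_frac, rng):
    # Independently round each request: pick server j with probability
    # x_frac[i, j], or fall back to the cloud with the leftover mass.
    n_req, n_srv = x_frac.shape
    routed = np.empty(n_req, dtype=int)
    for i in range(n_req):
        p = np.append(x_frac[i], 1.0 - x_frac[i].sum())
        routed[i] = rng.choice(n_srv + 1, p=p)  # index n_srv means "cloud"
    return routed

# In expectation the rounded solution serves x_frac.sum() = 2.6 requests
# at the edge, matching the fractional (LP) objective.
n_srv = x_frac.shape[1]
edge_served = [np.sum(round_requests(x_frac, rng) < n_srv) for _ in range(2000)]
print(round(float(np.mean(edge_served)), 1))
```

Independent rounding preserves the objective in expectation; concentration arguments then bound how badly the multidimensional capacities can be violated.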

Journal ArticleDOI
TL;DR: This paper describes the design and implementation of a scalable multi-element phased-array system, with built-in self-alignment and self-test, based on an RFIC transceiver chipset manufactured in the TowerJazz 0.18-μm SiGe BiCMOS technology.
Abstract: This paper describes the design and implementation of a scalable W-band phased-array system, with built-in self-alignment and self-test, based on an RFIC transceiver chipset manufactured in the TowerJazz 0.18-μm SiGe BiCMOS technology with f_T/f_MAX of 240/270 GHz. The RFIC integrates 24 phase-shifter elements (16TX/8RX or 8TX/16RX) as well as direct up- and down-converters, phase-locked loop with prime-ratio frequency multiplier, analog baseband, beam lookup memory, and diagnostic circuits for performance monitoring. Two organic printed circuit board (PCB) interposers with integrated antenna sub-arrays are designed and co-assembled with the RFIC chipsets to produce a scalable phased-array tile. Tiles are phase-aligned to one another through a daisy-chained local oscillator (LO) synchronization signal. Statistical analysis of the effects of LO misalignment between tiles on beam patterns is presented. Sixteen tiles are combined onto a carrier PCB to create a 384-element (256TX/128RX) phased-array system. A maximum saturated effective isotropic radiated power (EIRP) of 60 dBm (1 kW) is measured at boresight for the 256 transmit elements. Wireless links operating at 90.7 GHz using a 16-QAM constellation at a reduced EIRP of 52 dBm produced data rates beyond 10 Gb/s for an equivalent link distance in excess of 250 m.
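The EIRP figures quoted in this abstract follow from the standard dBm-to-watts conversion, sketched below as a sanity check:

```python
def dbm_to_watts(p_dbm):
    # P[W] = 10**(P[dBm] / 10) / 1000  (dBm is power referenced to 1 mW)
    return 10 ** (p_dbm / 10) / 1000

# 60 dBm saturated EIRP corresponds to 1 kW, and the reduced 52 dBm
# operating point of the 10 Gb/s links corresponds to roughly 158 W.
print(dbm_to_watts(60))         # 1000.0
print(round(dbm_to_watts(52)))  # 158
```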

Journal ArticleDOI
TL;DR: In this article, the authors present the main objectives and timelines of this new 802.11be amendment, thoroughly describe its main candidate features and enhancements, and cover the important issue of coexistence with other wireless technologies.
Abstract: Wi-Fi technology is continuously innovating to cater to the growing customer demands, driven by the digitalization of everything, in the home as well as in enterprise and hotspot spaces. In this article, we introduce to the wireless community the next generation Wi-Fi, based on IEEE 802.11be Extremely High Throughput (EHT), present the main objectives and timelines of this new 802.11be amendment, thoroughly describe its main candidate features and enhancements, and cover the important issue of coexistence with other wireless technologies. We also provide simulation results to assess the potential throughput gains brought by 802.11be with respect to 802.11ax.

Journal ArticleDOI
TL;DR: Fundamental concepts of urban computing leveraging Location-Based Social Network (LBSN) data are discussed, and a survey of recent urban computing studies that make use of LBSN data is presented.
Abstract: Urban computing is an emerging area of investigation in which researchers study cities using digital data. Location-Based Social Networks (LBSNs) generate one specific type of digital data that offers unprecedented geographic and temporal resolutions. We discuss fundamental concepts of urban computing leveraging LBSN data and present a survey of recent urban computing studies that make use of LBSN data. We also point out the opportunities and challenges that those studies open.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate an equation which can fit capacity versus rate data, outputting three parameters which fully describe rate performance, including the characteristic time associated with charge/discharge, which can be linked by a second equation to physical electrode/electrolyte parameters via various rate-limiting processes.
Abstract: One weakness of batteries is the rapid falloff in charge-storage capacity with increasing charge/discharge rate. Rate performance is related to the timescales associated with charge/ionic motion in both electrode and electrolyte. However, no general fittable model exists to link capacity-rate data to electrode/electrolyte properties. Here we demonstrate an equation which can fit capacity versus rate data, outputting three parameters which fully describe rate performance. Most important is the characteristic time associated with charge/discharge, which can be linked by a second equation to physical electrode/electrolyte parameters via various rate-limiting processes. We fit these equations to ~200 data sets, deriving parameters such as diffusion coefficients or electrolyte conductivities. It is possible to show which rate-limiting processes are dominant in a given situation, facilitating rational design and cell optimisation. In addition, this model predicts the upper speed limit for lithium/sodium-ion batteries, yielding a value that is consistent with the fastest electrodes in the literature. The authors employ a semi-empirical method to fit published battery capacity-rate data to extract the characteristic time associated with charge/discharge. These characteristic times are consistent with a physical model that can be used to link rate performance to the physical properties of electrodes.
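The fitting idea can be sketched with an illustrative capacity-rate form that plateaus at the full capacity at low rate and falls off once the rate exceeds the inverse of the characteristic time. The functional form below is an assumption for illustration (see the paper for the derived equation); a simple grid-search fit then recovers the characteristic time from synthetic data.

```python
import numpy as np

def capacity(rate, q_max, tau, n=1.0):
    # Illustrative capacity-rate law: capacity ~ q_max for rate << 1/tau,
    # falling toward zero for rate >> 1/tau (tau = characteristic time).
    x = (rate * tau) ** n
    return q_max * (1.0 - x * (1.0 - np.exp(-1.0 / x)))

# Synthetic "measured" capacity-rate data with a known characteristic time.
rates = np.logspace(-1, 2, 25)                  # charge/discharge rates, 1/h
data = capacity(rates, q_max=150.0, tau=0.2)

# Least-squares fit of tau over a log grid (a stand-in for curve fitting).
taus = np.logspace(-3, 1, 400)
errs = [np.sum((capacity(rates, 150.0, t) - data) ** 2) for t in taus]
tau_fit = float(taus[int(np.argmin(errs))])
print(round(tau_fit, 2))   # recovers tau ~ 0.2 h
```

The same one-dimensional readout is what makes the paper's approach attractive: once tau is extracted, it can be compared across ~200 data sets regardless of chemistry.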

Journal ArticleDOI
Mathieu Chagnon1
TL;DR: This paper presents how chromatic dispersion both prevents and facilitates larger bitrate-distance products, and concludes on the potential of DSP-enabled direct detection.
Abstract: Systems modulating, transporting, and detecting lightwaves have evolved tremendously in the past four decades. The first systems, which relied on intensity modulation with direct detection, have little in common with those manufactured today. Not only have optical fibers and electro-optic components drastically improved, but systems now also employ digital signal processing for its agility and versatility, initially deployed for long-reach communication systems but slowly making its way into systems covering shorter distances. Due to several network-transforming trends, we are now observing needs for massive deployment of fiber-optic transceivers covering distances of only 10 to 100 km but delivering much faster throughputs than those offered by legacy systems targeting these distances, while maintaining low cost and power consumption, small footprint, and very low latency. In this paper, we review the evolution of fiber-optic communication systems, from intensity modulation with direct detection to coherent transceivers and digital signal processing-assisted direct detection. We address the main impairments preventing large bitrate-reach products for systems relying on intensity modulation with direct detection. We present a few reasons leading to the recent surge of the short-reach transceiver market segment, especially transceivers covering distances between 10 and 100 km. We summarize a few proposals for this market that modulate and recover an increasing number of degrees of freedom of the lightwave while maintaining self-beating direct detection. We conclude with remarks on the use of coherent technologies to address this market segment.

Proceedings ArticleDOI
21 Oct 2019
TL;DR: An overview of grant-free random access in 5G New Radio is provided, and two reliability-enhancing solutions are presented that result in significant performance gains, in terms of reliability as well as resource efficiency.
Abstract: Ultra-reliable low-latency communication requires innovative resource management solutions that can guarantee high reliability at low latency. Grant-free random access, where channel resources are accessed without undergoing assignment through a handshake process, is proposed in 5G New Radio as an important latency-reducing solution. However, this comes at an increased likelihood of collisions resulting from uncoordinated channel access. Novel reliability enhancement techniques are therefore needed. This article provides an overview of grant-free random access in 5G New Radio focusing on the ultra-reliable low-latency communication service class, and presents two reliability-enhancing solutions. The first proposes retransmissions over shared resources, whereas the second incorporates grant-free transmission with non-orthogonal multiple access, where overlapping transmissions are resolved through the use of advanced receivers. Both proposed solutions result in significant performance gains, in terms of reliability as well as resource efficiency. For example, the proposed non-orthogonal multiple access scheme can support a normalized load of more than 1.5 users/slot at packet loss rates of ~10^-5, a significant improvement over conventional grant-free schemes like slotted-ALOHA.
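For context on the ~1.5 users/slot figure, the classical slotted-ALOHA throughput bound can be computed directly: with Poisson offered load G attempts per slot, throughput is S = G·e^(-G), which peaks at 1/e ≈ 0.37 users/slot. A minimal sketch:

```python
import math

def aloha_throughput(g):
    # Slotted ALOHA with Poisson offered load g (attempts/slot): a slot
    # carries a packet only when exactly one user transmits, so S = g*e^(-g).
    return g * math.exp(-g)

# Conventional slotted-ALOHA-style grant-free access tops out at
# 1/e ~ 0.37 users/slot, far below the ~1.5 users/slot reported above;
# NOMA-style collision resolution accounts for the difference.
peak = max(aloha_throughput(g / 100) for g in range(1, 500))
print(round(peak, 3))   # 0.368
```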

Journal ArticleDOI
13 Mar 2019
TL;DR: This paper provides a comprehensive overview of the host-based network function virtualization (NFV) ecosystem, covering a broad range of techniques, from low-level hardware acceleration and bump-in-the-wire offloading approaches to high-level software acceleration solutions, including the virtualization technique itself.
Abstract: The ongoing network softwarization trend holds the promise to revolutionize network infrastructures by making them more flexible, reconfigurable, portable, and more adaptive than ever. Still, the migration from hard-coded/hard-wired network functions toward their software-programmable counterparts comes along with the need for tailored optimizations and acceleration techniques so as to avoid or at least mitigate the throughput/latency performance degradation with respect to fixed-function network elements. The contribution of this paper is twofold. First, we provide a comprehensive overview of the host-based network function virtualization (NFV) ecosystem, covering a broad range of techniques, from low-level hardware acceleration and bump-in-the-wire offloading approaches to high-level software acceleration solutions, including the virtualization technique itself. Second, we derive guidelines regarding the design, development, and operation of NFV-based deployments that meet the flexibility and scalability requirements of modern communication networks.

Journal ArticleDOI
TL;DR: The basics of the LCOS technology, from the wafer to the driving solutions, the progress over the last decade and the future outlook are reviewed.
Abstract: Existing for almost four decades, liquid crystal on silicon (LCOS) technology is rapidly growing into photonic applications. We review the basics of the technology, from the wafer to the driving solutions, the progress over the last decade, and the future outlook. Furthermore, we review the most exciting industrial and scientific applications of LCOS technology.

Journal ArticleDOI
TL;DR: This paper studies current and future wireless networks from the viewpoint of energy efficiency (EE) and sustainability, to meet the planned network and service evolution toward, along, and beyond 5G, as inspired by the findings of the EU Celtic-Plus SooGREEN Project.
Abstract: The heated 5G network deployment race has already begun with the rapid progress in standardization efforts, backed by the current market availability of 5G-enabled network equipment, ongoing 5G spectrum auctions, and early launching of non-standalone 5G network services in a few countries, among others. In this paper, we study current and future wireless networks from the viewpoint of energy efficiency (EE) and sustainability to meet the planned network and service evolution toward, along, and beyond 5G, as also inspired by the findings of the EU Celtic-Plus SooGREEN Project. We highlight the opportunities seized by the project efforts to enable and enrich this green nature of the network as compared to existing technologies. Specifically, we present innovative means proposed in SooGREEN to monitor and evaluate EE in 5G networks and beyond. Further solutions are presented to reduce energy consumption and carbon footprint in the different network segments. The latter spans proposed virtualized/cloud architectures, efficient polar coding for fronthauling, mobile network powering via renewable energy and smart grid integration, passive cooling, and smart sleeping modes in indoor systems, among others. Finally, we shed light on the open opportunities yet to be investigated and leveraged in future developments.

Journal ArticleDOI
TL;DR: In this article, an autoencoding sequence-based transceiver for communication over dispersive channels with intensity modulation and direct detection (IM/DD), designed as a bidirectional deep recurrent neural network (BRNN), was proposed.
Abstract: We propose an autoencoding sequence-based transceiver for communication over dispersive channels with intensity modulation and direct detection (IM/DD), designed as a bidirectional deep recurrent neural network (BRNN). The receiver uses a sliding window technique to allow for efficient data stream estimation. We find that this sliding window BRNN (SBRNN), based on end-to-end deep learning of the communication system, achieves a significant bit-error-rate reduction at all examined distances in comparison to previous block-based autoencoders implemented as feed-forward neural networks (FFNNs), leading to an increase of the transmission distance. We also compare the end-to-end SBRNN with a state-of-the-art IM/DD solution based on two-level pulse amplitude modulation with an FFNN receiver, simultaneously processing multiple received symbols and approximating nonlinear Volterra equalization. Our results show that the SBRNN outperforms such systems at both 42 and 84 Gb/s, while training fewer parameters. Our novel SBRNN design aims at tailoring end-to-end deep learning-based systems for communication over nonlinear channels with memory, such as the optical IM/DD fiber channel.
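The sliding-window estimation technique can be sketched in a hedged toy form, with the trained BRNN replaced by a trivial per-symbol placeholder (an illustrative stand-in, not the authors' network): windows overlap, and each symbol's decision averages the outputs of every window that covers it, so it draws on both past and future context.

```python
import numpy as np

def window_estimator(window):
    # Placeholder for the trained BRNN: a trivial per-symbol hard decision.
    return np.sign(window)

def sliding_window_decode(rx, win_len=5):
    """Sliding-window estimation: run the estimator over every length-win_len
    window of the received stream and average the overlapping outputs."""
    n = len(rx)
    acc, cnt = np.zeros(n), np.zeros(n)
    for start in range(n - win_len + 1):
        out = window_estimator(rx[start:start + win_len])
        acc[start:start + win_len] += out
        cnt[start:start + win_len] += 1
    return np.sign(acc / cnt)          # final decision per symbol

rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=40)      # BPSK-like symbol stream
rx = bits + 0.15 * rng.standard_normal(40)   # noisy received samples
decided = sliding_window_decode(rx)
print(int(np.sum(decided != bits)))          # symbol errors at this SNR
```

Because windows slide one symbol at a time, the receiver emits a steady stream of decisions instead of waiting for full blocks, which is what enables efficient continuous data-stream estimation.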