
Showing papers by "Bell Labs published in 2017"


Journal ArticleDOI
TL;DR: In this article, an end-to-end reconstruction task was proposed to jointly optimize transmitter and receiver components in a single process, which can be extended to networks of multiple transmitters and receivers.
Abstract: We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. This paper is concluded with a discussion of open challenges and areas for future investigation.
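The autoencoder framing above can be illustrated with a toy end-to-end link: a transmitter maps each of M messages to an n-dimensional, power-normalized codeword, an AWGN layer perturbs it, and a receiver recovers the message index. The sketch below is only a hedged stand-in for the paper's approach: a fixed random codebook and a nearest-neighbor receiver replace the trained encoder and decoder networks, and the SNR convention is our own simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

M, n = 4, 8  # M messages, n channel uses

# Transmitter: in the paper this is a learned network; here a fixed random
# codebook normalized to unit energy per codeword stands in for it.
codebook = rng.standard_normal((M, n))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def transmit(msg):
    """Encoder: message index -> channel symbols."""
    return codebook[msg]

def channel(x, snr_db):
    """AWGN channel layer (simplified per-codeword SNR convention)."""
    sigma = np.sqrt(0.5 / (10 ** (snr_db / 10)))
    return x + sigma * rng.standard_normal(x.shape)

def receive(y):
    """Decoder: nearest codeword (stand-in for the trained NN receiver)."""
    return np.argmin(np.linalg.norm(codebook - y, axis=1))

msgs = rng.integers(0, M, 1000)
errors = sum(receive(channel(transmit(m), snr_db=15)) != m for m in msgs)
bler = errors / len(msgs)  # block error rate of the end-to-end system
```

In the paper the codebook and receiver are learned jointly by backpropagating the reconstruction loss through the channel layer; this toy only shows the system decomposition that training would optimize.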

1,879 citations


Journal ArticleDOI
TL;DR: This paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties, and elaborates further on open research challenges.
Abstract: Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks This paper introduces a survey on MEC and focuses on the fundamental key enabling technologies It elaborates MEC orchestration considering both individual services and a network of MEC platforms supporting mobility, bringing light into the different orchestration deployment options In addition, this paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties Finally, this paper overviews the current standardization activities and elaborates further on open research challenges

1,351 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose max-min power control algorithms that ensure uniformly good service throughout the area of coverage in a cell-free massive MIMO system, in which distributed access points jointly serve all users, and compare it against a small-cell scheme in which each user is served by a dedicated access point.
Abstract: A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs), which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max–min power control algorithms. Max–min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard. Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly fivefold improvement in 95%-likely per-user throughput over the small-cell scheme, and tenfold improvement when shadow fading is correlated.
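The conjugate beamforming described above admits a compact numerical sketch: each single-antenna AP weights every user's symbol by the conjugate of its channel coefficient, and with many APs the coherent array gain dominates inter-user interference. The snippet below is an illustrative toy only, assuming perfect CSI, no noise, and equal power allocation (the paper's max-min power control would tune the per-user coefficients):

```python
import numpy as np

rng = np.random.default_rng(3)

M_ap, K = 128, 4  # single-antenna APs, users

# i.i.d. Rayleigh channel g_mk between AP m and user k
G = (rng.standard_normal((M_ap, K)) + 1j * rng.standard_normal((M_ap, K))) / np.sqrt(2)

# One QPSK symbol per user
s = rng.choice(np.array([1, 1j, -1, -1j]), K)

# Conjugate beamforming: AP m transmits sum_k sqrt(eta) * conj(g_mk) * s_k
eta = 1.0 / M_ap                     # equal power split (no max-min control)
x = (np.sqrt(eta) * np.conj(G)) @ s  # length-M_ap transmit vector
y = G.T @ x                          # received signal at the K users

# Array gain: y_k ~ sqrt(eta) * sum_m |g_mk|^2 * s_k, a positive real scaling
# of s_k, so a simple phase decision recovers each user's symbol.
s_hat = np.exp(1j * np.round(np.angle(y) / (np.pi / 2)) * (np.pi / 2))
```

As M_ap grows the desired term scales like M_ap while interference scales like its square root, which is the "favorable propagation" effect the closed-form throughput expressions in the paper build on.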

1,234 citations


Journal ArticleDOI
TL;DR: A comprehensive survey of mmWave communications for future mobile networks (5G and beyond) is presented, including an overview of solutions for multiple access and backhauling, followed by an analysis of coverage and connectivity.
Abstract: Millimeter wave (mmWave) communications have recently attracted large research interest, since the huge available bandwidth can potentially lead to rates of multiple gigabit per second per user. Though mmWave can be readily used in stationary scenarios, such as indoor hotspots or backhaul, it is challenging to use mmWave in mobile networks, where the transmitting/receiving nodes may be moving, channels may have a complicated structure, and the coordination among multiple nodes is difficult. To fully exploit the high potential rates of mmWave in mobile networks, many technical problems must be addressed. This paper presents a comprehensive survey of mmWave communications for future mobile networks (5G and beyond). We first summarize the recent channel measurement campaigns and modeling results. Then, we discuss in detail recent progress in multiple-input multiple-output transceiver design for mmWave communications. After that, we provide an overview of solutions for multiple access and backhauling, followed by an analysis of coverage and connectivity. Finally, progress in the standardization and deployment of mmWave for mobile networks is discussed.

887 citations


Book
28 Oct 2017
TL;DR: In this article, the spectral density S_y(f) of the function y(t), where the spectrum is considered to be one-sided on a per-hertz basis, is defined as a measure of frequency stability.
Abstract: Consider a signal generator whose instantaneous output voltage V(t) may be written as V(t) = [V_0 + ε(t)] sin[2πν_0 t + φ(t)], where V_0 and ν_0 are the nominal amplitude and frequency, respectively, of the output. Provided that ε(t) and φ̇(t) = dφ/dt are sufficiently small for all time t, one may define the fractional instantaneous frequency deviation from nominal by the relation y(t) = φ̇(t)/(2πν_0). A proposed definition for the measure of frequency stability is the spectral density S_y(f) of the function y(t), where the spectrum is considered to be one-sided on a per-hertz basis. An alternative definition for the measure of stability is the infinite time average of the sample variance of two adjacent averages of y(t); that is, if ȳ_k = (1/τ) ∫ from t_k to t_k + τ of y(t) dt, where τ is the averaging period, t_{k+1} = t_k + T, k = 0, 1, 2, ..., t_0 is arbitrary, and T is the time interval between the beginnings of two successive measurements of average frequency, then the second measure of stability is σ_y²(τ) = ⟨(ȳ_{k+1} − ȳ_k)²/2⟩, where ⟨ ⟩ denotes infinite time average and where T = τ. In practice, data records are of finite length and the infinite time averages implied in the definitions are normally not available; thus, estimates for the two measures must be used. Estimates of S_y(f) would be obtained from suitable averages either in the time domain or the frequency domain.
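The two-sample (Allan) variance estimate described in the abstract translates directly into code. The sketch below is a finite-data estimator under the stated assumptions of evenly sampled fractional-frequency data and T = τ (no dead time between measurements): it averages y(t) over adjacent blocks and halves the mean squared difference of successive block averages.

```python
import numpy as np

def allan_variance(y, m):
    """Finite-data estimate of sigma_y^2(tau) for tau = m sample intervals.

    y : evenly sampled fractional frequency deviations y(t_k)
    m : number of samples averaged per block (the averaging period tau)
    Implements sigma_y^2(tau) = <(ybar_{k+1} - ybar_k)^2> / 2 with the
    infinite time average replaced by a mean over the available blocks.
    """
    n_blocks = len(y) // m
    ybar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

# Sanity check with white frequency noise of unit variance: averaging m
# samples scales the Allan variance as 1/m, so sigma_y^2(10) should be
# close to 0.1 here.
rng = np.random.default_rng(0)
avar = allan_variance(rng.standard_normal(100_000), m=10)
```

This is the simple non-overlapping estimator implied by the definition; practical frequency-stability tooling typically uses overlapping variants for better confidence intervals.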

725 citations


Journal ArticleDOI
TL;DR: Cell-free Massive MIMO is shown to provide five- to ten-fold improvement in 95%-likely per-user throughput over small-cell operation and a near-optimal power control algorithm is developed that is considerably simpler than exact max–min power control.
Abstract: Cell-free Massive multiple-input multiple-output (MIMO) comprises a large number of distributed low-cost low-power single antenna access points (APs) connected to a network controller. The number of AP antennas is significantly larger than the number of users. The system is not partitioned into cells and each user is served by all APs simultaneously. The simplest linear precoding schemes are conjugate beamforming and zero-forcing. Max–min power control provides equal throughput to all users and is considered in this paper. Surprisingly, under max–min power control, most APs are found to transmit at less than full power. The zero-forcing precoder significantly outperforms conjugate beamforming. For zero-forcing, a near-optimal power control algorithm is developed that is considerably simpler than exact max–min power control. An alternative to cell-free systems is small-cell operation in which each user is served by only one AP for which power optimization algorithms are also developed. Cell-free Massive MIMO is shown to provide five- to ten-fold improvement in 95%-likely per-user throughput over small-cell operation.

561 citations


Posted Content
TL;DR: A fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process is developed.
Abstract: We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. The paper is concluded with a discussion of open challenges and areas for future investigation.

509 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations.
Abstract: We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when designing 5G networks based on network slicing. We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. In addition to the technical study, this article provides an investigation of the revenue potential of network slicing, where the applications that originate from this concept and the profit capabilities from the network operator's perspective are put forward.

457 citations


Proceedings Article
04 Dec 2017
TL;DR: This work considers a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices, and proposes a computation strategy that leverages ideas from coding theory to design intermediate computations at the worker nodes to efficiently deal with straggling workers.
Abstract: We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices. We propose a computation strategy that leverages ideas from coding theory to design intermediate computations at the worker nodes, in order to optimally deal with straggling workers. The proposed strategy, named polynomial codes, achieves the optimum recovery threshold, defined as the minimum number of workers that the master needs to wait for in order to compute the output. This is the first code that achieves the optimal utilization of redundancy for tolerating stragglers or failures in distributed matrix multiplication. Furthermore, by leveraging the algebraic structure of polynomial codes, we can map the reconstruction problem of the final output to a polynomial interpolation problem, which can be solved efficiently. Polynomial codes provide order-wise improvement over the state of the art in terms of recovery threshold, and are also optimal in terms of several other metrics including computation latency and communication load. Moreover, we extend this code to distributed convolution and show its order-wise optimality.
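The polynomial-code construction can be sketched numerically for m = n = 2 block partitions of A and B: worker i receives the coded shares Ã(x_i) = Σ_j A_j x_i^j and B̃(x_i) = Σ_k B_k x_i^(k·m), and the master interpolates the degree-(mn−1) product polynomial from any mn returned results. The toy below uses floating-point interpolation via a Vandermonde solve purely for illustration (the actual scheme works over a finite field) and tolerates one straggler out of five workers:

```python
import numpy as np

rng = np.random.default_rng(1)

m, n = 2, 2                          # row blocks of A, column blocks of B
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))
A_blocks = np.split(A, m)            # A_0, A_1 (each 2x3)
B_blocks = np.split(B, n, axis=1)    # B_0, B_1 (each 3x2)

xs = np.arange(1.0, 6.0)             # evaluation points for 5 workers

def coded_shares(x):
    """A~(x) = sum_j A_j x^j,  B~(x) = sum_k B_k x^(k*m)."""
    At = sum(Aj * x ** j for j, Aj in enumerate(A_blocks))
    Bt = sum(Bk * x ** (k * m) for k, Bk in enumerate(B_blocks))
    return At, Bt

# Each worker multiplies its shares; worker 2 straggles and never returns.
results = {}
for i, x in enumerate(xs):
    if i == 2:
        continue
    At, Bt = coded_shares(x)
    results[i] = At @ Bt             # product polynomial evaluated at x

# Master waits for any m*n = 4 results and interpolates the coefficients;
# the coefficient of x^(j + k*m) is exactly A_j @ B_k.
pts = list(results.items())[: m * n]
V = np.array([[xs[i] ** d for d in range(m * n)] for i, _ in pts])
coeffs = np.linalg.solve(V, np.stack([r.ravel() for _, r in pts]))
C = np.block([[coeffs[j + k * m].reshape(2, 2) for k in range(n)]
              for j in range(m)])    # reassembled A @ B
```

The exponent choice j + k·m makes every block product land on a distinct coefficient, which is what gives the optimal recovery threshold mn.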

413 citations


Journal ArticleDOI
TL;DR: The proposed scheme enforces autonomic creation of MEC services to allow anywhere, anytime data access with optimal QoE, reducing core network traffic and ensuring ultra-short latency through a smart MEC architecture capable of achieving the 1 ms latency dream for the upcoming 5G mobile systems.
Abstract: This article proposes an approach to enhance users' experience of video streaming in the context of smart cities. The proposed approach relies on the concept of MEC as a key factor in enhancing QoS. It sustains QoS by ensuring that applications/services follow the mobility of users, realizing the "Follow Me Edge" concept. The proposed scheme enforces an autonomic creation of MEC services to allow anywhere anytime data access with optimum QoE and reduced latency. Considering its application in smart city scenarios, the proposed scheme represents an important solution for reducing core network traffic and ensuring ultra-short latency through a smart MEC architecture capable of achieving the 1 ms latency dream for the upcoming 5G mobile systems.

351 citations


Journal ArticleDOI
TL;DR: The identification of fundamental scaling disparities between the technologies used to generate and process data and those used to transport data could lead to the data transport network falling behind its required capabilities by a factor of approximately 4 every five years, leading to an optical network capacity crunch.
Abstract: Based on a variety of long-term network traffic data from different geographies and applications, in addition to long-term scaling trends of key information and communication technologies, we identify fundamental scaling disparities between the technologies used to generate and process data and those used to transport data. These disparities could lead to the data transport network falling behind its required capabilities by a factor of approximately 4 every five years. By 2024, we predict the need for 10-Tb/s optical interfaces working in 1-Pb/s optical transport systems. To satisfy these needs, multiplexing in both wavelength and space in the form of a wavelength-division multiplexing × space-division multiplexing matrix will be required. We estimate the characteristics of such systems and outline their target specifications, which reveals the need for very significant research progress in multiple areas, from system and network architectures to digital signal processing to integrated arrayed device designs, in order to avoid an optical network capacity crunch.

Journal ArticleDOI
TL;DR: The results show that the path loss exponent decreases as the UAV moves up, approximating free-space propagation for horizontal ranges up to tens of kilometers at UAV heights around 100 m.
Abstract: The main goal of this letter is to obtain models for path loss exponents and shadowing for the radio channel between airborne unmanned aerial vehicles (UAVs) and cellular networks. In this pursuit, field measurements were conducted in live LTE networks at the 800 MHz frequency band, using a commercial UAV. Our results show that the path loss exponent decreases as the UAV moves up, approximating free-space propagation for horizontal ranges up to tens of kilometers at UAV heights around 100 m. Our findings support the need for height-dependent parameters for describing the propagation channel for UAVs at different heights.
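Height-dependent path loss exponents like those reported above come from fitting a log-distance model to measurements. The minimal least-squares fit below uses the standard model PL(d) = PL0 + 10·α·log10(d); the exact model, frequency-dependent intercept, and shadowing treatment of the letter are assumptions here, and the data is synthetic.

```python
import numpy as np

def fit_path_loss(d_m, pl_db):
    """Least-squares fit of the log-distance model PL(d) = PL0 + 10*alpha*log10(d).

    Returns the intercept PL0 (dB) and the path loss exponent alpha.
    """
    X = np.column_stack([np.ones_like(d_m), 10.0 * np.log10(d_m)])
    (pl0, alpha), *_ = np.linalg.lstsq(X, pl_db, rcond=None)
    return pl0, alpha

# Synthetic check: free-space-like data (alpha = 2) plus mild shadowing.
rng = np.random.default_rng(4)
d = rng.uniform(100.0, 20_000.0, 500)                       # horizontal range, metres
pl = 40.0 + 20.0 * np.log10(d) + rng.normal(0, 2, d.size)   # 2 dB shadowing
pl0, alpha = fit_path_loss(d, pl)
```

Fitting this separately per UAV height is what reveals α dropping toward the free-space value of 2 as height increases.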

Journal ArticleDOI
20 Mar 2017
TL;DR: The nonlinear Fourier transform is a transmission and signal processing technique that makes positive use of the Kerr nonlinearity in optical fibre channels.
Abstract: Fiber-optic communication systems are nowadays facing serious challenges due to the fast growing demand on capacity from various new applications and services. It is now well recognized that nonlinear effects limit the spectral efficiency and transmission reach of modern fiber-optic communications. Nonlinearity compensation is therefore widely believed to be of paramount importance for increasing the capacity of future optical networks. Recently, there has been steadily growing interest in the application of a powerful mathematical tool, the nonlinear Fourier transform (NFT), in the development of fundamentally novel nonlinearity mitigation tools for fiber-optic channels. It has been recognized that, within this paradigm, the nonlinear crosstalk due to the Kerr effect is effectively absent, and fiber nonlinearity due to the Kerr effect can enter as a constructive element rather than a degrading factor. The novelty and the mathematical complexity of the NFT, the versatility of the proposed system designs, and the lack of a unified vision of an optimal NFT-type communication system, however, constitute significant difficulties for communication researchers. In this paper, we therefore survey the existing approaches in a common framework and review the progress in this area with a focus on practical implementation aspects. First, an overview of existing key algorithms for the efficacious computation of the direct and inverse NFT is given, and the issues of accuracy and numerical complexity are elucidated. We then describe different approaches for the utilization of the NFT in practical transmission schemes. After that we discuss the differences, advantages, and challenges of various recently emerged system designs employing the NFT, as well as the spectral efficiency estimates available to date. With many practical implementation aspects still being open, our mini-review is aimed at helping researchers assess the perspectives, understand the bottlenecks, and envision the development paths in the upcoming NFT-based transmission technologies.

Posted Content
TL;DR: The metric normalized validation error (NVE) is introduced in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
Abstract: We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.

Proceedings ArticleDOI
22 Mar 2017
TL;DR: In this paper, the authors revisited the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes, and showed that neural networks can learn a form of decoding algorithm, rather than only a simple classifier.
Abstract: We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
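The NVE metric introduced above compares the neural decoder against MAP performance across a set of validation SNR points; our reading of its definition is the average ratio of the NN decoder's BER to the MAP BER. A minimal sketch (function and variable names are ours):

```python
def nve(ber_nnd, ber_map):
    """Normalized validation error: mean over the S validation SNR points of
    BER_NND(rho_s) / BER_MAP(rho_s).

    NVE = 1 means the learned decoder matches MAP performance everywhere;
    larger values quantify the remaining gap.
    """
    assert len(ber_nnd) == len(ber_map) > 0
    return sum(b_nn / b_map for b_nn, b_map in zip(ber_nnd, ber_map)) / len(ber_nnd)
```

For example, a decoder at MAP performance on one validation point and twice the MAP BER on another yields NVE = 1.5, letting one number summarize performance across the whole SNR sweep.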

Journal ArticleDOI
Yvan Pointurier1
TL;DR: Techniques that the network designer can use in order to increase the capacity of optical networks, extend their life, and decrease deployment cost (CAPEX) or total cost of ownership over their life duration are reviewed.
Abstract: We review margins used in optical networks and review a formerly proposed margin taxonomy. For each category of margins, we review techniques that the network designer can use in order to increase the capacity of optical networks, extend their life, and decrease deployment cost (CAPEX) or total cost of ownership over their life duration. Green field techniques (new network deployments) and brown field techniques (used after initial network deployment) are discussed. The technologies needed to leverage the margins and achieve the aforementioned gains are also reviewed, along with the associated challenges.

Journal ArticleDOI
TL;DR: A fundamentally different approach is needed, in which the cache contents are used as side information for coded communication over the shared link; such a coded caching scheme is proposed and proved to be close to optimal.
Abstract: We consider a network consisting of a file server connected through a shared link to a number of users, each equipped with a cache. Knowing the popularity distribution of the files, the goal is to optimally populate the caches, such as to minimize the expected load of the shared link. For a single cache, it is well known that storing the most popular files is optimal in this setting. However, we show here that this is no longer the case for multiple caches. Indeed, caching only the most popular files can be highly suboptimal. Instead, a fundamentally different approach is needed, in which the cache contents are used as side information for coded communication over the shared link. We propose such a coded caching scheme and prove that it is close to optimal.
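The gain from using cache contents as side information is easiest to see in the classic two-user, two-file instance of this scheme: each user caches half of every file, and a single XOR multicast then serves both (distinct) requests at once, halving the delivery load versus uncoded unicast. A byte-level toy of that placement and delivery (the general scheme parameterizes the subfile splitting far more carefully):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two files, each split into two equal subfiles (here, byte arrays).
A = rng.integers(0, 256, 4, dtype=np.uint8); A1, A2 = A[:2], A[2:]
B = rng.integers(0, 256, 4, dtype=np.uint8); B1, B2 = B[:2], B[2:]

# Placement (demand-agnostic): user 1 caches (A1, B1), user 2 caches (A2, B2).
cache1, cache2 = (A1, B1), (A2, B2)

# Delivery: user 1 requests file A, user 2 requests file B.
# One coded multicast transmission serves both requests simultaneously.
tx = A2 ^ B1

# User 1 recovers A2 using its cached B1; user 2 recovers B1 using its cached A2.
A_hat = np.concatenate([cache1[0], tx ^ cache1[1]])
B_hat = np.concatenate([tx ^ cache2[0], cache2[1]])
```

Uncoded delivery would need two subfile transmissions (A2 and B1); the coded multicast needs one, and this multiplicative saving is what scales up with the number of caches.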

Journal ArticleDOI
TL;DR: A particular pattern for cache placement that maximizes the overall gains of cache-aided transmit and receive interference cancellations is developed and presented, leading to an upper bound on the linear one-shot sum-DoF of the network, which is within a factor of 2 of the achievable sum-DoF.
Abstract: We consider a system comprising a library of N files (e.g., movies) and a wireless network with K_T transmitters, each equipped with a local cache of size M_T files, and K_R receivers, each equipped with a local cache of size M_R files. Each receiver will ask for one of the N files in the library, which needs to be delivered. The objective is to design the cache placement (without prior knowledge of receivers' future requests) and the communication scheme to maximize the throughput of the delivery. In this setting, we show that a sum degrees-of-freedom (sum-DoF) of min{(K_T M_T + K_R M_R)/N, K_R} is achievable, and this is within a factor of 2 of the optimum, under uncoded prefetching and one-shot linear delivery schemes. This result shows that (i) the one-shot sum-DoF scales linearly with the aggregate cache size in the network (i.e., the cumulative memory available at all nodes), (ii) the transmitters' caches and receivers' caches contribute equally to the one-shot sum-DoF, and (iii) caching can offer a throughput gain that scales linearly with the size of the network. To prove the result, we propose an achievable scheme that exploits the redundancy of the content at the transmitters' caches to cooperatively zero-force some outgoing interference, and the availability of the unintended content at the receivers' caches to cancel (subtract) some of the incoming interference. We develop a particular pattern for cache placement that maximizes the overall gains of cache-aided transmit and receive interference cancellations. For the converse, we present an integer optimization problem which minimizes the number of communication blocks needed to deliver any set of requested files to the receivers. We then provide a lower bound on the value of this optimization problem, hence leading to an upper bound on the linear one-shot sum-DoF of the network, which is within a factor of 2 of the achievable sum-DoF.

Journal ArticleDOI
TL;DR: In this article, the authors considered the canonical shared link caching network and provided a comprehensive characterization of the order-optimal rate for all regimes of the system parameters, as well as an explicit placement and delivery scheme achieving orderoptimal rates.
Abstract: We consider the canonical shared link caching network formed by a source node, hosting a library of m information messages (files), connected via a noiseless multicast link to n user nodes, each equipped with a cache of size M files. Users request files independently at random according to an a priori known demand distribution q. A coding scheme for this network consists of two phases: cache placement and delivery. The cache placement is a mapping of the library files onto the user caches that can be optimized as a function of the demand statistics, but is agnostic of the actual demand realization. After the user demands are revealed, during the delivery phase the source sends a codeword (function of the library files, cache placement, and demands) to the users, such that each user retrieves its requested file with arbitrarily high probability. The goal is to minimize the average transmission length of the delivery phase, referred to as rate (expressed in channel symbols per file). In the case of deterministic demands, the optimal min-max rate has been characterized within a constant multiplicative factor, independent of the network parameters. The case of random demands was previously addressed by applying the order-optimal min-max scheme separately within groups of files requested with similar probability. However, no complete characterization of order-optimality was previously provided for random demands under the average rate performance criterion. In this paper, we consider the random demand setting and, for the special yet relevant case of a Zipf demand distribution, we provide a comprehensive characterization of the order-optimal rate for all regimes of the system parameters, as well as an explicit placement and delivery scheme achieving order-optimal rates. We also present numerical results that confirm the superiority of our scheme with respect to previously proposed schemes for the same setting.

Journal ArticleDOI
TL;DR: This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance, identifying the main prediction and optimization tools adopted in this body of work and linking them with the objectives and constraints of typical applications and scenarios.
Abstract: A growing trend for information technology is to not just react to changes, but anticipate them as much as possible. This paradigm made modern solutions, such as recommendation systems, a ubiquitous presence in today’s digital transactions. Anticipatory networking extends the idea to communication technologies by studying patterns and periodicity in human behavior and network dynamics to optimize network performance. This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance. In particular, we identify the main prediction and optimization tools adopted in this body of work and link them with objectives and constraints of the typical applications and scenarios. Finally, we consider open challenges and research directions to make anticipatory networking part of next generation networks.

Proceedings ArticleDOI
04 Jun 2017
TL;DR: The coverage and capacity of SigFox, LoRa, GPRS, and NB-IoT are compared using a real site deployment covering 8000 km² in Northern Denmark; the 95th-percentile uplink failure rate for outdoor users is below 5 % for all technologies.
Abstract: In this paper the coverage and capacity of SigFox, LoRa, GPRS, and NB-IoT are compared using a real site deployment covering 8000 km² in Northern Denmark. Using the existing Telenor cellular site grid it is shown that the four technologies have more than 99 % outdoor coverage, while GPRS is challenged for indoor coverage. Furthermore, the study analyzes the capacity of the four technologies assuming a traffic growth from 1 to 10 IoT devices per user. The conclusion is that the 95th-percentile uplink failure rate for outdoor users is below 5 % for all technologies. For indoor users only NB-IoT provides uplink and downlink connectivity with less than 5 % failure rate, while SigFox is able to provide an unacknowledged uplink data service with about 12 % failure rate. Both GPRS and LoRa struggle to provide sufficient indoor coverage and capacity.

Journal ArticleDOI
TL;DR: This tutorial paper surveys the photonic switching hardware solutions that support evolving optical networking approaches to capacity expansion, and presents the first cost comparisons, to the authors' knowledge, of the different approaches in an effort to quantify the associated tradeoffs.
Abstract: As traffic volumes carried by optical networks continue to grow by tens of percent year over year, we are rapidly approaching the capacity limit of the conventional communication band within a single-mode fiber. New measures such as elastic optical networking, spectral extension to multi-bands, and spatial expansion to additional fiber overlays or new fiber types are all being considered as potential solutions, whether near term or far. In this tutorial paper, we survey the photonic switching hardware solutions in support of evolving optical networking solutions enabling capacity expansion based on the proposed approaches. We also suggest how reconfigurable add/drop multiplexing nodes will evolve under these scenarios and gauge their properties and relative cost scalings. We identify that the switching technologies continue to evolve and offer network operators the required flexibility in routing information channels in both the spectral and spatial domains. New wavelength-selective switch designs can now support greater resolution, increased functionality and packing density, as well as operation with multiple input and output ports. Various switching constraints can be applied, such as routing of complete spatial superchannels, in an effort to reduce the network cost and simplify the routing protocols and managed pathway count. However, such constraints also reduce the transport efficiency when the network is only partially loaded, and may incur fragmentation. System tradeoffs between switching granularity and implementation complexity and cost will have to be carefully considered for future high-capacity SDM–WDM optical networks. In this work, we present the first cost comparisons, to our knowledge, of the different approaches in an effort to quantify such tradeoffs.

Journal ArticleDOI
TL;DR: The need for wide-area M2M wireless networks, especially for short data packet communication to support a very large number of IoT devices, is discussed, and recommendations are given for how future 5G networks should be designed for efficient wide-area M2M communications.
Abstract: The deployment of Internet of Things (IoT) devices and services is accelerating, aided by ubiquitous wireless connectivity, declining communication costs, and the emergence of cloud platforms. Most major mobile network operators view machine-to-machine (M2M) communication networks for supporting IoT as a significant source of new revenue. In this article, we discuss the need for wide-area M2M wireless networks, especially for short data packet communication to support a very large number of IoT devices. We first present a brief overview of current and emerging technologies for supporting wide area M2M, and then using communication theory principles, discuss the fundamental challenges and potential solutions for these networks, highlighting tradeoffs and strategies for random and scheduled access. We conclude with recommendations for how future 5G networks should be designed for efficient wide-area M2M communications.

Proceedings ArticleDOI
22 Feb 2017
TL;DR: This work partitions the encoding graph into smaller sub-blocks and trains them individually, closely approaching maximum a posteriori (MAP) performance per sub-block; it examines the degradation through partitioning and compares the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.
Abstract: The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN) based components. Thus, we partition the encoding graph into smaller sub-blocks and train them individually, closely approaching maximum a posteriori (MAP) performance per sub-block. These blocks are then connected via the remaining conventional belief propagation decoding stage(s). The resulting decoding algorithm is non-iterative and inherently enables a high level of parallelization, while showing a competitive bit error rate (BER) performance. We examine the degradation through partitioning and compare the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.
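The encoding graph being partitioned is the recursive polar butterfly. A minimal sketch of that transform (the NN sub-block decoders themselves are omitted; this is our own illustration, not the authors' code):

```python
import numpy as np

def polar_transform(u: np.ndarray) -> np.ndarray:
    """Polar encoding x = u * F^{(tensor) n} over GF(2), with kernel
    F = [[1, 0], [1, 1]], via the standard butterfly recursion:
    x = [T(u1 XOR u2), T(u2)] for halves u1, u2."""
    n = len(u)
    if n == 1:
        return u.copy()
    u1, u2 = u[: n // 2], u[n // 2 :]
    return np.concatenate([polar_transform(u1 ^ u2), polar_transform(u2)])

# Each recursion level is one stage of the encoding graph; the paper's
# partitioned decoder replaces the sub-blocks below a chosen stage with
# small neural networks and keeps belief propagation above it.
codeword = polar_transform(np.array([1, 0, 1, 1]))
```

For N = 4 the result matches multiplying by the 4x4 generator matrix F⊗F over GF(2).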

Journal ArticleDOI
TL;DR: This work reviews the various methods of creating a radiomap and examines the aspects, such as access point (AP) density and the impact of an outdated signature map, that affect the performance of fingerprinting localization.
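The fingerprinting pipeline the survey reviews boils down to matching an online RSS measurement against the radiomap. A minimal weighted k-nearest-neighbour sketch (data and function names are our own, purely illustrative):

```python
import numpy as np

def knn_localize(radiomap, positions, rss, k=3):
    """Weighted k-nearest-neighbour fingerprinting.

    radiomap  : (M, A) RSS signatures surveyed at M reference points
                over A access points (dBm)
    positions : (M, 2) coordinates of those reference points
    rss       : (A,)  online RSS measurement to localize
    """
    d = np.linalg.norm(radiomap - rss, axis=1)   # signature distances
    nn = np.argsort(d)[:k]                       # k closest fingerprints
    w = 1.0 / (d[nn] + 1e-9)                     # inverse-distance weights
    return (w[:, None] * positions[nn]).sum(axis=0) / w.sum()

# Toy radiomap: three surveyed points, two APs.
radiomap = np.array([[-40.0, -70.0], [-70.0, -40.0], [-55.0, -55.0]])
positions = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
estimate = knn_localize(radiomap, positions, np.array([-42.0, -68.0]), k=2)
```

An outdated signature map degrades this directly: if the surveyed RSS values drift, the distance ranking picks the wrong reference points.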

Journal ArticleDOI
TL;DR: This department describes phone, watch, and embedded prototypes that can locally run large-scale deep networks processing audio, images, and inertial sensor data, vastly reducing the conventional inference-time overhead of deep models.
Abstract: This department provides an overview of the progress the authors have made in the emerging area of embedded and mobile forms of on-device deep learning. Their work addresses two core technical questions. First, how should deep learning principles and algorithms be applied to sensor inference problems that are central to this class of computing? Second, what is required for current and future deep learning innovations to be efficiently integrated into a variety of mobile resource-constrained systems? Toward answering such questions, the authors describe phone, watch, and embedded prototypes that can locally run large-scale deep networks processing audio, images, and inertial sensor data. These prototypes are enabled with a variety of algorithmic and system-level innovations that vastly reduce conventional inference-time overhead of deep models.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the transformational role of coding in fog computing for leveraging such redundancy to substantially reduce the bandwidth consumption and latency of computing, and discuss two recently proposed coding concepts, minimum bandwidth codes and minimum latency codes.
Abstract: Redundancy is abundant in fog networks (i.e., many computing and storage points) and grows linearly with network size. We demonstrate the transformational role of coding in fog computing for leveraging such redundancy to substantially reduce the bandwidth consumption and latency of computing. In particular, we discuss two recently proposed coding concepts, minimum bandwidth codes and minimum latency codes, and illustrate their impacts on fog computing. We also review a unified coding framework that includes the above two coding techniques as special cases, and enables a trade-off between computation latency and communication load to optimize system performance. Finally, we discuss several open problems and future research directions.
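The straggler-mitigation idea behind minimum latency codes can be shown in miniature with an MDS-coded matrix-vector multiply; the (3, 2) split below is our own toy example, not the paper's construction:

```python
import numpy as np

# (3, 2) coded computation over the row blocks of A: workers compute
# A1 @ x, A2 @ x, and the parity task (A1 + A2) @ x. Any 2 of the 3
# results reconstruct A @ x, so one straggling worker adds no delay.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}
results = {w: M @ x for w, M in tasks.items()}   # each worker's output

# Pretend w2 is the straggler: recover its share as w3 - w1.
y = np.concatenate([results["w1"], results["w3"] - results["w1"]])
```

The price is redundant computation (three half-sized tasks instead of two), which is exactly the latency-versus-load trade-off the unified framework formalizes.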

Journal ArticleDOI
TL;DR: This work presents an uncooled, mid-infrared photodetector, where the pyroelectric response of a LiNbO3 crystal is transduced with high gain into resistivity modulation for graphene, leading to TCRs up to 900% K−1 and the ability to resolve temperature variations down to 15 μK.
Abstract: There is a growing number of applications demanding highly sensitive photodetectors in the mid-infrared. Thermal photodetectors, such as bolometers, have emerged as the technology of choice, because they do not need cooling. The performance of a bolometer is linked to its temperature coefficient of resistance (TCR, ∼2–4% K−1 for state-of-the-art materials). Graphene is ideally suited for optoelectronic applications, with a variety of reported photodetectors ranging from visible to THz frequencies. For the mid-infrared, graphene-based detectors with TCRs ∼4–11% K−1 have been demonstrated. Here we present an uncooled, mid-infrared photodetector, where the pyroelectric response of a LiNbO3 crystal is transduced with high gain (up to 200) into resistivity modulation for graphene. This is achieved by fabricating a floating metallic structure that concentrates the pyroelectric charge on the top-gate capacitor of the graphene channel, leading to TCRs up to 900% K−1, and the ability to resolve temperature variations down to 15 μK. There is emerging interest in photodetectors in the mid-infrared, driven by the increasing need to monitor the environment for security and healthcare purposes. Sassi et al. show a thermal photodetector, based on the coupling between graphene and a pyroelectric crystal, which shows high temperature sensitivity.
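The quoted 900% K−1 TCR and 15 μK resolution imply a small but measurable resistance change; a quick arithmetic sketch (the linear small-signal relation dR/R = TCR · ΔT is our simplification, not from the paper):

```python
# Back-of-envelope check of the abstract's figures, assuming the
# bolometric small-signal relation dR/R = TCR * dT:
tcr = 9.0          # 900% per kelvin, expressed as a fraction
dT = 15e-6         # 15 microkelvin temperature step
rel_dR = tcr * dT  # relative resistance change to be resolved
print(f"dR/R = {rel_dR:.2e}")
```

At a few parts in ten thousand, this is comfortably within the reach of standard resistance readout, which is the point of the 200x pyroelectric gain.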

Proceedings ArticleDOI
01 Sep 2017
TL;DR: It is shown that the normalized generalized mutual information represents an excellent forward error correction (FEC) threshold for uniform as well as probabilistically shaped QAM, and hence allows post-FEC performance to be accurately predicted from measured pre-FEC data.
Abstract: We show that the normalized generalized mutual information represents an excellent forward error correction (FEC) threshold for uniform as well as probabilistically shaped QAM, and hence allows post-FEC performance to be accurately predicted from measured pre-FEC data.
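For the uniform-signaling case, a per-bit normalized GMI can be estimated directly from transmitted bits and receiver LLRs. A minimal sketch (our own helper, not the authors' code; the shaped-QAM normalization is more involved and omitted here):

```python
import numpy as np

def normalized_gmi(bits, llrs):
    """Per-bit normalized GMI estimated from the transmitted bits and
    the receiver's soft LLRs, L = log P(b=0|y) / P(b=1|y), assuming
    uniform signaling. 1.0 means perfect soft decisions; 0.0 means the
    LLRs carry no information."""
    signs = 1.0 - 2.0 * bits            # bit 0 -> +1, bit 1 -> -1
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-signs * llrs)))

bits = np.array([0, 1, 1, 0, 1, 0, 0, 1])
perfect = normalized_gmi(bits, 25.0 * (1.0 - 2.0 * bits))  # confident, correct
useless = normalized_gmi(bits, np.zeros_like(bits, dtype=float))
```

Comparing such a pre-FEC estimate against a code's NGMI threshold is what lets post-FEC performance be predicted without running the decoder.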

Journal ArticleDOI
TL;DR: In this article, the authors show that a transmission performance beyond the conventional Kerr nonlinearity limit can be achieved by encoding all the available degrees of freedom and nonlinearly multiplexing signals in the so-called nonlinear Fourier spectrum, which evolves linearly along the fibre link.
Abstract: Current optical fibre transmission systems rely on modulation, coding and multiplexing techniques that were originally developed for linear communication channels. However, linear transmission techniques are not fully compatible with a transmission medium with a nonlinear response, which is the case for an optical fibre. As a consequence, the Kerr nonlinearity in fibre imposes a limit on the performance and the achievable transmission rate of conventional optical fibre communication systems. Here we show that transmission performance beyond the conventional Kerr nonlinearity limit can be achieved by encoding all the available degrees of freedom and nonlinearly multiplexing signals in the so-called nonlinear Fourier spectrum, which evolves linearly along the fibre link. This result strongly motivates a fundamental paradigm shift in modulation, coding and signal-processing techniques for optical fibre transmission technology. The Kerr nonlinearity limit for optical fibre communications is surpassed by using nonlinear multiplexing.
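The nonlinear Fourier spectrum referred to above comes from the Zakharov-Shabat scattering problem associated with the fibre's nonlinear Schrödinger equation. A minimal numerical sketch of the forward transform via piecewise-constant transfer matrices (our own illustrative implementation, not the authors' method; normalizations are the standard dimensionless ones):

```python
import numpy as np
from scipy.linalg import expm

def nft_a(q, t, lam):
    """Forward Zakharov-Shabat scattering coefficient a(lambda) for a
    pulse sampled as q[k] = q(t[k]) on a uniform grid, holding each
    sample constant over one dt-wide slab centred on t[k]. Zeros of
    a(lambda) in the upper half-plane are the soliton eigenvalues of
    the nonlinear Fourier spectrum."""
    dt = t[1] - t[0]
    # Jost solution with boundary condition v ~ [exp(-i*lam*t), 0] as t -> -inf
    v = np.array([np.exp(-1j * lam * (t[0] - dt / 2)), 0.0], dtype=complex)
    for qk in q:
        P = np.array([[-1j * lam, qk], [-np.conj(qk), 1j * lam]])
        v = expm(P * dt) @ v
    return v[0] * np.exp(1j * lam * (t[-1] + dt / 2))

# A fundamental sech soliton q(t) = sech(t) has the single eigenvalue
# lam = 0.5j, where a(lam) vanishes; for the zero signal a(lam) = 1.
t = np.linspace(-10, 10, 401)
a_soliton = nft_a(1 / np.cosh(t), t, 0.5j)
a_vacuum = nft_a(np.zeros_like(t), t, 0.5j)
```

Because the scattering data evolve linearly along the fibre, modulating information onto them sidesteps the Kerr-induced cross-talk that limits conventional WDM channels.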